\section{RoCTL{*}}
\label{sec:roctl}\label{sub:RoCTL-Structures} In this section
we define the RoCTL{*} logic. We first provide some basic definitions,
starting with our set of variables.
\begin{defi}
We let $\mathcal{V}$ be our set of variables. The set $\mathcal{V}$ contains a
special variable ${\bf v}$. A \uline{valuation} $g$ is
a map from a set of worlds $\allworlds$ to the power set of the variables.
The statement $p\in g(w)$ means roughly ``the variable
$p$ is true at world $w$''.
\end{defi}
The ${\bf v}$ atom%
\footnote{A variant of RoCTL{*} was presented in~\citep{FrDaRe07book}, which
had two accessibility relations, a success and failure transition
and thus did not need the special atom ${\bf v}$. The definition we
use here was presented in \citep{DBLP:conf/jelia/McCabe-Dansted08}.
These definitions are equivalent if we disallow the RoCTL{*} formulas
from directly accessing the ${\bf v}$ atom \citep{MCD10}. All the
known results on RoCTL{*} apply equally well using either definition,
and no advantage is known to the definition in \citep{FrDaRe07book}.
Using the definition of the structures for RoCTL{*} that have a single
accessibility relation allows us to define both CTL{*} and RoCTL{*}
structures in the same way, greatly simplifying the definition of
the translations.%
} will be used to define failing transitions. Informally it may be
possible to enter a state labelled with ${\bf v}$, but it is forbidden
to do so; entering such a state will be considered a failure.
As is normal we say a binary relation is serial if every element has
a successor.
\begin{defi}
We say that a binary relation $R$ on $S$ is \uline{serial} (total)
if for every $a$ in $S$ there exists $b$ in $S$ such that $aRb$.
\end{defi}
We now provide a definition of a structure.
\begin{defi}
A \uline{structure} $M=\tuple{\mfw,\access,\valuation}$ is a 3-tuple containing a
set of worlds $\allworlds$, a serial binary relation $\access$ on $\allworlds$, and
a valuation $g$ on the set of worlds $\allworlds$.
\end{defi}
While in some logics the truth of formulas depends solely on the current
world, the truth of CTL{*} (and hence QCTL{*} and RoCTL{*}) formulas may depend
on which future eventuates. These futures are represented as infinitely
long (full) paths through the structure. For this reason, we provide
a formal definition of fullpaths.
\begin{defi}
We call an $\omega$-sequence $\sigma=\left\langle w_{0},w_{1},\ldots\right\rangle $
of worlds a \uline{fullpath} iff for all non-negative integers
$i$ we have $w_{i}\access w_{i+1}$. For all $i$ in $\mathbb{N}$ we define
$\sigma_{\geq i}$ to be the fullpath $\left\langle w_{i},w_{i+1},\ldots\right\rangle $,
we define $\sigma_{i}$ to be $w_{i}$ and we define $\sigma_{\leq i}$
to be the sequence $\left\langle \jzseq wi\right\rangle $.
\end{defi}
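These path operations are straightforward to implement if we restrict attention to ultimately periodic fullpaths, which admit a finite representation. The following Python sketch (the class name \texttt{Lasso} and its method names are ours, not from the text) implements $\sigma_{i}$, $\sigma_{\geq i}$ and $\sigma_{\leq i}$:

```python
# A fullpath is infinite, so we represent only ultimately periodic
# fullpaths: a finite prefix followed by a cycle repeated forever.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lasso:
    prefix: tuple  # worlds w_0 .. w_{k-1}
    cycle: tuple   # worlds repeated forever after the prefix

    def at(self, i):
        """sigma_i: the i-th world of the fullpath."""
        if i < len(self.prefix):
            return self.prefix[i]
        return self.cycle[(i - len(self.prefix)) % len(self.cycle)]

    def suffix(self, i):
        """sigma_{>=i}: the fullpath <w_i, w_{i+1}, ...>."""
        if i <= len(self.prefix):
            return Lasso(self.prefix[i:], self.cycle)
        j = (i - len(self.prefix)) % len(self.cycle)
        return Lasso((), self.cycle[j:] + self.cycle[:j])

    def upto(self, i):
        """sigma_{<=i}: the finite sequence <w_0, ..., w_i>."""
        return tuple(self.at(k) for k in range(i + 1))
```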
We now define the property of failure-freeness. This means that, in
the future, no failing transitions are taken. Informally, a failure-free
fullpath represents a perfect future. Whereas the Obligatory operator
in SDL quantifies over acceptable worlds, the Obligatory operator
we will define quantifies over failure-free fullpaths.
\begin{defi}
We say that a fullpath $\sigma$ is \uline{failure-free} iff for
all $i>0$ we have ${\bf v}\notin g\left(\sigma_{i}\right)$.
We define $\func{ap}(w)$ to be the set of all fullpaths starting with world
$w$ and $S(w)$ to be the set of all failure-free fullpaths starting
with $w$. We call a structure a RoCTL-structure iff $S(w)$ is
non-empty for every $w\in S$.
\end{defi}
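On a finite structure, the requirement that $S(w)$ be non-empty for every $w$ can be checked by a greatest-fixpoint computation over the ${\bf v}$-free worlds. A minimal Python sketch, assuming the successor relation is serial and given as a dictionary (all names are ours):

```python
# Check the RoCTL-structure condition: S(w) must be non-empty for
# every world w.  A failure-free fullpath from w exists iff w has a
# successor inside the largest set X of v-free worlds in which every
# world has a successor in X (a greatest fixpoint).
V = 'v'  # the special violation atom

def is_roctl_structure(worlds, succ, g):
    """worlds: set; succ: dict world -> set of successors; g: valuation."""
    X = {w for w in worlds if V not in g(w)}
    changed = True
    while changed:
        changed = False
        for w in set(X):
            if not (succ[w] & X):   # w has no v-free successor in X
                X.discard(w)
                changed = True
    # sigma_0 itself may be a v-world; only sigma_1, sigma_2, ... matter
    return all(succ[w] & X for w in worlds)
```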
We will now define deviations. Informally, these represent the possibility
of adding an additional failure to some step $i$ along a path. After
$i$ we follow a different path, and we allow only a single failure
not on the existing path, so no failures occur after $i+1$. Deviations
are intended to represent possible failures we may wish to be able
to recover from, and if our system is robust to failures we also want
it to be robust in the face of correct transitions. For this reason
we allow the new transition added at step $i$ to be a success as
well as a failure.
\begin{defi}
For two fullpaths $\sigma$ and $\pi$ we say that $\pi$ is
an \uline{$i$-deviation} from $\sigma$ iff $\sigma_{\leq i}=\pi_{\leq i}$
and $\pi_{\geq i+1}\in S(\pi_{i+1})$. We say that $\pi$
is a \uline{deviation} from $\sigma$ if there exists a non-negative
integer $i$ such that $\pi$ is an $i$-deviation from $\sigma$.
We define a function $\delta$ from fullpaths to sets of fullpaths
such that where $\sigma$ and $\pi$ are fullpaths, $\pi$ is
a member of $\delta(\sigma)$ iff $\pi$ is a deviation from $\sigma$.
\end{defi}
We see that $S\left(\sigma_{0}\right)\subseteq\delta(\sigma)\subseteq\func{ap}(\sigma_{0})$.
Where $p$ varies over $\mathcal{V}$, we define RoCTL{*} formul\ae{}{}
according to the following abstract syntax
\begin{align*}
\aform & :=p\,|\,\neg\aform\,|\,\lAnd{\aform}{\aform}\,|\,\lUntil{\aform}{\aform}\,|\, N\aform\,|\, A\aform\,|\, O\aform\,|\,\blacktriangle\aform\mbox{ .}
\end{align*}
A formula that begins with $A$, $\neg A$, $O$, $\neg O$, $p$
or $\neg p$ is called a \term{state formula}. For consistency with
\citep{FrDaRe07book}, we do not consider a formula that explicitly
contains ${\bf v}$ to be a RoCTL{*} formula, although our translation
into CTL{*} works equally well for such formul\ae{}{}. The $\top,\,\neg,\,\wedge,\,N,\,U$
and $A$ are the familiar ``true'', ``not'', ``and'', ``next'',
``until'' and ``all paths'' operators from CTL{*}\@. The abbreviations
$\bot$, $\vee$, $F$, $G$, $W$, $E$, $\rightarrow$ and $\leftrightarrow$
are defined as in CTL{*}. As with Standard Deontic Logic (SDL),
we define $P\equiv\neg O\neg$. Finally, we define the dual
$\triangle$ of $\blacktriangle$ as the abbreviation $\triangle\equiv\neg\blacktriangle\neg$.
We call the $O$, $P$, $\blacktriangle$, $\triangle$ operators Obligatory, Permissible,
Robustly and Prone respectively.
We define truth of a RoCTL{*} formula $\aform$ on a fullpath $\sigma=\left\langle w_{0},w_{1},\ldots\right\rangle $
in a RoCTL-structure $M$ recursively as follows:
\begin{align*}
M,\sigma\vDash N\aform & \text{ iff }M,\sigma_{\geq1}\vDash\aform\\
M,\sigma\vDash\aform U\psi & \text{ iff }\exists_{i\in\mathbb{N}}\text{ s.t. }M,\sigma_{\geq i}\vDash\psi\text{ and }\\
 & \qquad\forall_{j\in\mathbb{N}}\, j<i\implies M,\sigma_{\geq j}\vDash\aform\\
M,\sigma\vDash A\aform & \text{ iff }\forall_{\pi\in\func{ap}(\sigma_{0})}M,\pi\vDash\aform\\
M,\sigma\vDash O\aform & \text{ iff }\forall_{\pi\in S(\sigma_{0})}M,\pi\vDash\aform\\
M,\sigma\vDash\blacktriangle\aform & \text{ iff }\forall_{\pi\in\delta(\sigma)}M,\pi\vDash\aform\mbox{ and }M,\sigma\vDash\aform
\end{align*}
The definition for $\top$, $p$, $\neg$ and $\wedge$ is as we
would expect from classical logic. The intuition behind the $\blacktriangle$
operator is that it quantifies over paths that could result if a single
error was introduced; the deviations only have at most one failure
not on the original path, and they are identical to the original path
until this failure occurs.
\begin{defi}
We say that a function $\tau$ from formul\ae{}{} to formul\ae{}{} is
\uline{truth-preserving} iff for all $M,\sigma$ and $\phi$ it
is the case that $M,\sigma\vDash\phi\iff M,\sigma\vDash\tau\left(\phi\right)$\textup{.}
\end{defi}
Given that traditional modal logics define truth at worlds, instead
of over paths, many important properties of modal logics assume such
a definition of truth. When dealing with those properties we can use
the following definition of truth of RoCTL{*} formulas at worlds.
\begin{defi}
A RoCTL{*} formula is true at a world if it is true on any path leading
from that world, or more formally:
\begin{align*}
M,w\vDash\aform & \text{ iff }\Exists{\pi\text{ s.t. }\pi_{0}=w}M,\pi\vDash\aform\mbox{ .}
\end{align*}
\end{defi}
\section{Examples}
\label{sec:examples}\input{Body_Examples__v.tex}
\section{Technical Preliminaries }
\label{sec:machinery}\label{sec:Preliminaries}
In this section we will provide definitions and background results
that will be used in this paper. In \prettyref{sub:CTL*-and-QCTL*}
we will define CTL{*} and its syntactic extension QCTL{*}. In \prettyref{sub:Automata}
we will define various forms of Automata. In \prettyref{sub:Bisimulations}
we will define Bisimulations. We will discuss expressive equivalences
in \prettyref{sec:Expressive-Equivalences}, in particular between LTL
and automata.
\subsection{\label{sub:CTL*-and-QCTL*}Trees, LTL, CTL{*} and QCTL{*} }
In \thispaper{} we will also briefly consider Linear Temporal Logic
(LTL), CTL{*} and an extension QCTL{*} of CTL{*}. For the purposes
of this paper we will define CTL{*} to be a syntactic restriction
of RoCTL{*} excluding the $O$ and $\blacktriangle$ operators.
\begin{defi}
Where $p$ varies over $\mathcal{V}$, we define CTL{*} formul\ae{}{} according
to the following abstract syntax
\begin{align*}
\aform & :=p\,|\,\neg\aform\,|\,\lAnd{\aform}{\aform}\,|\,\lUntil{\aform}{\aform}\,|\, N\aform\,|\, A\aform\mbox{ .}
\end{align*}
\end{defi}
We likewise define LTL to be the restriction of CTL{*} without the
$A$ operator.
\begin{defi}
Where $p$ varies over $\mathcal{V}$, we define LTL formul\ae{}{} according
to the following abstract syntax
\begin{align*}
\aform & :=p\,|\,\neg\aform\,|\,\lAnd{\aform}{\aform}\,|\,\lUntil{\aform}{\aform}\,|\, N\aform\mbox{ .}
\end{align*}
\end{defi}
In turn we define QCTL{*} as an extension of CTL{*} with a $\forall$
operator.
\begin{defi}
\label{QCTL*-syntax}A\emph{ QCTL{*}}\logicindex{QCTL*} formula has
the following syntax:
\begin{align*}
\aform & :=p\,|\,\neg\aform\,|\,\lAnd{\aform}{\aform}\,|\,\lUntil{\aform}{\aform}\,|\, N\aform\,|\, A\aform\,|\,\QForall p\aform\mbox{ .}
\end{align*}
The semantics of $p$, $\neg$, $\wedge$, $U$, $N$, and $A$ are
the same as in CTL{*} and RoCTL{*}. Before defining the Kripke semantics
for QCTL{*} we need to define the concept of a $p$-variant. Informally,
a $p$-variant $M'$ of a structure $M$ is a structure that is identical
to $M$ except possibly in the valuation of the $p$ atom.
\end{defi}
\begin{defi}
\label{def:p-variant}Given some CTL-structure $M=(S,R,g)$
and some $p\in\mathcal{V}$, a $p$-\uline{variant} of $M$ is some structure
$M'=(S,R,g')$ where $g'(w)-\{p\}=g(w)-\{p\}$
for all $w\in S$.
Under the Kripke semantics for QCTL{*}, $\QForall p\phi$ is defined as
\begin{align*}
M,b\vDash\QForall p\alpha\Longleftrightarrow & \textrm{ For every }p\textrm{-variant }M'\textrm{ of }M\\
& \quad\textrm{ we have }M',b\vDash\alpha\ .
\end{align*}
\end{defi}
In \thispaper{} we will use the \uline{tree-semantics} for QCTL{*}.
These semantics are the same as the Kripke semantics except that,
whereas the Kripke semantics evaluates satisfiability over the class
$\mathbb{C}$ of CTL-structures, the tree semantics evaluate satisfiability
over the class $\classC_{t}$ of CTL-structures which are trees (see
\prettyref{def:tree} below for a formal definition of trees). This
changes which formul\ae{}{} are satisfiable in the logic as, unlike
CTL{*}~\citep{em83}, QCTL{*} is sensitive to unwinding into a tree
structure~\citep{kup95}. Note that the atom $p$ in $\QForall p$ is often
called a \uline{variable}.
\begin{theorem}
\label{thm:tree-QCTL-decidable}The tree-semantics for QCTL{*} are
decidable~\citep{Fr06}.
\end{theorem}
We now provide the formal definition of a tree.
\begin{defi}
\label{def:tree}We say $T=\tuple{\mfw,\access,\valuation}$ is a $\mathcal{V}$-labelled \uline{tree},
for some set $\mathcal{V}$, iff\end{defi}
\begin{enumerate}
\item $S$ is a non-empty set of nodes
\item for all $x,y,z\in S$ if $\tuple{x,z}\in\access$ and $\tuple{y,z}\in\access$
then $x=y$
\item there does not exist any cycle $x_{0}\access x_{1}\cdots\access x_{0}$ through
$\access$
\item there exists a unique node $x$ such that for all $y\in S$,
if $y\ne x$ there exists a sequence $x\access x_{1}\cdots\access y$ through
$\access$. We call the node $x$ the \uline{root} of the tree $T$
\item the valuation $g$ (or labelling) is a function from $S$
to $2^{\mathcal{V}}$, that is for each $w\in S$, $g\left(w\right)\subseteq\mathcal{V}$\end{enumerate}
\begin{defi}
We define the height of a finite tree $T=\ctlstruct$ as follows:
we let $\func{root}\left(T\right)$ be the root of the tree $T$.
We let $\func{height}\left(T\right)=\func{height}_{\ra}\left(\func{root}\left(T\right)\right)$
where $\func{height}_{\ra}$ is a function from $S$ to $\mathbb{N}$ such
that for all $x\in S$, we let $\func{height}_{\ra}\left(x\right)$ be
the smallest non-negative integer such that $\func{height}_{\ra}\left(x\right)>\func{height}_{\ra}\left(y\right)$
for all $y$ such that $\tuple{x,y}\in\access$.
\end{defi}
For example, a leaf node has a height of 0 since 0 is the smallest
non-negative integer.
\begin{defi}
We say that $v$ is \uline{reachable} from $w$, with respect to
an accessibility relation $R$, iff there is a path through
$R$ from $w$ to $v$.
\end{defi}
\begin{defi}
We say that a binary relation $R'$ is the \uline{fragment} of
another binary relation $R$ on a set $X$ iff
\begin{align*}
\Forall{x,y}xR'y\iff x,y\in X\wedge xRy & \mbox{ .}
\end{align*}
We say that a function $g'$ is the fragment of another function $g$
on a set $X$ iff the domain of $g'$ is $X$, $X$ is a subset of the
domain of $g$, and $g'\left(x\right)=g\left(x\right)$ for all $x$ in $X$.
\end{defi}
\begin{defi}
We say $C=\left\langle S_{C},\access_{C},g_{C}\right\rangle $
is a \uline{subtree} of $T=\tuple{\mfw,\access,\valuation}$ iff there exists $w\in S$
such that $S_{C}$ is the subset of $S$ reachable
from $w$ and $\access_{C}$ and $g_{C}$ are the fragments of $\access$
and $g$ on $S_{C}$ respectively. We say $C$ is
a \uline{direct subtree} of $T=\ctlstruct$ if $C$ is a subtree
of $T$ and $\tuple{\func{root}\left(T\right),\func{root}\left(C\right)}\in\access$.
\end{defi}
\subsection{\label{sub:Automata}Automata}
In this section we will define some basic terms and properties of
automata that will be used later in this paper. We focus on showing
that we can translate between counter-free automata and LTL formul\ae{}{}.
\begin{defi}
A \uline{Finite State Automaton (FSA)} $\mathcal{A}=(\Sigma,S,\States_{0},\delta,F)$
contains
\end{defi}
\begin{itemize}
\item $\Sigma$: a set of symbols (the alphabet)
\item $S$: a finite set of automaton states
\item $\States_{0}$: a set of initial states $\subseteq S$
\item $\delta$: a transition relation $\subseteq(S\times\Sigma\times S)$
\item $F$: a set of accepting states $\subseteq S$
\end{itemize}
We call the members of $\Sigma^{*}$ words. Each transition of
a path through an automaton is labelled with an element $e$ of $\Sigma$.
We say $s_{0}\overset{e_{0}}{\rightarrow}s_{1}\overset{e_{1}}{\rightarrow}\cdots\overset{e_{n-1}}{\rightarrow}s_{n}$
is a path of $\mathcal{A}$ iff for all $0\leq i<n$ the tuple $\left\langle s_{i},e_{i},s_{i+1}\right\rangle $
is in $\delta$. The label of the path is the word $\left\langle e_{0},e_{1},\ldots,e_{n-1}\right\rangle $.
We say that a path through an automaton is a run iff $s_{0}\in\States_{0}$.
A run of an FSA is called accepting if it ends in an accepting state.
We define the language $\mathcal{L}(\mathcal{A})$ recognised by an automaton
to be the set of all words for which there is an accepting run.
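Acceptance of a finite word by a (possibly non-deterministic) FSA can be decided by tracking the set of states reachable on the input read so far; a minimal Python sketch (all names are ours):

```python
# Acceptance of a finite word by an FSA (Sigma, S, S0, delta, F).
# We track the set of states reachable on the word read so far; the
# word is accepted iff some reachable state at the end is accepting.
def accepts(s0, delta, final, word):
    """s0: set of initial states; delta: set of (state, symbol, state)
    triples; final: set of accepting states; word: iterable of symbols."""
    current = set(s0)
    for e in word:
        current = {t for (s, sym, t) in delta if s in current and sym == e}
    return bool(current & final)
```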
\begin{defi}
\textup{We let $L_{p,q}\left(\mathcal{A}\right)$ be the set of all labels
of paths through $\mathcal{A}$ from $p$ to $q$.}
\end{defi}
Of particular importance to this paper are counter-free automata.
As will be discussed later we can translate LTL formulas to and from
counter-free automata.
\begin{defi}
A \uline{counter-free} automaton is an automaton such that for
all positive integers $m$, states $s\in S$ and words $u$ in $\Sigma^{*}$,
if $u^{m}\in L_{s,s}$ then $u\in L_{s,s}$~\citep{DiGa08Thomas}.
\end{defi}
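The counter-free condition quantifies over all words $u$ and exponents $m$, so the Python sketch below only searches for violations up to fixed bounds: it is a sound test for \emph{non}-counter-freeness, not a decision procedure (the function names and default bounds are ours):

```python
# Bounded brute-force test of the counter-free condition.
from itertools import product

def word_relation(states, delta, u):
    """The set of pairs (p, q) with u in L_{p,q}."""
    rel = {(s, s) for s in states}
    for e in u:
        rel = {(p, t) for (p, q) in rel
               for (q2, sym, t) in delta if q2 == q and sym == e}
    return rel

def is_counter_free_upto(states, delta, alphabet, max_len=3, max_m=4):
    """Look for a word u (|u| <= max_len) and exponent m <= max_m with
    u^m in L_{s,s} but u not in L_{s,s}; return False if one is found."""
    for n in range(1, max_len + 1):
        for u in product(alphabet, repeat=n):
            rel_u = word_relation(states, delta, u)
            for m in range(2, max_m + 1):
                rel_um = word_relation(states, delta, u * m)
                for s in states:
                    if (s, s) in rel_um and (s, s) not in rel_u:
                        return False
    return True
```

For example, the two-state automaton that counts the parity of letters `a` is rejected, while the automaton tracking only the last letter read passes.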
\begin{defi}
\label{def:a-determinisation}We define a Deterministic Finite Automaton
(DFA) to be an FSA $\mathcal{A}=(\Sigma,S,\States_{0},\delta,F)$
where $\left|\States_{0}\right|=1$ and for every $s$ in $S$ and $e$
in $\Sigma$ there exists exactly one $t\in S$ such that $\left(s,e,t\right)\in\delta$.
\end{defi}
\global\long\def\detm#1{\hat{#1}}
Having given the obvious definition of DFAs as a special case of FSAs,
we will now define a standard determinisation for FSAs.
\begin{defi}
Given an FSA $\mathcal{A}=(\Sigma,S,\States_{0},\delta,F)$, we define
the determinisation of $\mathcal{A}$ to be the DFA $\detm{\mathcal{A}}=(\Sigma,\detm S,\left\{ \States_{0}\right\} ,\detm{\delta},\detm F)$
with:\end{defi}
\begin{itemize}
\item $\detm S=2^{S}$. Each $\detm s\in\detm S$ represents the set of
states of $\mathcal{A}$ that $\mathcal{A}$ could be in now.
\item For each $\detm s,\detm t\in\detm S$, the tuple $\left\langle \detm s,e,\detm t\right\rangle $
is in $\detm{\delta}$ iff $\detm t$ is the maximal subset of
$S$ satisfying $\Forall{t\in\detm t}\Exists{s\in\detm s}\left\langle s,e,t\right\rangle \in\delta$.
\item $\detm s\in\detm F$ iff there is an $s\in\detm s$ such that $s\in F$.
\end{itemize}
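The determinisation above is the usual subset construction: the maximal set $\detm t$ whose every member is an $e$-successor of some member of $\detm s$ is exactly the image of $\detm s$ under $e$. The Python sketch below (names are ours) builds only the subset states reachable from $\left\{ \States_{0}\right\} $ rather than all of $2^{S}$, which does not affect the recognised language:

```python
# Subset construction: DFA states are sets of NFA states, and the
# e-successor of s_hat is the image of s_hat under e.
def determinise(states, s0, delta, final):
    """Returns (dfa_states, initial, dfa_delta, dfa_final), with DFA
    states represented as frozensets of NFA states."""
    start = frozenset(s0)
    dfa_states, dfa_delta = {start}, {}
    alphabet = {e for (_, e, _) in delta}
    todo = [start]
    while todo:
        s_hat = todo.pop()
        for e in alphabet:
            # maximal t_hat with: every t in t_hat is an e-successor
            # of some s in s_hat -- i.e. the image of s_hat under e
            t_hat = frozenset(t for (s, sym, t) in delta
                              if s in s_hat and sym == e)
            dfa_delta[(s_hat, e)] = t_hat
            if t_hat not in dfa_states:
                dfa_states.add(t_hat)
                todo.append(t_hat)
    dfa_final = {s_hat for s_hat in dfa_states if s_hat & final}
    return dfa_states, start, dfa_delta, dfa_final
```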
The reason for presenting the above determinisation is so that we
can show that determinisation preserves counter-freeness. While this
is intuitive, it is important to this paper so we will provide a formal
proof.
\begin{lemma}
\label{lem:preserve-counter-free}If $\mathcal{A}$ is counter-free then
the determinisation $\detm{\mathcal{A}}$ produced by the above procedure
is counter-free.\end{lemma}
\begin{proof}
Say that $\detm{\mathcal{A}}$ is not counter-free. Then there exist $u$,
$m$ and $\detm s$ such that $u^{m}\in L_{\detm s,\detm s}$
but $u\notin L_{\detm s,\detm s}$.
Note that we have a cycle such that the word $u$ takes us from $\detm s_{0}=\detm s$
to $\detm s_{1}$, from $\detm s_{1}$ to $\detm s_{2}$ and so on
back to $\detm s_{0}=\detm s$; more formally, $u\in L_{\detm s_{i},\detm s_{i+1}}$
for all $i<m-1$ and $u\in L_{\detm s_{m-1},\detm s_{0}}$. Note also that $\detm s\subseteq S$,
and we see that $u^{m}\in L_{s,s}$ for all $s\in\detm s$. As $\mathcal{A}$
is counter-free it is also the case that $u\in L_{s,s}$ for all $s\in\detm s$.
As $u\in L_{s,s}$ and $s\in\detm s_{0}$ it follows that $s\in\detm s_{1}$;
we may repeat this argument to show that as $s\in\detm s_{1}$ it
must also be the case that $s\in\detm s_{2}$ and so on. Thus $\detm s_{0}\subseteq\detm s_{1}\subseteq\cdots\subseteq\detm s_{0}$
and so $\detm s_{0}=\detm s_{1}=\cdots=\detm s_{m-1}$. We see $L_{\detm s_{0},\detm s_{1}}=L_{\detm s,\detm s}$
and since $u\in L_{\detm s_{0},\detm s_{1}}$ it follows that $u\in L_{\detm s,\detm s}$,
but we have assumed that $u\notin L_{\detm s,\detm s}$. Hence,
by contradiction, $\detm{\mathcal{A}}$ is counter-free.
\end{proof}
We will use the fact that the determinisation is counter-free to generalise
the following theorem to non-deterministic automata.
\begin{theorem}
\label{thm:Translating-a-counter-free}Translating a counter-free
DFA into an LTL formula results in a formula of length at most $m2^{2^{\mathcal{O}\left(n\ln n\right)}}$
where $m$ is the size of the alphabet and $n$ is the number of states
\citep{Wilke99classifyingdiscrete}.
\end{theorem}
One minor note is that \citet{Wilke99classifyingdiscrete} uses stutter-free
operators, so their $\left(\alpha U\beta\right)$ is equivalent to
our $N\left(\alpha U\beta\right)$; however, this is trivial to translate.
As the determinisation from \prettyref{def:a-determinisation} has
$2^{n}$ states where $n$ is the number of states in the original
FSA, \prettyref{cor:counter-free-FSA-to-LTL-3EXP} below follows from
\prettyref{lem:preserve-counter-free} and \prettyref{thm:Translating-a-counter-free}.
\begin{corollary}
\label{cor:counter-free-FSA-to-LTL-3EXP}Translating a counter-free
FSA into an LTL formula results in a formula of length at most $m2^{2^{\mathcal{O}\left(2^{n}n\right)}}$
where $m$ is the size of the alphabet and $n$ is the number of states.
\end{corollary}
We now define shorthand for discussing a variant of an automaton
starting at a different state.
\begin{defi}
\label{def:shorthand-As}Given an automaton $\mathcal{A}=(\Sigma,S,\States_{0},\delta,F)$,
we use $\mathcal{A}^{s}$ as shorthand for $(\Sigma,S,\left\{ s\right\} ,\delta,F)$
where $s\in S$. We say that an automaton $\mathcal{A}$ accepts a word
\uline{from} state $s$ if the automaton $\mathcal{A}^{s}$ accepts the
word.
\end{defi}
\subsubsection{Automata on Infinite Words}
In \thispaper{} we use automata as an alternate representation of
temporal logic formul\ae{}{}. LTL is interpreted over infinitely long
paths, and so we are particularly interested in automata that are
similarly interpreted over infinitely long runs. We will define an
infinite run now.
\begin{defi}
We call the members of $\Sigma^{\omega}$ infinite words. We say\textup{
$s_{0}\overset{e_{0}}{\rightarrow}s_{1}\overset{e_{1}}{\rightarrow}\cdots$
is an infinite path of $\mathcal{A}$ iff for all }$i\geq0$ the tuple
$\left\langle s_{i},e_{i},s_{i+1}\right\rangle $ is in $\delta$\textup{.
The label of the path is $\left\langle e_{0},e_{1},\ldots\right\rangle $.
}An infinite run $\rho$ of $\mathcal{A}$ is a path starting at a state
in $\States_{0}$.
\end{defi}
There are a number of different types of automata that can be interpreted
over infinite runs. These are largely similar to FSA, but have different
accepting conditions. \index{Buchi@B\"uchi{} automata}\emph{B\"uchi{}
automata} are extensions of finite state automata to infinite words.
A B\"uchi{} automaton is similar to an FSA, but we say an infinite
run is accepting iff a state in $F$ occurs infinitely often in the
run.
\begin{defi}
\label{def:path-to-word}For a fixed structure $M$, a fullpath $\sigma$
through $M$, and a set of state formul\ae{}{} $\Phi$ we let $g_{\Phi}\left(\sigma_{\leq n}\right)=\left(w_{0},w_{1},\ldots,w_{n}\right)$
and $g_{\Phi}\left(\sigma\right)=\left(w_{0},w_{1},\ldots\right)$
where $w_{i}=\left\{ \phi\colon\:\phi\in\Phi\wedge M,\sigma_{i}\vDash\phi\right\} $
for each non-negative integer $i$.
\end{defi}
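The map $g_{\Phi}$ is straightforward to compute on a finite path prefix once truth of the state formul\ae{}{} at worlds is available; in the Python sketch below, \texttt{holds} stands in as an assumed oracle for $M,\sigma_{i}\vDash\phi$ (all names are ours):

```python
# g_Phi: replace each world of a path prefix by the set of state
# formulae from Phi that are true there.
def g_phi(Phi, holds, path_prefix):
    """Phi: set of state formulae; holds(phi, w): truth oracle;
    path_prefix: finite tuple of worlds sigma_0 .. sigma_n."""
    return tuple(frozenset(phi for phi in Phi if holds(phi, w))
                 for w in path_prefix)
```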
We are interested in counter-free automata because it is known that
a language $L$ is definable in \logicindex{LTL}LTL iff $L$ is accepted
by some counter-free B\"uchi{} automaton~\citep{DiGa08Thomas} (see
\prettyref{thm:LTL=00003Dcounter-free}).
It is well known that we can represent a CTL{*} formula as an LTL
formula over a path, where that path includes state formula as atoms; this is commonly used in model checking~\citep{ModelChecking,318620,clarke1986automatic}.
Recall that \prettyref{thm:LTL=00003Dcounter-free} states that a
language $L$ is definable in LTL iff $L$ is accepted by some counter-free
B\"uchi{} automaton~\citep{DiGa08Thomas}; thus we can also express
this LTL formula as a counter-free B\"uchi{} automaton.
Formally, for any CTL{*} formula $\phi$ there exists a set of state
formul\ae{}{} $\Phi$ and a counter-free automaton $\mathcal{A}=(2^{\Phi},Q,\States_{0},\delta,F)$
such that $\mathcal{A}$ accepts $g_{\Phi}\left(\sigma\right)$ iff $M,\sigma\vDash\phi$.
\begin{defi}
\label{def:equiv-autom2CTL}We say an automaton \textup{$\mathcal{A}=(2^{\Phi},Q,\States_{0},\delta,F)$
is equivalent to a formula $\phi$ iff for all structures $M$ and
fullpaths $\sigma$ through $M$ we have:
\begin{eqnarray*}
\left(M,\sigma\vDash\phi\right)\iff\left(\mathcal{A}\mbox{ accepts }g_{\Phi}\left(\sigma\right)\right)\mbox{ .}
\end{eqnarray*}
}
\end{defi}
\subsubsection{Alternating Tree Automata}
Our succinctness proof in \prettyref{sec:Succinctness} uses results
that show CTL{*} can be translated to tree automata.
We will define a type of tree automata called \emph{symmetric alternating
automata} (SAA) (see for example~\citep{kupferman00automatatheoretic});
these are a subclass of alternating automata, and can also be referred
to as simply alternating automata (see for example~\citep{DBLP:journals/tcs/Dam94}).
Every node, in the run of an SAA on an input structure $M$, represents
a world of $M$. However, a world $w$ in the input structure $M$
may occur many times in a run. Where a non-deterministic automaton
would non-deterministically pick a next state, an SAA non-deterministically
picks a conjunction of elements of the form $\left(\square,q\right)$
and $\left(\lozenge,q\right)$; alternatively we may define SAA as
deterministically picking a Boolean combination of requirements of
this form, see for example~\citep{kupferman00automatatheoretic}.
Alternating automata can also be thought of as a type of parity game,
see for example~\citep{GrThWi02}. An element of the form $\left(\square,q\right)$/$\left(\lozenge,q\right)$
indicates for every/some child $u$ of the current world $w$ of the
input structure $M$, a run on $M$ must have a branch which follows
$u$ and where $q$ is the next state. Before defining SAA we will
first define parity acceptance conditions.
\begin{defi}
A \uline{parity acceptance condition} $F$ of an automaton $(\Sigma,S,\States_{0},\delta,F)$
is a map from $S$ to $\mathbb{N}$. We say that a path satisfies the
parity condition $F$ iff the largest integer $n$, such that $F\left(q\right)=n$
for some $q$ that occurs infinitely often on the path, is even.
\end{defi}
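For an ultimately periodic path given as a finite prefix followed by a cycle, the states occurring infinitely often are exactly the states on the cycle, so the parity condition reduces to a finite check; a minimal Python sketch (names are ours):

```python
# Parity acceptance on an ultimately periodic path: the path satisfies
# the parity condition F iff the largest priority assigned to a state
# on the repeating cycle is even.
def satisfies_parity(F, cycle):
    """F: dict state -> priority (natural number); cycle: non-empty
    list of the states repeated forever."""
    return max(F[q] for q in cycle) % 2 == 0
```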
We can now define SAA\@.
\begin{defi}
A \emph{symmetric alternating automaton} (\uline{SAA}) is a tuple
\begin{align*}
(\Sigma,S,\States_{0},\delta,F)
\end{align*}
where $\Sigma$, $S$ and $\States_{0}$ are defined as in B\"uchi{} automata,
and
$\delta$ is a transition relation $\subseteq(S\times\Sigma\times2^{\left\{ \square,\lozenge\right\} \times S})$.
\end{defi}
We define the acceptance condition $F$ of an SAA to be a parity acceptance
condition, but note that we can express B\"uchi{} acceptance conditions
as parity acceptance conditions. The SAA accepts a run iff every infinite
path through the run satisfies $F$.
A run $L=\left\langle S_{L},\access_{L},g_{L}\right\rangle $
of the SAA on a $\mathcal{V}$-labelled pointed valued structure $\pvsOf{(\allworlds,\access,g)}{\pvsW}$ is
an $S\times S$-labelled tree structure satisfying the following
properties. Where $g_{L}\left(\func{root}\left(L\right)\right)=\tuple{w,q}$,
it is the case that $q\in S_{0}$ and $w=a$. For every $w_{L}$
in $S_{L}$, where $\tuple{w,q}=g_{L}\left(w_{L}\right)$
and $e=g\left(w\right)$, there exists some set $X\in2^{\left\{ \square,\lozenge\right\} \times S}$
such that $\tuple{q,e,X}\in\delta$ and
\begin{enumerate}
\item For each $r\in S$ such that $\left(\square,r\right)\in X$,
for \emph{each} $u$ such that $w\access u$ there must exist $u_{L}$
such that $w_{L}\access_{L}u_{L}$ and $\left(u,r\right)\in g_{L}\left(u_{L}\right)$.
\item For each $r\in S$ such that $\left(\lozenge,r\right)\in X$,
for \emph{some} $u$ such that $w\access u$ there must exist $u_{L}$
such that $w_{L}\access_{L}u_{L}$ and $\left(u,r\right)\in g_{L}\left(u_{L}\right)$.\end{enumerate}
\begin{theorem}
\label{thm:CTL2AA-single-exponential}Given a CTL{*} formula $\psi$
we can construct an SAA $\mathcal{A}_{\psi}$ with a number of states that
is singly exponential in the length of $\psi$. \end{theorem}
\begin{proof}
\citet{DBLP:journals/tcs/Dam94} provides a translation of CTL{*}
formul\ae{}{} into equivalent $\mu$-calculus formul\ae{}{}. The nodes are sets of
formul\ae{}{}, so this is a singly exponential translation.
There are a number of translations of $\mu$-calculus into alternating
automata. Wilke gives a simple translation that does not assume that
the tree has any particular structure~\citep{Wilke00alternatingtree}.
The states in the resulting automata are subformul\ae{}{} of the $\mu$-calculus
formula. Hence the translation into alternating automata is linear.
\end{proof}
The translation via $\mu$-calculus above is sufficient for \thispaper{}.
There are translations that result in more optimised model checking
and decision procedure results~\citep{kupferman00automatatheoretic}.
\subsection{\label{sub:Bisimulations}Bisimulations}
An important concept relating to structures is bisimilarity, as two
bisimilar structures satisfy the same set of modal formul\ae{}{}.
We credit \citet{milner1980calculus} and \citet{ParkDavid1981} for
developing the concept of bisimulation.
\begin{defi}
\label{def:PVS}Where $M=\tuple{\mfw,\access,\valuation}$ is a structure and $a\inS$,
we say that $M_{a}$ is a Pointed Valued Structure (\uline{PVS}).
\end{defi}
We now provide a definition of a bisimulation.
\begin{defi}
\label{def:bisimulation}Given a PVS $\pvsOf{(\allworlds,\access,g)}{\pvsW}$ and a PVS $\pvsOf{(\hat{\allworlds},\hat{R},\hat{g})}{\pvsWB}$ we
say that a relation $\mathfrak{B}$ from $S$ to $\hat{S}$
is a \uline{bisimulation} from $\pvsOf{(\allworlds,\access,g)}{\pvsW}$ to $\pvsOf{(\hat{\allworlds},\hat{R},\hat{g})}{\pvsWB}$ iff\end{defi}
\begin{enumerate}
\item $\left(w,\hat{w}\right)\in\mathfrak{B}$
\item for all $\left(u,\hat{u}\right)\in\mathfrak{B}$ we have $g\left(u\right)=\hat{g}\left(\hat{u}\right)$.
\item for all $\left(u,\hat{u}\right)\in\mathfrak{B}$ and $v\in uR$ there
is some $\hat{v}\in\hat{u}\hat{R}$ such that $\left(v,\hat{v}\right)\in\mathfrak{B}$.
\item for all $\left(u,\hat{u}\right)\in\mathfrak{B}$ and $\hat{v}\in\hat{u}\hat{R}$
there is some $v\in uR$ such that $\left(v,\hat{v}\right)\in\mathfrak{B}$.
\end{enumerate}
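Conditions 1--4 above can be checked directly for a candidate relation between two finite PVSs; a minimal Python sketch, with successor relations given as dictionaries and valuations as functions (all names are ours):

```python
# Check conditions 1-4 of the bisimulation definition for a candidate
# relation B between two pointed valued structures.
def is_bisimulation(B, w, w_hat, succ, succ_hat, g, g_hat):
    """B: set of pairs (u, u_hat); succ/succ_hat: dict world -> set of
    successors; g/g_hat: valuations (world -> set of atoms)."""
    if (w, w_hat) not in B:                            # condition 1
        return False
    for (u, u_hat) in B:
        if g(u) != g_hat(u_hat):                       # condition 2
            return False
        for v in succ[u]:                              # condition 3 (forth)
            if not any((v, v_hat) in B for v_hat in succ_hat[u_hat]):
                return False
        for v_hat in succ_hat[u_hat]:                  # condition 4 (back)
            if not any((v, v_hat) in B for v in succ[u]):
                return False
    return True
```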
Bisimulations can be used to define bisimilarity.
\begin{defi}
\label{def:bisimilar}We say that $\pvsOf{(\allworlds,\access,g)}{\pvsW}$ and $\pvsOf{(\hat{\allworlds},\hat{R},\hat{g})}{\pvsWB}$ are \uline{bisimilar}
iff there exists a bisimulation from $\pvsOf{(\allworlds,\access,g)}{\pvsW}$ to $\pvsOf{(\hat{\allworlds},\hat{R},\hat{g})}{\pvsWB}$.
\end{defi}
\begin{defi}
\label{def:bisimulation-invariant}We say that a formula $\phi$ of
some logic L is \uline{bisimulation invariant} iff for every bisimilar
pair of PVS's $\left(M,w\right)$ and $(\hat{M},\hat{w})$ where $M$
and $\hat{M}$ are structures that L is interpreted over, we have
$M,w\vDash\phi$ iff $\hat{M},\hat{w}\vDash\phi$. We say the logic
L is bisimulation invariant iff every formula $\phi$ of L is bisimulation
invariant.
\end{defi}
Knowing that a logic is bisimulation invariant is useful because we
can take the tree-unwinding of a structure without changing the set
of formul\ae{}{} that it satisfies.
\input{Thesis_Background_Equiv__.tex}
\section{Bisimulation Invariance}
\label{sec:bisim}\label{sub:Bisimulation-invariance-of-RoCTL*}
Recall that bisimulation invariance was defined in \prettyref{def:bisimulation-invariant}.
We shall now begin to prove some basic lemmas necessary to show that
RoCTL{*} is bisimulation invariant, and we will define bisimulations
on RoCTL-structures.
Before reading the following definition recall the definition of a
PVS, or pointed valued structure, from \prettyref{def:PVS}.
\begin{defi}
Let $\mathfrak{B}$ be any bisimulation from some PVS $M_{w}$ to
another PVS $\hat{M}_{\hat{w}}$. We define $\mathfrak{B}^{\omega}$
to be a binary relation from fullpaths through $M$ to fullpaths through
$\hat{M}$ such that $\left(\sigma,\hat{\sigma}\right)\in\mathfrak{B}^{\omega}$
iff $\left(\sigma_{i},\hat{\sigma}_{i}\right)\in\mathfrak{B}$ for
all $i\in\mathbb{N}$. We say that a PVS $M_{w}$ is a RoCTL-model
iff $M$ is a RoCTL-structure.
\end{defi}
It is important that for a path $\sigma$ through $M$ we can find
a similar path $\hat{\sigma}$ through $\hat{M}$. We will now show
that this is the case.
\begin{lemma}
\label{lem:bi-construct-path}Let $\mathfrak{B}$ be any bisimulation
from some RoCTL-model $M_{w}$ to another RoCTL-model $\hat{M}_{\hat{w}}$.
For any fullpath $\sigma$ through $M$ with $\sigma_{0}=w$ there
exists a fullpath $\hat{\sigma}$ through $\hat{M}$ such that $\left(\sigma,\hat{\sigma}\right)\in\mathfrak{B}^{\omega}$
and $\hat{\sigma}_{0}=\hat{w}$; likewise for any fullpath $\hat{\sigma}$
through $\hat{M}$ with $\hat{\sigma}_{0}=\hat{w}$ there exists
a fullpath $\sigma$ through $M$ such that $\left(\sigma,\hat{\sigma}\right)\in\mathfrak{B}^{\omega}$
and $\sigma_{0}=w$.\end{lemma}
\begin{proof}
We construct $\hat{\sigma}$ from $\sigma$ as follows: let $\hat{\sigma}_{0}=\hat{w}$.
Once we have chosen $\hat{\sigma}_{i}$ we choose $\hat{\sigma}_{i+1}$
as follows: since $\left(\sigma_{i},\hat{\sigma}_{i}\right)\in\mathfrak{B}$
and $\sigma_{i+1}\in\sigma_{i}R$ there is some $\hat{v}\in\hat{\sigma}_{i}\hat{R}$
such that $\left(\sigma_{i+1},\hat{v}\right)\in\mathfrak{B}$; we
let $\hat{\sigma}_{i+1}=\hat{v}$. We may construct $\sigma$ from
$\hat{\sigma}$ likewise.
\end{proof}
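The inductive construction in this proof is simple enough to sketch in code. The following Python fragment is an illustration only (the names \texttt{R\_hat} and \texttt{B} are our own encoding of the accessibility relation of $\hat{M}$ and the bisimulation); it lifts a finite prefix of a fullpath through $M$ to a matching prefix through $\hat{M}$ using the forth condition, and fullpaths being infinite, the same step can be repeated indefinitely.

```python
# Sketch of the path construction in the proof above. R_hat maps each
# world of M-hat to its set of successors, and B is the bisimulation,
# given as a set of pairs (w, w_hat).

def lift_path(sigma, w_hat, R_hat, B):
    """Lift a prefix sigma through M to a prefix sigma_hat through
    M-hat with (sigma[i], sigma_hat[i]) in B for every i."""
    sigma_hat = [w_hat]
    for i in range(len(sigma) - 1):
        # (sigma[i], sigma_hat[i]) is in B and sigma[i+1] is a successor
        # of sigma[i], so the forth condition yields a matching successor.
        sigma_hat.append(next(v for v in R_hat[sigma_hat[i]]
                              if (sigma[i + 1], v) in B))
    return sigma_hat
```

The symmetric construction, from $\hat{\sigma}$ back to $\sigma$, uses the back condition in the same way.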
The following lemma is similar; however, we are specifically attempting
to construct deviations.
\begin{lemma}
\label{lem:bi-construct-deviation}Let $\mathfrak{B}$ be a bisimulation
from some RoCTL{}$_{\Viol}${}{}-model $M_{w}$ to another RoCTL{}$_{\Viol}${}{}-model $\hat{M}_{\hat{w}}$.
Let $\left(\sigma,\hat{\sigma}\right)\in\mathfrak{B}^{\omega}$.
Given a deviation $\hat{\pi}$ from $\hat{\sigma}$ we can construct
a fullpath $\pi$ such that $\pi$ is a deviation from $\sigma$ and
$\left(\pi,\hat{\pi}\right)\in\mathfrak{B}^{\omega}$.\end{lemma}
\begin{proof}
As $\hat{\pi}$ is a deviation from $\hat{\sigma}$, it is the case
that $\hat{\pi}$ is an $i$-deviation from $\hat{\sigma}$ for some
non-negative integer $i$. Since $\left(\sigma_{i},\hat{\pi}_{i}\right)\in\mathfrak{B}$
we can construct a fullpath $\tau$ such that $\left(\tau,\hat{\pi}_{\geq i}\right)\in\mathfrak{B}^{\omega}$
and $\tau_{0}=\sigma_{i}$. We see that $\sigma_{\leq i-1}\cdot\tau$
is a fullpath through $M$; we call this fullpath $\pi$. Since $\hat{\pi}_{\geq i+1}$
is failure-free, $\tau_{\geq1}$ is failure-free and thus $\pi_{\geq i+1}$
is failure-free. Thus $\pi$ is a deviation from $\sigma$.
\end{proof}
We will now state and prove the truth lemma.
\begin{lemma}
\label{lem:bi-path-invariant}Let $M_{w}$ and $\hat{M}_{\hat{w}}$
be a pair of arbitrary RoCTL{}$_{\Viol}${}{}-models and let $\mathfrak{B}$
be a bisimulation from $M_{w}$ to $\hat{M}_{\hat{w}}$. Then for
any $\left(\sigma,\hat{\sigma}\right)\in\mathfrak{B}^{\omega}$,
and for any formula $\phi$ it is the case that $M,\sigma\vDash\phi\iff\hat{M},\hat{\sigma}\vDash\phi$. \end{lemma}
\begin{proof}
For contradiction, let $\phi$ be the shortest formula such that there
exists a pair $\sigma$, $\hat{\sigma}$ of fullpaths in $\mathfrak{B}^{\omega}$
not satisfying $M,\sigma\vDash\phi\iff\hat{M},\hat{\sigma}\vDash\phi$.
Without loss of generality we can assume that $M,\sigma\vDash\phi$
but $\hat{M},\hat{\sigma}\nvDash\phi$. Consider the possible forms
of $\phi$.
$\phi=p$: Since $M,\sigma\vDash p$ we know that $p\in g\left(\sigma_{0}\right)$.
As $\mathfrak{B}$ is a bisimulation and $\left(\sigma_{0},\hat{\sigma}_{0}\right)\in\mathfrak{B}$
we know that $p\in\hat{g}\left(\hat{\sigma}_{0}\right)$. Hence $\hat{M},\hat{\sigma}\vDash p$.
This contradicts our assumption that $\hat{M},\hat{\sigma}\nvDash\phi$.
$\phi=N\psi$: Since $M,\sigma\vDash N\psi$, we know that $M,\sigma_{\geq1}\vDash\psi$,
and since $\phi$ is the shortest counter example, we know that $\hat{M},\hat{\sigma}_{\geq1}\vDash\psi$.
We see that $\hat{M},\hat{\sigma}\vDash\phi$.
$\phi=\theta U\psi$: Since $M,\sigma\vDash\theta U\psi$, we know
that there exists a non-negative integer $i$ such that $M,\sigma_{\geq i}\vDash\psi$
and for all non-negative $j$ less than $i$ we have $M,\sigma_{\geq j}\vDash\theta$.
As $\psi$ and $\theta$ are shorter than $\phi$ we know $\hat{M},\hat{\sigma}_{\geq i}\vDash\psi$
and $\hat{M},\hat{\sigma}_{\geq j}\vDash\theta$. Thus $\hat{M},\hat{\sigma}\vDash\theta U\psi$.
$\phi=A\psi$: Since $\hat{M},\hat{\sigma}\nvDash A\psi$ we know
there exists $\hat{\pi}$ such that $\hat{M},\hat{\pi}\nvDash\psi$.
From \prettyref{lem:bi-construct-path} we know that there exists
a path $\pi$ such that $\left(\pi,\hat{\pi}\right)\in\mathfrak{B}^{\omega}$.
As $\psi$ is shorter than $\phi$ we know that $M,\pi\nvDash\psi$.
Hence $M,\sigma\nvDash A\psi$.
$\phi=O\psi$: Since $\hat{M},\hat{\sigma}\nvDash O\psi$ we know
there exists $\hat{\pi}$ such that $\hat{M},\hat{\pi}\nvDash\psi$
and $\hat{\pi}$ is failure-free. From \prettyref{lem:bi-construct-path}
we know that there exists a path $\pi$ such that $\left(\pi,\hat{\pi}\right)\in\mathfrak{B}^{\omega}$.
As $\psi$ is shorter than $\phi$ we know that $M,\pi\nvDash\psi$.
As $\hat{\pi}$ is failure-free, for all $i>0$ we know ${\bf v}\notin\hat{g}\left(\hat{\pi}_{i}\right)$;
from the definition of a bisimulation we know that ${\bf v}\notin g\left(\pi_{i}\right)$.
Hence $\pi$ is failure-free and $M,\sigma\nvDash O\psi$.
$\phi=\blacktriangle\psi$: Since $\hat{M},\hat{\sigma}\nvDash\blacktriangle\psi$ we know
there exists $\hat{\pi}$ such that $\hat{M},\hat{\pi}\nvDash\psi$
and $\hat{\pi}$ is either $\hat{\sigma}$ or a deviation from $\hat{\sigma}$.
If $\hat{\pi}=\hat{\sigma}$ then $M,\sigma\nvDash\psi$ and $M,\sigma\nvDash\blacktriangle\psi$.
If $\hat{\pi}$ is an $i$-deviation from $\hat{\sigma}$ then from
\prettyref{lem:bi-construct-deviation} we know there is a deviation
$\pi$ from $\sigma$ such that $(\pi,\hat{\pi})\in\mathfrak{B}^{\omega}$.
We see that $M,\pi\nvDash\psi$ and thus $M,\sigma\nvDash\blacktriangle\psi$.
By contradiction we know that no such $\phi$ exists.
\end{proof}
\begin{lemma}
\label{lem:RoCTL*v-is-bisimulation-invariant}\label{lem:RoCTL*-is-bisimulation-invariant}RoCTL{}*\hspace{-1ex}$_{\Viol}${}
is \index{bisimulation invariant}bisimulation invariant. \end{lemma}
\begin{proof}
Consider any RoCTL{*} formula $\phi$. Let $\mathfrak{B}$ be a bisimulation
from some pair of PVS's $\left(M,w\right)$ and $(\hat{M},\hat{w})$,
and suppose for contradiction that $M,w\vDash\phi$ but $\hat{M},\hat{w}\nvDash\phi$.
Recall that under RoCTL{*} we define truth at a world as follows:
\begin{align*}
M,w\vDash\aform & \text{ iff }\Exists{\pi\text{ s.t. }\pi_{0}=w}M,\pi\vDash\aform\mbox{ .}
\end{align*}
Let $\pi$ be a fullpath through $M$ with $\pi_{0}=w$ and $M,\pi\vDash\phi$.
From \prettyref{lem:bi-construct-path} there exists a fullpath $\hat{\pi}$
through $\hat{M}$ such that $\hat{\pi}_{0}=\hat{w}$ and $\left(\pi,\hat{\pi}\right)\in\mathfrak{B}^{\omega}$,
and from \prettyref{lem:bi-path-invariant} we have $\hat{M},\hat{\pi}\vDash\phi$.
Hence $\hat{M},\hat{w}\vDash\phi$, a contradiction.
Thus we see that for any bisimilar pair of PVS's $\left(M,w\right)$
and $(\hat{M},\hat{w})$ we have
\begin{eqnarray*}
\left(M,w\right)\vDash\phi & \iff & (\hat{M},\hat{w})\vDash\phi\mbox{ .}
\end{eqnarray*}
By definition we see that $\phi$ is bisimulation invariant. Since
$\phi$ is an arbitrary RoCTL{*} formula, we see that RoCTL{*} is
bisimulation invariant.
\end{proof}
\section{Reduction into QCTL{*}}
\label{sec:qctl}\label{sec:A-Linear-reduction-into-QCTL*}\input{Body_ALinearReductionIntoQCTLstar__.tex}
\subsection{\label{sec:Hybrid-Logic}A Comment on Hybrid Logic}
Even the tree-semantics of QCTL{*} is non-elementary to decide and
no translation into CTL{*} is elementary in length. For this reason
we also investigated other logics to translate RoCTL{*} into. We know
that we can represent RoCTL{*} with a single variable fragment of
a hybrid extension of CTL{*}, by translating $\triangle\phi$ into a
formula such as the following:
\begin{align*}
\phi\vee\exists x.\left(Fx\wedge E\left(\phi\wedge F\left(x\wedge NNG\neg{\bf v}\right)\right)\right) & \mbox{ ,}
\end{align*}
where $\exists x.\psi$ is the hybrid logic formula indicating that
there exists a valuation of $x$ such that $x$ is true at exactly one
node on the tree and $\psi$ is true. This is still some way from
producing a decision procedure for RoCTL{*}. There has been considerable
research into single variable fragments of Hybrid Logic recently (see
for example \citep{DBLP:conf/mfcs/KaraWLS09} for a good overview
of the results in this area). However, these fragments do not contain
the $\exists$ operator as a base operator. Although $\exists x.\psi$
can be defined as an abbreviation, this requires two variables. Even
adding a single variable hybrid logic to CTL{*} leads to a non-elementary
decision procedure (see for example \citep{DBLP:conf/mfcs/KaraWLS09}),
and adding two variables to an otherwise quite weak temporal logic
again gives a non-elementary decision procedure \citep{SW07}. A
potential avenue of research is investigating the complexity of deciding
the fragment of Hybrid CTL{*} (HCTL{*}) where the hybrid component
consists solely of the $\exists$ operator over a single variable,
as the translation of RoCTL{*} into HCTL{*} falls inside this fragment.
Although we have also given a linear translation into the tree-semantics
of QCTL{*} logic, this single variable fragment of HCTL{*} seems much
more restricted than QCTL{*}. Additionally this fragment of HCTL{*}
seems to have a closer relationship with pebble automata. Nevertheless
this avenue does not seem likely to result in an elementary decision
procedure for RoCTL{*}.
\section{$\mathcal{A}LTL$}
\label{sec:ALTL}\label{sec:altl}
Here we define a possible extension of LTL allowing automata to be
used as operators, and briefly show how to convert an $\mathcal{A}LTL$ formula
$\phi$ into an automaton $\mathcal{A}_{\phi}$.
\begin{defi}
We define $\mathcal{A}LTL$ formul\ae{}{} recursively according to the following
BNF notation,
\begin{align*}
\aform & ::=p\,|\,\neg\aform\,|\,\lAnd{\aform}{\aform}\,|\,\lUntil{\aform}{\aform}\,|\, N\aform\,|\,\mathcal{A}\mbox{ ,}
\end{align*}
where $p$ varies over $\mathcal{V}$ and $\mathcal{A}$ can be any counter-free
FSA whose input alphabet is $2^{\mathcal{V}}$, that is, $\Sigma=2^{\mathcal{V}}$.
Recall from \prettyref{def:path-to-word} that $g_{\Phi}$ is a simple
conversion from fullpaths to words of an automaton. In this section
we will assume that the special atoms required for the translation
are members of $\mathcal{V}$, and so will use $\mathcal{V}$ as $\Phi$. The semantics
of $\mathcal{A}LTL$ are defined similarly to LTL, with the addition that
$M,\sigma\vDash\mathcal{A}$ iff the automaton $\mathcal{A}$ accepts $g_{\mathcal{V}}\left(\sigma_{\leq i}\right)$
for some $i$, or in other words
\begin{eqnarray*}
M,\sigma\vDash\mathcal{A} & \text{ iff } & \exists_{i}\text{ s.t. } g_{\mathcal{V}}\left(\sigma_{\leq i}\right)\in\mathcal{L}\left(\mathcal{A}\right)\mbox{ .}
\end{eqnarray*}
\end{defi}
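As an illustration of this prefix-acceptance clause, here is a minimal Python sketch under our own encoding (an assumption, not the thesis notation): letters are frozensets of atoms, so each $\sigma_{i}$ is represented directly by $g_{\mathcal{V}}\left(\sigma_{i}\right)$, and the automaton is a deterministic transition table.

```python
# The automaton is given by a transition dict delta, an initial state q0
# and a set of accepting states. M, sigma |= A iff some finite prefix of
# the word g_V(sigma) lands in an accepting state; since sigma is
# infinite we check up to the length of the supplied prefix.

def models_automaton(sigma_prefix, delta, q0, accepting):
    q = q0
    for letter in sigma_prefix:
        q = delta[(q, letter)]
        if q in accepting:
            return True   # g_V(sigma_{<=i}) is in L(A) for this i
    return False          # no accepted prefix within the bound
```

For instance, an automaton that enters its accepting state on first reading a letter containing $p$ is satisfied on exactly those paths with a finite prefix witnessing $p$.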
Note that since automata can be $\mathcal{A}LTL$ formul\ae{}{}, the following
definition also gives us a definition of equivalence between formul\ae{}{}
and automata.
\begin{defi}
We say that a pair of formul\ae{}{} $\phi$, $\psi$ are equivalent
($\phi\equiv\psi$) iff for all structures $M$ and paths $\sigma$
through $M$:
\begin{align*}
M,\sigma\vDash\phi\iff & M,\sigma\vDash\psi\mbox{ .}
\end{align*}
\end{defi}
We will now give a partial translation from $\mathcal{A}LTL$ formul\ae{}{} into
automata; we will not define the acceptance condition $F$ since $F$
is discarded when we produce $\mathcal{A}_{{\bf \Lambda}\phi}$ from $\mathcal{A}_{\phi}$.
\begin{defi}
\label{def:length-ATL}We define the length of an $\mathcal{A}LTL$ formula
recursively as follows:
\begin{eqnarray*}
\left|\phi\wedge\psi\right|=\left|\phi U\psi\right| & = & \left|\phi\right|+\left|\psi\right|\\
\left|\neg\phi\right|=\left|N\phi\right| & = & \left|\phi\right|+1\\
\left|p\right| & = & 1\\
\left|(\Sigma,S,\States_{0},\delta,F)\right| & = & \left|S\right|
\end{eqnarray*}
In some translations we encode state-formul\ae{}{} (e.g. $A\psi$)
into atoms (labelled $p_{A\psi}$). We define the complexity $\left|\phi\right|^{\star}$
of an $\mathcal{A}LTL$ formula $\phi$ similarly, except that we define the
complexity $\left|p_{\psi}\right|^{\star}$ of an atom labelled $p_{\psi}$
to be $\left|\psi\right|^{\star}$.
\end{defi}
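The length measure above is a straightforward recursion; the following Python sketch computes it over a hypothetical tuple encoding of $\mathcal{A}LTL$ formul\ae{}{} (our own encoding, not notation from the thesis).

```python
# Hypothetical tuple encoding: ('and', f, g), ('until', f, g),
# ('not', f), ('next', f), ('atom', p), and ('aut', states) where
# states is the state set S of the embedded automaton.

def length(phi):
    tag = phi[0]
    if tag in ('and', 'until'):          # |f ^ g| = |f U g| = |f| + |g|
        return length(phi[1]) + length(phi[2])
    if tag in ('not', 'next'):           # |~f| = |N f| = |f| + 1
        return length(phi[1]) + 1
    if tag == 'atom':                    # |p| = 1
        return 1
    if tag == 'aut':                     # |A| = |S|
        return len(phi[1])
    raise ValueError('unknown connective: %r' % (tag,))
```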
\begin{lemma}
\label{lem:ALTL-decidable}The satisfiability problem for $\mathcal{A}LTL$
is decidable.\end{lemma}
\begin{proof}
From \prettyref{cor:counter-free-FSA-to-LTL-3EXP} we can replace
each automaton in an $\mathcal{A}LTL$ formula $\phi$ with an equivalent LTL
formula. This will result in an LTL formula $\phi'$ equivalent to
$\phi$. We can then use any decision procedure for LTL to decide
$\phi'$.
\end{proof}
\subsection{A partial translation from $\mathcal{A}LTL$ into automata}
\label{sub:ALTL-to-automata}Here we define a translation of an $\mathcal{A}LTL$
formula $\phi$ into an automaton $\mathcal{A}_{\phi}$. However, we do
not define a traditional acceptance condition as this is not required
when constructing $\mathcal{A}_{{\bf \Lambda}\phi}$ from $\mathcal{A}_{\phi}$. In
this section we will use $s_{\phi}$ and $t_{\phi}$ to represent arbitrary
states of $\mathcal{A}_{\phi}$; we use $x$ and $y$ to represent
arbitrary states of automata in $\phi$.
\begin{defi}
The closure $\textbf{cl}\phi$ of the formula $\phi$ is defined as the smallest
set that satisfies the four following requirements:\end{defi}
\begin{enumerate}
\item $\phi\in\textbf{cl}\phi$
\item For all $\psi\in\textbf{cl}\phi$, if $\delta\leq\psi$ then $\delta\in\textbf{cl}\phi$.
\item For all $\psi\in\textbf{cl}\phi$, $\neg\psi\in\textbf{cl}\phi$ or there exists $\delta$
such that $\psi=\neg\delta$ and $\delta\in\textbf{cl}\phi$.
\item If $\mathcal{A}\in\textbf{cl}\phi$ then $\mathcal{A}^{x}\in\textbf{cl}\phi$ for all
states $x$ of $\mathcal{A}$.
\end{enumerate}
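The closure can be computed by the obvious worklist algorithm. Below is a Python sketch over the same hypothetical tuple encoding used in our earlier examples; automata are encoded as \texttt{('aut', name, x, states)}, carrying their initial state so that the variants $\mathcal{A}^{x}$ of requirement 4 are simply re-rooted copies.

```python
# Worklist computation of cl(phi). Requirement 3 is met by adding a
# single negation ('not', psi) for each non-negation psi; requirement 4
# re-roots each embedded automaton at every one of its states.

def closure(phi):
    cl, stack = set(), [phi]
    while stack:
        psi = stack.pop()
        if psi in cl:
            continue
        cl.add(psi)
        tag = psi[0]
        if tag in ('not', 'next'):
            stack.append(psi[1])                 # subformulae
        elif tag in ('and', 'until'):
            stack.extend([psi[1], psi[2]])
        elif tag == 'aut':
            name, states = psi[1], psi[3]
            stack.extend(('aut', name, y, states) for y in states)
        if tag != 'not':
            stack.append(('not', psi))           # requirement 3
    return cl
```

Each element of the result is a subformula (or automaton variant) or the negation of one, which is one way to see the linear bound of the proposition below.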
The states of $\mathcal{A}_{\phi}$ are sets of formul\ae{}{} that could
hold along a single fullpath.
\begin{proposition}
\label{prop:The-size-of}The size of the closure set is linear in
$\left|\phi\right|$.
\end{proposition}
\begin{defi}
\label{defn:MPC-Tableau-3}[MPC] We say that $s_{\phi}\subseteq\cl\phi$
is Maximally Propositionally Consistent (\uline{MPC}) iff for all
$\alpha,\beta\in\cl\phi$:\end{defi}
\begin{description}
\item [{(M1)}] if $\beta=\neg\alpha$ then $\beta\in s_{\phi}$ iff $\alpha\notin s_{\phi}$,
\item [{(M2)}] if $\alpha\wedge\beta\in\cl\phi$ then $\left(\alpha\wedge\beta\right)\in s_{\phi}\leftrightarrow\left(\alpha\in s_{\phi}\text{ and }\beta\in s_{\phi}\right)$. \end{description}
\begin{defi}
\label{def:set-of-states-Sphi} The set of states $S_{\phi}$ is
the set of all subsets $s_{\phi}\subseteq\cl\phi$ satisfying: \end{defi}
\begin{description}
\item [{(S1)}] $s_{\phi}$ is MPC\@
\item [{(S2)}] if $\alpha U\beta\in s_{\phi}$ then $\alpha\in s_{\phi}$ or
$\beta\in s_{\phi}$
\item [{(S3)}] if $\neg\left(\alpha U\beta\right)\in s_{\phi}$ then $\beta\notin s_{\phi}$
\item [{(S4)}] $s_{\phi}$ is non-contradictory, i.e. $\bigwedge s_{\phi}$
is satisfiable.
\end{description}
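Conditions S1--S3 (via M1--M2) are directly checkable syntactic constraints; only S4 needs a decision procedure. A Python sketch over the same hypothetical tuple encoding as our earlier examples:

```python
# s and cl are sets of formulae in the tuple encoding used earlier.
# S4 (satisfiability of the conjunction of s) is omitted here, since it
# requires a decision procedure for ALTL.

def is_state(s, cl):
    for psi in cl:
        if psi[0] == 'not':
            # M1: a negation is in s iff its operand is not
            if (psi in s) == (psi[1] in s):
                return False
        elif psi[0] == 'and':
            # M2: a conjunction is in s iff both conjuncts are
            if (psi in s) != (psi[1] in s and psi[2] in s):
                return False
        elif psi[0] == 'until':
            if psi in s and not (psi[1] in s or psi[2] in s):
                return False                     # S2
            if ('not', psi) in s and psi[2] in s:
                return False                     # S3
    return True
```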
Note that $\mathcal{A}LTL$ is decidable, so we can compute whether $s_{\phi}$
is contradictory. We now define a standard temporal successor relation
for LTL formul\ae{}{}.
\global\long\defH_{\phi}{H_{\phi}}
\global\long\defr_{N}{r_{N}}
\global\long\defr_{A}{r_{A}}
\global\long\def\tmpsucc{r_{N}}
\global\long\def\mathcal{V}{\mathcal{V}}
\global\long\defp{p}
\begin{defi}
\label{defn:rX-Tableau}[$\tmpsucc$] The temporal successor $r_{N}$
relation on states is defined as follows: for all states $s_{\phi}$,
$t_{\phi}$ put $\left(s_{\phi},t_{\phi}\right)$ in $\tmpsucc$ iff the following
conditions are satisfied:\end{defi}
\begin{description}
\item [{(R1)}] $N\alpha\in s_{\phi}$ implies $\alpha\in t_{\phi}$
\item [{(R2)}] $\neg N\alpha\in s_{\phi}$ implies $\alpha\notin t_{\phi}$
\item [{(R3)}] $\alpha U\beta\in s_{\phi}$ and $\beta\notin s_{\phi}$ implies
$\alpha U\beta\in t_{\phi}$
\item [{(R4)}] $\neg(\alpha U\beta)\in s_{\phi}$ and $\alpha\in s_{\phi}$
implies $\neg(\alpha U\beta)\in t_{\phi}$\end{description}
\begin{defi}
We define the transition relation $\delta_{\phi}\subseteq S_{\phi}\times\Sigma\times S_{\phi}$
as follows: a member $\left\langle s_{\phi},e,t_{\phi}\right\rangle $
of $S_{\phi}\times\Sigma\times S_{\phi}$ is a member of $\delta_{\phi}$
iff \end{defi}
\begin{description}
\item [{(T1)}] $\left\langle s_{\phi},t_{\phi}\right\rangle \in\tmpsucc$
\item [{(T2)}] For each $p\in\mathcal{V}$, it is the case that $p\in e$
iff $p\in s_{\phi}$
\item [{(T3)}] If $\mathcal{A}^{x}\in s_{\phi}$, and $x$ is not
an accepting state of $\mathcal{A}^{x}$, then there must exist a
state $y$ of $\mathcal{A}^{x}$ such that $\mathcal{A}^{y}\in t_{\phi}$
and $\left\langle x,e,y\right\rangle $ is a transition
of $\mathcal{A}^{x}$.
\item [{(T4)}] If $\neg\mathcal{A}^{x}\in s_{\phi}$, then for each state
$y$ of $\mathcal{A}^{x}$ such that $\left\langle x,e,y\right\rangle $
is a transition of $\mathcal{A}^{x}$ it must be the case that $\mathcal{A}^{y}\notin t_{\phi}$.
\end{description}
\begin{defi}
The automaton $\mathcal{A}_{\phi}$ is the tuple $(\Sigma,S_{\phi},Q_{0},\delta_{\phi})$,
where $Q_{0}$ is the set $\left\{ a\colon a\in S_{\phi}\wedge\phi\in a\right\} $.
\end{defi}
Note that the tuple above does not include an acceptance condition.
The automaton $\mathcal{A}_{\phi}$ is used only to generate the automaton
$\mathcal{A}_{{\bf \Lambda}\phi}$. The automaton $\mathcal{A}_{{\bf \Lambda}\phi}$ reads the
finite prefix $\sigma_{\leq i}=\pi_{\leq i}$ of an $i$-deviation
$\pi$ from $\sigma$ and then reads a state formula indicating that
we can deviate. This in essence splits the deviation into a prefix
and suffix. For this reason we do not define a standard acceptance
condition; instead we will say that $\mathcal{A}_{\phi}$ accepts a pair
$\left(\pi,i\right)$ iff there exists a state $s_{\phi}\in S_{\phi}$
such that the automaton can reach state $s_{\phi}$ after reading the
prefix $\pi_{\leq i-1}$, and $\pi_{\geq i}\vDash\bigwedge s_{\phi}$.
Or formally:
\begin{defi}
Given a fullpath $\pi$ through some structure $M$, and non-negative
integer $i$, we say that $\mathcal{A}_{\phi}$ accepts a pair $\left(\pi,i\right)$
iff there exists a state $s_{\phi}\in S_{\phi}$ such that $\pi_{\geq i}\vDash\bigwedge s_{\phi}$,
and there exists a path of $\mathcal{A}_{\phi}$ labelled $g_{\mathcal{V}}\left(\pi_{\leq i-1}\right)$
which ends in the state $s_{\phi}$. \end{defi}
\begin{lemma}
\label{lem:s-iff-t}For any fullpath $\pi$, integer $j$, pair of
states $s_{\phi},t_{\phi}$ such that
\begin{eqnarray*}
\left\langle s_{\phi},g_{\mathcal{V}}\left(\pi_{j}\right),t_{\phi}\right\rangle & \in & \delta_{\phi}
\end{eqnarray*}
we have $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}\implies\pi_{\geq j}\vDash\bigwedge s_{\phi}$.\end{lemma}
\begin{proof}
For contradiction assume that this lemma is false. Then $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$
and $\pi_{\geq j}\nvDash\bigwedge s_{\phi}$. Since $\pi_{\geq j}\nvDash\bigwedge s_{\phi}$
there exists some $\psi\in s_{\phi}$ such that $\pi_{\geq j}\nvDash\psi$.
We assume without loss of generality that $\psi$ is the shortest
such formula. We now consider each possible form of $\psi$; in each
case recall that $\psi\in s_{\phi}$, $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$
and $\left\langle s_{\phi},g_{\mathcal{V}}\left(\pi_{j}\right),t_{\phi}\right\rangle \in\delta_{\phi}$.
$\psi=\neg\neg\alpha$: From M1 and $\psi\in s_{\phi}$ we get $\alpha\in s_{\phi}$,
and since $\alpha$ is shorter than $\neg\neg\alpha$ it follows that
$\pi_{\geq j}\vDash\alpha$ and so $\pi_{\geq j}\vDash\neg\neg\alpha$.
However, by assumption $\pi_{\geq j}\nvDash\psi$.
$\psi=p$: From T2 we know that as $p\in s_{\phi}$, we have $p\in g_{\mathcal{V}}\left(\pi_{j}\right)$
and so $\pi_{\geq j}\vDash p$. But by assumption $\pi_{\geq j}\nvDash\psi$.
$\psi=\neg p$: From M1 we know that $p\notin s_{\phi}$, and from T2
we have $p\notin g_{\mathcal{V}}\left(\pi_{j}\right)$ and so $\pi_{\geq j}\vDash\neg p$.
$\psi=\alpha\wedge\beta$: As $s_{\phi}$ is MPC we see that $\alpha,\beta\in s_{\phi}$.
As we have assumed that $\psi$ is the shortest formula that provides
a counterexample we see that $\pi_{\geq j}\vDash\alpha$ and $\pi_{\geq j}\vDash\beta$.
Hence $\pi_{\geq j}\vDash\alpha\wedge\beta$.
$\psi=\neg\left(\alpha\wedge\beta\right)$: As $s_{\phi}$ is MPC we
see that $\alpha\wedge\beta\notin s_{\phi}$. It follows that $\alpha\notin s_{\phi}$
or $\beta\notin s_{\phi}$. Without loss of generality, assume $\alpha\notin s_{\phi}$;
by M1 we have $\neg\alpha\in s_{\phi}$, and as $\neg\alpha$ is shorter than
$\psi$ we see that $\pi_{\geq j}\vDash\neg\alpha$.
Thus $\pi_{\geq j}\nvDash\alpha$ and $\pi_{\geq j}\nvDash\left(\alpha\wedge\beta\right)$.
Hence $\pi_{\geq j}\vDash\neg\left(\alpha\wedge\beta\right)$.
$\psi=N\theta$: We see that if $\pi_{\geq j}\nvDash N\theta$ then
$\pi_{\geq j+1}\nvDash\theta$, but we see from R1 that $\theta\in t_{\phi}$
and so $\pi_{\geq j+1}\vDash\theta$.
By contradiction $\psi$ cannot be of the form $N\theta$.
$\psi=\neg N\theta$: We see that if $\pi_{\geq j}\nvDash\neg N\theta$
then $\pi_{\geq j+1}\vDash\theta$, but we see from R2 that
$\theta\notin t_{\phi}$; since either $\theta$ or $\neg\theta$ is in $t_{\phi}$,
we have $\pi_{\geq j+1}\vDash\neg\theta$, a contradiction.
$\psi=\alpha U\beta$: We see that if $\alpha U\beta\in s_{\phi}$ then
from S2 either $\alpha\in s_{\phi}$ or $\beta\in s_{\phi}$. Since $\pi_{\geq j}\nvDash\alpha U\beta$
it follows that $\pi_{\geq j}\vDash\neg\beta$. As $\beta$ is
shorter than $\psi$ and $\pi_{\geq j}\nvDash\beta$ we have $\beta\notin s_{\phi}$.
Since $\beta\notin s_{\phi}$, from R3 we have $\alpha U\beta\in t_{\phi}$
and so $\pi_{\geq j+1}\vDash\alpha U\beta$. As $\alpha\in s_{\phi}$
and $\alpha$ is shorter than $\psi$ we see that $\pi_{\geq j}\vDash\alpha$.
As $\pi_{\geq j}\vDash\alpha$ and $\pi_{\geq j+1}\vDash\alpha U\beta$
we see that $\pi_{\geq j}\vDash\alpha U\beta$.
$\psi=\neg\left(\alpha U\beta\right)$: We see that if $\neg\left(\alpha U\beta\right)\in s_{\phi}$
then from S3 we have $\beta\notin s_{\phi}$ and thus $\neg\beta\in s_{\phi}$.
As $\neg\beta$ is shorter than $\psi$ we have $\pi_{\geq j}\vDash\neg\beta$.
Since $\pi_{\geq j}\nvDash\neg\left(\alpha U\beta\right)$
we have $\pi_{\geq j}\vDash\alpha U\beta$; as $\pi_{\geq j}\vDash\neg\beta$
it follows that $\pi_{\geq j}\vDash\alpha$. Thus $\alpha\in s_{\phi}$,
and from R4 we know $\neg\left(\alpha U\beta\right)\in t_{\phi}$ and
hence $\pi_{\geq j+1}\vDash\neg\left(\alpha U\beta\right)$. As $\pi_{\geq j}\vDash\neg\beta$
it follows that $\pi_{\geq j}\vDash\neg\left(\alpha U\beta\right)$.
By contradiction, $\psi$ cannot be of the form $\neg\left(\alpha U\beta\right)$.
$\psi=\mathcal{A}^{x}$: If $x$ were an accepting state of $\mathcal{A}^{x}$,
then $\mathcal{A}^{x}$ would be satisfied on all fullpaths through
$M$, including $\pi_{\geq j}$; hence $x$ is not an accepting state.
We see from T3 that there exists a state $y$ of $\mathcal{A}^{x}$ such
that $\mathcal{A}^{y}\in t_{\phi}$ and $\left\langle x,g_{\mathcal{V}}\left(\pi_{j}\right),y\right\rangle $
is a transition of $\mathcal{A}^{x}$. As $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$
we see $\pi_{\geq j+1}\vDash\mathcal{A}^{y}$. We can prepend the state
$x$ and the symbol $g_{\mathcal{V}}\left(\pi_{j}\right)$ to the accepting
path for $\mathcal{A}^{y}$ to construct an accepting path for $\mathcal{A}^{x}$,
so we see that $\pi_{\geq j}\vDash\mathcal{A}^{x}$.
$\psi=\neg\mathcal{A}^{x}$: Since $\pi_{\geq j}\nvDash\psi$ we see
$\pi_{\geq j}\vDash\mathcal{A}^{x}$. Thus there must exist a state $y$
of $\mathcal{A}^{x}$ such that $\left\langle x,g_{\mathcal{V}}\left(\pi_{j}\right),y\right\rangle $
is in the transition relation of $\mathcal{A}^{x}$ and $\pi_{\geq j+1}\vDash\mathcal{A}^{y}$.
However, from T4 and M1, we see that $\neg\mathcal{A}^{y}\in t_{\phi}$,
and since $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$, we have $\pi_{\geq j+1}\vDash\neg\mathcal{A}^{y}$, a contradiction.
We have considered all possible forms of $\psi$ and in each case
got a contradiction. By contradiction this lemma must be true.
\end{proof}
We will now state the lemma demonstrating the correctness of the translation.
\begin{lemma}
\label{lem:accepts,pi,i}For any fullpath $\pi$ through $M$, and
non-negative integer $i$, the automata $\mathcal{A}_{\phi}$ accepts the
pair $\left(\pi,i\right)$ iff $\pi\vDash\phi$.
\end{lemma}
\begin{proof}
We first show that this lemma holds for $i=0$.
$\left(\Longleftarrow\right)$ Say that $\pi\vDash\phi$. We let $s_{\phi}$ be the maximal subset
of $\textbf{cl}\phi$ such that for each $\psi\in s_{\phi}$ we have $\pi\vDash\psi$.
We see that $s_{\phi}$ satisfies S1--4 and so $s_{\phi}\in S_{\phi}$.
We see $\phi\in s_{\phi}$ and so $s_{\phi}\in Q_{0}$. Clearly $\pi\vDash\bigwedge s_{\phi}$.
$\left(\Longrightarrow\right)$ By definition each $s_{\phi}\in Q_{0}$
includes $\phi$ and so clearly if $\pi\nvDash\phi$ then $\pi\nvDash\bigwedge s_{\phi}$.
Say that the lemma holds for $i=j$, where $j$ is some non-negative
integer. We now show that the lemma holds for $i=j+1$.
$\left(\Longleftarrow\right)$ Say that $\pi\vDash\phi$. Since the
lemma holds for $i=j$, we see that there exists a state $s_{\phi}\in S_{\phi}$
such that $\pi_{\geq j}\vDash\bigwedge s_{\phi}$ and there exists
a path of $\mathcal{A}_{\phi}$ labelled $g_{\mathcal{V}}\left(\pi_{\leq j-1}\right)$
which ends in the state $s_{\phi}$. We let $t_{\phi}$ be the maximal
subset of $\textbf{cl}\phi$ such that for each $\psi\in t_{\phi}$ we have
$\pi_{\geq j+1}\vDash\psi$. Again, we see that $t_{\phi}$ satisfies
S1--4 and so $t_{\phi}\in S_{\phi}$. We now show that $\left\langle s_{\phi},g_{\mathcal{V}}\left(\pi_{j}\right),t_{\phi}\right\rangle \in\delta_{\phi}$.
\begin{description}
\item [{T1}] Say that $N\alpha\in s_{\phi}$, then since $\pi_{\geq j}\vDash\bigwedge s_{\phi}$
it is clear that $\pi_{\geq j+1}\vDash\alpha$. Since either $\alpha$
or its negation is in $t_{\phi}$ and $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$,
we see that $\alpha$ is in $t_{\phi}$. We see that $r_{N}$ is a
standard temporal successor function, and so a similar argument can
be made for R2--4.
\item [{T2}] We see from the semantics that, for each atom $p$, we have
$\pi_{\geq j}\vDash p$ iff $p\in g_{\mathcal{V}}\left(\pi_{j}\right)$.
Since either $p$ or $\neg p$ is in $s_{\phi}$, we again see that $p\in s_{\phi}$
iff $p\in g_{\mathcal{V}}\left(\pi_{j}\right)$.
\item [{T3}] It is clear that if an automaton accepts a word ``abcd$\cdots$z''
starting at a state $x$ then there must exist a state $y$ from which
the automaton accepts the word ``bcd$\cdots$z'', and such that $\left\langle x,a,y\right\rangle $
is in the transition relation. Again, $t_{\phi}$ contains either $\mathcal{A}^{y}$
or $\neg\mathcal{A}^{y}$. Since $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$
and $\pi_{\geq j+1}\vDash\mathcal{A}^{y}$ it is clear that $\mathcal{A}^{y}\in t_{\phi}$.
\item [{T4}] This is the converse of T3. We see that if there exists a
state $y$ from which the automaton accepts the word ``bcd$\cdots$z'',
and $\left\langle x,a,y\right\rangle $ is in the transition
relation then the automaton accepts the word ``abcd$\cdots$z''. Say
that $\neg\mathcal{A}^{x}\in s_{\phi}$, then $\pi_{\geq j}\vDash\neg\mathcal{A}^{x}$
and so $\pi_{\geq j+1}\nvDash\mathcal{A}^{y}$ for any $y$ reachable
from $x$ in $\mathcal{A}^{x}$ by reading the symbol $g_{\mathcal{V}}\left(\pi_{j}\right)$.
Yet again, $t_{\phi}$ contains either $\mathcal{A}^{y}$ or $\neg\mathcal{A}^{y}$.
Since $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$ and $\pi_{\geq j+1}\nvDash\mathcal{A}^{y}$
it is clear that $\neg\mathcal{A}^{y}\in t_{\phi}$.\end{description}
$\left(\Longrightarrow\right)$ Say that $\pi\nvDash\phi$, but that
the automaton $\mathcal{A}_{\phi}$ accepts the pair $\left(\pi,j+1\right)$.
Then there exists a path through $\mathcal{A}_{\phi}$ labelled $g_{\mathcal{V}}\left(\pi_{\leq j}\right)$
ending at a state $t_{\phi}$ such that $\pi_{\geq j+1}\vDash\bigwedge t_{\phi}$;
let $s_{\phi}$ be the state immediately preceding $t_{\phi}$ along
that path. Since $\pi\nvDash\phi$ and the lemma holds for $i=j$
we see that $\pi_{\geq j}\nvDash\bigwedge s_{\phi}$. From \prettyref{lem:s-iff-t},
we get a contradiction.
\end{proof}
\begin{defi}
We say that an $\mathcal{A}LTL$ formula is counter-free if all automata contained
in the formula are counter-free.
\end{defi}
Although every LTL formula is equivalent to some counter-free
automaton, in that they accept precisely the same strings/paths \citep{DiGa08Thomas},
the converse does not hold: an automaton that is not counter-free
may still be equivalent to an LTL formula. For example, the following automaton
accepts the same paths that satisfy $Gp$, yet it is not counter-free
as $pp\in L_{a,a}$ but $p\notin L_{a,a}$.
\begin{center}
\input{LTL_non_counterfree.pdftex_t}
\par\end{center}
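To make the counter concrete, the following Python sketch reconstructs a two-state automaton of this shape (our guess at the pictured automaton: states $a$ and $b$, with reading a letter containing $p$ flipping $a\to b\to a$) and searches for a counter, i.e. a state $s$ and word $u$ with $u^{m}\in L_{s,s}$ but $u\notin L_{s,s}$.

```python
from itertools import product

def runs(delta, source, word):
    """States reachable from source by reading word; delta maps
    (state, letter) to a set of successor states."""
    states = {source}
    for letter in word:
        states = {t for q in states for t in delta.get((q, letter), ())}
    return states

def find_counter(delta, states, alphabet, max_len=2, max_m=4):
    """Search bounded-length words for s, u, m with u^m in L_{s,s}
    but u not in L_{s,s}; returns None if no counter is found."""
    for n in range(1, max_len + 1):
        for u in product(alphabet, repeat=n):
            for s in states:
                if s in runs(delta, s, u):
                    continue                 # u already loops at s
                for m in range(2, max_m + 1):
                    if s in runs(delta, s, m * u):
                        return s, u, m       # a counter: not counter-free
    return None

# The Gp-like automaton described above: reading {p} flips a <-> b.
p = frozenset({'p'})
delta = {('a', p): {'b'}, ('b', p): {'a'}}
counter = find_counter(delta, {'a', 'b'}, [p])
```

Here the search reports the counter from the text: the one-letter word $u=\{p\}$ with $m=2$, since $pp$ loops at a state but $p$ does not.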
We cannot assume that $\mathcal{A}_{\phi}$ is counter-free simply because
$\phi$ is equivalent to an LTL formula. We will now prove that $\mathcal{A}_{\phi}$
is counter-free. Although we have not defined a traditional acceptance
condition for $\mathcal{A}_{\phi}$, for the purposes of the next lemma
we will say that the automaton accepts a word $g_{\mathcal{V}}\left(\pi\right)$
iff $\mathcal{A}_{\phi}$ accepts $\left(\pi,i\right)$ for all $i\geq0$.
\begin{lemma}
\label{lem:powerset-is-counter-free}If $\phi$ is counter-free then
the automaton $\mathcal{A}_{\phi}$ is counter-free.\end{lemma}
\begin{proof}
Each state is a set of $\mathcal{A}LTL$ formul\ae{}{}; by taking the conjunction
of these formul\ae{}{} we get an $\mathcal{A}LTL$ formula $\psi$. Each automaton
$\mathcal{A}^{2}$ in $\psi$ comes from some automaton $\mathcal{A}^{1}$ in
$\phi$, and $\mathcal{A}^{1}$ differs from $\mathcal{A}^{2}$ only in the
initial states. Since $\mathcal{A}^{1}$ is counter-free we see that $\mathcal{A}^{2}$
is counter-free. Since each automaton in $\psi$ is counter-free we
can find an equivalent LTL formula, and so $\psi$ is equivalent to
some LTL formula $\psi'$.
If $\mathcal{A}_{\phi}$ is not counter-free then there exists a positive
integer $m$, state $s_{\phi}\in S_{\phi}$ and word $u$ in $\Sigma^{*}$
such that $u^{m}\in L_{s_{\phi},s_{\phi}}$ and $u\notin L_{s_{\phi},s_{\phi}}$.
Since the states are non-contradictory we know that $\mathcal{A}_{\phi}^{s_{\phi}}$
accepts some word $w$. For any state $t_{\phi}\ne s_{\phi}$ there exists some
formula $\theta$ such that $\neg\theta\in s_{\phi}$ and $\theta\in t_{\phi}$,
or vice versa. As such $\mathcal{A}_{\phi}^{t_{\phi}}$ does not accept
the word $w$. Since $u\notin L_{s_{\phi},s_{\phi}}$ and $u^{m}\in L_{s_{\phi},s_{\phi}}$
we see that $\mathcal{A}_{\phi}^{s_{\phi}}$ does not accept $u\cdot w$
but it does accept $u^{m}\cdot w$. By induction we discover that
for all non-negative $i$ the automaton $\mathcal{A}_{\phi}^{s_{\phi}}$
does not accept $u^{im+1}\cdot w$ but it does accept $u^{im}\cdot w$.
We see that any automaton that accepts this language must have a counter,
yet $\mathcal{A}_{\phi}^{s_{\phi}}$ is equivalent to an LTL formula and
so the language must be accepted by some counter-free automaton. By
contradiction we know that $\mathcal{A}_{\phi}$ is counter-free.
\end{proof}
\section{Translation into CTL{*}}
\label{sec:ctlstar}\foreignlanguage{english}{\label{sec:Translation-RoCTL*->CTL*.}}We
now present a translation from RoCTL{*} into CTL{*}. Note that $\triangle\phi$
indicates that $\phi$ holds on the current path or a deviation. As
a convenience we use a pseudo-operator ${\bf \Lambda}$ which indicates that
$\phi$ holds on a deviation. In \prettyref{sub:ALTL-to-automata}
we presented a translation from $\mathcal{A}LTL$ into an automaton $\mathcal{A}_{\phi}$;
in \prettyref{sub:devi} we will show how to construct an automaton
$\mathcal{A}_{{\bf \Lambda}\phi}$ which accepts iff $\mathcal{A}_{\phi}$ would accept
on a deviation from the current path, and then translate $\triangle\phi$
into $\phi\vee\mathcal{A}_{{\bf \Lambda}\phi}$. In \prettyref{sub:RoCTL*-to-ALTL-and-CTL*}
we combine these translations to provide a translation of RoCTL{*}
into $\mathcal{A}LTL$ and then into CTL{*}.
\subsection{$\mathcal{A}_{\phi}$ to $\mathcal{A}_{{\bf \Lambda}\phi}$}
\label{sub:devi}
\input{sub,devi.tex}
\input{sub,devi,proof.tex}
\begin{lemma}
The automaton $\mathcal{A}_{{\bf \Lambda}\phi}$ is counter-free.
\begin{proof}
Recall that a counter-free automaton is an automaton such that for
all states $s\in S$ and words $u$ in $\Sigma^{*}$, if $u^{m}\in L_{s,s}$
then $u\in L_{s,s}$.
If $s=s_{F}$ then every word $u$ is in $L_{s,s}$. If $s\ne s_{F}$ then
every path from $s$ to $s$ in $\mathcal{A}_{{\bf \Lambda}\phi}$ is also a path
from $s$ to $s$ in $\mathcal{A}_{\phi}$, and $\mathcal{A}_{\phi}$ is counter-free.\end{proof}
\end{lemma}
\subsection{RoCTL{*} to $\mathcal{A}LTL$ and CTL{*}}
\label{sub:RoCTL*-to-ALTL-and-CTL*}
\global\long\def\varrho{\varrho}
Here we define a translation $\varrho$ from RoCTL{*} into $\mathcal{A}LTL$. It is well known that we can express a CTL{*} formula as an LTL formula
over a path, where that path includes state formul\ae{}{} as atoms; this is commonly used in model checking, see for example \citep{ModelChecking,318620,clarke1986automatic}.
This translation likewise replaces state formul\ae{}{} with atoms.
It uses the standard translation of the $O$ operator found in \citep{FrDaRe07book},
and the $f_{\triangle}$ translation from \prettyref{def:fprone}. The
translation $\varrho$ is defined recursively as follows:
\begin{eqnarray*}
\varrho(\phi\wedge\psi) & = & \varrho\left(\phi\right)\wedge\varrho\left(\psi\right)\\
\varrho(\neg\phi) & = & \neg\varrho(\phi)\\
\varrho(A\phi) & = & p_{A\varrho\left(\phi\right)}\\
\varrho(O\phi) & = & p_{A\left(NG\neg{\bf v}\rightarrow\varrho\left(\phi\right)\right)}\\
\varrho(\blacktriangle\phi) & = & \neg f_{\triangle}\left(\neg\varrho\left(\phi\right)\right)\\
\varrho(N\phi) & = & N\varrho(\phi)\\
\varrho(\phi U\psi) & = & \varrho(\phi)U\varrho(\psi)
\end{eqnarray*}
\begin{defi}
\label{def:fprone}For any $\mathcal{A}LTL$ formula $\phi$, we define $f_{\triangle}\left(\phi\right)$
to be $\phi\vee\mathcal{A}_{{\bf \Lambda}\phi}$.\end{defi}
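The recursion for $\varrho$ can be read as a straightforward structural traversal. The following Python sketch shows one possible implementation over tuple-encoded formul\ae{}{}; the encoding, the operator names, and the stub standing in for the automaton construction $\mathcal{A}_{{\bf \Lambda}\phi}$ are our own illustrative assumptions.

```python
def f_prone(phi):
    # Stub for f_triangle(phi) = phi v A_{Lambda phi}; constructing the
    # automaton A_{Lambda phi} is the subject of the previous subsection.
    return ('or', phi, ('automaton_Lambda', phi))

def rho(f):
    """Recursive translation of a tuple-encoded RoCTL* formula into ALTL.
    State formulae A psi become fresh atoms p_{A psi}."""
    op = f[0]
    if op == 'atom':
        return f
    if op == 'and':
        return ('and', rho(f[1]), rho(f[2]))
    if op == 'not':
        return ('not', rho(f[1]))
    if op == 'A':                       # rho(A phi) = p_{A rho(phi)}
        return ('atom', ('A', rho(f[1])))
    if op == 'O':                       # rho(O phi) = p_{A(NG~v -> rho(phi))}
        failure_free = ('N', ('G', ('not', ('atom', 'v'))))
        return ('atom', ('A', ('imp', failure_free, rho(f[1]))))
    if op == 'Robustly':                # rho(Robustly phi) = ~f_triangle(~rho(phi))
        return ('not', f_prone(('not', rho(f[1]))))
    if op == 'N':
        return ('N', rho(f[1]))
    if op == 'U':
        return ('U', rho(f[1]), rho(f[2]))
    raise ValueError('unknown operator: %r' % (op,))
```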
\begin{theorem}
\label{thm:ALTL-truth}The translation $\varrho$ of RoCTL{*} into $\mathcal{A}LTL$
is truth-preserving if the atoms of the form $p_{A\psi}$ are assumed
to hold precisely at those worlds where $A\psi$ holds.\end{theorem}
\begin{proof}
It is easy to see from \prettyref{lem:devi-correct} that $\sigma\vDash f_{\triangle}\left(\phi\right)$
iff $\sigma\vDash\triangle\phi$. It is clear that $\sigma\vDash O\phi$
iff $\sigma\vDash A\left(NG\neg{\bf v}\rightarrow\phi\right)$, as $NG\neg{\bf v}$ is
satisfied precisely on the failure-free paths; this was proven more
formally in \citep{FrDaRe07book,MCD10}. From these facts it is easy
to see that $\varrho$ is truth-preserving.\end{proof}
\begin{lemma}
\label{lem:fprone-1exp}The complexity of $f_{\triangle}\left(\phi\right)$
is singly exponential in $\left|\phi\right|$.\end{lemma}
\begin{proof}
We see from \prettyref{def:set-of-states-Sphi} that the translation
of $\phi$ into $\mathcal{A}_{\phi}$ results in an automaton that has
a number of states singly exponential in $\left|\phi\right|$. The
automaton $\mathcal{A}_{{\bf \Lambda}\phi}$ has exactly one more state than the
automata $\mathcal{A}_{\phi}$, and so the number of states in $\mathcal{A}_{{\bf \Lambda}\phi}$
is also singly exponential in $\left|\phi\right|$. From \prettyref{def:length-ATL},
the length of the $\mathcal{A}LTL$ formula $\mathcal{A}_{{\bf \Lambda}\phi}$ is the number
of states in $\mathcal{A}_{{\bf \Lambda}\phi}$, and so $\left|\mathcal{A}_{{\bf \Lambda}\phi}\right|$
is singly exponential in $\left|\phi\right|$. As $f_{\triangle}\left(\phi\right)=\mathcal{A}_{{\bf \Lambda}\phi}\vee\phi$
we see that $\left|f_{\triangle}\left(\phi\right)\right|^{\star}$ is
singly exponential in $\left|\phi\right|$.\end{proof}
\begin{corollary}
The translation into $\mathcal{A}LTL$ is at most $i$-exponential in length,
for formul\ae{}{} with at most $i$ nested $\blacktriangle$ operators.
\global\long\def\mathbf{RC}{\mathbf{RC}}
\end{corollary}
\begin{defi}
We define a translation $\mathbf{RC}$ from RoCTL{*} into CTL{*} such that
for each RoCTL{*} formula $\phi$ we let $\mathbf{RC}\left(\phi\right)$ be
the $\mathcal{A}LTL$ formula $\varrho\left(\phi\right)$ with each atom of the
form $p_{A\psi}$ replaced with $A\psi$, and each automaton in $\varrho\left(\phi\right)$
replaced with the translation into an equivalent LTL formula referenced
in \prettyref{cor:Translating-a-counter-free}.
\end{defi}
The following lemma shows that any truth-preserving translation can be made satisfiability-preserving as well.
\begin{lemma}
\label{lem:Truth->Sat}If $\tau$ is a truth-preserving
translation from RoCTL{*} to CTL{*}, then $\Gamma$ is
both truth- and satisfiability-preserving, where $\Gamma\left(\phi\right)\equiv\tau\left(\phi\right)\wedge AGEN\neg{\bf v}$.\end{lemma}
\begin{proof}
Consider some RoCTL-structure $M$. Since $\func{sp}\left(w\right)$ is
non-empty for any world $w$ of $M$, there exists some fullpath $\sigma\in\func{ap}\left(w\right)$
such that $M,\sigma\vDash N\neg{\bf v}$. Hence $M,w\vDash EN\neg{\bf v}$.
Since this is true for any arbitrary $w$ we also see that $M,w\vDash AGEN\neg{\bf v}$.
Thus for all fullpaths $\pi$ we have $M,\pi\vDash\tau\left(\phi\right)\iff M,\pi\vDash\Gamma\left(\phi\right)$,
and so $\Gamma$ is truth-preserving.
If $\phi$ is satisfiable we see that there exists a RoCTL-structure
$M$ and fullpath $\sigma$ through $M$ such that $M,\sigma\vDash\phi$.
Hence $M,\sigma\vDash\tau\left(\phi\right)$, and as before $M,\sigma\vDash\Gamma\left(\phi\right)$.
Thus $\Gamma\left(\phi\right)$ is satisfiable.
Say $\Gamma\left(\phi\right)$ is satisfiable in CTL{*}. Then there
exists some CTL-structure $M$ and fullpath $\sigma$ through $M$
such that $M,\sigma\vDash\Gamma\left(\phi\right)$. We can assume
without loss of generality that all worlds in $M$ are reachable from
$\sigma_{0}$, and so for every world $w$ in $M$ we have $M,w\vDash EN\neg{\bf v}$.
Thus for every world $w$ we can pick a fullpath $\pi$ starting
at $w$ such that $\pi\vDash GN\neg{\bf v}$, and so $\func{sp}\left(w\right)$
is non-empty. By definition $M$ is a RoCTL-structure, and as $M,\sigma\vDash\Gamma\left(\phi\right)$
we have $M,\sigma\vDash\tau\left(\phi\right)$. Finally, $M,\sigma\vDash\phi$,
and so $\phi$ is satisfiable in RoCTL{*}.\end{proof}
\begin{theorem}
\label{thm:RoCTL*->CTL*}The translation $\mathbf{RC}$ into CTL{*} is truth-preserving.
\end{theorem}
As the RoCTL-structures are precisely those structures where $\func{sp}\left(w\right)$
is non-empty for each world $w$ (see \prettyref{lem:Truth->Sat} for more detail),
we have the following corollary.
\begin{corollary}
The translation $\mathbf{RC}_{SAT}$ is satisfaction preserving (and truth
preserving) where $\mathbf{RC}_{SAT}\left(\phi\right)\equiv\mathbf{RC}\left(\phi\right)\wedge AGEN\neg{\bf v}$.\end{corollary}
\begin{theorem}
The translation $\mathbf{RC}$ is at most $\left(i+3\right)$-exponential
in the length, for formul\ae{}{} with at most $i$ nested $\blacktriangle$ operators.\end{theorem}
\begin{proof}
From \prettyref{lem:fprone-1exp}, we see that there is at most a
singly exponential blowup per $\blacktriangle$ operator. Once we have translated
the whole formula into an $\mathcal{A}LTL$ formula $\psi$, we know from \prettyref{cor:counter-free-FSA-to-LTL-3EXP}
that we can translate the automata into LTL formul\ae{}{} with a 3-exponential
blowup.
The automata are translated into LTL recursively, but the blowup remains
3-exponential. Say $\phi$ is the formula being translated. We see
that the number of states in each automaton is no more than the complexity
$\left|\varrho\left(\phi\right)\right|^{\star}$ of $\varrho\left(\phi\right)$.
Thus with each recursion we multiply the length of the translated
formula by a number 3-exponential in $\left|\varrho\left(\phi\right)\right|^{\star}$
which together still results in a 3-exponential blowup (note, for
example, that the formula $\left(2^{n}\right)^{i}$ is singly exponential
in $n$, not $i$-exponential in $n$).
\end{proof}
\section{Optimality of reduction into CTL{*}}
\label{sec:optimal}\label{sec:Succinctness}\input{Body_Succinctness__.tex}
\subsection{\label{sec:Easily-Translatable-Fragments}Easily Translatable Fragments
of RoCTL{*}}
Although the translation is non-elementary in the worst case we note
that real-world formulas often fall into an easily translatable
fragment of RoCTL{*}. The most common use for nested Robustly operators
is to directly chain $n$ Robustly operators together to express the
statement ``Even with $n$ additional failures''. We also note that
when describing the behaviour of a system, the specification of the
system takes the form of a number of clauses each of which are reasonably
short, see Example \ref{exa:coordinated-attack}. We will now show
that such formulas are easy to translate into CTL{*}, and that it
is easy to use CTL{*} decision procedures on such formulas.
It is easy to represent the statement ``${\bf v}$ occurs at most $n$
times in future worlds'' in LTL; we call this statement $\gamma^{n}$.
So for example, $\gamma^{0}\equiv NG\neg{\bf v}$, $\gamma^{1}\equiv N\left(\neg{\bf v} UNG\neg{\bf v}\right)$,
and so forth. Note that $\left|\gamma^{n}\right|\in\mathcal{O}\left(n\right)$.
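The formul\ae{}{} $\gamma^{n}$ satisfy the recursion $\gamma^{0}\equiv NG\neg{\bf v}$ and $\gamma^{n}\equiv N\left(\neg{\bf v}\, U\,\gamma^{n-1}\right)$ for $n\geq1$, which makes the $\mathcal{O}\left(n\right)$ bound on $\left|\gamma^{n}\right|$ immediate. A small Python sketch generating these formul\ae{}{} (the concrete string syntax is our own):

```python
def gamma(n):
    """The LTL formula gamma^n: 'v occurs at most n times in strictly
    future worlds'. gamma^0 = N G ~v, gamma^n = N(~v U gamma^(n-1))."""
    if n == 0:
        return "N G ~v"
    return "N (~v U {})".format(gamma(n - 1))
```

Each recursive step adds a constant number of symbols, so $\left|\gamma^{n}\right|$ grows linearly in $n$.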
We see that translating ${\bf \Lambda}^{n}\phi$ is no more complex than translating
${\bf \Lambda}\phi$; we can translate ${\bf \Lambda}^{n}\phi$ the same way as we
translated ${\bf \Lambda}\phi$ as above, but we replace $\psi_{s_{i}}$ with
\begin{eqnarray*}
E\left(\bigwedge s_{i}\wedge N\gamma^{n-1}\right)\mbox{ .}
\end{eqnarray*}
We see that $\triangle\phi$ means $\phi$ holds on the original fullpath
or a deviation, $\triangle\triangle\phi$ means that $\phi$ holds on the
original path or a deviation, or a deviation from a deviation. In
general $\triangle^{n}\phi$ means that $\phi$ holds on some path at
most $n$ deviations from the current path. Thus:
\begin{eqnarray*}
\triangle^{n}\phi & \equiv & \phi\vee{\bf \Lambda}\phi\vee\cdots\vee{\bf \Lambda}^{n}\phi\mbox{ .}
\end{eqnarray*}
Thus we see that the length of the translation of $\triangle^{n}\phi$
is linear in $n$, and thus has no overall effect on the order of
complexity. Note that $\blacktriangle^{n}\phi\equiv\neg\triangle^{n}\neg\phi$,
so $\blacktriangle^{n}$ is also no harder to translate than a single $\blacktriangle$
operator. This is significant because one of the motivations of
RoCTL{*} was to be able to express the statements of the form ``If
less than $n$ additional failures occur then $\phi$''. The related
statement ``If $n$ failures occur then $\phi$'' is even easier
to translate into CTL{*} as $O\blacktriangle^{n}\phi\equiv A\left(\gamma^{n}\rightarrow\phi\right)$.
Let the $\blacktriangle$-complexity of a formula $\phi$ be defined as follows:
\begin{eqnarray*}
\left|\phi\right|_{\blacktriangle} & = & \max_{\blacktriangle\psi\leq\phi}\left|\psi\right|\mbox{ .}
\end{eqnarray*}
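Over a tuple-encoded syntax tree (such an encoding is our own assumption; $\blacktriangle$ is written \texttt{Robustly} below), the $\blacktriangle$-complexity is a simple recursive maximum, as this Python sketch illustrates:

```python
def length(f):
    """|phi|: the number of operator and atom nodes in the formula."""
    if f[0] == 'atom':
        return 1
    return 1 + sum(length(g) for g in f[1:])

def black_complexity(f):
    """|phi| restricted to Robustly: the maximum |psi| over
    subformulae (Robustly psi) of phi."""
    if f[0] == 'atom':
        return 0
    here = length(f[1]) if f[0] == 'Robustly' else 0
    return max([here] + [black_complexity(g) for g in f[1:]])
```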
It is clear that there exists some function $f$ such that for every
RoCTL{*} formula $\phi$ of length $n$ the translation of $\phi$
into CTL{*} is of length $f\left(n\right)$ or less. As the translation
of $\triangle$ does not look inside state formul\ae{}{} it is clear that
$\left|\mathbf{RC}\left(\phi\right)\right|\in\mathcal{O}\left(f\left(\left|\phi\right|_{\blacktriangle}\right)\left|\phi\right|\right)$.
In other words, for any fragment of RoCTL{*} where the length $\left|\phi\right|_{\blacktriangle}$
of path-formul\ae{}{} contained within a $\blacktriangle$ operator is bounded
there is a linear translation from this fragment to CTL{*}. As a result
the complexity properties of RoCTL{*} formul\ae{}{} with bounded
$\left|\phi\right|_{\blacktriangle}$ are similar to CTL{*}; we can decide
the satisfiability problem in doubly exponential time and the model
checking problem in time singly exponential in the length of the formula
and linear in the size of the model, see \citep{ModelChecking} for
an example of a model checker for CTL{*}.
We can refine both above results by noting that the construction of
$\mathcal{A}_{{\bf \Lambda}\phi}$ does not look inside state formul\ae{}{}. Thus
a fragment of RoCTL{*} which has a bounded number of $\blacktriangle^{n}$
nested within a path-formula (unbroken by $A$ or $O$) has an elementary
translation into CTL{*}.
In \citep{two_seconds}, we discussed a fragment of RoCTL{*} called
State-RoCTL\@. This fragment could naturally express many interesting
robustness properties, but had a linear satisfaction preserving translation
into CTL\@. The truth-preserving translation of State-RoCTL into
RoCTL{*} was technically exponential, but had a linear number of unique
sub-formulas and so has a natural and efficient compressed format;
for example, the truth-preserving translation provided a polynomial-time
model checking procedure.
\section{Conclusion}
\label{sec:concl}\input{sprobConcl.tex}
\bibliographystyle{ACM-Reference-Format-Journals}
\subsection{\label{sec:Expressive-Equivalences}Expressive Equivalences}
While \thispaper{} focuses on temporal logic, there are many ways
of defining the languages expressible by LTL\@. This is very useful,
as it provides us with many ways to model the expressivity of temporal
logics. We are particularly interested in the expressive equivalence
of LTL with counter-free B\"uchi{} automata.
In \prettyref{sub:First-Order-Definable-Lanuages}, we will outline
some important results relating to expressive equivalences, focusing
on those presented in \cite{DiGa08Thomas}. There are a number of
reasons we present these here. Firstly, by showing the many results
that \cite{DiGa08Thomas} builds upon we hope to give the reader a
feel for the complexity of attempting to follow the approach of \cite{DiGa08Thomas}
in proving that LTL and counter-free B\"uchi{} automata have the same
expressive power. Secondly, since \cite{DiGa08Thomas} uses many results,
having a map of those results and where to find them in the paper
can be of assistance in following the work of \cite{DiGa08Thomas}.
In \prettyref{sub:Finite-DFAs-to}, we outline the proof of \cite{Wilke99classifyingdiscrete}
that any language recognised by a finite counter-free DFA can be represented
in LTL\@. We note that this result is much weaker than the theorem
of \cite{DiGa08Thomas}. However, this result is simple and constructive.
This allows us to get an idea as to what the formul\ae{}{} translated
from DFAs might look like, as well as an indication of the length
of the translated formul\ae{}{}.
\subsection{\label{sub:First-Order-Definable-Lanuages}First-Order Definable
Languages}
We here present a summary of some significant results in first order
definable languages. We focus on the survey paper of \cite{DiGa08Thomas},
which provides a very powerful equivalence theorem.
\begin{theorem}
\label{thm:LTL=00003Dcounter-free}For any language $L$, the following
statements are all equivalent \cite{DiGa08Thomas}.\end{theorem}
\begin{enumerate}
\item $L$ is first-order definable
\item $L$ is star-free
\item $L$ is aperiodic
\item $L$ is definable in LTL
\item $L$ is first-order definable with at most 3 names for variables
\item $L$ is accepted by a counter-free B\"uchi{} automaton
\item $L$ is accepted by some aperiodic automaton
\item $L$ is accepted by some very weak automaton
\end{enumerate}
Below we summarise the results that provide the basis for this theorem.
Given that the proofs are numerous and frequently complex we will
not reproduce them here. Further, since we are only interested in
counter-free B\"uchi{} automata and LTL we do not define the other
terms used in the theorem. Readers are invited to read \cite{DiGa08Thomas}
if they are interested in this detail.
\begin{figure}
\begin{centering}
\input{bits/FirstOrder.pdftex_t}
\par\end{centering}
\caption{\label{fig:Visual-Summary-of}Visual Summary of Equivalence Results
in \prettyref{thm:LTL=00003Dcounter-free}}
\end{figure}
{[}1{]}$\implies${[}4{]}: This is in essence Kamp's Theorem \cite{phd-kamp}.
Note that Kamp focuses on translating into a temporal logic with past-time
operators; however, this can be translated back into LTL \cite{567462,gabbay1994temporal}.
{[}1{]}$\iff${[}2{]}: \cite{DiGa08Thomas} cites \cite{Perrin:1986:FLS:9118.9126},
and presents a proof in their Section 4, as well as an alternate
proof of {[}1{]}$\implies${[}2{]} in their section 10.2.
{[}2{]}$\iff${[}3{]}: \cite{Perrin:1984:RRA:645716.665310}, and
\cite[Section 6]{DiGa08Thomas}
{[}3{]}$\implies${[}4{]}: This is one of the more complex proofs
of this paper \cite[Section 8]{DiGa08Thomas}. It serves a similar
purpose to Kamp's theorem.
{[}3{]}$\implies${[}6{]}$\implies${[}7{]}$\implies${[}3{]}: This
is their Proposition 34, \cite[p27]{DiGa08Thomas}. This builds on
a number of results discussed in the paper. For example, {[}6{]}$\implies${[}7{]}
is trivial since any counter-free B\"uchi{} automaton is aperiodic,
which is Lemma 29 of \cite[p25]{DiGa08Thomas}.
{[}4{]}$\implies${[}8{]}$\implies${[}7{]}: This is mentioned at
the top of page 4. {[}4{]}$\implies${[}8{]} is Proposition 41 of
\cite[p35]{DiGa08Thomas}. The proof takes LTL formul\ae{}{} in positive
normal form and provides a simple construction of the corresponding
weak alternating automata. {[}8{]}$\implies${[}7{]} does not appear
to be explicitly stated in the text, but a translation into B\"uchi{}
automata is given, and in the proof of Proposition 43 \cite[p36]{DiGa08Thomas}
it is mentioned that the automaton has an aperiodic transition monoid,
and so by definition is an aperiodic automaton.
{[}4{]}$\implies${[}5{]}: \cite{DiGa08Thomas} describes this as
trivial and presents a simple proof (Section 7 p12--13).
{[}5{]}$\implies${[}1{]}: Obvious as {[}5{]} is a restriction of
{[}1{]}.
{[}8{]}$\implies${[}3{]}: Proposition 43 of \cite[p36]{DiGa08Thomas}.
We now present a brief outline of the path from counter-free automata
to LTL, and where they are found in \cite{DiGa08Thomas}. First it
is shown that counter-free automata are aperiodic {[}p25, lemma 29{]}.
Translating aperiodic automata into aperiodic monoids is discussed
{[}p28{]}. The most substantial part of the proof is the translation
from aperiodic monoids (or homomorphisms) into LTL. The set of words and the
concatenation operator can be considered an infinite monoid {[}p13{]}.
We can choose a homomorphism from that infinite monoid to a finite
monoid. They present a factorisation of the words, and we can factorise
words of a language to produce a simplified language. The translation
into LTL has two major steps, translating the simplified language
into LTL, and showing that the existence of an LTL formula for the
simplified language demonstrates the existence of an LTL formula for
the original language.
Translating LTL to counter-free B\"uchi{} automata would seem significantly
simpler. The obvious powerset construction is counter-free, though
it has a Streett acceptance condition rather than B\"uchi{}. Note
that \cite{DiGa08Thomas} is used in \thispaper{} only for an existence
result, and so the details are not important to \thispaper{}; following
\prettyref{fig:Visual-Summary-of} counter-clockwise from {[}4{]}
to {[}6{]} is sufficient, even though this is presumably not the cleanest
or simplest route possible.
\subsection{\label{sub:Finite-DFAs-to}Finite Counter-free DFAs to LTL}
\selectlanguage{english}%
\global\long\def\mathcal{A}{\mathcal{A}}
\global\long\def\alpha{\alpha}
\selectlanguage{british}%
We here outline the proof of \cite{Wilke99classifyingdiscrete}, showing
how we may translate a counter-free DFA into an LTL formula.
For any automaton (or pre-automaton) $\mathcal{A}$, word $u$ and state
$q$, we use $u^{\mathcal{A}}\left(q\right)$ to represent the
state of the automaton after starting at state $q$ and reading the
word $u$. For any function $\alpha\colon\: Q\rightarrow Q$, we let
the language $L_{\alpha}^{\mathcal{A}}$ be the set of words $u$ such
that $u^{\mathcal{A}}=\alpha$. For any set $S$, we let $u^{\mathcal{A}}\left[S\right]=\left\{ u^{\mathcal{A}}(q)\colon\: q\in S\right\} $.
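These definitions correspond to ordinary DFA runs; as a concrete (illustrative) Python rendering, with the transition function given as a dictionary:

```python
def run(delta, q, u):
    """u^A(q): the state reached from q after reading the word u."""
    for a in u:
        q = delta[(q, a)]
    return q

def image(delta, S, u):
    """u^A[S] = { u^A(q) : q in S }."""
    return {run(delta, q, u) for q in S}
```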
\begin{theorem}
The language recognised by any counter-free DFA $\mathcal{A}$ can be expressed
in LTL\@. \cite{Wilke99classifyingdiscrete}
\end{theorem}
Due to the importance of this result to \prettyref{sub:RoCTL*-to-ALTL-and-CTL*},
we will briefly outline their proof. They prove that for every function
$\alpha$ the language $L_{\alpha}^{\mathcal{A}}$ can be expressed in LTL\@.
It is then clear that the language recognised by $\mathcal{A}$ can be
expressed by the LTL formula:
\begin{align*}
\bigvee_{\alpha\text{ s.t. }\alpha[Q_{0}]\cap F\ne\emptyset} & \textrm{LTL}\left(L_{\alpha}^{\mathcal{A}}\right),
\end{align*}
where $\textrm{LTL}\left(L_{\alpha}^{\mathcal{A}}\right)$ is the LTL
formula that defines the language $L_{\alpha}^{\mathcal{A}}$.
The proof that $L_{\alpha}^{\mathcal{A}}$ can be expressed in LTL works
by induction, either reducing the state space at the expense of increasing
the alphabet, or shrinking the alphabet without increasing the state
space.
They note that, since $\mathcal{A}$ is counter-free, if $u^{\mathcal{A}}\left[Q\right]=Q$
then $u^{\mathcal{A}}$ is the identity (that is $u^{\mathcal{A}}\left(q\right)=q$
for all $q\in Q$). Hence if $u^{\mathcal{A}}[Q]=Q$ for all $u$ then
it is trivial to express $L_{\alpha}^{\mathcal{A}}$ in LTL\@. Otherwise
there is some input symbol $b$ such that $b^{\mathcal{A}}[Q]$ is a strict
subset of $Q$.
They then define three languages based on $b$: $L_{0}$, the restriction
of $L_{\alpha}^{\mathcal{A}}$ where $b$ does not occur; $L_{1}$, the
restriction of $L_{\alpha}^{\mathcal{A}}$ where $b$ occurs precisely
once; and $L_{2}$, the restriction where $b$ occurs at least twice.
Let $B$ be the obvious restriction of $\mathcal{A}$ such that $b$ is
removed from the input language, and let $\tilde{L}_{\alpha}^{B}$
be $L_{\alpha}^{B}\cup\{\epsilon\}$. They also define $C$ such
that the language recognised by $C$ is similar to that of $\mathcal{A}$
except that the input symbols of $C$ are in essence words that end
in $b$, and so we can restrict the states of $C$ to be $b^{\mathcal{A}}[Q]$.
Recall that $b^{\mathcal{A}}[Q]$ is a strict subset of $Q$ and so we
have reduced the number of states. They define a function $h$ to
translate the words of $\mathcal{A}$ into words of $C$, and likewise
$h^{-1}$ translates the words of $C$ into words of $\mathcal{A}$. They
provide the following equalities:
\begin{align*}
L_{0} & =L_{\alpha}^{B},\quad L_{1}=\bigcup_{\alpha=\beta b^{\mathcal{A}}\beta'}\overset{L_{\beta,\beta'}}{\overbrace{\tilde{L}_{\beta}^{B}b\tilde{L}_{\beta'}^{B}}},\quad L_{2}=\bigcup\overset{L_{\beta,\gamma,\beta'}}{\overbrace{\tilde{L}_{\beta}^{B}bh^{-1}\left(L_{\gamma}^{C}\right)\tilde{L}_{\beta'}^{B}}}
\end{align*}
They let $\Gamma=\Sigma-\left\{ b\right\} $, and note that
\begin{align*}
L_{\beta,\beta'}=\tilde{L}_{\beta}^{B}b\Gamma^{*}\cap\Gamma^{*}b\tilde{L}_{\beta'}^{B} & \quad L_{\beta,\gamma,\beta'}=\Sigma^{*}b\tilde{L}_{\beta'}^{B}\cap\Gamma^{*}bh^{-1}\left(L_{\gamma}^{C}\right)\Gamma^{*}\cap\tilde{L}_{\beta}^{B}b\Sigma^{*}\mbox{ .}
\end{align*}
Since $B$ has a smaller input language, and $C$ has a smaller state
space, we can assume by way of induction that $L_{\alpha}^{B}$, $\tilde{L}_{\beta}^{B}$,
$\tilde{L}_{\beta'}^{B}$ and $L_{\gamma}^{C}$ can be expressed in
LTL\@. It follows that $L_{\alpha}^{\mathcal{A}}$ can be expressed
in LTL\@. The result then follows from induction.
\begin{corollary}
\label{cor:Translating-a-counter-free}Translating a counter-free
DFA into an LTL formula results in a formula of length at most $m2^{2^{\mathcal{O}\left(n\ln n\right)}}$
where $m$ is the size of the alphabet and $n$ is the number of states.
\cite{Wilke99classifyingdiscrete}
\end{corollary}
One minor note is that \cite{Wilke99classifyingdiscrete} uses stutter-free
operators so their $\left(\alpha U\beta\right)$ is equivalent to
our $N\left(\alpha U\beta\right)$; however, this is trivial to translate.
\ifnoroot{\bibliographystyle{alpha}
\section{Introduction }
We introduce the Robust Full Computation Tree Logic (RoCTL{*})
as
an extension of
the branching time temporal logic CTL{*} to represent issues relating
to robustness and reliability in systems. It does this by adding an
Obligatory operator and a Robustly operator. The Obligatory operator
specifies how the systems should ideally behave by quantifying over paths
in which no failures occur. The Robustly operator specifies that something
must be true on the current path and similar paths that ``deviate''
from the current path, having at most one more failure occurring.
This notation allows phrases such as ``even with $n$ additional
failures'' to be built up by chaining $n$ simple unary Robustly
operators together.
RoCTL{*} is a particular combination of
temporal and deontic logics allowing reasoning about
how requirements on behaviour
are progressed and change with time,
and the unfolding of actual events.
The RoCTL{*} Obligatory operator is similar to the Obligatory operator
in Standard Deontic Logic (SDL), although in RoCTL{*} the operator
quantifies over paths rather than worlds.
However, it is the Robustly operator
which gives RoCTL{*} many advantages
over a simple
combination of temporal logic and
deontic logic as
in \cite{vandertorre98temporal}.
SDL has many paradoxes and
some of these, such as the ``Gentle Murderer'' paradox
(``if you murder, you must murder gently''~\cite{Fo84}), spring from
the inadequacy of SDL to deal with obligations caused by acting
contrary to duty.
Contrary-to-Duty (CtD) obligations are important for modeling a robust
system, as it is often important to state that the system should achieve
some goal and also that, if it fails, then it should
act to mitigate or in some way recover
from the failure.
RoCTL{*} can represent CtD obligations by specifying
that the agent must ensure that the CtD obligation is met even if
a failure occurs.
SDL
is able to distinguish what ought to be true from what is true, but
is unable to specify obligations that come into force only when we
behave incorrectly.
Addition of temporal operators to deontic logic allows us to specify
correct responses to failures that have occurred in the past~\cite{vandertorre98temporal}.
However, this approach alone is not sufficient~\cite{vandertorre98temporal}
to represent obligations such as ``You must assist your neighbour,
and you must warn them iff you will not assist them''. In RoCTL{*}
these obligations can be represented if the obligation to warn your
neighbour is robust but the obligation to assist them is not.
\newcommand{\ignore}[1]{}
Other approaches to dealing with Contrary-to-Duty obligations exist.
Defeasible logic is often used~\cite{dblp:journals/fuin/mccarty94},
and logics of agency, such as STIT~\cite{be91}, can be useful as
they can allow obligations to be conditional on the agent's ability
to carry out the obligation.
A number of other extensions of temporal logics have been proposed
to deal with deontic or robustness issues~\cite{jan_broersen_designing_2004,w_long_quantification_2000,hansson94logic,huib_aldewereld_designing_2005,agerri_rodrigo_normative_2005}.
Each of these logics are substantially different from RoCTL{*}. Some
of these logics are designed specifically to deal with deadlines~\cite{jan_broersen_designing_2004,hansson94logic}.
The Agent Communication Language was formed by adding deontic and other
modal operators to CTL~\cite{agerri_rodrigo_normative_2005}; this
language does not explicitly deal with robustness or failures.
\citet{hansson94logic}
proposed an extension of CTL
to deal with reliability. However, as well as being intended to deal with deadlines,
their logic reasons about reliability using probabilities rather than
numbers of failures, and their paper does not contain any discussion
of the relationship of their logic to deontic logics. Like our embedding
into QCTL{*},
\citet{huib_aldewereld_designing_2005}
uses a Viol atom to represent failure. However, their logic also uses
probability instead of failure counts and is thus suited to a different
class of problems than RoCTL{*}. Another formalisation of robustness
is representing the robustness of Metric Temporal Logic (MTL) formulas
to perturbations in timings
\cite{bouyer-robust}.
None of these logics appear to have an operator that is substantially
similar to the Robustly operator of RoCTL{*}.
In the last few years there has been considerable interest in logics
for reasoning about systems that are robust to partial
non-compliance with the norms. One approach has been to define
{\em robustness}
in terms of the ability of a multi-agent system to deal with having
some subset of agents that are unwilling or unable to comply with
the norms \cite{HRW08,AHW10}. Like RoCTL{*} they consider socially
acceptable behaviours to be a subset of physically possible behaviours.
A logic that, like RoCTL{*}, can discuss numbers of faults was suggested by \cite{FNP10},
though this logic extended ATL instead of CTL{*} and defined fault-tolerance
in terms of numbers of winning strategies. More recently the Deontic
Computation Tree Logic (dCTL) was proposed \cite{castro2011dctl}.
Like RoCTL{*} the logic divides states into normal and abnormal states,
but avoids capturing the full expressivity of CTL{*} to allow the
model checking property to be polynomial like the simpler CTL logic.
There is a restriction of RoCTL{*} that can be easily translated into
CTL \cite{two_seconds}, allowing this restriction to be reasoned about
as efficiently as CTL; however, dCTL is more expressive than CTL \cite{castro2011dctl}.
Finally, a Propositional Deontic Logic was proposed by \cite{AKC12}
that divides events into allowable and non-allowable depending on
the current state.
Diagnosis problems in control theory~\cite{jmpc,avw} also deal
with failures of systems. Diagnosis is in some sense the dual of the
purpose of the RoCTL{*} logic, as diagnosis requires that failure
cause something (detection of the failure) whereas robustness involves
showing that failure will \emph{not} cause something.
This paper provides some examples of robust systems that can be effectively
represented in RoCTL{*}. It is easy to solve the coordinated attack
problem if our protocol is allowed to assume that only $n$ messages
will be lost. The logic may also be useful to represent the resilience
of some economy to temporary failures to acquire or send some resource.
For example, a remote mining colony may have interacting requirements
for communications, food, electricity
and fuel. RoCTL{*} may be more
suitable than Resource Logics (see for example~\cite{weerdt01resource})
for representing systems where a failure may cause a resource to become
temporarily unavailable.
This paper presents a simple example where
the only requirement is to provide a cat with food when it is hungry.
The Obligatory operator,
as well as some uses of the Robustly operator, are easy to translate
into CTL{*}~\cite{DBLP:conf/jelia/McCabe-Dansted08}
but a general
way to achieve a translation to CTL{*} is not obvious.
The first translation
in our paper
is of RoCTL{*} into the tree semantics
of Quantified CTL{*} (QCTL{*}). We note that a similar translation
can be made into a fragment of Hybrid temporal logic. Although QCTL{*}
is strictly more expressive than CTL{*}, the translation of RoCTL{*}
into QCTL{*} will be given for three reasons. Firstly the translation
into QCTL{*} is very simple and thus is well suited as an introduction
to reasoning with RoCTL{*}. Secondly, even this weak result is sufficient
to demonstrate that RoCTL{*} is decidable. Finally, the translation
into QCTL{*} is linear, while it will be shown that any translation
to CTL{*} must be non-elementary in the worst case.
We then give a translation of RoCTL{*} formulas into CTL{*}.
This results in a formula
that is satisfied on a model iff the original formula is satisfied
on the same model. This means that we can use all the CTL{*} model
checkers, decision procedures and so forth for RoCTL{*}.
Unfortunately,
the translation can be quite long.
We show that although all RoCTL{*} formul\ae{}{} can be translated
into CTL{*}, the length of the CTL{*} formula is not elementary in
the length of the RoCTL{*} formula. Hence some properties can be represented
much more succinctly in RoCTL{*} than CTL{*}.
This
translation requires roughly one extra exponential per nested robustly
operator. We will show that no translation can do better than this,
so although RoCTL{*} is no more expressive than CTL{*}, it is very succinct
in the sense that any translation of RoCTL{*} into either CTL{*} or
tree automata will result in a non-elementary blowup in the length
of the formula.
We can summarise the contributions of this paper as follows.
Firstly, it defines a new intuitive and expressive logic, RoCTL{*}, for specifying robustness in systems.
The logic seems to combine temporal and deontic notions in
a way that captures the important contrary-to-duty obligations
without the usual paradoxes.
Secondly, it provides a proof that the logic can be translated in a truth-preserving manner
into the existing CTL{*} logic.
Thirdly, it provides a proof that RoCTL{*} is non-elementarily more succinct than CTL{*}
for specifying some properties.
This paper extends results from the conference papers \cite{FrDaRe07book,DBLP:conf/time/McCabe-DanstedFRP09,Da11Cats}.
There is further discussion and more details in the thesis
\cite{MCD10}.
The structure of the paper is as follows.
RoCTL{*}
is introduced in the next section
before we
show that the new logic can be applied across
a wide variety of examples,
practical, theoretical and philosophical.
In section~\ref{sec:machinery}, we
revise a large collection of
existing machinery that
we will need in the subsequent
expressivity and succinctness proofs.
In section~\ref{sec:bisim}, we
show that RoCTL{*} is preserved under bisimulations;
this is needed for some unwinding proofs, and is also of independent interest.
In section~\ref{sec:qctl}, we
show the fairly straightforward translation
of RoCTL{*} into QCTL{*}.
Section~\ref{sec:altl} presents
some useful conversions between automata.
Section~\ref{sec:ctlstar} contains the
translation of RoCTL{*} into CTL{*}.
In section~\ref{sec:optimal}, we
show that this translation is optimal.
\section{Introduction} \label{introduction}
In non-central
heavy ion collision (HIC) experiments at the LHC at CERN and at RHIC at BNL, it is
believed that a very strong magnetic field is created in the direction perpendicular to the reaction plane, due to the
spectator particles that do not participate in the collisions. The experiments conducted
by the PHENIX Collaboration~\cite{phenixphoton} showed a direct-photon anisotropy
which has posed a serious challenge to present theoretical models. It is
conjectured that this excess elliptic flow may be due to the excess
photons produced by the decay $\rho\rightarrow\,\pi(\eta)\,\gamma$, whose
branching ratio increases in the presence of a magnetic field near the
critical value at which the $\rho$ condensate is found.
The estimated strength of this magnetic field depends on collision energy and
impact parameter between the colliding nuclei and is about several times the
pion mass squared, \textit{i.e.}, \( eB \sim 15m^{2}_{\pi}\) at LHC in
CERN~\cite{VSkokov2009}. Also, a class of neutron stars called magnetars
exhibits~\cite{VdelaIncera,Bandyopadhyay:1997kh,Chakrabarty:1997ef}
a magnetic field of $10^{18}-10^{20}$ Gauss at the inner core and
$10^{12}-10^{13}$ Gauss at the surface. These observations motivate
the study of the properties of a hot magnetised medium
using both phenomenology and quantum field theory.
The presence of a strong magnetic field in HIC influences the QCD phase transitions~\cite{andersenphase} and
particle productions, especially the production of the soft photon~\cite{Basar:2012bp} and
dileptons~\cite{sadooghidilepton,aritraemspectrum,aritradilepton,Tuchin:2013apa,Tuchin:2012mf,Tuchin:2013bda},
which act as a probe of the medium.
Apart from these, there is a large class of other phenomena that take place
in the presence of a background magnetic field, such as chiral magnetic effects
due to axial anomalies~\cite{dmitrikharzeevCME,FukushimaCME,CMEaxial},
magnetic catalysis~\cite{miranskydimred,igormagneticcatalysis},
inverse magnetic catalysis~\cite{inversemagneticcatalysis,Farias:2014eca} and
superconductivity of the vacuum~\cite{vacuumsuperconductivity}.
It further influences the thermal chiral and deconfining phase
transition~\cite{chiralconfinedeconfine}, change of topological charge~\cite{kharzeevtopo},
anomalous transports~\cite{anomaloustransport}, refractive indices~\cite{Hattori:2012je,Hattori:2012ny} and
screening mass~\cite{mesonrefractive}, decay constant~\cite{mesondecay} of
neutral mesons etc. In addition, efforts were also made to study the bulk properties of
a Fermi gas~\cite{Strickland:2012vu}, low lying hadrons~\cite{Andersen:2012zc} and
strongly coupled systems~\cite{Mamo:2013efa},
collective excitations in a magnetised QED medium~\cite{Sadooghi:2015hha} using the Ritus method
and a QCD medium~\cite{elmforsfermion} using Furry's picture,
neutrino properties~\cite{Bhattacharya:2002qf,Bhattacharya:2002aj} and the field theory of the Faraday effect~\cite{Ganguly:1999ts,DOlivo:2002omk}.
The magnetic field created in HIC lasts for a very short time ($\sim$ a few $fm/c$). The strength of the field decays
rapidly with time after $\tau\sim 1-2$ $fm/c$. However, medium effects like the electric conductivity can delay
the decay, and by the time the deconfined quarks and gluons equilibrate into a QGP medium, the magnetic field strength
has become sufficiently weak. At that time the relevant energy scales of the system
are ordered as $q_{f}B < m^{2}_{\pi}\ll T^{2}$. In this low field limit the properties of the
deconfined medium are also affected, so it becomes important to treat the weak field limit separately.
The fermion propagator in the presence of a uniform background magnetic
field was first derived by Schwinger~\cite{schwinger1951}. Using this, the one-loop fermion self-energy and the vacuum polarisation were calculated as
double parameter integrals in~\cite{tsaifermion} and~\cite{tsaivacuumpol}, respectively. The weak field expansion of this propagator was calculated
order by order in powers of $q_{f}B$ in~\cite{chyiweak}. Recently, the pion self-energy and its dispersion properties have been studied
at zero temperature~\cite{pradiproypionself} in the weak field approximation, and using the full propagator at finite temperature~\cite{Mukherjee:2017dls}. Also, a detailed study of
the spectral properties of $\rho$ mesons has been performed in the presence of a magnetic field, both at zero~\cite{arghyarho,Bandyopadhyay:2016cpf} and at non-zero temperature~\cite{arghyarhothermal}.
For a hot and dense medium (\textit{e.g.}, a QED or QCD plasma), it is well known that bare perturbation theory
breaks down due to infrared divergences.
A reorganisation of the perturbation theory has been
carried out by performing the expansion around a system of massive quasiparticles~\cite{Andersen:1999fw}, where the mass is generated through
thermal fluctuations. This requires a resummation of a certain class of diagrams, known as hard thermal loop (HTL) resummation~\cite{brateennucl337},
when the loop momenta are of the order of the temperature. This reorganised perturbation theory,
known as HTL perturbation theory (HTLpt), leads to
gauge independent results for various physical quantities~\cite{braatendilepton,HTLgluondamping,Haque:2011iz,Haque:2011vt,Haque:2010rb,Andersen:2002ey,Andersen:2003zk,Haque:2012my,Haque:2013qta,Andersen:2009tc,Andersen:2010ct,Andersen:2010wu,Andersen:2011sf,Andersen:2011ug,Haque:2013sja,Haque:2014rua,Mustafa:2004hf}. Within this one-loop HTLpt, the thermomagnetic
correction to the quark self-energy~\cite{ayalafermionself}, quark-gluon three point~\cite{ayalafermionself} function at zero chemical potential and four point~\cite{Haque:2017nxq} function at finite chemical potential in weak field limit have been computed. The fermion self-energy
has also been extended to the case of non-zero chemical potential and the pressure of a weakly magnetised QCD
plasma~\cite{aritraweakpressure} has also been obtained.
In recent years a large amount of activity has been devoted to exploring the properties of a hot medium
with a background magnetic field, using phenomenology as well as thermal field theory.
In a thermal medium the bulk and dynamical properties~\cite{brateennucl337,weldonfermion,Weldon:1982aq} are
characterised by the collective excitations in the time-like region and by Landau damping in
the space-like domain. The basic quantity associated with these medium properties
is the two point correlation function. In this work we construct the general structure of the fermionic
two point functions (e.g., self-energy and the effective propagator) in a nontrivial background
like a hot magnetised medium.
We then analyse its property under the transformation of some discrete symmetries of the system,
the collective fermionic spectra, QED like three-point functions and the spectral representation of
the two point function and its consequences in a hot magnetised medium.
The formulation is applicable equally well
to both QED and QCD.
The paper is organised as follows. In section \ref{fer_prop_section}, the notation and set-up are briefly
discussed through the fermion
propagator in a constant background field in the Schwinger formalism.
Section \ref{gen_2pt} has a number of parts, in which we obtain
the general structure of the self-energy (subsec.~\ref{self_gen_structure}), the effective fermion propagator (subsec.~\ref{eff_fer_prop}),
the transformation properties and discrete symmetries of the effective propagator (subsec.~\ref{prop_trans}), the modified Dirac equations
in general and for the lowest Landau level (subsec.~\ref{dirac_mod}), and
the dispersion properties of the various collective modes (subsec.~\ref{disp_rep}) in the time-like region. In section~\ref{vert_func} the general structure
of the self-energy and the propagator is verified by a direct one-loop calculation. The spectral
representation of the effective propagator in the space-like domain is obtained in section \ref{spec_rep}. We present
some detailed calculations for the various sections and subsections in Appendices \ref{append_A}-\ref{spec_htl}. Finally, we conclude in section \ref{remarks}.
\section{Charged Fermion Propagator in Background Magnetic Field within Schwinger Formalism}
\label{fer_prop_section}
In this section we set the notation and briefly outline the fermion propagator in the presence of a background magnetic field
following the Schwinger formalism~\cite{schwinger1951}. Without any loss of generality, the background
magnetic field is chosen along the $z$ direction, \(\vec{B}=B\hat{z}\),
and the vector potential in the symmetric gauge reads as
\begin{equation}
A^\mu =(0,-\frac{yB}{2},\frac{xB}{2}, 0) \, . \label{symm_g}
\end{equation}
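As a quick sanity check (ours, not part of the original derivation), one can verify symbolically that this symmetric-gauge potential does produce a uniform magnetic field of magnitude $B$ along $z$, since $(\nabla\times\vec{A})_z=\partial_x A_y-\partial_y A_x = B/2 + B/2$:

```python
# Illustrative check with sympy: curl of the symmetric-gauge potential
# A = (0, -yB/2, xB/2, 0) gives a uniform field B along z.
import sympy as sp

x, y, z, B = sp.symbols('x y z B')
Ax, Ay, Az = -y*B/2, x*B/2, sp.Integer(0)

# curl A in Cartesian components
curl = (sp.diff(Az, y) - sp.diff(Ay, z),
        sp.diff(Ax, z) - sp.diff(Az, x),
        sp.diff(Ay, x) - sp.diff(Ax, y))

assert curl == (0, 0, B)
```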
Below we also outline the notation we shall be using throughout:
\begin{eqnarray}
&& a\indices{^\mu}=(a\indices{^0},a\indices{^1},a\indices{^2},a\indices{^3})=(a_{0},\vec{a}); ~~ a\cdot b\equiv a\indices{_0}b\indices{_0}-\vec{a}\cdot\vec{b};~~
g\indices{^\mu^\nu}=\textsf{diag}\left(1,-1,-1,-1\right),\nonumber \\
&& a^\mu = a_\shortparallel^\mu + a_\perp^\mu;~~ a_\shortparallel^\mu = (a^0,0,0,a^3) ;~~ a_\perp^\mu = (0,a^1,a^2,0) \nonumber\\
&&g^{\mu\nu} = g_\shortparallel^{\mu\nu} + g_\perp^{\mu\nu};~~ g_\shortparallel^{\mu\nu}= \textsf{diag}(1,0,0,-1);~~ g_\perp^{\mu\nu} = \textsf{diag}(0,-1,-1,0),\nonumber\\
&&(a\cdot b) = (a\cdot b)_\shortparallel - (a\cdot b)_\perp;~~ (a\cdot b)_\shortparallel = a^0b^0-a^3b^3;~~ (a\cdot b)_\perp = a^1b^1+a^2b^2, \nonumber \\
&& \slashed{a}=\gamma\indices{^\mu}a\indices{_\mu}=\slashed{a}_{\shortparallel}+\slashed{a}_{\perp};~~~
\slashed{a}_{\shortparallel} = \gamma\indices{^0}a\indices{_0}-\gamma\indices{^3}a\indices{^3};~~~
\slashed{a}_{\perp} = \gamma\indices{^1}a\indices{^1}+\gamma\indices{^2}a\indices{^2}
\end{eqnarray}
where $\shortparallel$ and $\perp$ denote, respectively, the parallel and perpendicular
components, which are separated due to the presence of the background magnetic field.
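The sign convention in the split of the Minkowski product, $(a\cdot b) = (a\cdot b)_\shortparallel - (a\cdot b)_\perp$, is easy to get wrong; a small numeric check (ours, purely illustrative) confirms it:

```python
# Numeric sanity check of the parallel/perpendicular decomposition of
# the mostly-minus Minkowski product defined in the notation above.
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)   # contravariant components

g = np.diag([1.0, -1.0, -1.0, -1.0])            # full metric
g_par = np.diag([1.0, 0.0, 0.0, -1.0])          # g_par
g_perp = np.diag([0.0, -1.0, -1.0, 0.0])        # g_perp

dot = a @ g @ b                                 # a.b
dot_par = a[0]*b[0] - a[3]*b[3]                 # (a.b)_par
dot_perp = a[1]*b[1] + a[2]*b[2]                # (a.b)_perp

assert np.allclose(g, g_par + g_perp)           # metric split
assert np.isclose(dot, dot_par - dot_perp)      # (a.b) = (a.b)_par - (a.b)_perp
assert np.isclose(dot_par, a @ g_par @ b)
assert np.isclose(dot_perp, -(a @ g_perp @ b))  # note the relative sign
```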
Now, the fermionic two-point function is written as
\begin{small}
\begin{align}
& S(x,x^{\prime})=-i\,C(x,x^{\prime})\int_{0}^{\infty}\,ds\,\,\frac{1}{s\,\sin(q_{f}\,B\,s)} \exp\left(-im_f^2\,s+i\,q_{f}\,B\,s\,\Sigma_{3}\right)\, \nonumber\\
& \hspace{1cm}\exp\left[-\frac{i}{4\,s}\left((x-x^{\prime})^2_{\shortparallel}-\frac{q_{f}\,B\,s}{\tan(q_{f}\,B\,s)}(x-x^{\prime})^2_{\perp}\right)\right] \nonumber \\
& \times \left[m_f+\frac{1}{2\,s}\left((\slashed{x}_{\shortparallel}-\slashed{x}^{\prime}_{\shortparallel})-\frac{q_{f}\,B\,s}{\sin(q_{f}\,B\,s)}\exp\left(-i\,q_{f}\,B\,s\,
\Sigma_{3}\right)\,\left(\slashed{x}_{\perp}-\slashed{x}^{\prime}_{\perp}\right)\right)\right], \label{gxxp}
\end{align}
\end{small}
where the parameter $s$ is called Schwinger proper time variable \cite{schwinger1951}.
We note that $m_f$ and $q_f$ are mass and \textit{absolute charge} of the fermion of flavour $f$, respectively.
The phase factor, $C(x,x^{\prime})$, is independent of $s$ but is
responsible for the breaking of both gauge and translational invariance. The remaining part, denoted $\mathcal{S}(x-x^{\prime})$, is translationally invariant.
However, as shown below, $C(x,x^{\prime})$ drops out of a gauge invariant calculation. Now $C(x,x^{\prime})$ reads as
\begin{align}
C(x,x^{\prime}) &= C\,\exp\left[i\,q_{f}\,\int_{x^{\prime}}^{x}\,d\xi^{\mu}\left(A_{\mu}(\xi)+\frac{1}{2}F_{\mu\nu}(\xi-x^{\prime})^{\nu}\right)\right] , \label{sol_c}
\end{align}
where $C$ is just a number. The integral in the exponential is independent of the path taken
between \(x\) and \(x^{\prime}\) and choosing it as a straight line one can write
\begin{align}
C(x,x^{\prime}) = C\,\Phi(x,x^{\prime})=C\,\exp\left[i\,q_{f}\,\int_{x^{\prime}}^{x}\,d\xi^{\mu}A_{\mu}(\xi)\right].
\end{align}
Using the gauge transformation
$A^{\mu}(\xi) \rightarrow A^{\mu}(\xi)+\partial^\mu\Lambda(\xi)$,
and choosing symmetric gauge as given in (\ref{symm_g}), the phase factor \(\Phi(x,x^{\prime})\) becomes $1$, if we take \cite{ayalafermionself}
\begin{align}
\Lambda(\xi)=\frac{B}{2}\left(x^{\prime}_{2}\xi_{1}-x^{\prime}_{1}\xi_{2}\right).
\end{align}
From equation \eqref{gxxp}, the momentum space propagator can be obtained as
\begin{small}
\begin{align}
S(K) &= \int d^{4}x\,e^{iK\cdot x}\,\mathcal{S}(x-x^{\prime}) \nonumber \\
&= -i\int_{0}^{\infty}\,ds\,\exp\left[i\,s\,\left(K^{2}_{\shortparallel}-\frac{\tan(q_{f}B\,s)}{q_{f}B\,s}K^{2}_{\perp}-m_f^{2}\right)\right] \nonumber \\
&\,\,\, \, \, \times
\left[\left(1+\gamma_{1}\gamma_{2}\tan(q_{f}B\,s)\right)\left(\slashed{K}_{\shortparallel}+m_f\right)-\sec^2(q_fBs){\slashed {K}_{\perp}}\right] \nonumber \\
&= \,\exp\left(-{{K_\perp}^{2}}/{|q_{f}B|}\right)\sum_{l=0}^{\infty}(-1)^l\frac{D_{l}(q_{f}B,K)}{K^{2}_{\shortparallel}-m_f^{2}-2l|q_{f}B|} , \label{mirprop}
\end{align}
where $K_\perp^2 = 2l|q_fB|$ is quantised in Landau levels $l=0,1, \cdots$, and
\begin{align}
D_{l}(q_{f}B,K) &= \left(\slashed{K}_{\shortparallel}+m_f\right)\left[(1-i\gamma_{1}\gamma_{2})L_{l}\left(2\frac{{K_\perp}^{2}}{|q_{f}B|}\right)
- (1+i\gamma_{1}\gamma_{2})L_{l-1}\left(2\frac{{K_\perp}^{2}}{|q_{f}B|}\right)\right] \nonumber \\
& - 4\slashed{K}_{\perp}L^{1}_{l-1}\left(2\frac{{K_\perp}^{2}}{|q_{f}B|}\right),
\label{lagu}
\end{align}
\end{small}
where \( L_{l}(x) \) is the Laguerre polynomial and \( L^{j}_{l}(x) \) is the associated Laguerre polynomial,
with the convention \( L^{j}_{-1}(x)=0 \), where $j$ is a non-negative integer.
Below we discuss the structure of the propagator in \eqref{mirprop} in the presence of a background magnetic field. Since the fermion propagator is a
$4\times 4$ matrix, a new matrix structure $ (\slashed{K}_{\shortparallel}+m_f)\, i\gamma_1\gamma_2$ appears in addition to the vacuum structure
$\alpha'{\slashed{K}}$ (where $\alpha'(K^2)$ is a Lorentz invariant structure function) for a chirally symmetric theory. One can now write
the new matrix for a chirally symmetric theory in terms of the background electromagnetic field tensor $F^{\rho\lambda}$ as
\begin{eqnarray}
i\gamma_1\gamma_2 \slashed{K}_{\shortparallel} \, B &=& -\gamma_5K^\mu {\tilde F}_{\mu\nu} \gamma^\nu, \label{rel_t0}
\end{eqnarray}
where the background dual field tensor reads as
\begin{equation}
{\tilde F}_{\mu\nu}= \frac{1}{2} \epsilon_{\mu\nu\rho\lambda} F^{\rho\lambda}. \label{dual}
\end{equation}
The structure of a chirally symmetric free fermion propagator
in the presence of only a magnetic field can thus be viewed as ($\alpha'{\slashed{K}}+\delta' \gamma_5K^\mu {\tilde F}_{\mu\nu} \gamma^\nu $),
where $\delta'$ is a new structure function that appears due to the presence of the background magnetic field. When a fermion propagates only
in a hot medium, the vacuum part is modified by the thermal background~\cite{weldonfermion} alone,
as $(\alpha'{\slashed{K}}+\beta'\slashed{u})$, where $u$ is the four velocity of the heat bath. When a fermion moves in a nontrivial background
like a hot magnetised medium,
one can write \eqref{rel_t0} as
\begin{align}
i\gamma_{1}\gamma_{2}\slashed{K}_{\shortparallel} &= -\gamma_{5}\left[\left(K.n\right)\slashed{u}-\left(K.u\right)\slashed{n}\right]
, \label{acom}
\end{align}
where
\begin{equation}
n_\mu = \frac{1}{2B} \epsilon_{\mu\nu\rho\lambda}\, u^\nu F^{\rho\lambda} = \frac{1}{B}u^\nu {\tilde F}_{\mu\nu} \, . \label{nmu}
\end{equation}
The four velocity in the rest frame of the heat bath and the direction of the magnetic field are given, respectively, as
\begin{subequations}
\begin{align}
u^{\mu} &= (1,0,0,0), \label{fv4} \\
n^{\mu} &= (0,0,0,1) . \label{mgdir}
\end{align}
\end{subequations}
One can notice that in a hot magnetised medium both $u$ and $n$ are correlated, as given in \eqref{nmu},
and the contribution of the magnetic field in \eqref{rel_t0} becomes a thermo-magnetic contribution in the presence of the heat bath.
We further note that in the absence of the heat bath, \eqref{acom} reduces to \eqref{rel_t0}; this is not obvious by inspection,
but we will see it later.
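The identity \eqref{acom} can also be verified directly with explicit Dirac matrices. The following numeric check (ours, illustrative; the helper `slash` and the sample momentum are our own) confirms it in the Dirac representation with $u=(1,0,0,0)$ and $n=(0,0,0,1)$:

```python
# Gamma-matrix check of i g1 g2 Kslash_par = -g5[(K.n) uslash - (K.u) nslash]
# in the Dirac representation, mostly-minus metric.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, si], [-si, Z2]]) for si in s]
gamma = [g0] + gs                       # gamma^mu, mu = 0..3
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]    # gamma5 = i g0 g1 g2 g3

eta = np.diag([1.0, -1.0, -1.0, -1.0])
def slash(a):                           # gamma^mu a_mu, a given as a^mu
    return sum(eta[m, m] * a[m] * gamma[m] for m in range(4))

K = np.array([0.7, 0.3, -0.4, 1.1])    # arbitrary sample momentum
u = np.array([1.0, 0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 0.0, 1.0])
K_par = np.array([K[0], 0.0, 0.0, K[3]])

Kdotn = K @ eta @ n                     # K.n = -K^3
Kdotu = K @ eta @ u                     # K.u = K^0
lhs = 1j * gamma[1] @ gamma[2] @ slash(K_par)
rhs = -g5 @ (Kdotn * slash(u) - Kdotu * slash(n))
assert np.allclose(lhs, rhs)
```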
\section{General Structure of Fermion Two-point Function in a Hot Magnetised Medium}
\label{gen_2pt}
In the previous section the modification of a free propagator in the presence of a background magnetic field was discussed briefly.
In this section we obtain the most general structure of the fermion self-energy, the effective fermion propagator
and some of their properties in a nontrivial background like a hot magnetised medium. We also discuss the modified Dirac equation
and the fermion dispersion spectrum in a hot magnetised medium. For the thermal bath we use the HTL approximation;
any other approximation required for the purpose will be stated where it is used.
\subsection{General Structure of the Fermion Self-Energy}
\label{self_gen_structure}
The fermionic self-energy is a matrix as well as a Lorentz scalar. However, in the presence of a nontrivial background, \textit{e.g.,} a heat bath
and a magnetic field, the boost and rotational
symmetries of the system are broken. The general structure of the fermion self-energy for a hot magnetised medium can be written down by the following arguments.
The self-energy $\Sigma(P)$ is a $4\times 4$ matrix which depends, in the present case, on the four momentum of the fermion $P$,
the velocity of the heat bath $u$ and the direction of the magnetic field $n$.
Now, any $4\times 4$ matrix can be expanded in terms of the 16 basis
matrices $\{\mathbbm{1},\gamma_{5},\gamma_{\mu},\gamma_{\mu}\gamma_{5},\sigma_{\mu\nu}\}$: the unit matrix, $\gamma_{5}$, the four $\gamma$-matrices,
the four $\gamma_{\mu}\gamma_{5}$ matrices and the six $\sigma_{\mu\nu}$ matrices.
So, the general structure can be written as
\begin{align}
\Sigma(P) &= -\alpha \mathbbm{1} - \beta \gamma_{5} - a \slashed{P} - b\slashed{u} - c \slashed{n} - a'\gamma_{5}\slashed{P} - b^{\prime}\gamma_{5}\slashed{u}
- c^{\prime}\gamma_{5}\,\slashed{n} \nonumber \\
& -h\, \sigma_{\mu\nu}P^{\mu}P^{\nu}- h^\prime \sigma_{\mu\nu}u^{\mu}u^{\nu}- \kappa \ \sigma_{\mu\nu}n^{\mu}n^{\nu}
- d\sigma_{\mu\nu}P^{\mu}u^{\nu}-d^{\prime}\sigma_{\mu\nu}n^{\mu}P^{\nu}-\kappa^\prime\sigma_{\mu\nu}u^{\mu}n^{\nu}
\, , \label{genstructselfenergy0}
\end{align}
where the various coefficients are known as structure functions.
We note that the combinations involving $\sigma_{\mu\nu}$ do not
appear, due to its antisymmetric nature, at any loop order of the self-energy. Also, in a chirally invariant
theory the terms \( \alpha \mathbbm{1} \) and \(\beta\gamma_{5} \) do not appear, as they would break the chiral symmetry.
The term \( \gamma_5\slashed{P} \) would appear in the self-energy if fermions interacted with an axial vector\footnote{
The presence of an axial gauge coupling leads to a chiral or axial anomaly, and a chirally invariant theory does not
allow this. In other words, both chiral and axial symmetries cannot be preserved simultaneously; a choice must be made
as to which one is preserved. For a chirally invariant theory this term drops out. Also, the presence of $\gamma_5$
in a Lagrangian violates parity invariance.}.
By dropping those in \eqref{genstructselfenergy0} for chirally symmetric theory, one
can now write
\begin{align}
\Sigma(P) &= - a \,\slashed{P} - b\,\slashed{u} - c \,\slashed{n} - b^{\prime}\gamma_{5}\,\slashed{u}
- c^{\prime}\gamma_{5}\,\slashed{n} . \label{genstructselfenergy1}
\end{align}
Now we point out that some important information is encoded in the fermion propagator
in \eqref{mirprop} through \eqref{acom} for a hot magnetised medium. This suggests that $c\slashed{n}$ should not appear
in the fermion self-energy~\footnote{We have checked that even if one keeps $c \,\slashed{n}$, the coefficient $c$ becomes zero
at one-loop order in the weak field approximation.}, and the most general form of the fermion self-energy for a hot magnetised medium becomes
\begin{align}
\Sigma(P) &= - a \,\slashed{P} - b\slashed{u} - b^{\prime}\gamma_{5}\,\slashed{u}
- c^{\prime}\gamma_{5}\,\slashed{n}. \label{genstructselfenergy}
\end{align}
When a fermion propagates in a vacuum, then $b=b'=c'=0$ and $\Sigma(P)=-a\slashed{P}$. But when it propagates in a background of pure magnetic
field without any heat bath, then $a\ne 0$, $b=0$ and the structure functions, $b'$ and $c'$, will depend only on the
background magnetic field as we will see later. When a fermion propagates in a heat bath, then $a\ne 0$, $b\ne0$ but both
$b'$ and $c'$ vanish because there would not be any thermo-magnetic corrections as can also be seen later.
We now write down the \emph{right chiral} projection operator $\displaystyle \mathcal{P}_{+}$ and the \emph{left chiral} projection
operator $\displaystyle \mathcal{P}_{-}$, respectively defined as:
\begin{subequations}
\begin{align}
\mathcal{P}_{+} &= \frac{1}{2}\left(\mathbbm{1}+\gamma_{5}\right) \label{RChPO} , \\
\mathcal{P}_{-} &= \frac{1}{2}\left(\mathbbm{1}-\gamma_{5}\right) \label{LChPO} ,
\end{align}
\end{subequations}
which satisfy the usual properties of projection operator:
\begin{equation}
\mathcal{P}^{2}_{\pm} = \mathcal{P}_{\pm}, \quad \mathcal{P}_{+}\,\mathcal{P}_{-}=\mathcal{P}_{-}\,\mathcal{P}_{+}
= 0, \quad \mathcal{P}_{+}+\mathcal{P}_{-} = \mathbbm{1}, \quad \mathcal{P}_{+}-\mathcal{P}_{-} = \gamma_{5}. \label{PropProj}
\end{equation}
Using the chirality projection operators, the general structure of the self-energy in \eqref{genstructselfenergy}
can be cast in the following form
\begin{align}
\Sigma(P) = -\mathcal{P}_{+}\,{\slashed{C}}\,\mathcal{P}_{-} -\mathcal{P}_{-}\,{\slashed{D}}\,\mathcal{P}_{+}, \label{fergenstruct}
\end{align}
where $\slashed{C}$ and $\slashed{D}$ are defined as
\begin{subequations}
\begin{align}
\slashed{C} &= a\,\slashed{P}+(b+b^{\prime})\,\slashed{u}+c^{\prime}\,\slashed{n} \label{slashc} , \\
\slashed{D} &= a\,\slashed{P}+(b-b^{\prime})\,\slashed{u}-c^{\prime}\,\slashed{n}. \label{slashd}
\end{align}
\end{subequations}
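That the chiral form $-\mathcal{P}_{+}\slashed{C}\mathcal{P}_{-}-\mathcal{P}_{-}\slashed{D}\mathcal{P}_{+}$ with these $\slashed{C}$, $\slashed{D}$ reproduces \eqref{genstructselfenergy} follows from $\mathcal{P}_{\pm}\gamma^{\mu}=\gamma^{\mu}\mathcal{P}_{\mp}$; the following numeric check (ours, with arbitrary sample values for the structure functions) confirms it:

```python
# Check: -P+ Cslash P- - P- Dslash P+ equals
# Sigma = -a Pslash - b uslash - b' g5 uslash - c' g5 nslash.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, si], [-si, Z2]]) for si in s]
gamma = [g0] + gs
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
def slash(v):
    return sum(eta[m, m] * v[m] * gamma[m] for m in range(4))

I4 = np.eye(4)
Pp, Pm = (I4 + g5) / 2, (I4 - g5) / 2

a, b, bp, cp = 0.3, -1.2, 0.8, 0.5      # arbitrary structure functions
P = np.array([0.9, 0.2, -0.6, 0.4])
u = np.array([1.0, 0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 0.0, 1.0])

Sigma = -a*slash(P) - b*slash(u) - bp*(g5 @ slash(u)) - cp*(g5 @ slash(n))
C = a*slash(P) + (b + bp)*slash(u) + cp*slash(n)
D = a*slash(P) + (b - bp)*slash(u) - cp*slash(n)
assert np.allclose(Sigma, -Pp @ C @ Pm - Pm @ D @ Pp)
```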
From \eqref{genstructselfenergy} one obtains the general form of the various structure functions as
\begin{subequations}
\begin{align}
a &= \frac{1}{4}\,\,\frac{\Tr\left(\Sigma\slashed{P}\right)-(P.u)\,\Tr\left(\Sigma\slashed{u}\right)}{(P.u)^{2}-P^{2}} , \label{sta}\\
b &= \frac{1}{4}\,\,\frac{-(P.u)\,\Tr\left(\Sigma\slashed{P}\right)+P^{2}\,\Tr\left(\Sigma\slashed{u}\right)}{(P.u)^{2}-P^{2}} ,
\label{stb} \\
b^{\prime} &= - \frac{1}{4}\,\Tr\left(\slashed{u}\Sigma\gamma_{5}\right) , \label{stbp} \\
c^{\prime} &= \frac{1}{4}\,\Tr\left(\slashed{n}\Sigma\gamma_{5}\right) , \label{stcp}
\end{align}
\end{subequations}
which are also Lorentz scalars. Besides $T$ and $B$, they also depend on three Lorentz scalars defined by
\begin{subequations}
\begin{align}
\omega &\equiv P^{\mu}u_{\mu}, \label{ome} \\
p\indices{^3}&\equiv -P^{\mu}n_{\mu} =p_z \, , \label{p3} \\
p_{\perp} &\equiv \left[(P^{\mu}u_{\mu})^{2}-(P^{\mu}n_{\mu})^{2}-(P^{\mu}P_{\mu})\right] ^{1/2}. \label{pperp}
\end{align}
\end{subequations}
Since \(P^{2}=\omega^{2}-p^{2}_{\perp}-{p\indices{_z}}^{2}\), we may interpret $\omega$, $p_{\perp}$ and $p\indices{_z}$ as the Lorentz invariant
energy, transverse momentum and longitudinal momentum, respectively.
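As a numeric illustration (ours; the sample self-energy and values are arbitrary), one can check that the trace projections \eqref{sta} and \eqref{stb} recover $a$ and $b$ from a self-energy of the form \eqref{genstructselfenergy}, the $\gamma_5$ terms dropping out of these two traces:

```python
# Recover a and b from Sigma via the trace formulas (sta) and (stb).
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, si], [-si, Z2]]) for si in s]
gamma = [g0] + gs
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
def slash(v):
    return sum(eta[m, m] * v[m] * gamma[m] for m in range(4))

a, b, bp, cp = 0.3, -1.2, 0.8, 0.5      # arbitrary structure functions
P = np.array([0.9, 0.2, -0.6, 0.4])
u = np.array([1.0, 0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 0.0, 1.0])
Sigma = -a*slash(P) - b*slash(u) - bp*(g5 @ slash(u)) - cp*(g5 @ slash(n))

TrSP = np.trace(Sigma @ slash(P))
TrSu = np.trace(Sigma @ slash(u))
Pu = P @ eta @ u
P2 = P @ eta @ P
den = Pu**2 - P2

a_rec = (TrSP - Pu * TrSu) / (4 * den)          # eq. (sta)
b_rec = (-Pu * TrSP + P2 * TrSu) / (4 * den)    # eq. (stb)
assert np.isclose(a_rec, a) and np.isclose(b_rec, b)
```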
All these structure functions have been computed at one-loop order in the weak
field and HTL approximations in
Appendix \ref{append_A} and are quoted here~\footnote{In the weak field approximation the domain of applicability
becomes $m_{th}^2 (\sim g^2T^2) < q_fB < T^2$
instead of $m^2 < q_fB< T^2$, as discussed in Appendix~\ref{append_A}.} as
\begin{small}
\begin{subequations}
\begin{align}
a(p_0,|\vec p|) &
\, = \, -\frac{m^{2}_{th}}{|\vec{p}|^{2}}Q_{1}\left(\frac{p_{0}}{|\vec{p}|}\right), \label{at}\\
b(p_0,|\vec p|)&
\, = \, \frac{m^{2}_{th}}{|\vec{p}|}\left[\frac{p_{0}}{|\vec{p}|}Q_{1}\left(\frac{p_{0}}{|\vec{p}|}\right)-Q_{0}\left(\frac{p_{0}}{|\vec{p}|}\right)\right], \label{bt} \\
b^{\prime}(p_0,|\vec p|) &
\, = \, 4C_{F}g^{2}M^{2}(T,m_f,q_{f}B)\frac{p_{z}}{|\vec{p}|^{2}}Q_{1}\left(\frac{p_{0}}{|\vec{p}|}\right), \label{bprime}\\
c^{\prime} (p_0,|\vec p|)&
\, =\, 4C_{F}g^{2}M^{2}(T,m_f,q_{f}B)\frac{1}{|\vec{p}|}Q_{0}\left(\frac{p_{0}}{|\vec{p}|}\right). \label{cprime}
\end{align}
\end{subequations}
\end{small}
We note that the respective vacuum contributions in $a$, $b'$ and $c'$ have been dropped by the choice of the renormalisation prescription,
and the general structure of the self-energy, as found in appendix \ref{append_A}, agrees with that in \eqref{genstructselfenergy}.
\subsection{Effective Fermion Propagator}
\label{eff_fer_prop}
The effective fermion propagator is given by the Dyson-Schwinger equation (see Fig.~\ref{fig:dyson_schwinger}), which reads as
\begin{align}
\,S^{*}(P)=\frac{1}{\slashed{P}-\Sigma(P)}\, , \label{eff_prop0}
\end{align}
and the inverse fermion propagator reads as
\begin{equation}
{S^*}^{-1}(P) =\slashed{P}-\Sigma(P)\, . \label{inv_prop}
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.9]{fig1.pdf}
\caption{\small Diagrammatic representation of the Dyson-Schwinger equation for the one-loop effective fermion propagator.}
\label{fig:dyson_schwinger}
\end{figure}
Using \eqref{fergenstruct} the inverse fermion propagator can be written as
\begin{align}
{S^*}^{-1}(P)
&= \mathcal{P}_{+}\left[(1+a(p_{0},|\vec{p}|))\slashed{P}+\left(b(p_{0},|\vec{p}|)
+b^{\prime}(p_{0},p_{\perp},p_{z})\right)\slashed{u}+c^{\prime}(p_{0},|\vec{p}|)\slashed{n}\right] \mathcal{P}_{-} \nonumber \\
&\,\, +\mathcal{P}_{-}\left[(1+a(p_{0},|\vec{p}|))\slashed{P}+\left(b(p_{0},|\vec{p}|)-b^{\prime}(p_{0},p_{\perp},p_{z})\right)\slashed{u}
-c^{\prime}(p_{0},|\vec{p}|)\slashed{n}\right]\mathcal{P}_{+} \nonumber \\
&= \mathcal{P}_{+}\,\slashed{L}\,\mathcal{P}_{-}+\mathcal{P}_{-}\,\slashed{R}\,\mathcal{P}_{+} \, , \label{eff_in_prop0}
\end{align}
where $\slashed{L}$ and $\slashed{R}$ can be obtained from two four vectors given by
\begin{subequations}
\begin{align}
L\indices{^\mu}(p_{0},p_{\perp},p_{z}) &= \mathcal{A}(p_{0},|\vec{p}|)\,P^{\mu}
+\mathcal{B}_{+}(p_{0},p_{\perp},p_{z})\,u^{\mu}+c^{\prime}(p_{0},|\vec{p}|)\, n^\mu , \label{l_mu} \\
R\indices{^\mu}(p_{0},p_{\perp},p_{z}) &= \mathcal{A}(p_{0},|\vec{p}|)\,P^{\mu}
+\mathcal{B}_{-}(p_{0},p_{\perp},p_{z})\,u^{\mu}-c^{\prime}(p_{0},|\vec{p}|)\, n^\mu \, ,
\label{r_mu}
\end{align}
\end{subequations}
with
\begin{subequations}
\begin{align}
\mathcal{A}(p_{0},|\vec{p}|)&=1+a(p_{0},|\vec{p}|), \label{cal_a} \\
\mathcal{B}_{\pm}(p_{0},p_{\perp},p_{z})&=b(p_{0},|\vec{p}|) \pm b^{\prime}(p_{0},p_{\perp},p_{z}) \ . \label{cal_bpm}
\end{align}
\end{subequations}
Using \eqref{eff_in_prop0} in \eqref{eff_prop0}, the propagator can now be written as
\begin{align}
S^{*}(P) &= \mathcal{P}_{-}\frac{\slashed{L}}{L^{2}}\mathcal{P}_{+} + \mathcal{P}_{+}\frac{\slashed{R}}{R^{2}}\mathcal{P}_{-} \, , \label{eff_prop1}
\end{align}
where we have used the properties of the projection operators
$\, \mathcal{P}_{\pm}\gamma\indices{^\mu}=\gamma\indices{^\mu}\mathcal{P}_{\mp}, \, \mathcal{P}^{2}_{\pm}=\mathcal{P}_{\pm}, \, \mbox{and} \,
\mathcal{P}_{+}\mathcal{P}_{-}=\mathcal{P}_{-}\mathcal{P}_{+}=0$. It can be checked that $S^*(P){S^*}^{-1}(P)= \mathcal{P}_{+}
+\mathcal{P}_{-} = \mathbbm{1}$ .
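The inversion can also be confirmed numerically (a check of ours, with arbitrary sample four-vectors $L$ and $R$): the chiral structure kills the cross terms and $\slashed{L}\slashed{L}=L^2\mathbbm{1}$ does the rest.

```python
# Check that S*(P) S*^{-1}(P) = 1 for the chiral forms
# S*^{-1} = P+ Lslash P- + P- Rslash P+ and
# S*      = P- Lslash/L^2 P+ + P+ Rslash/R^2 P-.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, si], [-si, Z2]]) for si in s]
gamma = [g0] + gs
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
def slash(v):
    return sum(eta[m, m] * v[m] * gamma[m] for m in range(4))

I4 = np.eye(4)
Pp, Pm = (I4 + g5) / 2, (I4 - g5) / 2

L = np.array([1.3, 0.2, -0.5, 0.7])     # arbitrary sample four-vectors
R = np.array([0.9, -0.4, 0.1, 1.5])
L2 = L @ eta @ L                         # L^2 (nonzero for these values)
R2 = R @ eta @ R                         # R^2 (nonzero for these values)

Sinv = Pp @ slash(L) @ Pm + Pm @ slash(R) @ Pp
S = Pm @ slash(L) @ Pp / L2 + Pp @ slash(R) @ Pm / R2
assert np.allclose(S @ Sinv, I4)
```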
Also we have
\begin{subequations}
\begin{align}
L^2 =L^{\mu}L_{\mu}
&= \left(\mathcal{A}p_{0}+\mathcal{B}_{+}\right)^{2}-\left[\left(\mathcal{A}p_{z}
+c^{\prime}\right)^{2}+\mathcal{A}^{2}p^{2}_{\perp}\right] =L_0^2-|\vec{L}|^2 \, , \label{l2}\\
R^{2} =R^\mu R_\mu &= \left(\mathcal{A}p_{0}+\mathcal{B}_{-}\right)^{2}-\left[\left(\mathcal{A}p_{z}
-c^{\prime}\right)^{2}+\mathcal{A}^{2}p^{2}_{\perp}\right] = R_0^2-|\vec{R}|^2 \, ,\label{r2}
\end{align}
\end{subequations}
where we have used $u^{2}=1, \, n^{2}=-1, \,u\cdot n=0, \, P\cdot u=p_{0},\, \, \mbox{and}\, \, P\cdot n=-p_z$. Note that we have suppressed
the functional dependencies of $L,\, R, \, \mathcal{A}$, $\mathcal{B}_{\pm}$ and $c^{\prime}$, and will bring them back
whenever necessary.
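The expansions \eqref{l2} and \eqref{r2} follow by squaring \eqref{l_mu} and \eqref{r_mu} with $u^2=1$, $n^2=-1$, $u\cdot n=0$; a numeric spot check (ours, with arbitrary sample values) confirms them:

```python
# Check L^2 and R^2 against their expansions, with
# L^mu = A P^mu + B_+ u^mu + c' n^mu and R^mu = A P^mu + B_- u^mu - c' n^mu.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
p0, p1, p2, p3 = 1.1, 0.4, -0.3, 0.6
P = np.array([p0, p1, p2, p3])
u = np.array([1.0, 0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 0.0, 1.0])
A, Bp, Bm, cp = 1.2, 0.5, -0.7, 0.3      # arbitrary A, B_+, B_-, c'
pz, pperp2 = p3, p1**2 + p2**2

Lmu = A*P + Bp*u + cp*n
Rmu = A*P + Bm*u - cp*n
L2 = Lmu @ eta @ Lmu
R2 = Rmu @ eta @ Rmu
assert np.isclose(L2, (A*p0 + Bp)**2 - ((A*pz + cp)**2 + A**2*pperp2))
assert np.isclose(R2, (A*p0 + Bm)**2 - ((A*pz - cp)**2 + A**2*pperp2))
```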
For the lowest Landau Level (LLL), $l=0 \, \Rightarrow p_\perp=0$, and these relations reduce to
\begin{subequations}
\begin{align}
L^2_{LLL} &= \left(\mathcal{A}p_{0}+\mathcal{B}_{+}\right)^{2}-\left(\mathcal{A}p_{z} +c^{\prime}\right)^{2}=L_0^2-L_z^2 \, , \label{defLsquare_l0}\\
R^{2}_{LLL} &= \left(\mathcal{A}p_{0}+\mathcal{B}_{-}\right)^{2}-\left(\mathcal{A}p_{z}-c^{\prime}\right)^{2} = R_0^2-R_z^2 \, .\label{defRsquare_r0}
\end{align}
\end{subequations}
The poles of the effective propagator, $ L^2=0$ and $ R^2 =0$, give rise to quasi-particle dispersion relations in
a hot magnetised medium. There will be four collective modes with positive energies: two from $L^2=0$ and two from $R^2=0$.
We will discuss the dispersion properties in detail later.
\subsection{Transformation Properties of Structure Functions and Propagator}
\label{prop_trans}
First, we outline some transformation properties of the various structure functions as obtained in
\eqref{at}, \eqref{bt}, \eqref{bprime} and \eqref{cprime}.
\begin{enumerate}
\item Under the transformation ${\vec p} \rightarrow -{\vec p}$, \textit{i.e.}, $(p_\perp,p_z)\rightarrow (p_\perp,-p_z)$:
\begin{subequations}
\begin{align}
a(p_{0},|-\vec{p}|) &= a(p_{0},|\vec{p}|), \label{amvp} \\
b(p_{0},|-\vec{p}|) &= b(p_{0},|\vec{p}|), \label{bmvp} \\
b^{\prime}(p_{0},p_{\perp},-p_{z}) &= -b^{\prime}(p_{0},p_{\perp},p_{z}), \label{bpmvp} \\
c^{\prime}(p_{0},|-\vec{p}|) &= c^{\prime}(p_{0},|\vec{p}|) . \label{cpmvp}
\end{align}
\end{subequations}
\item For $p_0 \rightarrow -p_0$:
\begin{subequations}
\begin{align}
a(-p_{0},|\vec{p}|) &= a(p_{0},|\vec{p}|), \label{amp0} \\
b(-p_{0},|\vec{p}|) &= -b(p_{0},|\vec{p}|), \label{bmp0} \\
b^{\prime}(-p_{0},p_{\perp},p_{z}) &= b^{\prime}(p_{0},p_{\perp},p_{z}), \label{bpmp0} \\
c^{\prime}(-p_{0},|\vec{p}|) &= -c^{\prime}(p_{0},|\vec{p}|) . \label{cpmp0}
\end{align}
\end{subequations}
\item For $P \rightarrow -P =(-p_0,-{\vec p})$:
\begin{subequations}
\begin{align}
a(-p_{0},|-\vec{p}|) &= a(p_{0},|\vec{p}|), \label{amp} \\
b(-p_{0},|-\vec{p}|) &= -b(p_{0},|\vec{p}|), \label{bmp} \\
b^{\prime}(-p_{0},p_{\perp},-p_{z}) &= -b^{\prime}(p_{0},p_{\perp},p_{z}), \label{bpmp} \\
c^{\prime}(-p_{0},|-\vec{p}|) &= -c^{\prime}(p_{0},|\vec{p}|) . \label{cpmp}
\end{align}
\end{subequations}
We have used the fact that $Q_{0}(-x) = -Q_{0}(x)$ and $Q_{1}(-x) = Q_{1}(x)$.
\end{enumerate}
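The parities listed above follow directly from the explicit forms \eqref{at}-\eqref{cprime} together with $Q_{0}(-x)=-Q_{0}(x)$ and $Q_{1}(-x)=Q_{1}(x)$. A numeric illustration (ours; overall constants such as $m_{th}^2$ are set to one, and the closed forms of $Q_{0}$, $Q_{1}$ used here are for the time-like region $|x|>1$):

```python
# Parity of the Legendre functions of the second kind and of the
# structure functions a, b, b', c' under p0 -> -p0 (and pz -> -pz for b').
import numpy as np

def Q0(x):
    return 0.5 * np.log((x + 1) / (x - 1))   # time-like region, |x| > 1

def Q1(x):
    return x * Q0(x) - 1.0

x = 1.7
assert np.isclose(Q0(-x), -Q0(x))            # Q0 odd
assert np.isclose(Q1(-x), Q1(x))             # Q1 even

# Structure functions up to overall constants (m_th^2 etc. set to 1):
p, pz = 0.9, 0.4
a_ = lambda p0: -Q1(p0/p) / p**2
b_ = lambda p0: (p0/p * Q1(p0/p) - Q0(p0/p)) / p
bp = lambda p0, pz: pz * Q1(p0/p) / p**2
cp = lambda p0: Q0(p0/p) / p

p0 = 1.6                                     # time-like: p0 > |p|
assert np.isclose(a_(-p0), a_(p0))           # a  even in p0
assert np.isclose(b_(-p0), -b_(p0))          # b  odd  in p0
assert np.isclose(bp(-p0, pz), bp(p0, pz))   # b' even in p0
assert np.isclose(bp(p0, -pz), -bp(p0, pz))  # b' odd  in pz
assert np.isclose(cp(-p0), -cp(p0))          # c' odd  in p0
```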
Based on the above, we also note down the transformation properties of the quantities appearing in the
propagator.
\begin{enumerate}
\item For $\mathcal A$:
\begin{subequations}
\begin{align}
\mathcal{A}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$\vec{p}\rightarrow -\vec{p}$}} \mathcal{A}(p_{0},p_{\perp},p_{z}), \label{calamvp} \\
\mathcal{A}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$p_{0}\rightarrow -p_{0}$}} \mathcal{A}(p_{0},p_{\perp},p_{z}) , \label{calamp0} \\
\mathcal{A}(p_{0},p_{\perp},p_{z}) &\xrightarrow[\text{$\vec{p}\rightarrow -\vec{p}$}]{\text{$p_{0}\rightarrow -p_{0}$}} \mathcal{A}(p_{0},p_{\perp},p_{z}).
\label{calcpmp}
\end{align}
\end{subequations}
\item For ${\mathcal B}_\pm$:
\begin{subequations}
\begin{align}
\mathcal{B}_{\pm}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$\vec{p}\rightarrow -\vec{p}$}} \mathcal{B}_{\mp}(p_{0},p_{\perp},p_{z}), \label{calbmvp} \\
\mathcal{B}_{\pm}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$p_{0}\rightarrow -p_{0}$}} -\mathcal{B}_{\mp}(p_{0},p_{\perp},p_{z}), \label{calbmp0} \\
\mathcal{B}_{\pm}(p_{0}, p_{\perp},p_{z}) &\xrightarrow[\text{$\vec{p}\rightarrow -\vec{p}$}]{\text{$p_{0}\rightarrow -p_{0}$}} -\mathcal{B}_{\pm}(p_{0},p_{\perp},p_{z}).
\label{calbmp}
\end{align}
\end{subequations}
\end{enumerate}
Using the above transformation properties, it can be shown that
$\slashed{L}, \, \slashed{R}, \, L^2$ and $R^2$, respectively given in \eqref{l_mu}, \eqref{r_mu}, \eqref{l2} and \eqref{r2} transform as
\begin{subequations}
\begin{align}
\slashed{L}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$\vec{p}\rightarrow -\vec{p}$}} \mathcal{A}(p_{0},|\vec{p}|)(p_{0}\gamma\indices{^0}+\vec{p}\cdot\vec{\gamma})
+\mathcal{B}_{-}(p_{0},p_{\perp},p_{z})\slashed{u}+c^{\prime}(p_{0},|\vec{p}|)\slashed{n} \, , \label{l_mp}\\
\slashed{R}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$\vec{p}\rightarrow -\vec{p}$}} \mathcal{A}(p_{0},|\vec{p}|)(p_{0}\gamma\indices{^0}+\vec{p}\cdot\vec{\gamma})
+\mathcal{B}_{+}(p_{0},p_{\perp},p_{z})\slashed{u}-c^{\prime}(p_{0},|\vec{p}|)\slashed{n}\, , \label{r_mp} \\
L^{2}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$\vec{p}\rightarrow -\vec{p}$}}R^{2}(p_{0},p_{\perp},p_{z})\, , \label{l2_mp} \\
R^{2}(p_{0},p_{\perp},p_{z}) &\xrightarrow[]{\text{$\vec{p}\rightarrow -\vec{p}$}}L^{2}(p_{0},p_{\perp},p_{z})\, , \label{r2_mp}
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
\slashed{L}(p_{0},p_{\perp},p_{z}) &\xrightarrow[\text{$\vec{p}\rightarrow -\vec{p}$}]{\text{$p_{0}\rightarrow -p_{0}$}}-\slashed{L}(p_{0},p_{\perp},p_{z}) , \label{l_mp_p_mp} \\
\slashed{R}(p_{0},p_{\perp}, p_{z}) &\xrightarrow[\text{$\vec{p}\rightarrow -\vec{p}$}]{\text{$p_{0}\rightarrow -p_{0}$}} -\slashed{R}(p_{0},p_{\perp},p_{z}) , \label{r_mp0_mp} \\
L^{2}(p_{0},p_{\perp}, p_{z})& \xrightarrow[\text{$\vec{p}\rightarrow -\vec{p}$}]{\text{$p_{0}\rightarrow -p_{0}$}} L^{2}(p_{0},p_{\perp},p_{z}), \label{l2_mp0_mp} \\
R^{2}(p_{0},p_{\perp},p_{z}) &\xrightarrow[\text{$\vec{p}\rightarrow -\vec{p}$}]{\text{$p_{0}\rightarrow -p_{0}$}} R^{2}(p_{0},p_{\perp},p_{z}) . \label{r2_mp0_mp}
\end{align}
\end{subequations}
Now we are in a position to check the transformation properties of the effective propagator under some of the discrete symmetries:
\subsubsection{Chirality}
\label{chirality_trans}
Under chirality the fermion propagator transforms as~\cite{Weldon:1999th}
\begin{equation}
S(p_{0},\vec{p}) \longrightarrow -\gamma_{5}\,S(p_{0},\vec{p})\,\gamma_{5} . \label{chi_def}
\end{equation}
The effective propagator, $S^*(p_{0},p_{\perp},p_{z})$, in \eqref{eff_prop1} transforms under chirality as
\begin{align}
-\gamma_{5}\,S^*(p_{0},p_{\perp},p_{z})\,\gamma_{5} &= -\gamma_{5}\mathcal{P}_{-}\frac{\slashed{L}(p_{0},p_{\perp},p_{z})}{L^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{+}\gamma_{5}-\gamma_{5}\mathcal{P}_{+}\frac{\slashed{R}(p_{0},p_{\perp},p_{z})}{R^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{-}\gamma_{5} \nonumber \\
&= \mathcal{P}_{-}\frac{\slashed{L}(p_{0},p_{\perp},p_{z})}{L^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{+} + \mathcal{P}_{+}\frac{\slashed{R}(p_{0},p_{\perp},p_{z})}{R^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{-} \nonumber \\
&= S^*(p_{0},p_{\perp},p_{z}), \label{chi_inv}
\end{align}
which satisfies \eqref{chi_def}, indicating that the effective propagator is chirally invariant.
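For reference, the manipulation above uses only the elementary identities (valid for $\mathcal{P}_{\pm}=(\mathbbm{1}\pm\gamma_{5})/2$, and following from $\gamma_{5}^{2}=\mathbbm{1}$ and $\{\gamma_{5},\gamma^{\mu}\}=0$)

```latex
\begin{align}
\gamma_{5}\,\mathcal{P}_{\pm} = \mathcal{P}_{\pm}\,\gamma_{5} = \pm\,\mathcal{P}_{\pm}\,,
\qquad
\gamma_{5}\,\slashed{a}\,\gamma_{5} = -\slashed{a}\,,
\end{align}
```

so the sign generated by anticommuting $\gamma_{5}$ through $\slashed{L}$ or $\slashed{R}$ cancels the overall minus sign in \eqref{chi_def}.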
\subsubsection{Reflection}
\label{ref_sym}
Under reflection the fermion propagator transforms~\cite{Weldon:1999th}
as
\begin{equation}
S(p_{0},\vec{p}) \longrightarrow S(p_{0},- \vec{p}) . \label{ref_def}
\end{equation}
The effective propagator, $S^*(p_{0},p_{\perp},p_{z})$, in \eqref{eff_prop1} transforms under reflection as
\begin{align}
S^*(p_{0},p_{\perp},-p_{z})&= \mathcal{P}_{-}\frac{\slashed{L}(p_{0},p_{\perp},-p_{z})}{L^{2}(p_{0},p_{\perp},-p_{z})}\mathcal{P}_{+}
+ \mathcal{P}_{+}\frac{\slashed{R}(p_{0},p_{\perp},-p_{z})}{R^{2}(p_{0},p_{\perp},-p_{z})}\mathcal{P}_{-} \nonumber \\
&= \mathcal{P}_{-}\frac{\mathcal{A}(p_{0},|\vec{p}|)(p_{0}\gamma\indices{^0}+\vec{p}\cdot\vec{\gamma})
+\mathcal{B}_{-}(p_{0},p_{\perp},p_{z})\slashed{u}+c^{\prime}(p_{0},|\vec{p}|)\slashed{n}}{R^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{+} \nonumber \\
&\hspace{0.35cm} +\mathcal{P}_{+}\frac{\mathcal{A}(p_{0},|\vec{p}|)(p_{0}\gamma\indices{^0}+\vec{p}\cdot\vec{\gamma})
+\mathcal{B}_{+}(p_{0},p_{\perp},p_{z})\slashed{u}-c^{\prime}(p_{0},|\vec{p}|)\slashed{n}}{L^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{-} \nonumber \\
& \ne S^*(p_{0},p_{\perp},p_{z}) . \label{ref_gen_ninv}
\end{align}
However, now considering the rest frame of the heat bath, \(u\indices{^\mu} = (1,0,0,0)\),
and the background magnetic field along $z$-direction, \(n\indices{^\mu} = (0,0,0,1)\),
one can write \eqref{ref_gen_ninv} as
\begin{align}
S^*(p_{0},p_{\perp},-p_{z})&= \mathcal{P}_{-}\frac{\mathcal{A}(p_{0},|\vec{p}|)(p_{0}\gamma\indices{^0}+\vec{p}\cdot\vec{\gamma})
+\mathcal{B}_{-}(p_{0},p_{\perp},p_{z})\gamma_0-c^{\prime}(p_{0},|\vec{p}|)\gamma^3}{R^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{+} \nonumber \\
&\hspace{0.35cm} +\mathcal{P}_{+}\frac{\mathcal{A}(p_{0},|\vec{p}|)(p_{0}\gamma\indices{^0}+\vec{p}\cdot\vec{\gamma})
+\mathcal{B}_{+}(p_{0},p_{\perp},p_{z})\gamma_0+c^{\prime}(p_{0},|\vec{p}|)\gamma^3}{L^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{-} \nonumber \\
& \ne S^*(p_{0},p_{\perp},p_{z}) . \label{ref_rest_ninv}
\end{align}
In both cases the reflection symmetry is violated, as will also become evident later when we discuss the dispersion properties of a fermion.
\subsubsection{Parity}
\label{parity_trans}
Under parity a fermion propagator transforms~\cite{Weldon:1999th} as
\begin{equation}
S(p_{0},\vec{p}) \longrightarrow \gamma\indices{_0}\,S(p_{0},-\vec{p})\,\gamma\indices{_0} . \label{parity_def}
\end{equation}
The effective propagator, $S^*(p_{0},p_{\perp},p_{z})$, in \eqref{eff_prop1} under parity transforms as
\begin{align}
\gamma\indices{_0}\,S^*(p_{0},p_{\perp},-p_{z})\,\gamma\indices{_0} &= \gamma\indices{_0}\mathcal{P}_{-}
\frac{\slashed{L}(p_{0},p_{\perp},-p_{z})}{L^{2}(p_{0},p_{\perp},-p_{z})}\mathcal{P}_{+}\gamma\indices{_0}
+ \gamma\indices{_0}\mathcal{P}_{+}\frac{\slashed{R}(p_{0},p_{\perp},-p_{z})}{R^{2}(p_{0},p_{\perp},-p_{z})}\mathcal{P}_{-}\gamma\indices{_0} \nonumber \\
&= \mathcal{P}_{+}\gamma\indices{_0}\frac{\slashed{L}(p_{0},p_{\perp},-p_{z})}{R^{2}(p_{0},p_{\perp},p_{z})}\gamma\indices{_0}\mathcal{P}_{-} + \mathcal{P}_{-}\gamma\indices{_0}\frac{\slashed{R}(p_{0},p_{\perp},-p_{z})}{L^{2}(p_{0},p_{\perp},p_{z})}\gamma\indices{_0}\mathcal{P}_{+} \nonumber \\
&\ne S^*(p_{0},p_{\perp},p_{z}) \, , \label{parity_ninv}
\end{align}
which does not obey \eqref{parity_def}, indicating that the effective propagator in a general frame of reference is not parity invariant due to the background medium.
However, now considering the rest frame of the heat bath, \(u\indices{^\mu} = (1,0,0,0)\),
and the background magnetic field along $z$-direction, \(n\indices{^\mu} = (0,0,0,1)\),
one can write \eqref{parity_ninv} by using \eqref{l_mp}, \eqref{r_mp} and
\( \gamma\indices{_0}\,\gamma\indices{^i} = -\gamma\indices{^i}\,\gamma\indices{_0}\) as
\begin{align}
\gamma\indices{_0}\,S^*(p_{0},p_{\perp},-p_{z})\,\gamma\indices{_0}
&= \mathcal{P}_{+}\frac{\slashed{R}(p_{0},p_{\perp},p_{z})}{R^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{-}+\mathcal{P}_{-}\frac{\slashed{L}(p_{0},p_{\perp},p_{z})}{L^{2}(p_{0},p_{\perp},p_{z})}\mathcal{P}_{+} \nonumber \\
&= S^*(p_{0},p_{\perp},p_{z}),
\end{align}
which indicates that the propagator is parity invariant in the rest frame of the magnetised heat bath. We note that other discrete symmetries
can also be checked, but we leave them to the reader.
\subsection{Modified Dirac Equation}
\label{dirac_mod}
\subsubsection{For the General Case}
\label{hll_modes}
The modified Dirac equation satisfied by the spinor $U$ is given by
\begin{align}
\left(\mathcal{P}_{+}\,\slashed{L}\,\mathcal{P}_{-}+\mathcal{P}_{-}\,\slashed{R}\,\mathcal{P}_{+}\right)U &= 0 . \label{moddireqn}
\end{align}
Using the chiral basis
\begin{align}
\gamma_{0} = \begin{pmatrix}
0 && \mathbbm{1} \\
\mathbbm{1} && 0
\end{pmatrix},\hspace{1cm}\vec{\gamma} = \begin{pmatrix}
0 && \vec{\sigma} \\
-\vec{\sigma} && 0
\end{pmatrix},\hspace{1cm}\gamma_{5} = \begin{pmatrix}
-\mathbbm{1} && 0 \\
0 && \mathbbm{1}
\end{pmatrix}, \hspace{1cm} U = \begin{pmatrix}
\psi_{L} \\
\psi_{R}
\end{pmatrix}\, , \label{chi_basis}
\end{align}
one can write \eqref{moddireqn} as
\begin{align}
\begin{pmatrix}
0 && \sigma \cdot R \\
\bar{\sigma}\cdot L && 0
\end{pmatrix}\begin{pmatrix}
\psi_{L}\\
\psi_{R}
\end{pmatrix} = 0\,,
\end{align}
where $\psi_{L}$ and $\psi_{R}$ are two-component Dirac spinors, with \(\sigma\equiv (1,\vec{\sigma})\)
and \( {\bar \sigma} \equiv (1,-\vec{\sigma})\). One can obtain nontrivial solutions with the condition
\begin{align}
\mbox{det}\begin{pmatrix}
0 && \sigma \cdot R \\
\bar{\sigma}\cdot L && 0
\end{pmatrix} &= 0 \nonumber \\
\mbox{det}[L\cdot \bar{\sigma}]\,\mbox{det}[R\cdot {\sigma}] &= 0 \nonumber \\
L^{2}R^{2} &= 0 \, . \label{det0}
\end{align}
We note that for a given $p_0\ (=\omega)$, either $L^{2}=0$ or $R^{2}=0$, but not both simultaneously.
This implies that (i) when \(L^{2}=0\), \(\psi_{R}=0\); (ii) when \(R^{2}=0\), \(\psi_{L}=0\). These dispersion conditions are the same as those
obtained from the poles of the effective propagator in \eqref{eff_prop1}, as found in subsec.~\ref{eff_fer_prop}.
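The determinant condition can also be verified numerically. The sketch below assumes the $2\times 2$ representations $R\cdot\sigma = R_{0}\mathbbm{1}-\vec{R}\cdot\vec{\sigma}$ and $L\cdot\bar{\sigma} = L_{0}\mathbbm{1}+\vec{L}\cdot\vec{\sigma}$ (a convention choice, not spelled out in the text) and checks that the determinant of the block matrix equals $L^{2}R^{2}$ for arbitrary real $L$ and $R$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

L = rng.normal(size=4)   # (L0, Lx, Ly, Lz), arbitrary illustrative values
R = rng.normal(size=4)

# Assumed 2x2 representations: sigma.R = R0*1 - Rvec.sigma, sigmabar.L = L0*1 + Lvec.sigma
sigR = R[0] * np.eye(2) - sum(R[i + 1] * sig[i] for i in range(3))
sbarL = L[0] * np.eye(2) + sum(L[i + 1] * sig[i] for i in range(3))

# 4x4 block matrix of the chiral Dirac operator
M = np.block([[np.zeros((2, 2)), sigR], [sbarL, np.zeros((2, 2))]])

# Minkowski squares L^2 and R^2
L2 = L[0]**2 - np.sum(L[1:]**2)
R2 = R[0]**2 - np.sum(R[1:]**2)
assert abs(np.linalg.det(M) - L2 * R2) < 1e-10
```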
\begin{enumerate}
\item For \(R^{2}=0\) but \(L^{2}\neq 0\), the right chiral equation is given by
\begin{align}
\left ( R\cdot \sigma\right )\,\psi_{R}=0 . \label{chiral_rt}
\end{align}
Again \(R^{2}=0\) \, $\Rightarrow$ \, \( R_{0}=\pm |\vec{R}| = \pm \sqrt{R^{2}_{x}+R^{2}_{y}+R^{2}_{z}} \) and the
corresponding dispersive modes are denoted by $R^{(\pm)}$. So the solutions of \eqref{chiral_rt} are
\begin{subequations}
\begin{align}
{\mbox {(i)}} \, \, R_{0}&= |\vec{R}|; \hspace{0.7 cm} {\mbox{mode}} \, \, R^{(+)}; \hspace{0.7cm}
U_{R^{(+)}} = \sqrt{\frac{|\vec{R}|+R_{z}}{2|\vec{R}|}}\begin{pmatrix}
0\\
0\\
1 \\
\frac{R_{x}+iR_{y}}{|\vec{R}|+R_{z}}
\end{pmatrix}\,
= \begin{pmatrix}
0 \\
\, \, \, \, \, \psi_R^{(+)}
\end{pmatrix} \, ,
\label{r0+}\\
{\mbox {(ii)}} \, \, R_{0}&= -|\vec{R}|; \hspace{0.4 cm} {\mbox{mode}} \, \, R^{(-)}; \hspace{0.4cm}
U_{R^{(-)}} = -\sqrt{\frac{|\vec{R}|+R_{z}}{2|\vec{R}|}}\begin{pmatrix}
0\\
0\\
\frac{R_{x}-iR_{y}}{|\vec{R}|+R_{z}} \\
-1
\end{pmatrix} \,
= \begin{pmatrix}
0 \\
\,\, \, \, \, \psi_R^{(-)}
\end{pmatrix} \,
. \label{r0-}
\end{align}
\end{subequations}
\item For \(L^{2}=0\) but \(R^{2}\neq 0\), the left chiral equation is given by
\begin{align}
(L \cdot \bar{\sigma}) \,\psi_{L}=0 , \label{chiral_lt}
\end{align}
where \(L^{2}=0\) implies the two conditions
\( L_{0}=\pm |\vec{L}| = \pm \sqrt{L^{2}_{x}+L^{2}_{y}+L^{2}_{z}}\), and the
corresponding dispersive modes are denoted by $L^{(\pm)}$. The two solutions of \eqref{chiral_lt} are obtained as
\begin{subequations}
\begin{align}
{\mbox {(i)}} \,\, L_{0}= |\vec{L}|; \hspace{0.7 cm} {\mbox{mode}} \, \, L^{(+)}; \hspace{0.7cm}
U_{L^{(+)}} = -\sqrt{\frac{|\vec{L}|+L_{z}}{2|\vec{L}|}}\begin{pmatrix}
\frac{L_{x}-iL_{y}}{|\vec{L}|+L_{z}} \\
-1 \\
0\\
0
\end{pmatrix} \,
= \begin{pmatrix}
\,\, \, \, \, \psi_L^{(+)} \\
0
\end{pmatrix} \, ,
\label{l0+}\\
{\mbox {(ii)}} \,\, L_{0}= -|\vec{L}|;\hspace{0.7 cm} {\mbox{mode}} \, \, L^{(-)}; \hspace{0.7cm}
U_{L^{(-)}} = \sqrt{\frac{|\vec{L}|+L_{z}}{2|\vec{L}|}}\begin{pmatrix}
1 \\
\frac{L_{x}+iL_{y}}{|\vec{L}|+L_{z}} \\
0\\
0
\end{pmatrix}\,
= \begin{pmatrix}
\,\, \, \, \, \psi_L^{(-)} \\
0
\end{pmatrix} \, . \label{l0-}
\end{align}
\end{subequations}
\end{enumerate}
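One can verify directly that the spinor $U_{R^{(+)}}$ in \eqref{r0+} solves \eqref{chiral_rt} and is unit normalised. A small numerical sketch, again assuming the representation $R\cdot\sigma = R_{0}\mathbbm{1}-\vec{R}\cdot\vec{\sigma}$ and an arbitrary illustrative $\vec{R}$:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

Rvec = np.array([0.3, -0.4, 1.2])        # arbitrary illustrative spatial part
Rmag = np.linalg.norm(Rvec)
R0 = Rmag                                 # dispersive branch R^(+): R0 = |R|

# Assumed convention: R.sigma = R0*1 - Rvec.sigma
RdotSigma = R0 * np.eye(2) - sum(Rvec[i] * sig[i] for i in range(3))

# Two-component spinor psi_R^(+) from eq. (r0+)
psiR = np.sqrt((Rmag + Rvec[2]) / (2 * Rmag)) * np.array(
    [1.0, (Rvec[0] + 1j * Rvec[1]) / (Rmag + Rvec[2])])

assert np.allclose(RdotSigma @ psiR, 0)              # solves (R.sigma) psi_R = 0
assert np.isclose(np.vdot(psiR, psiR).real, 1.0)     # unit normalisation
```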
We note here that $\psi^{(\pm)}_L$ and $\psi^{(\pm)}_R$ are only chiral eigenstates, and neither spin nor helicity eigenstates.
\subsubsection{For lowest Landau level (LLL)}
\label{lll_modes}
\begin{enumerate}
\item The condition $R_{LLL}^2=0$ in \eqref{defRsquare_r0} indicates that $R_0=\pm R_z$ and $R_x=R_y=0$. The two solutions, obtained respectively
in \eqref{ll_sp3} and \eqref{ll_sp4} in Appendix~\ref{lll_app}, are given as
\begin{subequations}
\begin{align}
{\mbox {(i)}} \, \, R_{0}&= R_z; \hspace{0.7 cm} {\mbox{mode}} \, \, R^{(+)}; \hspace{0.7cm}
U_{R^{(+)}} = \begin{pmatrix}
0\\
0\\
1 \\
0
\end{pmatrix}
=\begin{pmatrix}
0\\
\chi_+
\end{pmatrix} \, , \label{llr0+}\\
{\mbox {(ii)}} \, \, R_{0}&= -R_z; \hspace{0.7 cm} {\mbox{mode}} \, \, R^{(-)}; \hspace{0.7cm}
U_{R^{(-)}} = \begin{pmatrix}
0\\
0\\
0 \\
1
\end{pmatrix} \,
=\begin{pmatrix}
0\\
\chi_-
\end{pmatrix} \ , \label{llr0-}
\end{align}
\end{subequations}
where \(\displaystyle \chi_{+} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\) and \(\displaystyle \chi_{-} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\).
\item For the LLL, the condition $L_{LLL}^2=0$ in \eqref{defLsquare_l0} indicates that $L_0=\pm L_z$ and $L_x=L_y=0$. The two solutions, obtained respectively
in \eqref{ll_sp5} and \eqref{ll_sp6} in Appendix~\ref{lll_app}, are given as
\begin{subequations}
\begin{align}
{\mbox {(i)}} \, \, L_{0}&= L_z; \hspace{0.7 cm} {\mbox{mode}} \, \, L^{(+)}; \hspace{0.7cm}
U_{L^{(+)}} = \begin{pmatrix}
0 \\
1 \\
0\\
0
\end{pmatrix}
=\begin{pmatrix}
\chi_-\\
0
\end{pmatrix}
\, , \label{lll0+}\\
{\mbox {(ii)}} \, \, L_{0}&= -L_z; \hspace{0.7 cm} {\mbox{mode}} \, \, L^{(-)}; \hspace{0.7cm}
U_{L^{(-)}} = \begin{pmatrix}
1 \\
0 \\
0\\
0
\end{pmatrix}\,
=\begin{pmatrix}
\chi_+\\
0
\end{pmatrix} \,
. \label{lll0-}
\end{align}
\end{subequations}
\end{enumerate}
The spin operator along the $z$ direction is given by
\begin{equation}
\Sigma^{3} = \sigma\indices{^1^2}=\frac{i}{2}\left [\gamma\indices{^1},\gamma\indices{^2}\right ]=i\,\gamma\indices{^1}\gamma\indices{^2}
=\begin{pmatrix} \sigma\indices{^3} && 0 \\ 0 && \sigma\indices{^3}\end{pmatrix},
\end{equation}
where $\sigma$ with a single index denotes a Pauli spin matrix, whereas with double indices it denotes a
generator of the Lorentz group in the spinor representation.
Now,
\begin{align}
\Sigma\indices{^3}\,\,{U}_{R^{(\pm)}} &= \begin{pmatrix} \sigma\indices{^3} && 0 \\ 0 && \sigma\indices{^3} \end{pmatrix}\,
\begin{pmatrix} 0 \\ \chi_{\pm} \end{pmatrix}=\begin{pmatrix} 0 \\ \sigma\indices{^3}\,\chi_{\pm}\end{pmatrix} =
\pm\,\begin{pmatrix} 0 \\ \chi_{\pm} \end{pmatrix} = \pm\, {U}_{R^{(\pm)}} , \label{lll_spin_sol} \\
\Sigma\indices{^3}\,\,{U}_{L^{(\pm)}} &= \begin{pmatrix} \sigma\indices{^3} && 0 \\ 0 && \sigma\indices{^3} \end{pmatrix}\,
\begin{pmatrix} \chi_{\mp} \\ 0 \end{pmatrix} = \begin{pmatrix} \sigma\indices{^3}\,\chi_{\mp} \\ 0 \end{pmatrix} =
\mp \begin{pmatrix} \chi_{\mp} \\ 0 \end{pmatrix} = \mp\,{U}_{L^{(\pm)}} . \label{llr_spin_sol}
\end{align}
So, the modes $L^{(-)}$ and $R^{(+)}$ have spins along the direction of magnetic field whereas $L^{(+)}$ and $R^{(-)}$ have spins opposite to the direction of magnetic field.
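The spin assignments just derived can be checked by building $\Sigma^{3}=i\gamma^{1}\gamma^{2}$ explicitly in the chiral basis of \eqref{chi_basis}; a minimal numerical sketch:

```python
import numpy as np

z2 = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma^1, gamma^2 in the chiral basis of eq. (chi_basis)
g1 = np.block([[z2, s1], [-s1, z2]])
g2 = np.block([[z2, s2], [-s2, z2]])

Sigma3 = 1j * g1 @ g2
# Sigma^3 is block-diagonal: diag(sigma^3, sigma^3)
assert np.allclose(Sigma3, np.block([[s3, z2], [z2, s3]]))

U_Rplus = np.array([0, 0, 1, 0], dtype=complex)   # LLL mode R^(+)
U_Lplus = np.array([0, 1, 0, 0], dtype=complex)   # LLL mode L^(+)
assert np.allclose(Sigma3 @ U_Rplus, +1 * U_Rplus)   # spin along B
assert np.allclose(Sigma3 @ U_Lplus, -1 * U_Lplus)   # spin opposite to B
```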
Now we discuss the helicity eigenstates of the various modes in LLL.
The helicity operator is defined as
\begin{align}
\mathcal{H}_{\vec{p}} = \mathbf{\hat{p}}\cdot\vec{\Sigma} \, .
\end{align}
When a particle moves along $+z$ direction, $\mathbf{\hat{p}}=\mathbf{\hat{z}}$ and when it
moves along $-z$ direction, $\mathbf{\hat{p}}=-\mathbf{\hat{z}}$. \\
Thus
\begin{align}
\mathcal{H}_{\vec{p}} = \begin{cases}\,\,\,\, \Sigma\indices{^3},\qquad\text{for}\qquad p_{z}>0 ,
\\ -\Sigma\indices{^3},\qquad\text{for}\qquad p_{z}<0 .\end{cases}
\end{align}
Hence,
\begin{align}
\mathcal{H}_{\vec{p}}\,\,{\displaystyle U}_{R^{(\pm)}}= \begin{cases} \pm {\displaystyle U}_{R^{(\pm)}}, \qquad\text{for}\qquad p_{z}>0 ,\\
\mp {\displaystyle U}_{R^{(\pm)}}, \qquad\text{for}\qquad p_{z}<0 . \end{cases}
\end{align}
and
\begin{align}
\mathcal{H}_{\vec{p}}\,\,{\displaystyle U}_{L^{(\pm)}}= \begin{cases} \mp {\displaystyle U}_{L^{(\pm)}}, \qquad\text{for}\qquad p_{z}>0 \, , \\
\pm {\displaystyle U}_{L^{(\pm)}}, \qquad\text{for}\qquad p_{z}<0 \, . \end{cases}
\end{align}
\subsection{Dispersion}
\label{disp_rep}
\begin{figure}[H]
\centering
\includegraphics[width=0.43\textwidth]{fig2a.pdf}
\includegraphics[width=0.43\textwidth]{fig2b.pdf}
\caption{\small Dispersion plots for higher Landau levels, $l\ne 0$. The energy $\omega$
is scaled with the thermal mass $m_{th}$ for convenience.}
\label{fig:HLLfig}
\end{figure}
In the presence of a magnetic field, the component of momentum transverse to the field is Landau quantised and takes discrete values given by \( \displaystyle p^{2}_{\perp} = 2 l |q_{f} B|\), where \( l \) denotes a given Landau level. In the presence of a pure background magnetic
field and no heat bath ($T=0$), the Dirac equation gives rise to the dispersion relation
\begin{align}
E^{2}=p^{2}_{z} + m_{f}^{2} + (2\,\nu + 1)\,q_{f}\,|Q|B-q_{f}\,Q\,B\,\sigma \, , \label{disp_B}
\end{align}
where $\nu = 0,1,2,\cdots$, $Q=\pm 1$, $\sigma=+1$ for spin up and $\sigma = -1$ for spin down.
The solutions are classified by energy eigenvalues
\begin{align}
E_{l}^{2} = p^{2}_{z} + m_{f}^{2} + 2\,l \,q_{f}\,B \, , \label{disp_pure_m}
\end{align}
where one can define
\begin{equation}
2\,l = (2\,\nu + 1)\,|Q| - Q\, \sigma \, .
\end{equation}
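The relation $2l = (2\nu+1)|Q| - Q\sigma$ makes the lowest-Landau-level spin selection transparent; a small sketch enumerating which spins reach $l=0$ for each sign of the charge:

```python
def landau_level(nu, Q, sigma):
    # 2l = (2*nu + 1)|Q| - Q*sigma (from the text); returns l
    # the numerator is always even for |Q| = 1, so // is exact
    return ((2 * nu + 1) * abs(Q) - Q * sigma) // 2

# Lowest Landau level (l = 0) with nu = 0:
assert landau_level(0, +1, +1) == 0   # Q=+1: spin up reaches the LLL
assert landau_level(0, +1, -1) == 1   # Q=+1: spin down pushed to l=1
assert landau_level(0, -1, -1) == 0   # Q=-1: spin down reaches the LLL
assert landau_level(0, -1, +1) == 1   # Q=-1: spin up pushed to l=1
```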
Now we discuss the dispersion properties of a fermion in a hot magnetised medium. For the general case (higher Landau levels, $l\ne 0$) the dispersion
curves are obtained by numerically solving $L^2=0$ and $R^2=0$, given in \eqref{l2} and \eqref{r2}.
We note that the roots of $L_0=\pm |\vec L | \, \Rightarrow \, L_0 \mp |\vec L|=0 $ are represented by $L^{(\pm)}$ with energy
$\omega_{L^{(\pm)}}$, whereas those of
$R_0=\pm |\vec R | \, \Rightarrow \, R_0 \mp |\vec R |=0 $ are represented by $R^{(\pm)}$ with energy
$\omega_{R^{(\pm)}}$. The corresponding eigenstates are obtained in \eqref{l0+}, \eqref{l0-},
\eqref{r0+} and \eqref{r0-} in subsection~\ref{hll_modes}. We have chosen $T=0.2$ GeV, $\alpha_s=0.3$ and $q_fB=0.5 m_\pi^2$, where
$m_\pi$ is the pion mass. In Fig.~\ref{fig:HLLfig} the dispersion curves for higher Landau levels are shown, where all four modes can
propagate for a given choice of $Q$. This is because the corresponding states for these modes are neither spin nor helicity eigenstates,
as shown in subsec.~\ref{hll_modes}. We also note that there are negative energy modes, which are not displayed here but will be discussed
in the analysis of the spectral representation of the effective propagator in section~\ref{spec_rep}.
\begin{figure}
\includegraphics[width=0.43\textwidth]{fig3a.pdf}
\includegraphics[width=0.43\textwidth]{fig3b.pdf}
\caption{\small Dispersion plots for LLL, \, $l=0$. The energy $\omega$ is scaled with the thermal mass $m_{th}$ for convenience. For details see the text.}
\label{fig:lll_disp}
\end{figure}
At the LLL, $l=0$ implies $p_{\perp}=0$, and the roots of $ R_0=\pm R_z $ give rise to two right-handed modes $R^{(\pm)}$ with energy
$\omega_{R^{(\pm)}}$, whereas those of $L_0=\pm L_z $ produce~\footnote{We make a general note here for the left-handed modes at the LLL.
At small $p_z$, $L_z$ itself is negative for the LLL and becomes positive beyond a moderate value of $p_z$. This makes the left-handed modes
$L^{(+)}$ and $L^{(-)}$ flip in the LLL relative to those in higher Landau levels. For details see Appendix~\ref{eff_lll_mass}.} two left-handed
modes $L^{(\pm)}$ with energy
$\omega_{L^{(\pm)}}$.
In Appendix~\ref{eff_lll_mass} the analytic solutions for the dispersion relations in LLL are presented which show four different modes and
the corresponding eigenstates are obtained in subsec.~\ref{lll_modes}.
Now at LLL we discuss two possibilities below:
\begin{enumerate}
\item[(i)] For a positively charged fermion, $Q=1$, $\sigma=1$ implies $\nu = 0$ and $\sigma=-1$ implies $\nu=-1$. Since $\nu$ can never be negative,
the modes with $Q=1$ and $\sigma=-1$ (spin down) cannot propagate in the LLL. The right-handed mode $R^{(+)}$ and the
left-handed mode $L^{(-)}$, which have spin up as shown
in subsec.~\ref{lll_modes}, will propagate in the LLL for $p_z>0$. The $R^{(+)}$ mode, with helicity-to-chirality ratio $+1$, is a quasiparticle,
whereas the left-handed mode $L^{(-)}$, with ratio $-1$, is known as a plasmino (hole). However, for $p_z<0$, the right-handed mode flips to a plasmino (hole)
as its chirality-to-helicity ratio becomes $-1$, whereas the left-handed mode becomes a quasiparticle as its chirality-to-helicity ratio becomes $+1$.
The dispersion behaviour of the two modes is shown in the left panel of Fig.~\ref{fig:lll_disp}; the curves begin at the mass
$\left. m_{LLL}^{*-}\right |_{p_z=0}$ as given in \eqref{mp}.
\item[(ii)] For a negatively charged fermion, $Q=-1$, $\sigma=1$ implies $\nu = -1$ and $\sigma=-1$ implies $\nu=0$. Thus, the
modes with $Q=-1$ and $\sigma=+1$ (spin up) cannot propagate in the LLL. However, the modes $L^{(+)}$ and $R^{(-)}$, which have spin down as found
in subsec.~\ref{lll_modes}, will propagate in the LLL. Their dispersion curves
are shown in the right panel of Fig.~\ref{fig:lll_disp} and begin at the mass $m_{LLL}^{*+}$ as given in \eqref{mp}.
For $p_z>0$ the mode $L^{(+)}$ has helicity-to-chirality ratio $+1$ whereas $R^{(-)}$ has $-1$, and vice versa for $p_z<0$.
\end{enumerate}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig4.pdf}
\caption{\small The dispersion plots corresponding to HTL propagator in absence of magnetic field, \textit{i.e.}, $B=0$.}
\label{fig:htl_disp}
\end{center}
\end{figure}
In the absence of the background magnetic field ($B=0$), the left-handed mode $L^{(+)}$ and the right-handed mode $R^{(+)}$
merge, as do the left-handed mode $L^{(-)}$ and the right-handed mode $R^{(-)}$.
This leads to degenerate (chirally symmetric) modes whose dispersion curves start at $m_{th}$, and one recovers the
usual HTL result~\cite{braatendilepton} with quasiparticle and plasmino modes in the presence of a heat bath, as shown in Fig.~\ref{fig:htl_disp}.
As evident from the dispersion plots (Figs.~\ref{fig:HLLfig} and \ref{fig:lll_disp}),
the left- and right-handed modes are also degenerate at $p_z=0$ in the presence of a magnetic field, but at non-zero $|p_z|$ the left-
and right-handed modes separate from each other, causing a chiral asymmetry without disturbing the chiral invariance
(subsec.~\ref{chirality_trans}) of the system.
Also, in subsec.~\ref{ref_sym} it was shown that the fermion propagator does not obey reflection symmetry in the presence of the
medium, which is now clearly evident from all the dispersion plots displayed above.
\section{Three Point Function}
\label{vert_func}
The $(N+1)$-point functions are related to the $N$-point functions through Ward-Takahashi (WT) identity.
The 3-point function is related to the 2-point function as
\begin{eqnarray}
Q_\mu \Gamma^\mu(P,K;Q) &=& S^{-1}(P) -S^{-1} (K) = \slashed{P} -\slashed{K} - \Sigma(P) + \Sigma(K) \nonumber \\
&=& \underbrace{(\slashed{P}-\slashed{K})}_{\mbox{Free}} -\underbrace{\left (\Sigma^{B=0}(P,T) -\Sigma^{B=0}(K,T)\right )}_{\mbox{Thermal or HTL correction}}
- \, \underbrace{\left (\Sigma^{B\ne 0}(P,T) -\Sigma^{B\ne 0}(K,T)\right )}_{\mbox{Thermo-magnetic correction}}\nonumber \\
&=& \slashed{Q} + a(p_0,|\vec p|) \slashed{P} + b(p_0,|\vec p|) \slashed{u}
- \, a(k_0,|\vec k|) \slashed{K} - b(k_0,|\vec k |) \slashed{u} + b'(p_0, p_\perp,p_z) \gamma_5 \slashed{u} \nonumber \\
&& + c'(p_0, p_\perp,p_z) \gamma_5 \slashed{n}
- b'(k_0, k_\perp,k_z) \gamma_5 \slashed{u} - c'(k_0, k_\perp,k_z) \gamma_5 \slashed{n}
\, , \label{wi_3pt}
\end{eqnarray}
where $Q=P-K$. We note that recently the general form of the thermo-magnetic corrections for the 3-point~\cite{ayalafermionself,Haque:2017nxq} and 4-point~\cite{Haque:2017nxq} functions has
been given in terms of the involved angular integrals, which satisfy WT identities. Nevertheless, to validate the general structure of the self-energy in
\eqref{genstructselfenergy} vis-\`a-vis the inverse propagator in \eqref{inv_prop}, we obtain below the temporal component of the
3-point function at $\vec q=0$, \textit{i.e.}, $\vec p= \vec k$ and $p=k$.
Using \eqref{at}, \eqref{bt}, \eqref{bprime} and \eqref{cprime}, we can obtain
\begin{eqnarray}
\Gamma^0(P,K;Q)\big |_{{\vec q} =0} &=& \gamma_0 \,
\underbrace{- \, \frac{m^2_{th}}{pq_0} \, \delta Q_0
\, \gamma^0 + \frac{m^2_{th}}{pq_0} \, \delta Q_1 \, (\hat p \cdot \vec \gamma) }_{\mbox{Thermal or HTL correction}} \nonumber \\
&& \underbrace{ - \, \frac{{M'}^2}{pq_0} \, \left [ \delta Q_0 \, \gamma_5 \,
+ \, \frac{p_z}{p} \, \delta Q_1 \, \left (i\gamma^1\gamma^2 \right ) \right ] \gamma^3 }_{\mbox{Thermo-magnetic correction}}\ \nonumber \\
&=& \gamma^0 \, +\delta \Gamma^0_{\tiny \mbox{HTL}} (P,K;Q) \, + \, \delta \Gamma^0_{\tiny \mbox{TM}} (P,K;Q) \,
\, , \label{wi_3pt_g0}
\end{eqnarray}
with
\begin{eqnarray}
\gamma_5\gamma^0&=& -i \gamma^1\gamma^2\gamma^3 \, , \nonumber \\
{M'}^2 &=& 4 \, C_F \, g^2 \, M^2(T,m,q_fB) \, , \nonumber \\
\delta Q_j &=& Q_j\left ( \frac{p_0}{p} \right ) - Q_j\left ( \frac{k_0}{p} \right ) \, ,
\end{eqnarray}
where $Q_j$ are the Legendre functions of the second kind given in \eqref{Q0} and \eqref{Q1}. It is important to note that
the thermo-magnetic (TM) correction $\delta \Gamma^0_{\tiny \mbox{TM}} $ matches exactly with that from the direct calculation in \eqref{tm_g_0}
in Appendix~\ref{vert_direct}.
The result also agrees with the HTL 3-point function~\cite{ayalafermionself,Haque:2017nxq} in the absence of a background magnetic field, obtained by setting $B=0\, \Rightarrow M'=0$, as
\begin{eqnarray}
\Gamma^0_{\tiny \mbox{HTL}} (P,K;Q)\big |_{{\vec q} =0} &=& \left [ 1\,- \, \frac{m^2_{th}}{pq_0} \, \delta Q_0
\right ]\, \gamma^0 \, + \frac{m^2_{th}}{pq_0} \, \delta Q_1 \, (\hat p \cdot \vec \gamma) \, \nonumber \\
&=& \gamma^0 +\delta \Gamma^0_{\tiny \mbox{HTL}} (P,K;Q) \, ,
\, \label{wi_3pt_htl}
\end{eqnarray}
where all components, \textit{i.e.}, $(0,1,2,3)$, are relevant for a pure thermal background.
Now, in the absence of a heat bath, setting $T=0\, \Rightarrow$ $m_{th}=0$ and ${M'}^2=4 \, C_F \, g^2 \, M^2(T=0,m,q_fB)$, the temporal 3-point function
in \eqref{wi_3pt_g0} reduces to
\begin{eqnarray}
\Gamma^0_B(P,K;Q)\big |_{{\vec q} =0} &=&
\gamma^0 \, \underbrace { - \frac{{M'}^2}{pq_0} \left [ \delta Q_0 \, \gamma_5 + \, \frac{p_z}{p} \, \delta Q_1 \, (i\gamma^1\gamma^2) \right ] \,
\gamma^3 }_{\mbox{Pure magnetic correction}} \, \\
&=& \gamma^0 +\delta \Gamma^0_{\tiny \mbox{M}} (P,K;Q) \, . \label{wi_3pt_mag_g0}
\end{eqnarray}
We note that this is the three-point function in a pure background magnetic field with no heat bath. The gauge boson is oriented along the field direction
and there is no polarisation in the transverse direction. Thus, only the longitudinal components (\textit{i.e.}, the $(0,3)$-components) of the 3-point
function are
relevant for a pure background magnetic field, in contrast to \eqref{wi_3pt_htl} for a pure thermal background.
\section{Spectral Representation of the Effective Propagator}
\label{spec_rep}
In this section we obtain the spectral representation of the effective propagator in a hot magnetised medium. This quantity
is of immense interest for studying various spectral properties of the hot magnetised medium, in particular the hot magnetised
QCD medium: real and virtual photon production, damping rates, various transport coefficients, etc.
\subsection{General Case}
\label{spec_rep_gen}
The effective propagator as obtained in \eqref{eff_prop1} is given by
\begin{eqnarray}
S^*=\mathcal{P}_-\frac{\slashed{L}}{L^2}\mathcal{P}_+ + \mathcal{P}_+\frac{\slashed{R}}{R^2}\mathcal{P}_- \, ,
\end{eqnarray}
where $\slashed{L}$ and $\slashed{R}$ can be written in the rest frame of the heat bath and the magnetic field in the $z$-direction
following \eqref{l_mu} and \eqref{r_mu}, respectively, as
\begin{eqnarray}
\slashed{L} &=& \left[(1+a(p_0,p))p_0+b(p_0,p)+b'(p_0,p_\perp,p_z)\right]\gamma^0 - \left[(1+a(p_0,p))p_z+c'(p_0,p_\perp,p_z)\right]\gamma^3\nonumber\\
&& - (1+a(p_0,p))(\gamma \cdot p)_\perp \nonumber \\
&=& \left[(1+a(p_0,p))p_0+b(p_0,p)+b'(p_0,p_\perp,p_z)\right]\gamma^0 - \left[p(1+a(p_0,p))\right](\gamma \cdot \hat{p})- c'(p_0,p_\perp,p_z)\gamma^3\nonumber \\
&=& g_L^1(p_0,p_\perp,p_z)\gamma^0 - g_L^2(p_0,p_\perp,p_z)(\gamma \cdot \hat{p}) - g_L^3(p_0,p_\perp,p_z)\gamma^3, \label{l_rest} \\
\slashed{R} &=& \left[(1+a(p_0,p))p_0+b(p_0,p)-b'(p_0,p_\perp,p_z)\right]\gamma^0 - \left[(1+a(p_0,p))p_z-c'(p_0,p_\perp,p_z)\right]\gamma^3 \nonumber\\
&& - \, (1+a(p_0,p))(\gamma \cdot p)_\perp \nonumber \\
&=& \left[(1+a(p_0,p))p_0+b(p_0,p)-b'(p_0,p_\perp,p_z)\right]\gamma^0 - \left[p(1+a(p_0,p))\right](\gamma \cdot \hat{p})+ c'(p_0,p_\perp,p_z)\gamma^3 \nonumber \\
&=& g_R^1(p_0,p_\perp,p_z)\gamma^0 - g_R^2(p_0,p_\perp,p_z)(\gamma \cdot \hat{p}) + g_R^3(p_0,p_\perp,p_z)\gamma^3 \, , \label{r_rest}
\end{eqnarray}
where ${\hat p} ={\mathbf p}/|{\mathbf p}|$, $p=|{\mathbf p}|$ and, $p_z$ and $p_\perp$ are given, respectively, in \eqref{p3} and \eqref{pperp}.
We also note that although $g_L^2=g_R^2$ and $g_L^3=g_R^3$, they are treated separately so that we may use, for convenience,
the uniform notation $g_L^i$ and $g_R^i$. One can decompose the effective propagator into six parts by separating
out the $\gamma$ matrices as
\begin{eqnarray}
S^* &=& \mathcal{P}_-\gamma^0\mathcal{P}_+~\frac{g_L^1(p_0,p_\perp,p_z)}{L^2}
- \mathcal{P}_-(\gamma \cdot \hat{p})\mathcal{P}_+~\frac{g_L^2(p_0,p_\perp,p_z)}{L^2}
- \mathcal{P}_-\gamma^3\mathcal{P}_+~\frac{g_L^3(p_0,p_\perp,p_z)}{L^2} \nonumber\\
&+& \mathcal{P}_+\gamma^0\mathcal{P}_-~\frac{g_R^1(p_0,p_\perp,p_z)}{R^2}
- \mathcal{P}_+(\gamma \cdot \hat{p})\mathcal{P}_-~\frac{g_R^2(p_0,p_\perp,p_z)}{R^2}
+ \mathcal{P}_+\gamma^3\mathcal{P}_-~\frac{g_R^3(p_0,p_\perp,p_z)}{R^2}.
\label{prop_pre_spec}
\end{eqnarray}
In subsection~\ref{disp_rep} we have discussed that $L^2=0$ yields four poles,
leading to four modes with both positive and negative energy as $\pm\omega_{L^{(+)}}(p_\perp,p_z)$ and $\pm\omega_{L^{(-)}}(p_\perp,p_z)$.
Similarly, $R^2=0$ also yields four poles, namely $\pm\omega_{R^{(+)}}(p_\perp,p_z)$ and $\pm\omega_{R^{(-)}}(p_\perp,p_z)$.
With this information one can obtain the spectral representation~\cite{Bellac:2011kqa,Karsch:2000gi,Chakraborty:2001kx,braatendilepton} of the effective propagator in \eqref{prop_pre_spec} as
\begin{eqnarray}
\rho &=& \left(\mathcal{P}_-\gamma^0\mathcal{P}_+\right)~\rho_L^1
- \left( \mathcal{P}_-(\gamma \cdot \hat{p})\mathcal{P}_+\right)~\rho_L^2
- \left( \mathcal{P}_-\gamma^3\mathcal{P}_+\right)~\rho_L^3 \nonumber\\
&& + \left( \mathcal{P}_+\gamma^0\mathcal{P}_-\right)~\rho_R^1
- \left(\mathcal{P}_+(\gamma \cdot \hat{p})\mathcal{P}_-\right)~\rho_R^2
+ \left( \mathcal{P}_+\gamma^3\mathcal{P}_-\right)~\rho_R^3 \, ,
\label{prop_spec}
\end{eqnarray}
where the spectral function corresponding to each term can be written as
\begin{eqnarray}
\rho_L^i &=& \frac{1}{\pi} ~\mathrm{Im}\left(\frac{g_L^i}{L^2}\right)\nonumber\\
&=& Z_{L^{(+)}}^{i+}(p_\perp,p_z)\delta(p_0-\omega_{L^{(+)}}(p_\perp,p_z))+Z_{L^{(+)}}^{i-}(p_\perp,p_z)\delta(p_0+\omega_{L^{(+)}}(p_\perp,p_z))\nonumber\\
&&+Z_{L^{(-)}}^{i+}(p_\perp,p_z)\delta(p_0-\omega_{L^{(-)}}(p_\perp,p_z))+Z_{L^{(-)}}^{i-}(p_\perp,p_z)\delta(p_0+\omega_{L^{(-)}}(p_\perp,p_z))+\beta_L^i
\, ,\label{spec_li}\\
\rho_R^i &=& \frac{1}{\pi} ~\mathrm{Im}\left(\frac{g_R^i}{R^2}\right)\nonumber\\
&=& Z_{R^{(+)}}^{i+}(p_\perp,p_z)\delta(p_0-\omega_{R^{(+)}}(p_\perp,p_z))+Z_{R^{(+)}}^{i-}(p_\perp,p_z)\delta(p_0+\omega_{R^{(+)}}(p_\perp,p_z))\nonumber\\
&&+Z_{R^{(-)}}^{i+}(p_\perp,p_z)\delta(p_0-\omega_{R^{(-)}}(p_\perp,p_z))+Z_{R^{(-)}}^{i-}(p_\perp,p_z)\delta(p_0+\omega_{R^{(-)}}(p_\perp,p_z))+\beta_R^i\, ,
\label{spec_residue}
\end{eqnarray}
where $i=1,2,3$. We note that the delta functions are associated with the pole parts originating from the time-like domain ($p_0^2 > p^2$),
whereas the cut parts $\beta^i_{L(R)}$ are associated with Landau damping, which arises from the space-like domain, $p_0^2 < p^2$,
of the propagator. The residues $Z^i_{L(R)}$ are determined at the various poles as
\begin{eqnarray}
Z_{L(R)}^{i \ {\mbox{sgn of pole }}}(p_\perp,p_z) = g_{L(R)}^i(p_0,p) \Bigg| \frac{\partial L^2(R^2)}{\partial p_0} \Bigg|^{-1}_{p_0=\mbox{ pole}} \, . \label{residue}
\end{eqnarray}
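The delta-function (pole) parts of these spectral functions follow from the standard simple-pole reduction: near a simple zero of $L^2$ at $p_0=\omega$ one may expand
\begin{eqnarray}
\frac{g_L^i}{L^2}\ \simeq\ \frac{g_L^i(\omega,p)}{(p_0-\omega)\ \partial L^2/\partial p_0\big|_{p_0=\omega}}\, ,\qquad
\frac{1}{\pi}\,\mathrm{Im}\,\frac{1}{p_0-\omega+i\epsilon} = -\,\delta(p_0-\omega)\, , \nonumber
\end{eqnarray}
so that, up to signs fixed by the slope of $L^2$ at the pole and by the $i\epsilon$ prescription, each pole contributes a delta function weighted by the residue in \eqref{residue}.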
As a demonstration, we present the analytical expressions of the three residues corresponding to the pole $p_0=+\omega_{L^{(+)}}$ as
\begin{eqnarray}
Z_{L^{(+)}}^{1+} &=& \frac{p\left(p^2-\omega^2_{L^{(+)}}\right)\left[p^2\left(m_{th}^2\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)-2p\omega_{L^{(+)}}\right)+M'^2p_z(2p-\omega_{L^{(+)}})
\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)\right]}{m_{th}^2\left[8p^4\left(\omega_{L^{(+)}}+M'^2/m_{th}^2~p_z\right)+\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)
X \right]}\, , \nonumber
\end{eqnarray}
\begin{eqnarray}
Z_{L^{(+)}}^{2+} &=& \frac{p^2\left(p^2-\omega^2_{L^{(+)}}\right)\left[2p\left(m_{th}^2+p^2\right)-m_{th}^2\omega_{L^{(+)}}\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)\right]}{m_{th}^2\left[8p^4\left(\omega_{L^{(+)}}+M'^2/m_{th}^2~p_z\right)+\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right) X
\right]} \, ,\nonumber\\
Z_{L^{(+)}}^{3+} &=& \frac{-M'^2p^5\left(p^2-\omega^2_{L^{(+)}}\right)\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)}{m_{th}^4\left[8p^4\left(\omega_{L^{(+)}}+M'^2/m_{th}^2~p_z\right)+\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)X \right]} \, , \nonumber
\end{eqnarray}
where $X= 2p^3(M'^2-2m_{th}^2)+2M'^2pp_z^2+M'^2\omega_{L^{(+)}}p_\perp^2\log\left(\frac{\omega_{L^{(+)}}+p}
{\omega_{L^{(+)}}-p}\right)$. The residues at the other poles of $L^2=0$ can be found by replacing $\omega_{L^{(+)}}$ with the corresponding pole in the above expressions.
The residues for the $R$ parts can be obtained in a similar manner to the $L$ parts; we do not display them here.
Below in Fig.~\ref{residue_ll} we present the residues corresponding to the first Landau level, where all the terms are present.
We take the magnetic field to be $m_\pi^2/2$ and the temperature to be $200$ MeV.
\begin{figure}[H]
\begin{center}
\hspace*{-1.0cm}
\includegraphics[scale=0.5]{fig5a.pdf}\hspace*{-1.0cm}\includegraphics[scale=0.5]{fig5b.pdf}
\hspace*{-1.0cm}
\includegraphics[scale=0.5]{fig5c.pdf}\hspace*{-1.0cm}\includegraphics[scale=0.5]{fig5d.pdf}
\caption{Different residues for the first Landau level ($l=1$), plotted against the scaled momentum along the magnetic field direction.}
\label{residue_ll}
\end{center}
\end{figure}
Now, the expressions for the cut parts $\beta_{L(R)}^i$ are given below:
\begin{eqnarray}
\beta_L^i &=& \frac{1}{\pi}\Theta(p^2-p_0^2)\frac{\mathrm{Im}(g_L^i)~\mathrm{Re}(L^2) - \mathrm{Im}(L^2)\mathrm{Re}(g_L^i)}{(\mathrm{Re}(L^2))^2+(\mathrm{Im}(L^2))^2}, \\
\beta_R^i &=& \frac{1}{\pi}\Theta(p^2-p_0^2)\frac{\mathrm{Im}(g_R^i)~\mathrm{Re}(R^2) - \mathrm{Im}(R^2)\mathrm{Re}(g_R^i)}{(\mathrm{Re}(R^2))^2+(\mathrm{Im}(R^2))^2},
\label{cut_LR_i}
\end{eqnarray}
where
\begin{eqnarray}
\mathrm{Re}(g_L^1) &=& p_0-M'^2\frac{p_z}{p^2}-\frac{m_{th}^2}{p}\left(1-\frac{M'^2}{m_{th}^2}\frac{p_zp_0}{p^2}\right)Q_0\left(\left|\frac{p_0}{p}\right|\right),\\
\mathrm{Im}(g_L^1) &=& \frac{\pi}{2}\frac{m_{th}^2}{p}\left(1-\frac{M'^2}{m_{th}^2}\frac{p_zp_0}{p^2}\right),\\
\mathrm{Re}(g_L^2) &=& \mathrm{Re}(g_R^2) = p+\frac{m^{2}_{th}}{p}\left[1-\frac{p_{0}}{p}Q_0\left(\left|\frac{p_0}{p}\right|\right)\right] ,\\
\mathrm{Im}(g_L^2) &=& \mathrm{Im}(g_R^2) = \pi\,m^{2}_{th}\,\frac{p_{0}}{2\,p^{2}} ,\\
\mathrm{Re}(g_L^3) &=& \mathrm{Re}(g_R^3) = \frac{M'^2}{p}Q_0\left(\left|\frac{p_0}{p}\right|\right) , \\
\mathrm{Im}(g_L^3) &=& \mathrm{Im}(g_R^3) = - \frac{\pi M'^2}{2p} , \label{gl_re}
\end{eqnarray}
and
\begin{eqnarray}
\mathrm{Re}(g_R^1) &=& p_0+M'^2\frac{p_z}{p^2}-\frac{m_{th}^2}{p}\left(1+\frac{M'^2}{m_{th}^2}\frac{p_zp_0}{p^2}\right)Q_0\left(\left|\frac{p_0}{p}\right|\right),\\
\mathrm{Im}(g_R^1) &=& \frac{\pi}{2}\frac{m_{th}^2}{p}\left(1+\frac{M'^2}{m_{th}^2}\frac{p_zp_0}{p^2}\right) \, . \label{gl_im}
\end{eqnarray}
We also obtain
\begin{eqnarray}
\mathrm{Re}(L^2) &=& A_L+B_LQ_0\left(\left|\frac{p_0}{p}\right|\right)+C\left(Q_0^2\left(\left|\frac{p_0}{p}\right|\right)-\frac{\pi^2}{4}\right), \\
\mathrm{Im}(L^2) &=& -\frac{\pi B_L}{2} - \pi Q_0\left(\left|\frac{p_0}{p}\right|\right)C, \\
\mathrm{Re}(R^2) &=& A_R+B_RQ_0\left(\left|\frac{p_0}{p}\right|\right)+C\left(Q_0^2\left(\left|\frac{p_0}{p}\right|\right)-\frac{\pi^2}{4}\right),\\
\mathrm{Im}(R^2) &=& -\frac{\pi B_R}{2} - \pi Q_0\left(\left|\frac{p_0}{p}\right|\right)C ,
\end{eqnarray}
where
\begin{eqnarray}
A_L &=& p_0^2-p^2-2m_{th}^2-\frac{m_{th}^4}{p^2}-\frac{2M'^2p_0p_z}{p^2}+\frac{M'^4p_z^2}{p^4}, \\
A_R &=& p_0^2-p^2-2m_{th}^2-\frac{m_{th}^4}{p^2}+\frac{2M'^2p_0p_z}{p^2}+\frac{M'^4p_z^2}{p^4} ,
\end{eqnarray}
\begin{eqnarray}
B_L &=& \frac{2m_{th}^4p_0}{p^3} - \frac{2M'^2p_z}{p} + \frac{2M'^2p_0^2p_z}{p^3} - \frac{2M'^4p_0p_z^2}{p^5}, \\
B_R &=& \frac{2m_{th}^4p_0}{p^3} + \frac{2M'^2p_z}{p} - \frac{2M'^2p_0^2p_z}{p^3} - \frac{2M'^4p_0p_z^2}{p^5} ,\\
C &=& \frac{m_{th}^4-M'^4}{p^2}-\frac{p_0^2m_{th}^4}{p^4} + \frac{M'^4p_0^2p_z^2}{p^6}.
\end{eqnarray}
\subsection{LLL Case}
\label{spec_rep_lll}
For the LLL, since $p_\perp=0$, $g_{L(R)}^2$ and $g_{L(R)}^3$ in \eqref{l_rest} and \eqref{r_rest} can be merged as
\begin{eqnarray}
g_L^{2+3} &=& \left[(1+a(p_0,p))p_z+c'(p_0,p)\right]\gamma^3\, , \nonumber\\
g_R^{2+3} &=& \left[(1+a(p_0,p))p_z-c'(p_0,p)\right]\gamma^3 \, . \nonumber
\end{eqnarray}
\begin{figure}[H]
\begin{center}
\hspace*{-1cm}
\includegraphics[scale=0.5]{fig6a.pdf}\hspace*{-1.0cm}\includegraphics[scale=0.5]{fig6b.pdf}
\hspace*{-1cm}
\includegraphics[scale=0.5]{fig6c.pdf}\hspace*{-1.0cm}\includegraphics[scale=0.5]{fig6d.pdf}
\caption{Different residues for the LLL ($l=0$), plotted against the scaled momentum along the magnetic field direction.}
\label{residue_lll}
\end{center}
\end{figure}
The spectral function corresponding to the LLL reads as
\begin{eqnarray}
\rho_{LLL} &=&\left( \mathcal{P}_-\gamma^0\mathcal{P}_+\right)~\rho_L^1 - \left(\mathcal{P}_-\gamma^3\mathcal{P}_+\right)~\rho_L^{2+3} \nonumber\\
&+&\left( \mathcal{P}_+\gamma^0\mathcal{P}_-\right)~\rho_R^1 -\left( \mathcal{P}_+\gamma^3\mathcal{P}_-\right)~\rho_R^{2+3}\, ,
\label{prop_spec_LLL}
\end{eqnarray}
where one needs to determine
\begin{eqnarray}
\rho_{L(R)}^{2+3} &=& \frac{1}{\pi} ~\mathrm{Im}\left(\frac{g_{L(R)}^{2+3}}{L^2(R^2)}\right)\nonumber,
\end{eqnarray}
which can again be represented in terms of the residues corresponding to the different
poles of $L^2(R^2)=0$, as in Eq.~(\ref{spec_residue}). In Fig.~\ref{residue_lll}, the variation of
the residues for the lowest Landau level is shown.
In appendix~\ref{spec_htl} we demonstrate how one recovers the HTL spectral functions when the magnetic field is withdrawn
from the thermal medium.
\section{Conclusions}
\label{remarks}
In this article the general structure of the fermionic self-energy for a chirally invariant theory has been
formulated for a hot and magnetised medium. Using this, we have obtained a closed form of the
general structure of the effective fermion propagator. The collective excitations in such a nontrivial
background have been obtained for time-like momenta in the weak-field and HTL approximations
in the domain $m^2_{th}(\sim g^2T^2) < |q_fB| < T^2$. We found that
the left- and right-handed modes, which are degenerate and symmetric otherwise, get separated and
become asymmetric in the presence of the magnetic field.
The transformation of the effective propagator in a hot magnetised medium under some of the discrete symmetries
has been studied, and its consequences are also reflected in the collective fermion modes in the Landau levels.
We have also obtained the Dirac spinors of the various collective modes by solving the Dirac equation
with the effective two-point function. Further, we checked the general structure of the two-point function by obtaining the
three-point function using the Ward-Takahashi identity, which agrees with the direct one-loop calculation
in the weak field approximation. We also found that only the longitudinal component
of the vertex is relevant when only the background magnetic field is present.
The spectral function corresponding to the effective propagator
has been obtained explicitly for a hot magnetised medium; it will be extremely useful
for studying spectral properties, \textit{e.g.}, photon/dilepton production, damping rates, and transport coefficients
in such a medium.
It has pole contributions due to the various collective modes originating
from the time-like domain and a Landau cut contribution appearing from the space-like domain.
It has been shown explicitly that the spectral function reduces to that obtained for a thermal
medium in the absence of the magnetic field. Our formulation is in general applicable to both
QED and QCD in a nontrivial background such as a hot magnetised medium.
\section{Acknowledgements}
The authors would like to acknowledge useful discussions with Palash B Pal, Najmul Haque,
Chowdhury Aminul Islam,
Arghya Mukherjee and Bithika Karmakar.
AB and MGM were funded by the Department of Atomic Energy (DAE), India via the
project TPAES whereas AD and PKR were funded by the project DAE/ALICE/SINP. AB was also partially supported by the National Post Doctoral Program CAPES (PNPD/CAPES), Govt. of Brazil.
\section{Introduction}
Deformations of skew group algebras constructed from finite groups
acting on polynomial rings
are used in representation
theory, combinatorics, and the study of orbifolds.
These deformations
include graded affine Hecke algebras,
Drinfeld Hecke algebras, rational Cherednik algebras, and symplectic reflection algebras.
The skew group algebra $S\#G$ arising from a group $G$
acting on an algebra $S$
by automorphisms
is the natural semidirect product algebra $S\rtimes G$.
Lusztig~\cite{Lusztig88, Lusztig89}
defined deformations
of skew group algebras $S\#G$
for a Weyl
group $G$ acting
on its reflection representation $V= \mathbb R^n$
and its associated polynomial ring
$S=S(V)=\mathbb R[v_1, \ldots, v_n]$ in his investigations of
the representation theory
of groups of Lie type.
His algebras
alter
the relations
$g\cdot s = g(s)\cdot g$
capturing
the group action but
preserve the commutativity
relation $v_iv_j=v_jv_i$
defining the polynomial ring $S$.
These algebras are known
as {\em graded affine Hecke algebras}. Around the same time,
Drinfeld~\cite{Drinfeld}
more broadly
considered an arbitrary
subgroup $G$ of $\text{GL}_n(\CC)$.
He defined
a deformation
of $S\# G$,
sometimes called
the {\em Drinfeld Hecke algebra},
by instead
altering the commutation
relation $v_iv_j = v_jv_i$
defining the polynomial
ring $S$ (now defined over $\CC$) but
leaving the group action relation alone.
Drinfeld's type of deformation
was rediscovered by Etingof and Ginzburg~\cite{EtingofGinzburg} in the study
of orbifolds as
{\em symplectic reflection algebras}
when the acting group
$G$
is symplectic, which include
{\em rational Cherednik algebras}
as a special case.
Collectively, we
call all these deformations
{\em Drinfeld Hecke algebras}.
Over $\CC$,
every deformation
modeled on Lusztig's graded affine Hecke algebra is
isomorphic to a deformation
modeled on Drinfeld's Hecke algebra
(see~\cite{RamShepler}).
This result
holds for any finite group $G$ in the {\em nonmodular setting}, i.e., when
the characteristic
of the underlying field $\mathbb F$
is coprime to
the group order $|G|$
(see~\cite{SheplerWitherspoon2015}),
and in the present work
we extend this result to algebras
that deform {\em both} Lusztig and Drinfeld
type relations.
The theorem fails
in the {\em modular setting},
when $\text{char}(\mathbb F)$
divides $|G|$, and
new types of deformations arise.
We begin here a concrete study of
these mixed deformations
using PBW conditions from~\cite{SheplerWitherspoon2015}.
We consider the symmetric group $\mathfrak{S}_n$
acting by permutations
of basis elements on a
vector space $V\cong \mathbb F^n$
over a field $\mathbb F$.
We include the case when
$\text{char}(\mathbb F)$ divides
the order of $\mathfrak{S}_n$.
We deform the natural
semi-direct product algebra
$S\rtimes \mathfrak{S}_n$
for $S=S(V)\cong \mathbb F[v_1, \ldots, v_n]$, the polynomial
ring,
by introducing relations
that disturb both the action of $\mathfrak{S}_n$
on $V$ and the commutativity of
the polynomial ring
$S$.
We classify
the deformations arising in this way.
Recently, authors have
considered
the representation theory
of similar deformations
over fields of positive characteristic
defined by relations of Drinfeld type.
For example,
Bezrukavnikov, Finkelberg, and Ginzburg~\cite{BFG}, Devadas and Sun~\cite{DevadasSun},
Devadas and Sam~\cite{DevadasSam},
Cai and Kalinov~\cite{CaiKalinov},
and Lian~\cite{Lian}
all consider the Weyl group of type
$A_{n-1}$ (the irreducible reflection action
of the symmetric group $\mathfrak{S}_n$)
acting on $V\oplus V^*$
to give rational Cherednik algebras.
Bellamy and Martino~\cite{BellamyMartino}
as well as Gordon~\cite{Gordon}
investigate
the action of
the symmetric group $\mathfrak{S}_n$
in the nonmodular setting,
when char$(\mathbb F)$ and $|\mathfrak{S}_n|$
are coprime.
For related algebras built on the action
of the special linear group, general linear group,
and cyclic groups, see
Balagovi\'c and Chen~\cite{BalagovicChen,
BalagovicChen2} and Latour~\cite{Latour}.
In contrast, the algebras classified here
are defined by relations of both
Lusztig and Drinfeld type.
\begin{example}
Let $\FF$ be a field of characteristic $p\geq 0$, $p\neq 2$, and let $G=\mathfrak{S}_n$ act on $V=\FF^n$ by permuting basis vectors
$v_1, \dots, v_n$ for $n> 2$.
Using \cref{PBWconditions} below, one may show that
the $\mathbb F$-algebra
generated by $v$ in $V$
and the group algebra $\mathbb F G$
with relations
\[\begin{aligned}
v_iv_j-v_jv_i&=\sum_{k\neq i,j}
\big((i\ j\ k)-(j\ i\ k)\big)
\text{ in } \mathbb F G \quad\text{ and }\quad
gv_i-v_{g(i)}\, g
=\big(g(i)-i\big)g
\text{ in } \mathbb F G
\end{aligned}\]
is a PBW deformation of $\mathbb F[v_1, v_2, \ldots, v_n]\rtimes G$.
\end{example}
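As a sanity check, the cocycle-style relation $\lambda(gh,v)=\lambda(g,\,^hv)h+g\lambda(h,v)$ (PBW Condition~(1) of \cref{PBWconditions}) can be verified computationally for the parameter $\lambda(g,v_i)=(g(i)-i)\,g$ appearing in this example. The following Python sketch (an illustration only, not part of the formal development) stores elements of $\mathbb F G$ as dictionaries from permutations to coefficients and confirms the condition for all $g,h$ in $\mathfrak{S}_3$:

```python
from itertools import permutations

n = 3
G = list(permutations(range(n)))  # S_3, with g acting as i -> g[i]

def compose(g, h):
    # group product gh: (gh)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(n))

def lam(g, i):
    # lambda(g, v_i) = (g(i) - i) g, stored as {permutation: coefficient}
    c = g[i] - i
    return {g: c} if c else {}

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def right_mul(a, h):
    # (sum_g c_g g) * h = sum_g c_g (gh)
    return {compose(g, h): c for g, c in a.items()}

def left_mul(g, a):
    return {compose(g, k): c for k, c in a.items()}

# PBW Condition (1): lambda(gh, v_i) = lambda(g, v_{h(i)}) h + g lambda(h, v_i)
ok = all(
    lam(compose(g, h), i) == add(right_mul(lam(g, h[i]), h), left_mul(g, lam(h, i)))
    for g in G for h in G for i in range(n)
)
print(ok)  # True
```

The check succeeds because $\lambda(gh,v_i)=(g(h(i))-i)\,gh$ splits as $(g(h(i))-h(i))\,gh+(h(i)-i)\,gh$, exactly the two terms on the right-hand side.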
\vspace{2ex}
{\bf Outline.}
We review Drinfeld Hecke algebras in Section 2 and PBW conditions
in Section~3.
In Section~4, we show that every
Drinfeld Hecke algebra in the nonmodular
setting is isomorphic to one
in which the skew group relation is not deformed.
We turn to the modular setting for the rest
of the article and consider the symmetric group in its permutation representation.
We examine PBW conditions in Sections 5 and 6.
We classify the Drinfeld Hecke algebras
in Section~7 and give these algebras explicitly
for dimensions $n=1,2,3$ in Section~8.
We conclude
by investigating
the case of an invariant parameter in Section 9.
\vspace{2ex}
{\bf Notation.}
Throughout, we fix a field $\FF$
of characteristic $p\geq 0$, $p\neq 2$,
and unadorned tensor symbols
denote the tensor product over $\FF$:
$\otimes = \otimes_{\FF}$. We assume all $\FF$-algebras are associative
with unity $1_{\mathbb F}$,
which is identified with the group identity $1_G$ in any group algebra $\mathbb F G$.
\section{Drinfeld Hecke algebras}\label{sec:quadratic}
We consider a finite group $G\subset \text{GL}(V)$
acting linearly on a finite-dimensional
vector space $V\cong \FF^n$
over the field $\FF$.
We may
extend the action to one by automorphisms
on
the symmetric algebra $S(V)\cong\mathbb F[v_1, \dots, v_n]$
for an $\mathbb F$-basis
$v_1, \dots, v_n$ of $V$
and on the tensor algebra $T(V)=T_\mathbb F(V)$ (i.e., the free associative $\FF$-algebra
generated by $v_1, \ldots, v_n$).
We write the action of $G$ on any $\mathbb F$-algebra $A$ with left superscripts,
$a\mapsto\ ^g a$ for $g$ in $G$,
$a$ in $A$, to avoid confusion
with the product $g\, a$ in any
algebra containing
$\mathbb F G$ and $A$.
\subsection*{Skew group algebra}
Recall that the {\bf skew group algebra}
$R\#G
=R\rtimes G$
of a group $G$ acting by automorphisms
on an $\FF$-algebra $R$ (e.g., $S(V)$ or $T(V)$)
is
the $\mathbb F$-algebra
generated by $R$ and $\FF G$
with relations
$ g\, r = \,^{g} r \, g$
for
$r$ in $R$, $g$ in $G$.
This natural semi-direct product algebra is also called the
{\em smash product}
or {\em crossed product algebra}.
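To make the normal form $r\,g$ of elements of $S(V)\#G$ concrete, here is a small Python sketch (illustrative only; the representation chosen is an assumption, not standard library functionality) of multiplication in $\mathbb F[v_1,\ldots,v_n]\#\mathfrak{S}_n$. Elements are dictionaries from (exponent vector, permutation) pairs to coefficients, and the relation $g\,r=\,^{g}r\,g$ is used to reorder products:

```python
n = 3

def compose(g, h):
    # group product gh: (gh)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(n))

def act(g, m):
    # the action ^g(v_1^{m_1} ... v_n^{m_n}): sends v_i to v_{g(i)}
    e = [0] * n
    for i in range(n):
        e[g[i]] = m[i]
    return tuple(e)

def mult(x, y):
    # normal-form multiplication: (m1 g1)(m2 g2) = (m1 * ^{g1}m2)(g1 g2)
    out = {}
    for (m1, g1), c1 in x.items():
        for (m2, g2), c2 in y.items():
            m = tuple(a + b for a, b in zip(m1, act(g1, m2)))
            key = (m, compose(g1, g2))
            out[key] = out.get(key, 0) + c1 * c2
    return out

e = tuple(range(n))        # identity permutation
s = (1, 0, 2)              # the transposition (1 2), 0-indexed
v1 = {((1, 0, 0), e): 1}   # the generator v_1
g = {((0, 0, 0), s): 1}    # the group element (1 2)

# the defining relation g v_1 = ^g(v_1) g = v_2 g
print(mult(g, v1) == {((0, 1, 0), s): 1})  # True
```

Every product thus lands back in the spanning set $\{v_1^{i_1}\dotsm v_n^{i_n}g\}$, mirroring the PBW basis of $S(V)\#G$.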
\subsection*{Relations
given in terms of
parameter functions}
We consider an algebra
generated by $v$ in $V$
and $g$ in $G$ with
relations given in terms
of two parameter functions
$\lambda$ and $\kappa$. The parameter
$\lambda$ measures the failure
of group elements
to merely act when they
are interchanged with a vector $v$ in $V$,
and $\kappa$ measures
the failure of vectors in $V$ to commute.
Let
$\mathcal{H}_{\lambda,\kappa} $
be the associative $\FF$-algebra generated
by $v$ in $V$ together with the group algebra
$\FF G$ with additional relations
\begin{equation}\label{relations}
\begin{aligned}
\text{}\ vw-wv=\kappa(v,w)
\ \ \text{ and }\ \
\text{}\ g v- \,^g\! v g = \lambda(g,v)
\quad\text{ for }\
v,w \in V,\ g\in G
\end{aligned}
\end{equation}
for a choice of
linear parameter functions $\lambda$ and $\kappa$,
with $\kappa$ alternating,
$$
\kappa:V\otimes V \rightarrow \FF G,\quad
\lambda:\FF G\otimes V \rightarrow \FF G\, .
$$
For ease with notation,
we write $\kappa(v,w)$ for $\kappa(v\otimes w)$
and $\lambda(g,v)$ for $\lambda(g\otimes v)$ throughout.
Thus $\mathcal{H}_{\lambda,\kappa}$
is a quotient of the free
$\mathbb F$-algebra generated
by $G$ and the basis $v_1,\ldots, v_n$ of $V$:
$$
\begin{aligned}
\mathcal{H}_{\lambda,\kappa}
=
\mathbb F\langle
v_1, \ldots, v_n, g: g\in G
\rangle
/
(
&gh - g\cdot_G h,\\
&vw-wv-\kappa(v,w),\\
&gv- \,^g\! v g - \lambda(g,v):
v,w \in V,\ g,h \in G)\, .
\end{aligned}
{}_{}
$$
Note that $\mathcal{H}_{0,0}=S(V)\# G$,
which is isomorphic
to $S(V)\otimes \mathbb F G$
as an $\mathbb F$-vector space.
\vspace{2ex}
\subsection*{PBW property}
Recall that a filtered algebra satisfies
the PBW property
with respect to a set of generating
relations
when its homogeneous version
coincides with its
associated graded algebra.
We filter the algebra
$\mathcal{H}_{\lambda,\kappa}$ by degree
after assigning degree $1$ to
each $v$ in $V$ and degree $0$
to each $g$ in $G$.
The homogeneous version of
$\mathcal{H}_{\lambda,\kappa}$ is then
the skew group algebra
$S(V)\# G$, while
the associated
graded algebra
$\gr\mathcal{H}_{\lambda,\kappa}$
is a quotient of $S(V)\# G$
(e.g., see Li~\cite[Theorem~3.2]{Li2012}
or Braverman and Gaitsgory~\cite{BG}).
Thus we say $\mathcal{H}_{\lambda,\kappa}$
exhibits the {\em Poincar\'e-Birkhoff-Witt (PBW) property}
when
$$
\gr \mathcal{H}_{\lambda,\kappa} \cong S(V)\# G
$$
as graded algebras.
We call $\mathcal{H}_{\lambda,\kappa}$ a
{\em Drinfeld Hecke algebra}
in this case.
Note that every Drinfeld Hecke algebra
has $\mathbb F$-vector space basis
$\{v_1^{i_1} v_2^{i_2} \dotsm v_n^{i_n} g:
i_m \in \mathbb N, g\in G\}$
for any $\mathbb F$-vector space basis
$v_1,\ldots, v_n$ of $V$.
This terminology arises from the fact
that
Lusztig's graded versions of the affine Hecke algebras
(see~\cite{Lusztig88, Lusztig89})
are special cases of the PBW algebras
$\mathcal{H}_{\lambda, \kappa}$:
Lusztig's algebras arise
in the case that
$\kappa\equiv 0$
and $G$ is a finite Coxeter group.
(The parameter $\lambda$ may be defined using a simple
root system
for $G$ and BGG operators).
Drinfeld's algebras~\cite{Drinfeld}
arise in the case that $\lambda\equiv 0$.
Examples
when $\lambda\equiv 0$
include
rational Cherednik algebras and symplectic reflection algebras
(see Etingof and Ginzburg~\cite{EtingofGinzburg}).
\section{Poincar\'e-Birkhoff-Witt property}
In this section,
we recall necessary and sufficient
conditions
on the parameters
$\lambda$ and $\kappa$
from~\cite{SheplerWitherspoon2015} for a filtered
quadratic algebra $\mathcal{H}_{\lambda,\kappa}$ to
exhibit the PBW property.
Again, let $G\subset \text{GL}(V)$
be a finite group
acting on $V\cong \FF^n$.
Let
$$\lambda_g: \FF G\otimes V\rightarrow \FF
\quad\text{ and }\quad
\kappa_g: V\otimes V\rightarrow \FF
$$
be the coefficient
functions for the parameters $\lambda$ and $\kappa$, i.e., the
$\FF$-linear maps for which
$$
\lambda(h,v)=\sum_{g\in G}\lambda_g(h,v) g
\quad\text{ and }\quad \kappa(u,v)
=\sum_{g\in G} \kappa_g(u,v)g
\quad\text{
for } g,h\in G,\
u,v\in V.
$$
\subsection*{Group action on parameters}
The group $G$ acts on the space of
parameter functions $\lambda$ and $\kappa$ in the usual way,
\begin{equation}
\label{groupactiononparameters}
(^h\kappa)(v,w)=
\ ^h\big(\kappa(\ ^{h^{-1}}\!v,\ ^{h^{-1}}\!w)\big),
\ (^h\lambda)(g, v)
=\ ^h\big(\lambda(h^{-1}gh, \ ^{h^{-1}}\! v)\big),
\end{equation}
using the action of $G$
on $\mathbb F G$ by conjugation,
$g: h\mapsto g h g^{-1}$.
\subsection*{PBW conditions}
The second PBW
condition
below
measures the failure of $\kappa$ to be $G$-invariant
while the first
shows that $\lambda$ is determined
by its values on generators of $G$.
\vspace{1ex}
\begin{thm}[\bf PBW Conditions]
\label{PBWconditions}
For any finite group $G\subset \text{GL}(V)$,
an algebra $\mathcal{H}_{\lambda,\kappa}$ exhibits the PBW property if and only if
the following five conditions hold
for all $g,h$ in $G$, $u,v,w$ in $V$:
\begin{enumerate}
\item\label{cocycle21}
$\lambda(gh,v)=\lambda(g,\, ^hv) h + g\lambda(h,v)$ in $\FF G$.
\item\label{firstobstruction12}
$\kappa(\, ^gu,\, ^g v)g-g\kappa(u,v)
=
\lambda\bigl(\lambda(g,v),u\bigr)-\lambda\bigl(\lambda(g,u),v\bigr)$ in $\FF G$.
\item\label{cocycle12}
$\lambda_h(g,v)(\, ^hu-\, ^gu)=\lambda_h(g,u)(\, ^hv-\, ^gv)$ in $V$.
\item\label{firstobstruction03}
$\kappa_g(u,v)(^gw-w)+\kappa_g(v,w)(^gu-u)+\kappa_g(w,u)(^gv-v)=0$ in $V$.
\item\label{mixedbracket}
$\lambda\bigl(\kappa(u,v),w\bigr)
+\lambda\bigl(\kappa(v,w),u\bigr)+\lambda\bigl(\kappa(w,u),v\bigr)=0$ in $\FF G$.
\end{enumerate}
\end{thm}
We will refer
to the specific conditions in this theorem,
each with universal quantifiers,
as {\em PBW Conditions (1) through (5)}.
Notice that if $\mathcal{H}_{\lambda, \kappa}$
exhibits the PBW property,
so does $\mathcal{H}_{c\lambda, c^2 \kappa}$
for any $c$ in $\mathbb F$.
\vspace{1ex}
We will need two corollaries from~\cite{SheplerWitherspoon2015}, the first on $\kappa$
and the second on $\lambda$.
We set
$
\ker\kappa_g
=
\{v\in V:\kappa_g(v,w)=0
\text{ for all } w\in V\}\,
$
and say that $\kappa$ is {\em supported}
on a subset $S\subset G$ when $\kappa_h\equiv 0$
unless $h$ lies in $S$. Similarly, we say that
$\lambda(g,*)$ is {\em supported} on $S\subset G$
when $\lambda_h(g,v)=0$ for all $v$ in $V$ and $h$ not in $S$.
Here, $V^g=\{v\in V: \, ^ g v=v\}$
is the fixed point space.
\vspace{1ex}
\begin{cor}\label{PBWkappaconditions}
Let $G\subset \text{GL}(V)$
be a finite group and
assume $\mathcal{H}_{\lambda, \kappa}$
exhibits the PBW property.
For every $g$ in $G$,
if $\kappa_g \not\equiv 0$,
then either
\begin{enumerate}
\item[(a)] $\codim V^g = 0$ (i.e., $g$ acts as the identity on $V$),
or
\item[(b)]
$\codim V^g = 1$ and $\kappa_g(v_1,v_2)=0$
for all $v_1,v_2$ in $V^g$,
or
\item[(c)]
$\codim V^g=2$ with
$\ker\kappa_g = V^g$.
\end{enumerate}
\end{cor}
\vspace{1ex}
Recall that a nonidentity element of $\text{GL}(V)$
is a {\em reflection} if it fixes a hyperplane of $V$
pointwise. We see in the above corollary that
over fields of positive characteristic,
a parameter $\kappa$ defining an algebra with the PBW
property may be supported on reflections in addition to the identity $1_G$ and bireflections (elements whose
fixed point space has codimension $2$)
in contrast
to the possible support in the nonmodular setting.
\vspace{1ex}
\begin{cor}\label{cor:lambda}
Let $G\subset \text{GL}(V)$
be a finite group
and say $\mathcal{H}_{\lambda,\kappa}$ exhibits the PBW property.
Then the following statements
hold for any $g$ in $G$.
\begin{enumerate}
\item
$\lambda(1,*)$ is identically zero:
$\lambda(1,v)=0$ for all $v$ in $V$.
\item \label{lg-inverse}
$\lambda(g,*)$ determines $\lambda(g^{-1}, *)$ by
$
g\lambda(g^{-1},v)= - \lambda(g, {}^{g^{-1}}v)g^{-1}\, .
$
\item
$\lambda(g,*)$ can be defined recursively:
For $j\geq 1$,
$
\lambda(g^j,v)=\sum_{i=0}^{j-1} g^{j-1-i}\, \lambda(g,\, ^{g^i}v)\, g^i\ .
$
\item
$\lambda(g,*)$ is supported on $h$ in $G$ with $h^{-1}g$ either
a reflection or the identity on $V$.
If $h^{-1}g$ is a reflection, then
$\lambda_h(g,v)=0$ for all $v$ on the reflecting hyperplane
$V^{h^{-1} g}$.
\item
If $V^g\neq V$,
$\lambda_1(g,v)=0$
unless $g$ is a reflection and $v\notin V^g$.
\end{enumerate}
\end{cor}
We give an example
in which the parameter
$\kappa: V\otimes V \rightarrow \mathbb F G$
is {\em not} $G$-invariant.
\begin{example}
{\em
Let
$G=\mathfrak{S}_n$ act on $V=\FF^n$ for $n> 3$
by permuting basis
elements $v_1, \ldots, v_n$,
i.e., $v_i\mapsto \ ^gv_i=v_{g(i)}$
for $g$ in $G$.
Fix two scalar parameters
$m,m'$ in $\mathbb F$.
Define $a_{ij}$ in $\mathbb F$
for $1\leq i\neq j\leq n$
by
$m=a_{12}=a_{13}=-a_{21}=-a_{31}$,
$m'=a_{23}=-a_{32}$, and $a_{ij}=0$ otherwise.
Then
the algebra defined by
\[
\begin{aligned}
v_1v_2-v_2v_1
&=v_2v_3-v_3v_2
=v_3v_1-v_1v_3
=
\,
m^2\big((1\ 3\ 2)-(1\ 2\ 3)\big),
\
v_iv_j-v_jv_i
=0
\text{ otherwise},
\\
gv_i-v_{g(i)}g
&=
\sum_{j\neq i}\
( a_{ij}- a_{g(i)\, g(j)} )
\
g(i\ j)
\quad
\text{ for $g\in G$, $1\leq i \leq n$}
\,
\end{aligned}
\]
is a PBW deformation of $S(V)\#\FF G$
(see \cref{classification} with $c=a_{123}$).
Notice (see \cref{equation:alpha beta def}
and \cref{definition of Hf})
that $a_{ij}=(1/4)\lambda_1\big((i\ j),
v_i-v_j\big)$.
}
\end{example}
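PBW Condition~(4) of \cref{PBWconditions} involves $\kappa$ alone and can be verified by brute force for this example. The following Python sketch (an illustration, not part of the formal development) encodes the group-algebra coefficients $\kappa_g$ for the two $3$-cycles supporting $\kappa$ and checks the condition for every $g$ in $\mathfrak{S}_4$, the smallest case with $n>3$, and every triple of basis vectors:

```python
from itertools import permutations, combinations

n = 4        # smallest case with n > 3
msq = 4      # plays the role of m^2; any scalar works

c123 = (1, 2, 0, 3)  # the 3-cycle (1 2 3), 0-indexed
c132 = (2, 0, 1, 3)  # the 3-cycle (1 3 2), 0-indexed

def kappa(i, j):
    # kappa(v_i, v_j) = m^2((1 3 2) - (1 2 3)) for (i,j) cyclic in {1,2,3}, else 0
    if (i, j) in [(0, 1), (1, 2), (2, 0)]:
        return {c132: msq, c123: -msq}
    if (j, i) in [(0, 1), (1, 2), (2, 0)]:
        return {c132: -msq, c123: msq}
    return {}

def kappa_g(g, i, j):
    return kappa(i, j).get(g, 0)

def condition4(g, u, v, w):
    # kappa_g(u,v)(^g w - w) + kappa_g(v,w)(^g u - u) + kappa_g(w,u)(^g v - v) = 0 in V
    out = [0] * n
    for a, b, c in [(u, v, w), (v, w, u), (w, u, v)]:
        coef = kappa_g(g, a, b)
        out[g[c]] += coef
        out[c] -= coef
    return all(x == 0 for x in out)

ok = all(condition4(g, u, v, w)
         for g in permutations(range(n))
         for u, v, w in combinations(range(n), 3))
print(ok)  # True
```

Since the condition is (anti)symmetric under permuting $u,v,w$, checking increasing triples of basis vectors suffices.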
\section{Nonmodular Setting}
\label{nonmodular}
Before classifying algebras
in the modular setting,
we verify in this section
that every Drinfeld Hecke algebra
$\mathcal{H}_{\lambda, \kappa}$ in the nonmodular setting
is isomorphic to
one with $\lambda\equiv 0$.
We consider an arbitrary
finite group $G\subset \text{GL}
(V)$ acting on $V\cong \FF^n$
but assume $\text{char}(\FF)\neq 2$
does not divide $|G|$ (e.g., $\text{char}(\FF)=0$)
in this section only.
In the next theorem,
we extend a result of
~\cite{SheplerWitherspoon2015}
from the special case
in which one of the parameter functions
is zero to the case of more general parameters;
the result
of~\cite{SheplerWitherspoon2015}
strengthened a theorem
in~\cite{RamShepler}
from the setting of Coxeter groups
to arbitrary finite groups.
\begin{thm}\label{LusztigyIsDrinfeldy}
Say $G\subset \text{GL}(V)$ is a finite
group for
$V\cong \mathbb F^n$ with $\text{char}(\mathbb F)\neq 2$ coprime to
$|G|$.
If an algebra
$\mathcal{H}_{\lambda,\kappa'}$ exhibits the PBW property
for parameters
$\lambda: \FF G\otimes V \rightarrow \FF G$
and
$\kappa':V\otimes V\rightarrow \FF G$,
then there is an
algebra $\mathcal{H}_{0, \kappa}$
for some
parameter
$\kappa:V\otimes V\rightarrow \FF G$
with
$$
\mathcal{H}_{0,\kappa}
\ \cong\ \mathcal{H}_{\lambda, \kappa'}\,
\quad\text{
as filtered algebras}.$$
\end{thm}
\begin{proof}
Define a conversion function
$\gamma:V\rightarrow \FF G$ by
$$
\gamma(v)=\frac{1}{|G|}\sum_{a,b\,\in\, G} \lambda_{ab}(b,\, ^{b^{-1}}v) a\
\
\text{ and write $\gamma=\sum_{a\in G} \gamma_a a$}
$$
for coefficient functions $\gamma_a:V\rightarrow \mathbb F$.
Let
$\kappa:V\otimes V\rightarrow \FF G$
be the parameter function
$$
\kappa(u,v)=\gamma(u)\gamma(v)-\gamma(v)\gamma(u)
+\lambda(\gamma(u),v)-\lambda(\gamma(v),u)
+ \kappa'(u,v).
$$
Consider the
associative $\FF$-algebra $F$ generated by
$V$ and the algebra $\FF G$, i.e.,
$$F=T_{\FF}(\FF G\oplus V)/(g\otimes h-gh, 1_{\FF}-1_G: g,h\in G)$$
(identifying each $g$ with $(g,0)$).
Define an algebra homomorphism
$f:F \rightarrow \mathcal{H}_{\lambda,\kappa'}$ by
$$
f(v)= v + \gamma(v) \quad\text{and}\quad f(g) = g
\quad\text{ for all } v \in V \text{ and } g \in G\, .
$$
One may use the
PBW conditions
for $\mathcal{H}_{\lambda,\kappa'}$
to show that the
relations defining $\mathcal{H}_{0,\kappa}$ lie
in the kernel of $f$,
as in the proof of~\cite[Theorem~4.1]{SheplerWitherspoon2015}.
Thus $f$ factors through a surjective filtered
algebra homomorphism
$$
f:
\mathcal{H}_{0,\kappa}
\twoheadrightarrow \mathcal{H}_{\lambda,\kappa'}\, .
$$
The $m$-th filtered components of
$\mathcal{H}_{\lambda,\kappa'}$
and
$\mathcal{H}_{0,\kappa}$
are both spanned over $\mathbb F$
by the monomials
$v_1^{a_1}\dotsm v_n^{a_n}g$
for
$g \in G$ and $a_i\in \mathbb N$ with $\sum_i a_i \leq m$, for a fixed basis $v_1,\ldots, v_n$ of $V$.
This spanning set is in fact
a basis for
$(\mathcal{H}_{\lambda,\kappa'})_m$
by the PBW property,
and thus
$$
\dim_{\mathbb F}
(\mathcal{H}_{0,\kappa})_m
\leq
\dim_{\mathbb F}
(\mathcal{H}_{\lambda,\kappa'})_m
\, .
$$
The map $f$
restricts to a surjective linear transformation
of finite-dimensional $\mathbb F$-vector spaces
on each filtered piece,
$$f:
(\mathcal{H}_{0,\kappa})_m
\twoheadrightarrow
(\mathcal{H}_{\lambda,\kappa'})_m\, ,
$$
and hence is injective on each filtered piece.
(Indeed, for any
$v_1^{a_1}\dotsm v_n^{a_n}g$ of filtered
degree $m$ in the
PBW basis for $\mathcal{H}_{\lambda,\kappa'}$,
the element
$
(v_1-\gamma(v_1))^{a_1}\dotsm
(v_n-\gamma(v_n))^{a_n}g
$
in
$\mathcal{H}_{0,\kappa}$
is a preimage under $f$ and also
has filtered degree $m$.)
Thus
$f$ is an isomorphism
of filtered algebras.
Notice that
$f$
in turn induces an isomorphism of graded algebras,
$
\gr\mathcal{H}_{0,\kappa}\cong \gr\mathcal{H}_{\lambda,\kappa'}\cong S(V)\# G\, ,
$
and $\mathcal{H}_{0,\kappa}$ also exhibits the PBW property.
\end{proof}
The special
case of \cref{LusztigyIsDrinfeldy} when $\kappa'\equiv 0$ is from
\cite{SheplerWitherspoon2015}.
Note that \cref{LusztigyIsDrinfeldy} fails
over fields of positive characteristic
as the next example from~\cite{SheplerWitherspoon2015}
shows:
not every algebra $\mathcal{H}_{\lambda,0}$
(modeled on Lusztig's graded affine Hecke algebra)
is isomorphic to an algebra $\mathcal{H}_{0,\kappa}$
(modeled on the Drinfeld Hecke algebra).
\vspace{1ex}
\begin{example}
{\em
Let $G\cong \mathbb{Z}/2\mathbb{Z}$
be generated by
$
g=\left(
\begin{smallmatrix}
1&1\\0&1
\end{smallmatrix}
\right)
$
acting on $V=\FF_2^2$
with respect to an ordered basis $v,w$.
Consider the $\mathbb F$-algebra $\mathcal{H}_{\lambda,\kappa'}$
generated by $V$ and $\FF G$
with relations
$$
gv=vg,\ \ gw=vg+wg+1,\ \
vw-wv=g\, .
$$
Then $\mathcal{H}_{\lambda,\kappa'}$ satisfies
the PBW property
but is not
isomorphic to $\mathcal{H}_{0, \kappa}$ for any
parameter $\kappa$.
(Here, $\lambda(g,v)=\lambda(1,v)
=\lambda(1,w)=0$,
$\lambda(g,w)=1$, and
$\kappa'(u,v)=g$.)
}
\end{example}
\vspace{2ex}
\cref{LusztigyIsDrinfeldy}
and PBW Condition~(2)
(with $\lambda\equiv 0$) imply
the next observation.
\begin{cor}
Every Drinfeld Hecke algebra in the nonmodular setting arises from a
parameter $\kappa$ that is {\em invariant},
i.e., satisfying
$$
\kappa(\, ^gu,\, ^g v)g = g\kappa(u,v)
\quad\text{ for }g\in G,\ u,v \in V.
$$
\end{cor}
\vspace{2ex}
\section{Deforming the group action}
In this section and the next,
we lay the framework for a complete classification of Drinfeld
Hecke algebras
for the symmetric group $\mathfrak{S}_n$ acting
by permutation matrices on $V\cong \mathbb F^n$.
In Sections $7$ and $8$,
we classify these algebras by giving the
parameters $\lambda$ and $\kappa$ such that $\mathcal{H}_{\lambda,\kappa}$ is PBW.
In this section we obtain the form of the parameter $\lambda$, and in the next section
we describe the parameter $\kappa$.
We assume $n> 2$ here and in Sections $6$ and
$7$ for ease with notation;
we give the PBW algebras
explicitly for $n=1,2,3$ in Section $8$.
We consider the action
of the symmetric group
by permutations.
Let $G=\mathfrak{S}_n$ act on $V\cong\FF^n$
by permuting basis
elements $v_1, \ldots, v_n$
of $V$,
i.e., $v_i\mapsto \ ^gv_i=v_{g(i)}$
for $g$ in $G$.
We write $s_i=(i\ \, i+1)$ for the adjacent transpositions
generating $G$ for $1\leq i<n$ and set
$s_n=(n\ 1)$ for ease with later notation.
Recall that we assume
$2\neq\text{char}(\mathbb F)\geq 0$.
Fix linear parameter functions
\[\kappa:V\otimes V\rightarrow \FF G\ \ \text{ and }\ \ \lambda:\FF G \otimes V \rightarrow \FF G,
\quad
\text{with $\kappa$ alternating.}
\]
\subsection*{Scalar degrees of freedom}
We show how each PBW algebra $\mathcal{H}_{\lambda, \kappa}$ has parameter $\lambda$ determined by
certain values on reflections.
Note
the group action in
\cref{groupactiononparameters}
induces the usual action
on
$\lambda_1:G\otimes V \rightarrow \mathbb F$
with
$$
(^{h^{-1}}\lambda_1)
\big((i\ j), v_i\big)
=
\lambda_1\big(h(i\ j)h^{-1}, \ ^{h}v_i\big)
=
\lambda_1\big((h(i)\ h(j)), v_{h(i)}\big)\ .
$$
We define scalars in $\mathbb F$
for any linear parameter
function
$\lambda: \mathbb F G\otimes V
\rightarrow \mathbb F G$:
\begin{equation}
\begin{aligned}
\label{equation:alpha beta def}
\alpha_{ij}
&:=&
&\tfrac{1}{4}\ \lambda_1\big((i\ j), v_i-v_j\big),\\
\alpha_{ijk}
&:=&
&\alpha_{ij}\alpha_{jk}
+\alpha_{jk}\alpha_{ki}
+\alpha_{ki}\alpha_{ij}, \\
^{g}\alpha_{ij}
&:=&
&\tfrac{1}{4}\ \lambda_1\big(
(g(i)\ g(j)), v_{g(i)}-v_{g(j)}\big), \\
^{g}\alpha_{ijk}
&:=&
&\alpha_{g(i)g(j)}\alpha_{g(j)g(k)}+\alpha_{g(j)g(k)}\alpha_{g(k)g(i)}+\alpha_{g(k)g(i)}\alpha_{g(i)g(j)},\text{ and}\\
\beta_k
&:=&
&\tfrac{1}{2}\ \lambda_{s_k}\big(s_k, v_k-v_{k+1}\big),
\end{aligned}\end{equation}
for $1\leq i,j,k\leq n$
with $i,j,k$ distinct.
We take indices on $\beta$ modulo $n$
throughout to more easily cyclically permute parameters in later results.
We will see that
if
$\mathcal{H}_{\lambda,\kappa}$
satisfies the PBW property, then $\beta_k
=\lambda_{s_k}(s_k, v_k)$ for all $k$
(by \cref{lemma:opposite on reflections}).
\vspace{1ex}
\subsection*{Determination of $\lambda$}
We prove in this section
that every PBW algebra $\mathcal{H}_{\lambda, \kappa}$
has
parameter $\lambda$ determined by its values $\alpha_{ij}$ and
$\beta_k$:
\begin{thm}\label{ld def}
A parameter
$\lambda$
satisfies PBW Conditions~(1) and~(3) if and only if
\begin{equation}
\label{equation: definition of ld}
\lambda(g, v_i)\
=\ \sum_{k=0}^{g(i)-i+n-1} \beta_{i+k}\, g +\sum_{1\ \leq\ j\neq i\ \leq\ n}(\alpha_{ij}-\ ^{g}\alpha_{ij})\,
g(i\ j)
\
\end{equation}
for all $g \in G$
and $1\leq i\leq n$ with $\beta_1+\cdots+\beta_n=0$.
\end{thm}
We collect some necessary observations
before giving the proof of this theorem
at the end of this section.
\vspace{0ex}
\subsection*{Lemmas for the proof of \cref{ld def}}
Recall the {\em (absolute) reflection
length} function $\length: \mathfrak{S}_n\rightarrow \mathbb{Z}_{\geq 0}$ on $\mathfrak{S}_n$
which gives
the minimal number $\length(g)$ of
transpositions in a factorization of $g$
into transpositions.
%
The following observation is well-known
for reflection groups over $\mathbb R$ (e.g., see \cite{Carter}, \cite{FosterGreenwood},
\cite{SheplerWitherspoonAdvances})
and we include a proof for
the symmetric group acting
over arbitrary fields
for the sake of completeness.
\begin{lemma}
\label{codimlemma}
For $g,h$ in $\mathfrak{S}_n$,
$\length(g)=\codim V^g$,
and $V^g\cap V^h= V^{gh}$
when
$\length(g)+\length(h)=\length(gh)$.
\end{lemma}
\begin{proof}
For $\mathfrak{S}_n$
acting on $V_\mathbb R=\mathbb R^n$ by permutation
of basis vectors,
$\length(g)=\codim V_\mathbb R^g$.
But $\codim V_\mathbb R^g
= \codim V^g$ (just consider
the decomposition of $g$ into disjoint
cycles and take orbit sums for invariant
vectors, for example). Hence
$ \length(g)+\length(h)=\length(gh)$ if and only if
$
\codim V^{g}
+ \codim V^{h}
=\codim V^{gh}
$.
In this case, $V^g\cap V^h=V^{gh}$
since
$V^g\cap V^h\subset V^{gh}$
implies that
$\codim V^g+\codim V^h - \codim(V^{gh}) \geq
\codim V^g+\codim V^h - \codim(V^g\cap V^h) \geq 0$.
\end{proof}
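As an aside, the equality $\length(g)=\codim V^g=n-c(g)$, with $c(g)$ the number of cycles of $g$ (fixed points included), can be confirmed by brute force for small $n$. The following Python sketch is an illustration only (0-based permutations, product meaning "apply the right factor first"); it computes reflection length in $\mathfrak{S}_4$ by breadth-first search over products of transpositions:

```python
from itertools import permutations, combinations

n = 4
perms = list(permutations(range(n)))
ident = tuple(range(n))

def compose(p, q):  # group product pq: apply q first, then p
    return tuple(p[q[i]] for i in range(n))

transpositions = []
for i, j in combinations(range(n), 2):
    t = list(range(n)); t[i], t[j] = j, i
    transpositions.append(tuple(t))

# reflection length via breadth-first search over products of transpositions
length = {ident: 0}
frontier = [ident]
while frontier:
    nxt = []
    for g in frontier:
        for t in transpositions:
            h = compose(g, t)
            if h not in length:
                length[h] = length[g] + 1
                nxt.append(h)
    frontier = nxt

def num_cycles(g):  # number of cycles of g, fixed points included
    seen, c = set(), 0
    for i in range(n):
        if i not in seen:
            c += 1
            while i not in seen:
                seen.add(i); i = g[i]
    return c

# codim V^g = n - c(g): the fixed space is spanned by the orbit sums
assert all(length[g] == n - num_cycles(g) for g in perms)
```

For instance, every $4$-cycle is found at BFS depth $3=4-1$, matching $\codim V^g$.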
Our next observation
follows from PBW Condition~(3) with $h=g(i\ j)$, $u=v_i, v=v_j$:
\begin{lemma}
\label{lemma:opposite on reflections}
If the parameter function $\lambda$
satisfies PBW Condition~(3), then
\[
\lambda_{g(i\ j)}(g, v_i)=-\lambda_{g(i\ j)}(g, v_j)=\tfrac{1}{2}\lambda_{g(i\ j)}(g, v_i-v_j)
\quad\text{ for } i\neq j,\
g\in G.
\]
In particular,
$\lambda_1\big((i\ j), v_i\big)
=
-\lambda_1\big((i\ j), v_j\big)
=
\tfrac{1}{2}
\lambda_1\big((i\ j), v_i-v_j\big)$.
\end{lemma}
\begin{lemma}\label{gg fixed}
If the parameter function
$\lambda$
satisfies PBW Condition~(1), then $\lambda_c(c, v)=0$
for all $c$ in $G$ and $v$ in $V^c$,
the fixed point space.
\end{lemma}
\begin{proof}
We induct on the (absolute) reflection
length $\ell(c)$ of $c$ using
\cref{cor:lambda}~(1),
which follows directly
from PBW Condition~(1)
(see the proof of~\cite[Cor.\ 3.3]{SheplerWitherspoon2015}).
First suppose $c$ is a reflection itself
with $v \in V^c$. Then by PBW Condition~(1),
\[
0=\lambda(1, v)
=
\lambda(cc, v)=\lambda(c, \ ^cv)c+c\lambda(c, v)
=\lambda(c, v)c+c\lambda(c, v).
\]
The coefficient of the identity group element on the right-hand side is $0=2\lambda_c(c, v)$.
Now suppose the claim holds for all group elements $g$ with $\ell(g)=k$ and that
$\length(c)=k+1$. Then $c=ab$ for some $a$
with $\length(a)=k$ and some transposition $b$. As
$\length(ab)=\length(a)+\length(b)$,
the vector $v$ lies in $V^{ab}=V^{a}\cap V^b$ by \cref{codimlemma}.
By PBW Condition~(1),
$
\lambda(c, v)=\lambda(a, ^bv)b+a\lambda(b, v),$
and the result follows from
the induction hypothesis applied to the coefficient of $c$:
\[
\lambda_c(c, v)=\lambda_{a}(a, v)+\lambda_b(b, v)=0.
\]
\end{proof}
\begin{lemma}
\label{gg depends on trans only}
\label{betas sum to zero}
Say the parameter
function $\lambda$ satisfies PBW Condition~(1).
Then
\begin{enumerate}
\setlength{\itemindent}{-.25in}
\item[(a)]
For any $g\in G$ and any $i$,
$
\lambda_g(g, v_i)
=
\lambda_{(i\ g(i))}
\big((i\ g(i)), v_i\big)
=
-\lambda_{(i\ g(i))}\big((i\ g(i)), v_{g(i)}\big)$.
\item[(b)]
For any $k$-cycle $(l_1\ l_2\
\cdots \ l_k)$
in $G$,
$$
\lambda_{(l_1\ l_{2})}\big((l_1\ l_{2}), v_{l_1}\big)
+
\lambda_{(l_2\ l_{3})}\big((l_2\ l_{3}), v_{l_2}\big)
+\cdots+
\lambda_{(l_k\ l_{1})}\big((l_k\ l_{1}), v_{l_k}\big)
=
0\, .
$$
\item[(c)]
For any $i<j$,
$$
\begin{aligned}
\lambda_{(i\ j)}&\big((i\ j), v_{i}\big)
=
\lambda_{s_i}\big(s_i, v_{i}\big)
+
\lambda_{s_{i+1}}\big(s_{i+1}, v_{i+1}\big)
+\cdots+
\lambda_{s_{j-1}}\big(s_{j-1}, v_{j-1}\big)\,
\, .
\end{aligned}
$$
\end{enumerate}
\end{lemma}
\begin{proof}
For (a), assume that $v_i \notin V^g$, else the claim
follows from \cref{gg fixed}.
Set $j=g(i)$ and
$h=g(i\ j)$.
Then $v_{j}\in V^{h}$ and PBW Condition~(1)
implies that
\[
\begin{aligned} \lambda(g, v_i)
=
\lambda(h, ^{(i\ j)}v_i)(i\ j)
+
h\lambda\big((i\ j), v_i\big)
=
\lambda(h, v_{j})(i\ j)+h\lambda\big((i\ j), v_i\big).
\end{aligned}
\]
We isolate the coefficient of $g$ and apply Lemma~\ref{gg fixed} twice:
\[\lambda_g(g, v_i)=\lambda_{h}(h, v_{j})
+
\lambda_{(i\ j)}
\big((i\ j), v_i\big)
=
\lambda_{(i\ j)}\big((i\ j), v_i\big)
=
-\lambda_{(i\ j)}\big((i\ j), v_j\big).
\]
For (b),
\cref{gg fixed} and (a) imply
that,
for $g=(l_1\ \cdots\ l_k)$,
\[
\begin{aligned}
0=\lambda_g(g, \sum_{i=1}^kv_{l_i})=\sum_{i=1}^k\lambda_g(g, v_{l_i})
=
\sum_{i=1}^{k-1}
\lambda_{(l_i\ l_{i+1})}
\big((l_i\ l_{i+1}), v_{l_i}\big) + \lambda_{(l_k\ l_1)}
\big((l_k\ l_1), v_{l_k}\big).
\end{aligned}
\]
For (c), just use (b) with cycle $(i\ i+1\ \cdots\ j)$ and (a).
\end{proof}
\begin{remark}{\em
Note that
\cref{gg depends on trans only}
implies that there are at most $n-1$ choices determining the values $\lambda_g(g, v)$ for $g$ in $G$ and $v$ in $V$ in a PBW
algebra $\mathcal{H}_{\lambda, \kappa}$.
Indeed,
the values of $\lambda_g(g, v)$ are determined
by the values $\lambda_{(i\ j)}((i\ j), v_i)$ for $i<j$
by part (a), which are determined by the values
$\beta_k=\lambda_{s_k}(s_k, v_k)$ for $1\leq k< n$
by part (c).
}
\end{remark}
\begin{lemma}
If the parameter function $\lambda$
satisfies
PBW Conditions~(1) and~(3),
then for all
$g \in G$,
$
\lambda(g, v_1+
\cdots +v_n)=0
$.
\end{lemma}
\begin{proof}
PBW Condition~(3) implies Corollary~\ref{cor:lambda}\ (4)
(see the proof of~\cite[Cor.\ 3.3]{SheplerWitherspoon2015}), hence
\[
\begin{aligned}
\sum_{i=1}^n\lambda(g, v_i)
&=
\sum_{i=1}^n\lambda_g(g, v_i)g+\sum_{i=1}^n\sum_{j\neq i}\lambda_{g(i\ j)}(g, v_i)g(i\ j)\\
&=
\lambda_g\big(g, \sum_{i=1}^n v_i\big)g
+\sum_{1 \leq i < j \leq n }\big(\lambda_{g(i\ j)}(g, v_i)+\lambda_{g(i\ j)}(g, v_j)\big)g(i\ j).\end{aligned}
\]
This vanishes by
\cref{lemma:opposite on reflections}
and \cref{gg fixed}.
\end{proof}
\begin{lemma}\label{betas}
For the parameter function $\lambda$ satisfying
PBW Conditions~(1) and~(3),
\begin{equation}
\label{SumDefinition}
\beta_1+\cdots +\beta_n=0
\quad\text{ and }\quad
\lambda_g(g, v_i)=
\sum_{k=0}^{g(i)-i+n-1}\beta_{i+k}
\quad\text{ for all }
g \in G,\ 1\leq i\leq n\, .
\end{equation}
\end{lemma}
\begin{proof}
\cref{betas sum to zero}
with the cycle $(1\ 2\ \cdots\ n)$
implies that $\sum_{1\leq j\leq n} \beta_j =0$.
For $i<g(i)$,
\[\begin{aligned}
\sum_{k=0}^{g(i)-i+n-1}
\beta_{i+k}
=
\sum_{k=i}^{g(i)-1}\beta_{k}
+ \sum_{k=g(i)}^{g(i)+n-1}
\beta_k
=
\sum_{k=i}^{g(i)-1}\lambda_{s_k}(s_k, v_{k}),
\end{aligned}
\]
which is just
$
\lambda_{(i\ g(i))}\big((i\ g(i)), v_{i}\big)
=\lambda_g(g, v_i)$
by \cref{betas sum to zero}.
Similarly, for $g(i)<i$,
\[\begin{aligned}
\sum_{k=0}^{g(i)-i+n-1}
\beta_{i+k}
= \beta_i+\beta_{i+1}+\cdots+ \beta_n+\beta_1+\cdots +\beta_{g(i)-1}
=
-( \beta_{g(i)} + \cdots + \beta_{i-1}),
\end{aligned}
\]
which again is just
$
-\lambda_{(g(i)\ i)}\big((g(i)\ i), v_{g(i)}\big)
=\lambda_g(g, v_i)$
by \cref{betas sum to zero}.
\end{proof}
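The index juggling in the proof of \cref{betas} can be made concrete: for any scalars $\beta_1,\dots,\beta_n$ summing to zero, the window sum $\sum_{k=0}^{g(i)-i+n-1}\beta_{i+k}$ (indices modulo $n$) collapses as claimed. A small numerical check, sketched in Python with 0-based indices and illustrative values:

```python
n = 6
beta = [2, -7, 3, 5, -1, -2]  # arbitrary scalars with beta_1 + ... + beta_n = 0
assert sum(beta) == 0

# the window sum from the lemma, indices taken modulo n
for i in range(n):
    for gi in range(n):  # gi plays the role of g(i)
        s = sum(beta[(i + k) % n] for k in range(gi - i + n))
        if gi > i:
            assert s == sum(beta[i:gi])    # beta_i + ... + beta_{g(i)-1}
        elif gi < i:
            assert s == -sum(beta[gi:i])   # -(beta_{g(i)} + ... + beta_{i-1})
        else:
            assert s == 0                  # full cycle, vanishes
```

The `gi == i` case recovers \cref{gg fixed}: the window is a full cycle of the $\beta_k$ and vanishes.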
\vspace{1ex}
\begin{cor}\label{ld zero for fixed v}
If the parameter $\lambda$ satisfies (\ref{SumDefinition}),
then for any $k$-cycle
$(l_1\ \cdots\ l_k)$ in $G$,
$$
\lambda_{(l_1\ l_2)}\big((l_1\ l_2), v_{l_1}\big)
+ \lambda_{(l_{2}\ l_3)}\big((l_2\ l_3), v_{l_2}\big) + \cdots
+ \lambda_{(l_k\ l_1)}\big((l_k\ l_1), v_{l_k}\big)
=
0\, .
$$\end{cor}
\begin{proof}
Since (\ref{SumDefinition}) implies that
$$\lambda_{(i\ j)}((i\ j), v_i)=\beta_i+\cdots +\beta_n+\beta_1+\cdots +\beta_{j-1} \quad\text{for } 1\leq i\neq j\leq n
$$
(note that the sum stops at $\beta_n$ when $j=1$
as $\beta_n=\beta_0$), the sum
$\sum_{i=1}^k \lambda_{(l_i\ l_{i+1})}((l_i\ l_{i+1}), v_{l_i})$
is a multiple of $\beta_1+\cdots +\beta_n$, which is zero by the first equality in (\ref{SumDefinition}).
\end{proof}
\vspace{2ex}
\subsection{Proof of \cref{ld def}}
We now have the tools to show that every PBW algebra $\mathcal{H}_{\lambda, \kappa}$
has
parameter $\lambda$ determined by the values $\alpha_{ij}$ and
$\beta_k$.
\begin{proof}[Proof of \cref{ld def}]
We show PBW Conditions (1) and (3) are equivalent to these three facts
for all $g$ in $G$ and all $i$:
\begin{itemize}
\item[(a)] $\lambda_{g(i\ j)}(g, v_i)
=
\alpha_{ij}-\ ^{g}\alpha_{ij}$
for all $j\neq i$,
\item[(b)] $\beta_1+\cdots+ \beta_n=0$ and
$\lambda_g(g, v_i)
=\sum_{k=0}^{g(i)-i+n-1}\beta_{i+k}$, and
\item[(c)]
$\lambda(g, v_i)$ is supported on $g$
and all $g(i\ j)$ for $j\neq i$.
\rule{0ex}{2.25ex}
\end{itemize}
Assume PBW Conditions~(1) and~(3)
both hold.
\cref{betas} implies (b).
PBW Condition~(3) implies part (4) of
Corollary~\ref{cor:lambda}
(see the proof of~\cite[Cor.\ 3.3]{SheplerWitherspoon2015}),
implying (c).
We induct on the
(absolute) reflection length $\length(g)$
of $g$ in $G=\mathfrak{S}_n$
to verify (a).
First, if $g=1$, then both sides of (a) vanish
by \cref{cor:lambda}(1).
Now suppose
$g$ is a transposition
fixing $i$ and $j$.
Then $\alpha_{ij} = \ ^{g}\alpha_{ij}$ and the right-hand side of (a) is zero;
on the other hand,
PBW Condition~(1) implies that
\[
\lambda\big((i\ j), v_i\big)g
+(i\ j)\lambda(g, v_i)
=
\lambda(g, v_j)(i\ j)+g\lambda\big((i\ j), v_i\big)
\]
and we
apply Lemma ~\ref{lemma:opposite on reflections}
to the coefficient of $g$,
\[
\lambda_1\big((i\ j), v_i\big)
+
\lambda_{g(i\ j)}(g, v_i)
=
\lambda_{g(i\ j)}(g, v_j)+
\lambda_1\big((i\ j),v_i\big),
\]
to
see the left-hand side of (a) is zero.
Now suppose instead $g=(i\ k)$ for some $k\neq i$. Then $g=(i\ j\ k)(i\ j)=(j\ k)(i\ j\ k)$,
and we use PBW Condition~(1) to write $\lambda((i\ k), v_i-v_j)$ in two ways and match the coefficients of $(i\ k)(i\ j)$:
\[
\begin{aligned}
\lambda_{(i\ k)(i\ j)}
\big((i\ k), v_i-v_j\big)
&=
\lambda_{(i\ j\ k)(i\ j)}
\big( (i\ j\ k), v_j-v_i)+\lambda_1((i\ j), v_i-v_j\big)
\quad\text{ and }
\\
\lambda_{(i\ k)(i\ j)}
\big((i\ k), v_i-v_j)
&=
\lambda_1\big((j\ k), v_j-v_k\big)
+
\lambda_{(i\ j\ k)(i\ j)}
\big((i\ j\ k), v_i-v_j\big);
\end{aligned}
\]
we conclude $2\lambda_{(i\ k)(i\ j)}((i\ k), v_i-v_j)
=4(\alpha_{ij}-\,
^{g}\alpha_{ij})$ and
\cref{lemma:opposite on reflections}
implies (a).
To show the induction step,
fix some $g$ in $G$,
and assume the result holds for all group elements
with smaller (absolute) reflection length.
Write
$g=g_1 g_2$
for some $g_1, g_2$ in $\mathfrak{S}_n$ with
$0<\length(g_1), \length(g_2) < \length(g)$.
PBW Condition~(1) implies that
\[
\lambda(g, v_i-v_j)
=
\lambda(g_1g_2, v_i-v_j)
=
\lambda\big(g_1,\, ^{g_2}(v_i-v_j)\big)
g_2+g_1\lambda(g_2, v_i-v_j),
\]
and we equate
the coefficients of
$g(i\ j)
=
g_1
\big(g_2(i)\ g_2(j)\big)g_2$
to obtain (a):
\[\begin{aligned}
2\lambda_{g(i\ j)}(g, v_i)
&=
\lambda_{g(i\ j)}(g, v_i-v_j)
=
\lambda_{g_1(g_2(i)\ g_2(j))}(g_1, v_{g_2(i)}-v_{g_2(j)})
+
\lambda_{g_2(i\ j)}(g_2, v_i-v_j)
\\
&= 2(^{g_2}\alpha_{ij}-\ ^{g_1g_2}\alpha_{ij})
+2(\alpha_{ij}-\ ^{g_2}\alpha_{ij})
=
2(\alpha_{ij}-\ ^{g}\alpha_{ij})\, .
\end{aligned}
\]
To prove the converse, we assume (a), (b), and (c) hold and first show PBW Condition~(1).
Note that (b) implies that for all $g\in G$ and for all $v_i \in V$, $\lambda_g(g,v_i)$ coincides
with $\lambda_{(i\ g(i))}((i\ g(i)), v_i)$.
The right-hand side of PBW Condition~(1)
at $v=v_i$ is
\begin{equation}\label{4terms}
\begin{aligned}
\lambda_{(h(i)\ gh(i))}
\big((h(i)\ gh(i) ), v_{h(i)}\big)\, gh
\ &+
\sum_{j:\, h(j)\neq h(i)}(\alpha_{h(i)h(j)}-\ ^g\alpha_{h(i)h(j)})
\, gh(i\ j)h^{-1}h
\\
\ +\, \lambda_{(i\ h(i))}
\big((i\ h(i)), v_i\big)
\, gh
\ &+\ \ \
\sum_{j: j\neq i}(\alpha_{ij}-\ ^h\alpha_{ij})\, gh(i\ j).
\end{aligned}
\end{equation}
We apply
\cref{ld zero for fixed v}
to the 3-cycle $(i\ h(i)\ gh(i))$ and the $2$-cycle
$(i\ gh(i))$, as well as (b), to simplify the $gh$ terms
and obtain
\[\begin{aligned}
-
\lambda_{(i\ gh(i))}\big((i\ gh(i)), v_{gh(i)}\big) \, gh
&=
\lambda_{(gh(i)\ i)}
\big((gh(i)\ i), v_i\big)\,
gh\,
= \lambda_{gh}
\big(gh, v_i\big)\,
gh\,
.
\end{aligned}
\]
The $gh(i\ j)$ terms combine as
$\sum_{j: i\neq j}\lambda_{gh(i\ j)}(gh, v_i)\, gh(i\ j)$.
Thus~\cref{4terms} is just
$\lambda(gh, v_i)$.
The only nontrivial case in showing PBW Condition~(3) is when $h=g(i\ j)$ with vectors $\{u,v\}=\{v_i,v_j\}$. To confirm that
\[\lambda_{g(i\ j)}
(g, v_i)
(v_{g(i)}-v_{g(j)})=\lambda_{g(i\ j)}(g, v_j)(v_{g(j)}-v_{g(i)}),\]
apply (a) to each side
and note that
$ (\alpha_{ij}-\ ^g\alpha_{ij})(v_{g(i)}-v_{g(j)})
=(\alpha_{ji}-\ ^g\alpha_{ji})(v_{g(j)}-v_{g(i)})$.
\end{proof}
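The closed formula of \cref{ld def} can also be tested numerically against PBW Condition~(1), read as the cocycle identity $\lambda(gh,v)=\lambda(g,\,^hv)h+g\lambda(h,v)$ used throughout the proofs above. The following Python sketch is an illustration only: permutations are 0-based, the product $gh$ applies $h$ first, group-algebra elements are dictionaries from permutations to integer coefficients, and the values of $\alpha_{ij}$ and $\beta_k$ are arbitrary (antisymmetric, resp.\ summing to zero). It verifies the identity over all of $\mathfrak{S}_4$:

```python
from itertools import permutations, combinations

n = 4
perms = list(permutations(range(n)))

def compose(p, q):  # group product pq: apply q first, then p
    return tuple(p[q[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n)); t[i], t[j] = j, i
    return tuple(t)

# arbitrary integer parameters: alpha antisymmetric, beta summing to zero
alpha = [[0] * n for _ in range(n)]
vals = iter([2, -5, 3, 7, 1, -4])
for i, j in combinations(range(n), 2):
    alpha[i][j] = next(vals); alpha[j][i] = -alpha[i][j]
beta = [3, -1, 4, -6]  # beta_1 + ... + beta_n = 0

def lam(g, i):
    """lambda(g, v_i) from the theorem, as {group element: coefficient}."""
    out = {g: sum(beta[(i + k) % n] for k in range(g[i] - i + n))}
    for j in range(n):
        if j != i:
            h = compose(g, transposition(i, j))
            out[h] = out.get(h, 0) + alpha[i][j] - alpha[g[i]][g[j]]
    return {s: c for s, c in out.items() if c}

def right_mul(f, h):
    return {compose(s, h): c for s, c in f.items()}

def left_mul(g, f):
    return {compose(g, s): c for s, c in f.items()}

def add(f1, f2):
    out = dict(f1)
    for s, c in f2.items():
        out[s] = out.get(s, 0) + c
    return {s: c for s, c in out.items() if c}

# PBW Condition (1): lambda(gh, v_i) = lambda(g, ^h v_i) h + g lambda(h, v_i)
for g in perms:
    for h in perms:
        for i in range(n):
            lhs = lam(compose(g, h), i)
            rhs = add(right_mul(lam(g, h[i]), h), left_mul(g, lam(h, i)))
            assert lhs == rhs
```

The check exercises both the $\beta$ window sums and the telescoping of the $\alpha$ differences from the induction step above.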
\section{Deforming commutativity}
Again we consider $G=\mathfrak{S}_n$ acting by permutation
of basis vectors $v_1,\ldots, v_n$ of $V\cong\mathbb F^n$.
Working toward the classification of the Drinfeld Hecke algebras
$\mathcal{H}_{\lambda,\kappa}$ in Section 7,
we described the form of $\lambda$ in a PBW deformation in the last section and we describe the form of $\kappa$
here.
We again take $n> 2$ in this section
(see Section $8$ for $n=1,2,3$)
and
fix linear parameter functions
\[\kappa:V\otimes V\rightarrow \FF G\ \ \text{ and }\ \ \lambda:\FF G \otimes V \rightarrow \FF G,
\quad
\text{with $\kappa$ alternating.}
\]
We use the PBW Conditions (1)--(5) of \cref{PBWconditions}.
The next lemma measures the failure of $\kappa$ to be invariant and follows
from \cref{ld def}.
\vspace{1ex}
\begin{lemma}
\label{kappa invariance} Assume
PBW Conditions~(1) and~(3) hold
for the parameter function $\lambda$.
Then PBW Condition~(2) is equivalent to
\[
\kappa(\, ^gv_i,\, ^gv_j)g-g\kappa(v_i, v_j)
=
\sum_{k \neq i,j}
(^g
\alpha_{ijk}-\alpha_{ijk})
\, g\, \big((i\ j\ k)-(i\ k\ j)
\big)
\quad
\text{for all $g\in G$, $i\neq j$}.
\]
\end{lemma}
\begin{proof}
We rewrite the right-hand side of PBW Condition~(2) with $v_i, v_j$ in place
of $u, v$
using \cref{ld def}. \cref{betas sum to zero}
implies all terms vanish except
those with
group elements $g(i\ j\ k)$ and $g(i\ k\ j)$.
The coefficient of $g(i\ j\ k)$
is
\[
\sum_{k\neq i,j}\Big((\alpha_{jk}-\ ^g\alpha_{jk})(\alpha_{ik}-\ ^g\alpha_{ij})-(\alpha_{ij}-\ ^g\alpha_{ij})(\alpha_{jk}-\ ^g\alpha_{ik})-(\alpha_{ik}-\ ^g\alpha_{ik})(\alpha_{ji}-\ ^g\alpha_{jk})\Big)
\]
which simplifies to
$
\, ^g\alpha_{ijk}-\alpha_{ijk}
$
since $\alpha_{ij}=-\alpha_{ji}$, etc.
Likewise,
the coefficient
\[
\sum_{k\neq i,j}\Big( (\alpha_{ji}-\ ^g\alpha_{ji})(\alpha_{ik}-\ ^g\alpha_{jk})-(\alpha_{jk}-\ ^g\alpha_{jk})(\alpha_{ij}-\ ^g\alpha_{ik})-(\alpha_{ik}-\ ^g\alpha_{ik})(\alpha_{jk}-\ ^g\alpha_{ji})\Big)
\]
of $g(i\ k\ j)$
simplifies to
$-(^g\alpha_{ijk}-\alpha_{ijk}) $
and we obtain the
right-hand side of the
equation in the statement.
\end{proof}
\vspace{1ex}
\begin{lemma}\label{KappaEqualities}
If $\mathcal{H}_{\lambda,\kappa}$ satisfies
the PBW property, then
for all distinct $i,j,k$,
\[
\kappa_{(i\ j\ k)}(v_i, v_j)
=
\kappa_{(i\ j\ k)}(v_j, v_k)
=
\kappa_{(i\ j\ k)}(v_k, v_i)
=
\kappa_{(i\ k\ j)}(v_i, v_k)
=
-\kappa_{(i\ k\ j)}(v_k, v_i)\, .
\]
\end{lemma}
\begin{proof}
We use Theorem~\ref{PBWconditions}. PBW Condition~(4)
with $g=(i\ j\ k)$ implies that
\[
\kappa_{(i\ j\ k)}(v_i, v_j)(v_i-v_k)+\kappa_{(i\ j\ k)}(v_j, v_k)(v_j-v_i)+\kappa_{(i\ j\ k)}(v_k, v_i)(v_k-v_j)=0,
\,
\]
giving the first two equalities.
Set
$g=(i\ j)$
in
Lemma~\ref{kappa invariance} (whence $^g\alpha_{ijk}=\alpha_{ijk}$ and the right-hand side vanishes), and consider the coefficient of $(j\ k)$ to deduce the third equality. For the final equality, recall
that $\kappa$ is alternating.
\end{proof}
\vspace{1ex}
\begin{prop}\label{kappa 3-cycles} If $\mathcal{H}_{\lambda,\kappa}$ satisfies
the PBW property,
then $\kappa$ is supported on 3-cycles, and
\[
\kappa(v_i, v_j)
=
\sum_{k\neq i,j}\kappa_{(i\ j\ k)}(v_i, v_j)\big(
(i\ j\ k)-(i\ k\ j)\big)\,
\quad\text{ for } i\neq j\, .
\]
\end{prop}
\begin{proof}
We first show that $\kappa$ is supported on $3$-cycles using \cref{PBWconditions} and \cref{kappa invariance}.
By Corollary~\ref{PBWkappaconditions},
$\kappa_g\equiv 0$ unless
$g$ is the identity, a transposition,
the product of two disjoint transpositions,
or a $3$-cycle.
Assume some $\kappa_g(v_i, v_j)\neq 0$
(so $i\neq j$).
Corollary~\ref{PBWkappaconditions}
in fact implies that
$g$ must be
$$
1_G,\ (i\ j),\ (j\ k),\
(i\ j)(k\ l),\ (i\ k)(j\ l),\
(i\ j \ k), \text{ or }
(i\ k \ j)
\quad\text{ for some $k\neq l$ and $k,l\not\in\{i, j\}$}.
$$
Let $g=(i\ j)$ in \cref{kappa invariance}
and equate the coefficients of
$1_G$ to conclude that
\[\kappa_{(i\ j)}(v_j, v_i)-\kappa_{(i\ j)}(v_i, v_j)=0,\]
which implies $\kappa_{(i\ j)}(v_i, v_j)=0$
as $\kappa$ is alternating and $\text{char}(\mathbb F)\neq 2$.
Similarly, we
equate the coefficients of $(i\ j)$ to deduce that
$\kappa_1(v_i, v_j)=0$
and the coefficients of $(k\ l)$
to deduce that $\kappa_{(i\ j)(k\ l)}(v_i, v_j)=0$.
Likewise,
set $g=(i\ j)(k\ l)$ and equate coefficients of $(i\ l)(j\ k)$ to see that $\kappa_{(i\ k)(j\ l)}(v_i, v_j)=0$.
To verify that $\kappa_{(j\ k)}(v_i, v_j)=0$,
on one hand we
notice that
$\kappa_{(j\ k)}(v_i, v_j)=\kappa_{(j\ k)}(v_i, v_k)$
after setting $g=(j\ k)$ and equating the coefficients
of $1_G$,
and
on the other hand we notice that
$\kappa_{(j\ k)}(v_i, v_j)=-\kappa_{(j\ k)}(v_i, v_k)$
by
PBW Condition~(4) with $g=(j\ k)$ and vectors $v_i, v_j$, and $v_k$.
Hence by
Lemma~\ref{KappaEqualities},
\[\begin{aligned}
\kappa(v_i, v_j)
&=
\sum_{k\neq i,j}\kappa_{(i\ j\ k)}(v_i, v_j)(i\ j\ k)+\kappa_{(i\ k\ j)}(v_i, v_j)(i\ k\ j)\\
&=
\sum_{k\neq i,j}\kappa_{(i\ j\ k)}(v_i, v_j)
\big((i\ j\ k)-(i\ k\ j)\big).\end{aligned}
\]
\end{proof}
The next proposition gives an explicit formula for $\kappa(v_i,v_j)$ when $\mathcal{H}_{\lambda,\kappa}$ is PBW.
\vspace{1ex}
\begin{prop}
\label{kappa def}
If $\mathcal{H}_{\lambda,\kappa}$ satisfies
the PBW property,
then
\[
\kappa(v_i,v_j)=\sum_{k\neq i,j}\big(\alpha_{ijk}+\kappa_{(1\ 2\ 3)}(v_1, v_2)-\alpha_{123}\big)\big((i\ j\ k)-(i\ k\ j)\big)
\ \text{ for }
i\neq j.
\]
\end{prop}
\begin{proof}
By \cref{PBWconditions}, we may
use \cref{kappa 3-cycles}
and Lemma~\ref{kappa invariance} to write $g\kappa(v_i, v_j)$
in two ways
and then equate
the coefficients of $g(i\ j\ k)$
for distinct $i, j, k$ in $\{1, \ldots, n\}$:
\[
\kappa_{(i\ j\ k)}(v_i, v_j)=\kappa_{g(i\ j\ k)g^{-1}}(v_{g(i)},v_{g(j)})-
\ ^g\alpha_{ijk}+\alpha_{ijk}.\]
In particular, for $g=(i\ 1)(2\ j)(3\ k)$,
$\kappa_{(i\ j\ k)}(v_i, v_j) = \kappa_{(1\ 2\ 3)}(v_1, v_2)-\alpha_{123}+\alpha_{ijk},
$
and \cref{kappa 3-cycles} implies the result.
\end{proof}
\vspace{1ex}
\begin{cor}\label{kappa def implies equalities}
If $\kappa$ is defined as in \cref{kappa def},
then
for all distinct $i,j,k$,
\[
\kappa_{(i\ j\ k)}(v_i, v_j)
=
\kappa_{(i\ j\ k)}(v_j, v_k)
=
\kappa_{(i\ j\ k)}(v_k, v_i)
=
\kappa_{(i\ k\ j)}(v_i, v_k)=-\kappa_{(i\ k\ j)}(v_k, v_i)\, .
\]
\end{cor}
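\cref{kappa def implies equalities} ultimately rests on the fact that $\alpha_{ijk}=\alpha_{ij}\alpha_{jk}+\alpha_{jk}\alpha_{ki}+\alpha_{ki}\alpha_{ij}$ is invariant under all six permutations of its indices whenever $\alpha_{ji}=-\alpha_{ij}$: cyclic invariance is immediate, and a transposition of two indices negates both factors in each product. A minimal check in Python, with illustrative values:

```python
from itertools import permutations

# any antisymmetric a_{ij} on three indices (values are illustrative)
a = [[0, 5, -2], [-5, 0, 7], [2, -7, 0]]

def triple(i, j, k):
    return a[i][j]*a[j][k] + a[j][k]*a[k][i] + a[k][i]*a[i][j]

# a_{ijk} is fully symmetric in (i, j, k)
base = triple(0, 1, 2)
assert all(triple(i, j, k) == base for i, j, k in permutations(range(3)))
```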
\section{Classification}
We now give the classification of
Drinfeld Hecke algebras for the
group $G =\mathfrak{S}_n$ acting on $V\cong\mathbb F^n$
by permuting basis vectors $v_1,\ldots, v_n$
of $V$.
Recall that $2\neq \text{char}(\mathbb F)\geq 0$.
We assume $n> 2$ in this section
for ease with notation;
see Section $8$ for the cases $n=1,2,3$.
We show that every Drinfeld Hecke algebra
has relations of a particular form
based on parameters $a_{ij}, b_k$ and $c$
in $\mathbb F$.
\begin{definition}
\label{definition of Hf}
For any ordered tuple of scalars
in $\mathbb F$,
$$\mu=(a_{ij}, b_{k}, c:
\ 1\leq i < j\leq n,\
1\leq k<n ),
$$
let
$\mathcal{H}_{\mu}$ be the $\mathbb F$-algebra generated
by $V$ and $\mathbb F G$ with relations
\begin{align}
gv_i- \, ^{g}v_ig
&=
\hspace{-1ex}
\sum_{k=0}^{g(i)-i+n-1}
b_{i+k}\, g
+
\sum_{j \neq i}
(a_{ij}-\, a_{g(i)\, g(j)})
\ g(i\ j)
\text{ for } g\in G, 1\leq i\leq n,
\label{equation: classify lambda}
\\
v_i v_j-v_jv_i
&=
\sum_{k\neq i,j}(c-a_{123}+a_{ijk})
\big((i\ j\ k)-(i\ k\ j)\big)
\quad\text{ for } 1\leq i\neq j\leq n
\label{equation: classify kappa}
\end{align}
where\ $a_{ijk}=
a_{ij}a_{jk}+
a_{jk}a_{ki}+
a_{ki}a_{ij}$,
$a_{ji}=-a_{ij}$,
and $b_n=-(b_1+\cdots + b_{n-1})$
with indices on $b$ taken
modulo $n$.
\end{definition}
We show that the above algebras $\mathcal{H}_{\mu}$
make up the complete set
of Drinfeld Hecke algebras:
\begin{thm}
\label{classification}
For
any Drinfeld Hecke algebra
$\mathcal{H}_{\lambda,\kappa}$
for $G=\mathfrak{S}_n$ acting on $V\cong\mathbb F ^n$ by permutations,
there is an ordered tuple
$\mu=(a_{ij}, b_{k}, c:1\leq i<j\leq n,1\leq k<n)$ of scalars
so that
$\mathcal{H}_{\lambda,\kappa}
=\mathcal{H}_{\mu}$.
Conversely, for any choice $\mu$ of scalars,
$\mathcal{H}_{\mu}$ is a Drinfeld Hecke algebra.
\end{thm}
\begin{proof}
First assume $\mathcal{H}_{\lambda, \kappa}$
satisfies the PBW property
for some parameters
$\lambda$ and $\kappa$.
Let $\mathcal{H}_{\mu}$
be the algebra of
\cref{definition of Hf}
for $\mu=(a_{ij}, b_k, c)$ defined
by
$$
\begin{aligned}
a_{ij}
&=&&
\tfrac{1}{4}\lambda_1\big((i\ j), v_i-v_j\big)
&& \ \ \text{ for } 1\leq i<j\leq n,
\\
b_k
&=&&
\tfrac{1}{2}\lambda_{(k\ k+1)}\big((k\ k+1), v_k-v_{k+1}\big)
&&\ \ \text{ for } 1\leq k<n,
\\
c
&=&&
\kappa_{(1\ 2\ 3)}(v_1, v_2),
&&\ \ \text{ and }
\\
a_{ijk}
&=&&
a_{ij}a_{jk}+a_{jk}a_{ki}+a_{ki}a_{ij}
.
\end{aligned}
$$
Using ~\cref{PBWconditions},
we replace $\alpha_{ij}$ and $\beta_k$ with $a_{ij}$ and $b_k$, respectively, in ~\cref{ld def}
to see that $\lambda$
defines the right-hand side of
relation
\cref{equation: classify lambda}.
Lemma~\ref{kappa invariance}
implies that
\[\kappa_{g(i\ j\ k)g^{-1}}(v_{g(i)}, v_{g(j)})
-\kappa_{(i\ j\ k)}(v_i, v_j) =\ ^ga_{ijk}-a_{ijk}
\quad\text{ for all }
g\in G.
\]
Let $g=(1\ i)(2\ j)(3\ k)$ to see that
\[
\kappa_{(1\ 2\ 3)}(v_1, v_2)-\kappa_{(i\ j\ k)}(v_i, v_j)
=
a_{123}-a_{ijk}.
\]
\cref{kappa 3-cycles}
then implies that $\kappa$
gives the right-hand side of
\cref{equation: classify kappa}.
Thus
$\mathcal{H}_{\lambda, \kappa}=\mathcal{H}_{\mu}$.
Conversely, fix an algebra $\mathcal{H}_{\mu}$
and
set $\lambda(g,v_i)$ equal to the right-hand side of \cref{equation: classify lambda} and
set $\kappa(v_i, v_j)$ equal to the right-hand side of \cref{equation: classify kappa} for all $1\leq i\neq j\leq n$
and $g$ in $G$.
Extend $\lambda$ and $\kappa$ to linear parameter functions
$\lambda: \mathbb F G\otimes V \rightarrow \mathbb F G$,
$\kappa: V\otimes V
\rightarrow \mathbb F G$
so that $\mathcal{H}_{\mu}
=\mathcal{H}_{\lambda,\kappa}$.
We show that $\mathcal{H}_{\lambda,\kappa}$
is a Drinfeld Hecke algebra
by verifying the five
PBW Conditions of \cref{PBWconditions}. Results from previous sections
(with $\alpha_{ij}=a_{ij}$
and $\beta_k=b_k$) apply.
\cref{ld def} implies that PBW Conditions~(1) and~(3) hold.
PBW Condition~(2) is equivalent to the equation in Lemma~\ref{kappa invariance}; we examine the coefficient of
each $g(i\ j\ k)$
and find that
\[
^g
\kappa_{(i\ j\ k)}(v_i, v_j)
-\kappa_{(i\ j\ k)}(v_i, v_j)
=(\ ^ga_{ijk}+c-a_{123})
-(a_{ijk}+c-a_{123})
=\ ^ga_{ijk}-a_{ijk}
\]
as desired. The coefficients of $g(i\ k\ j)$ are likewise equal in this equation
and every other term trivially vanishes, giving
PBW Condition~(2).
PBW Condition~(4) is trivial except for
$g=(i\ j\ k)$ and $v_i, v_j$ and $v_k$. By \cref{kappa def implies equalities},
\[
\begin{aligned}
&\kappa_{(i\ j\ k)}(v_i, v_j)(^gv_k- v_k)+\kappa_{(i\ j\ k)}(v_j, v_k)(^gv_i- v_i)+\kappa_{(i\ j\ k)}(v_k, v_i)(^gv_j- v_j)\\
&\ \
=\kappa_{(i\ j\ k)}(v_i, v_j)(v_i- v_k)+\kappa_{(i\ j\ k)}(v_j, v_k)(v_j- v_i)+\kappa_{(i\ j\ k)}(v_k, v_i)(v_k- v_j)=0.\\
\end{aligned}\]
To verify PBW Condition~(5) with $v_i, v_j$, and $v_k$, it suffices to consider
the coefficients of
three group elements,
namely, $(i\ k\ j\ m)$, $(i\ k)$, and $(i\ j\ k)$ with indices
distinct,
as other terms all vanish.
The coefficient of $(i\ k\ j\ m)$
vanishes: one need only
expand
\[
\begin{aligned}
\kappa&_{(i\ j\ m)}(v_i, v_j)(a_{ki}-a_{kj})
-\kappa_{(i\ k\ j)}(v_i, v_k)(a_{jm}-a_{im})\\
&\ \ \ -\kappa_{(j\ k\ m)}(v_j, v_k)(a_{im}-a_{ik})
-\kappa_{(i\ m\ k)}(v_k, v_i)(a_{jk}-a_{jm})\\
&=
(c-
a_{123}+a_{ijm})
(a_{ki}-a_{kj})
-(c-a_{123}+a_{ikj})(a_{jm}-a_{im})\\
&\ \ \ \ -(c-a_{123}+a_{jkm})(a_{im}-a_{ik})
-(c-a_{123}
+a_{imk})(a_{jk}-a_{jm})
\\
&=a_{ki}(a_{ijm}-a_{jkm})+a_{kj}(a_{imk}-a_{ijm})+a_{jm}(a_{imk}-a_{ikj})+a_{mi}(a_{jkm}-a_{ikj})=0.
\end{aligned}
\]
The coefficient of $(i\ k)$ is just
\[\kappa_{(i\ j\ k)}(v_i, v_j)
\big((a_{ij}-a_{jk})+(a_{ji}-a_{kj})-(a_{kj}-a_{ji})-(a_{jk}-a_{ij})\big)
=0.\]
The coefficient of $(i\ j\ k)$
vanishes as well
by Lemma~\ref{gg fixed}:
$$
\kappa_{(i\ j\ k)}(v_i, v_j)
\cdot
\lambda_{(i\ j\ k)}\big((i\ j\ k), v_i+ v_j+ v_k\big)
=
0.
$$
Hence $\mathcal{H}_{\mu}$ satisfies the PBW property by
\cref{PBWconditions}.
\end{proof}
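The vanishing of the coefficient of $(i\ k\ j\ m)$ in the proof above is a polynomial identity in the antisymmetric scalars $a_{ij}$. It can be confirmed numerically as in the following Python sketch (an illustration only: 0-based indices, random antisymmetric integer values):

```python
import random

random.seed(0)
N = 5  # index range; the identity is checked for all distinct i, j, k, m

def triple(a, i, j, k):
    # a_ijk = a_ij a_jk + a_jk a_ki + a_ki a_ij
    return a[i][j]*a[j][k] + a[j][k]*a[k][i] + a[k][i]*a[i][j]

for trial in range(20):
    # random antisymmetric integer matrix a_{ij}
    a = [[0]*N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            a[i][j] = random.randint(-9, 9)
            a[j][i] = -a[i][j]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                for m in range(N):
                    if len({i, j, k, m}) == 4:
                        # the coefficient of (i k j m) from the proof
                        expr = (a[k][i]*(triple(a, i, j, m) - triple(a, j, k, m))
                              + a[k][j]*(triple(a, i, m, k) - triple(a, i, j, m))
                              + a[j][m]*(triple(a, i, m, k) - triple(a, i, k, j))
                              + a[m][i]*(triple(a, j, k, m) - triple(a, i, k, j)))
                        assert expr == 0
```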
\begin{cor}
For $n>2$, the Drinfeld Hecke algebras for $\mathfrak{S}_n$ acting on $\mathbb F^n$ constitute a family defined by $\tfrac{1}{2}(n^2+n)$ parameters
in $\mathbb F$.
\end{cor}
\section{Drinfeld Hecke algebras for
the symmetric group in low dimensions}
\label{lowdim}
We now give the Drinfeld Hecke algebras more explicitly
for $G=\mathfrak{S}_n$ acting in dimensions $n\leq 3$
by permuting basis vectors $v_1,\ldots, v_n$
of $V\cong\mathbb F^n$. They all arise from an invariant parameter
$\kappa$.
\subsection*{One dimension}
The Drinfeld Hecke algebras $\mathcal{H}_{\lambda, \kappa}$
for $n=1$ are all trivial:
Theorem \ref{PBWconditions} forces
$\kappa\equiv 0$ and $\lambda \equiv 0$
and $$
\mathcal{H}_{\lambda, \kappa}
= \mathcal{H}_{0, 0
} =
\mathbb F[v_1]\# G.
$$
\subsection*{Two dimensions}
The Drinfeld Hecke algebras $\mathcal{H}_{\lambda, \kappa}$
for
$n=2$ constitute a $2$-parameter family
given by
$$
\begin{aligned}
\mathcal{H}_{a,b}=
\mathbb F\langle
v,g: v\in V
\rangle
/
\big(
&g^2-1,\ \
v_1v_2-v_2v_1, \\
&(1\ 2)v_1- \, v_2(1\ 2) - a-b(1\ 2),\\
&(1\ 2)v_2- \, v_1(1\ 2) + a+b(1\ 2)
\big),
\end{aligned}
$$
for arbitrary scalars $a, b$
in $\mathbb F$.
This follows from
Theorem \ref{PBWconditions} which forces
$\lambda((1\ 2), v_1)=-\lambda((1\ 2), v_2)$ and $\kappa(v_1, v_2)=0$ as $\kappa(v_1, v_2)= \kappa(v_2, v_1)$ with $\text{char}(\mathbb F)\neq 2$.
\begin{remark}{\em Note that if
we were to allow char$(\FF)=2=n$, we would find instead
a $4$-parameter
family of Drinfeld Hecke algebras:
for arbitrary $a,b,c,d$ in $\mathbb F$,
$$
\begin{aligned}
\mathcal{H}_{a,b,c,d}
=\mathbb F\langle
v,g: v\in V
\rangle
/
\big(
&g^2-1,\ \
v_1v_2-v_2v_1-c-d(1\ 2),\\
&(1\ 2)v_1- \, v_2(1\ 2) - a-b(1\ 2),\\
&(1\ 2)v_2- \, v_1(1\ 2) - a-b(1\ 2)
\big).
\end{aligned}
$$
}
\end{remark}
\subsection*{Three dimensions}
The Drinfeld Hecke algebras $\mathcal{H}_{\lambda, \kappa}$
for $n=3$
constitute a $6$-parameter family:
\begin{prop}
Let
$\lambda: \mathbb F G\otimes V \rightarrow \mathbb F G$
and $\kappa: V
\otimes V \rightarrow \mathbb F G$
be linear parameter functions with $\kappa$ alternating.
The algebra $\mathcal{H}_{\lambda,\kappa}$
generated by
a basis $v_1, v_2, v_3$ of $V$ and $\mathbb F G$ with relations
$$
\begin{aligned}
v_iv_j-v_jv_i=
\kappa(v_i, v_j)
\quad\text{ and }\quad
gv_i-\ ^gv_ig=\lambda(g,v_i)
\ \ \text{ for}\ g\in G
\end{aligned}
$$
satisfies the PBW
property if and only if
there are scalars $a_1, a_2, a_3, b_1, b_2, c$ in $\FF$
with
$$
\kappa(v_i, v_j)
=
c\big((i\ j\ k)-(i\ k\ j)\big)
\quad\text{ for all }
i,j,k\ \text{ distinct}
$$
and
$\lambda$ is defined
by
\begin{small}
\[\begin{aligned}
&\lambda((1\ 2), v_1)=a_1 +b_1(1\ 2)-(a_2+a_3)(1\ 3\ 2),
&&\lambda((2\ 3), v_1)=(a_1+a_3)\big((1\ 3\ 2)+(1\ 2\ 3)\big),\\
&\lambda((1\ 2), v_2)=-a_1-b_1(1\ 2)+(a_2+a_3)(1\ 2\ 3),
&&\lambda((2\ 3), v_2)=a_2+b_2(2\ 3)-(a_1+a_3)(1\ 3\ 2),\\
&\lambda((1\ 2), v_3)=(a_2+a_3)\big((1\ 3\ 2)-(1\ 2\ 3)\big),
&&\lambda((2\ 3), v_3)=-a_2-b_2(2\ 3)+(a_1+a_3)(1\ 2\ 3),\\
&\lambda((1\ 3), v_2)=(a_1+a_2)\big((1\ 3\ 2)-(1\ 2\ 3)\big),
&&
\lambda((1\ 3), v_1)=-a_3-b_3(1\ 3)+(a_1+a_2)(1\ 2\ 3),\\
&\lambda((1\ 3), v_3)=a_3-(a_1+a_2)(1\ 3\ 2)+b_3(1\ 3),
\end{aligned}
\]
\end{small}
\begin{small}
\[
\begin{aligned}
&\lambda((1\ 2\ 3), v_1)=(a_1-a_2)(1\ 3)-(a_3+a_2)(2\ 3)+b_1(1\ 2\ 3),\ \ \ \
&& \\
&
\lambda((1\ 2\ 3), v_2)=(a_2+a_3)(1\ 2)+(a_2-a_1)(1\ 3)+b_2(1\ 2\ 3),
&& \\
&\lambda((1\ 2\ 3), v_3)=(a_2+a_3)(2\ 3)-(a_2+a_3)(1\ 2)+b_3(1\ 2\ 3),
&& \\
&\lambda((1\ 3\ 2), v_1)=(a_2-a_3)(1\ 2)+(a_1-a_3)(2\ 3)-b_3(1\ 3\ 2), \\
& \lambda((1\ 3\ 2), v_2)=(a_3-a_1)(2\ 3)+(a_2-a_1)(1\ 3)-b_2(1\ 3\ 2),\\
&\lambda((1\ 3\ 2), v_3)=(a_1-a_2)(1\ 3)+(a_3-a_2)(1\ 2)-b_1(1\ 3\ 2)\, ,
\end{aligned}
\]
\end{small}
\hspace{-1.5ex}
where $b_3=-(b_1+b_2)$.
\end{prop}
\begin{proof}
\cref{classification}
implies that
$\kappa_{(i\ j\ k)}(v_i, v_j)$ is determined by
$a_{ijk}-a_{123}$, but for distinct $i, j, k$, $a_{ijk}=a_{123}$
since $a_{ij}=-a_{ji}$, which implies that $\kappa$ is defined as indicated. The parameter $\lambda$ is defined as in \cref{classification} with $a_1=a_{12}, a_2=a_{23}$, and $a_3=a_{31}$.
\end{proof}
\section{Commutativity up to an
invariant parameter}
We saw in \cref{nonmodular}
that Drinfeld Hecke algebras in the nonmodular setting all arise from a
parameter $\kappa$ which is $G$-{\em invariant}.
Again, let $G=\mathfrak{S}_n$ act on $V\cong\mathbb F^n$
by permuting basis
vectors $v_1, \ldots, v_n$ of $V$.
In \cref{lowdim}, we observed that
the Drinfeld Hecke algebras
in the modular setting in low dimension
($n=1,2,3$)
also all arise from a parameter
$\kappa$ which is $G$-invariant.
By~\cref{classification},
other Drinfeld Hecke algebras
$\mathcal{H}_{\lambda,\kappa}$
with more general parameters
$\lambda$ and $\kappa$ arise,
but here
we investigate those
with $\kappa$ invariant.
We assume $n> 2$ in this section.
Note that by PBW Condition~(2), the invariance
of $\kappa$ forces
$$
0=
\lambda\bigl(\lambda(g,v),u\bigr)-\lambda\bigl(\lambda(g,u),v\bigr)
\quad\text{ for}\quad
u,v\in V,\ g\in G\, .
$$
Again, we take indices on $b$ modulo $n$.
\begin{cor}
\label{kappainvariant}
An algebra $\mathcal{H}_{\lambda, \kappa}$ satisfies the PBW property with $\kappa$ invariant
if
there are
scalars $c$, $d$,
$a_{1i}$, $b_j$ in $\mathbb F$
for
$1< i \leq n$,
$1\leq j <n$,
with the $a_{1i}$ distinct, such that
$$
\begin{aligned}
\kappa(v_i, v_j)
&= \
c \sum_{k\neq i, j} \bigl((i\ j\ k)-(i\ k\ j)\bigr)
&&\text{ for }\quad
1\leq i\neq j \leq n ,
\ \text{ and }
\\
\lambda(g, v_i)
&=
\sum_{k=0}^{g(i)-i+n-1}b_{i+k}\, g\
+
\sum_{j\neq i} (a_{ij}-a_{g(i)g(j)})
\, g(i\ j)
&&\text{ for }\quad
1\leq i\leq n\, ,
\end{aligned}
$$
where $a_{ij}=\frac{d+a_{1i}a_{1j}}{a_{1i}-a_{1j}}$ for $i,j \neq 1$, $i\neq j$, $a_{1i}=-a_{i1}$,
and $b_n=-(b_1+\cdots +b_{n-1})$.
\end{cor}
\begin{proof}
Consider the ordered tuple
$\mu=(a_{ij},b_k, c: 1\leq i< j \leq n, 1\leq k <n)$
and let $\mathcal{H}_{\mu}$
be the PBW algebra of \cref{classification}.
A calculation confirms that $a_{ijk}=a_{ij}a_{jk}+a_{jk}a_{ki} +
a_{ki}a_{ij}=d$ for all distinct $i,j,k$
and thus $a_{ijk}-a_{123}=0$,
implying that $\mathcal{H}_{\mu}
=\mathcal{H}_{\lambda,\kappa}$.
Thus $\mathcal{H}_{\lambda,\kappa}$
satisfies the PBW property
and one may check the invariance of
$\kappa$ directly.
\end{proof}
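As a sanity check of the computation in the proof (not part of the argument), the identity $a_{ijk}=a_{ij}a_{jk}+a_{jk}a_{ki}+a_{ki}a_{ij}=d$ can be verified in exact rational arithmetic; the sample values of $d$ and the $a_{1i}$ below are arbitrary:

```python
from fractions import Fraction
from itertools import permutations

def a(d, a1, i, j):
    """a_{ij} with a_{1i} prescribed, a_{i1} = -a_{1i}, and
    a_{ij} = (d + a_{1i} a_{1j}) / (a_{1i} - a_{1j}) for i, j != 1."""
    if i == 1:
        return a1[j]
    if j == 1:
        return -a1[i]
    return (d + a1[i] * a1[j]) / (a1[i] - a1[j])

# arbitrary sample data: d and pairwise distinct a_{1i}, in exact arithmetic
d = Fraction(5)
a1 = {2: Fraction(1), 3: Fraction(2), 4: Fraction(3), 5: Fraction(7, 2)}

for i, j, k in permutations([1, 2, 3, 4, 5], 3):
    a_ijk = (a(d, a1, i, j) * a(d, a1, j, k)
             + a(d, a1, j, k) * a(d, a1, k, i)
             + a(d, a1, k, i) * a(d, a1, i, j))
    assert a_ijk == d
print("a_ijk = d for all distinct i, j, k")
```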
\begin{prop}\label{golden rule}
The algebra $\mathcal{H}_{\lambda,0}$ satisfies the PBW
property
for
$\lambda:\FF G \otimes V\rightarrow
\FF G$
defined up to scalar in $\mathbb F$ by
$$
\lambda(g, v_i) =
(g(i)-i)\, g
\quad\text{ for all }
g\in G,
\ 1\leq i\leq n.
$$
\end{prop}
\begin{proof}
In \cref{classification}, set $a_{ij}=0$, $c=0$, and $b_k=1$
for all $1\leq i<j
\leq n$, $1\leq k<n$.
Then $\kappa$ vanishes and $\lambda(g, v)$ is supported on $g$ with
\[\lambda(g, v_i)
=\sum_{k=0}^{g(i)-i+n-1}b_i\, g
=(g(i)-i)\, g,\]
as $b_n=-(n-1)$.
For a scalar multiple, just choose the same
constant for all
$b_k$, $k<n$.
\end{proof}
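The coefficient computation in this proof can be checked by brute force over a small symmetric group; the snippet below (with $n=4$, an arbitrary choice) confirms that $\sum_{k=0}^{g(i)-i+n-1} b_{i+k}=g(i)-i$ when $b_k=1$ for $k<n$ and $b_n=-(n-1)$, with indices on $b$ taken modulo $n$:

```python
from itertools import permutations

n = 4  # arbitrary choice of symmetric group S_n
b = {k: 1 for k in range(1, n)}
b[n] = -(n - 1)  # b_n = -(b_1 + ... + b_{n-1})

def idx(m):
    """Reduce an index to its representative in {1, ..., n} (indices mod n)."""
    return ((m - 1) % n) + 1

def coeff(g, i):
    """Coefficient of g in lambda(g, v_i) = sum_{k=0}^{g(i)-i+n-1} b_{i+k} g."""
    return sum(b[idx(i + k)] for k in range(g[i] - i + n))

for perm in permutations(range(1, n + 1)):
    g = {i: perm[i - 1] for i in range(1, n + 1)}
    for i in range(1, n + 1):
        assert coeff(g, i) == g[i] - i
print("lambda(g, v_i) = (g(i) - i) g for all g in S_4")
```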
Note that the converse of
\cref{golden rule}
fails: There are examples
of PBW algebras
$\mathcal{H}_{\lambda, 0}$
with $\lambda$ taking other forms.
\begin{cor}
If $\mathcal{H}_{\lambda, 0}$
satisfies the PBW property,
then so does
$\mathcal{H}_{\lambda,\kappa}$
for
$$
\kappa:V \otimes V\rightarrow
\FF G\quad
\text{ defined by}\quad
\kappa (v_i, v_j) =
\sum_{k\neq i, j} \bigl((i\ j\ k)-(i\ k\ j)\bigr)
\quad\text{ for }\quad
i\neq j.
$$
\end{cor}
\begin{proof}
By~\cref{classification},
there is an ordered tuple
$\mu=(a_{ij}, b_k, c
:1\leq i< j\leq n,
1\leq k <n)$
so that $\mathcal{H}_{\mu}=\mathcal{H}_{\lambda,0}$.
Observe that
$c=a_{123}-a_{ijk}$
for all distinct $i,j,k$.
Set $c'=c+1$
and consider the ordered tuple
$\mu'=(a_{ij}, b_k, c'
:1\leq i< j\leq n,
1\leq k <n)$.
The algebra
$\mathcal{H}_{\mu'}$
satisfies the PBW
property by~\cref{classification}.
Since
$c'-a_{123}+a_{ijk}=1$
for all distinct $i,j,k$,
$\mathcal{H}_{\mu'}=\mathcal{H}_{\lambda,\kappa}$
for $\kappa$ as given in the statement,
and $\mathcal{H}_{\lambda,\kappa}$
satisfies the PBW property.
\end{proof}
We end by highlighting a
handy $2$-parameter family of Drinfeld Hecke algebras.
\begin{cor}
The algebra generated by the vectors $v_i$ in
$V=\FF^n$ together with $\FF G$
with relations
$$
g v_i -\ ^g v_i\, g
= a \big(g(i)-i\big)g
\ \text{ and }\
v_i v_j - v_j v_i =
b \sum_{k\neq i, j}
\bigl((i\ j\ k)-(i\ k\ j)\bigr)
\quad\text{ for all } g\in G, i\neq j
$$
satisfies the PBW property
for any $a,b$ in $\mathbb F$.
\end{cor}
\begin{proof}
Use \cref{classification} with $a_{ij}=0$, $c=b$, and $b_k=a$ for all $1\leq i<j\leq n$, $1\leq k<n$.
\end{proof}
\vspace{2ex}
\section{Acknowledgements}
The authors thank the referee for a very
careful reading of the article
and many helpful suggestions.
\section{Introduction}
For a $k$-dimensional manifold $M^k$, a family of smooth embeddings $\phi_t:M^{k} \rightarrow \mathbb{R}^{n}$ for $t\in (a,b)$ is said to evolve by mean curvature if it satisfies the equation $\frac{d}{dt}\phi_t(x)=\vec{H}(\phi_t(x))$, where $\vec{H}$ is the mean curvature vector. If a compact submanifold $M \subseteq \mathbb{R}^{n}$ is of type $C^2$, it follows from standard parabolic PDE theory that there exists a unique mean curvature flow starting from $M$ for some finite maximal time $T$. \\
The question of mean curvature flow (and geometric flows in general) with rough initial data, i.e. when the $C^2$ assumption is weakened, has been researched extensively (see e.g. \cite{EH2},\cite{EH},\cite{Wan},\cite{Sim2},\cite{KL},\cite{Lau},\cite{Her}). For the co-dimension one, arbitrary dimensional mean curvature flow, two results form the forefront in that regard: In the case that $M$ is merely Lipschitz, short time existence was proved by Ecker and Huisken in the celebrated paper \cite{EH}. More recently, under the assumption that the initial set is $(n-1)$-dimensional $(\de,R)$ Reifenberg flat (see Definition \ref{Reif_def}) with $\de$ sufficiently small, short time existence and uniqueness was shown in \cite{Her} (see also Theorem \ref{main_thm}). Those two results are very different in nature; the result in \cite{EH} allows \textit{any} Lipschitz submanifold as an input, thereby also dictating a local graph structure and a finite local area. The result in \cite{Her} allows some \textit{higher Hausdorff dimensional} sets which are not graphical at any scale (such as variants of the Koch snowflake) to be taken as inputs, but it requires $\de$ to be \textit{small}. Note however that the Lipschitz assumption implies the Reifenberg property, with the Reifenberg parameter $\de$ given by the Lipschitz constant.\\
In the high co-dimensional case, the optimal known result, which is due to Wang, speaks about the same objects as Ecker-Huisken's result, but has the smallness character of the result in \cite{Her}. More precisely, it was shown in \cite{Wan} that there exists some $\de_0$ such that if $M$ is a uniformly locally Lipschitz $k$-dimensional submanifold of $\mathbb{R}^n$, with Lipschitz constant less than $\de_0$ (i.e. there exists some $R>0$ such that every point has a ball of radius $R$ around it on which the submanifold is a $\de_0$-Lipschitz graph), then there exists a mean curvature flow emanating from it (in light of the example in \cite{OL}, the smallness assumption in high co-dimension is necessary). By the discussion above, the high co-dimensional generalization of the result in \cite{Her}, which will be stated shortly, will form a full (qualitative) generalization of the result in \cite{Wan}. To state this result, we first need to define the objects to which it applies.
\begin{definition}[\cite{Reif}]\label{Reif_def}
Given $\de>0$, $R>0$ and $k\in \mathbb{N}$, a compact connected set $X \subseteq \mathbb{R}^{n}$ is called \textbf{$k$-dimensional $(\de,R)$ Reifenberg flat} if for every $x\in X$ and $r<R$ there exists a $k$-dimensional plane $P_{x,r}$ passing through $x$ such that
\begin{equation}
d_H(B(x,r)\cap P_{x,r}, B(x,r)\cap X) \leq \de r.
\end{equation}
Here $d_H$ is the Hausdorff distance.
\end{definition}
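To illustrate Definition \ref{Reif_def}: the unit circle in $\mathbb{R}^2$ is $1$-dimensional Reifenberg flat with $\de$ of order $r/2$ at scale $r$, taking $P_{x,r}$ to be the tangent line. The following sketch (the base point, scale, and sampling resolution are arbitrary choices) approximates the two-sided Hausdorff distance at one base point and scale:

```python
import math

def reifenberg_deficit(r, n_samples=4000):
    """For the unit circle X, base point x = (1, 0), scale r, and candidate
    plane P = tangent line {x1 = 1}: approximate the Hausdorff distance
    d_H(B(x,r) cap P, B(x,r) cap X) by dense sampling."""
    x = (1.0, 0.0)
    circle = [(math.cos(2 * math.pi * k / n_samples),
               math.sin(2 * math.pi * k / n_samples)) for k in range(n_samples)]
    near = [p for p in circle if math.dist(p, x) <= r]  # circle points in B(x,r)
    # curve -> line: distance from a circle point to the line {x1 = 1}
    d1 = max(abs(1.0 - p[0]) for p in near)
    # line -> curve: sample the tangent segment inside B(x,r)
    d2 = 0.0
    for j in range(201):
        t = -r + 2 * r * j / 200
        d2 = max(d2, min(math.dist((1.0, t), p) for p in near))
    return max(d1, d2)

r = 0.2
dH = reifenberg_deficit(r)
# d_H is of order r^2/2, i.e. the Reifenberg parameter delta = d_H/r ~ r/2
assert 0.3 * r * r <= dH <= 0.65 * r * r
print(f"d_H ~ {dH:.4f}, delta = d_H/r ~ {dH/r:.3f} (compare r/2 = {r/2})")
```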
Any $C^2$ $k$-submanifold is easily seen to be $k$-dimensional $(\de,R)$ Reifenberg flat for some $\de,R>0$. Every uniformly locally Lipschitz $k$-submanifold of $\mathbb{R}^n$ is trivially $k$-dimensional $(\de,R)$ Reifenberg flat as well. The Reifenberg condition is however general enough to include some sets with Hausdorff dimension larger than $k$ (see \cite{Tor},\cite{Her}). \\
Another notion that one needs in order to discuss evolution of non-smooth initial data is that of a weak solution to the $k$-dimensional mean curvature flow. This weak mean curvature flow is called the level set flow, as its original definition, due to Evans and Spruck \cite{ES1} and Chen-Giga-Goto \cite{CGG} (in the co-dimension one case), was via viscosity solutions for the equation of a level set of a function evolving by mean curvature. In co-dimension one, a geometric, avoidance principle based, equivalent definition was given in \cite{Ilm3}. In high co-dimension, while a viscosity solution based definition was given in \cite{AS}, there is no effective geometric definition of weak mean curvature flow (of arbitrary sets), as even smooth flows cease to satisfy avoidance. The definition of the level set flow in \cite{AS} is technical, and will thus be postponed to Section \ref{uniq_sec}. For now, it suffices to know that the $k$-dimensional level set flow is a semi-group action of $\mathbb{R}_+$ (time) on compact sets $X\subseteq \mathbb{R}^n$, $(t,X)\mapsto X_t$ which, starting from an initial $k$-dimensional submanifold, coincides with smooth $k$-dimensional mean curvature flow, for as long as the latter is defined. Up to the specificities of this association, which will be defined formally (and further investigated) in Section \ref{uniq_sec}, we can now state our main theorem.
\begin{theorem}\label{main_thm}
There exist some $\de_0,c_0>0$ such that if $X$ is $k$-dimensional $(\de,R)$-Reifenberg flat in $\mathbb{R}^n$ for $0<\de<\de_0$, then the $k$-dimensional level set flow (in the sense of \cite{AS}) emanating from $X$, $(X_t)_{t\in (0,c_0R^2)}$, is a smooth $k$-dimensional mean curvature flow, which further attains the initial value $X$ in the following sense:
\begin{equation}
\lim_{t \rightarrow 0}d_{H}(X,X_t)=0.
\end{equation}
In fact, there exist some $c_1,c_2>0$ with $c_1^2<\frac{1}{8}$ and $\frac{1}{4c_1}-c_2>\sqrt{2k}$ such that the following estimates on the evolution hold:
\begin{enumerate}[label=\roman*.]
\item $|A(t)| \leq \frac{c_1}{\sqrt{t}}$.
\item $d_H(X_t,X) \leq c_2\sqrt{t}$.
\item $X_t$ has a tubular neighborhood of size $\frac{\sqrt{t}}{4c_1}$.
\end{enumerate}
\end{theorem}
\begin{remark}
Recall that the level set flow should be thought of as (and in some regards is) ``the set of all possible evolutions'' (see \cite[Sec. 10]{Ilm} and \cite[Thm. 5.4]{AS}). Thus, in addition to existence of a smooth mean curvature flow emanating from $k$-dimensional, sufficiently Reifenberg flat sets, we get uniqueness in the strongest possible sense.
\end{remark}
\begin{remark}
In light of the discussion preceding the statement of the theorem, this qualitatively generalizes the result from \cite{Wan}. As the $\de_0$ of Theorem \ref{main_thm} is smaller than the one from \cite{Wan}, the generalization is only qualitative, i.e. there are still initial submanifolds for which the result in \cite{Wan} is applicable while Theorem \ref{main_thm} is not.
\end{remark}
\vspace{5 mm}
The proof of Theorem \ref{main_thm} is naturally divided into two parts: existence, in which we will construct a mean curvature flow satisfying estimates $i-iii$ of Theorem \ref{main_thm}, and uniqueness, in which we will show that the resulting flow is actually the level set flow. The proof of the existence part goes along the same lines as the existence part of the main theorem in \cite{Her} and will be described in Section \ref{exist_sec}, where the parts that are identical will only be stated, referring to \cite{Her} for the proof. The parts which require work beyond what was done in the co-dimension one case will be treated in full. The proof of uniqueness, which occupies Section \ref{uniq_sec}, is completely different from the one in \cite{Her}. Back there, one could work almost entirely in the realm of smooth solutions, utilizing inward and outward approximations \cite[Cor. 2.11]{Her} and Ilmanen's avoidance based definition of the level set flow \cite{Ilm3}. The uniform estimates on the evolution of the approximations (see \cite[Thm. 1.14]{Her} and Theorem \ref{un_est}) coupled with a separation estimate \cite[Thm. 1.20]{Her} were then used to show that the flows emanating from the inward and outward approximations remain very close, providing barriers to the level set flow. In high co-dimension there is no notion of ``inside'' and ``outside'', and, as discussed above, there is no avoidance based definition of the level set flow. In order to show uniqueness (and in fact, even to define it) we will therefore have to work entirely in the viscosity solution realm. While in co-dimension one the viscosity solution definition of the level set flow is very well known and standard, its high co-dimensional analogue, introduced in \cite{AS}, is far less known. Part of our work will consist of exploring it a bit further than what was done in \cite{AS}.
The short time uniqueness of the flow is an immediate corollary of the existence part of Theorem \ref{main_thm} and the following general strong smooth uniqueness criterion for the level set flow, which is of interest in its own right.
\begin{theorem}\label{smooth_uniq}
Let $X\subset \mathbb{R}^n$ be a connected compact set, let $c_1,c_2>0$ be constant and let $(X_t)_{t\in(0,T]}$ be a smooth $k$-dimensional mean curvature flow, satisfying
\begin{enumerate}[label=\roman*.]
\item $|A(t)| \leq \frac{c_1}{\sqrt{t}}$.
\item $d_H(X_t,X) \leq c_2\sqrt{t}$.
\item $X_t$ has a tubular neighborhood of size $\frac{\sqrt{t}}{4c_1}$.
\end{enumerate}
Assume further that $c_1^2\leq\frac{1}{8}$ and $\frac{1}{4c_1}-c_2>\sqrt{2k}$. Then $X_t$ is the level set flow of $X$.
\end{theorem}
Theorem \ref{smooth_uniq} is a quantitative generalization of the fact that, starting from a smooth submanifold, the level set flow coincides with smooth mean curvature flow. As in the smooth case (see \cite[Sec. 3]{AS}), the idea is to use the distance to $X_t$ to construct non-negative lower barriers to the level set equation. As conditions $i-iii$ are more precise than a smoothness assumption, this choice should be made more quantitatively; the construction of the sub-solution, as well as the choice of the parabolic neighborhood along the boundary of which the barrier can be seen to be smaller than the solution, should reflect estimates $i-iii$. In terms of the proof, this means that as opposed to the smooth case, where one could get by with continuity based arguments, in our case we will be forced to compute some quantities more explicitly, and to use avoidance of balls (which is true in arbitrary co-dimension (see Lemma \ref{lower_bound})) to get some initial estimates on the behavior of the level set equation (with the right initial data).
\begin{remark}
As a flow satisfying $i-iii$ provides approximations at different scales to $X$, having such smooth flow implies some regularity of $X$. This regularity is far weaker than the one assumed in Theorem \ref{main_thm} (c.f. Theorem \ref{approx_smfd}), so we expect Theorem \ref{smooth_uniq} to be applicable in other situations as well.
\end{remark}
\section{Existence}\label{exist_sec}
The proof of the existence of a flow $(X_t)_{t\in (0,c_0R^2)}$ satisfying estimates $i-iii$ of Theorem \ref{main_thm} is divided, as in the co-dimension one case of \cite{Her}, into three steps: construction of approximations at different scales (Section \ref{approx_sec}), obtaining uniform estimates on the flows emanating from them, and passage to a limit (Section \ref{conclus_sec}). The proof of the uniform estimates, in turn, consists of three major ingredients: estimates for graphical mean curvature flow with small initial data on thick cylinders (Section \ref{main_estimate_sec}), an interpolation lemma (Section \ref{inter_ext_sec}), and an iteration scheme (Section \ref{conclus_sec}). As stated in the introduction, in this section we will address in full the parts that require different treatment from the one in \cite{Her} and mention the parts that remain the same.
\subsection{Approximation}\label{approx_sec}
A guideline to proving estimates on a class of weak objects is to first approximate them by smooth objects, then derive estimates that depend only on quantities that are expressible for the weak objects as well, and finally pass to a limit. The first step in our case is the following approximation theorem, which is essentially from \cite[Appendix B]{KL1}, where the hypotheses used are different, but the construction remains the same.
\begin{theorem}[\cite{KL1}]\label{approx_smfd}
For every $\delta>0$ there exists $\de>0$ such that if $X\subseteq \mathbb{R}^{n}$ is $k$-dimensional $(\de,R)$ Reifenberg flat, then for every $0<r<R/10$ there exists a $k$-dimensional submanifold $X^r$ such that
\begin{enumerate}
\item $d_H(X,X^r) \leq \delta r$.
\item $|A^r| \leq \frac{\delta}{r}$.
\item For every $x\in X$, $X^r\cap B(x,r)=G\cup B$ where $G$ is connected and $B \subseteq B(x,r)-B(x,(1-\delta)r)$.
\end{enumerate}
\end{theorem}
\begin{remark}
In \cite{Her} we utilized a stronger global approximation result from \cite{HW}, but its proof depended on mollifying the characteristic function of the domain bounded by $X$, which only works in co-dimension one.
\end{remark}
\begin{remark}
Reifenberg's topological disk theorem \cite{Reif} follows easily from Theorem \ref{approx_smfd}; the approximations at comparable scales are graphical above one another, and composing those graphical representations yields the bi-H\"{o}lder parametrization (cf. the proof of Corollary \ref{approx_smfd_add}).
\end{remark}
\begin{remark}\label{almost_same_tangents}
While the ``approximate tangents'' of a $k$-dimensional $(\de,R)$ Reifenberg flat set vary with point and scales, comparable scales and nearby points have very close approximating tangents. More precisely, for every $\delta>0$ there exists $\de>0$ such that if $X$ is $k$-dimensional $(\de,R)$ Reifenberg flat, then for every $r<R/10$ and $x_1,x_2\in X$ with $d(x_1,x_2)<r$, both $||P_{x_1,r}-P_{x_2,r}||<\delta$ and $||P_{x_1,r}-P_{x_1,10r}||<\delta$. Here $P_{x_1,r}$ is as in Definition \ref{Reif_def}, $||-||$ is the operator norm and we use the standard identification of a $k$ sub-space with the projection operator to it. This elementary observation is the key property of Reifenberg flat sets that leads to Theorem \ref{approx_smfd}.
\end{remark}
\begin{proof}[Proof of Thm. \ref{approx_smfd}]
Let $\phi:[0,\infty)\rightarrow \mathbb{R}_+$ be a smooth function such that $\phi|_{[0,1]}=1$ and $\phi|_{[2,\infty)}=0$. Let $G(n-k,n)$ be the $n-k$ dimensional Grassmannian and $E(n-k,n)$ be the total space of the tautological vector bundle over $G(n-k,n)$. Now, fix $r<R/10$ and let $p_1,\ldots, p_L\in X$ be a maximal collection of points such that the balls $B(p_i,r/6)$ are disjoint. In particular $N(X,r):=\{y\in \mathbb{R}^n\;\;\mathrm{s.t}\;\; d(y,X)<r\} \subseteq \bigcup B(p_i,2r)$. For every $i\in \{1,\ldots, L\}$ fix an $n-k$ dimensional plane $Q_i=P_{p_i,r}^{\perp}$ where $P_{p_i,r}$ is as in the definition of a Reifenberg flat set, and set $\phi_i(y)=\phi(|y-p_i|/2r)$. Now, for every $y\in N(X,r)$ define $O_y=\frac{\sum_{i=1}^{L}\phi_i(y)Q_i}{\sum_{i=1}^{L}\phi_{i}(y)}$ and note that there exists some $I$, independent of $X$ and $r$, such that for every $y\in N(X,r)$, at most $I$ of the summands in the numerator are non-zero. Fixing $x\in X$ and letting $Q_x=P_{x,r}^{\perp}$ we note that for every $y\in B(x,2r)$, all the contributors to $O_y$ are in $B(x,6r)$. Thus, by Remark \ref{almost_same_tangents}, $||O_y-Q_x||_{C^3(B(x,2r))}\leq \alpha(\de)$, where $\lim_{\de\rightarrow 0}\alpha(\de)=0$. As $O_y$ is symmetric and very close to $Q_x$, if we let $\tilde{Q}_y$ be the orthogonal projection to the span of the $n-k$ eigenvectors of $O_y$ with eigenvalues close to $1$, we also have
\begin{equation}\label{closeness}
||\tilde{Q}_y-Q_x||_{C^3(B(x,2r))}\leq \alpha(\de).
\end{equation}
Finally, set $\eta(y)=\frac{\sum_{i=1}^{L}\phi_i(y)\tilde{Q}_y(y-p_i)}{\sum_{i=1}^{L}\phi_{i}(y)}$ and define $\pi:N(X,r)\rightarrow E(n-k,n)$ by $\pi(y)=(\pi_1(y),\pi_2(y))=(\tilde{Q}_y,\eta(y))$ and define $X^{r}$ to be the inverse image of the zero section $\xi$ in $E(n-k,n)$.
\vspace{5 mm}
Let us verify that $X^{r}$ indeed satisfies the desired properties. First observe that if $\pi_2(y)=0$ then $y\in N(X,\delta(\de))$ where $\lim_{\de \rightarrow 0}\delta(\de)=0$. Indeed, let $x\in X$ be the closest point to $y$ and observe, as before, that if $p_i$ provides a non-zero contribution to $O_y$ then $p_i\in B(x,6r)$, and so by the Reifenberg property at scales $r$ and $10r$, $|Q_x(p_i)|<\delta(\de)$, and so $|Q_x(y)|<\delta(\de)$ by \eqref{closeness}. Thus $d(P_{x,r}(y),y)<\delta(\de)$ and by the Reifenberg property $d(X,y)<\delta(\de)$. Moreover, as $\pi(N(X,r))\pitchfork \xi$ (again, by \eqref{closeness}), $X^{r}$ is a $k$-dimensional submanifold. Fixing $x\in X$ we have that $|\tilde{Q}_x(x)|<\delta(\de)$ and so by \eqref{closeness} again, there is $y\in N(x,r)$ with $d(y,x) < \delta(\de)$ such that $\pi_2(y)=0$. Thus, we have established that $X^r$ is a submanifold that satisfies condition 1.
Taking $x\in X$ and $x'\in P_{x,r}\cap(B(x,r))$ we see that $||\pi_{2}|_{x'-x+Q_x}-Id||_{C^3(B(x',3r))}<\delta(\de)$ and so by the quantitative version of the inverse function theorem, there exists a unique point $y\in (x'-x+Q_x)\cap B(x,2r)$ with $\pi_2(y)=0$. Thus $X^{r}\cap P_{x,r}^{-1}(B^{k}(x,r))\cap B(x,2r)$ is a graph of a function $f$ over $P_{x,r}\cap B^{k}(x,r)$ with $||f(x_1,\ldots, x_k)||_{C^3(B^k(x,r))}=||Q_x(x_1,\ldots, x_k, f(x_1,\ldots, x_k))||_{C^3(B^k(x,r))}<\delta(\de)$. This completes the proof.
\end{proof}
\begin{corollary}\label{approx_smfd_add}
For every $\delta>0$ there exists $\de>0$ such that if $X\subseteq \mathbb{R}^n$ is $k$-dimensional $(\de,R)$ Reifenberg flat set then in addition to $1-3$ of Theorem \ref{approx_smfd}, for every $x\in X$ and $s\in (r,R/10)$, $X^r\cap B(x,s)$ can be expressed as $X^r\cap B(x,s)=G\cup B$ where $G$ is connected and $B\subseteq B(x,s)-B(x,(1-5\delta)s)$.
\end{corollary}
\begin{proof}
Just like in \cite[Lemma 4.4 and Lemma 4.9]{Her} (see also Lemma \ref{interpolation}), conditions (1)-(3) of Theorem \ref{approx_smfd} imply that for $\de>0$ sufficiently small, $X^s$ has a tubular neighborhood of radius $s/4$ and that $X^{s/4}$ is a graph of a function $f_s$ over $X^s$ with $|f_s(x)| \leq 2\delta s$. Defining $f:X^s\rightarrow X^{4^{-j}s}$ by $f(y)=f_{4^{-j+1}s}\circ f_{4^{-j+2}s} \circ \ldots \circ f_s(y)$ we see that for every $y\in X^s$, $d(f(y),y)< 4\delta s$. This, together with property (3) for $X^s$, completes the proof.
\end{proof}
\subsection{Estimates for Graphical Mean Curvature Flow with Small Initial Data on Thick Cylinder}\label{main_estimate_sec}
In this section, we will generalize the proof of the main estimate for graphical mean curvature flow \cite[Thm. 5.1]{Her} to the arbitrary co-dimensional case.
\begin{theorem}\label{main_est}
There exists some $c>0$ such that for every $\delta>0$ and $M>0$, there exist positive $\tau_0=\tau_0(M,\delta)\ll 1$ and $\lambda_0=\lambda_0(M,\delta)\ll 1$ such that for every $0<\lambda<\lambda_0$ there exists some $\de_0=\de_0(\delta,M,\lambda)$ such that for every $0<\tau<\tau_0$, $\de<\de_0$ and $\beta>0$ the following holds:
\noindent If $u:B^k(p,r)\times [0,\tau r^2]\rightarrow \mathbb{R}^{n-k}$ is a graph moving by mean curvature such that:
\begin{enumerate}
\item For every $(x,t) \in B(p,r)\times [0,\tau r^2]$
\begin{equation}
|\nabla u(x,t)| \leq M\de,\;\;\;\;\;|u(x,t)| \leq M^2\beta.
\end{equation}
\item For every $x \in B(p,r)$ we have
\begin{equation}
|\nabla u(x,\lambda \tau r^2)| \leq \de,\;\;\;\;\; |u(x,\lambda \tau r^2)| \leq \beta.
\end{equation}
\end{enumerate}
Then:
\begin{equation}
|A(p,\tau r^2)| \leq (1+\delta)\frac{1}{\sqrt{\pi}}\frac{\de}{\sqrt{\tau} r},\;\;\;\;\;|A(p,\tau r^2)| \leq c\frac{\beta}{\tau r^2}+\delta\frac{\de}{\sqrt{\tau} r}.
\end{equation}
\end{theorem}
As in the proof of the co-dimension one case, the idea is to regard the graphical mean curvature flow equation as a non-homogeneous heat equation. The controlled growth of the function and its gradient (condition 1 in Theorem \ref{main_est}), and the thickness of the cylinder ($\tau(M)\ll 1$) allow one to derive a Schauder type estimate for the non-homogeneous heat equation \cite[Thm. 5.12]{Her} for which the homogeneous part ``does not see the boundary'' and depends only on the initial slice (i.e. on $\de,\beta$, but not on $M$). In that regard, the estimates one gets for the homogeneous part are (up to a multiplicative constant) like the ones obtained for physical solutions to the heat equation on the full space. Proving Theorem \ref{main_est} then reduces to showing that the H\"{o}lder norm of the non-linearity behaves sub-linearly.
In the co-dimension one case, the major step towards obtaining those estimates was a H\"{o}lder gradient estimate for $u$, which was proved by tracing the dependences of the constants in the proof of H\"{o}lder gradient regularity for parabolic quasilinear equations of general type \cite{Lie}. This led to showing that some H\"{o}lder norm of $\nabla u$ is at most linear in the $C^0$ norm of $\nabla u$ (when the latter is small). Such an argument is not valid in the high co-dimensional case (as there is no such general H\"{o}lder gradient estimate), but by virtue of a compactness argument, one gets a weaker result, which will nevertheless suffice for our purposes.
\begin{theorem}\label{holder_grad_bds}
There exists some $\tau_0>0$ such that for every $\delta>0$ there exists $\de>0$ such that for every $0<\tau<\tau_0$ the following holds: If $u:B^k(p,r)\times [0,\tau r^2]\rightarrow \mathbb{R}^{n-k}$ is a solution to the graphical mean curvature flow with $||Du||<\de$ then, setting $B^{\tau}(p,r)=B(p,(1-1000\sqrt{\tau_0})r)\times [0,\tau r^2]$
\begin{equation}
\sup_{z_1,z_2\in B^{\tau}(p,r)}d_{z_1,z_2}^\alpha\frac{||Du(z_1)-Du(z_2)||}{d(z_1,z_2)^{\alpha}}<\delta,\;\;\;\;\;\; \sup_{z\in B^{\tau}(p,r)}d_{z}\,||D^2u(z)||<\delta.
\end{equation}
Here
\begin{equation}
d((x_1,t_1),(x_2,t_2))=\sqrt{|x_1-x_2|^2+|t_1-t_2|},
\end{equation}
$d_{z_1}=d(z_1,\partial(B(p,r)\times [0,\tau r^2]))$ (note that this is \underline{not} the distance to the boundary of $B^{\tau}(p,r)$) and $d_{z_1,z_2}=\min(d_{z_1},d_{z_2})$.
\end{theorem}
\begin{proof}
We argue by contradiction. Assume w.l.g. that $r=1$ and note that the first inequality is trivial when $d(z_1,z_2) \geq d_{z_1,z_2}$ and follows by integration from the second inequality otherwise. Suppose that there exists some $\delta$ such that for every $m$ we can find $\tau^m<\tau_0$ and a solution of the graphical MCF $u^m:B^k(p,r)\times [0,\tau^m r^2]\rightarrow \mathbb{R}^{n-k}$ with $||Du^m||<1/m$ and $z_m \in B^{\tau^m}(p,r)$ such that
\begin{equation}
d_{z_m}||D^2u^m(z_m)||\geq\delta.
\end{equation}
Setting $z_m=(x_m,t_m)$, the closest boundary point to $z_m$ is $(x_m,0)$. Let $\lambda_m=\sqrt{\frac{\tau_0}{t_m}}\geq 1$ and define
\begin{equation}
v^m(y,s)=\lambda_m\left[u^m(x_m+y/\lambda_m,s/\lambda_m^2)-u^m(x_m,0)\right].
\end{equation}
Note that $v^m(0,0)=0$ and set $\xi_m=(y_m,s_m)=(0,\tau_0)$. By the definition of $B^{\tau^m}(p,r)$, we conclude that $v^m$ is defined (at least) on $B(0,1000\sqrt{\tau_0})\times [0,\tau_0]$, satisfies the graphical mean curvature flow equation, and while $||Dv^m||\leq 1/m$, for $\xi_m =(0,\tau_0)$ we have
\begin{equation}
||D^2v^m(\xi_m)||\geq \delta/\sqrt{\tau_0}.
\end{equation}
On the other hand, by the estimate from \cite{Wan}, the sequence sub-converges to a solution of the graphical mean curvature flow which is on one hand constant and on the other, has non-vanishing second derivative.
\end{proof}
Now, the graphical mean curvature equation has the form
\begin{equation}
\partial_t u -\Delta u=a^{ij}\frac{\partial^2 u}{\partial x_i\partial x_j}=N(Du,D^2u)
\end{equation}
where
\begin{equation}
a^{ij}=\left[\left(\delta_{kl}+\left<\frac{\partial u}{\partial x_k},\frac{\partial u}{\partial x_l}\right>\right)^{-1}\right]^{ij}-\delta^{ij}.
\end{equation}
Note that $a^{ij}$ is a rational function in the gradient of $u$, where the numerator $P(Du)$ has neither free coefficients, nor terms that are linear in $Du$. Thus, by Theorem \ref{holder_grad_bds} (and the estimates from \cite{Wan}), for every $\tau<\tau_0$ and every $\delta$ there exists $\de_0>0$ such that for every solution of the graphical MCF on $B(p,r)\times [0,\tau r^2]$ with $||Du||<\de_0$ we have, for every $x,y\in B^{\tau}(p,r)$
\begin{equation}
d_x|N(x)| \leq \delta \de,\;\;\;\;\; d_{x,y}^{1+\alpha}\frac{|N(x)-N(y)|}{d(x,y)^{\alpha}} \leq \delta \de.
\end{equation}
As discussed above, those estimates for the non-linearity, together with \cite[Thm. 5.12]{Her}, imply Theorem \ref{main_est}.
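The claim that $a^{ij}$ has no constant or linear part in $Du$ can be seen from the Neumann series $(I+G)^{-1}=I-G+G^2-\cdots$, where $G_{kl}=\langle \partial_k u,\partial_l u\rangle$ is quadratic in $Du$. A numeric sanity check (with $k=2$, $n-k=2$; the bound $2\|Du\|_F^2$ is a crude choice valid for $\|Du\|_F^2\leq 1/2$):

```python
import random

def a_coeffs(Du):
    """a^{ij} = [(I + Du Du^T)^{-1}]^{ij} - delta^{ij} for k = 2 (Du is 2 x m)."""
    # Gram matrix G_{kl} = <d_k u, d_l u>
    G = [[sum(Du[k][s] * Du[l][s] for s in range(len(Du[0]))) for l in range(2)]
         for k in range(2)]
    g = [[1 + G[0][0], G[0][1]], [G[1][0], 1 + G[1][1]]]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    inv = [[ g[1][1] / det, -g[0][1] / det],
           [-g[1][0] / det,  g[0][0] / det]]
    return [[inv[0][0] - 1, inv[0][1]], [inv[1][0], inv[1][1] - 1]]

random.seed(0)
for _ in range(1000):
    eps = random.uniform(0.001, 0.3)
    Du = [[random.uniform(-eps, eps) for _ in range(2)] for _ in range(2)]
    fro2 = sum(x * x for row in Du for x in row)   # ||Du||_F^2
    amax = max(abs(x) for row in a_coeffs(Du) for x in row)
    assert amax <= 2 * fro2 + 1e-12                # quadratic smallness in Du
print("a^{ij} = O(|Du|^2): no constant or linear part")
```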
\subsection{Extension and Interpolation}\label{inter_ext_sec}
In this section we include two very simple results. The first regards the extension of curvature bounds forward in time for arbitrary compact submanifolds, while the second is an interpolation result, in the presence of curvature bounds and Hausdorff bounds.
By \cite[Lemma 2.1]{Wan}, the evolution of the second fundamental form in arbitrary co-dimension satisfies the inequality
\begin{equation}
\frac{d}{dt}|A|^2 \leq \Delta |A|^2+ C(k,n)|A|^4.
\end{equation}
Thus, by the maximum principle, and the fact that the curvature must blow up at a singularity, one has the following extension lemma:
\begin{lemma}\label{extension}
If $M$ is a compact $k$-dimensional submanifold of $\mathbb{R}^n$ with $|A|\leq \alpha$ then there exists a mean curvature flow $M_t$ starting from $M$ that exists for (at least) $0\leq t \leq \frac{1}{C(k,n)\alpha^2}$, such that the norm of the second fundamental form satisfies the estimate
\begin{equation}
|A(t)| \leq \frac{\alpha}{\sqrt{1-C(k,n)\alpha^2t}}.
\end{equation}
\end{lemma}
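The bound in the lemma is the solution of the comparison ODE $y'=C y^2$, $y(0)=\alpha^2$, namely $y(t)=\alpha^2/(1-C\alpha^2 t)$, applied to $y=|A|^2$ via the maximum principle. As a hedged numerical illustration (the values of $C$ and $\alpha$ are arbitrary), forward Euler iterates of the comparison ODE stay below this closed form, since $y\mapsto Cy^2$ is increasing:

```python
# Forward-Euler check that y' = C y^2, y(0) = alpha^2 stays below the
# closed form alpha^2 / (1 - C alpha^2 t) used in the lemma.
C, alpha = 3.0, 2.0
T = 0.9 / (C * alpha ** 2)          # stop short of the blow-up time 1/(C alpha^2)
N = 20000
y, dt = alpha ** 2, T / N
for step in range(N):
    y += dt * C * y * y             # explicit Euler step for y' = C y^2
    t = (step + 1) * dt
    assert y <= alpha ** 2 / (1 - C * alpha ** 2 * t) + 1e-9
print("Euler iterates stay below alpha^2/(1 - C alpha^2 t)")
```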
The following elementary interpolation is central in our argument:
\begin{lemma}[Interpolation]\label{interpolation}
Fix $\delta>0,\alpha>0$. There exists $\beta_0>0$ such that for every $\beta<\beta_0$ the following holds: Assume $p\in X\subseteq B^n(p,r)$ where $X$ is a $k$-submanifold with
\begin{enumerate}
\item $|A| \leq \frac{\alpha}{r}$ .
\item $d_H(P\cap B^n(p,r),X \cap B^n(p,r)) \leq \beta r$ for $P=\spa\{e_1,\ldots,e_k\}$.
\end{enumerate}
Then inside the cylinder $C_{\delta,\beta}=B^k(p,(1-\delta)r)\times [-\beta r,\beta r]^{n-k}$, the connected component of $p$ is a graph of a function $u$ over $P$ and we have the estimate
\begin{equation}
|\nabla{u}| \leq \sqrt{3\alpha\beta}
\end{equation}
(and $|u|\leq \beta r$).
\end{lemma}
\begin{proof}
Assume w.l.g. $r=1$ and $p=0$ and denote $Q=P^\perp$. For $\beta$ sufficiently small $C_{\delta/4,\beta}\subseteq B(0,1)$ and $\alpha\beta<1$. Now, let $x\in C_{\delta/2,\beta}$ and let $\gamma(t)$ be a unit speed geodesic with $\gamma(0)=p$. We may assume w.l.g., by possibly changing the parametrization according to $t\mapsto -t$, that $\left<\gamma'(0),e_n\right>=\max_{v\in Q,\;||v||=1}\left<\gamma'(0),v\right>$ and that $x_n(\gamma(t)) \geq 0$. Letting $f(t)=x_n(\gamma(t))$ we find $f'(t)=\left<\gamma'(t),e_n\right>$ and $f''(t)=\left<\gamma''(t),e_n\right>=\left<\gamma''(t),e_n-\left<\gamma'(t),e_n\right>\gamma'(t)\right>\geq -\alpha\sqrt{1-f'(t)^2}$. The equality case of the above ODE for $f'(t)$ corresponds to a circle of radius $\frac{1}{\alpha}$. Letting $\mu(t):\mathbb{R}\rightarrow \mathbb{R}^2$ be a clockwise and unit speed parametrized circle of radius $\frac{1}{\alpha}$ with $\mu(0)=(0,0)$ and $\left<\mu'(0),e_2\right>=f'(0)$ we see that as long as $x_2(\mu(t))$ is increasing, and as long as $\gamma(t)\in C_{\delta/2,\beta}$, $x_n(\gamma(t))\geq x_2(\mu(t))$. For $\beta$ sufficiently small (depending on $\alpha,\delta$) $x_2(\mu(t))$ will reach its maximum at time $0<T<\delta/4$ so the extra condition $\gamma(t)\in C_{\delta/2,\beta}$ is redundant . Thus $x_2(\mu(t))\leq x_n(\gamma(t))\leq \beta$, and an easy calculation for circles in the plane gives the bound
\begin{equation}\label{almost_tan}
\tan\angle(T_xX,P) \leq \frac{\sqrt{2\beta\alpha -\alpha^2\beta^2}}{1-\alpha\beta} \leq \sqrt{3\alpha\beta}
\end{equation}
for $\beta$ sufficiently small.
What remains to be shown is that the connected component of $p$ is indeed a graph. Assume there exist $x_1,x_2\in X\cap C_{\delta,\beta}$ with $x_1\neq x_2$ but $P(x_1)=P(x_2)$. Observe that by \eqref{almost_tan}, $X\cap \overline{C_{\delta,\beta}}$ is a submanifold with boundary. Let $\gamma:[0,a]\rightarrow X \cap \overline{C_{\delta,\beta}}$ be a minimizing geodesic between $x_1$ and $x_2$. Such a geodesic is always $C^1$ and is smooth for as long as $\gamma(t)$ is away from the boundary. For such $t$ however $||P(\gamma''(t))|| \leq \sqrt{3\alpha\beta}\alpha$ by \eqref{almost_tan} and so for $\beta$ sufficiently small, and as $\gamma'(0)$ is almost parallel to $P$, $P(\gamma(t))$ is almost a straight line until it hits the boundary (at some $t<4$). Since $\gamma(t)$ is $C^1$, and intersects the boundary with an exterior normal component, this is a contradiction.
To see that for every $y\in B^n(0,1-\delta)$ there is some $x\in X$ with $P(x)=y$, note that by the Hausdorff condition, we can find $\bar{x}\in X\cap B(0,(1-\delta/2))$ with $d(\bar{x},y)\leq \beta$ (when $\beta$ is small). Taking $\bar{y}=P(\bar{x})$ we see, again, by \eqref{almost_tan} for $\bar{x}$, and the fact that the curvature scale $\frac{1}{\alpha}$ is far bigger than $\beta$, that there will exist a point over $y$ as well.
\end{proof}
\subsection{Construction of a Flow}\label{conclus_sec}
For $\de$ sufficiently small, if $X$ is $k$-dimensional $(\de,R)$-Reifenberg flat, Theorem \ref{approx_smfd} and Corollary \ref{approx_smfd_add} provide us with smooth approximations to $X$ at different scales. The extension lemma, Lemma \ref{extension}, the interpolation lemma, Lemma \ref{interpolation}, and the a-priori estimate, Theorem \ref{main_est}, can substitute for the corresponding results of \cite{Her} in the iteration scheme of \cite[Sec. 3.2,3.3]{Her}. Thus, just as there, we obtain the following uniform estimates.
\begin{theorem}[Uniform estimates]\label{un_est}
There exist some $\de$ and $c_0,c_1,c_2,c_3$ such that if $X$ is $k$-dimensional $(\de,R)$-Reifenberg flat, and considering the approximating surfaces $X^r$ from Theorem \ref{approx_smfd} and Corollary \ref{approx_smfd_add}, each $X^r$ flows smoothly by $k$-dimensional mean curvature for time $t\in [0,c_0 R^2]$ and for every $t\in [c_3r^2,c_0R^2]$ we have: (1) $|A^r(t)| \leq \frac{c_1}{\sqrt{t}}$ where $A^r(t)$ is the second fundamental form of $X^r_t$. (2) $d_H(X^{r}_t,X) \leq c_2\sqrt{t}$. (3) For every $x\in X$ and $s\in (\frac{\sqrt{t}}{c_1},R/4)$ we have
\begin{equation}\label{uni_est_conn}
B(x,s)\cap X^r_t=G\cup B
\end{equation}
where $G$ is connected and $B\cap B(x,\frac{9}{10}s)=\emptyset$. Moreover, the constants $c_1,c_2$ satisfy $c_1^2 \leq \frac{1}{8}$ and $c_1c_2<\frac{1}{2}$.
\end{theorem}
By the uniform estimates one can pass to a sub-limit flow $X_t$ as in Theorem \ref{main_thm}. Note that condition (iii) of Theorem \ref{main_thm} follows easily from (1)--(3) of Theorem \ref{un_est} (see \cite[Lem 4.4]{Her}).
\section{Uniqueness}\label{uniq_sec}
In this section, we will prove Theorem \ref{smooth_uniq} which, together with the existence part of Theorem \ref{main_thm}, implies the full Theorem \ref{main_thm}. In Section \ref{Prelim} we will recall the definition and some properties of the high co-dimensional level set flow from \cite{AS}. In Section \ref{dist_evolve} we will recall and further explore the behavior of the associated level set operator on distance functions from smooth evolutions by mean curvature. Section \ref{smooth_uniq_sec} will be devoted to the proof of Theorem \ref{smooth_uniq}.
\subsection{Preliminaries}\label{Prelim}
Let us start by introducing some notations. For every $0\neq p\in \mathbb{R}^{n}$ define $P_p$ to be the projection to the orthogonal complement of $p$. Given an $n\times n$ symmetric matrix $A$ and such $p$, let $X=P_pAP_p$. Denoting the eigenvalues of $X$ corresponding to $p^\perp$ by $\lambda_1(X) \leq\ldots \leq \lambda_{n-1}(X)$, we define
\begin{equation}
F(p,A)=\sum_{i=1}^{k}\lambda_i(X).
\end{equation}
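Operationally, $F$ restricts $A$ to the hyperplane $p^\perp$ and sums the $k$ smallest eigenvalues there. A minimal numerical sketch (the dimension, the test matrix and $k$ are illustrative; the sanity check uses the Hessian of $|x|$, whose eigenvalues on $x^\perp$ are all $1/|x|$):

```python
import numpy as np

def F(p, A, k):
    """Sum of the k smallest eigenvalues of P_p A P_p restricted to
    the hyperplane p^perp (p a nonzero vector, A symmetric)."""
    p = p / np.linalg.norm(p)
    # orthonormal basis of p^perp: the right-singular vectors of p^T
    # beyond the first span the null space of <p, .>
    _, _, Vt = np.linalg.svd(p.reshape(1, -1))
    Q = Vt[1:].T                      # n x (n-1), columns span p^perp
    evals = np.linalg.eigvalsh(Q.T @ A @ Q)
    return np.sort(evals)[:k].sum()

# sanity check: the Hessian of |x| at x has eigenvalues 1/|x| on x^perp,
# so F should return k/|x| -- the speed of a shrinking round sphere
x = np.array([3.0, 4.0, 0.0])
n, r = len(x), np.linalg.norm(x)
H = (np.eye(n) - np.outer(x, x) / r**2) / r
assert np.isclose(F(x, H, k=2), 2.0 / r)
```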
In \cite{AS}, the level sets of positive viscosity solutions to the equation
\begin{equation}\label{visc_eq}
\frac{d}{dt} u=F(\nabla u, \nabla^2 u)
\end{equation}
were used to give a definition for weak mean curvature flow. Before diving into formalities, we will try to convince the reader that this approach is plausible. Let $(M_t)_{t\in[0,T]}$ be a smooth family of $k$-dimensional sub-manifolds of $\mathbb{R}^n$. Let $u:\mathbb{R}^n\times [0,T]\rightarrow \mathbb{R}_+$ be a smooth function such that for every $t\in [0,T]$, $M_t=\{x\in \mathbb{R}^n \;\;\mathrm{s.t.}\;\; u(x,t)=0\}$ and such that $\nabla u \neq 0$ on a neighborhood of $M_t$, and $u\geq \delta_0>0$ outside that neighborhood. For $\de$ sufficiently small, $M^\de_t=\{x\in \mathbb{R}^n \;\;\mathrm{s.t.}\;\; u(x,t)=\de\}$ would be a smooth ``tubular'' hypersurface around $M_t$. We would expect it to have $n-k-1$ principal curvatures that are very large, corresponding to ellipsoids in the orthogonal complement of $T_pM$ for $p\in M_t$. The other $k$ principal curvatures should be very close to the ones of $M_t$ w.r.t. the normal of $M^\de_t$, as for every geodesic $\gamma(s)$ of $M_t$ and every point $x\in M^\de_t$ which is closest to $\gamma(0)$, there should be an almost geodesic curve in $M^\de_t$ which is orthogonal to the above mentioned ellipsoids and which ``traces $\gamma$''. The second fundamental form of $M^\de_t$ at $x\in M^\de_t$ is given by $A(x,t)=\frac{1}{||\nabla u ||}P_{\nabla u(x,t)}\nabla^2u(x,t)P_{\nabla u(x,t)}$, so we expect $\frac{1}{||\nabla u||}F(\nabla u,\nabla^2u)$ to be very close to $-\vec{H}(\pi(x),t)\cdot \frac{\nabla u}{||\nabla u||}$, where $\vec{H}(\pi(x),t)$ is the mean curvature of $M_t$ at the point closest to $x$. On the other hand, the normal velocity of $M^\de_t$ at $x$ is given by $-u_t/||\nabla u||$ and so equation \eqref{visc_eq} tells us that, parameterizing the flow of $M_t^\de$ by $\phi^\de(x,t)$ and letting $\nu(x,t)$ be the normal to $M_t^\de$ at $x$, we have
\begin{equation}
\frac{d}{dt}\phi^\de(x,t)\cdot \nu = \vec{H}(\pi(x),t)\cdot \nu+O(\de).
\end{equation}
Thus, the entire ellipsoid around $\pi(x)$ moves approximately by $\vec{H}(\pi(x))$, which would correspond to a motion of $M_t$ by a $k$-dimensional mean curvature flow.\\
We proceed by defining the level-set flow and by collecting some of its properties from \cite{AS}. Let $u_0$ be a non-negative, uniformly continuous function. Theorem 2.4 in \cite{AS} states that there exists a unique uniformly continuous, positive viscosity solution $u:\mathbb{R}^n\times [0,\infty)\rightarrow \mathbb{R}_+$ to equation \eqref{visc_eq} such that $u(-,0)=u_0$ (for the definition of viscosity solution, see \cite[Def. 2.1]{AS}).
\begin{definition}\cite[Def 2.6]{AS}
Let $X\subseteq \mathbb{R}^n$ be a closed set. Let $u_0$ be a non-negative, uniformly continuous function such that $X=\{x\in \mathbb{R}^{n} \;\;\textrm{s.t.}\;\; u_0(x)=0\}$. Letting $u$ be the solution of the IVP of equation \eqref{visc_eq} with $u(x,0)=u_0(x)$, the $k$-dimensional \textbf{level-set flow} of $X$ is defined to be $X_t=\{x\in \mathbb{R}^{n} \;\;\textrm{s.t.}\;\; u(x,t)=0\}$.
\end{definition}
A-priori, this definition may depend on the choice of $u_0$, but Theorem 2.5 of \cite{AS} shows that different choices yield the same result.\\
The following three properties of the level-set flow and the level-set equation will be of importance to us.
\begin{property_a}\label{prop_a}
If $X$ is a smooth $k$-dimensional sub-manifold of $\mathbb{R}^n$ and the smooth mean curvature flow $(X_t)_{t\in [0,T]}$ is defined for all $t\in [0,T]$ then $X_t$ is also the level-set flow of $X$ (see \cite[Cor. 3.9]{AS}).
\end{property_a}
\begin{property_b}
If $u,v:\mathbb{R}^n\times [0,T]\rightarrow \mathbb{R}_+$ are two non-negative, uniformly continuous solutions to \eqref{visc_eq} then $||u-v||_{L^{\infty}(\mathbb{R}^n\times[0,T])} \leq ||u-v||_{L^{\infty}(\mathbb{R}^n\times\{0\})}$ (see \cite[Thm. 2.2]{AS}).
\end{property_b}
\begin{property_c}\label{prop_c}
Let $\Omega \subseteq \mathbb{R}^n \times [0,T]$ be a bounded domain, and let $v,u$ be non-negative, uniformly continuous functions on $\mathbb{R}^n\times [0,T]$ such that $v$ is a sub-solution to \eqref{visc_eq} in $\Omega$ and $u$ is a solution of \eqref{visc_eq} in $\Omega$. If $v\leq u$ on $\partial_{\mathrm{par}} \Omega$ then $v\leq u$ on $\Omega$. Here $\partial_{\mathrm{par}} \Omega$ is the parabolic boundary of $\Omega$ (see remark below).
\end{property_c}
\begin{remark}
In \cite[Thm. 2.2]{AS} it was shown that $F$ satisfies the assumptions of \cite[Thm. 2.1]{GGIS}, and of Thm. 4.1 of \cite{CGG}, both of which imply Property C for domains of the form $\Omega=D\times [0,T]$ where $D\subseteq \mathbb{R}^n$ is a compact domain. The proof of Thm. 4.1 in \cite{CGG} works just as well for an arbitrary bounded domain $\Omega$, while compactness also gives Prop. 2.3 of \cite{GGIS}, from which point the proof of Thm. 2.1 of \cite{GGIS} works for arbitrary such $\Omega$ as well. This is, of course, not surprising at all, as the weak maximum principle is a statement about the interior.
\end{remark}
\subsection{Evolution of Distances}\label{dist_evolve}
Let $(M_t)_{t\in [0,T]}$ be a family of $k$-submanifolds evolving by smooth mean curvature flow. For each $0\leq t \leq T$, $M_t$ has a tubular neighborhood on which the distance function from $M_t$, $r_t$, is smooth. Studying the properties of $F(\nabla r_t,\nabla^2 r_t)$ and of $\frac{d}{dt}r_t-F(\nabla r_t,\nabla^2 r_t)$ will occupy the first part of this section. In the second part, we will give a simple lower bound, which is based on avoidance of balls, for the solution of the level-set equation starting from a distance function from an \textit{arbitrary} set.\\
\begin{lemma}\label{dist_tub_stat}
Let $M$ be a smooth $k$-dimensional sub-manifold of $\mathbb{R}^{n}$ with normal injectivity radius $\rho$ and let $r$ be the distance function from $M$. For every $0\leq s \leq \rho$, let $M^s$ be the smooth co-dimension $1$ level set $M^s=\{y\in \mathbb{R}^{n}\;\;\mathrm{s.t.}\;\;r(y)=s\}$ and let $A^s$ be the second fundamental form of $M^s$ with respect to the interior normal. Let $x\in M$, $\nu\in SN_xM$ (the unit normal space to $M$ at $x$), $P=T_xM$ and $Q=(P\oplus \nu)^\perp$. The following hold:
\begin{enumerate}
\item for every $0<s<\rho$ the principal directions of $M^s$ at $x+s\nu$ are independent of $s$ and split according to $P$ and $Q$.
\item $A^s|_{Q}=\frac{1}{s}Id$ while $A^s|_P$ is bounded inside the tubular neighborhood, and if $v_1,\ldots v_k\in P$ are the principal directions of $A^{-\nu}(;)=\left<A(;), -\nu\right>$ with eigenvalues $\beta_1,\ldots, \beta_k$ then
\begin{equation}\label{ev_prop}
A^s|_{P}(v_i,v_i)=\frac{\beta_i}{1+s\beta_i},
\end{equation}
and $A^s|_P(v,v) <\frac{1}{s}$ for every $v\in P$ with $||v||=1$.
\end{enumerate}
\end{lemma}
\begin{remark}
Most of the above lemma is stated and proved in \cite[Thm 3.2]{AS}. Equation \eqref{ev_prop} was proved for some constants $\beta_i$, without identifying them as the eigenvalues of $A^{-\nu}(;)=\left<A(;), -\nu\right>$. This fact was not needed for the applications therein, and was stated there as a conjecture.
\end{remark}
\begin{proof}[Proof of Lemma \ref{dist_tub_stat}]
$(x+P^\perp)\cap M^s$ consists of a sphere of radius $s$ in $x+P^\perp$, whose normal at $x+s\nu$ is $-\nu$, so $A^s|_{Q}=\frac{1}{s}Id$ as stated. Fix $v=\sum a_iv_i\in P$, and let $\gamma(t)$ be a unit speed geodesic in $M$ with $\gamma(0)=x$ and $\dot{\gamma}(0)=v$, and let $\nu(t)$ be a normal field along $\gamma(t)$, which solves the linear system of ODEs $\nu(0)=\nu$ and $N_{\gamma(t)}M(\dot{\nu}(t))=0$ in the normal bundle to $M$. In particular $||\nu(t)||=||\nu(0)||=1$, so $\mu(t):=\gamma(t)+s\nu(t)$ is a curve in $M^s$ such that
\begin{equation}
||\dot{\mu}(0)||^2=\left|\left|v+s\sum_i A^{-\nu}(v,v_i)v_i\right|\right|^2=\sum \left(a_i(1+s\beta_i)\right)^2
\end{equation}
and
\begin{equation}
A^s(\dot{\mu}(0),\dot{\mu}(0))=\left< \ddot{\mu}(0),-\nu\right>=A^{-\nu}(v,v)+s||\dot{\nu}(0)||^2=\sum a_i^2\beta_i+s\sum (a_i\beta_i)^2.
\end{equation}
As $s<\rho < \frac{1}{|\beta_i|}$, $1+s\beta_i>0$ and so
\begin{equation}
\frac{\sum a_i^2\beta_i+s\sum (a_i\beta_i)^2}{\sum \left(a_i(1+s\beta_i)\right)^2} <\frac{1}{s}
\end{equation}
holds. Additionally,
\begin{equation}
A^s(v_i,v_i)=\frac{\beta_i+s\beta_i^2}{(1+s\beta_i)^2}=\frac{\beta_i}{1+s\beta_i}.
\end{equation}
\end{proof}
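As a concrete check of Lemma \ref{dist_tub_stat}, take $M$ to be the unit circle in the $xy$-plane of $\mathbb{R}^3$ ($k=1$, $n=3$). At $x+s\nu$ with $x=(1,0,0)$ and $\nu=e_1$, the curvature of $M$ w.r.t. $-\nu$ is $\beta=1$, so the Hessian of $r$ should have eigenvalues $0$ (the $\nabla r$ direction), $\beta/(1+s\beta)=1/(1+s)$ (the $P$ direction) and $1/s$ (the $Q$ direction). A finite-difference sketch (step sizes and $s$ are illustrative):

```python
import numpy as np

def r(y):
    # distance from the unit circle {x1^2 + x2^2 = 1, x3 = 0}
    rho = np.hypot(y[0], y[1])
    return np.hypot(rho - 1.0, y[2])

def hessian(f, y, h=1e-4):
    """Central finite-difference Hessian of f at y."""
    n = len(y)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(y+ei+ej) - f(y+ei-ej)
                       - f(y-ei+ej) + f(y-ei-ej)) / (4*h*h)
    return H

s = 0.3
p0 = np.array([1.0 + s, 0.0, 0.0])      # x + s*nu with nu = e_1
evals = np.sort(np.linalg.eigvalsh(hessian(r, p0)))
# predicted: 0 (radial), beta/(1+s*beta) = 1/(1+s) (P), 1/s (Q)
assert np.allclose(evals, [0.0, 1/(1+s), 1/s], atol=1e-4)
```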
\begin{lemma}\label{dist_tub}
Let $N_t$ be a tubular neighborhood of a smooth $k$-dimensional mean curvature flow $M_t$, let $r(x,t)$ be the distance function from $M_t$, and let $v_1(x,t),\ldots, v_k(x,t)$ be the principal directions of $M_t$ at $\pi(x,t) \in M_t$ w.r.t. the normal $-\nabla r(x)$ (where $\pi(x,t)$ is the nearest point projection to $M_t$). Then
\begin{equation}
\frac{d}{dt}r-F(\nabla r,\nabla^2 r)=\left(\sum_{i=1}^{k}\frac{\left<A(v_i,v_i),-\nabla r\right>^2}{1+r\left<A(v_i,v_i),-\nabla r\right>} \right)r
\end{equation}
on $N_t-M_t$.
\end{lemma}
\begin{proof}
Fix $t$, $x\in N_t-M_t$ and let $p=\pi(x,t)$. By the definition of $F$ and the fact that $||\nabla r||=1$, $F(\nabla r(x,t),\nabla^2r(x,t))$ is the sum of the $k$ smallest principal curvatures of $M^r_t$ at $x$. By Lemma \ref{dist_tub_stat}, those principal curvatures correspond to vectors in $T_{\pi(x,t)}M_t$, so since the trace of a matrix is independent of the basis we get, again by Lemma \ref{dist_tub_stat},
\begin{equation}
F(\nabla r,\nabla^2 r)=\sum_{i=1}^{k}\frac{\left<A(v_i,v_i),-\nabla r\right>}{1+r\left<A(v_i,v_i),-\nabla r\right>}.
\end{equation}
On the other hand, since $M_t$ moves by mean curvature, by the first variation of length we get
\begin{equation}
\frac{d}{dt}r(x,t)=\left<\vec{H}(\pi(x,t),t),-\nabla r\right>=\sum_{i=1}^{k}\left<A(v_i,v_i),-\nabla r\right>.
\end{equation}
The result follows.
\end{proof}
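For a sanity check of Lemma \ref{dist_tub}, take $M_t$ to be the shrinking circle of radius $R(t)=\sqrt{R_0^2-2t}$ in $\mathbb{R}^2$ ($k=1$). Then $r(x,t)=|x|-R(t)$, $\frac{d}{dt}r=1/R$, $F(\nabla r,\nabla^2 r)=1/|x|$ and $\left<A(v_1,v_1),-\nabla r\right>=1/R$, so both sides of the identity equal $1/R-1/|x|$. Numerically (sample values are illustrative):

```python
import numpy as np

R0 = 2.0
R = lambda t: np.sqrt(R0**2 - 2*t)     # shrinking circle: dR/dt = -1/R

def check_identity(x_norm, t):
    r = x_norm - R(t)                   # distance from the circle
    lhs = 1/R(t) - 1/x_norm             # d/dt r - F(grad r, Hess r)
    a = 1/R(t)                          # <A(v1,v1), -grad r> for the circle
    rhs = (a**2 / (1 + r*a)) * r        # the bracket of the lemma, k = 1
    return lhs, rhs

for x_norm in [2.5, 3.0]:
    for t in [0.0, 0.5]:
        lhs, rhs = check_identity(x_norm, t)
        assert np.isclose(lhs, rhs)
```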
\vspace{5 mm}
The following lower bound on the solution of the level set flow starting from a distance function from a set will be used in the proof of Theorem \ref{smooth_uniq}. As discussed above, it is based on the fact that although mean curvature flow in arbitrary co-dimension does not satisfy avoidance, it does satisfy it w.r.t. (co-dimension one) balls, moving according to the sum of their lowest $k$ principal curvatures (which for balls are, of course, all the same). For Brakke flows, this fact was already observed in Brakke's original manuscript \cite{Bra}. In a sense, the following lemma shows it for the Ambrosio-Soner level-set flow.
\begin{lemma}\label{lower_bound}
Let $X$ be a closed set and let $p$ be a point with $d(X,p)=R$. Let $g$ be the distance function from $X$ and consider the level set flow $u$ starting from $g$. If $R^2>2kt$ then
\begin{equation}
u(p,t) \geq R-\sqrt{2kt}
\end{equation}
\end{lemma}
\begin{proof}
Let $\tilde{g}(y)$ be a function which equals $\min\{d(X,y),R-\sqrt{2kt}\}$ on $\mathbb{R}^{n}-B(p,\sqrt{2kt})$ and $R-d(p,y)$ on $B(p,\sqrt{2kt})$. Letting $\tilde{u}$ be the solution to the level-set equation corresponding to $\tilde{g}$, for every $0\leq c < R-\sqrt{2kt}$, $\{x\;\;|\;\; u(x,t)=c\}=\{x\;\;|\;\; \tilde{u}(x,t)=c\}$ while by continuity and by the known evolution of spheres $\tilde{u}(p,t)=R-\sqrt{2kt}$. Thus $u(p,t)\geq R-\sqrt{2kt}$.
\end{proof}
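The ``known evolution of spheres'' invoked above: a round sphere of radius $R$ has all principal curvatures equal to $1/R$, so moving it by the sum of its $k$ lowest principal curvatures gives $\dot{R}=-k/R$, i.e. $R(t)=\sqrt{R_0^2-2kt}$, which is exactly the $\sqrt{2kt}$ loss in the lemma. A quick ODE check (the step count is illustrative):

```python
import numpy as np

def shrink(R0, k, T, steps=20000):
    # integrate dR/dt = -k/R with explicit Euler
    dt = T / steps
    R = R0
    for _ in range(steps):
        R += dt * (-k / R)
    return R

R0, k, T = 1.0, 3, 0.1
assert np.isclose(shrink(R0, k, T), np.sqrt(R0**2 - 2*k*T), atol=1e-4)
```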
\subsection{Conclusion}\label{smooth_uniq_sec}
\begin{proof}[Proof of Thm. \ref{smooth_uniq}]
Set $X_0=X$, consider the functions $r(x,t)=\mathrm{dist}(x,X_t)$ and $v(x,t)=r^2(x,t)/\sqrt{t}$, and let $N=\{(x,t)\in \mathbb{R}^{n}\times[0,T]\;\;|\;\; r(x,t) < \frac{\sqrt{t}}{4c_1}\}$ and $N_t=N\cap \left(\mathbb{R}^n\times \{t\}\right)$. The function $v$ is smooth in $N$ and by Lemma \ref{dist_tub}
\begin{equation}
\frac{d}{dt}v-F(\nabla v,\nabla^2 v)=2\left(\sum_{i=1}^{k}\frac{\left<A(v_i,v_i),-\nabla r\right>^2}{1+r\left<A(v_i,v_i),-\nabla r\right>}-\frac{1}{4t} \right)v
\end{equation}
where $v_i$ are the principal directions of $X_t$ w.r.t. the normal $-\nabla r(x)$. Furthermore by the assumptions of the Theorem, for every $(x,t)\in N$ we have
\begin{equation}
\left|r\left<A(v_i,v_i),-\nabla r\right>\right|\leq \frac{c_1}{\sqrt{t}}\cdot \frac{\sqrt{t}}{4c_1}< \frac{1}{2},\;\;\;\;\;\;\;\;\;\;\;\sum_{i=1}^{k}\left<A(v_i,v_i),-\nabla r\right>^2 \leq \frac{c_1^2}{t},
\end{equation}
and since $c_1^2 \leq 1/8$ we find that $v$ is a sub-solution to equation \eqref{visc_eq} in $N$.
Let $d(x)$ be the distance function from $X=X_0$ and let $u$ be the solution to \eqref{visc_eq} with the initial data $d$. For $(x,t)\in \partial_{\mathrm{par}} N$ we have
\begin{equation}
d(x) \geq r(x,t)-d_H(X,X_t) \geq \left(\frac{1}{4c_1}-c_2\right)\sqrt{t}>\sqrt{2kt}
\end{equation}
so by Lemma \ref{lower_bound}
\begin{equation}
u(x,t) \geq \left(\frac{1}{4c_1}-c_2-\sqrt{2k}\right)\sqrt{t}\geq \alpha v(x,t)
\end{equation}
for $\alpha=16c_1^2\left(\frac{1}{4c_1}-c_2-\sqrt{2k}\right)>0$. Thus, by Property C, $u \geq \alpha v$ on $N$. The first part of the above argument also shows that $u>0$ outside of $N$. In particular, as the level set flow $\tilde{X}_t$ of $X$ is defined to be the zero set of $u$, and as $u>0$ on $\mathbb{R}^n-N_t$ and $u\geq \alpha v>0$ on $N_t-X_t$, we see that $\tilde{X}_t \subseteq X_t$.
The inclusion $X_t \subseteq \tilde{X}_t$ is simpler (and in the co-dimension one case, follows immediately from Ilmanen's definition). Suppose $u(x_0,t_0)=\delta > 0$ for some $0\leq t_0 \leq T$ and $x_0 \in X_{t_0}$. For every $s<t_0$, the distance function from $X_s$, $d^s(x)=r(x,s)$, satisfies $||d^s-d||_{L^{\infty}(\mathbb{R}^n)} \leq c_2\sqrt{s}$. Denote by $u^s$ the solution for the level set equation emanating from $d^s$. Then
\begin{equation}
0<\delta=u(x_0,t_0) \leq |u(x_0,t_0)-u(x_0,t_0-s)|+|u(x_0,t_0-s)-u^s(x_0,t_0-s)|
\end{equation}
where we have used the fact that $u^s(x_0,t_0-s)=0$ by Property A. The first term of the right hand side goes to zero as $s\rightarrow 0$ since $u$ is continuous, while the second term also goes to zero since $||u^s-u||_{L^{\infty}(\mathbb{R}^n\times[0,T])} \leq ||d^s-d||_{L^{\infty}(\mathbb{R}^n)}$ by Property B. This is a contradiction, so if $x_0\in X_{t_0}$ we must have $u(x_0,t_0)=0$, i.e. $x_0\in \tilde{X}_{t_0}$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
The derivation of hadronic interactions from QCD has been a goal of nuclear
physics for many years. At present this appears to be a very difficult problem;
even the
more modest goal of deriving hadronic interactions from the nonrelativistic
quark model is difficult, due in part
to the variety of mechanisms which contribute to
scattering.
In a typical hadronic scattering process these mechanisms include
s-channel resonance production, t-channel
resonance exchange, and nonresonant scattering.
Despite the apparent complexity, there is considerable evidence
that some scattering amplitudes
are dominated by relatively simple
perturbative QCD processes.
One well known example is the
short-range part of the
NN interaction; many
groups have concluded that the NN repulsive core is due to the combined
effects of
the Pauli principle and the color magnetic spin-spin component of one gluon
exchange.
Similarly, the
intermediate-range
attractive interaction may be due to a relatively simple effect at the
quark level, specifically a color-dipole interaction induced by the spatial
distortion of the three-quark clusters \cite{NIcart}.
Of course one pion exchange dominates at sufficiently large
separations, and in a more complete description one should adjoin this to
the short-range quark interaction.
In this paper we discuss $I = 3/2$ $K\pi $ elastic scattering in the
nonrelativistic quark model. This process resembles $NN$
scattering in that s-channel resonances are not expected to give important
contributions, assuming that multiquark resonances are not in evidence.
The Born-order QCD scattering amplitude for this process involves
one gluon exchange followed by quark exchange.
In a previous paper \cite{BS} it was shown that this simple
description of hadron-hadron scattering leads to
good agreement with the nonperturbative variational results
of Weinstein and Isgur
\cite{WI} near threshold,
and with the measured $I=2$ $\pi\pi$ S-wave phase shift throughout the
full range of $M_{\pi\pi}$ for which data exists. A
similar method was used in Ref.\ \cite{Swan}
to extract effective potentials for many
low-lying
meson-meson channels. These
potentials have recently been applied to several
problems in low-energy meson
physics. In particular, Dooley {\it et al.} \cite{DSB} used the results
of Ref.\ \cite{Swan} to suggest that the $IJ^{PC} = 0 0^{++}$ $\theta
(1720)$ may be a $(K^*\bar K^*)$-$(\omega\phi)$ vector-vector molecular
bound state. Simple estimates of
branching ratios in this model find
good agreement with Particle Data Group values
and predict new decay modes. In another application it has been
argued that the $f_1(1420)$ ``E'' effect may be a threshold enhancement which is
due to an attractive $(K^* \bar K)$-$(\omega \phi)$ interaction
in the $01^{++}$ channel \cite{Swan}.
The $I= 3/2$ $K\pi $ channel is an ideal one for testing our model
of Born scattering amplitudes,
since we expect it to be largely unaffected by s-channel resonances.
The success of the
previous application of the model to $I = 2$ $\pi\pi$ scattering
and the assumption of flavor symmetry suggest
that we may also find reasonable agreement with the experimental
S-wave $K\pi $ phase shift. The P-wave, however, is driven entirely by
flavor symmetry {\it breaking}, and hence the interplay of these two waves
provides an interesting and nontrivial test of the model.
\section{Method}
The calculation is based on a standard quark model Hamiltonian of the
form
\begin{equation}
H = \sum_{i=1}^4 \left( {p_i^2 \over 2 m_i} + m_i \right) + \sum_{i<j}
\left[ V_{conf}(r_{ij}) + V_{cont}(r_{ij})\; \vec S_i \cdot \vec S_j +
V_{orb} \; \right] \vec F_i \cdot \vec F_j
\end {equation}
where
\begin{equation}
V_{cont} = {-8 \pi \alpha_s\over 3 m_i m_j}\; \delta(\vec r_{ij})
\end {equation}
is the contact color-hyperfine interaction, $V_{conf}$ is a
confinement potential, and $V_{orb}$ represents spin-orbit and tensor
interactions.
We
neglect $V_{orb}$ in this paper since its effects are generally found to
be numerically small
in meson spectroscopy \cite{GI}, and are expected to be unimportant in
the scattering of two $\ell=0$ mesons as well.
We shall also ignore the contribution of $V_{conf}$ to the scattering
interaction.
This may appear to be a questionable approximation; however,
resonating group calculations have found that the exchange (scattering)
kernel due to $V_{conf}$ is much smaller than the corresponding
kernel for the
hyperfine term \cite{Shim}. This result was also found in the
variational
calculation of Ref.\ \cite{WI} and the perturbative calculation of Ref.\
\cite{Swan}. The latter reference noted that the small $V_{conf}$
contribution to scattering is due to a color-factor
cancellation in the matrix element of $V_{conf}$.
However, this result only applies to certain channels;
one should {\it not} neglect the
effects of the confinement term in scattering involving
vector mesons.
Although we calculate the scattering amplitude only
to Born order, there is
evidence that this is a useful and even accurate approximation
in systems which are not dominated by s-channel resonances or t-channel
meson exchange.
First, as there is
little evidence for flavor mixing
in meson spectroscopy outside the $\eta-\eta'$ system,
one anticipates that neglecting higher terms in
the Born series (such as $q \bar q \rightarrow gg \rightarrow q \bar q$)
is not a bad approximation. In addition, the Born-approximation
$I=2$ $\pi\pi$ effective
potentials
derived in Refs.\ \cite{BS} and \cite{Swan} are numerically very similar
to the
nonperturbative potentials derived by Weinstein and Isgur.
Finally, comparison of perturbative phase shifts to those found in a
variational resonating group calculation shows good numerical
agreement \cite{Swan}.
For simplicity we use single Gaussians for the asymptotic pion and kaon
wavefunctions,
\begin{equation}
\psi_{\pi(K)}(r) = \left({\beta_{\pi(K)}^2 \over \pi}\right)^{3/4}
{\rm e}^{-\beta_{\pi(K)}^2\, r^2/2} \ ,
\end {equation}
where $r = |\vec r_q - \vec r_{\bar q}|$.
The corresponding momentum-space wavefunction $\phi(k_{rel})$ is
a function of the magnitude of the relative momentum vector
$\vec k_{rel}=(m_{\bar q}\vec k_q - m_q\vec k_{\bar q})/(m_q + m_{\bar q})$.
Flavor symmetry breaking is incorporated through unequal strange
and nonstrange quark masses (we introduce a mass ratio
$\rho = m_u/m_s$) and a meson width parameter $\xi$
(defined by $\xi = \beta_\pi^2/\beta_K^2$;
$\xi < 1$ corresponds to a smaller kaon than
pion).
Of course these parameters are related.
For instance if
we take $V_{conf}(r_{ij}) = C + \kappa r_{ij}^2 /2$ then
$\xi = \sqrt{(1 + \rho)/2}$.
Standard quark model values for the constituent masses,
$m_u = 0.33$ GeV and $m_s =
0.55$ GeV, give $\rho = 0.6$, and from the SHO relation above
we might anticipate
$\xi \approx 0.9$.
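The SHO relation quoted above follows from $\beta^2=\sqrt{\mu\kappa}$ for the Gaussian ground state of a pair with reduced mass $\mu$ in the potential $\kappa r_{ij}^2/2$: with $\mu_\pi=m_u/2$ and $\mu_K=m_um_s/(m_u+m_s)$ one gets $\xi=\sqrt{\mu_\pi/\mu_K}=\sqrt{(1+\rho)/2}$. A numerical sketch:

```python
import numpy as np

m_u, m_s = 0.33, 0.55                  # GeV, standard constituent masses
rho = m_u / m_s                         # = 0.6
mu_pi = m_u / 2                         # reduced mass of the pion
mu_K = m_u * m_s / (m_u + m_s)          # reduced mass of the kaon
xi = np.sqrt(mu_pi / mu_K)              # beta_pi^2 / beta_K^2 for V = C + kappa r^2 / 2
assert np.isclose(xi, np.sqrt((1 + rho) / 2))
assert abs(xi - 0.894) < 1e-3           # the xi ~ 0.9 anticipated in the text
```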
A fit to light meson spectroscopy in a
Coulomb plus linear potential model with a contact hyperfine term
finds a similar $\rho $ value of $\rho = 0.58$. With this $\rho$
a single-Gaussian variational calculation
\cite{Swan} actually finds a value for $\xi$ slightly above unity,
$\xi = 1.05$, because the stronger pion hyperfine attraction leads to a smaller
pion than kaon despite the heavier strange quark mass. In any case we expect
$\xi$ to be near unity.
There are four Born-order quark exchange graphs for $K\pi $ scattering,
which we previously classified as two ``transfer'' and two ``capture'' processes in our discussion of $\pi\pi$ scattering \cite{BS}.
The transfer diagrams represent scattering due to a
spin-spin hyperfine interaction between a
quark pair ($T_1$) or an antiquark pair ($T_2$).
In the capture
diagrams the interaction is between a quark-antiquark pair in different mesons,
$u\bar s$ for $C_1$ and $u \bar d$ for $C_2$. We apply the
methods of Ref. \cite{BS} (Appendix C)
to obtain the Born-order Hamiltonian matrix element
$h_{fi}$ for these diagrams, which is
\begin{equation}
h_{fi} = {1\over (2\pi)^3}
{4 \pi \alpha_s \over 9 m_u^2} \Big( T_1 + T_2 + C_1 + C_2 \Big) \ ,
\end {equation}
where the term contributed by each diagram is
\begin{mathletters}
\begin{eqnarray}
T_1 &=& \exp\bigg\{
- (1 + \xi(1+\zeta)^2 )\;
\bigg[ {1- \mu\over 2} \bigg] \;
{k^2 \over 4 \beta_\pi^2}\;
\bigg\} \\
T_2 &=& \rho \left({2\sqrt{\xi} \over 1 + \xi } \right)^{3}
\exp\bigg\{ -
{\xi \over 1+\xi} \; \bigg[ 1 + (1-\zeta)^2 + 2(1-\zeta)\mu \, \bigg] \;
{k^2 \over 4\beta_\pi^2} \; \bigg\} \\
C_1 &=& \rho \left({4 \over 2 + \xi}\right)^{3/2} \exp \bigg\{
-{ 1 \over 2+\xi} \; \bigg[ 1+ 3\xi - \xi \zeta (1-\zeta)
+ ( \xi - 1- 3 \xi \zeta) \mu \,
\bigg] \; {k^2 \over 4\beta_\pi^2}\; \bigg\} \\
C_2 &=& \left({4\xi \over 1 +2\xi}\right)^{3/2} \exp \bigg\{
-{ \xi \over 1+2\xi} \;
\bigg[ 3 - \zeta + \zeta^2 + \xi (1+\zeta)^2 \nonumber \\
&& + (1 - 3\zeta -\xi (1+\zeta)^2 ) \mu \, \bigg]
{k^2 \over 4\beta_\pi^2} \; \bigg\} \ .\\
\nonumber
\end{eqnarray}
\end{mathletters}
In these matrix elements $\mu = \cos(\theta_{CM})$, where $\theta_{CM}$ is
the center of mass
scattering angle, the quark mass parameter $\zeta$ is $
(m_s-m_u)/(m_s+m_u) = (1-\rho)/(1+\rho)$, and $k$ is the magnitude of
the asymptotic three-momentum of each meson in the CM frame.
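The four diagram weights above are simple to code; as a consistency check, in the flavor-symmetric limit $\rho=\xi=1$ (so $\zeta=0$) their sum is even in $\mu$, so the P-wave projection $\int_{-1}^{1}\mu\, h_{fi}\, d\mu$ vanishes, in line with the observation below that the P-wave is driven entirely by flavor symmetry breaking. A sketch (the momentum $k$ is illustrative; $\beta_\pi$ is the $\pi\pi$-fitted value from the text):

```python
import numpy as np

def weights(mu, k, rho, xi, beta_pi):
    """Sum of the diagram weights T1 + T2 + C1 + C2 (GeV units)."""
    z = (1 - rho) / (1 + rho)          # the quark mass parameter zeta
    q = k**2 / (4 * beta_pi**2)
    T1 = np.exp(-(1 + xi*(1+z)**2) * (1 - mu)/2 * q)
    T2 = rho * (2*np.sqrt(xi)/(1+xi))**3 * np.exp(
        -xi/(1+xi) * (1 + (1-z)**2 + 2*(1-z)*mu) * q)
    C1 = rho * (4/(2+xi))**1.5 * np.exp(
        -1/(2+xi) * (1 + 3*xi - xi*z*(1-z) + (xi - 1 - 3*xi*z)*mu) * q)
    C2 = (4*xi/(1+2*xi))**1.5 * np.exp(
        -xi/(1+2*xi) * (3 - z + z**2 + xi*(1+z)**2
                        + (1 - 3*z - xi*(1+z)**2)*mu) * q)
    return T1 + T2 + C1 + C2

# P-wave projection via Gauss-Legendre quadrature
nodes, wts = np.polynomial.legendre.leggauss(40)
k, beta_pi = 0.3, 0.337                 # GeV
pwave_sym = np.sum(wts * nodes * weights(nodes, k, 1.0, 1.0, beta_pi))
pwave_brk = np.sum(wts * nodes * weights(nodes, k, 0.6, 1.0, beta_pi))
assert abs(pwave_sym) < 1e-12           # vanishes in the flavor-symmetric limit
assert abs(pwave_brk) > 1e-4            # turned on by flavor symmetry breaking
```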
The matrix element $h_{fi}$ is related to the $\ell$th partial-wave
phase shift by \cite{BS}
\begin{equation}
\delta^{(\ell)} = {- 2\pi^2 k E_\pi E_K \over ( E_\pi + E_K)} \int_{-1}^{1}
h_{fi}(\mu)\, P_\ell (\mu)\,d\mu \ ,
\end{equation}
with the meson energies related to $k$ by relativistic kinematics.
This result involves $\delta^{(\ell )} $ rather than
$\sin \delta^{(\ell)} $ because we choose to
equate our Born amplitude to the leading
term in the
elastic scattering amplitude $(\exp\{2i\delta^{(\ell)}\}-1)/2i$
rather than to the full real part. The phase shifts for all partial waves
follow
from this result
through application of the integral $\int_{-1}^1 e^{a\mu} P_\ell (\mu)
d\mu = 2 i_\ell (a)$ where $i_\ell$ is the modified spherical Bessel function
of the first kind.
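For $\ell=0,1$ the quoted identity reads $\int_{-1}^1 e^{a\mu}\,d\mu = 2\sinh(a)/a$ and $\int_{-1}^1 \mu\, e^{a\mu}\,d\mu = 2(a\cosh a-\sinh a)/a^2$, which is easy to confirm numerically:

```python
import numpy as np

def i0(a):
    # modified spherical Bessel function of the first kind, order 0
    return np.sinh(a) / a

def i1(a):
    # modified spherical Bessel function of the first kind, order 1
    return (a*np.cosh(a) - np.sinh(a)) / a**2

nodes, wts = np.polynomial.legendre.leggauss(40)
for a in [0.3, 1.0, 2.5]:
    lhs0 = np.sum(wts * np.exp(a*nodes))          # P_0(mu) = 1
    lhs1 = np.sum(wts * nodes * np.exp(a*nodes))  # P_1(mu) = mu
    assert np.isclose(lhs0, 2*i0(a))
    assert np.isclose(lhs1, 2*i1(a))
```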
\section{Results and Discussion}
On evaluating the angular integral in Eq.\ (6) we find
$I=3/2$ $K\pi $ phase shifts for all $\ell$ in Born
approximation given SHO wavefunctions; these are functions of the
four free parameters
$\beta_\pi$, $\alpha_s/m_u^2$, $\rho=m_u/m_s$, and $\xi=\beta_\pi^2/\beta_K^2$,
and require the physical meson masses as input.
In the following discussion we shall fix the nonstrange
quark mass to be
$m_u=0.33$ GeV since the phase shifts actually involve the ratios given
above rather than the absolute scale of $m_u$.
We proceed by fitting the predicted phase shifts to the S- and P-wave phase
shift data of Estabrooks {\it et al.} \cite{Esta}.
Note however that there may be a discrepancy between this
data and earlier $I=3/2$ $K\pi$ results \cite{Jong,LP} near threshold;
two S-wave data sets are shown in Fig.\ \ref{fig1}.
As an initial ``benchmark" prediction we first
neglect flavor-symmetry violation (except for the use of
physical meson masses in kinematics and phase space)
and employ the same
parameters we previously used to describe
$I=2$ $\pi\pi$ scattering in Ref.\ \cite{BS};
$\alpha_s = 0.6$, $m_u = 0.33$ GeV and $\beta_\pi = 0.337$ GeV (fitted to $\pi\pi$),
and we
set $m_s=m_u$ and $\beta_K=\beta_{\pi}$ so that $\rho = 1$ and $\xi = 1$.
The resulting S-wave phase shift is shown as
a dotted line in Fig.\ \ref{fig1}. Although the shape of the predicted phase
shift is qualitatively correct, evidently the predicted magnitude is
somewhat larger
than the data at invariant masses above 0.9 GeV.
Of course this flavor-symmetric
parameter set is unrealistic because it does not assume a
heavier strange quark; allowing $m_s$ to vary
yields the value $\rho = 0.677$ in a fit to the S-wave data
of Estabrooks {\it et al.}; this is
close to the $\rho \approx 0.33 \ \hbox{GeV} / 0.55 \ \hbox{GeV} = 0.6$
expected
from $q\bar q$ quark model spectroscopy. The resulting phase shift is shown as
a dashed line in Fig.\ \ref{fig1}, and the agreement is impressive. The same
parameter set gives a P-wave phase shift which is shown as a dashed line in
Fig.\ \ref{fig2}. Evidently the agreement with experiment
is again quite good. Note that
the predicted P-wave phase shift is zero for $\rho = 1$, so the S-wave data
are consistent with approximate flavor symmetry (which implies
$\pi\pi$ S-wave $\approx$ $K\pi $ S-wave) and the P-wave data are consistent
with the expected amount of
flavor symmetry breaking (seen in the nonzero $K\pi $ P-wave).
Although we have found a satisfactory description of the data simply by using
$\pi\pi$ parameters and physical meson masses
and fitting $m_s$, it is of interest to investigate the
sensitivity of our results to changes in the other parameters and to
determine their global optimum values.
Fixing $\xi = 1$ and $\beta_\pi = 0.337$ GeV and fitting $\alpha_s$ and
$\rho$ to the Estabrooks {\it et al.} S-wave data
gives $\alpha_s = 0.634$ and $\rho = 0.604$,
again consistent with standard quark model values.
A global fit to the 33 S- and P-wave data points of Estabrooks
{\it et al.}
with all four parameters free
gives $\rho = 0.789$, $\alpha_s = 0.577$, $\beta_\pi = 0.293$
GeV and $\xi = 0.568$. The rather large $m_u/m_s$ in this fit
is partially compensated by a spatially small kaon wavefunction, but as the
phase shifts are rather insensitive to $\xi$, and we expect a value
closer to unity, this best fit probably gives less realistic
parameter values than the
single-parameter fit which found $\rho=0.677$.
The phase shifts
predicted by the global four-parameter set are shown as
solid lines
in Figs.\
\ref{fig1} and \ref{fig2}.
Note that the four-parameter
S-wave phase shift is essentially indistinguishable
from the one-parameter $(m_s)$ fit (dashed line);
the most important difference in the predictions of the two parameter sets
is in the P-wave, which is not yet very well determined
experimentally.
Estabrooks {\it et al.} also reported measurements of the $I=3/2$ $K\pi $
D-wave phase shift. We predict a
small negative D-wave phase shift in accord with the data, although
the magnitude of our D-wave is somewhat smaller than is observed.
A similar discrepancy in the $I=2$ $\pi\pi$ D-wave was previously noted
\cite{BS,Swan}.
It should be stressed that the
D-wave is qualitatively different from the P-wave;
it is not driven by flavor symmetry breaking and is an intrinsically
small effect at these energies, so that other contributions which we have
neglected may be important here. Possible contributions to this higher
partial wave include
the confinement, spin-orbit and tensor interactions.
The departure of the actual $q\bar q$ wavefunction from the assumed single
Gaussian may also be important, although
this effect was investigated in Ref.\ \cite{Swan} for
$I=2$ $\pi\pi$ scattering
and was found to be small.
In his original PCAC paper, Weinberg \cite{Wein} predicted an
$I=3/2$ $K\pi $ scattering length of
\begin{equation}
a_S^{(3)} = -{m_K m_\pi \over m_K + m_\pi} \ {1 \over 8 \pi f^2} \ .
\end{equation}
Here, $f$ is the pseudoscalar decay constant which may be identified with
$f_\pi$ in the flavor symmetric limit.
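As a numerical aside (not part of the original text), Weinberg's formula is easy to evaluate; this sketch assumes physical meson masses and $f = f_\pi = 93$ MeV, values that are standard inputs rather than parameters fixed in this passage:

```python
import math

m_K, m_pi = 0.4937, 0.1396   # GeV, assumed physical meson masses
f_pi = 0.093                 # GeV, measured pion decay constant

mu = m_K*m_pi/(m_K + m_pi)   # reduced-mass factor in Weinberg's formula
a = -mu/(8*math.pi*f_pi**2)  # scattering length in GeV^-1
a_mpi = a*m_pi               # expressed in units of 1/m_pi
```

With these inputs the current-algebra prediction comes out near $-0.070/m_\pi$, at the small-magnitude end of the experimental range discussed below.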
The quark Born approximation for the
scattering length may be extracted from our expression for the S-wave phase
shift, and is
\begin{equation}
a_S^{(3)} = -{m_K m_\pi \over m_K + m_\pi}\;
{2 \alpha_s \over 9 m_u^2} \left[ 1 +
\left( {4 \xi \over 1 + 2\xi}\right)^{3/2} + \rho \left( {4 \over 2 +\xi}
\right)^{3/2} + \rho \left( {2 \sqrt{\xi} \over 1 +\xi}\right)^3 \right] \ .
\end{equation}
With our various parameter sets we find the following values for
the scattering length $a_S^{(3)}$:
$-0.092/m_\pi$ ($\rho = 1, \xi = 1$); $-0.077/m_\pi$
($\rho = 0.677, \alpha_s = 0.6$); $-0.078/m_\pi$ ($\rho = 0.604,
\alpha_s = 0.634$); $-0.076/m_\pi$ (global fit). These are compared to the
PCAC prediction, one-loop chiral perturbation
theory and various model calculations in Table\ \ref{table1}.
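The quoted values can be cross-checked against the Born formula above; in this sketch the inputs $m_u = 0.335$ GeV, $\alpha_s = 0.6$, and $\xi = 1$ are assumptions consistent with standard quark-model values, not parameters stated in this passage:

```python
import math

m_K, m_pi = 0.4937, 0.1396   # GeV, assumed physical meson masses
m_u = 0.335                  # GeV, assumed nonstrange quark mass

def a_S3(alpha_s, rho, xi):
    """Quark Born I=3/2 K pi scattering length, in units of 1/m_pi."""
    bracket = (1 + (4*xi/(1 + 2*xi))**1.5
                 + rho*(4/(2 + xi))**1.5
                 + rho*(2*math.sqrt(xi)/(1 + xi))**3)
    mu = m_K*m_pi/(m_K + m_pi)
    return -mu * 2*alpha_s/(9*m_u**2) * bracket * m_pi

a_sym = a_S3(0.6, 1.0, 1.0)    # flavor-symmetric parameter set
a_fit = a_S3(0.6, 0.677, 1.0)  # single-parameter m_s fit
```

With these inputs the flavor-symmetric set gives roughly $-0.092/m_\pi$ and the single-parameter fit roughly $-0.077/m_\pi$, matching the values quoted above.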
Experimental values for the scattering length range from
$-0.07/m_\pi$ to $-0.14/m_\pi$, and are also summarized
in Table\ \ref{table1}. Note that
we may also interpret our scattering length as
a quark Born formula for $f_\pi $
if we assume the PCAC relation (7).
With the flavor-symmetric parameter set we find $f_\pi = 80$ MeV,
in reasonable agreement with the experimental value of 93 MeV.
Our theoretical values for the scattering length
are consistent with most
experimental results, but not with the most recent,
which is that of Estabrooks {\it et al}.
Lang and Porod \cite{LP} note that the
Estabrooks {\it et al.} S-wave phase shift agrees with previous data for
$m_{K\pi } \agt 1$ GeV, but is
somewhat larger in magnitude than previous measurements
for $m_{K\pi } \alt 1$ GeV. Presumably this leads to their rather
large scattering length. It would clearly
be useful to resolve this experimental
discrepancy, since only in this mass region is there any indication
of a possible disagreement between the S-wave phase shift and our predictions.
It would also be very useful to improve the accuracy of the P-wave
measurement, which is a sensitive test of flavor symmetry breaking,
and to extend the S-wave phase shift measurements to higher invariant masses
as a test of the extremum predicted and perhaps observed near 1.4 GeV.
\section{Conclusions}
We have calculated
$I = 3/2$ $K\pi $ elastic scattering phase shifts
using a Born-order constituent-exchange description
in the framework of
the nonrelativistic quark
potential model.
Extensive
previous work leads us to believe that
one gluon exchange combined with quark exchange
may accurately describe nonresonant
hadron scattering in certain channels including $I=3/2$ $K\pi$,
and that the Born
approximation to the scattering amplitude is an acceptable one. This
reaction
is appropriate
for testing this model of scattering because
t-channel pion exchange is forbidden by $G$-parity and the experimental phase
shift shows no evidence for s-channel resonance formation.
Since this model was previously found to describe
the related $I=2$ $\pi\pi$ S-wave phase shift accurately \cite{BS},
approximate flavor
symmetry leads us to expect
that the predicted $I = 3/2$ $K\pi $
S-wave phase shift should at least be
in qualitative agreement with the data. This is
indeed found to be the case. The agreement is considerably improved
when flavor symmetry is broken by assigning the strange quark
a mass consistent with standard quark model values.
The P-wave $K\pi $ phase shift however
is generated entirely by flavor symmetry breaking effects
(primarily by the strange-nonstrange quark mass difference in our model), and
is not present in $I=2$ $\pi\pi$ scattering.
The very reasonable result we find for the P-wave phase shift using
fitted S-wave parameters is therefore a nontrivial and successful test
of the model.
Finally, the model predicts an
S-wave scattering length of about $-0.077/m_\pi$, which is in the range of
reported experimental values and is commensurate with the predictions of chiral
perturbation theory.
Although we find a small negative D-wave phase shift as has been reported
experimentally, the magnitude and energy dependence are
not well reproduced. This, however, is a
small contribution to the scattering amplitude, and the various other
scattering mechanisms which have been neglected in this calculation
may be significant in this case, and should be investigated in future.
Weinstein and Isgur have also studied S-wave
$I=3/2$ $K\pi $ scattering in the nonrelativistic quark model, using a
nonperturbative variational technique \cite{WI2}.
(Their method does not allow extraction of higher partial waves
at present.)
They find good agreement
with the S-wave data, although they must first
scale the range and strength of their
effective $K\pi $ potentials.
We perform no such scaling
but do employ relativistic phase space; the fact that both methods agree
well with experiment suggests that their scaling may
actually be compensating for kinematic effects above threshold. This conclusion
is supported by a recent reanalysis of the Weinstein-Isgur variational
calculations \cite{JW}.
The obvious extension of this work is to meson-baryon and baryon-baryon
scattering, with the caveat that one should specialize to channels such as
K$^+$-nucleon and baryon-baryon in which $q\bar q$ pair creation and
annihilation is unimportant. These topics are currently under investigation.
\acknowledgements
This research was sponsored by the Natural Sciences and Engineering
Research Council of Canada; the United States Department of Energy under
contracts DE-AC05-840R21400 with Martin Marietta Energy Systems Inc.,
DE-FG05-91ER40627 with the Physics Department of the University of
Tennessee, DE-AC02-76ER03069 with the Center for Theoretical Physics at
the Massachusetts Institute of Technology; and by the State of
Tennessee Science Alliance Center under contract R01-1062-32.
\newpage
\section{Introduction}
Galaxy number counts as a function of magnitude provide direct
constraints on galaxy evolution in both luminosity and number density.
Number counts in the UV, in particular, can help trace the star
formation history of the universe. Until recently obtaining faint
galaxy number counts in the UV has been difficult due to the small
areas surveyed
\markcite{Gardner00,Deharveng94,Iglesias04,Sasseen02,Teplitz06}({Gardner}, {Brown}, \& {Ferguson} 2000; {Deharveng} {et~al.} 1994; {Iglesias-P{\'a}ramo} {et~al.} 2004; {Sasseen} {et~al.} 2002; {Teplitz} {et~al.} 2006). While
the {\sl Galaxy Evolution Explorer (GALEX)} has allowed for the
measurement of UV galaxy number counts over a wide field of view
\markcite{Xu05}({Xu} {et~al.} 2005, $\sim 20$ deg$^2$), the confusion limit of {\sl GALEX}
restricts the magnitude range covered to 14 to 23.8 $m_{AB}$. The
deepest UV number counts from {\sl HST\ } range from $m_{AB}$= 23 to
29 over an extremely small field of view of $\sim 1.3$ square arcminutes.
Here, we present galaxy number counts obtained in 3 near UV filters
(1928 \AA, 2246 \AA, 2600 \AA) as well as in the $u$ band (3645 \AA)
obtained using the {\sl Swift\/} UV/Optical Telescope
\markcite{UVOT}(UVOT; {Roming} {et~al.} 2005). Deep exposures were taken of a 289 square arcminute
field of view overlapping the Chandra Deep Field South
\markcite{CDFS}(CDF-S; {Giacconi} {et~al.} 2002) allowing for the measurement of number counts
from $m_{AB}$ = 21 to 26. UVOT data covers the break in the slope of
the NUV number counts with greater precision than the existing {\sl
GALEX} and {\sl HST} number counts. We use the UVOT number counts
to explore the evolution of star-forming galaxies out to $z\sim1$.
\section{Data \& Analysis \label{sect:data}}
The CDF-S was observed with UVOT, one of three telescopes onboard the
\textit{Swift} spacecraft \markcite{Swift}({Gehrels} {et~al.} 2004), the primary mission of which
is to study gamma-ray bursts (GRB). The UVOT is a 30 cm telescope
with $f$-ratio 12.7 \markcite{UVOT}({Roming} {et~al.} 2005). It has two grisms and seven
broadband filters. The central wavelengths and widths of the uvw2,
uvm2, uvw1, and $u$ filters used in this paper can be found in Table
\ref{tab:uvotobs}. For a more detailed discussion of the filters, as
well as plots of the responses, see \markcite{UVOTcal}({Poole} {et~al.} 2008).
Observations of the CDF-S were made between July 7, 2007 and December
29, 2007. CDF-S images are unbinned with a pixel scale of 0.5
arcseconds. UVOT data processing is described in the UVOT Software
Guide\footnote{\texttt{http://heasarc.gsfc.nasa.gov/docs/swift/analysis}}.
The data were processed with a version of the UVOT pipeline in which
exposure maps are aspect corrected. This feature is not yet
available for data currently in the archive but will appear in future
versions of the pipeline. Image files and exposure maps were summed
using \texttt{UVOTIMSUM} from the publicly available UVOT FTOOLS
(HEAsoft
6.6.1)\footnote{\texttt{http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/}}.
This involves two flux conserving interpolations of the images, the
first to convert from the raw frame to sky coordinates, and the second
when summing the images. Bad pixels are known and a correction is
applied. UVOT, as is the case with all microchannel plate intensified
CCDs, is insensitive to cosmic rays. The maximum exposure time in
each filter is given in Table \ref{tab:uvotobs}.
Because \textit{Swift} is optimized for fast slewing to accomplish its GRB
mission, the pointing accuracy is of order 1 to 2 arcminutes. In
addition the requirement that the solar panels face towards the Sun
causes the field of view to rotate over the course of the year. As a
result the exposure times vary significantly across the summed images.
Exposure maps are nearly uniform in the center but become complicated
on the edges. Table \ref{tab:uvotobs} gives the area covered where
the exposure time is at least 98\% of the maximum exposure time in
each filter. The 98\% value was chosen to
maximize the area used in this study while simultaneously maintaining
a magnitude limited sample.
The area in each filter covered by the 98\% exposure time criterion is
shown in Figure \ref{fig:expmap}. For comparison the area covered by
the CDF-S, Hubble Ultra Deep Field \markcite{UDF}({Beckwith} {et~al.} 2006), and Great
Observatories Origins Deep Survey \markcite{GOODS}({Giavalisco} {et~al.} 2004) is shown by the
labeled contours. A false color image of the central region of the
CDF-S using the uvw2, uvm2, and uvw1 images is shown in Figure
\ref{fig:ecdfs}.
Photometry was performed using SExtractor \markcite{Sextractor}({Bertin} \& {Arnouts} 1996), a
publicly available code designed for the identification and
measurement of astrophysical sources in large scale galaxy survey
data. A full listing of the SExtractor parameters used is provided in
the online version of Table \ref{tab:params}. The background map,
which measures the local background due to the sky and other sources,
was generated internally by SExtractor. To improve the detectability
of faint extended sources the filtering option was used with a
Gaussian filter. The filter size was selected to match the full width
half maximum (FWHM) of the point spread function (PSF) as recommended
in the SExtractor manual. The PSF was measured from the CDF-S image
for each filter using one star. There was only one star which was
bright enough and isolated enough to accurately measure the outer
regions of the PSF. The PSFs used were $3.30''$ in uvw2, $2.87''$ in uvm2,
$2.86''$ in uvw1, and $2.67''$ in $u$. Magnitudes were calculated from
\texttt{MAG\_AUTO} which is designed to be the best measure of the
total magnitudes of galaxies. SExtractor was used to process count
rate images created by dividing the summed images by the exposure map.
The resulting output was converted to flux using the values given by
\markcite{UVOTcal}{Poole} {et~al.} (2008) for stellar spectra. The fluxes were then converted
to AB magnitudes \markcite{Oke74}({Oke} 1974). The number of sources detected in
each band ranges from 888 to 1260 and is given in Table
\ref{tab:uvotobs} along with the area covered in each image.
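The final flux-to-magnitude step uses the AB system of {Oke} (1974); a minimal sketch of the generic definition follows (this is the standard AB zero point, not the UVOT-specific photometric zero points of {Poole} {et~al.} 2008):

```python
import math

def ab_mag(f_nu):
    """AB magnitude from flux density f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -2.5*math.log10(f_nu) - 48.60

# a 3631 Jy source (3.631e-20 erg s^-1 cm^-2 Hz^-1) defines m_AB = 0
m0 = ab_mag(3.631e-20)
```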
The UVOT detector is a microchannel plate intensified CCD which
operates in photon counting mode. As such it is subject to
coincidence loss which occurs when two or more photons arrive at a
the same location on the detector within a single CCD readout interval
of 11 ms \markcite{Fordham00}({Fordham}, {Moorhead}, \& {Galbraith} 2000). When this happens only one photon will be
counted, which systematically undercounts the true number of photons.
The coincidence loss correction is at the 1\% level for $m_{AB} \sim
19$ in the UVOT filters we use in this paper. For the magnitude ranges
considered in our number counts the coincidence loss is insignificant
and no attempts are made to correct for it.
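The size of the effect can be illustrated with an idealized model in which at most one photon per 11 ms frame is registered at a given detector location; this is a simplification of the full correction described by {Poole} {et~al.} (2008), which also treats dead time and the extended PSF:

```python
import math

FRAME_TIME = 0.011  # s, the CCD readout interval quoted in the text

def measured_rate(true_rate):
    """Counts/s registered when only one photon per frame can be counted."""
    lam = true_rate*FRAME_TIME            # mean photons per frame
    return (1 - math.exp(-lam))/FRAME_TIME

def loss_fraction(true_rate):
    """Fraction of incident photons lost to coincidence."""
    return 1 - measured_rate(true_rate)/true_rate

# a source near 1.8 counts/s loses about 1% of its photons
loss = loss_fraction(1.8)
```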
By design the CDF-S is on a line of sight with very low Galactic
extinction. In addition the area covered by the UVOT observation is
around 130 square arcminutes, depending on the filter, so variations
in extinction across the field are small. According to the dust maps
of \markcite{Schlegel98}{Schlegel}, {Finkbeiner}, \& {Davis} (1998) the range of Galactic extinction in our field is
$0.020 \leq {\rm A}_V \leq 0.030$. Our photometry is corrected for
Galactic extinction based on the position of the source assuming the
Milky Way dust curve of \markcite{Pei92}{Pei} (1992). The extinction correction is
largest in the uvm2 filter as it is centered on the 2175 \AA\ dust
feature which is pronounced in the Milky Way. The extinction
correction ranges from $0.053 \leq {\rm A_{uvm2}} \leq 0.086$ across
the field in uvm2 which demonstrates that the extinction correction is
not a significant source of error in any of the filters.
\section{Bias Corrections}
The raw number counts suffer from several biases which need to be
quantified. Completeness addresses the inability to detect an object
either due to confusion with other sources or limitations in the
photometry. Eddington bias \markcite{Eddington13}({Eddington} 1913) occurs
because magnitude errors will preferentially scatter objects into
brighter magnitude bins because there are generally more objects at
fainter magnitudes. There is also the potential for false detections of
objects due to noise.
These first two problems can be addressed simultaneously with a
Monte Carlo simulation, following the procedure set out in \markcite{Smail95}{Smail} {et~al.} (1995).
For each of the four images, synthetic galaxies were added and the
analysis repeated. Synthetic galaxies were placed at random locations
on the image. The magnitudes of the synthetic galaxies were between
21 and 27 in uvw2, uvm2 and uvw1 and between 20 and 25.5 in $u$ and
the relative numbers by magnitude follow the observed distribution
from the original SExtractor photometry in the relevant filter. The
synthetic galaxies are given exponential profiles with semi-major axes
and ellipticities that match the observed distribution as a function
of magnitude. Individual photon arrivals are modeled using Poisson
statistics and following the galaxy profile. The resulting image is
then convolved with the UVOT PSF for the final image.
For each filter a single synthetic galaxy was added to the real image
and the photometry process described in \S \ref{sect:data} was
redone. The resulting photometry catalog was checked to determine if
the synthetic galaxy was detected and at what magnitude. This was
repeated 50,000 times for each filter to build up statistics on the
completeness. The number counts were corrected by dividing by the
fraction of synthetic galaxies detected in the relevant magnitude bin.
These values are tabulated in Table \ref{tab:ncount}. Following
\markcite{Smail95}{Smail} {et~al.} (1995) the number counts are truncated where the completeness
correction exceeds 50\%. The Poisson error bars on the number counts
are also divided by the completeness correction to take into account
uncertainties introduced by the completeness correction.
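The logic of the correction can be sketched with a toy Monte Carlo; the logistic detection-probability curve below is a hypothetical stand-in for the full image-level simulation described above:

```python
import math
import random

random.seed(1)

def detect_prob(m, m50=25.0, width=0.3):
    """Hypothetical detection probability; m50 is the 50% completeness mag."""
    return 1.0/(1.0 + math.exp((m - m50)/width))

def completeness(m, trials=5000):
    """Fraction of injected synthetic sources recovered at magnitude m."""
    hits = sum(random.random() < detect_prob(m) for _ in range(trials))
    return hits/trials

# divide raw counts by the recovered fraction; following Smail et al.,
# truncate bins where completeness falls below 50%
raw = {22.25: 40, 23.25: 90, 24.25: 180, 25.25: 250}  # illustrative counts
corrected = {m: n/completeness(m) for m, n in raw.items()
             if completeness(m) >= 0.5}
```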
Correcting for false detections was also done using the methods of
\markcite{Smail95}{Smail} {et~al.} (1995). Using the exposure time and background count rate
calculated from the background map output by SExtractor, noise frames
were simulated for each filter.
\S \ref{sect:data} were repeated for each frame and the number of
false detections recorded as a function of magnitude. For each filter
100 noise frames were analyzed. The number of spurious sources,
$N_{spur}$, is shown as a function of magnitude for each filter in
Table \ref{tab:ncount}. Out of all the simulated frames only one
spurious source was detected. Given our deep exposures the
completeness correction truncates our number counts well before
background noise becomes an issue.
Galaxy number counts can be overestimated due to contamination by
Galactic stars and quasars. The fraction of objects in the field that
are quasars is estimated by position matching the four UVOT photometry
catalogs with the Extended Chandra Deep Field South X-ray point source
catalog \markcite{ECDFS}({Lehmer} {et~al.} 2005). Objects with X-ray detections are assumed to
be quasars. The number of such sources in each band is 11, 11, 14,
and 21 for the uvw2, uvm2, uvw1, and $u$ bands respectively. This
represents 1.2, 1.0, 1.1, and 2.3\% of the total sample. These
sources have been removed from the number counts. The number of AGN
per magnitude bin, $N_{AGN}$, is tabulated in Table \ref{tab:ncount}.
The problem of stellar contamination is greatly reduced by
the fact that the line of sight towards the CDF-S is out of the plane
of the Milky Way. The CDF-S field was explicitly chosen to be
particularly sparse. As a result the field is a statistical outlier,
and the stellar contamination in this field will be unusually low. In
addition, the fraction of stars with significant UV flux is low,
particularly when the field points toward the Galactic halo where the
stellar population is very old. This is another reason the stellar
contamination in the three NUV filters should be low.
The contamination due to stars is estimated by position matching the
UVOT photometry catalogs with objects in the field with stellar
classifications in the COMBO-17 survey \markcite{Wolf04}({Wolf} {et~al.} 2004). The
COMBO-17 survey includes photometry in 17 passbands for a $30\times30$
arcminute field surrounding the CDF-S. It also contains photometric
redshifts and classifications of objects in the survey. The UVOT
positions were compared with the objects classified as stars or white
dwarfs in COMBO-17. This yields 24, 15, and 40 stars in the uvw2,
uvm2, and uvw1 NUV filters which corresponds to 2.7, 1.4, and 3.2\% of
the total sample. The number of stars per magnitude bin, $N_{star}$, is
shown in Table \ref{tab:ncount}. Not all NUV number counts are
corrected for stellar contamination \markcite{Gardner00}(e.g. {Gardner} {et~al.} 2000). Given
the numbers provided in Table \ref{tab:ncount} the number counts can
easily be recalculated without the stellar contamination correction.
However the situation is different in the $u$ band where more stars have
significant fluxes. Position matching yielded 48 stars in the $u$
band which is 5.1\% of the total sample. As in the NUV counts the
stellar contamination has been corrected for and the details are in
Table \ref{tab:ncount}. \markcite{Capak04}{Capak} {et~al.} (2004) provide both raw number counts and
the number counts corrected for stellar contamination in the $U$ band
from observations around the Hubble Deep Field North (HDFN). The $u$
and $U$ filters are comparable, and the HDFN is similar to the CDF-S
in being one of the darkest areas of the sky pointed out of the
Galactic disk with low Galactic extinction. The level of stellar
contamination in \markcite{Capak04}{Capak} {et~al.} (2004) ranges from 66\% at $u=20$ to 6\% at
$u=25$. At the bright end of this scale the values are comparable,
but at the faint end they are roughly twice as high as in the CDF-S.
One possible explanation for this discrepancy is that the
\markcite{Capak04}{Capak} {et~al.} (2004) sample covers $\sim 720$ arcmin$^2$ compared to 137
arcmin$^2$ for this sample. Over this larger area one would expect
the number of stars to be closer to the average number expected for
that line of sight to the halo, while in our relatively smaller area
the number of stars can remain a statistical outlier.
Cosmic variance is another potential source of bias which arises due
to local inhomogeneities in the Universe. Galaxies are known to
cluster on many different length scales. As a result the density of
galaxies will differ along different lines of sight. The smaller the
area covered by a survey the more the results will be biased by cosmic
variance. A publicly available code from \markcite{Trenti08}{Trenti} \& {Stiavelli} (2008) was used to
estimate the errors due to cosmic variance in our number counts. This
code is based in part on $N$-body simulations of galaxy structure
formation. It uses the area of the survey, mean redshift, range of
redshifts observed and the number of objects detected to calculate the
error due to cosmic variance. The mean redshift and redshift range of
each of our luminosity bins was estimated from the model number counts
described in \S \ref{sect:models}. The results show that the
uncertainties due to cosmic variance are of the same order as the
Poisson errors for all of the filters and magnitude bins used here.
We therefore multiply our Poisson errors by a factor of $\sqrt 2$,
since independent errors of comparable size add in quadrature, to
take into account the effects of cosmic variance.
The resulting corrected number counts are shown in Figures
\ref{fig:ncuvw2} (uvw2), \ref{fig:ncuvm2} (uvm2), \ref{fig:ncuvw1}
(uvw1), and \ref{fig:ncuu} ($u$). The number counts are also given in
Table \ref{tab:ncount}. In Figures \ref{fig:ncuvw2}, \ref{fig:ncuvm2},
and \ref{fig:ncuvw1} the number counts are plotted alongside the NUV
number counts from GALEX \markcite{Xu05}({Xu} {et~al.} 2005) and STIS \markcite{Gardner00}({Gardner} {et~al.} 2000). A
color conversion has been applied to shift the GALEX NUV filter and
STIS F25QTZ filter in the NUV channel by generating synthetic
magnitudes from a catalog of spectral synthesis models with a range of
ages and star formation histories and estimating the typical color
offset. The GALEX and STIS NUV filters have very similar bandpasses
which typically differ by less than 0.01 magnitudes. The uvm2 filter
has the tightest relationship with the NUV filters with a color
correction of 0.013. The spread is larger in the uvw2 and uvw1
filters, but is still only of order 0.05. In Figure \ref{fig:ncuu}
the UVOT $u$ band number counts are compared to the $U$ band counts of
\markcite{Capak04}{Capak} {et~al.} (2004) and \markcite{Eliche06}{Eliche-Moral} {et~al.} (2006) and the $u$ band measurements of
\markcite{Metcalfe01}{Metcalfe} {et~al.} (2001) and \markcite{Yasuda01}{Yasuda} {et~al.} (2001). Color corrections to the UVOT
$u$ band were determined in the same fashion and are equal to 0.81 in
$U$ and 0.06 in $u$.
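The color conversions amount to synthetic photometry: pass template spectra through two response curves and compare the resulting AB magnitudes. Below is a toy sketch with Gaussian stand-ins for the filter responses (the actual UVOT, GALEX, and STIS curves are not reproduced here):

```python
import math

FNU_ZP = 3.631e-20  # erg s^-1 cm^-2 Hz^-1, AB zero point

def synth_mag(fnu, resp):
    """Response-weighted mean f_nu over a bandpass, as an AB magnitude."""
    mean = sum(f*r for f, r in zip(fnu, resp))/sum(resp)
    return -2.5*math.log10(mean/FNU_ZP)

# toy Gaussian responses standing in for two overlapping NUV bandpasses
nu = [1.0e15 + i*2.0e12 for i in range(301)]               # Hz grid
resp_a = [math.exp(-0.5*((v - 1.30e15)/4e13)**2) for v in nu]
resp_b = [math.exp(-0.5*((v - 1.33e15)/5e13)**2) for v in nu]

flat = [1e-27]*len(nu)                              # flat-spectrum source
redder = [1e-27*(v/1.3e15)**-2 for v in nu]         # red power-law source

color_flat = synth_mag(flat, resp_a) - synth_mag(flat, resp_b)
color_red = synth_mag(redder, resp_a) - synth_mag(redder, resp_b)
```

A flat-spectrum source has zero color by construction, while a redder spectrum picks up a nonzero offset between the two bands; the quoted corrections of 0.013 to 0.81 magnitudes arise in just this way from realistic spectra and responses.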
\section{Models \label{sect:models}}
Simple models of number counts were constructed for both non-evolving
and evolving luminosity functions in the UVOT filters. For each model
the luminosity function is summed over redshift. This summation
includes two corrections. The first is a filter correction to convert
the GALEX NUV filter to the UVOT uvw2, uvm2, and uvw1 filters and the
$U$ band to the UVOT $u$ filter. The second is a $K$-correction to
convert the observed UVOT filter to the rest-frame UVOT filter. Both
of these corrections are a function of redshift.
These corrections were calculated using a model galaxy spectrum
generated with the publicly available P\'{E}GASE spectral synthesis
code \markcite{PEGASE}({Fioc} \& {Rocca-Volmerange} 1997). For the uvw2, uvm2, and uvw1 filters a starburst
galaxy model was used with a constant star formation rate, Solar
metallicity, and standard Salpeter IMF at an age of 800 Myr. This was
chosen to match the model number counts in \markcite{Xu05}{Xu} {et~al.} (2005) which used the
SB4 starburst template of \markcite{Kinney96}{Kinney} {et~al.} (1996) because it most closely
matched the ratio of the local FUV to NUV luminosity densities
described by \markcite{Wyder05}{Wyder} {et~al.} (2005). The P\'{E}GASE model is very nearly the
SB1 template of \markcite{Kinney96}{Kinney} {et~al.} (1996), however we model a range of internal
extinctions. An $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$, $H_0=70$ km
s$^{-1}$ Mpc$^{-1}$ cosmology is used throughout.
The $u$ band models were calculated assuming the cosmic spectrum of
\markcite{Baldry02}{Baldry} {et~al.} (2002) in addition to the starburst spectrum. The cosmic
spectrum is a luminosity weighted average spectrum of galaxies with
$z\lesssim 0.1$ which makes it a good choice for a template
representative of all galaxies. The empirical cosmic spectrum does
not extend far enough into the blue to be useful for modeling the UVOT
$u$ band let alone passing it through the filters at increasing
redshifts. To extend the spectrum into the ultraviolet a template
spectrum was created in P\'{E}GASE from the best fitting parameters
given by \markcite{Baldry02}{Baldry} {et~al.} (2002). The model number counts were corrected for
a range of models of internal extinction. Models were calculated for
$0 \leq {\rm A}_V \leq 2$ for the Milky Way, LMC, and SMC dust models
of \markcite{Pei92}{Pei} (1992) and the starburst dust model of \markcite{Calzetti94}{Calzetti}, {Kinney}, \& {Storchi-Bergmann} (1994).
Galaxy luminosity functions have traditionally been fit empirically
using Schechter functions \markcite{Schechter76}({Schechter} 1976). The Schechter
function is given by
\begin{equation}
\phi(M){\rm d}M = \frac{\ln 10}{2.5} \phi^* \left [10^{0.4(M^* - M)} \right ]^{\alpha + 1}
\exp \left [-10^{0.4(M^* - M)} \right ] {\rm d}M
\end{equation}
where $\phi(M){\rm d}M$ is the number of galaxies with absolute
magnitude between $M$ and $M +{\rm d}M$ per Mpc$^3$. Three free
parameters are fit using an empirical luminosity function: $\alpha$ is
the slope at the faint end of the luminosity function, $M^*$ is the
characteristic magnitude at which the luminosity function turns over,
and $\phi^*$ is the density normalization.
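The magnitude-form Schechter function transcribes directly; the parameter values below are illustrative, not the fitted values of the cited luminosity functions:

```python
import math

def schechter(M, Mstar, phistar, alpha):
    """Galaxies per magnitude per Mpc^3 at absolute magnitude M."""
    x = 10**(0.4*(Mstar - M))
    return math.log(10)/2.5 * phistar * x**(alpha + 1) * math.exp(-x)

# at M = M* the density is (ln 10 / 2.5) * phi* / e
phi_at_knee = schechter(-18.2, -18.2, 0.005, -1.2)
```

For a faint-end slope $\alpha < -1$ the density keeps rising toward faint magnitudes, which is why the faint number counts are so sensitive to $\alpha$.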
For the non-evolving models the local galaxy luminosity function was
used at all redshifts. The GALEX NUV galaxy luminosity function of
\markcite{Wyder05}{Wyder} {et~al.} (2005) was used for the uvw2, uvm2, and uvw1 models. In the
$u$ band the models are based on the local $U$ band luminosity
function from \markcite{Ilbert05}{Ilbert} {et~al.} (2005). In the evolving models the Schechter
function parameters $\alpha$, $\phi^*$, and $M^*$ vary with redshift.
In the uvw2, uvm2, and uvw1 bands the evolution of the Schechter
function parameters is based on their evolution at 1500 \AA\ as found
by \markcite{Arnouts05}{Arnouts} {et~al.} (2005), normalized to match the \markcite{Wyder05}{Wyder} {et~al.} (2005) NUV
parameters for the local universe. For the $u$ band the evolution of
the Schechter function parameters comes from \markcite{Ilbert05}{Ilbert} {et~al.} (2005). In
neither the non-evolving nor evolving models does the dust extinction
change as a function of redshift, nor does the underlying galaxy
template evolve. A model with that level of complexity would be
beyond the scope of this paper.
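The non-evolving construction can be sketched end to end in the stated cosmology; the Schechter parameters here are illustrative placeholders, and the filter and $K$-corrections described above are omitted for brevity:

```python
import math

H0, OM, OL = 70.0, 0.3, 0.7   # cosmology quoted in the text
C = 299792.458                # speed of light, km/s

def E(z):
    return math.sqrt(OM*(1 + z)**3 + OL)

def comoving_distance(z, n=400):
    """Comoving distance in Mpc via a simple trapezoid rule."""
    h = z/n
    s = 0.5*(1.0 + 1.0/E(z))
    for i in range(1, n):
        s += 1.0/E(i*h)
    return (C/H0)*h*s

def schechter(M, Mstar, phistar, alpha):
    """Magnitude-form Schechter function from the equation above."""
    x = 10**(0.4*(Mstar - M))
    return math.log(10)/2.5 * phistar * x**(alpha + 1) * math.exp(-x)

def counts_per_mag(m, Mstar=-18.2, phistar=0.005, alpha=-1.2,
                   zmax=3.0, nz=120):
    """Galaxies per magnitude per steradian at apparent magnitude m."""
    total, dz = 0.0, zmax/nz
    for i in range(1, nz + 1):
        z = i*dz
        Dc = comoving_distance(z)
        Dl = (1 + z)*Dc                      # flat-universe luminosity distance
        M = m - (5*math.log10(Dl) + 25)      # distance modulus, Dl in Mpc
        dVdz = (C/H0)/E(z) * Dc**2           # comoving volume per z per sr
        total += schechter(M, Mstar, phistar, alpha) * dVdz * dz
    return total
```

Even this stripped-down version reproduces the qualitative behavior of the models: the predicted counts rise monotonically toward fainter magnitudes.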
The model number counts are also corrected for the Lyman forest and
continuum using the methods described by \markcite{Madau95}{Madau} (1995). With the
exception of Hubble Deep Field $U$ band number counts
\markcite{Metcalfe01, Volonteri00}({Metcalfe} {et~al.} 2001; {Volonteri} {et~al.} 2000) NUV and $U$ model number counts
are generally not corrected for Lyman absorption. Our modeling
reveals that this is justified. In the bands considered in this paper this
affects the models by a few percent at 29th magnitude and
much less at brighter magnitudes. Although the models described here
are plotted mainly for context the Lyman absorption corrections are
included.
Example models are plotted with the number counts in Figures
\ref{fig:ncuvw2}, \ref{fig:ncuvm2}, \ref{fig:ncuvw1}, and \ref{fig:ncuu}.
\section{Results \label{sect:results}}
Figures \ref{fig:ncuvw2} and \ref{fig:ncuvm2} show that in the uvw2
and uvm2 filters the number counts are in excellent agreement with the
NUV results from GALEX \markcite{Xu05}({Xu} {et~al.} 2005) and HST \markcite{Gardner00}({Gardner} {et~al.} 2000).
Furthermore, Figures \ref{fig:ncuvw2} and \ref{fig:ncuvm2} demonstrate
the unique contribution of UVOT. The UVOT number counts have a
significant overlap with GALEX; however, they continue $\sim 1.5$
magnitudes deeper with error bars comparable to those of GALEX. In
this magnitude range they overlap with the HST number counts, but are
much less uncertain due to the wider field of view of UVOT as compared
to STIS. While UVOT is not able to go as deep as HST, it provides
more precise number counts in the magnitude range where there is a
knee in the slope of the number counts.
Figures \ref{fig:ncuvw2} and \ref{fig:ncuvm2} also show some of the
models discussed in \S \ref{sect:models}. The models shown are for
the star forming galaxy template with \markcite{Calzetti94}{Calzetti} {et~al.} (1994) dust models
and ${\rm A_V} = 1.0$. The solid line is a model with a non-evolving
galaxy luminosity function and the dashed line is an evolving model
following the evolution of the Schechter function parameters described
by \markcite{Arnouts05}{Arnouts} {et~al.} (2005). The underlying models are the same in the two
figures, but have been calculated for the different filters. In both
cases the non-evolving luminosity function model under-predicts the
number counts given the galaxy template and extinction assumptions.
However the evolving luminosity function model is simultaneously in
good agreement with the uvw2 and uvm2 number counts. This is an
independent confirmation that the evolution in the luminosity function
parameters found by \markcite{Arnouts05}{Arnouts} {et~al.} (2005) are reasonable.
Figure \ref{fig:ncuvw1} shows that the uvw1 number counts are
significantly higher than the GALEX NUV counts. This can be explained
by the fact that the uvw1 filter has a tail in the red with
significant sensitivity between 3000 and 4000 \AA. This extends
redward of the limits of the GALEX and STIS NUV filters. At this
point bright elliptical galaxies can be detected in spite of the fact
that they do not produce an appreciable flux in the NUV. Beyond the
extreme case of ellipticals, post-starburst galaxies with substantial
populations of A-type stars and, to a lesser extent, even regular spiral
galaxies will also be overrepresented in the uvw1 number counts
compared to the NUV due to light being detected in the red wing of the
uvw1 filter.
The black models in Figure \ref{fig:ncuvw1} are the same as those in
Figures \ref{fig:ncuvw2} and \ref{fig:ncuvm2}. The evolving
luminosity function model is still better than the no evolution model,
and is fairly representative of the GALEX and HST number counts.
However it does not agree with the uvw1 number counts as well as in
the uvw2 and uvm2. This is due to the fact that the starburst galaxy
template is too blue to take into account the red objects which may be
detected by the red end of the uvw1 filter. The red models assume the
same evolutionary parameters as the black models but use the redder
cosmic spectrum of \markcite{Baldry02}{Baldry} {et~al.} (2002) as the galaxy template. The models
using the cosmic spectrum template are below their respective star
forming template counterparts. Thus the cosmic spectrum model has the
opposite problem in that it undercounts galaxies experiencing strong
star formation. This shows that the simple modeling used here is less
successful for describing the uvw1 number counts, but also suggests
that the uvw1 filter could be useful in constraining the relative
numbers of different galaxy types over time.
Figure \ref{fig:ncuu} shows that the $u$ band number counts are
generally in good agreement with other observations. On the faint end
of the number counts the UVOT observations are in excellent agreement
with the $U$ band counts of \markcite{Capak04}{Capak} {et~al.} (2004) and \markcite{Eliche06}{Eliche-Moral} {et~al.} (2006) and the
$u$ band counts of \markcite{Metcalfe01}{Metcalfe} {et~al.} (2001). At around magnitude 22 to 23
the UVOT number counts appear about 50\% higher. One explanation for
this is that the \markcite{Yasuda01}{Yasuda} {et~al.} (2001) $u$ number counts are also higher
than the other observations on the faint end. Modeling galaxy colors
shows that the SDSS $u$ is a much better proxy for UVOT $u$ than
Johnson $U$. The higher number counts may be due to additional blue
sensitivity. Figure \ref{fig:ncuu} also reveals that in the $u$
band UVOT does not have the unique advantage it has in the NUV filters
as it covers the same magnitude range as the ground based observations
and does not go as deep. However it provides an independent check on
the ground based results.
Figure \ref{fig:ncuu} also shows $u$ band model number counts for both
the starburst (black) and cosmic spectrum (red) templates, and both
non-evolving luminosity functions (solid) and those which evolve with
the parameters of \markcite{Ilbert05}{Ilbert} {et~al.} (2005). In the $u$ band the evolving
luminosity function models with the starburst and cosmic spectrum
templates bracket the observed number counts, but then turn over at $u
\sim 25$ faster than the observed counts.
In summary, the UVOT is uniquely positioned to cover the knee in the
galaxy number counts compared to GALEX and HST in the NUV. Due to its
smaller PSF it can go deeper than the GALEX confusion limit, and its
larger field of view provides better statistics on the bright end of
the STIS number counts. The simple model number counts used here
strongly point to an evolving galaxy luminosity function in agreement
with earlier studies. More detailed models are needed to explain the
number counts in the uvw1 and $u$ filters, but are beyond the scope of
this paper. However the measurements provided by this paper in the
magnitude range where the number counts turn over will enable a more
precise differentiation between models. In addition, the three NUV
filters of UVOT are narrower than the single NUV filter of STIS and
GALEX so more color information is provided which is potentially
useful for more involved modeling. Future plans include measurements
of the UV galaxy luminosity function as a function of redshift.
\acknowledgments
We acknowledge support from NASA Astrophysics Data Analysis grant
\#NNX09AC87G. This work is sponsored at PSU by NASA
contract NAS5-00136 and at MSSL by funding from the Science and
Technology Facilities Council (STFC).
{\it Facilities:} \facility{Swift (UVOT)}
\section{Introduction}
Community detection within a network arises in a wide variety of applications, ranging from advertisement generation to cancer detection \cite{fortunato13,newman04,pandit2011,kawadia2012,chen2013}.
These problems often consist of some graph $\mathcal{G}$ which contains a community $\mathcal{C} \subset \mathcal{G}$ where $\mathcal{C}$ and $\mathcal{G} \backslash \mathcal{C}$ differ in some fundamental characteristic, such as the frequency of interaction (see \cite{radicchi04} for more details). Often this community $\mathcal{C}$ is assumed to be a clique, which is a network of nodes in which every two nodes are connected by an edge.
Here we consider the {\it statistical community detection} problem, where the observed edges are noisy realizations of the true graph structures, i.e., the observations are random graphs.
Community detection problems can be divided into one-shot \cite{leung09, leskovec10, bhattacharyya13, verzelen13, cai14} and dynamic categories \cite{koujaku13, online_comm_Zhejiang2013, duan13}. The more commonly considered one-shot setting assumes observations from static networks. The dynamic setting is concerned with sequential observations from possibly dynamic networks, and has become increasingly important as such scenarios have become prevalent in social networks \cite{online_comm_Zhejiang2013}. These dynamic categories can be further divided into networks with structures that either (1) continuously change over time \cite{koujaku13}, or (2) change abruptly after some changepoint $\kappa$ \cite{duan13}; the latter is the focus of this paper.
In online community detection problems, the real-time processing requirement rules out exponentially complex algorithms, especially for large networks.
Existing approaches for community detection can also be categorized into parametric \cite{kolar04, lambiotte10,barbieri12,sharpnack12} and non-parametric methods \cite{wasserman08}.
However, many such methods \cite{wasserman08, sharpnack12} rely on the data being previously collected and would not be appropriate for streaming data.
Existing online community detection algorithms are usually based on heuristics (e.g., \cite{leung09}). It is also recognized in \cite{arias13} that theoretical research on the detection of communities in static networks has been tenuous. The community detection problem in \cite{arias13} was therefore cast into a hypothesis testing framework, where the null hypothesis is the nonexistence of a community, and the alternative is the presence of a {\it single} community. They model networks using an Erd\H{o}s-Renyi graph structure due to its comparability to a scale-free network. Based on this model, they derive scan statistics which rely on counting the number of edges inside a subgraph \cite{verzelen13}, and establish the fundamental detectability regions for such problems.
In this paper, we propose a sequential changepoint detection framework for detecting the abrupt emergence of a {\it single} community from sequential observations of random graphs. We also adopt the Erd\H{o}s-Renyi model, but our methods differ from \cite{verzelen13} in that we use a sequential hypothesis testing formulation, and the methods are based on sequential likelihood ratios, which have statistically optimal properties. From the likelihood formulations, three sequential procedures are derived: the Exhaustive Search (ES), the mixture, and the Hierarchical Mixture (H-Mix) methods. The ES method performs the best, but it is exponentially complex even if the community size is known; the mixture method is polynomially complex and exploits the fact that the size of the community inside a network is typically small. A limitation of the mixture method is that it raises a false alarm for a set of highly active edges that do not form a community. The H-Mix method addresses this problem by imposing a dendrogram decomposition of the graph. The performance of the changepoint detection procedures is evaluated using the average run length (ARL) and the detection delay. We derive a theoretical asymptotic approximation of the ARL of the mixture method, which is numerically verified to be accurate even in the non-asymptotic regime. Hence, the theoretical approximation can be used to determine the detection threshold efficiently. The complexity and performance of the three methods are also analyzed using numerical examples.
This paper is structured as follows. Section \ref{sec:formulation} contains the problem formulation. Section \ref{sec:sequential} presents our methods for sequential community detection. Section \ref{sec:theoretical} explains the theoretical analysis of the ARL of the mixture model. Section \ref{sec:num_eg} contains numerical examples for comparing performance of various methods, and Section \ref{sec:con} concludes the paper.
\section{Formulation}\label{sec:formulation}
Assume a network with $N$ nodes and an observed sequence of adjacency matrices over time $X_1, X_2, \ldots$ with $X_t \in \mathbb{R}^{N\times N}$, where $X_t$ represents the interactions of these nodes at time $t$. Also assume that when there is no community, there are only random interactions between all nodes in the network with relatively low frequency.
There may exist an (unknown) time at which a community emerges and nodes inside the community have much higher frequencies of interaction. Figures~\ref{changepoint_basis_3}
and \ref{fig:spy_plot_3} illustrate such a scenario.
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\linewidth]{changepoint_basis_3.pdf}
\caption{The graph structure displays interactions between edges in a 6 node network. Assume that when there is no community present, edges between nodes form with probability $p_0$ (denoted by light lines). Starting from time $\kappa$, a community forms and the nodes in the community interact with each other with a higher probability $p_1$ (denoted by bold lines).}\label{changepoint_basis_3}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{spy_plot_4.pdf}
\caption{An instance of adjacency matrices observed over time for the 6 node network in Figure \ref{changepoint_basis_3}. Each column in the figure represents the lower triangle entries of the adjacency matrix (the matrix is symmetric since we assume the graph is undirectional).
At time $\kappa=50$ (indicated by the red line) a community forms between nodes 1, 2, and 3 and the edges between these three nodes are formed with a higher probability. }
\label{fig:spy_plot_3}
\end{figure}
We formulate this problem as a sequential changepoint detection problem. The null hypothesis is that the graph corresponding to the network at each time step is a realization of an Erd\H{o}s-Renyi random graph, i.e., edges are independent Bernoulli random variables that take values of 1 with probability $p_0$ and values of 0 with probability $1-p_0$. Let $[X]_{ij}$ denote the $ij$th element of a matrix $X$, then
\begin{eqnarray}
[X_t]_{ij} =\left\{ \begin{array}{ll}
1 & \mbox{w. p. } p_0\\
0 & \mbox{otherwise}
\end{array}\right. \quad \forall (i, j).
\label{eq:hyp_base}
\end{eqnarray}
The alternative hypothesis is that there exists an unknown time $\kappa$ such that after $\kappa$, an {\it unknown} subset of nodes $\mathcal{S}^*$ in the graph form edges between community nodes with a higher probability $p_1$, $p_1 > p_0$:
\begin{eqnarray}
[X_t]_{ij} =\left\{ \begin{array}{ll}
1 & \mbox{w. p. } p_1\\
0 & \mbox{otherwise}
\end{array}\right. \quad \forall i, j \in \mathcal{S^*}, \quad t > \kappa
\label{eq:hyp_in}
\end{eqnarray}
and for all other connections
\begin{eqnarray}
[X_t]_{ij} =\left\{ \begin{array}{ll}
1 & \mbox{w. p. } p_0\\
0 & \mbox{otherwise}
\end{array}\right. \forall i \notin \mathcal{S}^* \mbox{ or } j \notin \mathcal{S}^*, \quad t > \kappa.
\label{eq:hyp_out}
\end{eqnarray}
We assume that $p_0$ is known, as it is a baseline parameter that can be estimated from historical data. We will consider both the case when $p_1$ is known and the case when it is unknown. Our goal is to define a stopping rule $T$ such that, for a large {\it average run length (ARL)} value $\mathbb{E}^\infty\{T\}$, the {\it expected detection delay} $\mathbb{E}^\kappa\{T-\kappa|T>\kappa\}$ is minimized. Here $\mathbb{E}^\infty$ and $\mathbb{E}^\kappa$ respectively denote the expectation when there is no changepoint and when the changepoint occurs at time $\kappa$.
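To make the observation model concrete, the following minimal Python sketch simulates the stream of adjacency matrices (the function name and interface are our own, not part of any released code): every edge is Bernoulli($p_0$), and edges with both endpoints in the community switch to Bernoulli($p_1$) from the changepoint onward.

```python
import numpy as np

def simulate_graphs(N, T, kappa, community, p0, p1, seed=0):
    """Simulate symmetric 0/1 adjacency matrices X_1, ..., X_T.

    Every edge is Bernoulli(p0); from time kappa onward, edges with both
    endpoints in `community` are instead Bernoulli(p1)."""
    rng = np.random.default_rng(seed)
    comm = set(community)
    X = np.zeros((T, N, N), dtype=int)
    for t in range(T):
        for i in range(N):
            for j in range(i + 1, N):
                p = p1 if (t >= kappa and i in comm and j in comm) else p0
                X[t, i, j] = X[t, j, i] = int(rng.random() < p)
    return X

# A 6-node network where nodes {0, 1, 2} form a community at kappa = 50,
# mirroring the scenario of Figures 1 and 2.
X = simulate_graphs(N=6, T=100, kappa=50, community=[0, 1, 2], p0=0.2, p1=0.9)
```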
\section{Sequential community detection} \label{sec:sequential}
Define the following statistic for edge $(i, j)$ and hypothesized changepoint time $\kappa=k$, for observations up to some time $t$:
\begin{equation}
U_{k, t}^{(i,j)} = \sum_{m=k+1}^t [X_m]_{ij} \log \left(\frac{p_1}{p_0}\right) +
(1-[X_m]_{ij})\log \left(\frac{1-p_1}{1-p_0}\right).
\label{U_def}
\end{equation}
Then for a given changepoint time $\kappa = k$ and a community $\mathcal{S}$, we can write the log-likelihood ratio for (\ref{eq:hyp_base}), (\ref{eq:hyp_in}) and (\ref{eq:hyp_out}) as follows:
\begin{equation}
\begin{split}
\mathcal{\ell} (\kappa = k | \mathcal{S}) & \triangleq \log \left( \prod_{m = k+1}^t \prod_{(i,j) \in \mathcal{S} }\frac{p_1^{[X_m]_{ij}} (1-p_1)^{1-[X_m]_{ij}}}{p_0^{[X_m]_{ij}} (1-p_0)^{1-[X_m]_{ij}}} \right) \\
& = \sum_{(i, j) \in \mathcal{S}} U_{k, t}^{(i,j)}.
\end{split}
\label{eq:log_like_eq}
\end{equation}
Often, the probability $p_1$ of two community members interacting is unknown since it typically represents an anomaly (or new information) in the network. In this case, $p_1$ can be replaced by its maximum likelihood estimate, which can be found by taking the derivative of $\ell (\kappa = k | \mathcal{S})$ (\ref{eq:log_like_eq}) with respect to $p_1$, setting it equal to $0$ and solving for $p_1$:
\begin{equation}
\begin{split}
\widehat{p}_1 & = \frac{2 }{| \mathcal{S} | (| \mathcal{S} |-1)(t-k) } \sum_{(i, j) \in \mathcal{S}} \sum_{m=k+1}^t [X_m]_{ij},
\label{eq:p1_hat_equation}
\end{split}
\end{equation}
where $|\mathcal{S}|$ is the cardinality of a set $\mathcal{S}$. In the following procedures, whenever $p_1$ is unknown, we replace it with $\widehat{p}_1$.
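The edge statistic (\ref{U_def}) and the estimate (\ref{eq:p1_hat_equation}) translate directly into code; the sketch below (function names are ours) assumes the observations are stacked into a $(T, N, N)$ array of 0/1 adjacency matrices, so that with 0-based indexing the times $k+1, \ldots, t$ correspond to the slice `X[k:t]`.

```python
import numpy as np

def edge_llr(X, k, t, i, j, p0, p1):
    """U_{k,t}^{(i,j)}: log-likelihood ratio of edge (i,j) over times k+1,...,t."""
    x = X[k:t, i, j]
    ones = x.sum()
    return ones * np.log(p1 / p0) + (len(x) - ones) * np.log((1 - p1) / (1 - p0))

def mle_p1(X, k, t, S):
    """Maximum-likelihood estimate of p1 over community S: the empirical
    frequency of edges among pairs of S during times k+1,...,t."""
    S = sorted(S)
    total = sum(X[k:t, i, j].sum() for a, i in enumerate(S) for j in S[a + 1:])
    n_pairs = len(S) * (len(S) - 1) / 2
    return total / (n_pairs * (t - k))
```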
\subsection{Exhaustive Search (ES) method}
First consider a simple sequential community detection procedure assuming the size of the community, $|\mathcal{S}^*|=s$, and $p_1$ are known.
The test statistic is the maximum log-likelihood ratio (\ref{eq:log_like_eq}) over all possible sets $\mathcal{S}$ and all possible changepoint locations in a time window $k \in [t-m_1, t-m_0]$. Here $t-m_1$ is the start and $t-m_0$ is the end of the window. This window limits the complexity of the statistic, which would otherwise grow linearly with time. The stopping rule is to claim that a community has formed whenever the likelihood ratio exceeds a threshold $b > 0$ at some time $t$.
Let $\llbracket N \rrbracket \triangleq \{ 1, \dots , N\}$. The exhaustive search (ES) procedure is given by the following
\begin{equation}
T_{\rm ES} = \inf \{ t: \max_{t-m_1\leq k\leq t-m_0} \max_{\mathcal{S}\subset \llbracket N \rrbracket: |\mathcal{S}| = s} \sum_{(i, j) \in \mathcal{S}} U_{k, t}^{(i,j)} \geq b \},
\label{T_ES_def}
\end{equation}
where $b > 0$ is the threshold.
Note that the testing statistic in (\ref{T_ES_def}) searches over all ${N \choose s}$ possible communities, which is exponentially complex in the size of the community $s$. One fact that alleviates this problem is that when $p_1$ is known, there exists a recursive way to calculate the test statistic $\max_{k\leq t} \sum_{(i, j) \in \mathcal{S}} U_{k, t}^{(i,j)}$ in (\ref{T_ES_def}), called the CUSUM statistic \cite{Siegmund1985}.
For each possible $\mathcal{S}$, when $m_0 = 0$, we calculate
\begin{equation}
W_{\mathcal{S}, t+1} =\max\{ W_{\mathcal{S},t} + \sum_{(i,j) \in \mathcal{S}} U_{t,t+1}^{(i,j)}, 0 \},
\label{eq:cusum_equation}
\end{equation}
with the initial condition
$
W_{\mathcal{S},0} = 0,
$
and the detection procedure (\ref{T_ES_def}) is equivalent to:
\begin{equation}
\begin{split}
& T_{\rm ES} = \inf \{ t: \max_{\mathcal{S}\subset \llbracket N \rrbracket: |\mathcal{S}| = s} W_{\mathcal{S}, t} \geq b \},
\end{split}
\end{equation}
where $b$ is the threshold. When $p_1$ is unknown, however, there is no recursive formula for calculating the statistic, due to a nonlinearity resulting from substituting $\widehat{p}_1$ for $p_1$.
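For a fixed candidate set, the recursion (\ref{eq:cusum_equation}) is a standard CUSUM update. A sketch of the resulting ES procedure with known $p_1$ and $m_0 = 0$ follows (function names are ours; one recursion $W_{\mathcal{S}}$ is maintained per size-$s$ candidate set, which makes the exponential cost in $s$ explicit):

```python
import numpy as np
from itertools import combinations

def es_stopping_time(X, s, p0, p1, b):
    """CUSUM-based exhaustive search: W_{S,t+1} = max(W_{S,t} + sum U, 0)
    for every size-s candidate community S; stop when any W_S reaches b."""
    c1 = np.log(p1 / p0)              # increment for an observed edge
    c0 = np.log((1 - p1) / (1 - p0))  # increment for an absent edge
    N = X.shape[1]
    W = {S: 0.0 for S in combinations(range(N), s)}
    for t in range(X.shape[0]):
        for S in W:
            inc = sum(c1 if X[t, i, j] else c0 for i, j in combinations(S, 2))
            W[S] = max(W[S] + inc, 0.0)
        if max(W.values()) >= b:
            return t  # alarm raised at time t
    return None  # no alarm within the observed horizon
```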
\subsection{Mixture method}
\label{sec:mixture_model}
The mixture method avoids the exponential complexity of the ES method by introducing a simple probabilistic mixture model, which exploits the fact that typically the size of the community is small, i.e. $|\mathcal{S}^*| /N \ll 1$. It is motivated by the mixture method developed for detecting a changepoint using multiple sensors \cite{xie13} and detecting aligned changepoints in multiple DNA sequences \cite{SiegmundYakirZhang2011}.
The mixture method does not require knowledge of the size of the community $|\mathcal{S}^*|$.
We assume that each edge is a connection between two nodes inside the community with probability $\alpha$, and use i.i.d. Bernoulli indicator variables $Q_{ij}$ that take the value 1 when nodes $i$ and $j$ are both inside the community and $0$ otherwise:
\begin{eqnarray}
Q_{ij} =\left\{ \begin{array}{ll}
1 & \mbox{w. p. } \alpha\\
0 & \mbox{otherwise}
\end{array}\right. \quad \forall\, 1 \leq i < j \leq N.
\end{eqnarray}
Here $\alpha$ is a guess for the fraction of edges that belong to the community. Let
\begin{equation}
\begin{split}
& h(x) \triangleq \log \{ 1 - \alpha + \alpha \exp ( x )\}.
\label{eq:h_equation}
\end{split}
\end{equation}
With such a model, the likelihood ratio can be written as
\begin{equation}
\begin{split}
& \ell ( \kappa = k | \mathcal{S}) \\
= & \log \prod_{1 \leq i < j \leq N} \mathbb{E}_{Q_{ij}} [ (1-Q_{ij}) + \\
&Q_{ij} \prod_{m = k+1 }^t \frac{p_1^{[X_m]_{ij}} (1- p_1) ^{1- [X_m]_{ij}} }{p_0^{[X_m]_{ij}} (1- p_0) ^{1- [X_m]_{ij}} } ] \\
= & \sum_{1\leq i < j \leq N} h(U_{k, t}^{(i, j)}),
\end{split}
\end{equation}
and the mixture method detects the community using a stopping rule:
\begin{equation}
\begin{split}
& T_{\rm Mix} = \inf\{t: \max_{t-m_1\leq k\leq t-m_0} \sum_{1\leq i < j \leq N} h( U_{k, t}^{(i,j)} ) \geq b\} ,
\end{split}
\label{T_mix}
\end{equation}
where $b>0$ is the threshold. Here $h(x)$ can be viewed as a soft-thresholding function \cite{xie13} that selects the edges which are more likely to belong to the community.
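A sketch of the mixture statistic in (\ref{T_mix}) for one hypothesized changepoint $k$ (assuming, as before, a $(T, N, N)$ array of observations; `np.logaddexp` evaluates $h(U) = \log(1-\alpha+\alpha e^U)$ stably even when $U$ is large):

```python
import numpy as np

def mixture_stat(X, k, t, p0, p1, alpha):
    """sum_{1 <= i < j <= N} h(U_{k,t}^{(i,j)}), h(x) = log(1 - alpha + alpha*e^x)."""
    N = X.shape[1]
    c1 = np.log(p1 / p0)
    c0 = np.log((1 - p1) / (1 - p0))
    stat = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            ones = X[k:t, i, j].sum()
            U = ones * c1 + (t - k - ones) * c0
            # log(1 - alpha + alpha * exp(U)), computed without overflow
            stat += np.logaddexp(np.log(1 - alpha), np.log(alpha) + U)
    return stat
```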
However, one problem with the mixture method is that its statistic can accumulate contributions from edges that do not form a community. Figure \ref{fig:mixture_model_failure_4} displays two scenarios in which the mixture statistics are identical, but Figure \ref{fig:mixture_model_failure_4}(b) does not correspond to a network forming a community. To solve this problem, we next introduce the hierarchical mixture method.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.3\linewidth]{mixture_model_failure_4.pdf} & \includegraphics[width=0.3\linewidth]{mixture_model_failure_5.pdf} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{(a): a community where all nodes in the community are connected with a higher probability than under the null hypothesis. (b): a model which would output the same mixture statistic that does not correspond to a community.}
\label{fig:mixture_model_failure_4}
\end{figure}
\subsection{Hierarchical Mixture method (H-Mix) }
In this section, we present a hierarchical mixture method (H-Mix) that retains the low computational complexity of the mixture method in Section~\ref{sec:mixture_model} while ensuring that the statistics are calculated only over meaningful communities.
Hence, the H-Mix statistic is robust to the non-community interactions displayed in Figure~\ref{fig:mixture_model_failure_4}.
The H-Mix method requires the knowledge (or a guess) of the size of the community $|\mathcal{S}^*|$.
The H-Mix method enforces the community structure by constructing a dendrogram decomposition of the network, which is a hierarchical partitioning of the graphical network \cite{krishnamurthy13}. The hierarchical structure provided by the dendrogram enables us to systematically remove nodes from consideration for the community. Suppose a network has a community of size $s$. Starting from the root level, where all nodes are considered candidate community members, each child node in the dendrogram tree decomposition is a subgraph of its parent that contains all but one node. The mixture statistic from (\ref{T_mix}) is then evaluated for each subgraph: using $h(x)$ defined in
(\ref{eq:h_equation}), for a given set of nodes $\mathcal{S}_0$ and a hypothesized changepoint location $k$, the mixture statistic is calculated as
\begin{equation}
M (\mathcal{S}_0) = \sum_{ (i,j) \in \mathcal{S}_0} h \left( U_{k, t}^{(i,j)} \right).
\label{mix_stat}
\end{equation}
We iteratively select the subgraph with the highest mixture statistic value, since this indicates that the removed node is the one most likely to be a non-member of the community; that node is eliminated from subsequent steps. The algorithm repeats until only $s$ nodes remain in the subgraph. Denote the mixture statistic for the selected subgraph by $P_k$. Then $\{P_k\}_{k=1}^t$ is a series of test statistics, one for each hypothesized changepoint location $k$. Finally, the H-Mix method is given by
\begin{equation}
\begin{split}
& T_{\rm H-Mix}= \inf\{t: \max_{t-m_1\leq k\leq t-m_0} P_k \geq b\},
\end{split}
\end{equation}
where $b$ is the threshold. The idea for a dendrogram decomposition is similar to the edge removal method \cite{newman04}, and here we further combine it with the mixture statistic. Figure~\ref{fig:online_community_detection_forced_conn_alg_pic} illustrates the procedure described above and Algorithm \ref{alg:hier_alg} summarizes the H-Mix method.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{H_Mix_Dendrogram_Node_Decomposition.pdf}
\caption{A possible outcome for a single instance with $s=3$ and $N=6$, in which a community is found consisting of nodes 1, 2, and 3. The original set of nodes consisted of the set $\{1, 2, 3, 4, 5, 6\}$, and the H-Mix method followed the dendrogram down the node sets with darker outlines (which had the highest mixture method statistic amongst their group) until at the $s=3$ level the set $\{1, 2, 3\}$ was selected.
}
\label{fig:online_community_detection_forced_conn_alg_pic}
\end{figure}
\begin{algorithm}[h!]
\caption{Hierarchical Mixture Method} \label{alg:hier_alg}
\begin{algorithmic}[1]
\STATE Input: $\{X_m\}_{m=1}^t, X_m \in \mathbb{R}^{N \times N}$
\STATE Output: $\{P_k\}_{k=1}^t \in \mathbb{R}^t $, a set of statistics for each hypothesized changepoint location $k$.
\FOR{$k=1 \to t$}
\STATE $\mathcal{S} = \llbracket N \rrbracket$
\WHILE{$|\mathcal{S}|>s$}
\STATE $i^* = \text{argmax}_{i \in \mathcal{S}} M \left( \mathcal{S} \text{\textbackslash} \{i\} \right)$
\STATE $\mathcal{S} = \mathcal{S} \text{\textbackslash} \{i^*\}$
\ENDWHILE
\STATE $P_k = M ( \mathcal{S})$
\ENDFOR
\end{algorithmic}
\end{algorithm}
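The inner loop of Algorithm \ref{alg:hier_alg} can be sketched as follows (function names are ours; for clarity the mixture statistic $M$ in (\ref{mix_stat}) is recomputed from scratch at each step rather than updated incrementally):

```python
import numpy as np
from itertools import combinations

def h_mix_community(X, k, t, s, p0, p1, alpha):
    """Greedy descent of Algorithm 1: repeatedly remove the node whose
    deletion leaves the largest mixture statistic, until s nodes remain.

    Returns (P_k, S): the statistic of the selected subgraph and its node set."""
    c1 = np.log(p1 / p0)
    c0 = np.log((1 - p1) / (1 - p0))

    def M(S):  # mixture statistic over node set S, as in eq. (mix_stat)
        total = 0.0
        for i, j in combinations(sorted(S), 2):
            ones = X[k:t, i, j].sum()
            U = ones * c1 + (t - k - ones) * c0
            total += np.logaddexp(np.log(1 - alpha), np.log(alpha) + U)
        return total

    S = set(range(X.shape[1]))
    while len(S) > s:
        i_star = max(S, key=lambda i: M(S - {i}))
        S.remove(i_star)
    return M(S), S
```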
\subsection{Complexity}
\begin{table}[h!]
\begin{center}
\caption{Complexities of algorithms under various conditions.}\label{table:complexity_table}
\begin{tabular}{ |c|c|c|}
\hline
& $s \ll N/2$ & $s \sim N/2$ \\ \hline
ES & $\mathcal{O} ( {N^{ s }} )$ & $\mathcal{O} ( {2^{ 2s }} )$\\ \hline
Mixture & $\mathcal{O} (N^2)$ & $\mathcal{O} (N^2)$\\ \hline
H-Mix & $\mathcal{O} (N^4)$ & $\mathcal{O} (N^4)$\\ \hline
\end{tabular}
\end{center}
\end{table}
In this section, the algorithm complexities are analyzed; they are summarized in Table \ref{table:complexity_table}. The derivation of these complexities is explained as follows. Given a known subset size $s$, a given changepoint location $k$, and the current time $t$, evaluating the ES test statistic requires $\mathcal{O} ( {N \choose s } )$ operations. Using Stirling's approximation $\log {N \choose s } \sim N H(\frac{s}{N})$, where the entropy function is $H(\epsilon) = -\epsilon \log(\epsilon) - (1-\epsilon) \log( 1- \epsilon)$, the complexity of evaluating the ES statistic is $\mathcal{O} ((\frac{N}{s})^{s } (\frac{N}{N- s})^{N - s } )$. This implies that for $s \ll N/2$, the complexity will be approximately polynomial, $\mathcal{O} ( {N^{s}} )$. However, a worst case occurs when $s \sim N/2$, as the statistic must search over the greatest number of possible networks and the complexity is consequently $\mathcal{O} ( {2^{ 2s }} )$, which is exponential in $s$.
The mixture method only uses the sum of all edges and therefore the complexity is $\mathcal{O} (N^2)$. Unlike the exhaustive search algorithm, the mixture model has no dependence on the subset size $s$, by virtue of introducing the mixture model with parameter $\alpha$.
On the $i$th step, the H-Mix algorithm computes $N+1-i$ mixture statistics, each over a subgraph of $N-i$ nodes at a cost of $\mathcal{O}((N-i)^2)$, and $N-s$ steps are required to reduce the node set to the $s$ final nodes.
Therefore the total complexity is on the order of $\sum_{i=1}^{N-s} (N+1-i)^3$, which reduces to $\mathcal{O} (N^4)$ under the assumption that $s \ll N$.
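As a quick numerical sanity check of the Stirling approximation used above (a sketch of ours, not from the paper's released code), the entropy bound $\log {N \choose s} \le N H(s/N)$ is tight to within a few percent already for moderate $N$:

```python
from math import comb, log

def stirling_log_binom(N, s):
    """N * H(s/N) upper bound / approximation to log C(N, s),
    with H the binary (natural-log) entropy function."""
    eps = s / N
    return N * (-eps * log(eps) - (1 - eps) * log(1 - eps))

# Exact vs. approximate log-cardinality of the ES search space for N=100, s=10.
exact = log(comb(100, 10))
approx = stirling_log_binom(100, 10)
```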
\section{Theoretical Analysis for mixture method}\label{sec:theoretical}
In this section, we present a theoretical approximation for the ARL of the mixture method when $p_1$ is known, using the following techniques. In \cite{SiegmundYakirZhang2011}, a general expression for the tail probability of scan statistics is given, which can be used to derive the ARL of a related changepoint detection procedure; for example, in \cite{xie13} a generalized form of the ARL was found using the general expression in \cite{SiegmundYakirZhang2011}. The basic idea is to relate the probability of stopping a changepoint detection procedure when there is no change, $\mathbb{P}^\infty\{T\leq m\}$, to the tail probability of the maximum of a random field: $\mathbb{P}^\infty\{S \geq b\}$, where $S$ is the statistic used for the changepoint detection procedure, $b$ is the threshold, and $\mathbb{P}^\infty$ denotes the probability measure when there is no change. Hence, if we can write $\mathbb{P}^\infty\{T\leq m\} \approx \mathbb{P}^\infty\{S \geq b\} \approx m \lambda$ for some $\lambda$, then by relying on the assumption that the stopping time is asymptotically exponentially distributed as $b\rightarrow \infty$, we find that the ARL is $1/\lambda$.
However, the analysis for the mixture method here differs from that in \cite{xie13} in two major aspects: (1) the detection statistics here rely on a binomial random variable, and we will use a normal random variable to approximate its distribution; (2) the change-of-variable parameter $\theta$ depends on $t-k$ and, hence, the expression for ARL will be more complicated than that in \cite{xie13}.
\begin{thm}\label{thm1}
When $b\rightarrow \infty$, an upper bound to the ARL $\mathbb{E}^\infty[T_{\rm mix}]$ of the mixture method with known $p_1$ is given by:
\begin{equation}
\begin{split}
\mbox{ARL}_{\rm UB}
= \left[ \int_{\sqrt{2N/m_1}}^{\sqrt{2N/m_0}} \frac{y \nu^2(y\sqrt{\gamma(\theta_y)})}{H(N, \theta_y)} dy \right]^{-1},
\end{split}
\label{eq:ARL_integration}
\end{equation}
and a lower bound to the ARL is given by:
\begin{equation}
\mbox{ARL}_{\rm LB} = \left[\sum_{\tau = m_0}^{m_1}
\frac{2N \nu^2(2N\sqrt{\gamma(\theta_\tau)}/\tau^2) }{\tau^2 H(N, \theta_\tau)}
\right]^{-1}, \label{ARL_LB}
\end{equation}
where
\begin{equation}
\begin{split}
c_0 & = \log (p_1/p_0), \quad c_1 = \log [(1-p_1)/(1-p_0)],\\
g_{\tau} (x) &= \tau[p_0 (c_0 - c_1) +
c_1] +
x \sqrt{\tau (c_0 -c_1)^2 p_0 (1-p_0)},\\
h_{\tau}(x) & \triangleq h(g_{\tau} (x)), \mbox{ for $h(x)$ defined in (\ref{eq:h_equation})}, \\
\psi_\tau(\theta) & = \log \mathbb{E}\{e^{\theta h_{\tau} (Z) }\}, \\
\dot{\psi}_\tau(\theta) & = \frac{\mathbb{E}\{ h_{\tau} (Z) e^{\theta h_{\tau} (Z) }\} }{\mathbb{E}\{e^{\theta h_{\tau} (Z) }\}}, \\
\ddot{\psi}_\tau(\theta) & = \frac{\mathbb{E}\{ h_{\tau}^2 (Z) e^{\theta h_{\tau} (Z) }\} }{\mathbb{E}\{e^{\theta h_{\tau} (Z) }\}} - \frac{ (\mathbb{E}\{ h_{\tau} (Z) e^{\theta h_{\tau} (Z) }\} )^2 }{ (\mathbb{E}\{e^{\theta h_{\tau} (Z) }\} )^2 },\\
\gamma(\theta) &= \frac{1}{2} \theta^2 \mathbb{E}\{ [ \dot{h}_{\tau}(Z) ]^2 \exp \{ \theta h_{\tau}(Z) - \psi_{\tau} (\theta) \} \}, \\
\theta_\tau & \mbox{ is the solution to } \dot{\psi}_{\tau}(\theta_\tau) = b/N,\\
H(N, \theta_\tau) & = \frac{\theta_\tau [2\pi \ddot{\psi}_\tau(\theta_\tau)]^{1/2}}{\gamma^2(\theta_\tau) \sqrt{N}}e^{N[\theta_\tau\dot{\psi}_\tau(\theta_\tau) - \psi_\tau(\theta_\tau)]},
\end{split}
\end{equation}
and $\dot{f}$ and $\ddot{f}$ denote the first and second order derivatives of a function $f$, $Z$ is a normal random variable with zero mean and unit variance, the expectation is with respect to $Z$, and the special function $\nu(x)$ is given by \cite{xie13}
\[
\nu(x) \approx \frac{(2/x)[\Phi(x/2) - 1/2]}{(x/2)\Phi(x/2) + \phi(x/2)}.
\]
\end{thm}
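The special function $\nu(x)$, and hence the bounds in Theorem \ref{thm1}, are straightforward to evaluate numerically; below is a sketch of $\nu$ using the rational approximation quoted above (function names are ours).

```python
from math import erf, exp, pi, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):  # standard normal pdf
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def nu(x):
    """Rational approximation to the overshoot correction nu(x) in Theorem 1;
    nu(x) -> 1 as x -> 0 and decreases in x."""
    return (2.0 / x) * (Phi(x / 2.0) - 0.5) / ((x / 2.0) * Phi(x / 2.0) + phi(x / 2.0))
```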
We verify the theoretical upper and lower bounds for the ARL of the mixture method against simulated values, considering a case with $p_0 = 0.3$, $p_1 = 0.8$, and $N = 6$. The comparison results are shown in Figure~\ref{fig:theory_vs_actual} and listed in Table~\ref{table:desired_arl_table}. These comparisons show that the lower bound is a particularly good approximation to the simulated ARL and, hence, can be used to efficiently determine a threshold corresponding to a desired ARL (which is typically set to a large number, around 5000 or 10000), as shown in Table \ref{table:theo}.
Figure \ref{fig:integrand} demonstrates that only small $\tau$ values contribute appreciably to the integral in (\ref{eq:ARL_integration}) and the sum in (\ref{ARL_LB}), and thus play a role in determining the ARL.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\linewidth]{linear_scale_theory_vs_sim.pdf} & \includegraphics[width=0.45\linewidth]{log_scale_theory_vs_sim.pdf} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{Comparison of the theoretical upper and lower bounds with the simulated ARL for a case with $N=6$, $p_0 = 0.3$, and $p_1 = 0.8$.
}
\label{fig:theory_vs_actual}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width = 0.45\linewidth]{y_plot.pdf} &
\includegraphics[width = 0.45\linewidth]{tau_plot.pdf} \\ (a) & (b)
\end{tabular}
\end{center}
\caption{For a case $b=7.041$, $p_0=0.3$, $p_1 = 0.8$, and $N=6$, (a): value of the integrand
in (\ref{eq:ARL_integration}), and (b): value of the terms in the sum in (\ref{ARL_LB}). Note that only a small subset of these values contribute to the integration or sum in these expressions, and these values correspond to when $\tau$ is relatively small (i.e., hypothesized changepoint location $k$ is not too far away from the current time $t$).
}
\label{fig:integrand}
\end{figure}
\begin{table}[h!]
\begin{center}
\caption{Theoretical vs. Simulated ARL values for $N = 6$, $p_0 = 0.3$, and $p_1 = 0.8$.}
\label{table:desired_arl_table}
\begin{tabular}{|c|c|c|c|}
\hline
Threshold & Theory LB &Theory UB & Simulated \\ \hline
7.3734 & 5000 & 33878 & 6963 \\ \hline
8.0535 & 10000 & 74309 & 14720 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{Theoretical vs. simulated thresholds for $p_0 = 0.3$, $p_1 = 0.8$, and $N = 6$. The threshold $b$ calculated using theory is very close to the corresponding threshold obtained using simulation.}
\label{table:theo}
\begin{tabular}{|c|c||c|c|}
\hline
ARL & Theory $b$ & Simulate ARL & Simulated $b$\\ \hline
5000 & 7.37 & 5049 & 7.04 \\ \hline
10000 & 8.05 & 10210 & 7.64 \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Numerical Examples}\label{sec:num_eg}
In this section, we compare the performance of our three methods via numerical simulations. We first use simulations to determine the threshold $b$ for each method, so that all methods have the same average run length (ARL) of 5000, and then estimate the detection delays using these $b$'s under different scenarios. The results are shown in Table \ref{table:full_search_table}, including the detection delays and the thresholds (shown in brackets). Note that the low-complexity mixture and H-Mix methods can both detect the community quickly.
\begin{table}[h!]
\begin{center}
\caption{Comparison of detection delays for various cases of $s$, $N$, $p_0$, and $p_1$. The numbers inside the brackets are the threshold $b$ such that ARL = 5000.}
\label{table:full_search_table}
\begin{tabular}{|m{1.3cm}|m{1.2cm}|m{1.3cm}|m{1cm}|m{1.6cm}|}
\hline
Parameters & ES & Mixture ($p_1$ known) & H-Mix & Mixture ($p_1$ unknown)\\ \hline
$p_0 = 0.2$ $p_1 = 0.9$ $s=3$ $N = 6$ & 3.8 (9.96) & 4.3 (6.71) & 3.8 (9.95) & 9.1 (3.03)\\ \hline
$p_0 = 0.3$ $p_1 = 0.7$ $s=3$ $N = 6$ & 9.5 (10.17) & 12.8 (6.77) & 10.8 (10.18) & 12.5 (2.94) \\ \hline
$p_0 = 0.3$ $p_1 = 0.7$ $s=4$ $N = 6$ & 5.0 (8.48) & 6.7 (6.88) & 6.4 (10.17) & 7.7 (2.03)\\ \hline
\end{tabular}
\end{center}
\end{table}
Next, we test the case when there are a few active edges inside the network that do not form a community, as shown in Fig. \ref{fig:mixture_model_failure_4}(b). Assume the parameters are $p_0 = 0.2$, $p_1 = 0.9$, $s = 3$, and $N = 6$, and set the threshold for each method so that all methods have the same ARL of 5000. Table \ref{table:mixture_model_failure_table} demonstrates that the ES and H-Mix methods both identify this ``false community'' case through their relatively long detection delays; hence, we can effectively rule out such ``false communities'' by setting a small window size $m_1$. In contrast, the mixture method cannot identify a ``false community'' and (falsely) raises an alarm quickly. Code implementing the theoretical calculations and simulations can be found at http://www2.isye.gatech.edu/$\sim$yxie77/CommDetCode.zip.
\begin{table}[h!]
\caption{ARL and detection delay for each algorithm under the conditions $p_0 = 0.2$, $p_1 = 0.9$, $s = 3$, and $N = 6$, where the ARL is 5000 and there are three active edges, as shown in Figure~\ref{fig:mixture_model_failure_4}(b), that do not form a community.}
\label{table:mixture_model_failure_table}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Method & Threshold (ARL = 5000) & Detection Delay \\ \hline
ES & 9.96 & 49.7 \\ \hline
Mixture & 6.71 & 4.3 \\ \hline
H-Mix & 9.95 & 100.7 \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions and Future Work}\label{sec:con}
In this paper, we have presented and studied three methods for quickly detecting the emergence of a community from sequential observations: the exhaustive search (ES), the mixture, and the hierarchical mixture (H-Mix) methods. These methods are derived using sequential changepoint detection methodology based on sequential likelihood ratios, and we use simple Erd\H{o}s-R\'enyi models for the networks. The ES method has the best performance; however, its complexity is exponential in the network size, which makes it impractical for detecting a community quickly. The mixture method is computationally efficient, and its inability to rule out a ``false community'' is addressed by our H-Mix method, which incorporates a dendrogram decomposition of the network. We have derived accurate theoretical approximations for the average run length (ARL) of the mixture method, and demonstrated the performance of all three methods using numerical examples.
Though community detection has been the focus of this paper, locating the community within the network (community localization) can still be accomplished by the exhaustive search and hierarchical mixture methods, by finding the subgraph with the highest likelihood.
The mixture method cannot be directly used for localizing the community.
We focused on simple Erd\H{o}s-R\'enyi graph models in this paper. Future work includes extending our methods to other graph models, such as stochastic block models \cite{gomezgardenes06}.
\section{Introduction and Main Results}\label{section1}
\subsection{Setting and assumptions}\label{subsection1-1}
Let $(D,\D(D))$ be a symmetric non-local Dirichlet form as follows
\begin{equation}\label{non-local}
\begin{split}
D(f,f)&=\int_{\R^d}\int_{\R^d}\big(f(x)-f(y)\big)^2 J(x,y)\,dx\,dy,\\
\D(D)&=\overline{C_c^{1}(\R^d)}^{D_1},
\end{split}
\end{equation}
where $J(x,y)$ is a non-negative measurable function on $\R^d\times
\R^d$ satisfying the following conditions:
\begin{itemize}
\item[(1)] $J(x,y)=J(y,x)$ for all $x$, $y\in\R^d$;
\item[(2)] There exist $\alpha_1,\alpha_2\in(0,2)$
with $\alpha_1\le \alpha_2$ and positive constants $\kappa, c_1,c_2$ such that
\begin{equation}\label{e3-1}
c_1|x-y|^{-d-\alpha_1}\le J(x,y)\le c_2|x-y|^{-d-\alpha_2}, \quad
0<|x-y|\le \kappa
\end{equation} and
\begin{equation}\label{e3-2}
\sup_{x \in \R^d}\int_{\{|x-y|>\kappa\}}J(x,y)\,dy<\infty.
\end{equation}
\end{itemize}
Here, $C_c^{1}(\R^d)$ denotes the space of $C^1$ functions on $\R^d$
with compact support, $D_1(f,f):=D(f,f)+\int f^2(x)\,dx$ and $\D(D)$
is the closure of $C_c^1(\R^d)$ with respect to the metric
$D_1(f,f)^{1/2}.$ It is easy to see that \eqref{e3-1} and
\eqref{e3-2} imply that
\begin{equation*}\label{con1-1}
\sup_{x\in\R^d}\int_{\R^d}(1\wedge|x-y|^2)J(x,y)\,dy<\infty,
\end{equation*}
which in turn gives us that $D(f,f)<\infty$ for each $f \in
C_c^{1}(\R^d)$. According to \cite[Example 1.2.4]{FOT}, we know that
$(D,\D(D))$ is a regular Dirichlet form on $L^2(\R^d;dx)$. Therefore,
there exists $\mathcal{N}\subset\R^d$ having zero capacity with
respect to the Dirichlet form $(D,\D(D))$, and there is a Hunt
process $\big((X_t)_{t \ge 0},\Pp^x\big)$ with state space
$\R^d\backslash\Nn$ such that for every $f\in L^2(\R^d;dx)$ and
$t>0$, $x\mapsto \Ee^x(f(X_t))$ is a quasi-continuous version of
$T_tf$, where $\Ee^x$ is the expectation under the probability
measure $\Pp^x$ and $(T_t)_{t\ge0}$ is the $L^2$-semigroup
associated with $(D,\D(D))$, see e.g. \cite[Theorem 1.1]{BBCK}. The set $\Nn$ is called the properly exceptional set of the
process $(X_t)_{t\ge0}$ (or, equivalently, of the Dirichlet form
$(D,\D(D)))$, and it has zero Lebesgue measure. Furthermore, by (\ref{e3-1}), (\ref{e3-2}) and
the proof of \cite[Theorem 1.2]{BBCK}, there exists a positive
symmetric measurable function $p(t,x,y)$ defined on
$[0,\infty)\times(\R^d \setminus \Nn )\times (\R^d \setminus \Nn)$
such that
\begin{equation*}
T_tf(x)=\Ee^x\big(f(X_t)\big)=\int_{\R^d \setminus
\Nn}p(t,x,y)f(y)\,dy,\quad x \in \R^d \setminus\Nn,\ t>0,\ f \in
B_b(\R^d);
\end{equation*}
moreover, for every $t>0$ and $y\in \R^d \setminus \Nn$, the
function $x\mapsto p(t,x,y)$ is quasi-continuous on $\R^d
\setminus\Nn$, and for any $t>0$ there is a constant $c_t>0$ such
that for any $x$, $y\in\R^d \setminus\Nn$, $0<p(t,x,y)\le c_t$.
First, we make the following continuity assumption on $p(t,x,y)$.
\begin{itemize}
\item[{\bf (A1)}] \emph{$\Nn=\varnothing$. For every $t>0$, the function $(x,y)\mapsto p(t,x,y)$ is
continuous on $\R^d \times \R^d$, and $0<p(t,x,y)\le c_t$ for all $x, y \in \R^d$.}
\end{itemize}
In particular, {\bf (A1)} implies that the Hunt process
$\big((X_t)_{t\ge0}, \Pp^x\big)$ is well defined for all $x\in\R^d$,
and the associated strongly continuous Markov semigroup $(T_t)_{t
\ge 0}$ is ultracontractive, i.e. $\|T_tf\|_{L^\infty(\R^d;dx)} \le
c_t\|f\|_{L^1(\R^d;dx)}$ for all $t>0$ and every $f\in
L^1(\R^d;dx)$.
When $J(x,y)=\rho(x-y)$ for all $x,y\in\R^d$ with some non-negative
measurable function $\rho$ on $\R^d$ such that $\rho(z)=\rho(-z)$ for all $z\in\R^d$ and
$\int_{\R^d\setminus\{0\}}(1\wedge|z|^2)\rho(z)\,dz<\infty$,
the corresponding Hunt process $(X_t)_{t \ge 0}$ is a symmetric
L\'evy process having L\'{e}vy jump measure $\nu(dz):=\rho(z)\,dz$. In
this case, assumption {\bf (A1)} is equivalent to
$e^{-t\Psi_0(\cdot)}\in L^1(\R^d;dx)$ for any $t>0$, where the
characteristic exponent (or symbol) $\Psi_0$ of the L\'{e}vy process
$(X_t)_{t \ge 0}$ is defined by
$$
\Ee^x\bigl(e^{i\langle{\xi},{X_t-x}\rangle}\bigr)
=e^{-t\Psi_0(\xi)},\quad x,\xi\in\R^d, t>0.
$$ It is well known that a L\'{e}vy process is spatially homogeneous.
For sufficient conditions on the jump density $J(x,y)$ such that the
associated space-inhomogeneous Hunt process $(X_t)_{t \ge 0}$
satisfies assumption {\bf (A1)}, we refer the reader to \cite{CK,
CK1, CKK2,BBCK,CKK1} and the references therein.
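For a concrete instance (a standard verification, recorded here for the reader's convenience): for the rotationally symmetric $\alpha$-stable process one has $\rho(z)=c_{d,\alpha}|z|^{-d-\alpha}$ and $\Psi_0(\xi)=|\xi|^\alpha$, so that for every $t>0$,
\begin{equation*}
\int_{\R^d} e^{-t\Psi_0(\xi)}\,d\xi=\int_{\R^d} e^{-t|\xi|^\alpha}\,d\xi<\infty,
\end{equation*}
and Fourier inversion yields a bounded, jointly continuous and (as is well known for stable densities) strictly positive transition density
\begin{equation*}
p(t,x,y)=(2\pi)^{-d}\int_{\R^d} e^{-i\langle \xi,x-y\rangle}e^{-t|\xi|^\alpha}\,d\xi,
\end{equation*}
so that assumption {\bf (A1)} holds with $\Nn=\varnothing$.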
\ \
Let $V$ be a non-negative measurable and locally bounded potential
function on $\R^d$. Define the Feynman-Kac semigroup
$(T^V_t)_{t\ge0}$ associated with the Hunt process $(X_t)_{t \ge 0}$ as
follows:
\begin{equation*} T^V_t(f)(x)=\Ee^x\left(\exp\Big(-\int_0^tV(X_s)\,ds\Big)f(X_t)\right),\,\, x\in\R^d,
f\in L^2(\R^d;dx).\end{equation*} It is easy to check that
$(T_t^V)_{t \ge 0}$ is a bounded symmetric semigroup on
$L^2(\R^d;dx)$.
Furthermore, following the arguments in \cite[Section 3.2]{CZ} (see
also the proof of \cite[Lemma 3.1]{KS}), we can find that for each
$t>0$, $T_t^V$ is a bounded operator from $L^1(\R^d;dx)$ to
$L^{\infty}(\R^d;dx)$, and there exists a bounded, positive and
symmetric transition kernel $p^V(t,x,y)$ on $[0,\infty)\times\R^d\times \R^d$ such
that for any $t>0$, the function $(x,y)\mapsto
p^V(t,x,y)$ is continuous on $\R^d \times \R^d$, and for every $1 \le p \le \infty$,
\begin{equation*}
T_t^Vf(x)=\int_{\R^d} p^V(t,x,y)f(y)\,dy,\quad x\in \R^d, f\in L^p(\R^d;dx).
\end{equation*}
\ \
The following result gives us an easy criterion for the compactness of
the semigroup $(T_t^V)_{t\ge0}$. The proof is mainly based on \cite[Corollary 1.3]{WW08}.
For the sake of completeness, we will provide its proof in the Appendix.
\begin{proposition}\label{p1-1}
Under Assumption {\bf (A1)}, if for any $r>0$ the Lebesgue measure of the set $$\{x\in\R^d: V(x)\le r\}$$ is finite, then the semigroup
$(T^V_t)_{t\ge0}$ is compact.
\end{proposition}
From now on, we will take the following assumption:
\begin{itemize}
\item[{\bf (A2)}] \emph{The Lebesgue measure of the set $\{x\in\R^d: V(x)\le r\}$ is finite for any $r>0$. }
\end{itemize}
In particular, according to Proposition \ref{p1-1}, the semigroup
$(T^V_t)_{t\ge0}$ is compact. By the general theory of compact operator
semigroups, there exists an orthonormal basis of eigenfunctions $\{\phi_n\}_{n=1}^\infty$
in $L^2(\R^d;dx)$ with corresponding eigenvalues
$\{\lambda_n\}_{n=1}^\infty$ satisfying $0< \lambda_1<\lambda_2\le
\lambda_3\le\cdots$ and $\lim_{n\to\infty}\lambda_n=\infty$. That is,
$L^V \phi_n=-\lambda_n \phi_n$ and $T_t^V\phi_n=e^{-\lambda_n
t}\phi_n$, where $(L^V,\D(L^V))$ denotes the infinitesimal
generator of the semigroup $(T_t^V)_{t \ge 0}$.
The first eigenfunction $\phi_1$ is called the ground state in the
literature. Furthermore, under the assumptions above, we have the
following property for $\phi_1$; its proof is also deferred to the Appendix.
\begin{proposition}\label{p1-2}
Under Assumptions {\bf (A1)} and {\bf (A2)}, there exists a version
of $\phi_1$ which is bounded, continuous and strictly positive.
\end{proposition}
\ \
To derive an upper bound estimate for the ground state $\phi_1$, we
need the explicit expression of the operator $L^V$, which is given
by
$$L^Vf(x)=Lf(x)-V(x)f(x).$$ Here, $(L,\D(L))$ is the generator associated with the Dirichlet form $(D,\D(D))$. In
the L\'{e}vy case, it is easy to see that for any $f\in
C_c^2(\R^d)\subset\D(L)$,
$$
Lf(x)=\int_{\R^d} \Big(f(x+z)-f(x)-\langle \nabla f(x),z \rangle \I_{\{|z|\le 1\}}\Big)\rho(z)\,dz,$$ where $\rho$ is the density function of the L\'{e}vy measure.
For a general non-local Dirichlet form $(D,\D(D))$, if for every $x \in
\R^d$,
\begin{equation*}\label{oper-1}
\int_{\{|z|\le 1\}} |z| \left|J(x,x+z)-J(x,x-z)\right|\,dz<\infty,
\end{equation*} and for any $r>0$ large enough, \begin{equation*}\label{oper-2} x\mapsto \I_{B(0,2r)^c}
\int_{\{|x+z|\le r\}} J(x,x+z)\,dz \in L^2(\R^d;dx),\end{equation*}
then $C_c^2(\R^d)\subset \D(L)$ and for any $f\in C_c^2(\R^d)$,
\begin{equation*}
\begin{split}
Lf(x)=&
\int_{\R^d} \Big(f(x+z)-f(x)-\langle \nabla f(x),z \rangle \I_{\{|z|\le 1\}}\Big)J(x,x+z)\,dz\\
&+\frac{1}{2}\int_{\{|z|\le 1\}}\langle\nabla f(x),
z\rangle\left(J(x,x+z)-J(x,x-z)\right)\,dz,
\end{split}
\end{equation*} e.g.\ see \cite[Theorem 1.1]{W2009} for more details.
According to the discussion above, we sometimes adopt the following
regularity assumption on $J(x,y)$ and the operator $L^V$, which is
satisfied by all symmetric L\'{e}vy processes.
\begin{itemize}
\item[{\bf (A3)}] \emph{The jump kernel $J(x,y)$ satisfies that $$\sup_{x\in\R^d}\int_{\{|z|\le 1\}} |z|
\left|J(x,x+z)-J(x,x-z)\right|\,dz<\infty,$$ and for any $f\in C_c^2(\R^d)\subseteq \D(L^V)$,
\begin{equation}\label{ope11}
\begin{split}
L^Vf(x)=&
\int_{\R^d} \Big(f(x+z)-f(x)-\langle \nabla f(x),z \rangle \I_{\{|z|\le 1\}}\Big)J(x,x+z)\,dz\\
&+\frac{1}{2}\int_{\{|z|\le 1\}}\langle\nabla f(x),
z\rangle\left(J(x,x+z)-J(x,x-z)\right)\,dz-V(x)f(x).
\end{split}
\end{equation}
}
\end{itemize}
\ \
\subsection{Main results}
\emph{Throughout this paper, we always assume that assumptions {\bf
(A1)} and {\bf (A2)} hold, and that the ground state $\phi_1$ is
bounded, continuous and strictly positive.} In this paper, we are
concerned with the intrinsic ultracontractivity for the semigroup
$(T_t^V)_{t\ge0}$. We first recall the definition of intrinsic
ultracontractivity for Feynman-Kac semigroups introduced in
\cite{DS}. The semigroup $(T_t^V)_{t\ge0}$ is intrinsically
ultracontractive if and only if for any $t>0$, there exists a
constant $C_t>0$ such that for all $x$, $y\in\R^d$,
\begin{equation*}\label{iuc}p^V(t,x,y)\le C_t\phi_1(x)\phi_1(y).\end{equation*}
In the framework of semigroup theory, define
\begin{equation}\label{e1}
\tilde{T}_t^Vf(x):=\frac{e^{\lambda_1t}}{\phi_1(x)}T_t^V(\phi_1f)(x),\quad t>0,
\end{equation}
which is a Markov semigroup on $L^2(\R^d; \phi^2_1(x)\,dx)$. Then, $(T_t^V)_{t\ge0}$ is intrinsically ultracontractive if and only if $(\tilde{T}_t^V)_{t\ge0}$ is ultracontractive, i.e., for every $t>0$,
$\tilde{T}_t^V$ is a bounded operator from $L^2(\R^d; \phi^2_1(x)\,dx)$ to $L^\infty(\R^d; \phi^2_1(x)\,dx)$.
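For instance, the Markov property of $(\tilde{T}_t^V)_{t\ge0}$ (i.e.\ $\tilde{T}_t^V\mathbf{1}=\mathbf{1}$) follows directly from the eigenvalue equation $T_t^V\phi_1=e^{-\lambda_1 t}\phi_1$:
\begin{equation*}
\tilde{T}_t^V\mathbf{1}(x)=\frac{e^{\lambda_1t}}{\phi_1(x)}\,T_t^V\phi_1(x)
=\frac{e^{\lambda_1t}}{\phi_1(x)}\,e^{-\lambda_1t}\phi_1(x)=1,\quad x\in\R^d,\ t>0.
\end{equation*}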
Recently, the intrinsic ultracontractivity of $(T_t^V)_{t\ge0}$
associated with some special pure jump symmetric L\'evy process
$(X_t)_{t\ge0}$ has been investigated in \cite{KS,KK,KL}. The
approach of all these cited papers is based on sharp and explicit
pointwise upper and lower bound estimates for the ground state $\phi_1$
corresponding to the semigroup $(T_t^V)_{t\ge0}$. However, to apply
such a powerful technique, some restrictions on the density function
of the jump kernel are needed, e.g.\ see \cite[Assumption 2.1]{KL}. In
particular, in the L\'{e}vy case the following typical example
\begin{equation}\label{jjj} J(x,y)\asymp|x-y|^{-d-\alpha}\I_{\{|x-y|\le
1\}}+e^{-|x-y|^\gamma}\I_{\{|x-y|> 1\}}\end{equation} with $\alpha \in (0,2)$
and $\gamma\in (1,\infty]$ is not included in \cite{KS,KL,KK}. Here and in what follows,
for two functions
$f$ and $g$ defined on $\R^d\times \R^d$, $f \asymp g$ means that there is a constant $c>1$ such that $c^{-1}g(x,y)\le f(x,y) \le c g(x,y)$ for all $(x,y)\in\R^d\times \R^d$.
In particular, when $\gamma=\infty$,
$$J(x,y)\asymp|x-y|^{-d-\alpha}\I_{\{|x-y|\le 1\}},$$ which is
associated with the truncated symmetric $\alpha$-stable process. As
mentioned in \cite{CKK2, CKK1,BBCK}, such a jump density function
$J(x,y)$ is very important in applications; it arises in statistical
physics to model turbulence, as well as in mathematical finance to
model stochastic volatility.
Furthermore, the following growth condition on the potential function
\begin{equation}\label{e1-1}
\lim_{|x| \to \infty}V(x)=\infty
\end{equation}
was commonly used in \cite{KS,KK, KL} to derive the compactness
of $(T_t^V)_{t \ge 0}$, e.g.\ \cite[Assumption 2.4]{KL}. However, as shown by Proposition \ref{p1-1}, assumption
{\bf (A2)}, which is much weaker than (\ref{e1-1}), is sufficient to ensure the compactness of
$(T_t^V)_{t \ge 0}$. Therefore, a natural question is whether one can give sufficient conditions
for the intrinsic ultracontractivity of $(T_t^V)_{t \ge 0}$ without the restrictive condition
(\ref{e1-1}).
In this paper, we will make use of super Poincar\'e inequalities
with respect to infinite measure developed in \cite{Wang02} and
functional inequalities for non-local Dirichlet forms recently
studied in \cite{WW,WJ13,CW14} to deal with the questions mentioned above. We aim to present some sharp
conditions on the potential function $V$ such that the associated
Feynman-Kac semigroup $(T_t^V)_{t\ge0}$ is intrinsically
ultracontractive, and also derive explicit two-sided estimates for
the ground state $\phi_1$. Our method is different from that of \cite{KS,KK,KL}, and deals with the intrinsic ultracontractivity of $(T_t^V)_{t\ge0}$
for non-local Dirichlet forms in more general situations. The following points indicate the novelties of
our paper.
\begin{itemize}
\item[(i)] We can deal with the example of $J(x,y)$ given
in \eqref{jjj}, which essentially means that small jumps play the dominant
role in the behavior of the associated process. On the other hand, by \eqref{e3-1}, we also cover the case where the density of the small jumps
is of variable order.
\item[(ii)] For a large class of
potential functions $V$ that satisfy neither the growth condition (\ref{e1-1}) nor
the regularity condition (\ref{e1-1a}) below, we can still obtain
sufficient conditions for the intrinsic ultracontractivity of
$(T_t^V)_{t \ge 0}$, which, to the best of our knowledge, do not appear in the literature.
\item[(iii)]
Our method applies efficiently to Hunt processes generated by non-local
Dirichlet forms. In particular, unlike a L\'{e}vy process, the
associated process is in general not spatially homogeneous.
\end{itemize}
\bigskip
Now, we present the main results of our paper, which are split into two subsections.
\subsubsection{{\bf The case that $\lim\limits_{|x|\to\infty} V(x)=\infty$.}} The following statement is a consequence of the more general Theorem \ref{t3-1}
below.
\begin{theorem}\label{thm2}
Suppose that \eqref{e3-1}, \eqref{e3-2}, {\bf (A1)} and {\bf (A2)}
hold, and that there exist positive constants $c_i$ $(i=3,4)$,
$\theta_i$ $(i=1,3)$ and constants $\theta_i$ $(i=2,4)$ such that
for every $x \in \R^d$ with $|x|$ large enough,
\begin{equation}\label{vv} c_3|x|^{\theta_1}\log^{\theta_2}(1+|x|) \le V(x)\le
c_4|x|^{\theta_3}\log^{\theta_4}(1+|x|).\end{equation} If
$\theta_1=1$ and $\theta_2>2$ or if $\theta_1>1$, then $(T_t^V)_{t \ge 0}$ is
intrinsically ultracontractive, and for any $\varepsilon>0$, there
exists a constant $c_5=c_5(\varepsilon)>0$ such that for all
$x\in\R^d$,
$$ c_5\exp\Big(- \frac{(1+\varepsilon)\theta_3}{\kappa}|x|\log(1+|x|)\Big)\le
\phi_1(x).$$
Additionally, if {\bf (A3)} also holds and
$$J(x,y)=0,\quad x,y\in\R^d\textrm{ with } |x-y|>\kappa,$$ then for any $\varepsilon>0$, there exists a constant
$c_6=c_6(\varepsilon)>0$ such that for all $x\in\R^d$,
$$
\phi_1(x)\le c_6\exp\Big(-\frac{(1-\varepsilon)\theta_1}{\kappa}
|x|\log(1+|x|)\Big).
$$
\end{theorem}
To show that Theorem \ref{thm2} is sharp, we give the following
example, which, as mentioned above, cannot be studied by the methods
used in \cite{KS, KK, KL}.
\begin{example}\label{ex2}\it
Suppose that assumptions {\bf (A1)}, {\bf (A2)} and {\bf (A3)} hold,
and
$$J(x,y)\asymp|x-y|^{-d-\alpha}\I_{\{|x-y|\le
1\}}+e^{-|x-y|^\gamma}\I_{\{|x-y|> 1\}},$$ where $\alpha \in (0,2)$
and $\gamma\in(1,\infty]$. If $V(x)=|x|^{\theta}$ for some constant
$\theta>0$, then the semigroup $(T_t^V)_{t \ge 0}$ is intrinsically
ultracontractive if and only if $\theta>1$. When $\theta>1$, we have the following explicit two-sided estimates for the
ground state $\phi_1$.
\begin{enumerate}
\item[(1)] If $\gamma=\infty$, i.e.\ the associated Hunt process $(X_t)_{t\ge0}$ has finite range jumps,
then for any $\varepsilon\in(0,1)$, there exist positive constants $c_i=c_i(\varepsilon,\theta)$ $(i=1,2)$ such that for all $x\in\R^d$,
\begin{equation}\label{ex2-1}
\begin{split}c_1\exp\Big(-(1+\varepsilon)\theta|x|\log(1+|x|)\Big)& \le
\phi_1(x)\\
&\le
c_2\exp\Big(-(1-\varepsilon)\theta|x|\log(1+|x|)\Big).\end{split}
\end{equation}
\item[(2)] If $1<\gamma<\infty$, then there exist positive constants $c_i:=c_i(\gamma)$ $(i=4, 6)$ independent of $\theta$, and positive constants $c_3=c_3(\theta,\gamma)$ and $c_5=c_5(\theta,\gamma)$, such that for all $x\in\R^d$,
\begin{equation}\label{ex2-2}
\begin{split}c_3\exp\Big(-c_4 \theta^{\frac{\gamma-1}{\gamma}}|x|\log^{\frac{\gamma-1}{\gamma}}(1+|x|)\Big) &\le
\phi_1(x)\\
&\le
c_5\exp\Big(-c_6 \theta^{\frac{\gamma-1}{\gamma}}|x|\log^{\frac{\gamma-1}{\gamma}}(1+|x|)\Big).\end{split}
\end{equation}
\end{enumerate}
\end{example}
We make some comments on Theorem \ref{thm2} and Example \ref{ex2}.
\begin{remark}
(1) Compared with \cite{KS,KK, KL}, to ensure the intrinsic ultracontractivity of $(T_t^V)_{t \ge 0}$, Theorem \ref{thm2}
dispenses with the following restrictive condition on the potential function $V$:
\begin{equation}\label{e1-1a}
\sup_{z\in B(x,1)}V(z)\le C V(x),\quad |x|\ge1,
\end{equation}
see e.g.\ \cite[Assumption 2.5 and Corollary 2.3(1)]{KL}.
Intuitively, the regularity condition (\ref{e1-1a}) means that $V$ oscillates only mildly. However, according to \eqref{vv}, we know from Theorem \ref{thm2} that $(T_t^V)_{t \ge 0}$ may still be intrinsically
ultracontractive
without such a regularity condition on $V$. The reader can refer to Proposition \ref{pro} below for more general conditions on $V$.
Roughly speaking, the upper bound for $V$ in \eqref{vv} is used to control the lower bound for the ground state $\phi_1$, while the lower bound for $V$ is needed to establish the upper bound estimate for $\phi_1$, as well as the intrinsic (local) super Poincar\'{e} inequality for the Dirichlet form $(D,\D(D))$.
(2) In the L\'{e}vy case, if $V(x)=|x|^{\theta}$ for some $\theta>0$,
the conclusion of Example \ref{ex2} says that $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive if and
only if $\theta>1$. Such a condition on $V$ is the
same as that in the case $\gamma=1$, which is associated with the Feynman-Kac
semigroup for relativistic $\alpha$-stable processes; see
\cite[Theorem 1.6 and the remark below it]{KS} for more details.
However,
the case $\gamma\in(1,\infty]$ does not fit the framework of
\cite{KS,KK,KL}, and it is essentially different from the case $\gamma \in
(0,1)$. Indeed, let $\rho$ be the density function of the L\'{e}vy
measure. According to \cite[Assumption 2.1]{KL}, the function $\rho$
is required to satisfy that
\begin{itemize}
\item[(i)] There exists a constant $C_1>0$ such that for every $1\le |y|\le
|x|$,
$$\rho(y)\le C_1\rho(x).$$
\item[(ii)] There exists a constant $C_2>0$ such that for all $x$,
$y\in\R^d$ with $|x-y|>1$,
$$\int_{\{|z-x|\ge1, |z-y|\ge 1\}} \rho(x-z)\rho(z-y)\,dz\le
C_2\rho(x-y).$$
\end{itemize}
By \cite[Example 4.1 (3)]{KL}, the assumptions (i) and
(ii) are only satisfied when $\gamma\in(0,1]$. On the other hand,
the difference between $\gamma>1$ and $0<\gamma\le 1$ is also
indicated by \cite[Theorem 1.2 (1) and (2)]{CKK1}, where explicit
global heat kernel estimates of the associated process (depending on
the parameter $\gamma$) are presented.
(3) In Example \ref{ex2} (1), i.e.\ $\gamma=\infty$, the symmetric Hunt process associated with the density function $J$ above is the truncated symmetric $\alpha$-stable-like process, e.g.\ see \cite{CKK2}. On the other hand,
if the Hunt process is a Brownian motion and $V(x)=|x|^\theta$ for
some $\theta>0$, then, according to \cite[Theorem 6.1]{DS} (at least
in the one-dimensional case), the associated Feynman-Kac
semigroup is intrinsically ultracontractive if and only if
$\theta>2$. This, along with Example \ref{ex2} (1), indicates the
difference in the intrinsic ultracontractivity of Feynman-Kac
semigroups between L\'evy processes (symmetric jump processes) with finite range jumps and
Brownian motion.
\end{remark}
\ \
\subsubsection{{\bf The case that $\lim\limits_{|x|\to\infty} V(x)\neq\infty$.}}
The following theorem gives sufficient conditions for the intrinsic ultracontractivity
of $(T_t^V)_{t \ge 0}$ for a class of irregular potential functions $V$ such that $\lim\limits_{|x|\to\infty} V(x)\neq\infty$.
Denote by $|A|$
the Lebesgue measure of a Borel set $A\subseteq \R^d$.
\begin{theorem}\label{thm3}
Suppose that \eqref{e3-1}, \eqref{e3-2}, assumptions {\bf (A1)} and {\bf (A2)} hold, and that there
exists an unbounded subset $A\subseteq \R^d$ such that the following conditions are satisfied.
\begin{enumerate}
\item [(1)] $|A|<\infty$ and $$A \cap \{x \in \R^d: |x|\ge R\}\neq\emptyset, \quad\forall\ R>0.$$
\item [(2)] There
exist positive constants $c_i$ $(i=3,4)$, $\theta_i$ $(i=1,2)$ with $\theta_1>2$ and
constant $\theta_3\in\R$ such that for all $x \in \R^d$ with $|x|$ large enough,
$$ V(x)=1,\quad x \in A$$ and
$$c_3|x|\log^{\theta_1}(1+|x|) \le V(x)\le
c_4|x|^{\theta_2}\log^{\theta_3}(1+|x|),\quad x \notin A.$$
\item [(3)] There exist positive constants $c_i$ $(i=5,6)$ and $\eta_i$ $(i=1,2)$ such that
for every $R>2$,
$$|\{x \in \R^d: x \in A, |x|\ge R\}|\le c_5 \exp(-c_6R^{\eta_1}\log^{\eta_2}R).$$
\end{enumerate}
Then, we have
\begin{enumerate}
\item [(i)] If $\eta_1=1$ and $\eta_2>1$, then the associated
Feynman-Kac
semigroup $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
\item [(ii)] If $\eta_1=\eta_2=1$, then there exists a constant
$c_0>0$ such that for any $c_6>c_0$, the associated
semigroup $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
\end{enumerate}
\ \
Suppose moreover that $d>\alpha_1$, and replace $(3)$ by the following weaker condition:
\begin{itemize}
\item[(4)] There exist positive constants $c_7$ and $\eta_3$ such that
for every $R>2$,
\begin{equation}\label{power} |\{x \in \R^d: x \in A, |x|\ge R\}|\le \frac{c_7}{ R^{d/\alpha_1}\log^{\eta_3}R} .\end{equation}
\end{itemize}
Then, if $(1)$, $(2)$ and $(4)$ hold with $\eta_3>{2d}/{\alpha_1}$, the associated
semigroup $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
\end{theorem}
{ The following example shows that one cannot
replace the decay rate $d/\alpha_1$ in \eqref{power} by $d/\alpha_1-\varepsilon$ for any
$\varepsilon>0$.
\begin{example}\label{ex3}\it Consider the truncated symmetric $\alpha$-stable process on $\R^d$ with
some $0<\alpha<2$, i.e. $$J(x,y)=|x-y|^{-d-\alpha},\quad 0<|x-y|\le 1$$
and $$J(x,y)=0,\quad |x-y|>1.$$
For any $\varepsilon\in(0,1)$, let $A=\bigcup_{n=1}^{\infty}B(x_n,r_n)$ be such that $x_n \in \R^d$ with $|x_n|=n^{k_0}$ and
$r_n=n^{-\frac{k_0}{\alpha}+\frac{1}{d}}$ for $n \ge 1$, where $k_0>\frac{2}{\varepsilon}$.
Suppose that
\begin{equation*}
V(x)=\begin{cases}
\,\,1,\ \ \ \ \ \text{if}\ \ x \in A,\\
|x|^\theta,\ \ \ \text{if}\ \ x \notin A
\end{cases}
\end{equation*} with some constant
$\theta>1$. Then $(T_t^V)_{t \ge 0}$ is not intrinsically ultracontractive.
However, there is a constant $c_0>0$ such that
\begin{equation}\label{ex3-0}
|\{x \in \R^d: x \in A, |x|\ge R\}|\le \frac{c_0}{ R^{\frac{d}{\alpha}-\varepsilon}},\quad R>2.
\end{equation}
\end{example}}
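To see where the rate in \eqref{ex3-0} comes from, here is a sketch (the constants $c$, $c'$, $C$ below absorb the volume of the unit ball, and we assume in addition $k_0d/\alpha>2$, which holds automatically when $\alpha\le d$ since $k_0>2$): the balls $B(x_n,r_n)$ meeting $\{|x|\ge R\}$ correspond, up to bounded corrections, to indices $n\ge c'R^{1/k_0}$, whence
\begin{equation*}
|\{x \in A: |x|\ge R\}|
\le c\sum_{n\ge c' R^{1/k_0}} r_n^d
= c\sum_{n\ge c' R^{1/k_0}} n^{1-\frac{k_0 d}{\alpha}}
\le C R^{\frac{2}{k_0}-\frac{d}{\alpha}}
\le C R^{\varepsilon-\frac{d}{\alpha}},
\end{equation*}
where the last two steps use $\sum_{n\ge N}n^{1-k_0d/\alpha}\le CN^{2-k_0d/\alpha}$ and $2/k_0<\varepsilon$.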
\ \
The remainder of this paper is arranged as follows. In Section
\ref{section2}, we present sufficient conditions for the intrinsic
ultracontractivity of the Feynman-Kac semigroup in terms of an
intrinsic super Poincar\'e inequality, see Theorem \ref{p2-1}. These conditions are
of independent interest, and they work in a general framework including both local and non-local Dirichlet forms.
Section \ref{section3} is devoted to applying Theorem \ref{p2-1} to
yield general results on the
intrinsic ultracontractivity of Feynman-Kac semigroups for non-local Dirichlet forms.
We use a probabilistic method and an iteration
approach to derive an explicit lower bound estimate for the ground state
of the semigroup $(T_t^V)_{t\ge0}$, e.g.\ Proposition \ref{p3-1}.
The intrinsic local super Poincar\'e inequality for the Dirichlet
form $(D^V,\D(D^V))$ is established in Proposition \ref{l3-4}.
Proofs of all the statements in Section \ref{section1} are presented in Section \ref{section4}, and proofs of Propositions \ref{p1-1} and \ref{p1-2} are given in the Appendix.
\ \
\noindent {\bf Notation}\,\, Throughout this paper, let $d\ge 1$. By
$|x|$ we denote the Euclidean norm of $x\in\R^d$, and by $|A|$ the
Lebesgue measure of a Borel set $A$. Denote by $B(x,r)$ the ball
with center $x \in \R^d$ and radius $r>0$. For any $A$,
$B\subset\R^d$, let ${\rm dist}(A,B)=\inf\{|x-y|: x\in A, y\in B\}$. We
will write $C=C(\kappa,\delta,\varepsilon,\lambda,\ldots)$ to
indicate the dependence of the constant $C$ on parameters. The
constants may change their values from one line to the next, even on
the same line in the same formula. Let
$B_b(\R^d)$ be the set of bounded measurable functions on $\R^d$. For any measurable functions $f$,
$g$ and any $\sigma$-finite measure $\mu$ on $\R^d$, we set $\langle
f, g\rangle_{L^2(\R^d; \mu)}:=\int f(x)g(x)\,\mu(dx)$, and for any $p\in[1,\infty)$, $\|f\|_{L^p(\R^d;\mu)}:=\big(\int |f(x)|^p\,\mu(dx)\big)^{1/p}$.
Denote by $\|f\|_{\infty}$ the $L^{\infty}(\R^d;dx)$-norm for any
bounded function $f$. For any increasing function $f$ on
$(0,\infty)$, $f^{-1}(r):=\inf\{s>0: f(s)\ge r\}$ is its right
inverse.
\section{Intrinsic Ultracontractivity for General Dirichlet Forms}\label{section2}
The aim of this section is to present sufficient conditions for the
intrinsic ultracontractivity of Feynman-Kac semigroups associated with general symmetric Dirichlet forms (including local Dirichlet forms).
Since we believe that the result below is of independent interest and has wide applications, for the sake of self-containedness we first introduce some necessary notation, even though it partly repeats that of the previous section.
\ \
Let $(D,\mathscr{D}(D))$ be a regular symmetric Dirichlet form (not necessarily non-local) on $L^2(\R^d,dx)$ with core $C_c^2(\R^d)$, and let $V$ be a locally bounded non-negative measurable function on $\R^d$. Consider the following regular Dirichlet form with killing on $L^2(\R^d,dx)$:
$$ D^V(f,f) =D(f,f)+\int f^2(x)V(x)\,dx, \quad \mathscr{D}(D^V)= \overline{C_c^2(\R^d)}^{{D_1^V}},$$
where $${D_1^V}(f,f):={D^V(f,f)+\|f\|_{L^2(\R^d;dx)}^2}.$$ Denote by $(T_t^V)_{t\ge 0}$ the associated (Feynman-Kac) semigroup on $L^2(\R^d,dx)$.
To consider the intrinsic ultracontractivity of Feynman-Kac semigroup $(T_t^V)_{t\ge 0}$, we assume that
\begin{itemize}
\item[(A)] The Feynman-Kac semigroup $(T_t^V)_{t\ge 0}$ is compact on $L^2(\R^d,dx)$, and its ground state
$\phi_1$ corresponding to the first eigenvalue $\lambda_1>0$ is bounded, continuous and strictly positive.
\item[(B)] The potential function $V$ satisfies
\begin{itemize}
\item[{\bf (A2)}] \emph{For every $r>0$, $$|\{x\in\R^d: V(x)\le r\}|<\infty.$$}
\item[{\bf (A4)}]
\emph{There exists a constant $K>0$ such that
$$\lim_{R \to\infty} \Phi(R)=\infty,$$
where
$$\Phi(R)=\Phi_K(R):=\inf_{|x|\ge R, V(x)>K} V(x),\quad \ R>0.$$}
\end{itemize}
\end{itemize}
According
to assumption {\bf (A2)}, $\big|\{x \in \R^d: V(x)\le
K\}\big|<\infty$. Therefore, assumption {\bf (A4)} means
that the potential function $V$ tends to infinity as $|x| \to
\infty$ on the complement of a set (possibly unbounded) with finite
Lebesgue measure. Obviously, both {\bf (A2)} and {\bf (A4)} hold true when
\begin{equation}\label{e2-1}
\lim_{|x| \rightarrow \infty}V(x)=\infty.
\end{equation}
For the constant $K$ in assumption {\bf (A4)}, let
$$
\Theta(R)=\Theta_K(R):=\big|\{x \in \R^d: |x|\ge R, V(x)\le
K\}\big|,\quad R>0.
$$
On the other hand, due to the fact that $\big|\{x \in \R^d: V(x)\le K\}\big|<\infty$, it is easy to
see that
\begin{equation*}
\lim_{R \to \infty}\Theta(R)=0.
\end{equation*} In particular, if \eqref{e2-1} holds, then for any constant $K>0$,
$\Theta(R)=0$ when $R>0$ is large enough.
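For instance (an illustrative choice of ours, not needed in what follows): on $\R$, take $K=1$, $A=\bigcup_{n\ge1}[n^2,n^2+2^{-n}]$, and $V(x)=1$ for $x\in A$, $V(x)=|x|$ for $x\notin A$. Then {\bf (A2)} holds since $|\{V\le r\}|\le 2r+|A|<\infty$, and for $R>1$,
\begin{equation*}
\Phi(R)=\inf_{|x|\ge R,\,x\notin A}|x|=R,\qquad
\Theta(R)=|A\cap\{|x|\ge R\}|\le \sum_{n\ge \sqrt{R}-1}2^{-n}\le C\,2^{-\sqrt{R}},
\end{equation*}
so that {\bf (A4)} holds even though $\lim_{|x|\to\infty}V(x)\neq\infty$.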
\ \
Now, we state the main result of this section.
\begin{theorem}\label{p2-1}
Let $(T_t^V)_{t\ge 0}$ be a compact Feynman-Kac semigroup on $L^2(\R^d,dx)$, and $V$ be a locally bounded non-negative measurable function such that Assumptions $(A)$ and $(B)$ are satisfied. Suppose further that there exists a bounded
measurable function $\varphi \in B_b(\R^d)$ such that the following
conditions hold.
\begin{enumerate}
\item[(1)] There is a constant $r_0>0$ such that for every $r\ge r_0$, the following local intrinsic super Poincar\'e inequality
\begin{equation*}
\begin{split}
\int_{B(0,r)}f^2(x)\,dx\le & sD^V(f,f)\\
&+\alpha(r,s)\Big(\int |f|(x)\varphi(x)\,dx\Big)^2, \quad s>0,\
f \in \D(D^V)
\end{split}
\end{equation*}
holds for some positive measurable function $\alpha$.
\item[(2)] Let
$\phi_1$ be the ground state for the semigroup $(T_t^V)_{t\ge0}$.
It holds for some constant $C_0>0$ that
\begin{equation*}
\varphi(x)\le C_0\phi_1(x),\quad x\in\R^d.
\end{equation*}
\end{enumerate}
Then, the following intrinsic mixed type super Poincar\'e inequality
\begin{equation}\label{mixed}
\begin{split}
\int f^2(x)\,dx\le s D^V(f,f)&+\beta(s\wedge s_0) \left(\int|f|(x)\phi_1(x) \,dx\right)^2\\
&
+\gamma(s\wedge s_0)^{(p-2)/p}\|f\|_{L^p(\R^d;dx)}^2
\end{split}
\end{equation}
holds for all $s>0$, $f\in \D(D^V)$ and $p\in(2,\infty]$ with the rate functions
\begin{equation}\label{p2-1-5}
\beta(s):=C_0^2\alpha\left( \Phi^{-1}\left(\frac{2}{s}\right),
\frac{s}{2}\right),\quad \gamma(s):=
\Theta\left(\Phi^{-1}\left(\frac{2}{s}\right)\right)
\end{equation}
and constant $s_0=\frac{2}{\Phi(r_0)}$. Here, we use the convention that
$(p-2)/p=1$ when $p=\infty$.
Moreover, we have \begin{enumerate}
\item[(i)] If \eqref{e2-1} holds, then the following super Poincar\'e inequality
\begin{equation}\label{p2-1-6}
\begin{split}
\int f^2(x)\,dx\le& s D^V(f,f)\\
&+\beta(s \wedge r_1) \left(\int |f|(x)\phi_1 (x)\,dx\right)^2
,\quad s>0, f\in \D(D^V)
\end{split}
\end{equation}
holds for some constant $r_1>0$. Consequently, if
\begin{equation*}\label{t2-1-1a}
\int_t^{\infty}\frac{\beta^{-1}(s)}{s}\,ds<\infty,\quad t>\inf
\beta,
\end{equation*}
then the semigroup $(T^V_t)_{t \ge 0}$ is intrinsically
ultracontractive.
\item[(ii)] If for some $p>2$ there is a constant $c_0>0$ such that the following Sobolev inequality holds true
\begin{equation}\label{p2-2-1}
\|f\|_{L^p(\R^d;dx)}^2 \le c_0\bigg[ D^V(f,f)+\|f\|_{L^2(\R^d;dx)}^2\bigg],\quad f \in C_c^{\infty}(\R^d),
\end{equation}
then the super Poincar\'e inequality \eqref{p2-1-6} holds with
the rate function $\beta$ and the constant $r_1$ replaced by
\begin{equation}\label{p2-2-3}
{\hat \beta(s)}:=2C_0^2\alpha\left( \Psi^{-1}\left(\frac{s}{4}\right),
\frac{s}{4}\right)
\end{equation} and some constant $r_2>0$ respectively,
where $$\Psi(R):=\frac{1}{\Phi(R)}+c_0\Theta(R)^{\frac{p-2}{p}},\quad R>1.$$ {Consequently, if}
\begin{equation*}
\int_t^{\infty}\frac{{\hat \beta^{-1}(s)}}{s}\,ds<\infty,\quad t>\inf
\hat \beta,
\end{equation*}
then the semigroup $(T^V_t)_{t \ge 0}$ is intrinsically
ultracontractive.
\item[(iii)] If there exists a constant $\delta>1$ such that
\begin{equation}\label{p2-1-7}
\sum_{n=1}^{\infty}\gamma(s_n)\delta^n<\infty,
\end{equation}
where $s_n:=\beta^{-1}(\frac{c_1 \delta^n}{2})$ with
$c_1:=\|\phi_1\|_{\infty}^2$, then the following super Poincar\'e
inequality holds
\begin{equation}\label{p2-1-8}
\begin{split}
\int f^2(x)\,dx\le & s D^V(f,f)\\
&+\tilde \beta(s\wedge r_2) \left(
\int|f|(x)\phi_1(x)\,dx\right)^2 ,\quad s>0,\ f\in \D(D^V),
\end{split}
\end{equation}
where $r_2$ is a positive constant, $$ \tilde
\beta(s):=2\beta\left(\gamma^{-1}\left(\frac{1}{4\delta^{n_0(s)+1}}\right)\right)$$
and
\begin{equation}\label{p2-1-8a}
\begin{split}
n_0(s):=\inf\Bigg\{N \ge&\left(\log_\delta\Big(\frac{2\beta(s_0)}{c_1}\Big)\right)\vee\left(-\log_\delta\big(4\delta\gamma(s_0)\big)\right):\\
&\frac{4\delta(\sqrt{\delta}+1)s_{N}}{\sqrt{\delta}-1}
+2\gamma^{-1}\left(\frac{1}{4\delta^{N+1}}\right)\le s\Bigg\}.
\end{split}
\end{equation}
Consequently, if
\begin{equation*}\label{t2-1-2}
\int_t^{\infty}\frac{\tilde \beta^{-1}(s)}{s}\,ds<\infty,\quad
t>\inf \tilde \beta,
\end{equation*}
then the semigroup $(T^V_t)_{t \ge 0}$ is intrinsically
ultracontractive.
\end{enumerate}
\end{theorem}
\begin{proof} Throughout the proof, we denote by $\mu$ the Lebesgue measure on $\R^d$.
For the constant $K$ in assumption {\bf (A4)},
let $A_1:=\{x \in \R^d: V(x)> K\}$ and $A_2:=\R^d\setminus A_1$.
For any $f \in \D(D^V)$, $R\ge r_0$ and $p\in(2,\infty]$, it holds that
\begin{equation}\label{ddd-222}
\begin{split}
\int_{B(0,R)^c}f^2(x)\,\mu(dx)&= \int_{B(0,R)^c\cap
A_1}f^2(x)\,\mu(dx)
+\int_{B(0,R)^c\cap A_2}f^2(x)\,\mu(dx)\\
&\le \frac{1}{\Phi(R)}
\int_{B(0,R)^c \bigcap A_1}f^2(x)V(x)\,\mu(dx)\\
&\quad+\mu(B(0,R)^c\cap A_2)^{(p-2)/p}\|f\|^2_{L^p(\R^d;dx)}\\
& \le \frac{1}{\Phi(R)}D^V(f,f)+\Theta(R)^{(p-2)/p}\|f\|_{L^p(\R^d;dx)}^2,
\end{split}
\end{equation} where in the first inequality we have used the H\"{o}lder inequality when $p\in(2,\infty)$.
This, along with conditions (1) and (2), gives us that for any $R$, $\tilde s>0$ and $f\in \D(D^V)$,
\begin{equation*}
\begin{split}
\mu(f^2)\le \left(\frac{1}{\Phi(R)}+\tilde s\right) D^V(f,f)+
C_0^2\alpha(R,\tilde s)\mu\big(\phi_1 |f|\big)^2+
\Theta(R)^{(p-2)/p}\|f\|_{L^p(\R^d;dx)}^2.
\end{split}
\end{equation*}
For any $0<s\le s_0:=\frac{2}{\Phi(r_0)}$, taking $R=\Phi^{-1}\left(\frac{2}{s}\right)$
and $\tilde s=\frac{s}{2}$ in the inequality above, we can get the
required mixed type super Poincar\'{e} inequality (\ref{mixed}) for all $s\in(0,
s_0]$. Hence, the proof of the first
assertion is completed by choosing $\beta(s)=\beta(s_0)$ and
$\gamma(s)=\gamma(s_0)$ for all $s\ge s_0$.
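We also note that the choice of $R$ above is admissible in condition (1): since $\Phi$ is non-decreasing (the infimum in its definition is taken over a family of sets that shrinks as $R$ increases), $s\le s_0=\frac{2}{\Phi(r_0)}$ implies
$$\frac{2}{s}\ge \Phi(r_0)\quad \text{and hence}\quad R=\Phi^{-1}\Big(\frac{2}{s}\Big)\ge r_0.$$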

(i) We take $p=\infty$ in \eqref{ddd-222}. Suppose that (\ref{e2-1}) holds. Then $\Theta(R)=0$ for $R>0$
large enough. This immediately yields the super Poincar\'{e}
inequality (\ref{p2-1-6}) with some constant $r_1>0$.
Let $(L^V, \mathscr{D}(L^V))$ be the generator associated with $(T^V_t)_{t\ge0}$, and
$(\tilde T_t^V)_{t\ge0}$ be the strongly continuous semigroup defined by (\ref{e1}).
Due to the fact that $L^V \phi_1=-\lambda_1 \phi_1$, the (regular)
Dirichlet form $(D_{\phi_1},\mathscr{D}(D_{\phi_1}))$ associated
with $(\tilde T_t^V)_{t \ge 0}$ enjoys the properties that,
$C_c^2(\R^d)$ is a core for $(D_{\phi_1},\mathscr{D}(D_{\phi_1}))$,
and for any $f\in C_c^2(\R^d)$,
\begin{equation}\label{t2-1-1}
\begin{split}
D_{\phi_1}(f,f)=D^V(f\phi_1,f\phi_1)-\lambda_1 \int_{\R^d}
f^2(x)\phi_1^2(x)\,dx.
\end{split}
\end{equation}
Let $\mu_{\phi_1}(dx)=\phi_1^2(x)\,dx$. Combining (\ref{t2-1-1})
with (\ref{p2-1-6}) gives us the following intrinsic super Poincar\'e inequality
\begin{equation*}\label{t2-1-2a}
\begin{split}
\mu_{\phi_1}(f^2)&\le s D^V(f\phi_1,f\phi_1)+
\beta(s\wedge r_1) \Big(\int |f|(x)\phi^2_1(x)\,dx\Big)^2\\
&\le s \Big(D_{\phi_1}(f,f)+\lambda_1\mu_{\phi_1}(f^2)\Big)+\beta(s\wedge r_1) \mu_{\phi_1}^2(|f|),
\quad s>0,
\end{split}
\end{equation*}
where the rate function $\beta(s)$ is given by (\ref{p2-1-5}). In
particular, for any $s\in(0,1/(2\lambda_1))$,
\begin{equation*}
\mu_{\phi_1}(f^2)\le 2sD_{\phi_1}(f,f)+2\beta({s}\wedge r_1)\mu_{\phi_1}^2(|f|),\quad f\in C_c^2(\R^d),
\end{equation*}which implies that
\begin{equation*}
\mu_{\phi_1}(f^2)\le sD_{\phi_1}(f,f)+2\beta\Big(\frac{s}{2}\wedge r_1\wedge \frac{1}{\lambda_1}\Big)\mu_{\phi_1}^2(|f|),\quad f\in C_c^2(\R^d), s>0.
\end{equation*}
Therefore, the desired assertion in (i) for the ultracontractivity of the (Markovian)
semigroup $(\tilde T_t^V)_{t \ge 0}$ (or, equivalently, the intrinsic ultracontractivity of the semigroup $(T_t^V)_{t \ge 0}$)
follows from \cite[Theorem
3.3.13]{WBook} or \cite[Theorem 3.1]{Wang00}.

(ii) Suppose \eqref{p2-2-1} holds true. According to \eqref{ddd-222} and \eqref{p2-2-1}, for any $f \in C_c^\infty(\R^d)$ and $R\ge r_0$,
\begin{equation*}
\begin{split}
\int_{B(0,R)^c}f^2(x)\,\mu(dx)&\le \left( \frac{1}{\Phi(R)}+c_0\Theta(R)^{({p-2})/{p}}\right)
D^V(f,f)+c_0\Theta(R)^{({p-2})/{p}}\mu(f^2).
\end{split}
\end{equation*}
Combining this with conditions (1) and (2) in Theorem \ref{p2-1},
we have that for any $R\ge R_1\vee r_0$ with $c_0\Theta(R_1)^{({p-2})/{p}}\le{1}/{2}$, each $\tilde s>0$ and $f\in C_c^\infty(\R^d)$,
\begin{equation*}
\begin{split}
\mu(f^2) &\le 2\left(\frac{1}{\Phi(R)}+c_0\Theta(R)^{{(p-2)/}{p}}+\tilde s\right) D^V(f,f)+
2C_0^2\alpha(R,\tilde s)\mu\big(\phi_1 |f|\big)^2\\
&=2\left(\Psi(R)+\tilde s\right) D^V(f,f)+2C_0^2\alpha(R,\tilde s)\mu\big(\phi_1 |f|\big)^2.
\end{split}
\end{equation*}
Taking $R=\Psi^{-1}\left(\frac{s}{4}\right)$
and $\tilde s=\frac{s}{4}$ in the inequality above for $s>0$ small enough, we can get the super Poincar\'e inequality \eqref{p2-1-6} with the desired rate function $\hat{\beta}$ given by \eqref{p2-2-3}, also thanks to the fact that $C_c^\infty(\R^d)$ is dense in $\mathscr{D}(D^V).$
Having this at hand, we can arrive at the final assertion for (ii) by the same argument as that in the proof of part (i).

(iii) Now we take $p=\infty$ in \eqref{ddd-222}, and assume that (\ref{p2-1-7}) holds. Note that, for every $f
\in \D(D^V)$, $|f|\in \D(D^V)$ and $D^V(|f|,|f|)\le D^V(f,f)$, and
so it suffices to prove that (\ref{p2-1-8}) holds for any $f \in
\D(D^V)$ with $f \ge 0$.
Given any $f \in \D(D^V)$ with $f \ge 0$ and $\mu(f^2)=1$, for any $\delta>1$, we have
\begin{equation}\label{aaa}
\begin{split}
\mu(f^2)&=\int_0^{\infty}\mu(f^2>t)\,dt\\
&=\int_0^{\delta^{n_0+1}}\mu(f^2>t)\,dt
+\sum_{n=n_0+1}^{\infty}\int_{\delta^n}^{\delta^{n+1}}\mu(f^2>t)\,dt\\
&\le \mu\big((f\wedge \delta^{\frac{n_0+1}{2}})^2\big)+
\sum_{n=n_0+1}^{\infty}(\delta^{n+1}-\delta^{n})\mu(f^2>\delta^n)\\
&=:J_{n_0}+\sum_{n=n_0+1}^{\infty}I_n,
\end{split}
\end{equation}
where $n_0$ is an integer to be determined later.
Next, we define
\begin{equation*}
f_n:=\big(f-\delta^{\frac{n}{2}}\big)^+\wedge
\big(\delta^{\frac{n+1}{2}}-\delta^{\frac{n}{2}}\big),\quad n\ge0.
\end{equation*}
Noticing that for any $n\ge0$, $$f_n \ge
\big(\delta^{\frac{n+1}{2}}-\delta^{\frac{n}{2}}\big)\I_{\{f^2>\delta^{n+1}\}},$$
we get \begin{equation}\label{p2-1-11}
\begin{split}
I_n&=(\delta^{n+1}-\delta^{n})\mu(f^2>\delta^n)\\
&\le
\frac{(\delta^{n+1}-\delta^{n})\mu(f_{n-1}^2)}{\big(\delta^{\frac{n}{2}}-\delta^{\frac{n-1}{2}}\big)^2}\\
&=\frac{\delta(\sqrt{\delta}+1)}{\sqrt{\delta}-1}\mu(f_{n-1}^2),\quad
n\ge1.
\end{split}
\end{equation}
According to \eqref{mixed} and the fact that $f_n \in \D(D^V)$, for all
$n\ge0$ and $0<s\le s_0$,
\begin{equation}\label{p2-1-10}
\mu(f_n^2)\le sD^V(f_n,f_n)+\beta(s)\mu(\phi_1
f_n)^2+\gamma(s)\delta^n(\sqrt{\delta}-1)^2.
\end{equation}
Due to the Cauchy-Schwarz inequality, the Chebyshev inequality and the fact that $\mu(f^2)=1$,
\begin{equation*}
\begin{split}
\mu(\phi_1 f_n)^2&=\mu(\phi_1 f_n \I_{\{f^2>\delta^n\}})^2\le
\mu(\phi_1^2\I_{\{f^2>\delta^n\}})\mu(f_n^2)\le
c_1\delta^{-n}\mu(f_n^2).
\end{split}
\end{equation*}
Then, taking $s=s_n:=\beta^{-1}\left(\frac{c_1\delta^n}{2}\right)$ with $n\ge\log_\delta\Big(\frac{2\beta(s_0)}{c_1}\Big)$
in \eqref{p2-1-10}, we obtain
\begin{equation*}
\mu(f_n^2)\le s_n
D^V(f_n,f_n)+\frac{1}{2}\mu(f_n^2)+(\sqrt{\delta}-1)^2\gamma(s_n)\delta^n,
\end{equation*}
which implies that
\begin{equation*}\label{p2-1-10a}
\mu(f_n^2)\le 2s_n
D^V(f_n,f_n)+2(\sqrt{\delta}-1)^2\gamma(s_n)\delta^n.
\end{equation*}
Since (\ref{p2-1-7}) holds true, there exists an integer $$n_0\ge\left(\log_\delta\Big(\frac{2\beta(s_0)}{c_1}\Big)\right)\vee\left(-\log_\delta\big(4\delta\gamma(s_0)\big)\right)$$
such that
$${\delta({\delta}-1)}\sum_{i=n_0}^{\infty}\gamma(s_i)\delta^i\le \frac{1}{8}.$$
Furthermore, it is easy to see that
\begin{equation*}
\sum_{n=0}^{\infty}|f_n(x)-f_n(y)|\le |f(x)-f(y)|,\ \ \sum_{n=0}^{\infty}|f_n(x)|\le |f(x)|,
\end{equation*}
so, according to \cite[Lemma 3.3.2]{WBook},
$$
\sum_{n=0}^{\infty}D^V(f_n,f_n)\le D^V(f,f).
$$
Combining all the estimates above with (\ref{p2-1-11}), and noting that $s_n$ is
non-increasing with respect to $n$, we arrive at
\begin{equation}\label{p2-1-11a}
\sum_{n=n_0+1}^{\infty}I_n\le\frac{2\delta(\sqrt{\delta}+1)s_{n_0}}{\sqrt{\delta}-1}
D^V(f,f)+\frac{1}{4}.
\end{equation}
On the other hand, applying (\ref{mixed}) (with $p=\infty$) to $f\wedge \delta^{\frac{n_0+1}{2}}$, we have
\begin{equation*}
\begin{split}
J_{n_0}&=\mu\big((f \wedge \delta^{\frac{n_0+1}{2}})^2\big)\\
& \le s D^V(f \wedge \delta^{\frac{n_0+1}{2}},f \wedge
\delta^{\frac{n_0+1}{2}})
+\beta(s)\mu(\phi_1 f)^2+\gamma(s)\delta^{n_0+1}\\
&\le sD^V(f,f)+\beta(s)\mu(\phi_1 f)^2+\gamma(s)\delta^{n_0+1},\quad 0<s\le s_0,
\end{split}
\end{equation*}
where the second inequality also follows from \cite[Lemma
3.3.2]{WBook}. Hence, noticing that
$n_0\ge -\log_\delta\big(4\delta\gamma(s_0)\big)$ and taking
$s=\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)$ in the
inequality above, we get that
\begin{equation}\label{p2-1-12}
\begin{split}
J_{n_0}\le
\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)D^V(f,f)+
\beta\left(\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)\right)
\mu(\phi_1 f)^2+\frac{1}{4}.
\end{split}
\end{equation}
According to \eqref{aaa}, (\ref{p2-1-11a}) and (\ref{p2-1-12}), we
obtain
\begin{equation*}
\begin{split}
\mu(f^2)\le& \left(\frac{2\delta(\sqrt{\delta}+1)s_{n_0}}{\sqrt{\delta}-1}
+\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)\right)D^V(f,f)\\
&+
\beta\left(\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)\right)
\mu(\phi_1 f)^2+\frac{1}{2}.
\end{split}
\end{equation*}
Since $\mu(f^2)=1$, this implies that
\begin{equation*}
\begin{split}
\mu(f^2)&\le \left(\frac{4\delta(\sqrt{\delta}+1)s_{n_0}}{\sqrt{\delta}-1}
+2\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)\right)D^V(f,f)\\
&
\quad +
2\beta\left(\gamma^{-1}\left(\frac{1}{4\delta^{n_0+1}}\right)\right)
\mu(\phi_1 f)^2.
\end{split}
\end{equation*}
Hence, for $s>0$ small enough, we arrive at the desired super
Poincar\'{e} inequality by taking $n_0$ to be $n_0(s)$ defined by
(\ref{p2-1-8a}). The intrinsic ultracontractivity of $(T_t^V)_{t
\ge 0}$ is easily verified by following the argument of (i).
\end{proof}
\begin{remark}
To derive the intrinsic ultracontractivity of Feynman-Kac semigroups (or Dirichlet semigroups), Rosen's lemma, in the context of the super log-Sobolev inequality, was applied in \cite{CS,DS}.
Instead of this approach, in Theorem \ref{p2-1} we use the super Poincar\'e inequality. The main advantage of our method is that
a mixed type super Poincar\'e inequality also applies in situations where the Sobolev inequality (\ref{p2-2-1}) fails, e.g.\
for a symmetric $\alpha$-stable process on $\R$ with $\alpha \ge 1$, for which the method of \cite{CS} does not work.
\end{remark}
In Theorem \ref{p2-1}, we essentially use a lower bound estimate for the ground state. Conversely, at the end of this section we present the following sufficient condition for an upper bound estimate of the ground state.
\begin{proposition}\label{pro--00} Suppose that the semigroup
$(T^V_t)_{t \ge 0}$ is intrinsically ultracontractive. Let $(L^V, \mathscr{D}(L^V))$ be the generator associated with $(T^V_t)_{t\ge0}$. If there
exist a positive function $\psi \in C_b^2(\R^d)\cap L^2(\R^d;dx)$ and a constant
$\lambda>0$ such that $\psi \in \D(L^V)$,
\begin{equation}\label{t2-1-0}
L^V\psi(x)\le \lambda \psi(x),\quad x \in \R^d,
\end{equation}
then there is a constant $c_1>0$ such that \begin{equation*}\label{t2-1-0a}
\phi_1(x)\le c_1\psi(x),\quad x \in \R^d.
\end{equation*}
\end{proposition}
\begin{proof}
Under \eqref{t2-1-0}, we know that $$T_t^V\psi(x)\le e^{\lambda t}\psi(x),\quad x\in\R^d, t>0.$$
According to \cite[Theorem 3.2]{DS}, the intrinsic ultracontractivity of
$(T_t^V)_{t \ge 0}$ implies that for every $t>0$, there is a constant $c_t>0$ such that
\begin{equation*}
p^V(t,x,y)\ge c_t\phi_1(x)\phi_1(y),\quad \ x,y\in \R^d.
\end{equation*}
Therefore,
\begin{equation*}
\begin{split}\psi(x)&\ge e^{-\lambda} T_1^V\psi(x)= e^{-\lambda} \int p^V(1,x,y)\psi(y)\,dy\\
&\ge c_1e^{-\lambda} \Big(\int \psi(y)\phi_1(y)\,dy\Big)\phi_1(x)=:C_0\phi_1(x),\end{split}
\end{equation*} which yields the required assertion.\end{proof}
\section{Intrinsic Ultracontractivity for Non-local Dirichlet Forms}\label{section3}
In this section, we come back to the framework introduced in Subsection \ref{subsection1-1}, which is recalled below. Let $(D,\mathscr{D}(D))$ be the non-local Dirichlet form given in \eqref{non-local} such that the jump kernel satisfies (\ref{e3-1}) and
(\ref{e3-2}),
i.e., there exist $\alpha_1, \alpha_2\in(0,2)$ with $\alpha_1\le\alpha_2$
and positive $c_1, c_2, \kappa$ such that
\begin{equation*}
c_1|x-y|^{-d-\alpha_1}\le J(x,y)\le c_2|x-y|^{-d-\alpha_2},\quad
0<|x-y|\le \kappa
\end{equation*} and
\begin{equation*}
\sup_{x \in \R^d}\int_{\{|y-x|>\kappa\}}J(x,y)\,dy<\infty.\end{equation*}
Assume that the corresponding symmetric Hunt process $((X_t)_{t\ge0}, \Pp^x)$ is well defined for all $x\in\R^d$, and that the process $(X_t)_{t\ge0}$ possesses a positive, bounded and continuous density function $p(t,x,y)$ on $\R^d\times \R^d$ for all $t>0$. That is, assumption {\bf(A1)} holds. Note that, when $J(x,y)=0$ for any $x$, $y\in\R^d$ with $|x-y|>
\kappa$, the process $(X_t)_{t\ge0}$ has finite range jumps.
Let $(T_t)_{t\ge0}$ be a symmetric semigroup associated with $(D,\mathscr{D}(D))$, i.e.\
\begin{equation*} T_t(f)(x)=\Ee^x\left(f(X_t)\right),\,\, x\in\R^d,
f\in L^2(\R^d;dx).\end{equation*} Let $V$ be a non-negative measurable and locally bounded potential
function on $\R^d$ such that Assumption {\bf (A2)} is satisfied. Define the Feynman-Kac semigroup
$(T^V_t)_{t\ge0}$ associated with the Hunt process $(X_t)_{t \ge 0}$ as
follows:
\begin{equation*} T^V_t(f)(x)=\Ee^x\left(\exp\Big(-\int_0^tV(X_s)\,ds\Big)f(X_t)\right),\,\, x\in\R^d,
f\in L^2(\R^d;dx).\end{equation*} Then, the non-local regular Dirichlet form associated with $(T_t^V)_{t\ge0}$ is given by
\begin{equation*}
\begin{split}
D^V(f,f)&=\frac{1}{2} \iint\big(f(x)-f(y)\big)^2J(x,y)\,dx\,dy+\int
f^2(x)V(x)\,dx, \\
\mathscr{D}(D^V)&= \overline{C_c^1(\R^d)}^{{D_1^V}},
\end{split}
\end{equation*}
where ${D_1^V}(f,f):={D^V(f,f)+\|f\|_{L^2(\R^d;dx)}^2}.$ According to Propositions \ref{p1-1} and \ref{p1-2}, we know that the Feynman-Kac semigroup $(T_t^V)_{t\ge 0}$ is compact on $L^2(\R^d,dx)$, and it has a bounded, continuous and strictly positive ground state $\phi_1$ corresponding to the first eigenvalue $\lambda_1>0$.
\ \
In the following, we will apply Theorem \ref{p2-1} to establish the intrinsic ultracontractivity of the Feynman-Kac semigroup $(T_t^V)_{t\ge 0}$. For this, we need to
obtain a good lower bound estimate for the ground state
$\phi_1$, and to derive a local super Poincar\'{e} inequality or a Sobolev inequality for the Dirichlet form $(D^V,\mathscr{D}(D^V))$. These will be considered in the following three subsections respectively, and then general results on the intrinsic ultracontractivity of Feynman-Kac semigroups for non-local Dirichlet forms are presented in Subsection \ref{subsection3-4}.
\subsection{Lower bound estimate for the ground state}
For any Borel set $D \subseteq \R^d$, let
$\tau_D:=\inf\{t>0:\ X_t\notin D\}$ be the first exit time from $D$ of the process $(X_t)_{t\ge0}$. Denote by $B(x,r)$ the ball with center at $x\in\R^d$ and radius $r>0$.
\begin{lemma}\label{l3-1}
There exist positive constants $c_0:=c_0(\kappa)$ and
$r_0:=r_0(\kappa)$ such that for every $r \in (0,r_0]$ and $x \in
\R^d$, we have
\begin{equation}\label{l3-1-1}
\Pp^x\left(\tau_{B(x,r)}>c_0
r^{\alpha_2+\frac{(\alpha_2-\alpha_1)d}{\alpha_1}} \right)\ge \frac{1}{2}.
\end{equation}
\end{lemma}
\begin{proof}
For any $0<s<\kappa$, set
\begin{equation*}
\begin{split}
& L_1(s):=\sup_{x \in \R^d}\int_{\{|y-x|>s\}}J(x,y)\,dy,\\
& L_2(s):=\sup_{x \in \R^d}\int_{\{|y-x|\le s\}}|x-y|^2J(x,y)\,dy,\\
& L(s):=L_1(s)+ s^d
\big(s^{-2}L_2(s)\big)^{\frac{(d+\alpha_1)}{\alpha_1}}.
\end{split}
\end{equation*}
According to \cite[Theorem 2.1]{BKK}, there exists a constant
$r_0:=r_0(\kappa)>0$ such that for every $0<r<r_0$, $t>0$ and $x \in
\R^d$,
\begin{equation}\label{l3-1-2}
\Pp^x \left(\tau_{B(x,r)}<t\right)\le C_1 t L(r),
\end{equation}
where $C_1$ is a positive constant independent of $t$ and $r$.
Without loss of generality, we may and do assume that
$0<r_0<1$. Then, by (\ref{e3-1}) and (\ref{e3-2}), for every
$r \in (0,r_0)$,
\begin{equation*}
\begin{split}
L(r)\le C_2\Big(r^{-\alpha_2-\frac{(\alpha_2-\alpha_1)d}{\alpha_1}}+
L_1(\kappa)\Big)\le C_3r^{-\alpha_2-\frac{(\alpha_2-\alpha_1)d}{\alpha_1}}.
\end{split}
\end{equation*}
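Indeed, the last display can be checked as follows (with $C>0$ a generic constant depending only on $d$, $\alpha_1$, $\alpha_2$, $c_2$ and $\kappa$): by (\ref{e3-1}) and (\ref{e3-2}),
$$L_1(r)\le c_2\int_{\{r<|z|\le \kappa\}}|z|^{-d-\alpha_2}\,dz+L_1(\kappa)\le Cr^{-\alpha_2}+L_1(\kappa),\quad
L_2(r)\le c_2\int_{\{|z|\le r\}}|z|^{2-d-\alpha_2}\,dz\le Cr^{2-\alpha_2},$$
so that
$$r^d\big(r^{-2}L_2(r)\big)^{\frac{d+\alpha_1}{\alpha_1}}\le
Cr^{\,d-\frac{\alpha_2(d+\alpha_1)}{\alpha_1}}=Cr^{-\alpha_2-\frac{(\alpha_2-\alpha_1)d}{\alpha_1}},$$
and, since $0<r<r_0<1$, the terms $r^{-\alpha_2}$ and $L_1(\kappa)$ are dominated by $r^{-\alpha_2-\frac{(\alpha_2-\alpha_1)d}{\alpha_1}}$ up to a constant.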
Let $c_0:=c_0(\kappa)$ be a positive constant such that $c_0C_1C_3\le
{1}/{2}$. Then the required assertion \eqref{l3-1-1} follows from
\eqref{l3-1-2} by taking $t=c_0r^{\alpha_2+\frac{(\alpha_2-\alpha_1)d}{\alpha_1}}.$
\end{proof}
\begin{lemma}\label{l3-2}
Let $r_0$, $c_0$ be the constants given in Lemma $\ref{l3-1}$, and
set $\varepsilon_0=r_0/\kappa$. Then, for any
$\varepsilon\in(0,\varepsilon_0)$, any two disjoint sets $B\supseteq
B(x,\varepsilon\kappa)$ and $D=B(y,\varepsilon\kappa)$ with $x$,
$y \in \R^d$ such that ${\rm dist}(B,D)>{\varepsilon\kappa}$ and
$|z_1-z_2|\le \kappa$ for every $z_1 \in B$ and $z_2 \in D$, and
every $0< t_1<t_2<T(\kappa,\varepsilon):=c_0(\varepsilon
\kappa)^{\alpha_2+\frac{(\alpha_2-\alpha_1)d}{\alpha_1}}$, it holds that
\begin{equation*}\label{l3-2-1}
\Pp^x\Big(X_{\tau_B}\in D,t_1\le\tau_B<t_2\Big)\ge c_1\varepsilon^d\kappa^{-\alpha_1}(t_2-t_1),
\end{equation*}
where $c_1$ is a positive constant independent of
$\kappa$, $\varepsilon$, $x$ and $y$.
\end{lemma}
\begin{proof}
Denote by $p_B(t,x,y)$ the density of the process $(X_t)_{t\ge0}$ killed on exiting the set $B$, i.e.\
$$p_B(t,x,y)=p(t,x,y)-\Ee^x\big(\tau_B\le t;\, p(t-\tau_B, X_{\tau_B},y)\big).$$
According to the L\'evy system formula for the Dirichlet form $(D,\D(D))$ (see e.g.\ \cite[Lemma 4.8]{CK1}), we have for disjoint open
sets $B$ and $D$ that
\begin{equation*}
\begin{split}\Pp^x\Big(X_{\tau_B}\in D\Big)&=\Ee^x\Big(\int_{0}^{\tau_B}\int_{D}J(X_s,z)\,dz\,ds\Big)\\
&=\int_B\int_{0}^{\infty}p_B(s,x,y)\,ds\int_{D}J(y,z)\,dz\,dy,\quad x\in B.\end{split}
\end{equation*}
Then, following the proof of \cite[Proposition 2.5]{KS}, we get that
\begin{equation*}\label{l3-2-2}
\Pp^x\Big(X_{\tau_B}\in D,t_1\le\tau_B<t_2\Big)=\int_B\int_{t_1}^{t_2}p_B(s,x,y)\,ds\int_{D}J(y,z)\,dz\,dy.
\end{equation*}
Therefore, it holds for every
$\varepsilon \in (0,\varepsilon_0)$ that
\begin{equation*}
\begin{split}
\Pp^x\Big(X_{\tau_B}\in D,t_1\le\tau_B<t_2\Big)&=\int_B\int_{t_1}^{t_2}p_B(s,x,y)\,ds\int_{D}J(y,z)\,dz\,dy\\
&\ge
\int_B\int_{t_1}^{t_2}p_B(s,x,y)\,ds\int_{D}\frac{c_1}{|z-y|^{d+\alpha_1}}\,dz\,dy\\
&\ge { C}\kappa^{-d-\alpha_1}|D|\int_B\int_{t_1}^{t_2}p_B(s,x,y)\,ds\,dy\\
&=
C\varepsilon^d\kappa^{-\alpha_1}\int_{t_1}^{t_2} \Pp^x\big(\tau_B>s\big)\,ds\\
&\ge C\varepsilon^d\kappa^{-\alpha_1}(t_2-t_1)\Pp^x\big(\tau_B>T(\kappa,\varepsilon)\big)\\
&\ge C\varepsilon^d\kappa^{-\alpha_1}(t_2-t_1)\Pp^x\big(\tau_{B(x,\varepsilon\kappa)}>T(\kappa,\varepsilon)\big)\\
&\ge {C\varepsilon^d}{\kappa^{-\alpha_1}}(t_2-t_1),
\end{split}
\end{equation*}
where in the first inequality we have used \eqref{e3-1}, the second inequality is
due to $|y-z|\le \kappa$ for every $y \in B$ and $z \in D$, and the last inequality follows from \eqref{l3-1-1} with $r={\varepsilon\kappa}$.
\end{proof}
\begin{lemma}\label{l3-3} Let $c_0$ and $\varepsilon_0$ be the constants
given in Lemmas $\ref{l3-1}$ and $\ref{l3-2}$, respectively. For any $\varepsilon\in
(0,\min({1}/{11},\varepsilon_0))$, let $D=B(0,2\varepsilon\kappa)$,
and $t_0=T(\kappa,\varepsilon):=c_0(\varepsilon
\kappa)^{\alpha_2+\frac{(\alpha_2-\alpha_1)d}{\alpha_1}}.$ Then, there
is a constant $c_2(\kappa,\varepsilon)>0$ such that for all
$x\in\R^d$ with $|x|> \frac{\kappa
(1-5\varepsilon)(1-4\varepsilon)}{\varepsilon}$,
\begin{equation}\label{l3-3-1}
T_{t_0}^V(\I_D)(x)\ge
\exp\Big(-\frac{1}{(1-6\varepsilon)\kappa}|x|\log\big(1+|x|+\sup_{|z|\le
|x|+2\varepsilon\kappa}V(z)\big)-c_{2}(\kappa,\varepsilon)\Big).
\end{equation}
\end{lemma}
\begin{proof}
For any $x\in\R^d$ with $|x|>\frac{\kappa (1-5\varepsilon)(1-4\varepsilon)}{\varepsilon}$, let $$n =\bigg\lfloor\frac{1}{(1-4\varepsilon)\kappa}|x|\bigg\rfloor+1$$ and
$x_i={ix}/{n}$ for any $0\le i\le n$, where $\lfloor a\rfloor$ denotes the largest integer less than or equal to $a$. In particular, $x_0=0$, $x_n=x$ and
$$\frac{1}{(1-4\varepsilon)\kappa}|x|\le n < \frac{1}{(1-5\varepsilon)\kappa}|x|.$$Next, for all $0 \le i \le n$,
set $D_i:=B(x_i,2\varepsilon{\kappa})$, $\tilde D_i:=B(x_i,\varepsilon\kappa)$. We can check that for all $0\le i\le n-1$, ${\rm dist}(D_i,D_{i+1})>(1-5\varepsilon)\kappa-4\varepsilon\kappa\ge2\varepsilon{\kappa}$, and
$|z_i-z_{i+1}|\le (1-4\varepsilon)\kappa+4\varepsilon\kappa=\kappa$ for every $z_i \in D_i$ and $z_{i+1}\in
D_{i+1}$.
In the following, we define
\begin{equation*}
\begin{split}
\tilde{\tau}_{D_i}:&=\inf\{t\ge \tilde{\tau}_{D_{i+1}}: X_t\notin D_i\},\quad 1\le i\le n-1; \\
\tilde{\tau}_{D_n}:&={\tau}_{D_n}.
\end{split}
\end{equation*}
By convention, we also set $\tilde{\tau}_{D_{n+1}}=0$. Then,
\begin{equation}\label{l3-3-2}
\begin{split}
&T_{t_0}^V(\I_D)(x)\\
&=\Ee^x\Big(\I_D(X_{t_0})\exp\Big(-\int_0^{t_0} V(X_s)\,ds\Big)\Big)\\
& \ge \Ee^x\Big(0<\tilde{\tau}_{D_{i}}-\tilde{\tau}_{D_{i+1}}<\frac{t_0}{n},
X_{\tilde{\tau}_{D_i}}\in \tilde D_{i-1}\
{\rm for\ each}\ 1\le i \le n,\forall_{s\in[\tilde{\tau}_{D_1},t_0] } X_{s} \in D; \\
&\qquad\quad
\exp\Big(-\sum_{i=1}^n\int^{\tilde{\tau}_{D_{i}}}_{\tilde{\tau}_{D_{i+1}}}
V(X_s)\,ds-\int_{\tilde{\tau}_{D_1}}^{t_0}
V(X_s)\,ds\Big)\Big)\\
& = \Ee^x\Big(0<{\tau}_{D_{n}}<\frac{t_0}{n}, X_{{\tau}_{D_{n}}}\in
\tilde D_{n-1}; \exp\Big(-\int^{{\tau}_{D_{n}}}_{0} V(X_s)\,ds\Big)\\
&\qquad\quad\cdot\Ee^{X_{\tilde{\tau}_{D_{n}}}}\Big(0<\tau_{D_{n-1}}<\frac{t_0}{n},X_{{\tau}_{D_{n-1}}}\in
\tilde D_{n-2}; \exp\Big(-\int^{{\tau}_{D_{n-1}}}_{0} V(X_s)\,ds\Big)\\
&\qquad\quad\,\cdot \Ee^{X_{\tilde{\tau}_{D_{n-1}}}}\Big(\cdots
\Ee^{X_{\tilde{\tau}_{D_{2}}}}\Big(0<{\tau}_{D_{1}}<\frac{t_0}{n},X_{\tau_{D_{1}}}\in
\tilde D_0; \exp\Big(-\int^{{\tau}_{D_{1}}}_{0} V(X_s)\,ds\Big)\\
&\qquad\quad\,\,\cdot
\Ee^{X_{\tilde{\tau}_{D_{1}}}}\Big(\forall_{s\in[0,t_0-\tilde {\tau}_{D_1}]} X_{s}
\in D;
\exp\Big(-\int_{0}^{t_0-\tilde {\tau}_{D_{1}}}V(X_s)\,ds\Big)\Big)\Big)\cdots\Big)\Big)\Big),
\end{split}
\end{equation}
where in the last equality we have used the strong Markov property.
On the one hand, according to Lemma \ref{l3-2}, for any
$2\le i\le n+1$, if $X_{\tilde{\tau}_{D_{i}}} \in \tilde D_{i-1}$, then
\begin{align*}
&
\Ee^{X_{\tilde{\tau}_{D_{i}}}}\Big(0<{\tau}_{D_{i-1}}<\frac{t_0}{n},X_{{\tau}_{D_{i-1}}}\in
\tilde D_{i-2}; \exp\Big(-\int^{{\tau}_{D_{i-1}}}_{0} V(X_s)\,ds\Big)\Big)\\
&\ge
\sum_{j=1}^{\infty}\Ee^{X_{\tilde{\tau}_{D_{i}}}}\Big(\frac{t_0}{(j+1)n}\le
{\tau}_{D_{i-1}}<\frac{t_0}{jn},X_{\tau_{D_{i-1}}}\in
\tilde D_{i-2}; \exp\Big(-\int^{{\tau}_{D_{i-1}}}_{0} V(X_s)\,ds\Big)\Big)\\
& \ge \sum_{j=1}^{\infty}\exp\Big(-\frac{t_0}{jn}\sup_{x \in
D_{i-1}}V(x)\Big)\\
&\qquad\qquad \times\inf_{y \in \tilde D_{i-1}}
\Ee^y\Big(\frac{t_0}{(j+1)n}\le{\tau}_{D_{i-1}}<\frac{t_0}{jn},
X_{{\tau}_{D_{i-1}}}\in
\tilde D_{i-2}\Big)\\
&\ge
\frac{C\varepsilon^d\kappa^{-\alpha_1} t_0}{n}\sum_{j=1}^{\infty}\frac{1}{j(j+1)}\exp\Big(-\frac{t_0}{jn}\sup_{x
\in D_{i-1}}V(x)\Big)\\
& \ge \frac{C\varepsilon^d\kappa^{-\alpha_1}t_0 }{n+t_0\sup_{x \in D_{i-1}}V(x)},
\end{align*}
where in the third inequality we have used Lemma \ref{l3-2} with
$B=D_{i-1}$ and $D=\tilde D_{i-2}$, and the last inequality follows
from \cite[Lemma 5.2]{KS}, i.e.\
\begin{equation}\label{rrr1}\sum_{j=1}^\infty \frac{e^{-r/j}}{j(j+1)}\ge \frac{e^{-1}}{r+1},\quad r\ge 0.\end{equation}
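For the reader's convenience, we note that \eqref{rrr1} admits the following elementary verification: given $r\ge0$, set $j_0:=\lceil r\rceil\vee 1$, so that $j_0\le r+1$ and $e^{-r/j}\ge e^{-1}$ for every $j\ge j_0$; hence
$$\sum_{j=1}^\infty \frac{e^{-r/j}}{j(j+1)}\ge e^{-1}\sum_{j=j_0}^{\infty}\Big(\frac{1}{j}-\frac{1}{j+1}\Big)=\frac{e^{-1}}{j_0}\ge \frac{e^{-1}}{r+1}.$$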
On the other hand, due to Lemma \ref{l3-1}, if $X_{\tilde{\tau}_{D_1}}\in
\tilde D_0$, then
\begin{equation}\label{l3-3-3}
\begin{split}
&\Ee^{X_{\tilde{\tau}_{D_{1}}}}\Big(\forall_{s\in [0,t_0-\tilde {\tau}_{D_1}]} X_{s}
\in D;
\exp\Big(-\int_{0}^{t_0-\tilde {\tau}_{D_{1}}}V(X_s)\,ds\Big)\Big)\\
&\ge \exp\Big(-t_0\sup_{z\in D}V(z)\Big)\inf_{y \in \tilde
D_0}\Ee^y\Big (\tau_D>t_0\Big)\\
&\ge \exp\Big(-t_0\sup_{z\in D}V(z)\Big)\inf_{y \in \tilde
D_0}\Ee^y\Big
(\tau_{B(y,\varepsilon\kappa)}>T(\kappa,\varepsilon)\Big)\\
& \ge {C}(\kappa,\varepsilon).
\end{split}
\end{equation}
Combining all the estimates above with the fact that $n \le\frac{1}{(1-5\varepsilon)\kappa}|x|$, we obtain that
\begin{equation*}
\begin{split}
T_{t_0}^V(\I_D)(x)& \ge C(\kappa,\varepsilon) \prod_{i=1}^n\Big(\frac{c\varepsilon^d\kappa^{-\alpha_1} t_0}{n+t_0\sup_{z
\in D_{i-1}}V(z)}\Big)\\
&\ge C(\kappa,\varepsilon)\Big(\frac{c\varepsilon^d\kappa^{-\alpha_1} t_0}{n+t_0\sup_{|z|\le |x|+2\varepsilon\kappa}V(z)}\Big)^n\\
&\ge \exp\Big(-\frac{1}{(1-6\varepsilon)\kappa}|x|\log(1+|x|+\sup_{|z|\le |x|+2\varepsilon\kappa}V(z))-C(\kappa,\varepsilon)\Big),
\end{split}
\end{equation*} which completes the proof.
\end{proof}
According to Lemma \ref{l3-3}, we can obtain the following lower bound estimate for the ground state.
\begin{proposition}\label{p3-1} For any $\varepsilon\in (0,\min({1}/{11},\varepsilon_0))$ and $x\in \R^d$, it holds
\begin{equation}\label{bbb}
\phi_1(x) \ge \exp\Big(-\frac{1}{\kappa(1-6\varepsilon)}|x|\log(1+|x|+\sup_{|z|\le |x|+2\varepsilon\kappa}V(z))-c_3(\kappa,\varepsilon)\Big)
\end{equation}
for some positive constant $c_3(\kappa,\varepsilon)$ independent of $x$.
\end{proposition}
\begin{proof} Since $\phi_1$ is continuous and strictly positive, we only need to verify the desired assertion for $x\in\R^d$ with $|x|>\frac{\kappa (1-5\varepsilon)(1-4\varepsilon)}{\varepsilon}$.
According to (\ref{l3-3-1}), we have for any $x\in\R^d$ with $|x|>\frac{\kappa (1-5\varepsilon)(1-4\varepsilon)}{\varepsilon}$ that
\begin{equation*}
\begin{split}
\exp\Big(-\frac{1}{\kappa(1-6\varepsilon)}|x|\log
\big(1+|x|&+\sup_{|z|\le |x|+2\varepsilon\kappa}V(z)\big)-C(\kappa,\varepsilon)\Big)\\
&\le T_{t_0}^V(\I_D)(x)\le cT_{t_0}^V(\phi_1)(x)=ce^{-\lambda_1 t_0}\phi_1(x),
\end{split}
\end{equation*}
where $c:=(\inf_{y \in D}\phi_1(y))^{-1}<\infty$. This immediately yields the desired assertion.
\end{proof}
\subsection{Intrinsic local super Poincar\'e inequality}
In this part, we will present the following local intrinsic super
Poincar\'e inequality.
\begin{proposition}\label{l3-4}
Let $\varphi$ be a positive and continuous function
on $\R^d$. For any
$r\ge\kappa$, $s>0$ and $f\in C_c^2(\R^d)$,
\begin{equation*}
\begin{split}
\int_{B(0,r)} f^2(x) \,dx
\le & s D^V(f,f)\\
&+\frac{c(\kappa)}{\inf_{|x|\le r+\kappa} \varphi^2(x)}
\big(1+s^{-\frac{d}{\alpha_1}}\big)\Big(\int_{B(0,r+\kappa)}|f(x)|\varphi(x)\,dx\Big)^2.
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
(i) According to \eqref{e3-1} and the fact that $V \ge 0$, for any $f\in C_c^2(\R^d)$,
\begin{equation}\label{pro1-00}
\begin{split}
D_{\alpha_1,\kappa}(f,f)&:=\frac{c_1}{2}\iint_{\{|x-y|\le \kappa\}}\big(
f(x)-f(y)\big)^2 |x-y|^{-d-\alpha_1}\,dx\,dy\\
&\le D^V(f,f).
\end{split}
\end{equation}
Next, we follow the argument of \cite[Theorem 3.1]{CK1} to obtain that for any $s>0$, $r\ge \kappa$ and $f\in C_c^2(\R^d)$,
\begin{equation}\label{pro-1-01}
\begin{split}
\int_{B(0,r)} f^2(x) \,dx \le& s D_{\alpha_1,\kappa}(f,f)+C(\kappa)
\big(1+s^{-\frac{d}{\alpha_1}}\big)\Big(\int_{B(0,r+\kappa)}|f(x)|\,dx\Big)^2
\end{split}
\end{equation}
holds with some constant $C(\kappa)>0$. Once \eqref{pro-1-01} is established, combining it with (\ref{pro1-00}) completes the proof.
(ii) Next, we turn to the proof of \eqref{pro-1-01}. For any $0<s\le r$
and $f\in C_c^2(\R^d)$, define
$$f_s(x):=\frac{1}{|B(0,s)|}\int_{B(x,s)}f(z)\,dz,\quad x\in B(0,r).$$ We have
$$\sup_{x\in B(0,r)}|f_s(x)|\le \frac{1}{|B(0,s)|} \int_{B(0,r+s)}|f(z)|\,dz,$$
and $$\aligned \int_{B(0,r)}|f_s(x)|\,dx&\le \int_{B(0,r)}\frac{1}{|B(0,s)|}\int_{B(x,s)}|f(z)|\,dz\,dx\\
&\le
\int_{B(0,r+s)}\bigg(\frac{1}{|B(0,s)|}\int_{B(z,s)}\,dx\bigg)|f(z)|\,dz\\
&\le
\int_{B(0,r+s)}|f(z)|\,dz.
\endaligned$$ Thus,
$$\aligned\int_{B(0,r)}f_s^2(x)\,dx\le & \Big(\sup_{x\in B(0,r)}|f_s(x)|\Big) \int_{B(0,r)}|f_s(x)|\,dx\\
\le &\frac{1}{|B(0,s)|}
\bigg(\int_{B(0,r+s)}|f(z)|\,dz\bigg)^2.\endaligned$$
Therefore, for any $f\in C_c^2(\R^d)$ and $0<s\le r,$
$$\aligned\int_{B(0,r)}f^2(x)\,dx
\le & 2\int_{B(0,r)}\big(f(x)-f_s(x)\big)^2\,dx+ 2\int_{B(0,r)}f^2_s(x)\,dx\\
\le &2\int_{B(0,r)}\frac{1}{|B(0,s)|}\int_{B(x,s)}(f(x)-f(y))^2\,dy\,dx\\
&+ \frac{2}{|B(0,s)|} \bigg(\int_{B(0,r+s)}|f(z)|\,dz\bigg)^2\\
\le & \bigg(\frac{2s^{d+\alpha_1}}{|B(0,s)|}\bigg)\iint_{\{|x-y|\le s\}}\frac{(f(x)-f(y))^2}{|x-y|^{d+\alpha_1}}\,dx\,dy\\
&+ \frac{2}{|B(0,s)|} \bigg(\int_{B(0,r+s)}|f(z)|\,dz\bigg)^2.\endaligned$$
In particular, for any $f\in C_c^2(\R^d)$ and $0<s\le \kappa\le r,$
$$\aligned\int_{B(0,r)}f^2(x)\,dx\le &c_3\bigg[s^{\alpha_1}D_{\alpha_1,\kappa}(f,f)+
s^{-d}\bigg(\int_{B(0,r+\kappa)}|f(z)|\,dz\bigg)^2\bigg], \endaligned$$
which implies that there exists a constant $s_0:=s_0(\kappa)>0$ such
that for all $s\in(0,s_0]$, $$\aligned\int_{B(0,r)}f^2(x)\,dx\le &s
D_{\alpha_1,\kappa}(f,f)\\
&+
c_4s^{-d/\alpha_1}\bigg(\int_{B(0,r+\kappa)}|f(z)|\,dz\bigg)^2,\quad
r\ge\kappa,f\in C_c^2(\R^d). \endaligned$$ This proves the desired
assertion \eqref{pro-1-01}.
\end{proof}
\begin{remark} One also can derive the inequality \eqref{pro-1-01} along the lines of the proof of \cite[Lemma 2.1]{CW14}. Indeed, when $\kappa=1$, the inequality \eqref{pro-1-01} is just
\cite[Lemma 2.1]{CW14}. For general $\kappa>0$, the proof of \eqref{pro-1-01} is almost the same as that of \cite[Lemma 2.1]{CW14}; one only needs to replace $B(0,\frac{1}{2})$ in \cite[(2.17)]{CW14} by $B(0,\frac{\kappa}{2})$.\end{remark}
\subsection{Sobolev inequalities}
\begin{proposition}\label{ppp-444} If $d>\alpha_1$, then there exists a constant $c_0(\kappa)>0$ such that the following Sobolev inequality holds
\begin{equation}\label{e3-3}
\begin{split}
\|f\|_{L^{2d/(d-\alpha_1)}(\R^d;dx)}^2 & \le c_0(\kappa)\bigg[D(f,f)+\|f\|_{L^{2}(\R^d;dx)}^2\bigg],\quad f \in C_c^{\infty}(\R^d).
\end{split}
\end{equation}
\end{proposition}
\begin{proof} According to \cite[(2.3)]{CK} and (\ref{e3-1}), we get that for any $f\in C_c^{\infty}(\R^d)$, $$\aligned
\|f\|_{L^{{2d}/({d-\alpha_1})}(\R^d;dx)}^2 &\le c_5
\int_{\R^d}\int_{\R^d}\frac{\left(f(x)-f(y)\right)^2}{|x-y|^{d+\alpha_1}}dxdy\\
& \le c_5
\iint_{\{|x-y|\le \kappa\}}\frac{\left(f(x)-f(y)\right)^2}{|x-y|^{d+\alpha_1}}dxdy
+c_6(\kappa)\|f\|_{L^{2}(\R^d;dx)}^2\\
&\le c_1c_5 D(f,f)+c_6(\kappa)\|f\|_{L^{2}(\R^d;dx)}^2.
\endaligned$$
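For completeness, the second inequality above relies on the following elementary tail estimate (a sketch; $\sigma_d$ denotes the surface area of the unit sphere, a notation used only here): since $(f(x)-f(y))^2\le 2f^2(x)+2f^2(y)$, by symmetry,
$$\iint_{\{|x-y|>\kappa\}}\frac{\left(f(x)-f(y)\right)^2}{|x-y|^{d+\alpha_1}}\,dx\,dy \le 4\int_{\R^d}f^2(x)\bigg(\int_{\{|z|>\kappa\}}\frac{dz}{|z|^{d+\alpha_1}}\bigg)\,dx = \frac{4\sigma_d}{\alpha_1\kappa^{\alpha_1}}\,\|f\|_{L^2(\R^d;dx)}^2,$$
so that one may take $c_6(\kappa)=\frac{4c_5\sigma_d}{\alpha_1\kappa^{\alpha_1}}$.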
This completes the proof.
\end{proof}
\subsection{General results about intrinsic ultracontractivity of Feynman-Kac
semigroups}\label{subsection3-4}
According to Propositions \ref{p3-1}, \ref{l3-4}, \ref{ppp-444} and
Theorem \ref{p2-1}, we immediately have the following statement.
\begin{theorem}\label{t3-1} Let $(T_t^V)_{t\ge 0}$ be a compact Feynman-Kac semigroup on $L^2(\R^d,dx)$ given at the beginning of this section (or in Subsection \ref{subsection1-1}), and let $V$ be a locally bounded non-negative measurable function on $\R^d$ such that assumptions {\bf (A2)} and {\bf (A4)} hold.
For some $\varepsilon\in
(0,\min({1}/{11},\varepsilon_0))$, define
\begin{equation}\label{t3-1-1}
\varphi(x):=\exp\Big(-\frac{1}{\kappa(1-6\varepsilon)}|x|\log(1+|x|+\sup_{|z|\le |x|+2\varepsilon\kappa}V(z))\Big),
\end{equation} and
\begin{equation*}\label{t3-1-3}
\begin{split}
\alpha(r,s):=\frac{ c(\kappa)}{\inf_{|x|\le r+\kappa}
\varphi^2(x)} \big(1+s^{-\frac{d}{\alpha_1}}\big),
\end{split}
\end{equation*} where $\varepsilon_0$ is the constant in Lemma $\ref{l3-2}$ and $c(\kappa)$ is the constant in Proposition
$\ref{l3-4}$.
For the constant $K$ in {\bf (A4)}, let \begin{equation*}
\begin{split}
\Phi(R):=&\inf_{x \in \R^d: |x|\ge R, V(x)>K}V(x),\\ \Theta(R):=&
\big|\{x \in \R^d: |x|\ge R, V(x)\le K\}\big|,\\
\Psi(R):=&\frac{1}{\Phi(R)}+c_0(\kappa)\Theta(R)^{{\alpha_1}/{d}},\end{split}
\end{equation*}where $c_0(\kappa)$ is a constant in \eqref{e3-3}.
We furthermore define
\begin{equation}\label{t3-1-3a}
\begin{split}\beta_\Phi(s):=&\left[1+\alpha\left(
\Phi^{-1}\left(\frac{2}{s}\right), \frac{s}{2}\right)\right],\\
\beta_\Psi(s):=&\left[1+\alpha\left( \Psi^{-1}\left(\frac{s}{4}\right),
\frac{s}{4}\right)\right],\\
\gamma(s):=&\Theta\left(\Phi^{-1}\left(\frac{2}{s}\right)\right).
\end{split}\end{equation}
\begin{enumerate}
\item [(1)]
If $\lim_{|x|\to\infty}V(x)=\infty$, and
\begin{equation*}
\int_t^{\infty}\frac{\beta_\Phi^{-1}(s)}{s}\,ds<\infty,\quad t\gg1,
\end{equation*}
then $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
\item[(2)] If $d>\alpha_1$ and
\begin{equation*}
\int_t^{\infty}\frac{\beta_\Psi^{-1}(s)}{s}\,ds<\infty,\quad t\gg1,
\end{equation*}
then $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
\item [(3)] Suppose that there exists a constant $\delta>1$ such that
\begin{equation}\label{t3-1-4}
\sum_{n=1}^{\infty}\gamma(s_n)\delta^n<\infty
\end{equation}
and
\begin{equation}\label{t3-1-5}
\int_t^{\infty}\frac{\tilde \beta^{-1}_\Phi(s)}{s}\,ds<\infty,\quad t\gg1,
\end{equation}
where $s_n:=\beta_\Phi^{-1}(\frac{c_1 \delta^n}{2})$ with
$c_1:=\|\phi_1\|_{\infty}^2$, and
\begin{equation}\label{t3-1-5a}
\tilde
\beta_\Phi(s):=2\beta_\Phi\left(\gamma^{-1}\left(\frac{1}{4\delta^{n_0(s)+1}}\right)\right)
\end{equation}
with
\begin{equation*}
\begin{split}
n_0(s):=\inf\Bigg\{N \ge1:\frac{4\delta(\sqrt{\delta}+1)s_{N}}{\sqrt{\delta}-1}
+2\gamma^{-1}\left(\frac{1}{4\delta^{N+1}}\right)\le s\Bigg\}.
\end{split}
\end{equation*}
Then $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
\end{enumerate}
\end{theorem}
\section{Proofs of Theorems and Examples}\label{section4}
In this section, we give the proofs of all the statements in Section \ref{section1}. First, we present the
\begin{proof}[Proof of Theorem $\ref{thm2}$]
(1) It is clear that $$\lim_{|x|\to\infty}V(x)=\infty.$$ Let
$\varepsilon\in (0,\min({1}/{11},\varepsilon_0))$. Since
$$V(x)\le c_4|x|^{\theta_3}\log^{\theta_4}(1+|x|),\quad x\in\R^d,$$ we have the following estimate for the function
$\varphi$ given by (\ref{t3-1-1})
\begin{equation*}
\varphi(x)\ge \exp\Big(-\frac{\theta_3}{\kappa(1-7\varepsilon)}(1+|x|)\log(1+|x|)-
C(\kappa,\varepsilon,\theta_3,\theta_4)\Big).
\end{equation*}
On the other hand, since $$V(x)\ge
c_3|x|^{\theta_1}\log^{\theta_2}(1+|x|),\quad x\in\R^d,$$ we obtain
\begin{equation*}\label{t1-3}
\begin{split}
\Phi(r)\ge c_3r^{\theta_1}\log^{\theta_2}(1+r).
\end{split}
\end{equation*}
Therefore, the rate function
$\beta_\Phi(r)$ defined by (\ref{t3-1-3a}) satisfies that for $s>0$ small enough
\begin{equation*}
\beta_\Phi(s)\le C(\kappa, \varepsilon)\exp\left\{C(\kappa,
\varepsilon,\theta_3,\theta_4)
\Big(1+s^{-\frac{1}{\theta_1}}\log^{1-\frac{\theta_2}{\theta_1}}\big(1+s^{-1}\big)\Big)\right\}.
\end{equation*}
In particular, $$\beta_\Phi^{-1}(r)\le \frac{C}{\log^{\theta_1}(1+
r)\log^{\theta_2-\theta_1}\log(e+r)},\quad r>0\textrm{ large enough}.$$
Then, if $\theta_1=1$ and $\theta_2>2$ or if $\theta_1>1$, the intrinsic ultracontractivity of
$(T_t^V)_{t \ge 0}$ immediately follows from
Theorem \ref{t3-1}(1).

(2)
The required lower bound for
the ground state $\phi_1$ immediately follows from \eqref{bbb}.
Next, we will verify the upper bound. If $\theta_1=1$ and $\theta_2>2$ or if $\theta_1>1$,
then the semigroup $(T_t^V)_{t\ge0}$ is intrinsically ultracontractive. For any $0<\lambda<\theta_1$, let
\begin{equation*}
\psi(x):=\exp\Big(-\frac{\lambda}{2\kappa} \sqrt{1+|x|^2}\log(1+|x|^2)\Big).
\end{equation*}
Suppose that assumption {\bf (A4)} holds and for any $x,y\in\R^d$ with $|x-y|>\kappa$, $J(x,y)=0$.
Then, the generator $L^V$ of the associated Feynman-Kac semigroup
$(T^V_t)_{t\ge0}$ enjoys the expression
(\ref{ope11}). By the approximation argument, it is easy to verify that $\psi \in \D(L^V)$. For
$|x|$ large enough, we obtain by the mean value theorem that
\begin{equation*}
\begin{split}
L^V \psi(x)&\le C_1 \sup_{z\in B(x,\kappa)}\Big(
\big|\nabla \psi(z)\big|+\big|\nabla^2 \psi(z)\big|\Big)-V(x)\psi(x)\\
&\le C_2(\kappa, \lambda)\log^2(1+|x|)
\cdot \exp\Big(-\frac{\lambda}{\kappa} (|x|-\kappa)\log\big(1+(|x|-\kappa)\big)-C_3(\kappa,\lambda)\Big)\\
&\quad -c_3(1+|x|)^{\theta_1}\log^{\theta_2}(1+|x|)\psi(x)\\
&\le C_4(\kappa, \lambda) (1+|x|)^{\lambda}\log^2(1+|x|)\psi(x)-c_3(1+|x|)^{\theta_1}\log^{\theta_2}(1+|x|)\psi(x).
\end{split}
\end{equation*}
Since $0<\lambda<\theta_1$, $$L^V \psi(x)\le 0$$ for $|x|$ large
enough. Since the function $x\mapsto L^V \psi(x)$ is locally
bounded, we know from the inequality above that (\ref{t2-1-0}) holds
with some constant $\lambda>0$. Therefore, the required upper bound
for $\phi_1$ follows from Proposition \ref{pro--00}.
\end{proof}
Indeed, according to Theorem \ref{t3-1}(1), we also have the following statements. The proofs are similar to that of Theorem \ref{thm2}, and so we omit them here.
\begin{proposition}\label{pro} Suppose that \eqref{e3-1}, \eqref{e3-2}, assumptions {\bf(A1)} and {\bf (A2)} hold. Then, we have the following two assertions.
\begin{itemize}
\item[(1)] If there are positive constants $c_5$, $c_6$, $\theta_5$, $\theta_6$ with $\theta_5>\theta_6+1$ such that for all $x\in\R^d$,
$$ c_5(1+|x|^{\theta_5}) \le V(x)\le e^{c_6(1+|x|^{\theta_6})},$$
then $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive, and for any $\varepsilon>0$ there is a constant $C_3:=C_3(\varepsilon)>0$ such that for all $x\in\R^d$,
$$C_3\exp\Big(-\frac{(1+\varepsilon)}{\kappa} |x|^{\theta_6+1}\Big)\le
\phi_1(x).$$
If, moreover, {\bf (A3)} also holds and
$$J(x,y)=0,\quad x,y\in\R^d\textrm{ with } |x-y|>\kappa,$$ then for any $\varepsilon>0$, there exists a constant
$C_4:=C_4(\varepsilon)>0$ such that for all $x\in\R^d$,
$$
\phi_1(x) \le C_4\exp\Big(-\frac{(1-\varepsilon) \theta_5}{\kappa}
|x|\log(1+|x|)\Big).
$$
\item [(2)] If there are positive constants $c_7$, $c_8$, $\theta_7$, $\theta_8$ with $\theta_7\le \theta_8$ such that for all $x\in\R^d$,
$$e^{c_7(1+|x|^{\theta_7})} \le V(x)\le e^{c_8(1+|x|^{\theta_8})},$$
then $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive, and for any $\varepsilon>0$ there is a constant $C_5:=C_5(\varepsilon)>0$ such that for all $x\in\R^d$,
$$C_5\exp\Big(-\frac{c_8(1+\varepsilon)}{\kappa}
|x|^{\theta_8+1}\Big)\le\phi_1(x).$$
If, moreover, {\bf (A3)} also holds and
$$J(x,y)=0,\quad x,y\in\R^d\textrm{ with } |x-y|>\kappa,$$ then for any $\frac{\theta_7}{\theta_7+1}<\varepsilon<1$, there exists a constant
$C_6:=C_6(\varepsilon)>0$ such that for all $x\in\R^d$,
$$
\phi_1(x)\le
C_6\exp\Big(-\frac{c_7(1-\varepsilon)}{\kappa} |x|^{\theta_7+1}\Big).
$$
\end{itemize}
\end{proposition}
\ \
Next, we turn to the
\begin{proof}[Proof of Example $\ref{ex2}$]
(1) According to Theorem \ref{thm2}, we know
that $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive if $V(x)=|x|^{\theta}$ for some $\theta>1$. Now we are going to
verify that
if
$V(x)=|x|^{\theta}$ for some $0<\theta\le 1$, then $(T_t^V)_{t \ge 0}$ is not intrinsically ultracontractive.
We mainly use the method of \cite[Theorem 1.6]{KS} (see \cite[pp. 5055-5056]{KS})
and disprove \cite[Condition 1.3, p.\ 5027]{KS}.
In fact, according to \cite[Condition 1.3, p.\ 5027]{KS}, if $(T_t^V)_{t \ge 0}$
is intrinsically ultracontractive, then for
every fixed $t \in (0,1]$, there exists a constant $C_t>0$ such that
\begin{equation}\label{ex2-0}
T_t^V(\I_D)(x)\ge C_t T_t^V(\I_{B(x,1)})(x).
\end{equation}
Let $p(t,x,y)$ be the heat kernel for the associated process $(X_t)_{t\ge0}$. According to \cite[(1.16) in Theorem 1.2 and (1.20) in Theorem 1.4]{CKK1}, for
any fixed $t\in(0,1]$ and $|x-y|$ large enough,
\begin{equation*}
p(t,x,y)\le C_1t\exp\bigg(-C_2|x-y|\Big(\log\frac{|x-y|}{t}\Big)^{\frac{\gamma-1}{\gamma}}\bigg).
\end{equation*}
Set $D=B(0,1)$. For $|x|$ large enough,
\begin{equation*}
\begin{split}
& T_t^V(\I_D)(x)\le \int_D p(t,x,y)\,dy \le
C_3t\exp\Big(-C_2(|x|-1)\Big(\log\frac{|x|-1}{t}\Big)^{\frac{\gamma-1}{\gamma}}\Big).
\end{split}
\end{equation*}
On the other hand, for $|x|$ large enough,
\begin{equation*}
\begin{split}
T_t^V(\I_{B(x,1)})(x)&\ge
\Ee^x\Big(\tau_{B(x,1)}>t;
\exp\Big(-\int_0^t V(X_s)ds\Big)\Big)\\
&\ge C\Pp^x\big(\tau_{B(x,1)}>t\big)e^{-t|x|^{\theta}}\\
&\ge C\Pp^x\big(\tau_{B(x,1)}>1\big)e^{-t|x|^{\theta}}\\
&\ge C e^{-t|x|^{\theta}}.
\end{split}
\end{equation*}
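Indeed (a sketch of why these two bounds are incompatible): combining the two displays, for every fixed $t\in(0,1]$,
$$\frac{T_t^V(\I_D)(x)}{T_t^V(\I_{B(x,1)})(x)}\le \frac{C_3t}{C}\exp\bigg(-C_2(|x|-1)\Big(\log\frac{|x|-1}{t}\Big)^{\frac{\gamma-1}{\gamma}}+t|x|^{\theta}\bigg)\longrightarrow 0\quad \textrm{as }|x|\to\infty,$$
because $\theta\in(0,1]$ and $\gamma>1$ imply $t|x|^{\theta}\le t|x|=o\big(|x|\log^{\frac{\gamma-1}{\gamma}}|x|\big)$.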
Combining both conclusions above with the fact that $\theta\in(0,1]$, we
get that for any fixed $t\in(0,1]$, the inequality (\ref{ex2-0}) does not hold for
any constant $C_t>0$,
which contradicts \cite[Condition 1.3, p.\ 5027]{KS}. Hence, according to the remark below \cite[Condition 1.3, p.\ 5027]{KS}, the semigroup
$(T_t^V)_{t \ge 0}$ is not intrinsically ultracontractive.

(2) If $\gamma=\infty$ and $\theta>1$, then the ground state estimate (\ref{ex2-1}) immediately follows from
Theorem \ref{thm2}. When $1<\gamma<\infty$ and $\theta>1$, one can apply Proposition \ref{p3-1}
to get a lower bound estimate for $\phi_1$, which however is not optimal. Instead, we will adopt a slightly different argument
from that of
Proposition \ref{p3-1}, and will
derive a more accurate lower bound estimate, which is partly inspired by \cite[Theorem 5.4]{CKK1}.
For any $\lambda>0$, we choose a constant $$0<\varepsilon<\varepsilon_0\wedge\bigg(\frac{1}{2}\theta^{\frac{1}{\gamma}}
\Big(\big(1+\lambda)^{\frac{1}{\gamma}}-1\Big)\bigg), $$
where $\varepsilon_0>0$ is the same constant in
Lemma \ref{l3-2}. For every $x \in \R^d$ with $$|x|\ge e^{2^\gamma\theta(1+2\varepsilon)^\gamma}\vee(e-1)\quad\textrm{ and } \quad\theta^{-\frac{1}{\gamma}}|x|\log^{-\frac{1}{\gamma}}(1+|x|)\ge1,$$ let
$$n=\Big\lfloor\theta^{-\frac{1}{\gamma}}|x|\log^{-\frac{1}{\gamma}}(1+|x|)\Big\rfloor+1,$$
and $x_i:={i}x/n$ for any $0\le i\le n$, where $\lfloor x\rfloor$ denotes the largest integer less than or equal to $x$. Next, for all $0\le i\le n$,
we set $D_i:=B(x_i, \varepsilon)$ and $\tilde D_i:=B(x_i, {\varepsilon}/{2})$.
Since
\begin{equation}\label{ex2-3}
\theta^{-\frac{1}{\gamma}}|x|\log^{-\frac{1}{\gamma}}(1+|x|)\le n \le
\theta^{-\frac{1}{\gamma}}|x|\log^{-\frac{1}{\gamma}}(1+|x|)+1,
\end{equation}
we can check that for each $0 \le i \le n-1$,
$$1\le \frac{1}{2}\theta^{\frac{1}{\gamma}}\log^{\frac{1}{\gamma}}(1+|x|)-
2\varepsilon\le\frac{1}{\theta^{-\frac{1}{\gamma}}\log^{-\frac{1}{\gamma}}(1+|x|)+1}-2\varepsilon\le \mathrm{dist}(D_i,D_{i+1})$$
and for every $z_i \in D_i$ and $z_{i+1}\in D_{i+1}$,
$$|z_i-z_{i+1}|\le \frac{|x|}{n}+2\varepsilon\le {\theta^{\frac{1}{\gamma}}\log^{\frac{1}{\gamma}}(1+|x|)}+2\varepsilon\le \Big((1+\lambda)\theta\log(1+|x|)\Big)^{\frac{1}{\gamma}}.$$
In the following, we define for all $n\ge1$
\begin{equation*}
\begin{split}
\tilde{\tau}_{D_i}:&=\inf\{t\ge \tilde{\tau}_{D_{i+1}}: X_t\notin D_i\},\quad 1\le i\le n-1; \\
\tilde{\tau}_{D_n}:&={\tau}_{D_n}.
\end{split}
\end{equation*}
By convention, we also set $\tilde{\tau}_{D_{n+1}}=0$.
Let $T(1,\varepsilon)$ be the constant in Lemma \ref{l3-2}
with $\kappa=1$.
First, if $X_{\tilde \tau_{D_{i}}} \in \tilde D_{i-1}$, then we have for each $i\ge2$, $j\ge1$ and
$t_0=T(1,\varepsilon)$,
\begin{align*}
& \Pp^{X_{\tilde \tau_{D_{i}}}}\Big(\frac{t_0}{(j+1)n}\le \tau_{D_{i-1}}<\frac{t_0}{jn},X_{\tau_{D_{i-1}}}\in
\tilde D_{i-2}\Big)\\
&\ge \inf_{x \in \tilde D_{i-1}}\Pp^x\Big(\frac{t_0}{(j+1)n}\le \tau_{D_{i-1}}<\frac{t_0}{jn},X_{\tau_{D_{i-1}}}\in
\tilde D_{i-2}\Big)\\
&=\inf_{x \in \tilde D_{i-1}}\int_{\frac{t_0}{(j+1)n}}^{\frac{t_0}{jn}}\int_{D_{i-1}}
p_{D_{i-1}}(s,x,y)\int_{\tilde D_{i-2}}J(y,z)\,dz\,dy\,ds\\
&\ge C\inf_{x \in \tilde D_{i-1}}\int_{\frac{t_0}{(j+1)n}}^{\frac{t_0}{jn}}\int_{D_{i-1}}
p_{D_{i-1}}(s,x,y)\int_{\tilde D_{i-2}}e^{-|y-z|^{\gamma}}\,dz\,dy\,ds\\
&\ge C \inf_{x \in \tilde D_{i-1}}\int_{\frac{t_0}{(j+1)n}}^{\frac{t_0}{jn}}
\Pp^x\big(\tau_{D_{i-1}}>s\big)\,ds \, e^{-(1+\lambda)\big(\theta\log(1+|x|)\big)}\\
&\ge \frac{Ct_0}{j(j+1)n(1+|x|)^{(1+\lambda)\theta}}\inf_{x \in \tilde D_{i-1}}\Pp^x\Big(\tau_{D_{i-1}}>\frac{t_0}{jn}\Big)\\
&\ge \frac{Ct_0}{j(j+1)n(1+|x|)^{(1+\lambda)\theta}}\inf_{x \in \tilde D_{i-1}}\Pp^x\Big(\tau_{B(x,\varepsilon/2)}>t_0\Big)\\
&\ge \frac{C}{j(j+1)n(1+|x|)^{(1+\lambda)\theta}},
\end{align*}
where the equality above is due to \eqref{l3-2-2}, in the third inequality we have used the fact that $|y-z|\le
\big((1+\lambda)\theta\log|x|\big)^{{1}/{\gamma}}$ for $y \in D_{i-1}$ and $z \in \tilde D_{i-2}$, and the last inequality follows from Lemma \ref{l3-1}.
Hence, if $X_{\tilde \tau_{D_{i}}} \in \tilde D_{i-1}$, then we have for all $i\ge2$,
\begin{align*}
& \Ee^{X_{\tilde \tau_{D_{i}}}}\Big(0<\tau_{D_{i-1}}<\frac{t_0}{n},X_{\tau_{D_{i-1}}}\in
\tilde D_{i-2}; \exp\Big(-\int^{\tau_{D_{i-1}}}_{0} V(X_s)\,ds\Big)\Big)\\
&\ge \sum_{j=1}^{\infty}\Ee^{X_{\tilde \tau_{D_{i}}}}\Big(\frac{t_0}{(j+1)n}\le \tau_{D_{i-1}}<\frac{t_0}{jn},X_{\tau_{D_{i-1}}}\in
\tilde D_{i-2}; \exp\Big(-\int^{\tau_{D_{i-1}}}_{0} V(X_s)\,ds\Big)\Big)\\
& \ge \sum_{j=1}^{\infty}\exp\Big(-\frac{t_0}{jn}\sup_{x \in D_{i-1}}V(x)\Big)\inf_{y \in \tilde D_{i-1}}
\Pp^y\Big(\frac{t_0}{(j+1)n}\le\tau_{D_{i-1}}<\frac{t_0}{jn}, X_{\tau_{D_{i-1}}}\in
\tilde D_{i-2}\Big)\\
&\ge \frac{C}{n(1+|x|)^{(1+\lambda)\theta}}\sum_{j=1}^{\infty}\frac{1}{j(j+1)}\exp\Big(-\frac{t_0}{jn}\sup_{x \in D_{i-1}}V(x)\Big)\\
&\ge \frac{C}{(1+|x|)^{(1+\lambda)\theta}\big(n+\sup_{x \in D_{i-1}}V(x)\big)},
\end{align*}
where in the last inequality we have used (\ref{rrr1}).
Furthermore, we find that (\ref{l3-3-2}) and (\ref{l3-3-3}) are still valid here. Therefore, combining all the estimates
above, we obtain that for $|x|$ large enough,
\begin{equation*}
\begin{split}
T_{t_0}^V(\I_D)(x)& \ge C_4 \prod_{i=1}^n\Big(\frac{C_5}{|x|^{(1+\lambda)\theta}\big(n+\sup_{z \in D_{i-1}}V(z)\big)}\Big)\\
&\ge C_4\Big(\frac{C_5}{|x|^{(1+\lambda)\theta}\big(n+\sup_{|z|\le |x|+\varepsilon}V(z)\big)}\Big)^n\\
&\ge \exp\Big(-(1+\lambda)|x|\big(\theta\log|x|\big)^{-\frac{1}{\gamma}}\log(1+|x|^{(2+\lambda)\theta})-C(\theta)\Big)\\
&\ge \exp\Big(-(1+\lambda)(2+\lambda)\theta^{\frac{\gamma-1}{\gamma}}|x|\big(\log|x|\big)^{\frac{\gamma-1}{\gamma}}-C(\theta)(1+|x|)\Big),
\end{split}
\end{equation*}
where in the third inequality we have used the property (\ref{ex2-3}).
Hence, following the same argument as that of Proposition \ref{p3-1}, we finally arrive at
\begin{equation*}
\begin{split}
\phi_1(x)\ge \exp\Big(-(1+\lambda)(2+\lambda)\theta^{\frac{\gamma-1}{\gamma}}|x|\log^{\frac{\gamma-1}{\gamma}}(1+|x|)-
C(\theta)(1+|x|)\Big).
\end{split}
\end{equation*}
In particular, by taking $\lambda>0$ small enough in the inequality above, we indeed can get the lower bound estimate in (\ref{ex2-2})
with any constant $c_4>2$.

(3) Let
\begin{equation*}
\psi(x):=\exp\Big(-(c_0\theta)^{\frac{\gamma-1}{\gamma}}\sqrt{1+|x|^2}\log^{\frac{\gamma-1}{\gamma}}
\sqrt{1+|x|^2}\Big),
\end{equation*}
where $c_0>0$ is a constant to be determined later.
Under assumption {\bf (A3)}, by the approximation argument again it is easy to verify that $\psi \in
\D(L^V)$, and then we know from \eqref{ope11} that
\begin{align*}
L^V \psi(x)&=\int_{\R^d} \Big(\psi(x+z)-\psi(x)-\langle \nabla \psi(x),z \rangle \I_{\{|z|\le 1\}}\Big)J(x,x+z)\,dz\\
&+\frac{1}{2}\int_{\{|z|\le 1\}}\langle\nabla \psi(x),
z\rangle\left(J(x,x+z)-J(x,x-z)\right)\,dz -V(x)\psi(x)\\
&\le c_1\sup_{z \in B(x,1)}\big(|\nabla^2 \psi(x)|+|\nabla \psi(x)|\big)\\
&+
\int_{\{|z|>1\}} \big(\psi(x+z)-\psi(x)\big)J(x,x+z)\,dz-V(x)\psi(x)\\
&=:I_1(x)+I_2(x)-V(x)\psi(x).
\end{align*}
According to the proof of Theorem \ref{thm2} and the mean value theorem, for $|x|$ large enough,
\begin{equation*}
\begin{split}
I_1(x)& \le C(\theta)\log^{\frac{2(\gamma-1)}{\gamma}}(1+|x|)\exp\Big(-(c_0\theta)^{\frac{\gamma-1}{\gamma}}(|x|-1)
\log^{\frac{\gamma-1}{\gamma}}(|x|-1)\Big)\\
&\le C(\theta)\exp\Big((c_0\theta)^{\frac{\gamma-1}{\gamma}}\log^{\frac{\gamma-1}{\gamma}}(1+|x|)\Big)
\log^{\frac{2(\gamma-1)}{\gamma}}(1+|x|) \psi(x).
\end{split}
\end{equation*}
On the other hand,
\begin{equation*}
I_2(x)\le \int_{\{|z|>1\}} \psi(x+z)J(x,x+z)\,dz \le
c_2\int_{\{|z|>1\}} \psi(x+z)e^{-|z|^{\gamma}}\,dz.
\end{equation*}
For each $i\ge 1$, set $A_i:=B(0,i+1)\setminus B(0,i)$. Since
for every $x \in \R^d$ and $z \in A_i$, $$|x|-i-1\le |x+z| \le |x|+i+1,$$ we get that for any
$0<\lambda<1$ and any $x\in\R^d$ with $|x|$ large enough,
\begin{equation*}
\begin{split}
\int_{\{|z|>1\}} &\psi(x+z)e^{-|z|^{\gamma}}\,dz\\
&=
\sum_{i=1}^{\infty} \int_{A_i}\psi(x+z)e^{-|z|^{\gamma}}\,dz\\
& { \le c_3 \sum_{i=2}^{\infty}i^{d-1}e^{-(i-1)^{\gamma}}\sup_{|x|-i-1\le |z| \le |x|+i+1}
|\psi(z)|}\\
&{ \le c_4\psi(x)\sum_{i=2}^{\infty}i^{d-1}\exp\Big[\big((1+\lambda)c_0\theta
\log|x|\big)^{\frac{\gamma-1}{\gamma}}i-(i-1)^{\gamma}\Big],}
\end{split}
\end{equation*}
where in the first inequality we have used the fact that $|A_i|\le Ci^{d-1}$, and the second inequality follows from the fact that
for $|x|$ large enough $$\sup_{|z|\ge |x|-i-1}
|\psi(z)|\le \exp\Big[\big((1+\lambda)c_0\theta\log|x|\big)^{\frac{\gamma-1}{\gamma}}i\Big]\psi(x),$$ thanks again to the mean value theorem.
Furthermore, set $N(x):=\Big\lfloor
(6(1+\lambda)c_0\theta\log|x|)^{\frac{1}{\gamma}}\Big\rfloor+1$. Noticing that
for every $i>N(x)$,
$$\big((1+ \lambda)c_0\theta\log|x|\big)^{\frac{\gamma-1}{\gamma}}i-(i-1)^{\gamma}\le -\frac{i^{\gamma}}{2},$$
we have
\begin{align*}
&\sum_{i=2}^{\infty}i^{d-1}\exp\left[\left(\left(1+
\lambda\right)c_0\theta\log|x|\right)^{\frac{\gamma-1}{\gamma}}i-(i-1)^{\gamma}
\right]\\
&=\sum_{i=2}^{N(x)}i^{d-1}\exp\left[\left(\left(1+ \lambda\right)c_0\theta\log|x|\right)^{\frac{\gamma-1}{\gamma}}i-
(i-1)^{\gamma}
\right]\\
&\quad +\sum_{i=N(x)+1}^{\infty}i^{d-1}\exp\left[\left(\left(1+ \lambda\right)c_0\theta\log|x|\right)^{\frac{\gamma-1}{\gamma}}i-(i-1)^{\gamma}
\right]\\
&\le N(x)^d\exp\left[\sup_{s \in \R}
\left(\left(\left(1+\lambda\right)c_0\theta\log|x|\right)^{\frac{\gamma-1}{\gamma}}s-(s-1)^{\gamma}\right)\right]+
\sum_{i=N(x)+1}^{\infty}i^{d-1}e^{-\frac{i^{\gamma}}{2}}\\
&\le C(\theta)
(\theta\log|x|)^{\frac{d}{\gamma}}\exp\left((1+2\lambda)c_0\theta\left(\left(\frac{1}{\gamma}\right)^{\frac{1}{\gamma-1}}
-\left(\frac{1}{\gamma}\right)^{\frac{\gamma}{\gamma-1}}\right)\log|x|\right)\\
&\quad+ \exp\bigg[{-\frac{(1+\lambda)c_0\theta }{4}}\log|x|\bigg] ,
\end{align*}
where in the last inequality we have used the facts that for $|x|$ large enough,
{ \begin{equation*}
\begin{split}
& \sup_{s \in \R}
\left\{\left(\left(1+ \lambda\right)c_0\theta\log|x|\right)^{\frac{\gamma-1}{\gamma}}s-(s-1)^{\gamma}\right\}\\
&\le
(1+2\lambda)c_0\theta\left(\left(\frac{1}{\gamma}\right)^{\frac{1}{\gamma-1}}
-\left(\frac{1}{\gamma}\right)^{\frac{\gamma}{\gamma-1}}\right)\log|x|
\end{split}
\end{equation*}
}
and $$\sum_{i=n}^{\infty}i^{d-1}e^{-\frac{i^{\gamma}}{2}} \le e^{-\frac{n^{\gamma}}{4}}$$ for $n$ large enough.
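For the reader's convenience, the first of these two facts can be checked by elementary calculus (a sketch): set $A:=\big((1+\lambda)c_0\theta\log|x|\big)^{\frac{\gamma-1}{\gamma}}$. The function $s\mapsto As-(s-1)^{\gamma}$, $s\ge1$, attains its maximum at $s^*=1+(A/\gamma)^{\frac{1}{\gamma-1}}$, with maximal value
$$As^*-(s^*-1)^{\gamma}=A+A^{\frac{\gamma}{\gamma-1}}\left(\left(\frac{1}{\gamma}\right)^{\frac{1}{\gamma-1}}
-\left(\frac{1}{\gamma}\right)^{\frac{\gamma}{\gamma-1}}\right).$$
Since $A^{\frac{\gamma}{\gamma-1}}=(1+\lambda)c_0\theta\log|x|$ and $A=o(\log|x|)$ as $|x|\to\infty$, the bound with the slightly larger factor $(1+2\lambda)$ follows for $|x|$ large enough.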
Combining all the estimates above and taking $$c_0=
\frac{1}{2(1+2\lambda)
\left(\left(\frac{1}{\gamma}\right)^{\frac{1}{\gamma-1}}
-\left(\frac{1}{\gamma}\right)^{\frac{\gamma}{\gamma-1}}\right)},$$ we derive that
for $|x|$ large enough,
\begin{equation*}
\begin{split}
L^V \psi(x)&\le C(\theta)|x|^{\frac{2}{3}\theta}\psi(x)-V(x)\psi(x)\le 0,
\end{split}
\end{equation*}
which implies that
\begin{equation*}
L^V \psi(x) \le C(\theta) \psi(x), \ \ x \in \R^d
\end{equation*}
for some constant $C(\theta)>0$. Therefore, according to Proposition \ref{pro--00}, we can obtain the desired upper bound estimate for $\phi_1$ as that in
(\ref{ex2-2}).
\end{proof}
\ \
Finally, we turn to the proofs of Theorem $\ref{thm3}$ and Example \ref{ex3}.
\begin{proof}[Proof of Theorem $\ref{thm3}$]
(1) Following the argument of Theorem \ref{thm2}, for $R$, $r>0$ large
enough
$$
\Phi(R)\ge { C}R\log^{\theta_1}R$$ and
$$ \beta_\Phi^{-1}(r)\le \frac{C}{\log(1+
r)\log^{\theta_1-1}\log(e+r)}.$$
Hence, for $s>0$ small enough,
\begin{equation*}
\gamma(s)=\Theta\left(\Phi^{-1}\left(\frac{2}{s}\right)\right)
\le C_1\exp\left(-{ {C_2}c_6}\left(\frac{1}{s}\right)^{{\eta_1}}
\left(\log \frac{1}{s}\right)^{{\eta_2-\eta_1\theta_1}}\right).
\end{equation*}
Then, for any fixed $\delta>e$, there is an integer $N_0(\delta)\ge1$ such that for all
$n\ge N_0(\delta)$,
\begin{equation*}
\begin{split}
& s_n:=\beta_\Phi^{-1}\left(c\delta^n\right)\le C_3(\log\delta)^{-1} n^{-1}
(\log n)^{-(\theta_1-1)},
\end{split}
\end{equation*}
where $C_3>0$ is independent of $\delta$. Therefore, for $n$ large enough,
\begin{equation}\label{thm3-1}
\gamma(s_n)\le C\exp\left[-c_6C_4\left(\log \delta\right)^{\eta_1}
n^{\eta_1}
\left(\log n\right)^{\eta_2-\eta_1}\right],
\end{equation}
where {$C_4$ } is a constant independent of $\delta$.
According to (\ref{thm3-1}), if
$\eta_1=1$ and $\eta_2>1$, then (\ref{t3-1-4}) holds true, and we have the following estimate for the rate function
$\tilde \beta_\Phi(s)$ defined by (\ref{t3-1-5a})
\begin{equation}\label{thm3-2}
\tilde \beta_\Phi(s)\le C\exp\left[C\Big(1+\frac{1}{s}\Big)\left(\log^{1-{\theta_1}}\Big(1+ \frac{1}{s}\Big)\right)\right],
\end{equation}
which implies that (\ref{t3-1-5}) is satisfied. Therefore, according to
Theorem \ref{t3-1}(3), we know that the semigroup $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive.
On the other hand, it follows from (\ref{thm3-1}) that, when $\eta_1=\eta_2=1$ and {
$c_6>\frac{1}{C_4}$,} (\ref{t3-1-4}) holds true. Then, following the arguments above, we can get the same estimate (\ref{thm3-2}) for
$\tilde \beta_\Phi(s)$ (possibly with different constant $C$ in (\ref{thm3-2})), and
so the semigroup $(T_t^V)_{t \ge 0}$ is intrinsically ultracontractive. The proof of the first assertion is complete.

(2) When $d>\alpha_1$, there exists a constant $C>0$ such that
for $R$ large enough
\begin{equation*}
\Psi(R)\le C\left(\frac{1}{R\log^{\theta_1}R}+\frac{1}{R\log^{{\alpha_1\eta_3}/{d}}R}\right).
\end{equation*}
Then, following the same argument of part (1), we can arrive at the second conclusion by Theorem \ref{t3-1}(2).
\end{proof}
\begin{proof}[Proof of Example $\ref{ex3}$]
Here we still try to disprove \cite[Condition 1.3, p.\ 5027]{KS}. Let
$D=B(0,1)$ and $t=1$. According to the proof of Example \ref{ex2}, for all $n$ large enough,
\begin{equation}\label{ex3-1}
T_1^V(\I_D)(x_n)\le C_1\exp\Big(-C_2|x_n|\log|x_n|\Big)=
C_1\exp\Big(-C_3n^{k_0}\log n\Big).
\end{equation}
On the other hand, for $n$ large enough
\begin{equation*}
\begin{split}
T_1^V(\I_{B(x_{n},1)})(x_{n})&\ge T_1^V(\I_{B(x_{n},r_{n})})(x_{n})\\
& \ge \Ee^{x_{n}}\Big(\tau_{B(x_{n},r_{n})}>1; \exp\Big(-\int_0^1 V(X_s)ds\Big)\Big)\\
& =e^{-1}\Pp^{x_{n}}(\tau_{B(x_{n},r_{n})}>1)\\
&=e^{-1}\Pp^{0}(\tau_{B(0,r_n)}>1),
\end{split}
\end{equation*}
where the first equality follows from the fact that $V(z)=1$ for
every $z \in B_n$, and in the last equality we have used the space-homogeneous property of the truncated symmetric $\alpha$-stable process.
Let $(X_t)_{t\ge0}$ be the truncated symmetric $\alpha$-stable process. By the Meyer construction for the truncated $\alpha$-stable process (see \cite[Remark 3.5]{BBCK}), there corresponds a symmetric $\alpha$-stable process $(X_t^*)_{t\ge0}$ (on the same probability space), such that
$X_t=X_t^*$ for any $t\in(0, N_1^*)$, where
$$
N^*_1=\inf\big\{t\ge0: |\Delta X^*_t|>1\big\},$$
and $\Delta X^*_t:=X^*_t-X^*_{t-}$ denotes the jump of $(X^*_t)_{t\ge0}$ at time $t$.
In the following, let $$\tau^*_{B(0,r)}=\inf\big\{t>0:X_t^*\notin B(0,r)\big\}$$ be the first exit time from $B(0,r)$ of the process $(X_t^*)_{t\ge0}$.
Note that, under $\Pp^0$ the event
$\{X_t^* \in B(0,r),\,\forall\ t\in [0,1]\}$ implies that the process $(X_t^*)_{t\ge0}$ does not have any jump bigger than $1$.
Then we find that there are constants $C_4, \lambda^*_1>0$ such that for all $n\ge1$ large enough, \begin{equation*}
\begin{split}
\Pp^{0}(\tau_{B(0,r_{n})}>1)&\ge \Pp^{0}(\tau^*_{B(0,r_{n})}>1)\\
&=\Pp^{0}(\tau^*_{B(0,1)}>r_{n}^{-\alpha})\\
&=\int_{B(0,1)} p^*_{B(0,1)}(r_n^{-\alpha},0,z)\,dz\\
&\ge C_4 e^{-\lambda^*_1 r^{-\alpha}_{n}}.
\end{split}
\end{equation*}
Here in the first equality we have used the scaling property of the symmetric $\alpha$-stable process, in the second equality $p^*_{B(0,1)}(t,x,y)$ denotes the Dirichlet heat kernel of the process $(X_t^*)_{t\ge0}$ on $B(0,1)$, and the last inequality follows from the lower bound of $p^*_{B(0,1)}(t,x,y)$ established in \cite[Theorem 1.1(ii)]{CKS}.
Hence, for $n$ large enough,
\begin{equation}\label{ex3-2}
\begin{split}
T_1^V(\I_{B(x_n,1)})(x_n)&\ge C_4 \exp\Big(-\lambda^*_1 n^{k_0-\frac{\alpha}{d}}\Big).
\end{split}
\end{equation}
According to (\ref{ex3-1}) and (\ref{ex3-2}) above,
we know that for any constant $C>0$, the following inequality
\begin{equation*}\label{ex3-3}
T_1^V(\I_{B(x,1)})(x)\le C T_1^V(\I_D)(x)
\end{equation*}
does not hold for $x=x_{n}$ when $n$ is large enough.
In particular,
\cite[Condition 1.3, p.\ 5027]{KS} is not satisfied, and so the semigroup $(T_t^V)_{t \ge 0}$ is not intrinsically ultracontractive.
However, for every $R\ge 2$ and $n\ge 1$ with $n^{k_0}\le R \le (n+1)^{k_0}$,
\begin{equation*}
\begin{split}
|\{x \in \R^d: x \in A, |x|\ge R\}|&\le
\sum_{m=n}^{\infty}|B(x_m,r_m)|=
\sum_{m=n}^{\infty}r_m^d\\
&= \sum_{m=n}^{\infty} m^{-\frac{dk_0}{\alpha}+1}
\le C_5n^{-\frac{dk_0}{\alpha}+2}\\
&\le C_6 \Big((n+1)^{k_0}\Big)^{-\frac{d}{\alpha}+\frac{2}{k_0}}\le
\frac{C_6}{ R^{\frac{d}{\alpha}-\varepsilon}}
\end{split}
\end{equation*}
holds for some constant $C_6$ independent of $R$, where
in the last inequality we have used the fact that $\frac{2}{k_0}<\varepsilon$. Therefore,
(\ref{ex3-0}) holds true, and the proof is finished.
\end{proof}
\section{Appendix}
In this appendix, we will present the proofs of Propositions \ref{p1-1}
and \ref{p1-2}.
\begin{proof}[Proof of Proposition $\ref{p1-1}$]
Let $(T_t)_{t\ge0}$ be the Markov semigroup associated with the regular
Dirichlet form $(D, \mathscr{D}(D))$. Under assumption {\bf (A1)},
for every $t>0$,
$$\|T_t\|_{L^1(\R^d;dx)\to L^\infty(\R^d; dx)}=\sup_{x,y \in \R^d}p(t,x,y)\le
c_t.$$
According to \cite[Theorem 3.3.15]{WBook}, the following super Poincar\'{e} inequality holds
\begin{equation*}\int f^2(x)\,dx\le rD(f,f)+\beta(r)\Big(\int |f(x)|\,dx\Big)^2,\quad r>0, f\in \mathscr{D}(D),\end{equation*} where
$$\beta(r)=\inf_{s\le r, t>0} \frac{s \|T_t\|_{L^1(\R^d;dx)\to L^\infty(\R^d; dx)}}{t} \exp\Big(\frac{t}{s}-1\Big)\le \|T_r\|_{L^1(\R^d;dx)\to L^\infty(\R^d; dx)}\le c_r.$$
Therefore, we can take the reference symmetric function $\mu$ in \cite[(1.2)]{WW08} to be Lebesgue measure.
Clearly, the potential function $V$ satisfies \cite[(1.3) and (1.5)]{WW08}. Then, the desired assertion immediately follows from \cite[Corollary 1.3]{WW08}.
\end{proof}
\begin{proof}[Proof of Proposition $\ref{p1-2}$]
For any $t>0$ and $x,y\in\R^d$, $p^V(t,x,y)\le p(t,x,y)\le c_t$, so
the semigroup $(T_t^V)_{t \ge 0}$ is ultracontractive. In
particular, by the symmetric property of $(T_t^V)_{t \ge 0}$ on
$L^2(\R^d;dx)$, we know that $\|T_t^V\|_{L^2(\R^d;dx)\to
L^\infty(\R^d; dx)}<\infty.$ This, along with $T_t^V
\phi_1=e^{-\lambda_1 t}\phi_1$ and $\phi_1 \in L^2(\R^d;dx)$, yields
that there is a version of $\phi_1$ which is bounded.
For any $R>0$, let $\phi_1^R(x):=e^{-\lambda_1 t}\int_{\{|y|\le R\}}
p^V(t,x,y)\phi_1(y)\,dy$. For any $y\in\R^d$ and $t>0$, the function
$x\mapsto p^V(t,x,y)$ is continuous and $p^V(t,x,y)\le p(t,x,y)\le
c_t$. According to the fact that $\phi_1$ is locally
$L^1(\R^d;dx)$-integrable and the dominated convergence theorem,
$\phi_1^R$ is also a continuous function. Now, for every fixed $x_0
\in \R^d$, let $\{x_n\}_{n=1}^{\infty}\subseteq \R^d$ be a sequence
such that $\lim_{n \rightarrow \infty}x_n=x_0$. Then we obtain
$$\aligned
&|\phi_1(x_n)-\phi_1(x_0)|\\
&=e^{-\lambda_1 t}
|T^V_t\phi_1(x_n)-T^V_t\phi_1(x_0)|\\
&=e^{-\lambda_1 t}\Big|\int_{\R^d}p^V(t,x_n,y)\phi_1(y)\,dy-
\int_{\R^d}p^V(t,x_0,y)\phi_1(y)\,dy\Big|\\
&\le |\phi_1^R(x_n)-\phi_1^R(x_0)|+
2\sup_{x \in \R^d}\Big(\int_{\R^d}\big(p^V(t,x,y)\big)^2\,dy\Big)^{{1}/{2}}
\Big(\int_{\{|y|>R\}}\phi_1^2(y)\,dy\Big)^{{1}/{2}}\\
&\le |\phi_1^R(x_n)-\phi_1^R(x_0)|+2\sqrt{c_t}\Big(\int_{\{|y|>R\}}\phi_1^2(y)\,dy\Big)^{{1}/{2}}.
\endaligned$$
Letting $n \rightarrow \infty$ and then $R \rightarrow \infty$, we
arrive at $\lim_{n \rightarrow \infty}\phi_1(x_n)=\phi_1(x_0)$.
Therefore there exists a version of $\phi_1$ which is continuous.
Let $(D^V, \mathscr{D}(D^V))$ be the Dirichlet form associated with $(T_t^V)_{t \ge 0}$. Due to the following variational principle
\begin{equation*}
\lambda_1=\inf\bigg\{\frac{D^V(f,f)}{\int_{\R^d} f^2(x)\,dx}: f \in
\D(D^V),\ f \neq 0\bigg\}= D^V(\phi_1,\phi_1),
\end{equation*}
and the fact $D^V(|\phi_1|,|\phi_1|)\le D^V(\phi_1,\phi_1)$, we know
that $\phi_1 \ge 0$. Now, assume that $\phi_1(x_0)=0$ for some $x_0
\in \R^d$. Since $p^V(t,x,y)>0$ for any $t>0$ and $x,y\in\R^d$, and
$$\phi_1(x_0)=e^{-\lambda_1
t}\int_{\R^d}p^V(t,x_0,y)\phi_1(y)\,dy=0,$$
we find by the continuity of $\phi_1$ that $\phi_1(x)=0$ for every $x \in \R^d$. This contradiction implies that
$\phi_1$ is positive everywhere. The proof is complete.
\end{proof}
\section{Introduction}\label{intro}
Toric manifolds were introduced in the pioneering paper \cite{DJ} by Davis and Januszkiewicz. These manifolds are topological generalizations of smooth projective toric varieties. The paper \cite{DJ} considered the standard action of the compact abelian torus $T^n$ on ${\bb{C}}^n$ as the local model to define toric manifolds. Briefly, a $2n$-dimensional smooth manifold with a locally standard $T^n$-action is called a toric manifold if the orbit space has the structure of an $n$-dimensional simple polytope.
Moreover, \cite[Section 7]{DJ} initiated the notion of toric orbifolds generalizing toric manifolds. These orbifolds are explicitly studied in \cite{PS} with the name `quasitoric orbifolds'. Examples of toric orbifolds include simplicial projective toric varieties.
On the other hand, contact toric manifolds are odd-dimensional analogues of symplectic toric manifolds with Hamiltonian torus actions, see \cite{BoGa}. Lerman provided a complete classification of compact connected contact toric manifolds in \cite[Theorem 2.18]{Lerman}. Motivated by the work of Davis and Januszkiewicz \cite{DJ}, Luo discussed the combinatorial construction of good contact toric manifolds and studied some of their topological properties in \cite{Luo-Thesis}.
Recently, the construction of toric manifolds in \cite{DJ} has been extended to introduce the notion of \emph{locally $k$-standard $T$-manifolds} in \cite{SaSo}, where $T$ is a compact abelian torus. The paper \cite{SaSo} considers the invariant subset ${\bb{C}}^{n}\times (S^1)^k$ of ${\bb{C}}^{n+k}$ with respect to the standard $T^{n+k}$-action, and takes the restricted $T^{n+k}$-action on ${\bb{C}}^{n}\times (S^1)^k$ as the local model to define a locally $k$-standard $T$-manifold. In particular, when $k=1$ we call this manifold a \emph{quasi-contact toric manifold}. Briefly, a $(2n+1)$-dimensional smooth compact $T^{n+1}$-manifold $N$ is called a quasi-contact toric manifold if each point of $N$ has an invariant neighbourhood which is diffeomorphic to an invariant open subset of ${\bb{C}}^{n}\times S^1$ and the orbit space ${N}/{T^{n+1}}$ has the structure of an $n$-dimensional simple polytope. The article \cite{SaSo} shows that the category of quasi-contact toric manifolds contains all good contact toric manifolds. We use the term `quasi-contact toric manifold' as the topological counterpart of `good contact toric manifold'.
Resolution of singularities is a widely used tool in algebraic geometry, see \cite{Kol, cut}. The idea of resolving singularities of curves can be traced back to Newton and Riemann. If $X$ is a singular toric variety then there is a resolution of singularities of $X$, see \cite[Chapter 2]{Ful}. In algebraic topology, \cite{GP} discusses the resolution of singularities of four-dimensional toric orbifolds. We note that there are infinitely many toric orbifolds which are not toric varieties. For example, an equivariant connected sum of two weighted projective spaces is a toric orbifold but not a toric variety. In this article, we discuss the resolution of singularities for arbitrary toric orbifolds. Our technique is different from the proof of the resolution of singularities of a singular toric variety.
Lev Pontryagin introduced the notion of cobordism in \cite{Pon47}. Cobordism theory studies an equivalence relation on compact manifolds of the same dimension, defined via boundaries of manifolds with boundary. Taking disjoint union as addition, one obtains an abelian group structure on these equivalence classes. A manifold that is the boundary of a compact manifold with boundary is called null-cobordant. The cobordism groups can be computed through homotopy theory using the Thom construction, see \cite{Tho}. The oriented, unoriented and complex cobordism rings are completely known.
The notion of equivariant cobordism was introduced in the early 1970's, see \cite{Hook73-1, Hook73, Peter}.
Partial results on the equivariant cobordism rings for some nontrivial groups can be found in \cite{Han}. However, these rings are not completely known for any nontrivial group. One of the key reasons is that the Thom transversality theorem does not hold in the equivariant category. Thus equivariant cobordism theory cannot be reduced to equivariant homotopy theory.
In this article, we study the equivariant cobordism of quasi-contact toric manifolds, good contact toric manifolds and generalized lens spaces.
The paper is organized as follows.
In Section \ref{sec_res_od_sing}, we recall the notion of a toric manifold and a toric orbifold following \cite{DJ}.
Then we recall the notion of blowup of a simple polytope as well as blowup of a toric orbifold following \cite{BSS}. We discuss the orbifold singularity on any face in the orbit space of a toric orbifold. Then we apply the techniques of blowup of a toric orbifold for resolution of singularity and prove the following result.
\begin{customthm}{A}[Theorem \ref{thm_res_sing}]
For a toric orbifold $M(P,\lambda)$, there exists a resolution of singularities $$M(P^{(d)},\lambda^{(d)})\to \dots \to M(P^{(1)},\lambda^{(1)})\to M(P,\lambda)$$ such that $M(P^{(d)},\lambda^{(d)})$ is a toric manifold, the toric orbifold $M(P^{(i)},\lambda^{(i)})$ is obtained by a blowup of $M(P^{(i-1)},\lambda^{(i-1)})$ for $i=1,2,\dots,d$ and the arrows $\rightarrow$ indicate the associated blowups.
\end{customthm}
In Section \ref{sec_tctm}, we recall the concept of quasi-contact toric manifolds and discuss some of their properties. The idea of the construction of these spaces is similar to the construction of toric manifolds introduced by Davis and Januszkiewicz \cite{DJ}.
Generalized lens spaces \cite{SS} and good contact toric manifolds are some well known examples of quasi-contact toric manifolds, see Example \ref{ex_gen_lens_sp} and Example \ref{ex_gd_ct_toric_mfd}.
Then we prove the following.
\begin{customthm}{B}[Theorem \ref{thm_zoro_cob}]
Let $N$ be a $(2n+1)$-dimensional quasi-contact toric manifold. Then there is a smooth oriented $T^{n+1}$-manifold with boundary $M$ such that $\partial{M}$ is equivariantly diffeomorphic to $N$.
\end{customthm}
As a consequence, one obtains that good contact toric manifolds and generalized lens spaces are equivariantly cobordant to zero, and hence all the Stiefel-Whitney numbers of these manifolds vanish. We note that a lens space is a contact toric manifold.
\section{Resolution of singularities of toric orbifolds}\label{sec_res_od_sing}
In this section, we discuss the constructive definition of toric manifolds and toric orbifolds following \cite{DJ}.
These spaces are even-dimensional orbifolds equipped with half-dimensional locally standard torus actions whose orbit spaces are simple polytopes. We also recall the notion of blowup of a toric orbifold along certain invariant suborbifolds. We discuss when an open convex bounded subset of ${\bb{R}}^k$ contains an integral point. We prove that there is a resolution of singularities of a toric orbifold.
The convex hull of finitely many points in ${\bb{R}}^n$ for some $n \in {\bb{Z}}_{\geq 1}$ is called a convex polytope. If each vertex (zero-dimensional face) of an $n$-dimensional convex polytope is the intersection of exactly $n$ facets (codimension-one faces), then the polytope is called a simple polytope. One can find the basic properties of simple polytopes in \cite{Zi, BP-book}. For a simple polytope $P$ we denote the vertex set of $P$ by $V(P):=\{b_1,\dots,b_m\}$ and the facet set by $\mathcal{F}(P):=\{F_1,\dots,F_r\}$ throughout this paper.
\begin{definition}
Let $P$ be an $n$-dimensional simple polytope and $\lambda \colon \mathcal{F}(P) \rightarrow \mathbb{Z}^{n}$ a function such that $\lambda(F_i)$ is primitive for $i \in \{1, \ldots, r\}$ and
\begin{equation}\label{Eq_lin ind vec}
\{\lambda(F_{i_1}),\dots,\lambda(F_{i_k})\}~ \text{is linearly independent if } \bigcap_{j=1}^k F_{i_j}\neq \varnothing.
\end{equation}
\noindent Then $\lambda$ is called an $\mathcal{R}$-{characteristic function} on $P$, and the pair $(P,\lambda)$ is called an $\mathcal{R}$-{characteristic pair}.
If the set $\{\lambda(F_{i_1}),\dots,\lambda(F_{i_k})\}$ in \eqref{Eq_lin ind vec} spans a $k$-dimensional unimodular submodule of ${\bb{Z}}^{n}$, then $\lambda$ is called a characteristic function. In this case, the pair $(P,\lambda)$ is called a characteristic pair.
\end{definition}
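The two conditions above can be tested mechanically: $k$ vectors in ${\bb{Z}}^n$ are linearly independent exactly when some $k\times k$ minor is non-zero, and they span a rank-$k$ unimodular submodule exactly when the gcd of all $k\times k$ minors is $1$. The following is a minimal illustrative sketch (the helper names are ours, not from the literature); the test vectors $(1,2,0)$, $(1,0,0)$ are $\mathcal{R}$-characteristic but not characteristic.

```python
from itertools import combinations
from math import gcd

def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def maximal_minors(vectors):
    """All k x k minors of the k x n integer matrix with rows `vectors`."""
    k, n = len(vectors), len(vectors[0])
    return [det([[v[c] for c in cols] for v in vectors])
            for cols in combinations(range(n), k)]

def is_R_characteristic_at(vectors):
    # linearly independent over Q  <=>  some maximal minor is non-zero
    return any(m != 0 for m in maximal_minors(vectors))

def is_characteristic_at(vectors):
    # spans a unimodular submodule of Z^n  <=>  gcd of maximal minors is 1
    g = 0
    for m in maximal_minors(vectors):
        g = gcd(g, abs(m))
    return g == 1

# vectors assigned to two facets meeting in an edge of a 3-polytope:
edge = [(1, 2, 0), (1, 0, 0)]
print(is_R_characteristic_at(edge), is_characteristic_at(edge))  # True False
```

Here the gcd of the $2\times 2$ minors of $\{(1,2,0),(1,0,0)\}$ is $2$, so the pair is linearly independent but does not span a unimodular submodule.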
Observe that a characteristic function is an $\mathcal{R}$-characteristic function. We denote $\lambda(F_i)$ by $\lambda_i$ and call it the $\mathcal{R}$-{characteristic vector} or characteristic vector assigned to the facet $F_i$ according to the situation.
Next, we recall the construction of a toric orbifold from an $\mathcal{R}$-characteristic pair $(P,\lambda)$. We denote the standard $n$-dimensional torus by $T^n$. Note that $\mathbb{Z}^n$ is the standard $n$-dimensional lattice in the Lie algebra of $T^n$. Also, each $\lambda_i \in {\bb{Z}}^n$ determines a line in $\mathbb{R}^n(={\bb{Z}}^n \otimes_{{\bb{Z}}} {\bb{R}})$, whose image under the exponential map $\exp \colon \mathbb{R}^n \to T^n$ is a circle subgroup, denoted by $T_i$.
Let $F$ be a codimension-$k$ face of $P$ where $0 < k \leq n$ and $\mathring{F}$ the relative interior of $F$. Then $F=\bigcap_{j=1}^k F_{i_j}$ for some unique facets $F_{i_1}, \dots, F_{i_{k}}$ of $P$.
Let
\begin{equation*}
T_F := \big< T_{i_1}, \dots, T_{i_{k}} \big>
\end{equation*}
be the $k$-dimensional subtorus of $T^n$ generated by $ T_{i_1}, \dots, T_{i_{k}}$. We define $T_P: =1 \in T^n$.
We consider the identification space $M(P,\lambda): =(T^n \times P) / \sim$, where the equivalence relation $\sim$ is defined by
\begin{equation*
(t,x) \sim (s,y)~ \text{if and only if}~ x=y\in \mathring{F} ~\text{and}~ t^{-1}s \in T_F.
\end{equation*}
The identification space $M(P,\lambda)$ has a $2n$-dimensional orbifold structure with a natural $T^n$-action induced by the group operation on the first factor of $T^n\times P$. The projection onto the second factor gives the orbit map
\begin{equation}\label{orbit map}
\pi \colon M(P,\lambda) \to P \text{ defined by } [t,x]_{\sim} \mapsto x \nonumber
\end{equation}
where $[t,x]_{\sim}$ is the equivalence class of $(t,x)$.
An explicit construction of the orbifold structure on $M(P,\lambda)$ is discussed in \cite{PS} and \cite{GP}. Poddar and the second author gave the axiomatic definition of (quasi)toric orbifolds and showed that the constructive and axiomatic definitions of toric orbifolds are equivalent, see \cite[Proposition 2.9]{PS}.
We note that if $\lambda$ is a characteristic function, then $M(P, \lambda)$ is a toric manifold, see \cite{DJ}. The equivalence of the axiomatic and constructive definitions of toric manifolds is discussed in \cite{DJ}.
Now, we review the notion of blowup of a simple polytope $P$ as well as blowup of the toric orbifold $M(P, \lambda)$ along an invariant suborbifold.
\begin{definition}\label{def_qtoric_orb}
Let $P$ be an $n$-dimensional simple polytope in ${\bb{R}}^n$ and $F$ a face of $P$. Take an $(n-1)$-dimensional hyperplane $H$ in ${\bb{R}}^n$ such that $V(F)$ is a subset of the open half space $H_{>0}$ and $V(P) \setminus V(F)$ is a subset of the other open half space $H_{<0}$. Then $\widebar{P}:= P \cap H_{\leq 0}$ is called a blowup of $P$ along $F$.
\end{definition}
Note that $\widebar{P}\subset P$ and $\widebar{P}$ is an $n$-dimensional simple polytope. If $\mathcal{F}(P)=\{F_1,\dots,F_r\}$ and $\dim(F)< n-1$, then the facet set of $\widebar{P}$ is $\mathcal{F}(\widebar{P}):=\{\widebar{F}_1, \dots, \widebar{F}_r, \widebar{F}_{r+1}\}$ where
\begin{align}\label{eq_facet}
\widebar{F}_i := \begin{cases} F_i \cap \widebar{P} \quad &\text{for } i=1, \dots, r,\\ H \cap P \quad &\text{for } i=r+1. \end{cases}
\end{align}
\noindent In this paper, a blowup $\widebar{P}$ of $P$ might also be denoted by $P^{(1)}$ and a blowup of ${P}^{(\ell)}$ is denoted by $P^{(\ell+1)}$, for $\ell \geq 1$.
\begin{definition}\label{ex_blow_up}
Let $\widebar{P}$ be a blowup of a simple polytope $P$ along a face $F$. Let $(P, \lambda)$ and $(\widebar{P}, \bar{\lambda})$ be two $\mathcal{R}$-characteristic pairs such that $$\bar{\lambda}(\widebar{F}_i)=\lambda(F_i) \quad \text{ if }\widebar{F}_i=F_i \cap \widebar{P} \text{ for } F_i\in \mathcal{F}(P).$$ Then the toric orbifold $M(\widebar{P}, \bar{\lambda})$ is called a blowup of the toric orbifold $M(P, \lambda)$ along the suborbifold $\pi^{-1}(F)$.
\end{definition}
\begin{example}\cite[Example 5.2]{BSS}\label{rmk_char_fnc}
Let $(P, \lambda)$ be an $\mathcal{R}$-characteristic pair and $\widebar{P}$ a blowup of $P$ along a codimension $k$ face $F$ where $0 < k \leq n$. Then $F=\bigcap_{j=1}^{k} F_{i_j}$ for some unique facets $F_{i_1}, \dots, F_{i_{k}}$ of $P$.
Let $${\bb{Q}}(F) := \{ (c_1, \ldots, c_{k}) \in {\bb{Q}}^{k} ~|~ c_j \in {\bb{Q}} \setminus \{0\} ~\mbox{and}~ \textbf{0} \neq \sum_{j=1}^{k} c_j \lambda_{i_j} \in {\bb{Z}}^n\}.$$
We define $\bar{\lambda} \colon \mathcal{F}(\widebar{P}) \to \mathbb{Z}^n$ by
\begin{align}\label{characteristic vector after blowup}
\bar{\lambda}(\widebar{F_i}):=
\begin{cases}
\lambda_i &\quad\text{if}~ \widebar{F}_i = F_i \cap \widebar{P} \text { for } i=1,2,\dots,r\\
\text{prim}(\sum_{j=1}^{k} c_j \lambda_{i_j}) &\quad\text{if}~ \widebar{F}_i=\widebar{F}_{r+1}~ \text{and}~(c_1, \dots, c_{k}) \in {\bb{Q}}(F)
\end{cases}
\end{align}
where prim$(\alpha)$ indicates the primitive vector of $\alpha \in \mathbb{Z}^n \setminus \{\textbf{0}\}$.
Then $\bar{\lambda}$ is an $\mathcal{R}$-{characteristic function} on $\widebar{P}$. The pair $(P,\lambda)$ and $(\widebar{P},\widebar{\lambda})$ satisfy Definition \ref{ex_blow_up}. Thus $M(\widebar{P},\bar{\lambda})$ is a blowup of the toric orbifold $M(P,\lambda)$ along the suborbifold $\pi^{-1}(F)$.
\end{example}
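The operation $\text{prim}(\cdot)$ used in \eqref{characteristic vector after blowup} is simply division by the gcd of the coordinates, once the rational combination $\sum_j c_j\lambda_{i_j}$ has been scaled to an integer vector. A minimal sketch (our own helper name):

```python
from math import gcd
from functools import reduce

def prim(v):
    """Primitive vector in the direction of a non-zero integer vector:
    divide every coordinate by the gcd of all coordinates."""
    d = reduce(gcd, (abs(x) for x in v))
    return tuple(x // d for x in v)

# With c_1 = c_2 = 1/2 and lambda's (1,0,0), (1,2,0), the combination is
# (1,1,0), already primitive; doubling it shows prim at work:
print(prim((2, 2, 0)))  # (1, 1, 0)
```

The integer $d$ computed here is the factor $d_F$ appearing later, i.e. $\lambda_F = d_F\cdot\text{prim}(\lambda_F)$.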
Let $(P, \lambda)$ be an $\mathcal{R}$-characteristic pair and $F$ a face as in Example \ref{rmk_char_fnc}.
Let $\lambda_{i_j} = \lambda(F_{i_j})$ for all $j=1,2,\dots,k$. Then the group
$$G_F(P,\lambda):= \frac{(\big< \lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k} \big> \bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{n}}{\big< \lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k} \big>}$$
is finite and abelian, where ${\big< \lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k} \big>}$ is the submodule generated by $ \{\lambda_{i_1}, \lambda_{i_2}, \dots, \lambda_{i_k}\}$. This group measures the order of singularity of points in $\pi^{-1}(\mathring{F}) \subseteq M(P, \lambda)$. For simplicity, we may denote $G_F(P,\lambda)$ by $G_F$, whenever the context is clear. Note that if $F$ is a face of $F'$ then $|G_{F'}|$ divides $|G_F|$ and the group $G_F$ is trivial if $M(P,\lambda)$ is a toric manifold.
We recall the definition of the volume of a parallelepiped in an inner product space. Let $\{u_1,u_2,\dots,u_k\}$ be an orthonormal basis in the $k$-dimensional real inner product space $V$. Let
$$C_V:=\{\sum_{i=1}^{k}r_iu_i \in V ~|~ 0\leq r_i \leq \alpha_i, r_i\in {\bb{R}} ~\mbox{and} ~ 0 < \alpha_i\in {\bb{R}}\}.$$ Then $C_V$ is a $k$-dimensional parallelepiped and $$\mbox{vol}(C_V) =\alpha_1\alpha_2\cdots \alpha_k.$$
For a face $F=\bigcap_{j=1}^{k} F_{i_j}$, we consider the vector space $V_F=(\big< \lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k} \big> \bigotimes_{{\bb{Z}}}{\bb{R}})$.
Let $\{v_1,\dots,v_k\}$ be a basis of the lattice $$L_F := (\big< \lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k} \big> \bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{n}$$ in $V_F$. So $L_F={\bb{Z}} v_1+{\bb{Z}} v_2+\dots+{\bb{Z}} v_k$.
Define $$C:=\{\sum_{i=1}^{k}r_iv_i~~|~~ 0\leq r_i < 1, r_i\in {\bb{R}}\}.$$ This $C$ is called a fundamental parallelepiped for the lattice $L_F$. It is well known that $\mbox{vol}(C)$ is independent of the bases of $L_F$.
Let
\begin{equation}\label{eq_para}
C_F=\{\sum_{j=1}^{k}r_j\lambda_{i_j} ~~|~~ 0\leq r_j < 1, r_j\in {\bb{R}}\}.
\end{equation}
Then $|G_F|$ measures the volume of the $k$-dimensional parallelepiped $C_F\subset V_F$ made by the $k$ vectors $\{\lambda_{i_1}, \lambda_{i_2}, \dots, \lambda_{i_k}\}$, where $|G_F|$ is the order of the group $G_F$.
Mathematically, one can write $$\mbox{vol}(C_F)=|G_F|\times \mbox{vol}(C).$$
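The ratio $\mbox{vol}(C_F)/\mbox{vol}(C)$ can be computed without choosing a basis of $L_F$: the index of the sublattice generated by $\lambda_{i_1},\dots,\lambda_{i_k}$ inside its saturation $L_F$ equals the product of the elementary divisors of the corresponding $k\times n$ integer matrix, which is the gcd of its $k\times k$ minors. The following sketch (our own helper names, assuming the $\lambda$'s are linearly independent) computes $|G_F|$ this way:

```python
from itertools import combinations
from math import gcd

def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def order_G_F(lambdas):
    """|G_F| = [L_F : <lambda_{i_1}, ..., lambda_{i_k}>], computed as the
    gcd of all k x k minors (= product of the elementary divisors)."""
    k, n = len(lambdas), len(lambdas[0])
    g = 0
    for cols in combinations(range(n), k):
        g = gcd(g, abs(det([[v[c] for c in cols] for v in lambdas])))
    return g

print(order_G_F([(1, 2, 0), (1, 0, 0)]))  # 2
```

For $\{(1,2,0),(1,0,0)\}$ this returns $2$, matching the worked example below Figure \ref{Fig_Example of GF}.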
The following theorem of Minkowski gives a condition under which an open convex subset of ${\bb{R}}^n$ contains a non-zero lattice point.
\begin{proposition}\cite[Theorem 5.2]{OLD}\label{thm_min}
Let $X$ be an open convex subset of a $k$-dimensional inner product space $V\subset {\bb{R}}^n$ and $C$ the fundamental parallelepiped for the lattice $V\cap{\bb{Z}}^n$. If $X$ is symmetric about the origin and has volume greater than $2^k \mbox{vol}(C)$, then $X$ contains a point ${\mathbf b}=(b_1,b_2,\dots,b_n)\in V\cap {\bb{Z}}^n$ other than the origin.
\end{proposition}
\begin{lemma}\label{lem_non_zero_int_pt}
Let $\lambda$ be an $\mathcal{R}$-characteristic function on an $n$-dimensional simple polytope $P$ and $F=\bigcap_{j=1}^k F_{i_j}$ a face of $P$. If $|G_F|>1$ then there exists a non-zero ${\lambda}_F\in {\bb{Z}}^{n}$ such that ${\lambda}_F=\sum_{j=1}^{k}c_j\lambda_{i_j}$ with $ |c_j| < 1$ and $c_j\in {\bb{Q}}$. Moreover, if $|G_{F'}|=1$ for every face $F'$ properly containing $F$ in $P$, then there exists a non-zero ${\lambda}_F\in {\bb{Z}}^{n}$ such that ${\lambda}_F=\sum_{j=1}^{k}c_j\lambda_{i_j}$ where $0 < |c_j| < 1$ and $c_j\in {\bb{Q}}$ for $j=1,2,\dots,k$.
\end{lemma}
\begin{proof}
The parallelepiped $C_F$ constructed in \eqref{eq_para} is convex but not symmetric about the origin. Consider the union of $2^k$ reflected copies of $C_F$, namely $$\widetilde{C}_F=\{\sum_{j=1}^{k}r_j\lambda_{i_j} ~~|~~ -1 < r_j < 1, r_j\in {\bb{R}}\}.$$
This is an open parallelepiped which is convex and symmetric about the origin such that $$\mbox{vol}(\widetilde{C}_F)=2^k\mbox{vol}(C_F)=2^k|G_F|\mbox{vol}(C)>2^k\mbox{vol}(C).$$ Then by Proposition \ref{thm_min}, there exists a non-zero point ${\lambda}_F\in {\bb{Z}}^n\cap \widetilde{C}_F$.
Then there exist $c_j\in {\bb{Q}}$ with $|c_j| < 1$ for all $j\in\{1,2,\dots,k\}$ such that ${\lambda}_F=\sum_{j=1}^{k}c_j\lambda_{i_j}$.
Now for the second part, suppose on the contrary that some coefficient vanishes; without loss of generality, assume $c_1=0$. Consider the face $F':=\cap_{j=2}^{k}F_{i_j}$. Then $F'$ contains the face $F$ properly. By our assumption, we have $|G_{F'}|=1$. Then $\{\lambda_{i_2},\lambda_{i_3},\dots,\lambda_{i_k}\}$ is a ${\bb{Z}}$-basis of the lattice
$(\big<\lambda_{i_2},\lambda_{i_3},\dots,\lambda_{i_k}\big>\bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{n}$. Observe that this contradicts the existence of a non-zero lattice point ${\lambda}_F=\sum_{j=2}^{k}c_j\lambda_{i_j}$ such that $|c_j|<1$ for $j\in\{2,3,\dots,k\}$. Therefore $0<|c_1|<1$. Similarly, we can show $0<|c_j|<1$ for $j\in\{2,3,\dots,k\}$.
\end{proof}
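In small examples the lattice point $\lambda_F$ of Lemma \ref{lem_non_zero_int_pt} can be found by brute force: enumerate lattice points in a box and keep those whose coordinates $c_j$ with respect to $\lambda_{i_1},\dots,\lambda_{i_k}$ all satisfy $|c_j|<1$. A sketch for $k=n=2$, using Cramer's rule over exact rationals (the function name is ours):

```python
from fractions import Fraction
from itertools import product

def points_in_open_parallelepiped(v1, v2, bound=3):
    """Non-zero points of Z^2 inside {c1*v1 + c2*v2 : -1 < c_i < 1},
    for linearly independent integer vectors v1, v2 (brute force over
    a small coordinate box)."""
    d = v1[0] * v2[1] - v1[1] * v2[0]  # det of [v1 v2], non-zero
    hits = []
    for p in product(range(-bound, bound + 1), repeat=2):
        if p == (0, 0):
            continue
        # solve c1*v1 + c2*v2 = p exactly by Cramer's rule
        c1 = Fraction(p[0] * v2[1] - p[1] * v2[0], d)
        c2 = Fraction(v1[0] * p[1] - v1[1] * p[0], d)
        if abs(c1) < 1 and abs(c2) < 1:
            hits.append((p, (c1, c2)))
    return hits

for p, c in points_in_open_parallelepiped((1, 2), (1, 0)):
    print(p, c)
```

For $\lambda_{i_1}=(1,2)$ and $\lambda_{i_2}=(1,0)$ (the prism example of the next section restricted to the plane $z=0$), the search returns four points, among them $(1,1)$ with coefficients $(1/2,\,1/2)$.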
\begin{remark}
Let $F=\bigcap_{j=1}^k F_{i_j}$ be a face of $P$ and $\lambda$ an $\mathcal{R}$-characteristic function on $P$.
\begin{enumerate}
\item If $|G_F|=1$, then
$$(\big<\lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k}\big>\bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{n}={\bb{Z}} \lambda_{i_1}+{\bb{Z}} \lambda_{i_2}+\dots+{\bb{Z}} \lambda_{i_k},$$
and there does not exist a point of $(\big<\lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k}\big>\bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{n}$ in the parallelepiped $C_F$ except the origin.
\item If $|G_F|>1$ then the non-zero lattice points of $(\big<\lambda_{i_1},\lambda_{i_2},\dots,\lambda_{i_k}\big>\bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{n}$ in the parallelepiped $C_F$ are in one-to-one correspondence with the non-identity elements of the group $G_F$. So the total number of lattice points in the parallelepiped $C_F$ is $|G_F|$. Thus the vector $\lambda_F$ obtained in Lemma \ref{lem_non_zero_int_pt} is not unique if $|G_F|>2$.
\end{enumerate}
\end{remark}
\begin{theorem}\label{thm_res_sing}
For a toric orbifold $M(P,\lambda)$, there exists a resolution of singularities
$$M(P^{(d)},\lambda^{(d)})\to \dots \to M(P^{(1)},\lambda^{(1)})\to M(P,\lambda)$$ such that $M(P^{(d)},\lambda^{(d)})$ is a toric manifold, the toric orbifold $M(P^{(i)},\lambda^{(i)})$ is a blowup of $M(P^{(i-1)},\lambda^{(i-1)})$ for $i=1,2,\dots,d$ and the arrows $\to$ indicate the associated blowups.
\end{theorem}
\begin{proof}
Let $$\mathcal{L}:=\{F ~\big{|}~ F \text{ is a face of } P \text{ and }|G_F(P,\lambda)|\neq 1\}.$$
Define a partial order `$\leq$' on $\mathcal{L}$
by $F\leq F'$ if $F\subseteq F'$.
Let $F$ be a maximal element of the set $\mathcal{L}$ with respect to the partial order `$\leq$'.
Now consider the blowup of the polytope $P$ along the face $F$ and denote this blowup by $P^{(1)}$. If $\mathcal{F}(P)=\{F_1,\dots,F_r\}$, then we have $$\mathcal{F}(P^{(1)})=\{\widebar{F}_1, \dots, \widebar{F}_r, \widebar{F}_{r+1}\}$$ defined as in \eqref{eq_facet}. Let $F=\cap_{j=1}^k F_{i_j}$ where $F_{i_j}\in \mathcal{F}(P)$ and $\lambda_{i_j}=\lambda(F_{i_j})$ for $j\in \{1,2,\dots,k\}$.
We define a function ${\lambda}^{(1)} : \mathcal{F}(P^{(1)}) \to {\bb{Z}}^{n}$ by
\begin{align}\label{eq_lambda_blowup}
{\lambda}^{(1)}(\widebar{F_i}):=
\begin{cases}
\lambda_i &\quad\text{if}~ \widebar{F}_i = F_i \cap \widebar{P} \text { for } i=1,2,\dots,r\\
\text{prim}(\lambda_F)=\text{prim}(\sum_{j=1}^{k} c_j \lambda_{i_j}) &\quad\text{if}~ \widebar{F}_i=\widebar{F}_{r+1}
\end{cases}
\end{align}
where ${\lambda}_F$ is obtained by Lemma \ref{lem_non_zero_int_pt}. Then $(P^{(1)},\lambda^{(1)})$ is an $\mathcal{R}$-characteristic pair.
By the maximality of $F$ in $\mathcal{L}$, there is no face $F'$ properly containing $F$ such that $|G_{F'}|\neq 1$. Thus we can assume $0<|c_j|<1$ and $c_j\in {\bb{Q}}$ for every $j\in\{1,2,\dots,k\}$ by Lemma \ref{lem_non_zero_int_pt}.
Let $$V(F):=\{b_1,b_2,\dots,b_{\ell}\} \text{ and } V(P)=\{b_1,b_2,\dots,b_{\ell},b_{\ell+1},\dots,b_m\}.$$ Note that there is a homeomorphism from $\widebar{F}_{r+1}$ to $F\times \Delta^{k-1}$ preserving the face structure. Let $V(\Delta^{k-1})=\{u_1,u_2,\dots,u_k\}$. We have $P^{(1)}\subset P$ from Definition \ref{def_qtoric_orb}.
Then $$V(P^{(1)}):=\{(b_p,u_s) ~\big{|}~ 1\leq p\leq \ell; 1\leq s\leq k\}\cup\{b_{\ell+1},\dots,b_m\}.$$
We can assume $(b_p,u_s)$ is not a vertex of $\widebar{F}_{i_s}$ for $1\leq s\leq k$. For a vertex $b_p\in V(F) \subset P$, we can write $b_p= (\bigcap_{j=k+1}^{n}F_{i_j}) \bigcap F = \bigcap_{j=1}^{n}F_{i_j}$. Then for all $1\leq s\leq k$ and $1\leq p\leq \ell$, $$(b_p,u_s)=\Big(\bigcap_{\substack{j=1\\ j\neq s}}^{n}\widebar{F}_{i_j}\Big)\bigcap\widebar{F}_{r+1}.$$
Now we calculate the singularity over each vertex in $P^{(1)}$. Note that
\begin{gather*}
|G_{(b_p,u_s)}(P^{(1)},\lambda^{(1)})|=\Big|\det\Big[\lambda_{i_1},\dots,\widehat{\lambda_{i_s}},\dots,\lambda_{i_n},\text{prim}\Big(\sum_{j=1}^{k} c_j \lambda_{i_j}\Big)\Big]\Big|=\frac{|c_s|}{d_F}|G_{b_p}(P,\lambda)|<|G_{b_p}(P,\lambda)|
\end{gather*}
for $1\leq p\leq \ell$ and $1\leq s\leq k$, where $d_F$ is the positive integer satisfying $\lambda_F=d_F\cdot\text{prim}(\lambda_F)$, and $$|G_{b_{\ell+i}}(P^{(1)},\lambda^{(1)})|=|G_{b_{\ell+i}}(P,\lambda)| \text{ for } 1\leq i\leq m-\ell. $$ Therefore, in the above process, if a vertex of $P$ is not in the face $F$ then the corresponding singularity remains the same. Also, corresponding to every vertex $b$ in $F$ we get exactly $k$ new vertices in $\widebar{F}_{r+1}$ such that at
each of them the singularity is strictly less than the singularity over $b\in F\subset P$ in the given orbifold.
If $|G_{F'}|=1$ for every face $F'$ of $P^{(1)}$ then $M(P^{(1)},\lambda^{(1)})$ is the desired resolution of singularities of $M(P,\lambda)$. Otherwise
we repeat the above process on $M(P^{(1)},\lambda^{(1)})$ to obtain an $\mathcal{R}$-characteristic pair $(P^{(2)},\lambda^{(2)})$, where $P^{(2)}$ is a blowup of $P^{(1)}$ and $\lambda^{(2)}$ is defined from $\lambda^{(1)}$ similarly as in \eqref{eq_lambda_blowup}. If $|G_{F'}|=1$ for every face $F'$ of $P^{(2)}$ corresponding to the pair $(P^{(2)},\lambda^{(2)})$ then we are done. Otherwise, we repeat the process. Since the order of $G_{F'}$ is finite for any face $F'$ and each blowup strictly decreases the singularity orders over the blown-up face, this inductive process terminates after finitely many steps.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw (0,0)--(1,-1)--(2,0)--(2,4)--(0,4)--(0,0);
\draw (0,4)--(1,3)--(2,4);
\draw[thick, blue] (1,3)--(1,-1);
\draw [dashed] (0,0)--(2,0);
\draw [<-] (1,.7)--(3,.7);
\draw [<-] (1.1,3.5)--(3,3.5);
\node at (3.8,3.5) {$(0,0,1)$};
\node at (1,3) {$\bullet$};
\draw[->] (1.5,-.2)--(1.5,-1.5);
\node[below] at (1.5, -1.5) {\footnotesize$(0,0,1)$};
\node at (3.2,.7) {$F$};
\node at (1.3,2.9) {$b$};
\draw [<-] (1.5,2)--(3,2);
\node at (3.8,2) {$(1,2,0)$};
\draw [<-] (.5,1.2)--(-1,1.2);
\node at (-1.8,1.2) {$(1,0,0)$};
\draw[thick, dotted] (.7,2.7)--(0,2.7); \draw[->](0,2.7)--(-1,2.7);
\node at (-1.8,2.7) {$(0,1,0)$};
\node at (1,-2.5) {(A)};
\begin{scope}[xshift=250]
\draw [->] (0,1.5)--(0,4);
\draw [->] (1.5,0)--(4,0);
\draw [->] (0,0)--(-1.5,-1.5);
\node at (1.5,1.5) {$\bullet$};
\draw[->,thick, red] (0,0)--(1.5,0);
\draw[->,thick, green] (0,0)--(0,1.5);
\draw [dashed] (1.5,0)--(1.5,1.5)--(0,1.5);
\draw[->,thick, blue] (0,0)--(1.5,3);
\draw[dashed] (1.5,3)--(3,3)--(1.5,0);
\node[above] at (1.5,-.8) {$(1,0,0)$};
\node[left] at (0,1.5) {$(0,1,0)$};
\node[above] at (1.5,3) {$(1,2,0)$};
\node[right] at (1.5,1.5) {$(1,1,0)$};
\node at (2,-1.5) {(B)};
\end{scope}
\begin{scope}[yshift=-200,xshift=100]
\draw[dashed, thick] (0,0)--(1,1)--(4,1); \draw[dashed, thick] (1,1)--(1,4);
\draw[thick] (0,0)--(3,0)--(3,3)--(0,3)--cycle;
\draw[thick] (3,0)--(4,1)--(4,4)--(3,3)--cycle;
\draw[thick] (4,4)--(1,4)--(0,3)--(3,3)--cycle;
\draw[->] (1.5,3.5)--(1.5,4.5);
\node[above] at (1.5, 4.5) {\footnotesize$(0,0,1)$};
\draw[thick, dotted] (1.5,0.5)--(1.5,0); \draw[->] (1.5, 0)--(1.5, -0.5);
\node[below] at (1.5, -0.5) {\footnotesize$(0,0,1)$};
\draw[thick, dotted] (0.5,2)--(0,2); \draw[->] (0,2)--(-0.5, 2);
\node[left] at (-.5, 2) {\footnotesize$(1,0,0)$};
\draw[->] (3.5,2)--(4.5,2);
\node[right] at (4.5, 2) {\footnotesize$(1,2,0)$};
\draw[->] (0.5,1.5)--(-.5,0.5);
\node[left] at (-.5, 0.5) {\footnotesize$(1,1,0)$};
\draw[thick, dotted] (3.5,2.5)--(4,3); \draw[->] (4,3)--(4.5, 3.5);
\node[right] at (4.5, 3.5) {\footnotesize$(0,1,0)$};
\node at (2,-1.5) {(C)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{(A) A triangular prism $P_1$ with $\mathcal{R}$-characteristic vectors and the edge $F$; (B) the lattice points associated to the vectors $(1,0,0)$ and $(1,2,0)$; (C) the blowup $P^{(1)}_1$ of $P_1$ along $F$.}
\label{Fig_Example of GF}
\end{figure}
\begin{example}
Consider the $\mathcal{R}$-characteristic function and the edge $F$ of the triangular prism $P_1$ in Figure \ref{Fig_Example of GF}(A). Consider the ${\bb{Z}}$-module generated by $\{(1,2,0),(1,0,0)\}$, the vectors assigned to the two facets containing $F$. Then $$(\big<(1,2,0),(1,0,0)\big>\bigotimes_{{\bb{Z}}}{\bb{R}})\cap {\bb{Z}}^{3}=\{(x,y,0) ~~|~~ x,y\in {\bb{Z}}\} \cong {\bb{Z}}^2$$
which has a basis $\{(1,0,0),(0,1,0)\}$. In this case, $\mbox{vol}(C_F)=2$.
Thus for this edge $F$, we get $|G_F|=|{\bb{Z}}^2/\big<(1,2,0),(1,0,0)\big>| = 2$. Since the faces containing $F$ are facets of $P_1$, by Lemma \ref{lem_non_zero_int_pt} there exists a non-zero $\lambda_F \in \mathring{C_F} \cap {\bb{Z}}^3$. Here, the only non-zero lattice point in the parallelepiped $C_F$ is $\lambda_F = (1,1,0)$ which can be represented as $$(1,1,0)=\frac{1}{2}(1,0,0)+\frac{1}{2}(1,2,0).$$
Also, similarly, one can calculate $|G_b|=2$ for $b \in V(F)$ and $|G_b|=1$ for $b \in V(P_1) \setminus V(F)$. So, the maximal element in $\mathcal{L}$ in this case is $F$. Thus we blow up $P_1$ along the face $F$, and get the cube $P^{(1)}_1$ as in Figure \ref{Fig_Example of GF}(C). One can define $\lambda^{(1)}$ on $P^{(1)}_1$ as in \eqref{eq_lambda_blowup}.
Then in the toric orbifold $M(P^{(1)}_1,\lambda^{(1)})$, we have $|G_E|=1$ for every face $E$ of $P^{(1)}_1$. Thus $M(P^{(1)}_1,\lambda^{(1)})$ is a toric manifold, and it is a resolution of singularities of $M(P_1,\lambda)$. \hfill $\square$ \vspace{0.03in}
\end{example}
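The vertex computation in the example above can be replayed numerically: at a vertex of a 3-polytope the order of the local group is the absolute determinant of the three assigned vectors, and the blowup swaps one of the two edge facets for the new facet carrying $\text{prim}(\lambda_F)=(1,1,0)$. A sketch with the vectors of Figure \ref{Fig_Example of GF} (function names are ours):

```python
def det3(m):
    # determinant of a 3 x 3 integer matrix, expanded along the first row
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def vertex_order(vs):
    """|G_b| at a vertex b lying on the three facets carrying `vs`."""
    return abs(det3(vs))

# vertex b of the prism, on the edge F (facets (1,0,0) and (1,2,0))
# together with the top facet (0,0,1):
before = vertex_order([(1, 0, 0), (1, 2, 0), (0, 0, 1)])
# after the blowup, b is replaced by two vertices, each swapping one of
# the two edge facets for the new facet prim(lambda_F) = (1,1,0):
after = [vertex_order([(1, 1, 0), (1, 2, 0), (0, 0, 1)]),
         vertex_order([(1, 0, 0), (1, 1, 0), (0, 0, 1)])]
print(before, after)  # 2 [1, 1]
```

The singularity order drops from $2$ to $1$ at both new vertices, confirming that $M(P^{(1)}_1,\lambda^{(1)})$ is a manifold over this part of the polytope.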
\section{Equivariant cobordism of quasi-contact toric manifolds}\label{sec_tctm}
In this section, we recall the concept of quasi-contact toric manifolds and discuss some of their properties. Then we prove that any quasi-contact toric manifold is equivariantly the boundary of an oriented smooth manifold. In particular, good contact toric manifolds and generalized lens spaces are equivariantly cobordant to zero.
Consider the action
$\alpha \colon T^{n+1}\times {\bb{C}}^{n+1} \to {\bb{C}}^{n+1}$ of the $(n+1)$-dimensional torus $T^{n+1}$ on ${\bb{C}}^{n+1}$ defined by
$$\alpha((t_1, \ldots, t_n, t_{n+1}), (z_1, \ldots, z_n, z_{n+1}))=
(t_1z_1, \ldots, t_nz_n, t_{n+1}z_{n+1}).$$
Then ${\bb{C}}^n \times S^1$
is a $T^{n+1}$-invariant subset of ${\bb{C}}^{n+1}$, and the orbit space $({\bb{C}}^{n}\times S^1)/T^{n+1}$ is $({\bb{R}}_{\geq 0})^n$. The restriction $\alpha|_{T^{n+1}\times ({\bb{C}}^{n}\times S^1)}$ is called the \emph{standard} $T^{n+1}$-action on ${\bb{C}}^{n}\times S^1$.
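Concretely, the standard action rotates each coordinate, and the orbit of a point is determined by the moduli of the ${\bb{C}}^n$ coordinates; the $S^1$ factor is absorbed by the last circle of $T^{n+1}$. A small numerical sketch (our own function names):

```python
import cmath

def act(t, p):
    """Standard T^{n+1}-action on C^n x S^1: coordinatewise rotation.
    t is a tuple of unit complex numbers; p lies in C^n x S^1, so the
    last coordinate has modulus 1."""
    return tuple(ti * zi for ti, zi in zip(t, p))

def orbit_map(p):
    """Quotient map (C^n x S^1)/T^{n+1} -> (R_{>=0})^n: record only the
    moduli of the first n coordinates."""
    return tuple(abs(z) for z in p[:-1])

p = (3 + 4j, 0j, cmath.exp(0.7j))
t = (cmath.exp(1.1j), cmath.exp(-2.0j), cmath.exp(0.3j))
print(orbit_map(p))  # (5.0, 0.0)
```

Applying `act(t, p)` before `orbit_map` leaves the image unchanged up to floating-point error, reflecting that the orbit space is $({\bb{R}}_{\geq 0})^n$.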
\begin{definition}\label{def:axiom_top_cont}
A $(2n+1)$-dimensional smooth manifold $N$ with an effective $T^{n+1}$-action is called a quasi-contact toric manifold if
the orbit space ${N}/{T^{n+1}}$ has the structure of an $n$-dimensional simple polytope and, for each point $p\in N$, there is
\begin{enumerate}
\item an automorphism $\theta\in {\rm Aut}(T^{n+1})$,
\item a $T^{n+1}$-invariant neighbourhood $U$ of $p$ in $N$ and a $T^{n+1}$-invariant open subset $U'$ of ${\bb{C}}^n \times S^1$ such that there is a $\theta$-equivariant diffeomorphism $f_{\theta}:U\to U'$ (that is, $f_{\theta}(tx)=\theta(t)f_{\theta}(x)$ for $(t,x)\in T^{n+1}\times U$).
\end{enumerate}
\end{definition}
Note that $(2n+1)$-dimensional quasi-contact toric manifolds are the locally $1$-standard $T$-manifolds of \cite{SaSo}.
Let
$$\mathfrak{q} \colon N \to Q $$
be the orbit map where $Q$
is an $n$-dimensional simple polytope. Let $\mathcal{F}(Q)\colonequals \{E_1, \ldots, E_{\ell}\}$ be the set of facets of $Q$. Then each $N_{j} \colonequals \mathfrak{q}^{-1}(E_j)$ is a $(2n-1)$-dimensional $T^{n+1}$-invariant submanifold of $N$.
Then, the isotropy subgroup of $N_{j}$ is a circle subgroup $T_{j}$ of
$T^{n+1}$. The group $T_{j}$ is uniquely determined by a primitive vector
$\eta_j \in {\bb{Z}}^{n+1}$ for $j=1,2,\dots,\ell$; that is, we get a natural function
\begin{equation}\label{eq_axiomatic_lambda}
\eta \colon \{E_1, \ldots, E_{\ell}\} \to {\bb{Z}}^{n+1}
\end{equation}
defined by $\eta(E_j) = \eta_j$.
We discuss the constructive definition of $(2n+1)$-dimensional quasi-contact toric manifolds on simple polytopes following \cite{SaSo}. Let $\mathcal{F}(Q)\colonequals \{E_1, \dots, E_\ell\}$ be the
set of facets of an $n$-dimensional simple polytope $Q$.
\begin{definition}\label{def_hyp_char_map}
A function $\xi \colon \mathcal{F}(Q) \to {\bb{Z}}^{n+1} $ is called a \emph{hyper characteristic function} if
$\big<\xi_{j_1}, \dots, \xi_{j_n}\big> \text{ is a rank $n$ unimodular submodule of } {\bb{Z}}^{n+1}$ whenever $E_{j_1}\cap\cdots\cap E_{j_n}\neq \emptyset$
where $ \xi_j \colonequals \xi(E_j)$ for $j=1, \ldots, \ell$.
The pair $(Q, \xi)$ is called a \emph{hyper characteristic pair.}
\end{definition}
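As with characteristic functions, the condition in Definition \ref{def_hyp_char_map} is checkable by integer arithmetic: $n$ vectors span a rank-$n$ unimodular submodule of ${\bb{Z}}^{n+1}$ exactly when the gcd of their $n\times n$ minors is $1$. The sketch below (our own helper names; the $\xi$ vectors are a hypothetical assignment chosen only for illustration) verifies the condition at every vertex of a triangle:

```python
from itertools import combinations
from math import gcd

def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def spans_unimodular(vectors):
    """True iff the k vectors span a rank-k unimodular submodule of Z^m,
    i.e. the gcd of all k x k minors equals 1."""
    k, m = len(vectors), len(vectors[0])
    g = 0
    for cols in combinations(range(m), k):
        g = gcd(g, abs(det([[v[c] for c in cols] for v in vectors])))
    return g == 1

def is_hyper_characteristic(xi, vertex_facets):
    """Check the definition: at each vertex (a set of n facet indices),
    the assigned vectors span a rank-n unimodular submodule of Z^{n+1}."""
    return all(spans_unimodular([xi[j] for j in vf]) for vf in vertex_facets)

# a triangle Q has facets E_0, E_1, E_2 and vertices E_i ∩ E_j
xi = [(1, 0, 0), (0, 1, 0), (1, 1, 1)]       # hypothetical assignment
vertices = [(0, 1), (0, 2), (1, 2)]
print(is_hyper_characteristic(xi, vertices))  # True
```

Replacing $\xi_2$ by, say, $(2,2,0)$ would break the condition at the vertex $E_0\cap E_2$, since all $2\times 2$ minors would then be even.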
Note that the notion of a hyper characteristic function was defined on simplices in \cite[Definition 2.1]{SS}, and Definition \ref{def_hyp_char_map} is the case $k=1$ of \cite[Definition 2.2]{SaSo}.
Let $\xi$ be a hyper characteristic function on an $n$-dimensional simple polytope $Q$. For a point $p\in Q$, let $E_{j_1} \cap \cdots \cap E_{j_k}$ be the face of $Q$ containing $p$ in its relative interior. Then
$\exp(\big< \xi_{j_1}, \dots, \xi_{j_k}\big> \otimes_{{\bb{Z}}} {\bb{R}})$ is a $k$-dimensional subtorus of $T^{n+1}$. We denote this subgroup by $T_p$. If $p$ belongs to the relative interior of $Q$, we denote $T_p=1$, the identity in $T^{n+1}$. We define the following identification space.
\begin{equation}\label{eq_constr_DJ}
N(Q, \xi)\colonequals (T^{n+1}\times Q)/{\sim'}
\end{equation}
where
\begin{equation*}
(t, p)\sim' (s,q) ~~ \mbox{if and only if} ~~ p=q ~~ \mbox{and} ~~ t^{-1}s \in T_p.
\end{equation*}
Here, $T^{n+1}$ acts on $N(Q, \xi)$ induced by the multiplication on the first factor
of $T^{n+1} \times Q$.
\begin{proposition}\label{prop_two_definitions_are_equiv}
Let $(Q, \xi)$ be a hyper characteristic pair. Then the space $N(Q, \xi)$ in \eqref{eq_constr_DJ} is a quasi-contact toric manifold.
\end{proposition}
\begin{proof}
Let $Z_Q$ be the moment angle manifold corresponding to $Q$; see \cite[Section 6.2]{BP-book}. Then $Z_Q$ is a smooth manifold and there is a smooth $T^m$-action on $Z_Q$, where $m$ is the number of facets of $Q$; see \cite[Corollary 6.2.5]{BP-book}. The space $N(Q, \xi)$ has a manifold structure which satisfies conditions (1) and (2) in Definition \ref{def:axiom_top_cont}; see \cite[Proposition 2.3]{SaSo}. If the rank of $\big<\xi_1, \ldots, \xi_{\ell}\big>$ is $n$, then $N(Q, \xi)$ is equivariantly homeomorphic to $M(Q, \lambda_{\xi}) \times S^1$ for some toric manifold $M(Q, \lambda_{\xi})$, see \cite[Proposition 2.6]{SaSo}. If the rank of $\big<\xi_1, \ldots, \xi_{\ell}\big>$ is $n+1$, then $N(Q, \xi)$ is equivariantly homeomorphic to $Z_Q/T_{\xi}$ for some $(m-n-1)$-dimensional subgroup $T_{\xi}$ of $T^m$, see \cite[Proposition 2.7]{SaSo}. Also, $M(Q, \lambda_{\xi})$ is equivariantly homeomorphic to $Z_Q/T_{\lambda_{\xi}}$ for some $(m-n)$-dimensional subgroup $T_{\lambda_{\xi}}$ of $T^m$. Therefore, $N(Q, \xi)$ has a smooth structure such that the $T^{n+1}$-action on $N(Q, \xi)$ is smooth.
\end{proof}
Note that the function $\eta$ defined in \eqref{eq_axiomatic_lambda} satisfies Definition \ref{def_hyp_char_map}, see the explanation in \cite[Subsection 2.1]{SaSo}. Also, the orientation of a quasi-contact toric manifold $N$ can be induced from orientations of $Q$ and $T^{n+1}$.
\begin{proposition}\cite{SaSo}\label{cor_two_def_are_equiv}
Let $N$ be a quasi-contact toric manifold over the $n$-dimensional simple polytope $Q$, and $\eta$ a function as defined in \eqref{eq_axiomatic_lambda}. Then $N$ is equivariantly diffeomorphic to $N(Q, \eta)$.
\end{proposition}
\begin{proof}
This follows by arguments similar to those in the proofs of Lemma 1.4 and Proposition 1.8 in \cite{DJ}, together with Proposition \ref{prop_two_definitions_are_equiv}.
\end{proof}
\begin{example}[Generalized lens spaces]\label{ex_gen_lens_sp}
Let $\Delta^n$ be the $n$-dimensional simplex and $\xi$ a hyper characteristic function on it. We assume that the rank of $\big<\{\xi(E) \mid E\in \mathcal{F}(\Delta^n)\}\big> \subseteq {\bb{Z}}^{n+1}$ is $(n+1)$.
The article \cite{SS} shows that $N(\Delta^n,\xi)$ is equivariantly homeomorphic (hence diffeomorphic) to the orbit space $S^{2n+1}/ G_ \xi$ for some free action of a finite group $G_{\xi}$. This space is called a \emph{generalized lens space} in \cite{SS}.
In particular, if $\{\xi(E) \mid E\in \mathcal{F}(\Delta^n)\}$ forms a basis of ${\bb{Z}}^{n+1}$, then $N(\Delta^n, \xi)$ is homeomorphic to $S^{2n+1}$.
Consider an integer $p>1$ and $n$ integers $q_1,\dots,q_n$ such that $\gcd\{p,q_i\}=1$ for all $i=1,\dots,n$. Then ${\bb{Z}}_{p}$, identified with the group of $p$-th roots of unity, acts freely on $S^{2n+1}$ by $$g(z_0,z_1,\dots,z_n)=(gz_0,g^{q_1}z_1,\dots,g^{q_n}z_n).$$
The lens space $L(p; q_1,\dots,q_n)$ is defined to be the orbit space $S^{2n+1}/{\bb{Z}}_p$. The paper \cite{SS} showed
that there is a hyper characteristic function $\xi$ on $\Delta^n$ such that $L(p; q_1,\dots,q_n)$ is equivariantly diffeomorphic to $N(\Delta^n, \xi)$.\hfill $\square$ \vspace{0.03in}
\end{example}
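The freeness condition in Example \ref{ex_gen_lens_sp} can be checked mechanically: a power $g^k$ with $1\le k<p$ has a fixed point on $S^{2n+1}$ exactly when it acts trivially on some coordinate circle, i.e. when $p$ divides $k$ or $kq_i$ for some $i$. The following Python sketch (function name and input format are our own) verifies that this is equivalent to the stated gcd condition.

```python
from math import gcd

def action_is_free(p, qs):
    """True iff the Z_p action g.(z0,...,zn) = (g z0, g^{q1} z1, ..., g^{qn} zn)
    is free on S^{2n+1}.  A power g^k fixes a point supported on coordinate i
    exactly when p divides k * weight_i, so freeness fails iff that happens
    for some nontrivial k."""
    weights = [1] + list(qs)          # the coordinate z0 carries weight 1
    for k in range(1, p):
        if any((k * w) % p == 0 for w in weights):
            return False              # g^k fixes the circle {only z_i nonzero}
    return True
```

This agrees with requiring $\gcd(p,q_i)=1$ for every $i$.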
\begin{example}[Good contact toric manifolds]\label{ex_gd_ct_toric_mfd}
Luo \cite[Chapter 2]{Luo-Thesis} discussed the construction of good contact toric manifolds, which are the compact connected contact toric manifolds studied in \cite[Section 2]{Lerman}. We briefly recall the construction. Let $Q$ be an $n$-dimensional simple lattice polytope embedded in ${\bb{R}}^{n+1}\setminus \{\mathbf{0}\}$. Consider the cone $C(Q)$ on $Q$ with apex $\mathbf{0}\in \mathbb{R}^{n+1}$ and the set $\{\widetilde{E} \mid E\in \mathcal{F}(Q)\}$ of facets of $C(Q)$ where $\widetilde{E}\colonequals C(E)\setminus \{\mathbf{0}\}$. Let $\xi(E)$ be the primitive outward normal vector on $\widetilde{E}$. This defines a function $\xi \colon \mathcal{F}(Q) \to \mathbb{Z}^{n+1}$. Since the facets of $C(Q)\setminus\{\mathbf{0}\}$ intersect transversely, the function $\xi$ satisfies Definition \ref{def_hyp_char_map}. Then the space $N(Q, \xi)$ is $T^{n+1}$-equivariantly homeomorphic to a good contact toric manifold whose moment cone is $C(Q)$. Moreover, every good contact toric manifold can be obtained in this way. For details, we refer to \cite{Lerman} and \cite{Luo-Thesis}.
\hfill $\square$ \vspace{0.03in}
\end{example}
\begin{lemma}\label{lem_out_pt}
Let $Q$ be an $n$-dimensional simple polytope and $$\xi \colon \mathcal{F}(Q) \to {\bb{Z}}^{n+1}$$ a hyper characteristic function. Then there exists ${\bf a}=(a_1,\dots,a_{n+1})\in {\bb{Z}}^{n+1}$ such that
$\{{\bf a},\xi(E_{j_1}), \dots, \xi(E_{j_n})\}$ is a linearly independent subset of ${\bb{Z}}^{n+1}$ whenever $E_{j_1}\cap\cdots\cap E_{j_n}$ is a vertex of $Q$.
\end{lemma}
\begin{proof}
Let $b$ be a vertex of $Q$. Then $b=E_{j_1}\cap\cdots\cap E_{j_n}$ for some unique facets $E_{j_1},\cdots, E_{j_n}$.
Let ${\bb{Z}}_b$ be the submodule of ${\bb{Z}}^{n+1}$ generated by $\xi_{j_1}, \dots, \xi_{j_n}$; by Definition \ref{def_hyp_char_map}, the rank of ${\bb{Z}}_b$ is $n$ for every vertex $b\in V(Q)$. We claim that ${\bb{Z}}^{n+1}\setminus \bigcup_{b\in V(Q)}{\bb{Z}}_b$ is non-empty. Suppose, on the contrary, that
\begin{equation}\label{eq_ten}
{\bb{Z}}^{n+1}=\bigcup_{b\in V(Q)}{\bb{Z}}_b.
\end{equation}
Let $V_b:={\bb{Z}}_b\otimes_{{\bb{Z}}}{\bb{R}}$. Then $V_b$ is an $n$-dimensional linear subspace of ${\bb{R}}^{n+1}$. From \eqref{eq_ten} it follows that
$${\bb{R}}^{n+1}={\bb{Z}}^{n+1}\otimes_{{\bb{Z}}} {\bb{R}} =\bigcup_{b\in V(Q)}({\bb{Z}}_b\otimes_{{\bb{Z}}} {\bb{R}}) =\bigcup_{b\in V(Q)}V_b,$$ which is a contradiction, since ${\bb{R}}^{n+1}$ cannot be covered by finitely many proper linear subspaces.
Let ${\bf a}\in {\bb{Z}}^{n+1}\setminus \bigcup_{b\in V(Q)}{\bb{Z}}_b$ be primitive. Then ${\bf a}$ is the desired vector of the lemma.
\end{proof}
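The proof above is effectively constructive: since the excluded set is a finite union of rank-$n$ sublattices, a brute-force search over a small box of primitive integer vectors will find a suitable ${\bf a}$. The following Python sketch implements this search for explicit data; the function name, input format and search bound are our own choices, and the bound is only a search window, not part of the statement.

```python
from itertools import product
from functools import reduce
from math import gcd

def det(m):
    # integer determinant by Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def find_outside_vector(vertex_bases, bound=3):
    """Search for a primitive a in Z^{n+1} such that det(a, xi_{j_1}, ...,
    xi_{j_n}) is nonzero for the basis attached to every vertex.  Lemma
    guarantees existence of such a vector."""
    dim = len(vertex_bases[0][0])
    for a in product(range(-bound, bound + 1), repeat=dim):
        if reduce(gcd, (abs(x) for x in a)) != 1:
            continue                      # keep only primitive vectors
        if all(det([list(a)] + [list(v) for v in basis]) != 0
               for basis in vertex_bases):
            return list(a)
    return None
```

For the pentagon of Figure \ref{Fig cobor_q_mfd}(A), the vector ${\bf a}=(1,2,0)$ used later is one admissible output.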
\begin{theorem}\label{thm_zoro_cob}
Let $N$ be a $(2n+1)$-dimensional quasi-contact toric manifold. Then there is a smooth oriented $T^{n+1}$-manifold with boundary $M$ such that $\partial{M}$ is equivariantly diffeomorphic to $N$.
\end{theorem}
\begin{proof}
By Proposition \ref{cor_two_def_are_equiv}, there exists a hyper characteristic pair $(Q,\xi)$ such that $N$ is equivariantly diffeomorphic to $N(Q,\xi)$. Let $\mathcal{F}(Q)\colonequals \{E_1, \dots, E_\ell\}$ and $I=[0,1]$. Then $Q\times I$ is an $(n+1)$-dimensional simple polytope and $\mathcal{F}(Q\times I)= \{E_1\times I, \dots, E_\ell\times I,Q\times\{0\},Q\times\{1\}\}$. Let ${\bf a}\in {\bb{Z}}^{n+1}$ be a vector satisfying Lemma \ref{lem_out_pt}. Define a map $\lambda \colon \mathcal{F}(Q\times I) \to {\bb{Z}}^{n+1}$ by
\begin{align}\label{eq_new_lmd}
\lambda(F)=\begin{cases} \xi(E_i) \quad &\text{ if } F=E_i\times I \text{ for }i =1,2,\dots,\ell\\
{\bf a} \quad &\text{ if } F=Q\times \{0\}, Q\times \{1\}. \end{cases}
\end{align}
Then $\lambda$ is an $\mathcal{R}$-characteristic function on $P:=Q\times I$, and $M(P, \lambda)$ is a toric orbifold. Theorem \ref{thm_res_sing} gives a resolution of singularities
\begin{equation}\label{eq_res_sing}
M(P^{(d)},\lambda^{(d)})\to \dots \to M(P^{(1)},\lambda^{(1)})\to M(P,\lambda)
\end{equation}
for $M(P,\lambda)$, where $M(P^{(d)},\lambda^{(d)})$ is a toric manifold and the arrows represent the associated blowups.
Note that for any face $E$ of $Q$, the codimension of $E$ in $Q$ is the same as the codimension of the face $E\times I$ in $P$. Moreover, if $$E=\bigcap_{s=1}^{k} E_{j_s}$$ for some unique facets $ E_{j_1}, E_{j_2},\dots, E_{j_k}$ of $Q$, then $E_{j_1}\times I,E_{j_2}\times I,\dots,E_{j_k}\times I$ are facets of $P$ and $$E\times I=\bigcap_{s=1}^{k} (E_{j_s}\times I).$$
Now $\lambda(E_{j_s}\times I)=\xi(E_{j_s})$, and $\{\xi(E_{j_1}),\xi(E_{j_2}),\dots,\xi(E_{j_k})\}$ spans a direct summand of ${\bb{Z}}^{n+1}$. Then we have $$|G_{E\times I}(P,\lambda)|=1$$ for any face $E\times I$ of $P$, where $E$ is a face of $Q$. Therefore, if $|G_{F}(P,\lambda)|\neq 1$ for a face $F$ of $P$, then either $F\subseteq Q\times \{0\}$ or $F\subseteq Q\times \{1\}$. Thus we have $|G_{(E\times I)\cap P^{(j)}}(P^{(j)},\lambda^{(j)})|=1$ for every $j=1, \dots, d$. Therefore, the blowups in the resolution \eqref{eq_res_sing} need to be performed only along the faces arising from $Q\times\{0\}$ or $Q\times \{1\}\subset P$.
Since $P$ is a simple polytope, one can perform the necessary blowups in the resolution \eqref{eq_res_sing} such that $Q\times [\frac{1}{2}-\delta,\frac{1}{2}] \subset P^{(d)}$ for some $\delta>0$. Let
\begin{equation*}
\widetilde{P} :=\iota^{-1}(Q\times [0,1/2])\subset P^{(d)},
\end{equation*}
where $\iota$ is the inclusion map $$\iota\colon P^{(d)} \to P.$$
Then $\widetilde{P}$ is a simple polytope. Let
$$\pi' \colon M(P^{(d)}, \lambda^{(d)} )=(P^{(d)}\times T^{n+1})/{\sim} \longrightarrow P^{(d)}$$
be the quotient map. We prove that $(\pi')^{-1}(\widetilde{P})$ is a $T^{n+1}$-manifold with boundary, whose boundary is $N(Q,\xi )$.
Let $\alpha$ be a point in $\widetilde{P}$ with $\alpha \notin \{(x,\frac{1}{2}): x\in Q\}$. Then there exists a neighbourhood $U_{\alpha}$ of $\alpha$ in $\widetilde{P}$ which does not intersect $ \{(x,\frac{1}{2}): x\in Q\}$, and hence $(\pi')^{-1}(U_{\alpha})$ is an open subset of the smooth manifold $M(P^{(d)}, \lambda^{(d)})$.
Now if $\alpha=(x,\frac{1}{2})\in \widetilde{P}$ for some $x\in Q$, then consider the tubular neighbourhood $Q\times (\frac{1}{2}-\delta,\frac{1}{2}]$ of $Q\times \{\frac{1}{2}\}$ in $\widetilde{P}$. Note that $\{(x,\frac{1}{2}): x\in Q\}=Q\times \{\frac{1}{2}\}$, which can be identified with $Q$. Then
$$(Q\times (\frac{1}{2}-\delta,\frac{1}{2}]\times T^{n+1})/{\sim}= \Big((Q\times \{\frac{1}{2}\} \times T^{n+1})/{\sim'}\Big)\times (\frac{1}{2}-\delta,\frac{1}{2}].$$
The characteristic function on $P^{(d)}$ induces a hyper characteristic function on the facets of $Q\times \{\frac{1}{2}\}$, which is the same as the hyper characteristic function on the facets of $Q$. This implies that $(Q \times\{\frac{1}{2}\}\times T^{n+1})/{\sim'}$ is equivariantly diffeomorphic to $N(Q,\xi)$. Thus the manifold with boundary $(\pi')^{-1}(\widetilde{P})$ satisfies our claim.
Hence a quasi-contact toric manifold is equivariantly cobordant to zero.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=.6]
\begin{scope}[ yshift=80, xshift=-100]
\draw (0,0)--(1,-2)--(3,-2)--(4,0)--(2,2)--(0,0);
\node at (4.8,-1) {$(0,0,1)$};
\node at (4.4,1) {$(1,1,0)$};
\node at (-0.4,1) {$(1,1,1)$};
\node at (-.8,-1) {$(0,1,1)$};
\node at (2,-2.5) {$(1,0,0)$};
\node at (2,-3.2) {(A)};
\end{scope}
\begin{scope}[xshift=-180, yshift=-140]
\draw [thick, blue ](0,0)--(2,0);
\draw (2,0)--(3,1)--(1,2)--(-1,1)--(0,0);
\draw[fill=yellow, yellow, opacity=0.3](0,-1.3)--(2,-1.3)--(3,-0.3)--(1,0.7)--(-1,-.3)--(0,-1.3);
\draw[dotted, thick](0,-1.3)--(2,-1.3)--(3,-0.3)--(1,0.7)--(-1,-.3)--(0,-1.3);
\draw (-1,1)--(-1,-3)--(0,-4);
\draw (2,-4)--(3,-3)--(3,1);
\draw [thick, blue](0,-4)--(2,-4);
\draw (0,-4)--(0,0);
\draw (2,-4)--(2,0);
\draw[dashed] (-1,-3)--(1,-2)--(3,-3);
\draw [dashed] (1,2)--(1,-2);
\draw[->] (1.5,1)--(1.7,3);
\node[above] at (1.7,3) {$(1,2,0)$};
\draw[dotted, thick] (.5,1)--(0,1.5);
\draw[->] (0,1.5)--(-1.5,3);
\node[left] at (-1.5,3) {$(1,1,1)$};
\draw[->] (-.5,-.5)--(-2,-.5);
\node[left] at (-2,-.5) {$(0,1,1)$};
\node at (1,-4.6) {(B)};
\draw[->] (.5,-2) to [out=220, in=330] (-2,-2);
\node[left] at (-2,-2) {$(1,0,0)$};
\draw[dotted, thick] (.6,-3) to [out=260, in=340] (-.4,-3.57);
\draw[->] (-.3,-3.6)--(-2,-3.6);
\node[left] at (-2,-3.6) {$(1,2,0)$};
\draw[dotted, thick] (2.5,-.5)--(3,0);
\draw[->] (3,0)--(4,1);
\node[right] at (3.8,1.2) {$(1,1,0)$};
\draw[->] (2.5,-1.5)--(4,-1.5);
\node[right] at (4,-1.5) {$(0,0,1)$};
\draw[->] (1.25,0)--(3,2.25);
\node[right] at (3,2.5) {$F_1$};
\draw[->] (1,-4) to [out=330, in=200] (4, -4);
\node[right] at (4,-4) {$F_0$};
\end{scope}
\begin{scope}[xshift=120, yshift=-140]
\draw (0,0)--(2,0)--(3,1)--(1,2)--(-1,1)--(0,0);
\draw[dotted, thick](0,-1.3)--(2,-1.3)--(3,-0.3)--(1,0.7)--(-1,-.3)--(0,-1.3);
\draw[fill=yellow, yellow, opacity=0.3](0,-1.3)--(2,-1.3)--(3,-0.3)--(1,0.7)--(-1,-.3)--(0,-1.3);
\draw[fill=blue, blue, opacity=0.3](0,-3.3)--(2,-3.3)--(2.2,-4)--(-.2,-4)--(0,-3.3);
\draw (-1,1)--(-1,-3)--(-.2,-4)--(2.2,-4)--(3,-3)--(3,1);
\draw (2,-3.3)--(0,-3.3)--(0,0);
\draw (2.2,-4)--(2,-3.3)--(2,0);
\draw (0,-3.3)--(-.2,-4);
\draw[dashed] (-1,-3)--(1,-2)--(3,-3);
\draw [dashed] (1,2)--(1,-2);
\draw[->] (1.5,1)--(1.7,3);
\node[above] at (1.7,3) {$(1,2,0)$};
\draw[dotted, thick] (.5,1)--(0,1.5);
\draw[->] (0,1.5)--(-1.5,3);
\node[left] at (-1.5,3) {$(1,1,1)$};
\draw[->] (-.5,-.5)--(-2,-.5);
\node[left] at (-2,-.5) {$(0,1,1)$};
\draw[->] (.5,-2) to [out=220, in=330] (-2,-2);
\node[left] at (-2,-2) {$(1,0,0)$};
\draw[dotted, thick] (.6,-3) to [out=260, in=340] (-.4,-3.57);
\draw[->] (-.3,-3.6)--(-2,-3.6);
\node[left] at (-2,-3.6) {$(1,2,0)$};
\draw[->] (1.25,0)--(3,2.25);
\node[right] at (3,2.5) {$F_1$};
\draw[dotted, thick] (2.5,-.5)--(3,0);
\draw[->] (3,0)--(4,1);
\node[right] at (3.8,1.2) {$(1,1,0)$};
\draw[->] (2.5,-1.5)--(4,-1.5);
\node[right] at (4,-1.5) {$(0,0,1)$};
\draw[->] (1,-3.7) to [out=330, in=200] (4, -4);
\node[right] at (4,-4) {$(1,1,0)$};
\node at (1,-4.5) {(C)};
\end{scope}
\end{tikzpicture}
\caption{Procedure of constructing a manifold whose boundary is a given quasi-contact toric manifold.}
\label{Fig cobor_q_mfd}
\end{center}
\end{figure}
\begin{example}
Consider the pentagon $Q$ and the hyper characteristic function
$$\xi\colon \mathcal{F}(Q)\to {\bb{Z}}^3$$
as in Figure \ref{Fig cobor_q_mfd}(A). Then we have the quasi-contact toric manifold $N(Q,\xi)$. There exists a vector ${\bf a}\in {\bb{Z}}^3$ which satisfies Lemma \ref{lem_out_pt} for this pair $(Q, \xi)$; in this case one can take ${\bf a}=(1,2,0)$. Then using \eqref{eq_new_lmd} we can define an $\mathcal{R}$-characteristic function $\lambda$ on the pentagonal prism $P:=Q\times I$, see Figure \ref{Fig cobor_q_mfd}(B). Thus, we have the toric orbifold $M(P, \lambda )$.
In the simple polytope $P$, the only faces $F$ with $|G_{F}|\neq 1$ are the two edges $F_0$ and $F_1$, coloured blue in Figure \ref{Fig cobor_q_mfd}(B), and the $4$ vertices of these edges. Using the techniques of the proof of Theorem \ref{thm_res_sing}, we first blow up $P$ along the face $F_0$ and get the simple polytope $P'$ as in Figure \ref{Fig cobor_q_mfd}(C). We denote the corresponding blowup of the toric orbifold $M(P, \lambda )$ by $M(P',\lambda')$. Note that after this blowup the only faces $F$ in the polytope $P'$ with $|G_{F}|\neq 1$ are $F_1$ and the two vertices of $F_1$, see Figure \ref{Fig cobor_q_mfd}(C).
Note that $Q\times \{\frac{1}{2}\} \subseteq P' \subseteq P $, shown as the dotted pentagon filled with yellow in Figures \ref{Fig cobor_q_mfd}(B) and \ref{Fig cobor_q_mfd}(C). Also, $Q\times \{\frac{1}{2}\}$ can be identified with $Q$. Let $\widetilde{P}$ denote the lower portion of $P'$ cut off by this pentagon, that is, $\widetilde{P} = (Q \times [0, \frac{1}{2}]) \cap P'$. Then, for this example, $N(Q,\xi)$ is equivariantly the boundary of the oriented smooth manifold $(\pi')^{-1}(\widetilde{P})$.\hfill $\square$ \vspace{0.03in}
\end{example}
\begin{corollary}
Any good contact toric manifold is equivariantly the boundary of an oriented smooth manifold.
\end{corollary}
\begin{proof}
This follows from Example \ref{ex_gd_ct_toric_mfd} and Theorem \ref{thm_zoro_cob}.
\end{proof}
\begin{corollary}\label{cor_lens_sp}
Any generalized lens space is equivariantly the boundary of an oriented smooth manifold.
\end{corollary}
\begin{proof}
This follows from Example \ref{ex_gen_lens_sp} and Theorem \ref{thm_zoro_cob}.
\end{proof}
We note that the conclusion of Corollary \ref{cor_lens_sp} can also be obtained from \cite{Han}; however, our proof is more geometric and explicit. We also note that the article \cite{SS} gave a partial answer to Corollary \ref{cor_lens_sp} under certain number-theoretic sufficient conditions.
\begin{remark}
Using Theorem \ref{thm_zoro_cob} and \cite[Theorem 4.9]{MS}, we get that all Stiefel-Whitney numbers of quasi-contact toric manifolds are zero.
\end{remark}
\subsection*{Acknowledgments}
The authors would like to thank Dong Youp Suh and Jongbaek Song for helpful discussions. The first author thanks IIT Madras for a PhD fellowship. The second author thanks the International Office, IIT Madras, and the Science and Engineering Research Board, India, for research grants. The third author thanks IIT Madras for a PDEF fellowship and IMSc for a PDF fellowship.
\bibliographystyle{abbrv}
\section{INTRODUCTION}
The general topic of Intelligent Transportation Systems (ITS) has been widely explored in search of solutions to transportation problems. Examples of popular research directions include driver assistance systems for safe driving and unmanned autonomous parking. Extensive research has been done in these areas, however, with limited progress. One major challenge comes from the fact that there are so many uncertainties in driving that an accurate analytical model of the system is difficult to obtain. Even with an advanced controller that is able to capture the complexity of the driving process, the high computational cost of real-time processing hinders the wide application of these research findings. A big breakthrough in this area is Artificial Intelligence (AI) techniques, which aim at human-like driving control. One of the most widely applied techniques is fuzzy logic control.
First proposed by L.A. Zadeh, fuzzy logic control makes control decisions in a way that resembles human reasoning ([1],[2]). A Fuzzy Logic Controller (FLC) does not require extensive knowledge of the process, yet achieves satisfactory results. The superior performance comes from the robustness of the underlying fuzzy logic. Fuzzy logic control is commonly applied in speed control and autonomous parking. Early development in speed control focused on conventional Cruise Control (CC), where the FLC controls the accelerator to maintain a constant speed ([3],[4]). Adaptive Cruise Control (ACC) takes one step further to keep a safety gap between two vehicles in the same lane ([4],[5]). Research on FLC applications to autonomous parking can be grouped into two general categories, the tracking method ([6],[7]) and the posture stabilization method ([8],[9]). The tracking method focuses on control algorithms to define an optimal trajectory, while posture stabilization aims at achieving an optimal final posture regardless of the initial status. One major drawback of both approaches is the high computational cost, which prohibits wide application in industry.
The work presented in this paper is an intelligent auto-parking system with a hybrid fuzzy logic controller as an effective solution to parking problems. Unmanned vehicles equipped with the system are able to achieve slot detection and auto-parking in either parallel or vertical parking mode. Fundamental control logics, including speed control, turning angle control and posture stabilization, are designed and implemented to ensure accuracy and efficiency in the parking process. The system is divided into different functional blocks (auto-driving, slot detection, parallel parking and vertical parking), each supported by a combination of different control logics. Section II gives an overview of the Hybrid Fuzzy Controller (HFC) by comparing it with the previous Fuzzy-Based Onboard System (FBOS). The detailed HFC design is discussed in Sections III and IV. Simulation and real-model testing are carried out and the results are summarized in Section V. Section VI briefly discusses the directions of future improvement.
\section{Hybrid Fuzzy Controller}
The Hybrid Fuzzy Controller (HFC) presented in this paper is an improvement of the Fuzzy-Based Onboard System (FBOS) designed by the same research group [10]. The basic functionalities remain the same. Vehicles equipped with HFC or FBOS are able to achieve autonomous parking without human intervention. Currently, most of the driving assistance systems on the market require drivers to find a suitable parking slot. As its major advantage, FBOS integrates two functions, parking slot detection and autonomous parking, into one single system. Moreover, different parking modes, parallel parking or vertical parking, can be automatically selected.
The flow of autonomous parking under the control of FBOS is recaptured here. The driver leaves the car at the car-park entrance and switches it to auto-parking mode. The car proceeds into the car park and starts searching for available slots under searching mode. Once a suitable slot is detected, either parallel-parking or vertical-parking mode is activated. An exit mode is also implemented, which enables the car to move out of the parking slot and drive to the car-park exit.
In the entire process, FBOS makes control decisions by analysing measurements from different sensors, which include angular/linear velocities and distances. In the 1:14 mini-scale vehicle prototype, measurements are gathered from infrared sensors and an IMU. Based on the data, three important control tasks are performed, namely posture stabilization, turning control and parking slot detection. The original FBOS delivered accurate slot detection and a smooth parking process in testing. However, the testing was carried out in a less dynamic environment, where the same slot dimensions, a fixed vehicle size and uniform ground conditions were maintained. In real-life situations, there are more complications which tend to degrade controller performance. Among all disturbances, friction (between the tires and the ground) and vehicle length are the two most important ones. If there are large deviations in these parameters, the outcome of autonomous parking might not be as good.
The fuzzy logic controller is therefore re-designed to overcome the limitations discussed above. The new Hybrid Fuzzy Controller (HFC) significantly increases the system robustness. It allows vehicles with different lengths to park properly and smoothly regardless of ground conditions. Major improvements are made in turning control and posture stabilization. Sections III and IV discuss the design of the HFCs in detail.
\section{Turning Control}
The most crucial step in autonomous parking is to turn the vehicle by a fixed angle. Take vertical parking as an example: the vehicle needs to turn about 90 degrees to fit into the parking slot (assuming that the vehicle has already adjusted its position properly before parking). The accuracy of this step is critical for successful parking, especially when the size of the parking slot is limited.
The previous steering angle controller based on fuzzy logic demonstrated good results. However, the controller performance is degraded if there is a large deviation in system parameters. An intuitive example is the vehicle length. A longer vehicle normally has a larger turning radius than a shorter one. Therefore, if the steering-angle configuration is optimized for a shorter vehicle, it will not work on a longer one. The other major concern is road friction. With coarse ground and new tires, the friction is noticeably larger and the vehicle tends to move much slower. The parking process is not smooth and takes a longer time. On the contrary, with a wet floor and worn tires, the result will be fast movement or even slipping. The turning trajectory will deviate from the desired path in the above cases.
The improved HFC for turning control consists of two separate control paths. One is to control the steering angle of the front wheels. This is to ensure that the vehicle follows a fixed trajectory during turning regardless of its own length. The other control path manages the speed during turning so that there is minimal jittering or slipping when the friction varies. Each of the paths consists of two fuzzy logic controllers. One is the Base Fuzzy Controller (BFC) that controls the steering angle and speed. The other is the Supervisory Fuzzy Controller (SFC) that fine-tunes the control signals.
The design of the BFC remains the same as in FBOS. The vehicle is equipped with an IMU for angle and speed measurement. Before turning starts, the angle measurement is cleared to zero. The set point is the desired angle to be turned (e.g. $90^\circ$ in vertical parking). During turning, the velocity (linear and/or angular) and the current angle are continuously monitored. Based on the control law, both the front-wheel steering angle and the speed should be large at the start and gradually reduce as the vehicle approaches the pre-set target.
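The control law just described can be prototyped in a few lines of fuzzy logic. The sketch below is a deliberately simplified Mamdani-style controller with made-up triangular membership functions and singleton outputs; the actual membership shapes and rule table used by the BFC are those of Fig.2 and Table I, not these.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bfc_output(e, e_dot):
    """Toy Mamdani-style BFC: e is the remaining angle (degrees, 0..90) and
    e_dot its rate of change.  Large error -> large steering voltage; a
    fast-shrinking error damps the command toward zero."""
    mu_small = tri(e, -1.0, 0.0, 45.0)
    mu_large = tri(e, 0.0, 90.0, 181.0)
    brake = min(1.0, max(0.0, -e_dot / 45.0))   # strength of "error shrinking"
    w_zero = brake
    w_small = mu_small
    w_large = mu_large * (1.0 - 0.5 * brake)
    total = w_zero + w_small + w_large
    if total == 0.0:
        return 0.0
    # defuzzify with singleton outputs: ZERO -> 0 V, SMALL -> 0.2 V, LARGE -> 1.0 V
    return (0.2 * w_small + 1.0 * w_large) / total
```

With these (invented) parameters the command is largest when the full $90^\circ$ remains and decays as the error and its rate of change shrink, which is the qualitative behaviour required of the BFC.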
\subsection{Steering Angle Control}
\begin{figure}[thpb]
\centering
\includegraphics[width=7cm,height=4cm]{anglecontrol.jpg}
\caption{Steering Angle Control}
\label{figurelabel}
\end{figure}
Fig.1 illustrates the block diagram for the control path of the front-wheel steering angle.\footnote{Symbols in the control diagram: $\beta$ is the set angle, $a$ is the linear acceleration and $v$ is the linear velocity; $\alpha$ is the angular acceleration, $\omega$ is the angular velocity and $\gamma$ is the accumulated angle.} Inputs to the BFC are the difference between the set angle and the current angle ($e$), and the change rate of that difference ($\hat{e}$). The two inputs are fuzzified based on the membership functions in Fig.2. The output is a voltage signal ($u_1$) sent to the servo, controlling the steering angle. The larger the voltage signal, the larger the steering angle. Positive voltage (i.e. positive direction) indicates turning in the clockwise direction and vice versa. The IF-THEN rules for the BFC are summarized in Table I.
\begin{figure}[thpb]
\centering
\includegraphics[width=5cm,height=2.5cm]{anglediff.jpg}
\end{figure}
\begin{figure}[thpb]
\centering$
\begin{array}{cc}
\includegraphics[width=2.7cm,height=1.8cm]{angleerror.jpg}&
\includegraphics[width=4.5cm,height=2cm]{sigu1.jpg}
\end{array}$
\caption{BFC Inputs/Output in Steering Angle Control}
\end{figure}
The BFC gives satisfactory performance for normal-sized vehicles. Path {\bf a} in Fig.3 illustrates the trajectory of such a vehicle turning $90^\circ$. However, deviation occurs if there is a significant variation in vehicle length. Consider the extreme case where a Mini Cooper and a limo are trying to fit into the same vertical parking slot. The limo will follow path {\bf c} while the Mini Cooper follows path {\bf a} instead. In spite of the different trajectories, it is also noticed that for each path the turning radius is smaller at the beginning and larger at the end. Therefore, a Supervisory Fuzzy Controller (SFC) is designed to achieve a uniform turning trajectory (path {\bf b} in Fig.3).
\begin{table}
\caption{IF-THEN Rules for BFC in Steering Angle Control}
\begin{center}
\includegraphics[width=6.5cm,height=3cm]{steeringtable.jpg}
\end{center}
\end{table}
\begin{figure}[thpb]
\centering
\includegraphics[width=4cm,height=3cm]{trajectory.jpg}
\caption{Turning Trajectories for Vehicles with Different Lengths}
\label{figurelabel}
\end{figure}
One input to the SFC is the angle difference ($e$) between the set point and the current angle, while the other is the current turning radius ($r_1$). The current turning radius can be estimated as the ratio between the linear velocity ($v$) and the angular velocity ($\omega$), normalized around zero as in Eq (1).
\begin{equation}
r_1=k_1\times \frac{v}{\omega}-1, \text{ $k_1$ is the normalization coefficient}
\end{equation}
$k_1$ is determined by the average turning radius of the entire trajectory. Therefore, in the ideal situation, the normalized radius should be negative at the start, zero in the middle and positive at the end. The output is an incremental voltage signal ($u_2$) sent to the servo for steering angle control. It is added on top of the output signal ($u_1$) from the BFC. (Note that the magnitude of the voltage output from the SFC is much smaller than that of the BFC, usually around one-tenth.) The membership functions for the normalized radius (input) and the incremental servo voltage (output) are given in Fig.4.
\begin{figure}[thpb]
\centering$
\begin{array}{cc}
\includegraphics[width=3.5cm,height=2.3cm]{sigr1.jpg}&
\includegraphics[width=3.5cm,height=2.3cm]{sigu2.jpg}
\end{array}$
\caption{SFC Inputs/Output in Steering Angle Control}
\end{figure}
Take vertical parking as an example, where the vehicle is trying to turn $90^\circ$ in the clockwise direction as in Fig.3. The control decision is made based on the following reasoning. At the beginning of turning, if the normalized radius is negative small, no modification is required and the output voltage is zero. If it is negative large, the vehicle must be shorter than average, hence the steering angle should be reduced. A small positive voltage signal will be sent to the servo to reduce the steering angle. If it is zero or positive, a small negative voltage should be sent so that the steering angle is increased. \footnote{Throughout the paper, it is assumed that turning in the clockwise direction is positive.} Expressed in IF-THEN rule format:
IF e is NL AND $r_1$ is NL, $u_2$ is PS;
IF e is NL AND $r_1$ is NS, $u_2$ is ZO;
IF e is NL AND $r_1$ is ZO, $u_2$ is NS;
IF e is NL AND $r_1$ is PS, $u_2$ is NL;
Table II summarizes the complete set of IF-THEN rules.
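As a concrete illustration, Eq (1) and the four rules listed above can be expressed directly in code. The sketch below is our own: it reproduces only the quoted $e=$ NL row of the rule base (the complete table is Table II), and the calibration constant $k_1$ passed in is hypothetical.

```python
def normalized_radius(v, omega, k1):
    """Eq (1): r1 = k1 * v / omega - 1, calibrated so that r1 = 0 at the
    average turning radius of the desired trajectory."""
    return k1 * v / omega - 1.0

# The e = NL row of the SFC rule base, as quoted in the text.
SFC_RULES_E_NL = {"NL": "PS", "NS": "ZO", "ZO": "NS", "PS": "NL"}

def sfc_steering_increment(r1_label, e_label="NL"):
    """Return the linguistic label of u2 for the fuzzified inputs.
    Only the e = NL row is sketched here; the other rows are in Table II."""
    if e_label != "NL":
        raise NotImplementedError("only the e = NL row is reproduced here")
    return SFC_RULES_E_NL[r1_label]
```

For example, with $k_1$ chosen so that the average radius corresponds to $v/\omega = 0.5$, a slower-turning (larger-radius) vehicle yields $r_1>0$ and the rule base pushes $u_2$ negative, as described in the text.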
\begin{table}
\caption{IF-THEN Rules for SFC in Steering Angle Control}
\begin{center}
\includegraphics[width=6cm,height=3cm]{steeringtable2.jpg}
\end{center}
\end{table}
\subsection{Speed Control}
\begin{figure}[thpb]
\centering
\includegraphics[width=6.5cm,height=4cm]{speedcontrol.jpg}
\caption{Speed Control}
\label{figurelabel}
\end{figure}
Fig.5 illustrates the control logic for the other path, i.e. speed control during turning. The speed is governed by the same principle and the inputs of the BFC remain the same as in steering angle control: the angle error $e$ (between the current angle and the set point) and the change in that error $\hat{e}$. However, the case can be simplified since only the magnitudes of the inputs are important. The output is a signal $u_3$ that sets the Pulse-Width-Modulation (PWM) duty cycle. The PWM duty cycle governs the power input to the motor, hence the speed. The larger the duty cycle, the larger the speed. The membership functions for the inputs and output are shown in Fig.6, while Table III summarizes the IF-THEN rules.
\begin{figure}[thpb]
\centering$
\begin{array}{ccc}
\includegraphics[width=2.5cm,height=2cm]{anglediff2.jpg}&
\includegraphics[width=2.5cm,height=2cm]{angleerror2.jpg}&
\includegraphics[width=2.5cm,height=2cm]{sigu3.jpg}
\end{array}$
\caption{BFC Inputs/Output in Speed Control}
\end{figure}
\begin{table}
\caption{IF-THEN Rules for BFC in Speed Control}
\begin{center}
\includegraphics[width=4cm,height=2.8cm]{speedtable.jpg}
\end{center}
\end{table}
The actual speed under the same PWM duty cycle may vary due to friction. Therefore, an SFC is designed to adjust the PWM duty cycle so that the vehicle maintains a constant speed in different environments. If the friction is fixed, the ratio between the velocity and the PWM duty cycle should be fixed as well (as long as the velocity is stable). With larger friction, the ratio is smaller and vice versa. Hence one of the inputs to the SFC is given by Eq (2).
\begin{equation}
r_2=k_2\times \frac{v}{u_3}-1, \text{ $k_2$ is the normalization coefficient}
\end{equation}
where $u_3$ is the output signal from the BFC. The coefficient $k_2$ is defined such that $r_2$ equals zero when there is medium friction. Another input is the linear acceleration ($a$) measured by the IMU. The output signal $u_4$ is the incremental signal to be added on top of the output signal of the BFC. (Note that the magnitude of $u_4$ is much smaller than that of $u_3$, usually around one-tenth.) The sum of $u_3$ and $u_4$ determines the duty cycle of the PWM controlling the motor speed. The membership functions of the inputs/output are given in Fig.7.
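Eq (2) effectively acts as a friction indicator. The small sketch below is our own, with an invented calibration point; it shows the intended sign convention: $r_2<0$ under high friction (speed below target for the commanded duty cycle) and $r_2>0$ under low friction or slipping.

```python
def friction_indicator(v, u3, k2):
    """Eq (2): r2 = k2 * v / u3 - 1, where v is the measured velocity and
    u3 is the BFC duty-cycle command.  k2 is chosen so r2 = 0 at medium
    friction."""
    return k2 * v / u3 - 1.0

# hypothetical calibration: at medium friction, duty cycle 0.8 yields v = 0.5 m/s
K2 = 0.8 / 0.5

def needs_more_power(v, u3):
    """High friction (r2 < 0) with a settled velocity calls for a positive
    SFC increment u4, per the first rule quoted below in the text."""
    return friction_indicator(v, u3, K2) < 0.0
```

The calibration values (0.8 duty cycle, 0.5 m/s) are illustrative only; in the actual system $k_2$ would be identified from the prototype's measured speed response.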
\begin{figure}[thpb]
\centering$
\begin{array}{ccc}
\includegraphics[width=2.7cm,height=2cm]{sigr2.jpg}&
\includegraphics[width=2.7cm,height=2cm]{linearacc.jpg}&
\includegraphics[width=2.5cm,height=2cm]{sigu4.jpg}
\end{array}$
\caption{SFC Inputs/Output in Speed Control}
\end{figure}
Here are two examples of the SFC IF-THEN rules:

IF $r_2$ is NS AND $a$ is ZO, THEN $u_4$ is PS;

IF $r_2$ is NS AND $a$ is PS, THEN $u_4$ is ZO.
A negative small $r_2$ indicates that the friction is slightly larger than normal, so the velocity has not reached the desired level. If the current acceleration is zero, the velocity has already settled; therefore, the PWM duty cycle should be increased to counteract the additional friction, and the SFC output $u_4$ is a positive small signal. However, if the acceleration is a small positive value, the implication is that the velocity will increase further: it has not yet reached its stable value, but it is moving in the correct direction, so no modification is required at this stage. The system will evaluate the readings at the next sampling time and adjust the SFC output accordingly. The complete IF-THEN rules are summarized in Table IV.
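The reasoning above can be mirrored by a minimal lookup over the quoted rules; this sketch encodes only the two example rules with the linguistic labels NS/ZO/PS, and does not reproduce the full rule base of Table IV.

```python
# Minimal rule-table lookup for the two SFC example rules quoted above.
# Keys are (r2 label, acceleration label); values are the u4 label.

SFC_RULES = {
    ("NS", "ZO"): "PS",  # friction slightly high, speed settled -> raise duty cycle
    ("NS", "PS"): "ZO",  # speed still rising -> leave duty cycle unchanged
}

def sfc_output(r2_label, a_label):
    """Return the u4 label, or None if the pair is not in this partial table."""
    return SFC_RULES.get((r2_label, a_label))

print(sfc_output("NS", "ZO"))  # PS
```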
\begin{table}
\caption{IF-THEN Rules for SFC in Speed Control}
\begin{center}
\includegraphics[width=4.5cm,height=3cm]{speedtable2.jpg}
\end{center}
\end{table}
\section{Posture Stabilization}
\begin{figure}[thpb]
\centering
\includegraphics[width=7cm,height=4cm]{posturecontrol.jpg}
\caption{Posture Stabilization}
\label{figurelabel}
\end{figure}
In the previous FBOS design, posture stabilization is based on measurements from two infrared sensors located at the right side of the car, one at the front ($d_1$) and one at the back ($d_2$). The FBOS controls the car to move in a straight line while keeping a safe distance away from the wall by adjusting the steering angle of the front wheels. One of the inputs is the distance measurement of the sensor located at the front right-side of the car ($X_d$). The other input is the difference between the readings of the two sensors located at the right side ($X_e = d_1-d_2$). The output signal is the servo voltage ($u_5$) to control steering angle. The inputs/output membership functions are presented in Fig.8 while the IF-THEN rules are summarized in Table V.
\begin{figure}[thpb]
\centering$
\begin{array}{ccc}
\includegraphics[width=2.7cm,height=2cm]{sigxd.jpg}&
\includegraphics[width=2.7cm,height=2cm]{sigxe.jpg}&
\includegraphics[width=2.7cm,height=2cm]{sigu5.jpg}
\end{array}$
\caption{BFC Inputs/Output in Posture Stabilization}
\end{figure}
The control rules are designed on the assumption that a large $X_e$ indicates a large angular deviation from the forward direction. However, this is not always true. For a longer vehicle, a small angular deviation may result in a large $X_e$, since the two sensors are located further apart. Conversely, a short vehicle may have a large angular deviation but only a moderate $X_e$, due to the closeness of the two sensors. Therefore, vehicle length must be taken into consideration for more effective posture stabilization.
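The geometric point can be made concrete with a small sketch: for a fixed heading deviation $\theta$, the reading $X_e$ scales with the separation between the two sensors (roughly $X_e \approx L\tan\theta$, where $L$ is the sensor separation). The separations below are illustrative assumptions, not the vehicle's actual dimensions.

```python
import math

# Same heading deviation theta, different sensor separations L:
# X_e = d1 - d2 is approximately L * tan(theta), so a longer vehicle
# produces a larger X_e for the same angular deviation.

def x_e(theta_rad, sensor_separation):
    """Approximate side-sensor reading difference for heading deviation theta."""
    return sensor_separation * math.tan(theta_rad)

theta = math.radians(5)          # the same small angular deviation
short_car = x_e(theta, 0.20)     # sensors 20 cm apart (assumed)
long_car = x_e(theta, 0.45)      # sensors 45 cm apart (assumed)
print(long_car > short_car)      # True: same theta, larger X_e
```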
\begin{table}
\caption{IF-THEN Rules for BFC in Posture Stabilization}
\begin{center}
\includegraphics[width=4.5cm,height=3cm]{posturetable1.jpg}
\end{center}
\end{table}
As in the case of steering angle control, a Hybrid Fuzzy Controller (HFC) is designed to improve the performance. The previous fuzzy logic controller is treated as the Base Fuzzy Controller (BFC), and a Supervisory Fuzzy Controller (SFC) is designed to adjust for the variation in vehicle length. The control diagram is illustrated in Fig.9.
\begin{figure}[thpb]
\centering$
\begin{array}{cc}
\includegraphics[width=3.2cm,height=2cm]{sigr3.jpg}&
\includegraphics[width=3.2cm,height=2cm]{sigu6.jpg}
\end{array}$
\caption{SFC Input/Output in Posture Stabilization}
\end{figure}
Consider the case where a long vehicle has a small negative angular deviation. Since $X_e$ can be positive large in this case, the original control signal from the BFC is positive large. With a large steering angle at the front wheels, $X_e$ changes quickly, so there may be some unnecessary oscillation before the car can move forward steadily. As discussed before, a longer vehicle has a larger turning radius for the same front-wheel steering angle. Therefore, the estimated turning radius is taken as one of the inputs to the SFC: if both $X_e$ and the estimated radius are large, the vehicle is long, the angular deviation is not actually large, and the original steering angle can be reduced. The output signal of the SFC is the incremental voltage signal sent to the servo for steering angle adjustment ($u_6$). The estimated radius ($r_3$) is normalized by Eq (3).
\begin{equation}
r_3=k_3\times \frac{v}{\omega}-1
\end{equation}
where $v/\omega$ is the estimated turning radius and $k_3$ is a normalization coefficient. The membership function for $X_e$ is the same as in the BFC; the membership functions for $r_3$ and $u_6$ are shown in Fig.10. Table VI summarizes the IF-THEN rules of the SFC.
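A sketch of the normalization in Eq (3), with the turning radius estimated as $v/\omega$ from speed and yaw rate; the value of $k_3$ and the sampled measurements are illustrative assumptions, not the paper's tuning.

```python
# Sketch of the turning-radius input to the SFC (Eq. 3). k3 is a
# hypothetical normalization constant chosen so that r3 == 0 for a
# "normal-length" car; v and omega are assumed sample measurements.

def normalized_radius(v, omega, k3):
    """r3 = k3 * (v / omega) - 1, where v / omega estimates the turning radius."""
    return k3 * (v / omega) - 1.0

k3 = 2.0
v, omega = 0.3, 1.2        # assumed speed (m/s) and yaw rate (rad/s)
radius = v / omega         # 0.25 m estimated turning radius
print(normalized_radius(v, omega, k3))  # 2.0 * 0.25 - 1 = -0.5
```

A negative $r_3$ (radius below normal) together with a large $X_e$ would leave the BFC signal essentially unchanged, while a large positive $r_3$ triggers the reduction discussed above.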
\begin{table}
\caption{IF-THEN Rules for SFC in Posture Stabilization}
\begin{center}
\includegraphics[width=5cm,height=3cm]{posturetable2.jpg}
\end{center}
\end{table}
\section{Simulation and Implementation}
Several experiments are simulated in MATLAB to verify the robustness and reliability of the newly designed Hybrid Fuzzy Controllers (HFCs) by comparing the responses under the HFC and the original FBOS. The experiments follow the controlled-variable method: only one disturbance is introduced at a time, so that the performance of each individual HFC can be evaluated. The parameters in each fuzzy rule base are tuned by trial and error before the actual experiments.
Snapshot pictures are extracted from the simulation at a frequency of 0.5 frames per second. The performance of the different controllers can then be compared using the sequential images of the parking process. In addition, the steering angle of the front wheels is shown for reference.
Both parallel and vertical parking are controlled by the same fundamental principles. Only the test results for parallel parking are presented here due to space constraints.
The first experiment tests the robustness of the improved HFC when the vehicle length varies, with a comparison test using the original FBOS. In order to make the simulation closer to real-life situations, the dimensions of the long vehicle are set to one-fourteenth of a normal bus, i.e. 500mm in length and 180mm in width. Fig.11 presents the sequential images extracted from the simulated experiment of long-vehicle parallel parking controlled by the original FBOS, while Fig.12 demonstrates the same parking process controlled by the improved HFC.
\begin{figure}[thpb]
\centering
\includegraphics[width=7cm,height=1.8cm]{LongCarFBOS1.jpg}
\includegraphics[width=7cm,height=1.8cm]{LongCarFBOS2.jpg}
\caption{Long vehicle parallel parking under the control of FBOS}
\label{fig:longcarfbos}
\end{figure}
In Fig.11, a collision can be observed in the last few pictures between the vehicle and the one parked in the neighbouring slot. The result shows that a steering configuration tuned for a small car does not guarantee the same satisfactory performance when applied to a longer vehicle, confirming the earlier observation that longer vehicles require a larger turning radius. In Fig.12, the issue is resolved with the improved HFC: based on real-time measurements from the sensors and IMU, the vehicle adjusts its steering angles to adapt to the variation in length, thus avoiding significant deviation from the optimal turning trajectory.
\begin{figure}[thpb]
\centering
\includegraphics[width=7cm,height=1.8cm]{LongCarAFC1.jpg}
\includegraphics[width=7cm,height=1.8cm]{LongCarAFC2.jpg}
\caption{Long vehicle parallel parking under the control of the improved HFC}
\label{fig:longcarhfc}
\end{figure}
The second experiment is designed in a similar way, to test the robustness of the improved HFC under different ground conditions. Simulated experiments are conducted to compare the two controllers given the same change (increased friction) in the parking environment. The friction simulated here follows the Coulomb friction model, $f=\mu mg$, where $\mu$ is the friction coefficient. The increased friction counteracts part of the motor power, thus slowing down the entire parking process.
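The effect of the simulated friction can be sketched with a toy longitudinal model: the friction force $f=\mu mg$ opposes the motor force, so a rougher surface yields a lower acceleration and a slower parking run. All numbers below are illustrative assumptions, not parameters from the simulation.

```python
# Toy longitudinal model for the friction experiment: net acceleration
# is (motor force - mu * m * g) / m, so a larger friction coefficient
# slows the vehicle. Mass and force values are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def acceleration(motor_force, mass, mu):
    """Net longitudinal acceleration under Coulomb friction f = mu * m * g."""
    return (motor_force - mu * mass * G) / mass

m, F = 1.2, 3.0                        # assumed model-car mass (kg), motor force (N)
a_smooth = acceleration(F, m, mu=0.05)
a_rough = acceleration(F, m, mu=0.15)
print(a_rough < a_smooth)              # True: more friction, less acceleration
```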
Fig.13 presents the captured images of the parking process under the control of the original FBOS.
Since the images are captured every two seconds and only the last image shows the parking complete, the parking takes around 12 seconds. Fig.14 illustrates the result using the improved HFC: the parking process ends at the fifth image, so the total time required is only around 10 seconds. Comparing the time cost of the two parking processes, the HFC clearly outperforms the original FBOS.
\begin{figure}[thpb]
\centering
\includegraphics[width=7cm,height=1.8cm]{FrictionFBOS1.jpg}
\includegraphics[width=7cm,height=1.8cm]{FrictionFBOS2.jpg}
\caption{Coarse ground parallel parking under the control of FBOS}
\label{fig:frictionfbos}
\end{figure}
\begin{figure}[thpb]
\centering
\includegraphics[width=7cm,height=1.8cm]{FrictionAFC1.jpg}
\includegraphics[width=7cm,height=1.8cm]{FrictionAFC2.jpg}
\caption{Coarse ground parallel parking under the control of the improved HFC}
\label{fig:frictionhfc}
\end{figure}
\section{Conclusions and Future Work}
In this paper, an intelligent autonomous parking system with Hybrid Fuzzy Controllers (HFCs) is proposed. Vehicles equipped with the system are able to perform slot detection and auto-parking in either parallel or vertical parking mode. A major improvement over previous work is the design and implementation of HFCs for two critical parking steps: turning control and posture stabilization. In turning control, two separate control paths, steering angle control and speed control, work simultaneously to achieve better performance. Both paths use the proposed HFC, consisting of a Base Fuzzy Controller (BFC) and a Supervisory Fuzzy Controller (SFC). In posture stabilization, the proposed HFC is also used to improve robustness. The optimized controllers ensure a smooth and efficient parking process even when there are variations in vehicle length and ground conditions.
The deployment of the HFC improves the system performance, and at the same time opens up opportunities for future research. The current SFC only modifies the BFC's output; future work could explore a dynamic fuzzy rule base, which has great potential to make the system even more robust and efficient. However, the accompanying complications should be addressed properly: with more advanced control rules, the complexity of the implementation increases rapidly, and whether the improvement in performance justifies the increased cost should be carefully analysed, since practical application is one of the main objectives in developing this auto-parking system.
\section{Introduction}\label{Section:Introduction}
The classical theorem of Sard \cite{Sard:Theorem} asserts that for a $C^2$-smooth function $f$ in a planar domain $\Omega$ almost every value is a regular value. That is, for almost all $t\in \mathbb R$ the set $f^{-1}(t)$ does not intersect the critical set of $f$, and hence, $f^{-1}(t)$ is an embedded $1$-dimensional $C^2$-smooth submanifold of the plane. This theorem is sharp, in the sense that the $C^2$-smoothness cannot be relaxed to $C^1$-smoothness, as was shown by Whitney \cite{Whitney:SardOptimal}.
In fact, Sard's theorem and some of the other theorems that we quote below have more general statements that hold for maps defined in subsets of $\mathbb R^n$, taking values in $\mathbb R^m$, and having appropriate regularity. In order to facilitate the comparison to our results, we will only give formulations in the case of real-valued functions defined in planar domains.
Several generalizations and improvements of Sard's theorem have been proved since the original theorem was published. In particular, Dubovitskii \cite{Dubovitskii:Sard} proved that a $C^1$-smooth function $f$ in a planar domain has the property that for a.e.\ value $t\in \mathbb R$ the set $f^{-1}(t)$ intersects the critical set in a set of Hausdorff $1$-measure zero. De Pascale \cite{DePascale:Sard} extended the conclusion of Sard's theorem to Sobolev functions of the class $W^{2,p}$, where $p>2$. For other versions of Sard's theorem in the setting of H\"older and Sobolev spaces see \cite{Bates:Sard}, \cite{BojarskiHajlaszStrzelecki:Sard}, \cite{Figalli:Sard}, \cite{Norton:Sard}.
Now, we turn our attention to the structure of the level sets of functions, instead of discussing the critical set. Theorem 1.6 in \cite{BojarskiHajlaszStrzelecki:Sard} states that if $f\in W^{1,p}_{\loc}(\mathbb R^2)$, then there exists a Borel representative of $f$ such that for a.e.\ $t\in \mathbb R$ the level set $f^{-1}(t)$ is equal to $Z\cup \bigcup_{j\in \mathbb N} K_j$, where $\mathcal H^1(Z)=0$, $K_j\subset K_{j+1}$, and $K_j$ is \textit{contained} in a $1$-dimensional $C^1$-smooth submanifold $S_j$ of $\mathbb R^2$ for $j\in \mathbb N$.
Under increased regularity, Bourgain, Korobkov, and Kristensen \cite{BourgainEtAl:Sard} proved that if $f\in W^{2,1}(\mathbb R^2)$ then for a.e.\ $t\in \mathbb R$ the level set $f^{-1}(t)$ is a $1$-dimensional $C^1$-smooth manifold. We direct the reader to \cite{BourgainEtAl:Sard} and the references therein for a more detailed exposition of generalizations of Sard's theorem.
Our first theorem is the following:
\begin{theorem}\label{Intro:Sobolev:Theorem}
Let $\Omega\subset \mathbb R^2$ be an open set and $u\colon \Omega \to \mathbb R$ be a continuous function that lies in $W^{1,p}_{\loc}(\Omega)$ for some $1\leq p\leq \infty$. Then for a.e.\ $t\in \mathbb R$ the level set $u^{-1}(t)$ has locally finite Hausdorff $1$-measure and each component of $u^{-1}(t)$ is either a point, or a Jordan curve, or it is homeomorphic to an interval.
\end{theorem}
This result generalizes a result of Alberti, Bianchini, and Crippa \cite[Theorem 2.5(iv)]{AlbertiEtAl:LipschitzLevelSets}, who obtained the same conclusion under the weaker assumptions that $u$ is Lipschitz and compactly supported.
Under no further regularity assumptions, we do not expect in this case the level sets to be $1$-dimensional manifolds. Instead, we impose a topological restriction:
\begin{definition}\label{Monotone:Definition}
Let $\Omega\subset \mathbb R^n$ be an open set and $f\colon \Omega\to \mathbb R$ be a continuous function. We say that $f$ is monotone if for each open set $U\subset \subset \Omega$ the maximum and the minimum of $f$ on $\overline U$ are attained on $\partial U$. That is,
\begin{align*}
\max_{ \overline U}f= \max_{\partial U}f \,\,\, \textrm{and} \,\,\, \min_{\overline U}f=\min_{\partial U}f.
\end{align*}
\end{definition}
Here the notation $U\subset \subset \Omega$ means that $\overline U$ is compact and is contained in $\Omega$. This definition can also be extended to real-valued functions defined in a locally compact metric space $X$.
\begin{remark}\label{Monotone:Remark}
If $f$ extends continuously to $\overline \Omega$, then we may take $U\subset\subset \overline \Omega$ in the above definition.
\end{remark}
Monotonicity in Definition \ref{Monotone:Definition} is in the sense of Lebesgue \cite{Lebesgue:Monotone}. There are other more general notions of monotonicity; e.g.\ there is a notion of \textit{weak monotonicity} due to Manfredi \cite{Manfredi:Monotone} that agrees with Lebesgue monotonicity for the spaces $W^{1,p}$, $p\geq 2$. We now state our main theorem:
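For concreteness, a standard pair of examples (added here, not in the original text) illustrating Definition \ref{Monotone:Definition}: the coordinate function $u(x,y)=x$ is monotone on any planar domain, since on each $\overline U$ its extrema are attained on $\partial U$, while $u(x,y)=-(x^2+y^2)$ is not monotone on any domain containing the origin, since for $U=B(0,r)$ with $\overline U\subset \Omega$,
\[
\max_{\overline U} u = u(0,0) = 0,
\qquad \text{while} \qquad
\max_{\partial U} u = -r^2 < 0,
\]
so the maximum over $\overline U$ is not attained on $\partial U$.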
\begin{theorem}\label{Intro:MonotoneSobolev:Theorem}
Let $\Omega\subset \mathbb R^2$ be an open set and $u\colon \Omega \to \mathbb R$ be a continuous monotone function that lies in $W^{1,p}_{\loc}(\Omega)$ for some $1\leq p\leq \infty$. Then for a.e.\ $t\in \mathbb R$ the level set $u^{-1}(t)$ has locally finite Hausdorff $1$-measure and it is an embedded $1$-dimensional topological submanifold of $\mathbb R^2$.
\end{theorem}
Recall that a set $J\subset \mathbb R^2$ is an embedded $1$-dimensional submanifold of $\mathbb R^2$ if for each point $x\in J$ there exist an open set $U\subset \mathbb R^2$ containing $x$ and a homeomorphism $\phi\colon U\to \mathbb R^2$ such that $\phi(J\cap U)=\mathbb R\times \{0\}$. By the classification of $1$-manifolds (see Theorem \ref{Classification:theorem}), each component of $J$ is homeomorphic to $S^1$ or to $\mathbb R$.
Monotone Sobolev functions appear as infimizers of certain energy functionals, not only in the Euclidean setting, such as $\mathcal A$-harmonic functions \cite{Heinonenetal:DegenerateElliptic}, but also in metric spaces. For example, they appear as real and imaginary parts of ``uniformizing" quasiconformal mappings into the plane from metric spaces $X$ that are homeomorphic to the plane; see \cite{Rajala:uniformization}, \cite{RajalaRomney:reciprocal}. Our proof is partly topological and can be applied also in these settings, leading to the understanding of the level sets of the ``uniformizing" map, which is crucial for proving injectivity properties. The results in this paper can be used to simplify some of the arguments in \cite{Rajala:uniformization}. More specifically, our techniques yield the following result in the metric space setting.
\begin{theorem}\label{Intro:metric:theorem}
Let $(X,d)$ be a metric space homeomorphic to $\mathbb R^2$ and $\Omega$ be an open subset of $X$. Suppose that $u\colon \Omega \to \mathbb R$ is a continuous function with the property that for a.e.\ $t\in \mathbb R$ the level set $u^{-1}(t)$ has locally finite Hausdorff $1$-measure (in the metric of $X$). Then for a.e.\ $t\in \mathbb R$ each component of the level set $u^{-1}(t)$ is either a point, or a Jordan curve, or it is homeomorphic to an interval.
If, in addition, $u$ is monotone, then for a.e.\ $t\in \mathbb R$ the level set $u^{-1}(t)$ is an embedded $1$-dimensional topological submanifold of $X$.
\end{theorem}
Monotone Sobolev functions enjoy some important further regularity properties. For example, if $f=(u_1,\dots,u_n)\colon \mathbb R^n\to \mathbb R^n$ is continuous and monotone, in the sense that the coordinate functions $u_i$ are, and $f\in W^{1,n}_{\loc}(\mathbb R^n)$, then $f$ is differentiable a.e.\ and it has the \textit{Luzin property}, i.e., it maps null sets to null sets; see \cite[Lemma 1.3]{HajlaszMaly:approximation}. For this reason, the approximation of Sobolev functions $u$ in the Sobolev norm by {locally weakly monotone} Sobolev functions $u_n$, $n\in \mathbb N$, has been established in \cite[Theorem 1.3]{HajlaszMaly:approximation}; a property of the approximants $u_n$ in this theorem that is important in applications is that the gradient of $u_n$ vanishes on the set $\{u\neq u_n\}$. As is pointed out in that paper, in some cases the approximating functions cannot be taken to be smooth, or even continuous.
As an application of Theorem \ref{Intro:MonotoneSobolev:Theorem} we give the following approximation theorem:
\begin{theorem}\label{Intro:Approximation:Theorem}
Let $\Omega\subset \mathbb R^2$ be a bounded open set and $u \colon \Omega\to \mathbb R$ be a continuous monotone function that lies in $W^{1,p}(\Omega)$ for some $1< p <\infty$. Then there exists $\alpha>0$ and a sequence $\{u_n\}_{n\in \mathbb N}$ of monotone functions in $\Omega$ such that
\begin{enumerate}[\upshape(A)]
\item $u_n$ is $C^{1,\alpha}$-smooth in $\Omega$, $n\in \mathbb N$,
\item $u_n$ converges uniformly to $u$ in $\Omega$ as $n\to\infty$,
\item $u_n$ converges to $ u$ in $W^{1,p}(\Omega)$ as $n\to\infty$,
\item $\|\nabla u_n\|_{L^p(\Omega)}\leq \|\nabla u\|_{L^p(\Omega)}$, $n\in \mathbb N$,
\item $u_n-u \in W^{1,p}_0(\Omega)$, $n\in \mathbb N$.
\end{enumerate}
In fact, $u_n$ may be taken to be $p$-harmonic on a subset of $\Omega$ having measure arbitrarily close to full and $C^\infty$-smooth, except at a discrete subset of $\Omega$. If $p=2$, then the functions $u_n$ can be taken to be $C^\infty$-smooth in $\Omega$.
\end{theorem}
Here, a function $f\colon \Omega\to \mathbb R$ is $C^{1,\alpha}$-smooth if it has derivatives of first order that are locally $\alpha$-H\"older continuous.
We remark that standard mollification with a smooth kernel does not produce a monotone function and one has to use the structure of the level sets of the monotone function in order to construct its approximants. Therefore, Theorem \ref{Intro:MonotoneSobolev:Theorem} provides us with a powerful tool in this direction.
For our proof we follow the strategy of \cite{IwaniecKovalevOnninen:W1papproximation} and \cite{IwaniecKovalevOnninen:W12approximation}, where it is proved that Sobolev homeomorphisms of planar domains can be approximated by $C^\infty$-smooth diffeomorphisms uniformly and in the Sobolev norm. In our proof we have to deal with further technicalities, not present in the mentioned results for homeomorphisms, which are related to the fact that our approximants $u_n$ might have critical points. In fact, $u_n$ will be $p$-harmonic in a large subset of $\Omega$ and it is known that at a critical point a $p$-harmonic function, $p\neq 2$, is only expected to be $C^{1,\alpha}$-smooth, rather than $C^{\infty}$-smooth.
The main motivation for Theorem \ref{Intro:Approximation:Theorem} was to study regularity properties of a certain type of infimizers that appear in the setting of Sierpi\'nski carpets and are called \textit{carpet-harmonic functions}; see \cite[Chapter 2]{Ntalampekos:CarpetsThesis}. Namely, these infimizers are restrictions of monotone Sobolev functions (under some geometric assumptions) and the approximation Theorem \ref{Intro:Approximation:Theorem} would imply some absolute continuity properties for these functions. We will not discuss these applications any further in this paper.
We pose some questions for further study. One of the reasons that we were not able to prove approximation by smooth functions for all $p\in (1,\infty)$ in Theorem \ref{Intro:Approximation:Theorem} was the presence of critical points of $p$-harmonic functions.
\begin{question}
Can a $p$-harmonic function be approximated uniformly and in the $W^{1,p}$ norm by $C^\infty$-smooth monotone functions near a critical point, when $p\neq 2$?
\end{question}
A positive answer to this question would imply that we may use smooth functions in Theorem \ref{Intro:Approximation:Theorem}. It would be very surprising if this fails. Moreover, what can one say for higher dimensions?
\begin{question}
Do analogs of Theorems \ref{Intro:MonotoneSobolev:Theorem} and \ref{Intro:Approximation:Theorem} hold in higher dimensions?
\end{question}
The smooth approximation of Sobolev homeomorphisms is still open in higher dimensions (\cite[Question 1.1]{IwaniecKovalevOnninen:W12approximation}, \cite{FoundCompMath:book-Ball}). Since the coordinate functions of a homeomorphism are monotone, a first step in studying this problem would be to study the smooth approximation of monotone Sobolev functions as in the above question.
The paper is structured as follows. In Section \ref{Section:Level} we study the level sets of Sobolev functions. In particular, in Subsection \ref{Section:Sobolev} we prove Theorem \ref{Intro:Sobolev:Theorem} and in Subsection \ref{Section:levelmonotone} we prove Theorem \ref{Intro:MonotoneSobolev:Theorem} and discuss the generalization that gives Theorem \ref{Intro:metric:theorem}. In Subsection \ref{Section:glue} we include some gluing results for monotone functions that are used in the proof of the approximation Theorem \ref{Intro:Approximation:Theorem}, but also are of independent interest. Section \ref{Section:Preliminaries} contains preliminaries on $p$-harmonic functions. The approximation Theorem \ref{Intro:Approximation:Theorem} is proved in Section \ref{Section:approximationTheorem}. Finally, in Section \ref{Section:Smoothing} we include some quite standard smoothing results for monotone functions that are needed for the proof of the approximation theorem.
\subsection*{Acknowledgments}
The author would like to thank Tadeusz Iwaniec for a motivating discussion and Matthew Romney for his comments on the manuscript.
\section{Level sets of Sobolev and monotone functions}\label{Section:Level}
Throughout the entire section we assume that $u\colon \Omega\to \mathbb R$ is a non-constant continuous function on an open set $\Omega\subset \mathbb R^2$. We define $A_t=u^{-1}(t)$ for $t\in \mathbb R$, which can be the empty set. For $ s<t$ we define $A_{s,t}=u^{-1}((s,t))$.
A \textit{Jordan arc} in a metric space $X$ is the image of the interval $[0,1]$ under a homeomorphic embedding $\phi \colon [0,1] \to X$. In this case, the set $\phi((0,1))$ is called an \textit{open Jordan arc}. Finally, a \textit{Jordan curve} is the image of $S^1$ under a homeomorphic embedding $\phi\colon S^1\to X$.
\subsection{Level sets of Sobolev functions}\label{Section:Sobolev}
In this subsection we study the level sets of Sobolev functions and prove Theorem \ref{Intro:Sobolev:Theorem}.
\begin{lemma}\label{FiniteLength:Lemma}
Suppose that $u\in W^{1,p}(\Omega)$ for some $1\leq p \leq \infty$ and $\Area(\Omega)<\infty$. Then for a.e.\ $t\in \mathbb R$ the level set $A_t$ has finite Hausdorff $1$-measure.
\end{lemma}
\begin{proof}
This follows from the co-area formula \cite[Theorem 1.1]{MalySwansonZiemer:Coarea}, which is attributed to Federer \cite[Section 3.2]{Federer:gmt}, and the $L^p$-integrability of $\nabla u$:
\begin{align*}
\int \mathcal H^1(u^{-1}(t))\, dt= \int_\Omega |\nabla u| \leq \|\nabla u\|_{L^p(\Omega)} \cdot \Area(\Omega)^{1/p'}<\infty,
\end{align*}
where $\frac{1}{p}+\frac{1}{p'}=1$.
\end{proof}
Now, we restate and prove Theorem \ref{Intro:Sobolev:Theorem}.
\begin{theorem}\label{Jordan:Theorem}
Suppose that $u\in W^{1,p}_{\loc}(\Omega)$ for some $1\leq p\leq \infty$. Then for a.e.\ $t\in \mathbb R$ the level set $A_t$ has locally finite Hausdorff $1$-measure and each component $E$ of $A_t$ is either a point, or a Jordan curve, or it is homeomorphic to an interval.
\end{theorem}
Recall that $u$ is assumed to be continuous. If a level set $A_t=u^{-1}(t)$ is the empty set then it has no components so the conclusions of the theorem hold trivially; this is also true for other statements later in the paper, so we will not mention again the possibility that the level sets can be empty.
\begin{proof}
\textbf{Main Claim:}
Consider an open set $U$ with $U\subset \subset \Omega$. We restrict $u$ to a neighborhood of $\overline U$ that is compactly contained in $\Omega$ and apply Lemma \ref{FiniteLength:Lemma}. It follows that for a.e.\ $t\in \mathbb R$ we have $\mathcal H^1(A_t\cap \overline U)<\infty$. We shall show as our Main Claim that if we further exclude countably many values $t$, then each component $E_0$ of $A_t\cap \overline U$ is either a point, or a Jordan arc, or a Jordan curve.
\vspace{1em}
\noindent
\textbf{Compact exhaustion argument:} Assuming that the Main Claim holds for each such $U$, we consider an exhaustion of $\Omega$ by a nested sequence of open sets $\{U_n\}_{n\in \mathbb N}$, each compactly contained in $\Omega$, such that for a.e.\ $t\in \mathbb R$ the following holds for the level set $A_t$: $\mathcal H^1(A_t\cap \overline{U_n})<\infty$ and each component $E_n$ of $A_t\cap \overline {U_n}$ is either a point, or a Jordan arc, or a Jordan curve, for all $n\in \mathbb N$. We let $A_t$ be such a level set and fix $x_0\in A_t$. We will show that the component $E$ of $A_t$ containing $x_0$ is either a point, or a Jordan curve, or it is homeomorphic to an interval.
We have $x_0\in U_n$ for all sufficiently large $n$. To simplify our notation, we assume that this holds for all $n\in \mathbb N$. Let $E, E_n$ be the component of $A_t, A_t\cap \overline{U_n}$, respectively, that contains $x_0$. We have $ E_n \subset E_{n+1} \subset E$ for each $n\in \mathbb N$, which follows from the definition of a connected component (as the largest connected set containing a given point). By the continuity of $u$, $A_t$ is rel.\ closed in $\Omega$, so $E_n$ is compact for all $n\in \mathbb N$. If $E\subset U_n$ for some $n\in \mathbb N$, then $E_n=E$ and therefore $E$ itself is either a point, or a Jordan curve, or a Jordan arc by the Main Claim. This completes the proof in this case.
Suppose now that $E$ is not contained in $U_n$ for any $n\in \mathbb N$. Then $E_n$ necessarily intersects $\partial U_n$ for each $n\in \mathbb N$ as we see below using the next lemma, which is a direct consequence of \cite[IV.5, Corollary 1, p.~83]{Newman:Topology}.
\begin{lemma}\label{Separation:Lemma}
Suppose that $F$ is a connected component of a compact set $A$ in the plane. Then for each open set $U\supset F$ there exists an open set $V$ with $F\subset V \subset \subset U$ and $\partial V \cap A=\emptyset$.
\end{lemma}
In our case, if $E_n \subset U_n$, then by the preceding lemma there exists an open set $V\supset E_n$ with $ V\subset\subset U_n$ and $\partial V\cap (A_t\cap \overline{U_n})=\emptyset$. This implies that $\partial V\cap A_t=\emptyset$. Then $\partial V\cap E=\emptyset$ and it follows that $V\cap E$ is both open and closed in $E$, so $V\cap E=E$ by the connectedness of $E$. Then $E=V\cap E\subset U_n$, which contradicts our assumption that $E\not\subset U_n$ for all $n\in \mathbb N$. Therefore, $E_n\cap \partial U_n \neq \emptyset$ for all $n\in \mathbb N$.
This implies that $E_n$ cannot be a single point since it also contains $x_0\in U_n$, so $E_n$ is either a Jordan arc, or a Jordan curve for each $n\in \mathbb N$, by the Main Claim. Note that $E_{n+1} \supsetneq E_n$ for all $n\in \mathbb N$, since $E_{n+1}\cap \partial U_{n+1}\neq \emptyset$ and $\partial U_{n+1}\cap \partial U_n=\emptyset$, as $U_n\subset \subset U_{n+1}$. If $E_n$ is a Jordan curve, then $E_{n+1}$ can be neither a Jordan curve, nor a Jordan arc, as it is strictly larger than $E_n$ (this uses the fundamental fact that $S^1$ is not homeomorphic to $[0,1]$). Therefore, $E_n$ is a Jordan arc for all $n\in \mathbb N$, and there exist homeomorphisms $\phi_n\colon [0,1]\to E_n$, $n\in \mathbb N$. Since $E_n\subsetneq E_{n+1}$, these homeomorphisms can be pasted appropriately to obtain a continuous injective map $\phi \colon I\to \bigcup_{n\in \mathbb N} E_n \eqqcolon \mathcal E$, where $I\subset \mathbb R$ is either $\mathbb R$ or $[0,\infty)$ (after a change of variables). Assume that $I=[0,\infty)$; the other case is treated in the same way. The map $\phi$ has the property that $\phi^{-1}(E_n)$ is a compact subinterval of $I$ and $\phi^{-1}(E_n)\subsetneq \phi^{-1}(E_{n+1})$ for $n\in \mathbb N$. Moreover, $\phi(I)=\mathcal E$ exits all compact subsets of $\Omega$.
It now remains to show that $\phi$ is a homeomorphism and that $E= \bigcup_{n\in \mathbb N} E_n$, concluding therefore that $E$ is homeomorphic to an interval as desired. This is subtle and requires to use the assumption that $\mathcal H^1(A_t\cap \overline {U_n}) <\infty$ for all $n\in \mathbb N$.
We first claim that $\phi(s)$ does not accumulate at any point of $\Omega$ as $s\to \infty$. Assume for the sake of contradiction that $\phi(s)$ converges to a point $y_0\in U_{n_0}\subset \Omega$ along a subsequence of $s\to \infty$, for some $n_0\in \mathbb N$. Since $\phi(s)$ exits all sets $U_n$, we may find disjoint intervals $[a_n,b_n]$, $n\in \mathbb N$, such that $\phi(a_n) \to y_0$ as $n\to\infty$, $\phi([a_n,b_n])\subset \overline {U_{n_0}}$, and $\phi(b_n)\in \partial U_{n_0}$ for all $n\in \mathbb N$. It follows that $\liminf_{n\to\infty}\diam (\phi([a_n,b_n])) \geq \dist(y_0, \partial U_{n_0})>0$. Since the diameter is a lower bound for the Hausdorff $1$-measure in connected spaces (see \cite[Section 18, p.~18]{Semmes:Hausdorff}), we have
\begin{align*}
\mathcal H^1( A_t\cap \overline{U_{n_0}}) \geq \sum_{n\in \mathbb N} \mathcal H^1(\phi([a_n,b_n])) \geq \sum_{n\in \mathbb N} \diam (\phi([a_n,b_n]))=\infty,
\end{align*}
a contradiction.
This now implies that $\phi$ is a homeomorphism of $I$ onto $\mathcal E$. Indeed, if $y_n=\phi(s_n)\in \mathcal E$ is a sequence converging to $y_0=\phi(s_0)\in \mathcal E$, then $\{y_n\}$ is contained in a compact subset of $\Omega$, and therefore, by the previous claim, all $s_n$ and $s_0$ lie in a compact subinterval $I_0$ of $I$. By the injectivity of $\phi$, the restriction $\phi|_{I_0}$ is a homeomorphism, so $s_n\to s_0$, and $\phi^{-1}$ is continuous on $\mathcal E$, as desired. Another implication of the previous paragraph is that $\overline{\mathcal E} \setminus \mathcal E$ is contained in $\partial \Omega$, so $\mathcal E=\overline{\mathcal E}\cap \Omega= \overline{\mathcal E}\cap E$ is rel.\ closed in $\Omega$ and in $E$.
We will show that $\mathcal E$ is also rel.\ open in $E$ and by the connectedness of $E$ we will have $E=\mathcal E$ as desired. Let $x\in \mathcal E$, so $x\in U_{n_0}$ for some $n_0\in \mathbb N$ and consider an open neighborhood $V\subset \subset U_{n_0}$ of $x$. We wish to show that $V\cap E \subset \mathcal E$, after shrinking the neighborhood $V$ if necessary. This will complete the proof that $\mathcal E$ is rel.\ open in $E$.
If $y_0 \in V\cap E\setminus \mathcal E$, then arguing as we did for the construction of $\mathcal E$ and using the Main Claim, one can construct a set $\mathcal F \subset E$ containing $y_0$ and a homeomorphism $\phi' \colon I' \to \mathcal F$, where $I'=\mathbb R$ or $I'=[0,\infty)$. Moreover, the set $\mathcal F$ is, by construction, necessarily disjoint from $\mathcal E$. To see this, assume that they intersect at a point $z_0=\phi(s_0)= \phi'(s_0')$. Then there exist non-trivial closed intervals $I_0\subset I$ and $I_0'\subset I'$ such that $\phi(I_0)$ is the subarc of $\mathcal E$ from $x_0$ to $z_0$ and $\phi'(I_0')$ is the subarc of $\mathcal F$ from $y_0$ to $z_0$. We choose a large $k\in \mathbb N$ so that $\phi(I_0)\cup \phi'(I_0')\subset U_k$. Then by the definition of $E_k$ we must have $E_k \supset \phi(I_0)\cup \phi'(I_0')$, since the latter is connected and contains $x_0$. Then we have $y_0\in \phi'(I_0')\subset E_k\subset \mathcal E$, which is a contradiction. Hence, we have indeed $\mathcal F \cap \mathcal E=\emptyset$.
Moreover, as in the construction of $\mathcal E$, since $E$ exits all compact subsets of $\Omega$, the set $\mathcal F$ must also have this property; see the statement right before Lemma \ref{Separation:Lemma}. Therefore,
\begin{align*}
\mathcal H^1(\mathcal F\cap \overline {U_{n_0}}) \geq \dist (y_0, \partial U_{n_0}) \geq \dist (\partial V, \partial U_{n_0}) \eqqcolon \delta>0.
\end{align*}
Another property of $\mathcal F$ is that it is rel.\ closed in $\Omega$, for the same reason as $\mathcal E$.
If $V\cap E \setminus (\mathcal E\cup \mathcal F)\neq \emptyset$, then by repeating the above procedure we may find sets $\mathcal F_i\subset A_t$, $i=1,2,\dots,N$, with the same properties as $\mathcal F\eqqcolon \mathcal F_1$: they are pairwise disjoint, disjoint from $\mathcal E$, and they intersect $V$. We have
\begin{align*}
\infty > \mathcal H^1(A_t\cap \overline{U_{n_0}} ) \geq \sum_{i=1}^N \mathcal H^1(\mathcal F_i\cap \overline{U_{n_0}} ) \geq N\delta.
\end{align*}
This implies that we can find only a finite number of such sets $\mathcal F_i$. Since the compact sets $\mathcal F_i\cap \overline V$ have positive distance from $\mathcal E\cap \overline V$, we may find a smaller neighborhood $W\subset V$ of $x\in \mathcal E$ such that $W\cap E \subset \mathcal E$. This completes the proof that $\mathcal E$ is rel.\ open in $E$, as desired.
\vspace{1em}
\noindent
\textbf{Proof of Main Claim:}
We will show that for all but countably many $t\in \mathbb R$ for which $\mathcal H^1(A_t\cap \overline U)<\infty$ we have that each component $E_0$ of $A_t\cap \overline U$ is either a point, or a Jordan arc, or a Jordan curve.
Suppose that $\mathcal H^1(A_t\cap \overline U)<\infty$ and let $E_0$ be a component of $A_t\cap \overline U$, so $L\coloneqq \mathcal H^1(E_0)<\infty$. Since $E_0$ is a continuum of finite Hausdorff $1$-measure, there exists a $2L$-Lipschitz continuous parametrization $\gamma\colon [0,1]\to E_0$ of $E_0$; see \cite[Theorem 2a]{EilenbergHarrold:Curves} or \cite[Proposition 5.1]{RajalaRomney:reciprocal}. Hence, $E_0$ is a locally connected, compact set; see \cite[Theorem 9.2, p.~60 and Theorem 27.12, p.~200]{Willard:topology}.
Now, we need the following topological lemma that we prove later:
\begin{lemma}\label{Junction:Lemma}
Let $X$ be a Peano space, i.e., a connected, locally connected, compact metric space. If $X$ contains more than one point and is not homeomorphic to $[0,1]$ or $S^1$, then it must have a \textit{junction point} $x$, i.e., a point at which three Jordan arcs $A_1,A_2,A_3\subset X$ meet but are otherwise disjoint.
\end{lemma}
By our previous discussion, if $\mathcal H^1(A_t \cap \overline U)<\infty$ then each component $E_0$ of $A_t\cap \overline U$ is a Peano space. If there is a component $E_0$ of $A_t\cap \overline U$ that is not a point or a Jordan arc or a Jordan curve, then by Lemma \ref{Junction:Lemma}, $E_0$ must have a junction point. Hence, $A_t\cap \overline U$ has a junction point.
A theorem of Moore \cite[Theorem 1]{Moore:triods} (see also \cite[Proposition 2.18]{Pommerenke:conformal}) states that there can be no uncountable collection of disjoint compact sets in the plane such that each set has a junction point. Note that $A_s\cap A_t=\emptyset$ for $s\neq t$. Hence, for at most countably many $t\in \mathbb R$ the set $A_t\cap \overline U$ can have a junction point.
Summarizing, for all but countably many $t\in \mathbb R$ with $\mathcal H^1(A_t\cap \overline U)<\infty$, every component $E_0$ of $A_t\cap \overline U$ is a point, a Jordan arc, or a Jordan curve.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Junction:Lemma}]
Suppose that $X$ contains more than one point and is not homeomorphic to $[0,1]$. Then there exist at least three \textit{non-cut points} $x_1,x_2,x_3\in X$, i.e., points whose removal does not disconnect $X$; see \cite[Theorems (6.1) and (6.2), p.~54]{Whyburn:topology}. Since $X$ is a Peano space, it is locally path-connected \cite[Theorem 31.4, p.~221]{Willard:topology}. It follows that each of the spaces $X\setminus \{x_{i}\}$, $i=1,2,3$, is connected and locally path-connected. Moreover, a connected, locally path-connected space is path-connected \cite[Theorem 4.3, p.~162]{Munkres:topology}. Hence, we may find Jordan arcs $J_{ij} \subset X \setminus \{x_k\}$ with endpoints $x_i$ and $x_j$, where $i,j,k\in \{1,2,3\}$ are distinct and $i<j$; see also \cite[Corollary 31.6, p.~222]{Willard:topology}.
If two of the arcs $J_{ij}$ intersect at an interior point, i.e., a point different from $x_1,x_2,x_3$, then it is straightforward to consider three subarcs $A_1,A_2,A_3$ of these arcs that intersect at one point, but otherwise are disjoint, as required in the statement of the lemma.
If the arcs $J_{ij}$ intersect only at the endpoints, then we can concatenate the three arcs to obtain a Jordan curve $J$, i.e., a set homeomorphic to $S^1$. Suppose now that the space $X$ is, in addition, not homeomorphic to $S^1$. Then there must exist a point $x\in X\setminus J$. We claim that there exists a Jordan arc $J_x$ that connects $x$ to $J$ and intersects $J$ at exactly one point. Assuming the claim, one can define $A_1\coloneqq J_x$ and $A_2,A_3$ to be two subarcs of $J$ that meet at the point of $J_x\cap J$ but otherwise are disjoint.
To prove the claim note that since $X$ is a {Peano space}, any two points in $X$ can be joined with a Jordan arc \cite[Theorem 31.2, p.~219]{Willard:topology}. We connect $x$ to any point $y\in J$ with a Jordan arc $J_x$, parametrized by $\gamma\colon[0,1]\to J_x$, so that $\gamma(0)=x$ and $\gamma(1)=y$. If there exists $s\in (0,1)$ such that $\gamma(s)\in J$ then we consider the smallest such $s$ and we restrict $\gamma$ to $[0,s]$. This gives the desired Jordan arc.
\end{proof}
\begin{remark}\label{remark:metric spaces}
In Theorem \ref{Jordan:Theorem} (Theorem \ref{Intro:MonotoneSobolev:Theorem}) the assumption that $u\in W^{1,p}_{\loc}(\Omega)$ is only used to deduce that almost every level set of $u$ has locally finite Hausdorff $1$-measure; the latter is proved in Lemma \ref{FiniteLength:Lemma} using the co-area formula. All the other arguments rely on planar topology. Thus our proof can be generalized to functions defined on metric spaces homeomorphic to $\mathbb R^2$ and the first part of Theorem \ref{Intro:metric:theorem} follows.
\end{remark}
\subsection{Level sets of monotone functions}\label{Section:levelmonotone}
In this subsection we study the level sets of monotone functions and prove Theorem \ref{Intro:MonotoneSobolev:Theorem}. Recall that $u\colon \Omega\to \mathbb R$ is assumed to be continuous.
\begin{lemma}\label{Monotonicity:Corollary}
If $u$ is monotone in $\Omega$ then for each $t\in \mathbb R$ each component $E$ of the level set $A_t$ satisfies one of the following:
\begin{enumerate}[\upshape(i)]
\item $E$ exits all compact subsets of $\Omega$, or
\item all bounded components of $\mathbb R^2 \setminus E$ intersect $\partial \Omega$ and there exists at least one such component.
\end{enumerate}
In particular,
$$\diam (E) \geq \sup_{x\in E} \dist(x,\partial \Omega)>0.$$
\end{lemma}
\begin{proof}
In the proof we will use the following lemma, known as Zoretti's theorem, which is in the same spirit as Lemma \ref{Separation:Lemma}.
\begin{lemma}[{\cite[Corollary 3.11, p.~35]{Whyburn:TopologicalAnalysis}}]\label{Zoretti:Lemma}
Let $F$ be a connected component of a compact set $A$ in the plane. Then for each open set $U \supset F$ there exists a Jordan region $V$ with $F\subset V$, $\partial V\subset U$, and $\partial V\cap A=\emptyset$.
\end{lemma}
Suppose that a component $E$ of $A_t$ is compactly contained in $\Omega$. First we will show that $\mathbb R^2\setminus E$ has a bounded component. Suppose that this is not the case. Then $\widehat{\mathbb C}\setminus E$ is simply connected and contains $\partial \Omega$, so by the Riemann mapping theorem we may find Jordan curves arbitrarily close to $E$ that surround $E$ and separate $E$ from $\partial \Omega$. Hence, we may find a Jordan region $U \subset \subset \Omega$ containing $E$. Consider the compact set $A_t\cap \overline U$. Then $E$ is also a component of $A_t\cap \overline U$. By Lemma \ref{Zoretti:Lemma}, we can find a Jordan region $V$ such that $E\subset V$, $\partial V\subset U$, and $\partial V\cap A_t=\emptyset$. It follows that $V\subset U\subset \subset \Omega$. Since $\partial V$ is connected and disjoint from $A_t$, we must have $u>t$ or $u<t$ on $\partial V$. Without loss of generality, suppose that we have $u>t$ on $\partial V$. Then by the monotonicity of $u$ we have $u>t$ on $V\supset E$, a contradiction. Therefore, $\mathbb R^2\setminus E$ has a bounded component.
Next, we will show that all bounded components of $\mathbb R^2\setminus E$ must intersect $\partial \Omega$. Suppose that a bounded component $U$ of $\mathbb R^2\setminus E$ does not intersect $\partial \Omega$. Then $U\subset \Omega$ and $\partial U \subset E\subset \subset \Omega$ so $U\subset \subset \Omega$. Since $u=t$ on $\partial U$, by the monotonicity of $u$ we have that $u=t$ on $\overline U$. Since $E$ is a connected component of $A_t$, we must have $U\subset E$, a contradiction.
Now we prove the claim involving the diameters. If $\Omega=\mathbb R^2$, then $E$ necessarily satisfies the first alternative, so it escapes to $\infty $ and $\diam(E)=\infty$. If $\Omega \subsetneq \mathbb R^2$, then for each $x\in E$ we may consider the largest ball $B(x,r)$ contained in $\Omega$, where $r=\dist(x,\partial \Omega)$. Then $E$ cannot be contained in a ball $B(x,r-\varepsilon)$ for any $\varepsilon>0$ since this would violate both alternatives. Therefore, $\diam(E) \geq r-\varepsilon$ for all $\varepsilon>0$, which implies that $\diam(E) \geq \dist(x,\partial \Omega)$, as desired.
\end{proof}
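Both alternatives in the lemma are realized by elementary examples; the following standard illustration is included here for concreteness.
\begin{example}
Let $\Omega=\{x\in \mathbb R^2: 1<|x|<2\}$ and $u(x)=|x|$, which is monotone in $\Omega$. For $t\in (1,2)$ the level set $A_t$ is the circle $\{|x|=t\}$, which is compactly contained in $\Omega$; the bounded component of $\mathbb R^2\setminus A_t$ is the disk $\{|x|<t\}$, which intersects $\partial \Omega$, in accordance with the second alternative. On the other hand, for $u(x_1,x_2)=x_1$ on the unit disk, each non-empty level set is a vertical chord, which exits all compact subsets of the disk, in accordance with the first alternative.
\end{example}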
We record an immediate corollary:
\begin{corollary}
If $\Omega$ is simply connected, then $u$ is monotone in $\Omega$ if and only if each component of each level set of $u$ exits all compact subsets of $\Omega$.
\end{corollary}
\begin{proof}
Suppose that $u$ is monotone. If $\Omega$ is simply connected, then $\partial \Omega$ is connected, so it cannot be separated by a level set $A_t$ of $u$. Thus, in Lemma \ref{Monotonicity:Corollary} only the first alternative can occur, as desired.
Conversely, suppose that only the first alternative occurs and let $U\subset\subset \Omega$. Then for any $x_0\in U$ the component $E$ of the level set $A_t$, $t=u(x_0)$, that contains $x_0$ has to intersect $\partial U$. Thus, there exists $y_0\in \partial U$ with $u(x_0)=u(y_0)$. This implies the monotonicity of $u$.
\end{proof}
Next, we add the assumption that $u$ lies in a Sobolev space:
\begin{lemma}\label{MonotoneSobolev:Lemma}
Suppose that $u$ is monotone in $\Omega$ and lies in $W^{1,p}_{\loc}(\Omega)$ for some $1\leq p \leq \infty$. Then for a.e.\ $t\in \mathbb R$ the components of the level set $A_t$ are rel.\ open in $A_t$. In other words, if $E$ is a component of $A_t$ and $x\in E$ then there exists an open neighborhood $U$ of $x$ such that $E\cap U= A_t\cap U$.
\end{lemma}
\begin{proof}
We consider an exhaustion of $\Omega$ by a nested sequence of open sets $\{U_n\}_{n\in \mathbb N}$, each compactly contained in $\Omega$, such that for a.e.\ $t\in \mathbb R$ we have $\mathcal H^1(A_t\cap \overline{U_n})<\infty$ for all $n\in \mathbb N$; the existence of such an exhaustion can be justified using Lemma \ref{FiniteLength:Lemma}.
We fix $t\in \mathbb R$ such that $A_t\neq \emptyset$ and $\mathcal H^1(A_t\cap \overline{U_n})<\infty$ for all $n\in \mathbb N$, and we consider a component $E$ of $A_t$ and a point $x\in E$. We claim that for each neighborhood $V\subset \subset \Omega$ of $x$ there are at most finitely many components of $A_t$ intersecting $V$.
There exists $n_0\in \mathbb N$ such that $V\subset \overline V\subset U_{n_0}$. Suppose that $F$ is a component of $A_t$ intersecting $V$. We consider the restriction of $u$ to $U_{n_0}$, which is still a monotone function, and we let $G\subset F$ be a component of the level set $A_t\cap U_{n_0}$ of $u\big|_{U_{n_0}}$ such that $G\cap V\neq \emptyset$. For a point $y\in G\cap V$ we have
$$\diam(G) \geq \dist(y,\partial U_{n_0})$$
by Lemma \ref{Monotonicity:Corollary}. Since $V\subset\subset U_{n_0}$, we have $\dist(y,\partial U_{n_0}) \geq \dist(\overline V, \partial U_{n_0}) >0$. Moreover, by the connectedness of $G$ we have $\mathcal H^1(G) \geq \diam (G)$; see \cite[Section 18, p.~18]{Semmes:Hausdorff}. Combining these inequalities, we have
\begin{align*}
\mathcal H^1(F\cap \overline{U_{n_0}} )\geq \mathcal H^1(G)\geq \diam(G)\geq \dist(y,\partial U_{n_0})\geq \dist(\overline V, \partial U_{n_0}) >0.
\end{align*}
Since $\mathcal H^1(A_t\cap\overline{U_{n_0}})<\infty$, there can be at most finitely many components $F$ of $A_t$ intersecting $V$.
Each of the finitely many compact sets $\overline V\cap F$, where $F$ ranges over the components of $A_t$ other than $E$ that intersect $V$, has positive distance from the compact set $\overline V\cap E$. It follows that if we choose a sufficiently small neighborhood $U\subset V$ of $x$, then we have $E\cap U=A_t\cap U$, as desired.
\end{proof}
We will also need the following general lemma:
\begin{lemma}\label{Extremal:Lemma}
Let $(X,d)$ be a separable metric space and $f\colon X \to \mathbb R$ be any function. Then the set of local extremal values of $f$ is at most countable.
\end{lemma}
\begin{proof}
Let $E$ be the set of local maximum values of $f$. Then, by definition, for each $y\in E$ there exists $x\in X$ with $f(x)=y$ and a ball $B(x,r)$ such that for all $z\in B(x,r)$ we have $f(z)\leq y$. We can write
$E=\bigcup_{n\in \mathbb N} E_n$, where
$$E_n=\{y\in \mathbb R: y=f(x) \,\,\,\textrm{for some}\,\,\, x\in X \,\,\, \textrm{and}\,\,\, f(z)\leq y \,\,\, \textrm{for all}\,\,\, z\in B(x,2/n)\}.$$
We will show that $E_n$ is at most countable for each $n\in \mathbb N$. Let $y_1,y_2\in E_n$ be distinct, so there exist points $x_1,x_2\in X$ with $f(x_i)=y_i$, $i=1,2$, as in the definition of $E_n$. Then the balls $B(x_1, 1/n)$, $B(x_2,1/n)$ are necessarily disjoint. Indeed, otherwise, we have $f(x_2)\leq y_1$ and $f(x_1)\leq y_2$, so $y_1=y_2$, a contradiction. Therefore, the set $E_n$ is in one-to-one correspondence with a collection of disjoint balls in $X$. The separability of $X$ implies that there can be at most countably many such balls. The same proof shows that the set of local minimum values of $f$ is at most countable.
\end{proof}
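Note that Lemma \ref{Extremal:Lemma} counts local extremal \textit{values} rather than local extremal points, and the distinction is essential, as the following elementary example shows.
\begin{example}
Let $C\subset [0,1]$ be the middle-thirds Cantor set and $f(x)=\dist(x,C)$ for $x\in \mathbb R$. The local minimum value $0$ is attained at the uncountably many points of $C$. Nevertheless, the set of local extremal values of $f$ is countable: the only local minimum value is $0$, and the local maximum values are the numbers $3^{-n}/2$, $n\in \mathbb N$, attained at the midpoints of the complementary intervals of $C$ in $[0,1]$.
\end{example}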
Now we have completed the preparation for the proof of Theorem \ref{Intro:MonotoneSobolev:Theorem}, restated below:
\begin{theorem}\label{Sard:Theorem}
Suppose that $u$ is monotone in $\Omega$ and lies in $W^{1,p}_{\loc}(\Omega)$ for some $1\leq p \leq \infty$. Then for a.e.\ $t\in \mathbb R$ the level set $A_t$ is an embedded $1$-dimensional topological submanifold of $\mathbb R^2$ that has locally finite Hausdorff $1$-measure.
\end{theorem}
\begin{proof}
By Theorem \ref{Jordan:Theorem} and Lemma \ref{MonotoneSobolev:Lemma} we conclude that for a.e.\ $t\in \mathbb R$ the level set $A_t$ has locally finite Hausdorff $1$-measure, each component $E$ of the level set $A_t$ is rel.\ open in $A_t$, and $E$ is either a point, or a Jordan curve, or homeomorphic to an interval. Using Lemma \ref{Extremal:Lemma}, we further exclude the countably many local extremal values $t\in \mathbb R$ of $u$ in $\Omega$. We fix a level set $A_t$ satisfying all the above. In particular, $A_t$ has the property that if $x\in A_t$, then each neighborhood $U$ of $x$ contains points $y_1,y_2$ with $u(y_1)>t$ and $u(y_2)<t$.
Our goal is to prove that every component $E$ of $A_t$ is either a Jordan curve, or it is homeomorphic to $\mathbb R$. Since each component of $A_t$ is rel.\ open in $A_t$, it will then follow that each $x\in A_t$ has a neighborhood $U$ such that $U\cap A_t$ is homeomorphic to an open interval. This shows that $A_t$ is a $1$-manifold. Since there are no wild arcs in the plane (see Remark \ref{Straighten:Remark} below), each Jordan arc of $A_t$ can be mapped to $[0,1]\times \{0\}$ with a global homeomorphism of $\mathbb R^2$. This shows that $A_t$ is an embedded submanifold of $\mathbb R^2$, as desired.
We now focus on proving that every component $E$ of $A_t$ is either a Jordan curve, or it is homeomorphic to $\mathbb R$.
We already know that $E$ is either a point, or a Jordan curve, or it is homeomorphic to an interval. If $E$ is a point or is homeomorphic to the closed interval $[0,1]$, then $E$ is compactly contained in $\Omega$, so by Lemma \ref{Monotonicity:Corollary} the set $\mathbb R^2\setminus E$ must have at least one bounded component. However, neither a point nor a Jordan arc separates the plane, so $\mathbb R^2\setminus E$ is connected and unbounded. This is a contradiction.
Finally, assume that $E$ is homeomorphic to $I=[0,\infty)$ under a map $\phi\colon I \to E$. Then $\phi(s)$ cannot accumulate at any point of $\Omega$ as $s\to\infty$, because $\phi$ is a homeomorphism and $E$ is rel.\ closed in $A_t$, and thus in $\Omega$. Let $x_0=\phi(0)$ and consider a ball $B(x_0,r)\subset \subset \Omega$. Then there exists $s_0$ such that $\phi(s_0)\in \partial B(x_0,r)$ and $\phi(s)\notin B(x_0,r)$ for all $s\geq s_0$. We may straighten $\phi([0,s_0])$ to the segment $[0,1]\times \{0\}$ with a homeomorphism of $\mathbb R^2$ (see Remark \ref{Straighten:Remark} below) and, using that, we can find a topological ball $U\subset B(x_0,r)$ containing $x_0$ such that $U\setminus E$ is connected. If $U$ is sufficiently small, then by Lemma \ref{MonotoneSobolev:Lemma} we have that $U\setminus A_t=U\setminus E$, so $U\setminus A_t$ is connected. This implies that $u>t$ or $u<t$ in $U\setminus E$. This is a contradiction, since $t=u(x_0)$ would then be a local extremal value of $u$.
\end{proof}
\begin{remark}\label{Straighten:Remark}
In the proof we used the fact that a Jordan arc in $\mathbb R^2$ can be mapped to $[0,1]\times \{0\}$ with a homeomorphism of $\mathbb R^2$. That is, there are no \textit{wild} arcs in the plane. To see this, one can first embed the Jordan arc in a Jordan curve, and then apply the Schoenflies theorem to straighten the Jordan curve. See also the discussion in \cite{NoWildArcs}.
\end{remark}
\begin{remark}\label{remark:metric spaces monotone}
As in Remark \ref{remark:metric spaces} the proof of Theorem \ref{Sard:Theorem} (Theorem \ref{Intro:MonotoneSobolev:Theorem}) can be generalized to monotone functions defined on metric spaces homeomorphic to $\mathbb R^2$. Indeed, monotonicity is a topological property. This observation yields the second part of Theorem \ref{Intro:metric:theorem}.
\end{remark}
\subsection{Gluing monotone functions}\label{Section:glue}
In this section we include some results that allow us to paste monotone functions in order to obtain a new monotone function. These results will be useful towards the proof of the approximation Theorem \ref{Intro:Approximation:Theorem}. The proofs are elementary but the assumptions of the statements are finely chosen and cannot be relaxed. Recall that $u$ is continuous in $\Omega$.
\begin{lemma}[Gluing Lemma]\label{Gluing:Lemma}
Suppose that $u$ is monotone in $\Omega$ and consider $t_1,t_2\in \mathbb R$ with $t_1<t_2$. Let $\Upsilon=u^{-1}((t_1,t_2))$ and consider a continuous function $v$ on $\Upsilon$ such that
\begin{enumerate}[\upshape(i)]
\item $v$ is monotone in $\Upsilon$,
\item $v$ extends continuously to $\partial \Upsilon\cap \Omega$ and agrees there with $u$, and
\item $t_1\leq v \leq t_2$ on $\Upsilon$.
\end{enumerate}
Then the function $\widetilde u$ that is equal to $u$ in $\Omega\setminus \Upsilon$ and to $v$ in $\Upsilon$ is monotone in $\Omega$.
\end{lemma}
The proof we give below is elementary but delicate. Note that it is important to assume that $u$ is monotone in all of $\Omega$: the function $u(x)=|x|$ on $\{x\in \mathbb R^2: 1/2<|x|<1\}$ does not have a monotone extension to the unit disk. Moreover, (iii) cannot be relaxed. Indeed, the function $u(x)=|x|$ is monotone in the punctured unit disk, which we take as the set $\Omega$; however, if we set $\widetilde u=u$ in $\{x\in \mathbb R^2: 1/2\leq |x|<1\}$ and $\widetilde u = 1-|x|$ in $\{x\in \mathbb R^2 : 0<|x|<1/2\}=u^{-1}((0,1/2))$, then $\widetilde u$ is not monotone in $\Omega$.
\begin{proof}
Aiming for a contradiction, suppose that $\widetilde u$ is not monotone in $\Omega$, so, without loss of generality, there exists an open set $U\subset \subset \Omega$ and $x_0\in U$ such that $\widetilde u(x_0)=\max_{\overline U} \widetilde u >\max_{\partial U} \widetilde u$.
Suppose first that $x_0\in U\cap \Upsilon$, so $\widetilde u(x_0)=v(x_0)$. Note that $\overline{U\cap \Upsilon} \subset \overline \Upsilon \cap \Omega$, so $v$ is continuous in $\overline{U\cap \Upsilon} \subset \overline \Upsilon \cap \Omega$ by assumption (ii) and monotone in $U\cap \Upsilon$ by (i). By Remark \ref{Monotone:Remark} we conclude that there exists $y_0\in \partial (U\cap \Upsilon) \subset (U\cap \partial \Upsilon)\cup \partial U$ such that
$$\widetilde u(y_0)=v(y_0)=\max_{\overline{U\cap \Upsilon}}v \geq v(x_0)= \widetilde u(x_0)= \max_{\overline U}\widetilde u.$$
It follows that $\widetilde u(y_0)= \widetilde u(x_0)=\max_{\overline U}\widetilde u> \max_{\partial U}\widetilde u$. Hence, $y_0\notin \partial U$, and we have $y_0\in U\cap \partial \Upsilon$. Since $\partial \Upsilon \cap \Omega \subset A_{t_1}\cup A_{t_2}$, we have $\widetilde u(y_0)=u(y_0)=t_1$ or $\widetilde u(y_0)=u(y_0)=t_2$.
By the monotonicity of $u$ in $\Omega$, it follows that there exists $z_0\in \partial U$ such that
$$u(z_0)=\max_{\overline U}u\geq u(y_0).$$
If $z_0\notin \Upsilon$, then $\widetilde u(z_0)=u(z_0)$, so $\widetilde u(z_0)\geq u(y_0)=\widetilde u(y_0)$. If $z_0\in \Upsilon$, then $\max_{\overline U}u=u(z_0)<t_2$ so on $U\setminus \Upsilon \supset U\cap \partial \Upsilon$ we necessarily have $u\leq t_1$. Since $y_0\in U\cap \partial \Upsilon$, we have $\widetilde u(y_0)=u(y_0)=t_1$. Moreover, $\widetilde u(z_0)=v(z_0)$ and by assumption (iii) we have $\widetilde u(z_0)\geq t_1$. It follows that $\widetilde u(z_0)\geq \widetilde u(y_0)$ also in this case. Therefore,
$$\max_{\overline U} \widetilde u= \widetilde u(y_0) \leq \widetilde u(z_0) \leq \max_{\partial U} \widetilde u,$$
which is a contradiction.
It remains to treat the case that $x_0\in U\setminus \Upsilon$, so $\widetilde u(x_0)=u(x_0) \notin (t_1,t_2)$. If $u(x_0)\leq t_1$, then $\max_{\partial U}\widetilde u < u(x_0)\leq t_1$, so $\widetilde u<t_1$ on $\partial U$. By assumption (iii) we have $\partial U\cap \Upsilon =\emptyset$, so $u=\widetilde u$ on $\partial U$, and $\max_{\partial U} u =\max_{\partial U} \widetilde u <u(x_0)$, where $x_0\in U$. This contradicts the monotonicity of $u$ in $\Omega$. If $u(x_0)\geq t_2$, by the monotonicity of $u$ in $\Omega$, there exists $z_0\in \partial U$ such that $u(z_0)=\max_{\overline U}u \geq u(x_0) \geq t_2$. This implies that $z_0\notin \Upsilon$, so $\widetilde u(z_0)=u(z_0)$. Therefore,
$$\max_{\partial U} \widetilde u \geq \widetilde u(z_0) =u(z_0)\geq u(x_0)= \widetilde u(x_0) =\max_{\overline U} \widetilde u,$$
which is a contradiction.
\end{proof}
\begin{lemma}\label{ConvergenceMonotonicity:Lemma}
Let $\{u_n\}_{n\in \mathbb N}$ be a sequence of monotone functions in $\Omega$ converging locally uniformly to a function $u$. Then $u$ is monotone in $\Omega$.
\end{lemma}
\begin{proof}
Let $U\subset \subset \Omega$. Then $\max_{\overline U}u_n =\max_{\partial U}u_n$. Since $u_n\to u$ uniformly in $\overline U$, we have $\max_{\overline U}u_n \to \max_{\overline U} u$ and $\max_{\partial U}u_n \to \max_{\partial U}u$, so $\max_{\overline U}u =\max_{\partial U}u$. The claim for the minima is proved in the same way.
\end{proof}
\begin{corollary}\label{Gluing:Corollary}
Suppose that $u$ is monotone in $\Omega$ and consider a bi-infinite sequence of real numbers $\{t_i\}_{i\in \mathbb Z}$, such that $t_i<t_{i+1}$, $i\in \mathbb Z$, and $\lim_{i\to \pm \infty} t_i= \pm \infty$. In each region $\Upsilon_i\coloneqq u^{-1}((t_i,t_{i+1}))$ consider a function $v_i$ such that
\begin{enumerate}[\upshape(i)]
\item $v_i$ is monotone in $\Upsilon_i$,
\item $v_i$ extends continuously to $\partial \Upsilon_i\cap \Omega$ and agrees there with $u$, and
\item $t_i\leq v_i\leq t_{i+1}$ on $\Upsilon_i$.
\end{enumerate}
Then the function $\widetilde u$ that is equal to $u$ on $\Omega\setminus \bigcup_{i\in \mathbb Z} \Upsilon_i$ and to $v_i$ in $\Upsilon_i$, $i\in \mathbb Z$, is continuous and monotone in $\Omega$.
\end{corollary}
\begin{proof}
By Lemma \ref{Gluing:Lemma} and induction, one can show that for each $n\in \mathbb N$ the function
$$\widetilde u_n \coloneqq u \cdot\x_{\Omega\setminus \bigcup_{|i|\leq n} \Upsilon_i}+ \sum_{|i|\leq n} v_i \cdot \x_{\Upsilon_i}$$
is continuous and monotone in $\Omega$. The important observation here is that $$u^{-1}((t_j,t_{j+1}))=\Upsilon_j= \widetilde u_n^{-1}((t_j,t_{j+1}))$$ for all $n\in \mathbb N$ and $|j|>n$, by assumption (iii).
If $U\subset \subset \Omega$ is an open set, then there exists $n\in \mathbb N$ such that
$$t_{-n} < \min_{\overline U} u \leq \max_{\overline U} u <t_n.$$
Hence, $\Upsilon_j\cap \overline U=\emptyset$ for $|j|> n$ and $\widetilde u_m=\widetilde u_n=\widetilde u$ on $\overline U$ for all $m\geq n$. It follows that $\{\widetilde u_n\}_{n\in \mathbb N}$ converges locally uniformly in $\Omega$ to $\widetilde u$, and therefore $\widetilde u$ is continuous and monotone by Lemma \ref{ConvergenceMonotonicity:Lemma}.
\end{proof}
In order to establish the approximation by $C^{\infty}$-smooth functions in Theorem \ref{Intro:Approximation:Theorem}, we need to introduce the notion of strict monotonicity and prove some further, more specialized, gluing lemmas.
\begin{definition}
A continuous function $f\colon \Omega\to \mathbb R$ is called \textit{strictly monotone} if it is monotone and for each open set $U\subset\subset \Omega$ the maximum and minimum of $f$ on $\overline U$ are not attained at any point of $U$.
\end{definition}
\begin{example}
If a function $f$ is of class $C^1$ and has no critical points in $\Omega$, then it has no local maxima or minima in $\Omega$, so it is strictly monotone.
\end{example}
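Conversely, a monotone function need not be strictly monotone; the following elementary example illustrates the difference.
\begin{example}
The function $u(x_1,x_2)=\max\{x_1,0\}$ is monotone in $\mathbb R^2$: for each open set $U\subset \subset \mathbb R^2$, the maximum and minimum of $u$ on $\overline U$ are attained at points of $\partial U$ with maximal and minimal first coordinate, respectively. It is not strictly monotone, since for any open set $U\subset\subset \{x_1<0\}$ the function $u$ vanishes identically on $\overline U$, so its maximum and minimum on $\overline U$ are also attained at every point of $U$.
\end{example}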
\begin{lemma}[Gluing Lemma]\label{GluingMonotonicity:Lemma}
Let $A,V$ be open subsets of $\Omega$ with $\overline V\cap \Omega\subset A$. Suppose that the function $u$ is monotone when restricted to $\Omega \setminus \overline V$ and strictly monotone when restricted to $A$. Then $u$ is monotone in $\Omega$.
\end{lemma}
\begin{proof}
If the statement fails, there exists an open set $U\subset \subset \Omega$ such that the maximum or minimum of $u$ on $\overline U$ is not attained on $\partial U$, but at an interior point $x_0\in U$. Without loss of generality, assume that $\max_{\partial U} u < \max_{\overline U}u =u(x_0)$. Note that $U$ cannot be contained in $\Omega\setminus \overline V$ or in $A$, by the monotonicity of $u$ there. Hence, $U$ intersects both $\overline V$ and $\Omega\setminus A$.
If $x_0\in U\cap \overline V \subset U\cap A$, then $u(x_0)= \max_{\overline U}u\geq \max_{\overline{U\cap A}}u$ and this contradicts the strict monotonicity of $u$ in $A$.
If $x_0\in U\setminus \overline V\subset \Omega\setminus \overline V$, then by the monotonicity of $u$ there, there exists $x_1\in \partial (U\setminus \overline V)$ such that $u(x_1)\geq u(x_0)> \max_{\partial U}u$. We necessarily have that $x_1\notin \partial U$. Since $\partial (U\setminus \overline V)\subset \partial U \cup (U\cap \partial V)$, it follows that $x_1\in U\cap \partial V \subset U\cap A$. Then we have $u(x_1)\geq u(x_0) = \max_{\overline U}u \geq \max_{\overline{U\cap A}}u$. Again, this contradicts the strict monotonicity of $u$ in $A$.
\end{proof}
\begin{lemma}\label{GluingConstant:Lemma}
Suppose that $J$ is a connected closed subset of $\Omega$ that exits all compact subsets of $\Omega$. If $u$ is monotone in $\Omega\setminus J$ and $u$ is constant in $J$, then $u$ is monotone in $\Omega$.
\end{lemma}
\begin{proof}
Assume that $u=t$ in $J$. Suppose that $u$ is not monotone in $\Omega$, so, without loss of generality, we can find an open set $U\subset\subset \Omega$ and a point $x_0\in U$ with $u(x_0)=\max_{\overline U}u >\max_{\partial U}u$. By the monotonicity of $u$ in $\Omega\setminus J$, we necessarily have that $U$ intersects both $\Omega\setminus J$ and $J$, and $x_0\in J \cap U$. Thus, $u(x_0)=t=\max_{\overline U}u$. Since $J$ is connected and is not contained in $\overline U$, we have $J\cap \partial U\neq \emptyset$. Hence, there exists $y_0\in \partial U$ such that $u(y_0)=t\leq \max_{\partial U}u$, a contradiction.
\end{proof}
\section{Preliminaries on $p$-harmonic functions}\label{Section:Preliminaries}
Let $\Omega\subset \mathbb R^2$ be an open set. A function $u\colon \Omega\to \mathbb R$ is called $p$-harmonic, $1<p<\infty$, if $u\in W^{1,p}_{\loc}(\Omega)$ and
\begin{align*}
\Delta_pu \coloneqq \textrm{div}(|\nabla u|^{p-2} \nabla u) =0
\end{align*}
in the sense of distributions. That is,
\begin{align*}
\int_\Omega \langle |\nabla u|^{p-2} \nabla u , \nabla \phi \rangle =0
\end{align*}
for all $\phi \in C^\infty_c(\Omega)$, i.e., compactly supported smooth functions in $\Omega$.
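For orientation, we recall the basic explicit example: a direct computation shows that for $1<p<\infty$ the radial function
\begin{align*}
\mu_p(x)=\begin{cases} |x|^{\frac{p-2}{p-1}}, & p\neq 2,\\ \log |x|, & p= 2,\end{cases}
\end{align*}
is $p$-harmonic in $\mathbb R^2\setminus \{0\}$; up to a multiplicative constant, $\mu_p$ is the fundamental solution of the $p$-Laplacian in the plane. Moreover, affine functions are $p$-harmonic for every $p$.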
We mention some standard facts about $p$-harmonic functions. There exists an exponent $\alpha \in (0,1]$ such that every $p$-harmonic function $u$ on $\Omega$ lies in $C^{1,\alpha}_{\loc}(\Omega)$ \cite{Uralceva:C1alpha}, \cite{Evans:C1alpha}, \cite{Lewis:C1alpha}. In fact, outside the \textit{singular set}, i.e., the set where $\nabla u=0$, the function $u$ is $C^\infty$-smooth by elliptic regularity theory; see \cite[Corollary 8.11, p.~186]{GilbargTrudinger:pde}. The singular set consists of isolated points, unless $u$ is constant \cite[Corollary 1]{Manfredi:pharmonicplane}. Finally, the maximum principle \cite[p.~111]{Heinonenetal:DegenerateElliptic} implies that $p$-harmonic functions are monotone.
\begin{prop}[Solution to the Dirichlet Problem]\label{DirichletProblem:Prop}
Let $\Omega\subset \mathbb R^2$ be a bounded open set and let $u_0\in W^{1,p}(\Omega)$ be given Dirichlet data. There exists a unique $p$-harmonic function $u\in W^{1,p}(\Omega)$ that minimizes the $p$-harmonic energy
\begin{align*}
E_p(v)\coloneqq \int_\Omega |\nabla v|^p
\end{align*}
among all functions $v\in W^{1,p}(\Omega)$ with $v-u_0 \in W^{1,p}_0(\Omega)$.
\end{prop}
See e.g.\ \cite[Chapter 5]{Heinonenetal:DegenerateElliptic} for a general approach.
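For orientation, we recall the standard first-variation computation behind Proposition \ref{DirichletProblem:Prop}: if $u$ minimizes $E_p$ among its competitors, then for every $\phi\in C^\infty_c(\Omega)$ the function $\varepsilon\mapsto E_p(u+\varepsilon\phi)$ attains its minimum at $\varepsilon=0$, so
\begin{align*}
0=\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} \int_\Omega |\nabla u+\varepsilon \nabla \phi|^p = p\int_\Omega \langle |\nabla u|^{p-2}\nabla u, \nabla \phi\rangle;
\end{align*}
that is, the minimizer is $p$-harmonic in the sense of distributions. Uniqueness follows from the strict convexity of $\xi\mapsto |\xi|^p$ for $1<p<\infty$.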
An open Jordan arc $J$ in $\mathbb R^2$ is $C^\infty$-\textit{smooth} if there exists an open set $U\supset J$ and a $C^\infty$-smooth diffeomorphism $\phi \colon U\to \mathbb R^2$ such that $\phi(J)=\mathbb R\times \{0\}$. In this case, the open set $U\supset J$ may be taken to be arbitrarily close to $J$; see also the classification of $1$-manifolds in Theorem \ref{Classification:theorem}.
Let $\Omega\subset \mathbb R^2$ be an open set and $J\subset \partial \Omega$ be a Jordan arc. We say that $J$ is an \textit{essential} boundary arc if for each $x_0\in J$ there exists a neighborhood $U$ of $x_0$ and a homeomorphism $\phi\colon U \to \mathbb R^2$ such that $\phi(J\cap U)=\mathbb R=\{(s,0)\in \mathbb R^2:s\in \mathbb R\}$ and $\phi(\Omega\cap U)= \mathbb R^2_+= \{(s,t)\in \mathbb R^2: t>0\}$.
\begin{lemma}[Continuous extension]\label{ContinuousExtension:Lemma}
Suppose that $u$ and $u_0$ are as in Proposition \ref{DirichletProblem:Prop}. Assume further that $J\subset \partial \Omega$ is an essential open Jordan arc. If $u_0$ extends continuously to $\Omega\cup J$, then $u$ also extends continuously to $\Omega\cup J$ and $u|J=u_0|J$.
\end{lemma}
This follows from \cite[Theorem 6.27]{Heinonenetal:DegenerateElliptic}, which implies that each point $x_0\in J$ is a \textit{regular point} for the $p$-Laplace operator, since $\partial \Omega$ is \textit{$p$-thick} at $x_0$; see \cite[Lemma 2]{Lehrback:CapacityEstimate} for a relevant capacity estimate.
\begin{lemma}[Comparison]\label{Comparison:Lemma}
Suppose that $u$ and $u_0$ are as in Proposition \ref{DirichletProblem:Prop}. If there exists $M\in \mathbb R$ such that $u_0\leq M$ in $\Omega$, then $u\leq M$ in $\Omega$.
\end{lemma}
To see this, one can take an exhaustion of $\Omega$ by regular open sets $\Omega_n$ (see \cite[Corollary 6.32]{Heinonenetal:DegenerateElliptic}) and solve the $p$-Laplace equation in $\Omega_n$ with boundary data $u_{0,n}$ that smoothly approximates the function $u_0$ in $W^{1,p}(\Omega_n)$ and satisfies $u_{0,n}\leq M+1/n$. The solution $u_n$ of the $p$-Laplace equation satisfies $u_n=u_{0,n}$ on $\partial \Omega_n$, and by the maximum principle \cite[p.~111]{Heinonenetal:DegenerateElliptic} we have $u_n\leq M+1/n$ in $\Omega_n$. Now, by passing to a limit \cite[Theorem 6.12]{Heinonenetal:DegenerateElliptic}, we obtain a $p$-harmonic function $\widetilde u\leq M$ that necessarily solves the Dirichlet Problem in $\Omega$. Since the solution is unique, we have $u=\widetilde u\leq M$.
\begin{prop}[Non-degeneracy up to boundary]\label{NonDegeneracy:Prop}
Suppose that $u$ is a $p$-harmonic function in $\Omega$. Moreover, suppose that $J\subset \partial \Omega$ is a $C^\infty$-smooth essential open Jordan arc on which $u$ extends continuously and is equal to a constant $t$, and suppose that $u\neq t$ in a neighborhood of $J$ in $\Omega$. Then $|\nabla u| \neq 0$ in a neighborhood of $J$ in $\Omega$ and in fact each point $x_0\in J$ has a neighborhood in $\Omega$ in which $|\nabla u|$ is bounded away from $0$.
\end{prop}
This result follows from \cite[Theorem 2.8]{LewisNystrom:BoundaryRegularityPHarmonic} and \cite[Corollary 6.2]{AikawaKilpelainenShanmugalingamZhong:BoundaryHarnack}. The first result implies that each $x_0\in J$ has a neighborhood $B(x_0,r)$ such that $|\nabla u(x)| \geq c |u(x)-t|/\dist(x,\partial \Omega)$ for all $x\in B(x_0,r)\cap \Omega$, where $c>0$ is a constant depending on $p$ and on the geometry of $J$. The second cited result implies that $|u(x)-t|/ \dist(x,\partial \Omega) \geq c'$ for all $x$ in a possibly smaller ball $B(x_0,r')\cap \Omega$, where $c'>0$ is some constant depending on $u$.
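Combining the two estimates, with $r''=\min\{r,r'\}$ we obtain
$$|\nabla u(x)|\geq c\,\frac{|u(x)-t|}{\dist(x,\partial \Omega)}\geq c c'>0$$
for all $x\in B(x_0,r'')\cap \Omega$, which is precisely the claimed non-degeneracy near $x_0$.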
If $J\subset \partial \Omega$ is a $C^\infty$-smooth essential open Jordan arc, then a function $u\colon \Omega \to \mathbb R$ is said to be ($C^\infty$-)\textit{smooth up to $J$} if there exists an open set $U\supset J$ such that $u$ extends to a $C^\infty$-smooth function on $U$. Note that this does \textit{not} require that $u$ is smooth in all of $\Omega$. If $u$ is also smooth in $\Omega$, then we write that $u$ is smooth in $\Omega\cup J$.
\begin{prop}[Boundary regularity]\label{BoundaryRegularity:Prop}
Suppose that $u$ is a $p$-harmonic function in $\Omega$. Moreover, suppose that $J\subset \partial \Omega$ is a $C^\infty$-smooth essential open Jordan arc and that $u$ is $C^\infty$-smooth when restricted to $J$. If each $x_0\in J$ has a neighborhood in $\Omega$ in which $|\nabla u|$ is bounded away from $0$, then $u$ is $C^{\infty}$-smooth up to the arc $J$.
\end{prop}
This follows from \cite[Theorem 6.19, p.~111]{GilbargTrudinger:pde}, since under the assumptions the $p$-harmonic equation is uniformly elliptic up to each compact subarc of the Jordan arc $J$.
Combining Propositions \ref{NonDegeneracy:Prop} and \ref{BoundaryRegularity:Prop} we have:
\begin{corollary}\label{NonDegeneracy:Corollary}
Suppose that $u$ is a $p$-harmonic function in $\Omega$. Moreover, suppose that $J\subset \partial \Omega$ is a $C^\infty$-smooth essential open Jordan arc on which $u$ extends continuously and is equal to a constant $t$, and suppose that $u\neq t$ in a neighborhood of $J$ in $\Omega$. Then $u$ is $C^\infty$-smooth up to the arc $J$ and each point of $J$ has a neighborhood in $\Omega\cup J$ in which $|\nabla u|$ is bounded away from $0$.
\end{corollary}
\section{Proof of the approximation Theorem}\label{Section:approximationTheorem}
In this section we prove Theorem \ref{Intro:Approximation:Theorem}. Let us start with an elementary lemma:
\begin{lemma}\label{EssentialArcs:Lemma}
Let $f\colon \Omega\to \mathbb R$ be a continuous function and suppose that $\{s_i\}_{i\in \mathbb Z}$ is a bi-infinite sequence of real numbers with $s_i<s_{i+1}$, $i\in \mathbb Z$, and $\lim_{i\to\pm\infty} s_i= \pm\infty$. If each of the level sets $f^{-1}(s_i)$, $i\in \mathbb Z$, is an embedded $1$-dimensional topological submanifold of $\mathbb R^2$, then
\begin{enumerate}[\upshape(i)]
\item $\bigcup_{i\in \mathbb Z} f^{-1}(s_i)$ is also an embedded $1$-dimensional topological submanifold, \vspace{.3em}
\item $\partial f^{-1}((s_j,s_{j+1}))\cap \Omega \subset f^{-1}(s_j) \cup f^{-1}(s_{j+1})$ and $f^{-1}(s_j)\subset\partial f^{-1}((s_{j-1},s_{j})) \cup \partial f^{-1}((s_j,s_{j+1})) $ for all $j\in \mathbb Z$, and \vspace{.3em}
\item if $x\in f^{-1}(s_j)$ for some $j\in \mathbb Z$, then there exist disjoint Jordan regions $V_+, V_-$ such that the closure of $V_+\cup V_-$ contains a neighborhood of $x$ and there exists an open Jordan arc $J\subset \partial V_+\cap \partial V_- \cap f^{-1}(s_j)$ containing $x$ such that $J$ is an essential boundary arc of $ V_+$ and $ V_-$. Moreover, each of $V_+,V_-$ is contained in a component of $ f^{-1}((s_{j-1},s_j))\cup f^{-1}((s_j,s_{j+1}))$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x\in \bigcup_{i\in \mathbb Z} f^{-1}(s_i)$, so $x\in f^{-1}(s_j)$ for some $j\in \mathbb Z$. Then we claim that there exists a neighborhood $V$ of $x$ such that $V\cap \bigcup_{i\in \mathbb Z} f^{-1}(s_i)= V\cap f^{-1}(s_j)$. This claim trivially implies (i). Note that $x$ has positive distance from $f^{-1}(s_i)$, for all $i\neq j$ by the continuity of $f$. If the claim were not true, then one would be able to find a sequence $x_n\in f^{-1}(s_{i_n})$, where $i_n$ are distinct integers with $i_n\neq j$, $n\in \mathbb N$, such that $x_n\to x$ as $n\to \infty$. By continuity $f(x_n)\to f(x)$ so $s_{i_n}\to s_j$ as $n\to \infty$. This contradicts the assumptions on the sequence $\{s_i\}_{i\in \mathbb Z}$.
Part (ii) follows by the continuity of $f$. Indeed, if $x_n \in f^{-1}((s_j,s_{j+1}))$ converges to $x \in \partial f^{-1}((s_j,s_{j+1}))\cap \Omega$, then $f(x_n)$ converges to $f(x)$. Note that $f(x)$ cannot lie in $(s_j,s_{j+1})$, otherwise $f(y)\in (s_j,s_{j+1})$ for all $y$ in a neighborhood of $x$ by continuity, so $x$ would not be a boundary point of $f^{-1}((s_j,s_{j+1}))$. Therefore $f(x)=s_j$ or $f(x)=s_{j+1}$. For the second claim in (ii) note that $f^{-1}(s_j)$ has empty interior (as a subset of the plane), since it is a $1$-manifold. Therefore for each point $x\in f^{-1}(s_j)$ we can find a sequence of points $x_n$ converging to $x$ with $f(x_n)\neq s_j$, so $f(x_n)\in (s_{j-1},s_j)\cup (s_j,s_{j+1})$ for all sufficiently large $n$. The claim follows.
For (iii), let $x\in f^{-1}(s_j)$. By part (i), we conclude that there exists a neighborhood $V$ of $x$ that does not intersect $f^{-1}(s_i)$ for any $i\neq j$. Since $f^{-1}(s_j)$ is an embedded $1$-dimensional manifold, after shrinking $V$ if necessary, there exists a homeomorphism $\phi\colon V \to \mathbb R^2$ such that $\phi(f^{-1}(s_j)\cap V)=\mathbb R$; see Remark \ref{Straighten:Remark}. Consider small Jordan regions $ V_+ \subset \phi^{-1}(\mathbb R^2_+)$, $ V_- \subset \phi^{-1}(\mathbb R^2_-)$ such that the closure of $V_+\cup V_-$ contains a neighborhood of $x$. It follows that there exists an open Jordan arc $J\subset \partial V_+\cap \partial V_- \cap f^{-1}(s_j)$ containing $x$ such that $J$ is an essential boundary arc of $ V_+$ and $ V_-$. By (ii) we have that $x\in \partial f^{-1}((s_{j-1},s_{j}))\cup \partial f^{-1}((s_j,s_{j+1}))$. Hence, $V_+,V_-$ must intersect the open set $f^{-1}((s_{j-1},s_{j}))\cup f^{-1}((s_j,s_{j+1}))$. Since $V_+$ and $V_-$ are connected and they do not intersect the boundary of that set (by the choice of $V$), it follows that they are contained in connected components of $f^{-1}((s_{j-1},s_{j}))\cup f^{-1}((s_j,s_{j+1}))$.
\end{proof}
We let $u\colon \Omega\to \mathbb R$ be a continuous monotone function with $u\in W^{1,p}(\Omega)$, as in the statement of Theorem \ref{Intro:Approximation:Theorem}. In our proof we will follow the steps of \cite{IwaniecKovalevOnninen:W12approximation} and \cite{IwaniecKovalevOnninen:W1papproximation}. Namely, using Theorem \ref{Intro:MonotoneSobolev:Theorem}, we will first partition $\Omega$ into disjoint open subsets $\Upsilon_i$ and smooth the function $u$ in these subsets by replacing it with a $p$-harmonic function. This is the first step below. Next, we have to mollify the new function along the boundaries of $\Upsilon_i$. For this purpose, we partition $\Omega$ even further, so that the boundaries of the regions $\Upsilon_i$ are contained in regions $\Psi_j$. Following \cite{IwaniecKovalevOnninen:W12approximation}, we call this new partition a \textit{lens-type partition}. In the regions $\Psi_j$ we consider another $p$-harmonic replacement. This completes the second step. In the third and final step we have to smooth our function along the boundaries of the regions $\Psi_j$, using smoothing results from Section \ref{Section:Smoothing}. Throughout all steps we have to ensure that our functions are monotone. This is guaranteed by the gluing lemmas from Subsection \ref{Section:glue} that allow us to paste together the various $p$-harmonic functions.
\subsection{Step 1: Approximation by a piecewise smooth function}
\subsubsection{Initial partition of $\Omega$}\label{InitialPartition:Section}
For fixed $\delta>0$ we consider a bi-infinite sequence of real numbers $\{t_i\}_{i\in \mathbb Z}$ such that $t_i<t_{i+1}$, $|t_{i+1}-t_i|<\delta$ for $i\in \mathbb Z$, and $\lim_{i\to \pm\infty} t_i=\pm\infty$. Moreover, we may choose the $t_i$ so that the conclusions of Theorem \ref{Sard:Theorem} are satisfied for the level sets $A_{t_i}$; that is, $A_{t_i}$ is an embedded $1$-dimensional topological submanifold of the plane. Note that $u^{-1}(t_i)$ or $u^{-1}((t_i,t_{i+1}))$ will be empty if $t_i\notin u(\Omega)$ or $(t_i,t_{i+1})\cap u(\Omega)=\emptyset$, respectively.
\subsubsection{Solution of the Dirichlet Problem}\label{Dirichlet:Section}
In each region $ \Upsilon_i =u^{-1}((t_i,t_{i+1}))$, using Proposition \ref{DirichletProblem:Prop}, we solve the Dirichlet Problem with boundary data $u$ and obtain a function $u_i\in W^{1,p}(\Upsilon_i)$. The function $u_i$ is monotone in $\Upsilon_i$ since it is $p$-harmonic. By Lemma \ref{Comparison:Lemma}, we have $t_i\leq u_i\leq t_{i+1}$.
Moreover, if $x_0\in \partial \Upsilon_j \cap \Omega$, then $x_0 \in A_t$, where $t=t_j$ or $t=t_{j+1}$ by Lemma \ref{EssentialArcs:Lemma}(ii). Part (iii) of the same lemma implies that there exist disjoint Jordan regions $V_+, V_-$ such that the closure of $V_+\cup V_-$ contains a neighborhood of $x_0$ and there exists an open Jordan arc $J\subset \partial V_+\cap \partial V_-\cap A_t$ containing $x_0$ such that $J$ is an essential boundary arc of $ V_+$ and $ V_-$. Moreover, $V_+ \subset \Upsilon_j$ and $V_- \subset \Upsilon_i$ for some $i\in \mathbb Z$. By Lemma \ref{ContinuousExtension:Lemma} the functions $u_j,u_i$ extend continuously to $V_+\cup J$, $V_-\cup J$, respectively, and $u_j\big |J=t$, $u_i\big|J=t$. Since $\overline{V_+\cup V_-}$ contains a neighborhood of $x_0$, we have that $u_j$ extends continuously to $x_0$. Hence, $u_j$ extends continuously to each point of $\partial \Upsilon_j\cap \Omega$ and agrees there with $u$.
Using Corollary \ref{Gluing:Corollary}, we ``glue'' the functions $u_i$, $i\in \mathbb Z$, together with $u$, to obtain a continuous monotone function $u_\delta$ on $\Omega$. Note that
$$u_\delta-u= \sum_{i\in \mathbb Z} [u_i-u]_0,$$
where $[u_i-u]_0 \in W^{1,p}_0(\Omega)$ denotes the extension of $u_i-u$ by $0$ outside $u^{-1}((t_i,t_{i+1}))$. The completeness of $W^{1,p}_0(\Omega)$ implies that $u_\delta-u\in W^{1,p}_0(\Omega)$. Therefore, by Proposition \ref{DirichletProblem:Prop} we conclude that
$$E_p(u_\delta)\leq E_p(u)= \int_\Omega |\nabla u|^p.$$
We may assume that $u$ is not already $p$-harmonic; in that case the above inequality is strict by the uniqueness part of Proposition \ref{DirichletProblem:Prop}.
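In more detail, the inequality $E_p(u_\delta)\leq E_p(u)$ can be seen from a decomposition of the energy: $\nabla u_\delta=\nabla u$ a.e.\ on $\Omega\setminus \bigcup_{i\in \mathbb Z}\Upsilon_i$, and on each $\Upsilon_i$ the function $u_i$ minimizes the energy among competitors with the same boundary data, of which $u$ is one. Hence
\begin{align*}
E_p(u_\delta)= \int_{\Omega\setminus \bigcup_{i}\Upsilon_i} |\nabla u|^p + \sum_{i\in \mathbb Z} \int_{\Upsilon_i} |\nabla u_i|^p \leq \int_{\Omega\setminus \bigcup_{i}\Upsilon_i} |\nabla u|^p + \sum_{i\in \mathbb Z} \int_{\Upsilon_i} |\nabla u|^p = E_p(u).
\end{align*}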
Since $t_i\leq u_\delta\leq t_{i+1}$ in $ A_{t_i,t_{i+1}}$, we have
$$|u-u_\delta| \leq |u-t_i|+|t_i-u_\delta|\leq 2|t_{i+1}-t_i|<2\delta$$
in $\Omega$ and so $u_\delta$ is uniformly close to $u$. We also observe here that
\begin{align}\label{Dirichlet:vu}
u_\delta^{-1}((t_j,t_{j+1}))\subset A_{t_j,t_{j+1}}, \,\,\, j\in \mathbb Z,
\end{align}
which will be used later. This follows from the decomposition of $\Omega$ as
$$\Omega= \bigcup_{i\in \mathbb Z} (A_{t_i,t_{i+1}} \cup A_{t_i})$$
and the observation that $u_\delta^{-1}((t_i,t_{i+1}))$ cannot intersect $A_{t_j}$ for any $i,j\in \mathbb Z$, since $u_\delta=u=t_j$ on $A_{t_j}$.
Since $\Omega$ is bounded, it follows that $u_\delta \to u$ in $L^p(\Omega)$ as $\delta\to 0$. Moreover, $\|u_\delta\|_{W^{1,p}(\Omega)}= \|u_\delta\|_{L^p(\Omega)}+ \|\nabla u_\delta\|_{L^p(\Omega)}$ is uniformly bounded as $\delta\to 0$. By the Banach-Alaoglu theorem it follows that, along a sequence of $\delta\to 0$, $u_\delta$ converges weakly in $W^{1,p}(\Omega)$ to $u$, which is also the pointwise limit of $u_\delta$. Since the limits are unique, we do not need to pass to a subsequence. Hence, by Fatou's lemma for weak convergence
\begin{align*}
\|\nabla u\|_{L^p(\Omega)}\leq \liminf_{\delta\to 0} \|\nabla u_\delta\|_{L^p(\Omega)} \leq \limsup_{\delta\to 0} \|\nabla u_\delta\|_{L^p(\Omega)}.
\end{align*}
The latter is bounded by $E_p(u)^{1/p}=\|\nabla u\|_{L^p(\Omega)}$, hence $\lim_{\delta\to 0}\|\nabla u_\delta\|_{L^p(\Omega)} = \|\nabla u\|_{L^p(\Omega)}$. This implies that $\nabla u_\delta\to \nabla u$ in $L^p(\Omega)$, due to the uniform convexity of $L^p$ spaces, when $1<p<\infty$; see for example \cite[pp.~95--97]{Brezis:Functional} and \cite[Proposition 3.32, p.~78]{Brezis:Functional}.
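The last implication is the Radon--Riesz property of uniformly convex spaces: weak convergence together with convergence of norms upgrades to strong convergence, i.e.,
\begin{align*}
\nabla u_\delta \rightharpoonup \nabla u \,\text{ in } L^p(\Omega) \quad \text{and} \quad \|\nabla u_\delta\|_{L^p(\Omega)} \to \|\nabla u\|_{L^p(\Omega)} \quad \Longrightarrow \quad \|\nabla u_\delta - \nabla u\|_{L^p(\Omega)} \to 0.
\end{align*}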
We remark that $u_\delta$ might be constant in some component of a level region $u^{-1}((t_i,t_{i+1}))$. Summarizing, for each $\varepsilon>0$ there exists $\delta_0>0$ such that for $0<\delta<\delta_0$ there exists a monotone function $u_\delta$ on $\Omega$ satisfying the following:
\begin{enumerate}[(A)]
\item $u_\delta$ is $p$-harmonic in $u^{-1}((t_i,t_{i+1}))$, $i\in \mathbb Z$,
\item $\sup_{\Omega}|u_\delta-u|<\varepsilon$,
\item $\|\nabla u_\delta -\nabla u\|_{L^p(\Omega)}<\varepsilon$,
\item $E_p(u_\delta)< E_p(u)$, and
\item $u_\delta-u \in W^{1,p}_0(\Omega)$.
\end{enumerate}
In the next step, our goal is to approximate in the sense of (B)--(E) the function $v\coloneqq u_\delta$, for a small fixed $\delta$, by functions that are smooth along the level sets $A_{t_i}$, $i\in \mathbb Z$; note that these are the level sets of the original function $u$!
\subsection{Step 2: Smoothing along the level sets $u^{-1}(t_j)$}
\subsubsection{Lens-type partition}\label{Lens:Section}
Note that if a level set $v^{-1}(t)$ is a $1$-dimensional manifold, then it cannot intersect any component of $\bigcup_{i\in \mathbb Z}A_{t_i,t_{i+1}}$ where $v$ is constant. Since $v$ is $p$-harmonic in $\bigcup_{i\in \mathbb Z}A_{t_i,t_{i+1}}$, it has at most countably many critical points in each component of $A_{t_i,t_{i+1}}$, unless it is constant there; see Section \ref{Section:Preliminaries}. For $\eta>0$ and for each $j\in \mathbb Z$ we consider real numbers $t_j^-, t_j^+$ with $|t_j^+-t_j^-|<\eta$ such that $t_{j-1}^+<t_j^- <t_j<t_j^+<t_{j+1}^-$, $v^{-1}(t_j^{\pm})$ does not contain any critical points of $v$, and the conclusions of Theorem \ref{Sard:Theorem} hold for the level sets $v^{-1}(t_j^\pm)$. In particular, the level sets $v^{-1}(t_j^{\pm})$ intersect only components of $\bigcup_{i\in \mathbb Z}A_{t_i,t_{i+1}}$ in which $v$ is non-constant, and therefore the critical points of $v$ form a discrete set within these components. It follows that each point $x\in v^{-1}(t_j^\pm)$ has a neighborhood in $\bigcup_{i\in \mathbb Z}A_{t_i,t_{i+1}}$ that does not contain any critical points of $v$.
In each region $\Psi_j\coloneqq v^{-1}( (t_{j}^-, t_j^+))$ we solve the Dirichlet Problem with boundary data $v$ as in Section \ref{Dirichlet:Section} and obtain $p$-harmonic functions $v_j$ with $t_{j}^-\leq v_j\leq t_j^+$ that extend continuously to $\partial \Psi_j\cap \Omega$ and agree there with $v$. We define a function $v_\eta$ to be equal to $v_j$ in $\Psi_j$, $j\in \mathbb Z$, and equal to $v$ on $\Omega\setminus \bigcup_{j\in \mathbb Z} \Psi_j$.
This procedure results in a continuous monotone function $v_\eta$, by Corollary \ref{Gluing:Corollary}, that approaches $v$ in the uniform norm and also in $W^{1,p}(\Omega)$, as $\eta\to 0$. Moreover, we have $E_p(v_\eta)< E_p(v)$ (unless $v$ is $p$-harmonic in $\Omega$, which we may assume is not the case) and $v_\eta-v\in W^{1,p}_0(\Omega)$. That is, (B)--(E) from Section \ref{Dirichlet:Section} are satisfied.
\subsubsection{Regularity of the approximation}\label{Section:LipschitzRegularity}
As far as the regularity of $w\coloneqq v_\eta$ is concerned, we claim the following:
\begin{enumerate}[(a)]
\item The function $w$ is $p$-harmonic in $\Psi_j$ and in $v^{-1}((t_j^+,t_{j+1}^-))$, for all $j\in \mathbb Z$. Therefore, $w$ is $C^{1,\alpha}$-smooth in each of these sets and if $p=2$, it is actually $C^\infty$-smooth.
\item Each point $x_0\in v^{-1}(t_j^\pm)$, $j\in \mathbb Z$, has a neighborhood $V=V_+\cup V_-$ in
$$\mathcal W\coloneqq \Omega \setminus \bigcup_{i\in \mathbb Z} v^{-1}(\{t_i^-,t_i^+\})= \bigcup_{i\in \mathbb Z} (\Psi_i \cup v^{-1}((t_i^+,t_{i+1}^-))),$$
where $V_+,V_-$ are disjoint Jordan regions contained in connected components of $\mathcal W$, $\overline V$ contains a neighborhood of $x_0$, and there exists a $C^\infty$-smooth open Jordan arc $J\subset \partial V_+\cap \partial V_- \cap v^{-1}(t_j^{\pm})$ containing $x_0$ such that $J$ is an essential boundary arc of $V_+$ and $V_-$. Moreover, inside each one of $V_+, V_-$, $|\nabla w|$ is either vanishing or bounded below away from $0$; in the second case we have $w\neq t_{j}^\pm$ in the corresponding region. The function $w$ is smooth up to the boundary arc $J$ in each of $V_+,V_-$.
\item The function $w$ is $C^{\infty}$-smooth in $\mathcal W$, except possibly for a discrete set of points of $\mathcal W$ that accumulates at $\partial \Omega$.
\end{enumerate}
For (a) note that, by definition, $w$ is $p$-harmonic on $\Psi_j$, while on $v^{-1}((t_j^+ ,t_{j+1}^-)) \subset \Omega \setminus \bigcup_{i\in \mathbb Z} \Psi_i$ we have $w=v$ (by the definition of $w$) and $v$ is $p$-harmonic in $v^{-1}((t_j^+ ,t_{j+1}^-))=u_\delta^{-1}((t_j^+ ,t_{j+1}^-))\subset u^{-1}( (t_j,t_{j+1}))$; see \eqref{Dirichlet:vu}. Therefore, by the regularity of $p$-harmonic functions (see Section \ref{Section:Preliminaries}), $w$ is $C^{1,\alpha}$-smooth in the open set $\mathcal W$ and in fact $w$ is $C^\infty$-smooth in $\mathcal W$, except possibly for the (at most) countably many critical points that are contained in components $U$ of $\mathcal W$ in which $w$ is non-constant. The critical points form a discrete subset of such components $U$ of $\mathcal W$, so they could only accumulate in points of $\partial U$.
Assuming that (b) is true, we will show (c). It suffices to show that the critical points contained in a component $U$ of $\mathcal W$ where $w$ is non-constant do not have any accumulation point in $\Omega$, but they can only accumulate at $\partial \Omega$. Suppose for the sake of contradiction that the critical points contained in $U$ accumulate at a point $x_0\in\partial U\cap \Omega$. Observe that the values $t_i^\pm$, $i\in \mathbb Z$, form a discrete set of real numbers with no finite accumulation points, and the level sets $v^{-1}(t_{i}^{\pm})$, $i\in \mathbb Z$, are embedded $1$-dimensional manifolds, as in the setting of Lemma \ref{EssentialArcs:Lemma}. By Lemma \ref{EssentialArcs:Lemma}(ii) there exists $j\in \mathbb Z$ such that $x_0 \in v^{-1}(\{t_j^-, t_j^+\})$. Consider the Jordan regions $V_+,V_- \subset \mathcal W$ as in (b) such that $\overline {V_+ \cup V_-}$ contains a neighborhood of $x_0$. This implies that there are infinitely many critical points of $w$ in, say, $V_+$, where $V_+\subset U$; note that at least one of $V_+,V_-$ has to be contained in $U$. Since $w$ is non-constant on $U\supset V_+$, the conclusion of (b) implies that $w$ does not have any critical points in $V_+$, a contradiction.
Finally we prove (b). Let $x_0\in v^{-1}(t_j^-)$ for some $j\in \mathbb Z$; the argument is the same if $x_0\in v^{-1}(t_j^+)$. By Lemma \ref{EssentialArcs:Lemma}(iii) we conclude that there exist disjoint Jordan regions $V_+,V_-$ such that the closure of $V_+\cup V_-$ contains a neighborhood of $x_0$ and there exists an open Jordan arc $J\subset \partial V_+\cap \partial V_- \cap v^{-1}(t_j^-) $ containing $x_0$ that is an essential boundary arc of $V_+$ and $V_-$. Moreover, the last statement of Lemma \ref{EssentialArcs:Lemma}(iii) implies that $V_+,V_-$ are contained in connected components of $ v^{-1}(( t_{j-1}^+,t_j^-))\cup v^{-1}((t_j^-,t_j^+))$. We will show that $|\nabla w|$ is vanishing or bounded below in each of $V_+,V_-$ (after possibly shrinking them) and that $w$ is smooth up to the arc $J$ in $V_+$ and $V_-$. We work with $V_+$.
Let $U$ be the component of $v^{-1}(( t_{j-1}^+,t_j^-))\cup v^{-1}((t_j^-,t_j^+))$ containing $V_+$. If $w$ is constant in $U$ then $w$ is obviously smooth up to the arc $J$ inside $V_+$ and we have nothing to show. We suppose that $w$ is non-constant in $U$.
Assume first that $V_+\subset U$ is contained in $v^{-1}((t_{j-1}^+,t_{j}^-))$. Then $w=v<t_j^-$ on $V_+$ by the definition of $w$. By the choice of $t_j^-$ in Section \ref{Lens:Section}, each point of $ v^{-1}(t_j^-)\supset J$ has a neighborhood in $\bigcup_{i\in \mathbb Z}A_{t_i,t_{i+1}}$ that does not contain any critical points of $v$. In particular, each point of the arc $J$ has a neighborhood in which $v$ is non-constant, $p$-harmonic, and $|\nabla v|$ is non-zero. It follows that $v$ is smooth in a neighborhood of $J$; see Section \ref{Section:Preliminaries}. Therefore, after shrinking the Jordan region $V_+$ if necessary, $w$ is smooth in $V_+$ up to the arc $J$, and each point of $J$ has a neighborhood in $V_+$ in which $|\nabla w|=|\nabla v|$ is bounded away from $0$.
Suppose now that $V_+\subset U$ is contained in $\Psi_j=v^{-1}((t_j^-,t_j^+))$. We have $J\subset v^{-1}(t_j^-)$, so $w=v=t_j^-$ on $J$. Since $w$ is $p$-harmonic on $V_+\subset U$ and non-constant, we must have $w>t_j^-$ on $V_+$ by the strong maximum principle \cite[p.~111]{Heinonenetal:DegenerateElliptic}. Note that the arc $J$ is $C^\infty$-smooth, because $J\subset v^{-1}(t_j^-)$ and $t_j^-$ is a regular (i.e., non-critical) value of $v$ by the choice of $t_{j}^{-}$. Now we are exactly in the setting of Corollary \ref{NonDegeneracy:Corollary}, which implies that each point of $J$ has a neighborhood in $V_+$ that does not contain any critical points of $w$ and in which $|\nabla w|$ is bounded away from $0$. After shrinking the Jordan region $V_+$ if necessary, we may assume that these properties hold in all of $V_+$. By Corollary \ref{NonDegeneracy:Corollary} it also follows that $w$ is smooth up to the boundary in the region $V_+$. This completes the proof of (b).\qed
\vspace{0.5em}
We have managed to obtain a function $w$ that is smooth along the level sets $u^{-1}(t_j)$, but it is not necessarily smooth along $v^{-1}(t_j^{\pm})$. It has, however, some smoothness up to $v^{-1}(t_j^\pm)$, as described in (b). In the next subsection we prove that $w$ can be $C^\infty$-smoothed at arbitrarily small neighborhoods of the level sets $v^{-1}(t_{j}^{\pm})$ so as to complete the proof of Theorem \ref{Intro:Approximation:Theorem}. For this purpose we utilize smoothing results from Section \ref{Section:Smoothing}.
\subsection{Step 3: Smoothing along the level sets $v^{-1}(t_j^{\pm})$}
We fix $j\in \mathbb Z$ and a component $J$ of $v^{-1}(t_j^-)$. Recall that $J$ is a smooth $1$-manifold, since $t_j^-$ was chosen to be a regular value of $v$, and $v$ is $C^\infty$-smooth in a neighborhood of $v^{-1}(t_j^-)$. There are two cases: either $J$ is homeomorphic to $\mathbb R$ or it is homeomorphic to $S^1$ (see the classification in Theorem \ref{Classification:theorem}).
By claim (b) from Subsection \ref{Section:LipschitzRegularity}, each $x_0\in J$ has a neighborhood $V=V_+\cup V_-$ in $\Omega\setminus v^{-1}(t_j^-)$, where $V_+,V_-$ are disjoint Jordan regions, $\overline V$ contains a neighborhood of $x_0$ and there exists an open arc $J_0\subset J$ containing $x_0$ that is contained in the common boundary of $V_+$ and $V_-$. Moreover, $w$ is $C^\infty$-smooth in ${ V_+}\cup J_0$ and in ${V_-}\cup J_0$, and $|\nabla w|$ is either vanishing or bounded below away from $0$ in each of $V_+, V_-$; in the second case we have $w\neq t_j^-$ in the corresponding set.
Assume first that $J$ is homeomorphic to $\mathbb R$. By Theorem \ref{Classification:theorem}, there exists a neighborhood $U$ of $J$ in $\mathbb R^2$ and a diffeomorphism $\phi\colon \mathbb R^2\to U$ such that $\phi(\mathbb R)=J$. The previous paragraph implies that there exist connected neighborhoods $\widetilde B^+$ and $\widetilde B^-$ of $\mathbb R$ in $\{y>0\}$ and $\{y<0\}$, respectively, such that $w\circ \phi$ is smooth in $\widetilde B^+ \cup \mathbb R$ and in $\widetilde B^-\cup \mathbb R$, and each point of $\mathbb R$ has a neighborhood in $\widetilde B^+\cup\widetilde B^-$ in which $|\nabla (w\circ \phi)|$ is either vanishing or bounded away from $0$. Moreover, in each of $\widetilde B^+$ and $\widetilde B^-$ the function $w\circ \phi$ is either constant or satisfies $w\circ \phi > t_j^-$ or $w\circ \phi<t_j^-$.
Therefore, there exist disjoint connected open sets $B^+=\phi^{-1}(\widetilde B^+)$ and $B^-=\phi^{-1}(\widetilde B^-)$ so that $B=B^+\cup B^-\cup J$ is an open set containing $J$ and $w$ satisfies one of the following conditions, with the roles of $B^+$, $B^-$ possibly reversed:
\begin{enumerate}
\item[(i)] $w=t_j^-$ on $J$, $w>t_j^-$ on $B^+$, $w<t_j^-$ on $B^-$,
\item[(i$'$)] $w=t_j^-$ on $J$, $w>t_j^-$ on $B^+$, $w=t_j^-$ on $B^-$,
\item[(i$''$)] $w=t_j^-$ on $J$, $w>t_j^-$ on $B^+$, $w>t_j^-$ on $B^-$,
\item[(i$'''$)] $w=t_j^-$ on $J$, $w=t_j^-$ on $B^+$, $w=t_j^-$ on $B^-$.
\end{enumerate}
Note that (i$'''$) immediately implies that $w$ is smooth at each point of $J$.
If (i) holds, then $|\nabla w|$ must be non-zero in $B^+\cup B^-$, hence each point of $J$ has a neighborhood in $B^+\cup B^-$ in which $|\nabla w|$ is bounded away from $0$. We are now exactly in the setting of the smoothing Lemma \ref{SmoothingArc:Lemma}. Thus, there exists a monotone function $\widetilde w$ in $\Omega$ that is smooth in $B$, agrees with $w$ outside an arbitrarily small neighborhood $A$ of $J$, and is arbitrarily close to $w$ in the uniform and in the Sobolev norm. In particular, since $E_p(w)<E_p(u)$ (cf.\ (D) from Subsection \ref{Dirichlet:Section}) we may arrange that $E_p(\widetilde w) < E_p(u)$. Finally, we have $\widetilde w -w \in W^{1,p}_0(\Omega)$ from Lemma \ref{SmoothingArc:Lemma}(d) as claimed in (E) in the statement of Theorem \ref{Intro:Approximation:Theorem}. If (i$'$) or (i$''$) holds instead, then we are in the setting of Lemma \ref{SmoothingArc:Lemma2} and obtain the same conclusions.
Now, if $J$ is homeomorphic to $S^1$ we can find connected open sets $B^+$, $B^-$ so that $J$ is their common boundary. In this case, we have the following alternatives:
\begin{enumerate}
\item[(i)] $w=t_j^-$ on $J$, $w>t_j^-$ on $B^+$, $w<t_j^-$ on $B^-$,
\item[(i$'$)] $w=t_j^-$ on $J$, $w>t_j^-$ on $B^+$, $w=t_j^-$ on $B^-$,
\item[(i$'''$)] $w=t_j^-$ on $J$, $w=t_j^-$ on $B^+$, $w=t_j^-$ on $B^-$.
\end{enumerate}
We note that the alternative (i$''$) of the previous case does not occur here, since it would violate the monotonicity of $w$. We can now apply Lemma \ref{SmoothingCurve:Lemma} to smooth the function $w$ near $J$.
We apply the above smoothing process countably many times in pairwise disjoint, arbitrarily small neighborhoods of components of $v^{-1}(t_j^{\pm})$, $j\in \mathbb Z$. Since these components do not accumulate in $\Omega$ (see Lemma \ref{EssentialArcs:Lemma}(i)), the resulting limiting function $\widetilde w$ will be $C^\infty$-smooth in $\Omega$, except possibly at a discrete set of points of $\Omega$ (by (a),(c) in Subsection \ref{Section:LipschitzRegularity}), near which $\widetilde w$ is $C^{1,\alpha}$-smooth; if $p=2$ then $\widetilde w$ is $C^\infty$-smooth everywhere. If the neighborhoods of $v^{-1}(t_j^{\pm})$, $j\in \mathbb Z$, where the smoothing process takes place are arbitrarily small, then $\widetilde w$ will be $p$-harmonic on a subset of $\Omega$ having measure arbitrarily close to full. Moreover, by Lemma \ref{ConvergenceMonotonicity:Lemma}, $\widetilde w$ is monotone in $\Omega$. Finally, it satisfies the corresponding conditions (B)--(E) from Subsection \ref{Dirichlet:Section}. This completes the proof of Theorem \ref{Intro:Approximation:Theorem}. \qed
\section{Smoothing results}\label{Section:Smoothing}
\begin{theorem}\label{Classification:theorem}
Let $J$ be an embedded $1$-dimensional smooth submanifold of $\mathbb R^2$ that is connected. Then $J$ is diffeomorphic to $X$, where $X=\mathbb R\times \{0\}\subset \mathbb R^2$ or $X=S^1\subset \mathbb R^2$. Moreover, there exists a neighborhood $U$ of $J$ in $\mathbb R^2$ and a diffeomorphism $\phi\colon \mathbb R^2\to U$ such that $\phi(X)=J$.
\end{theorem}
We direct the reader to \cite{Viro:1manifolds} for an account on the classification of $1$-manifolds. The last statement of the theorem can be proved in the same way as the existence of tubular neighborhoods of embedded submanifolds of $\mathbb R^n$; see for example \cite[Theorem 6.24, p.~139]{Lee:manifolds}.
\begin{lemma}[Smoothing along a line] \label{SmoothingArc:Lemma}
Let $\Omega\subset \mathbb R^2$ be an open set and let $J \subset \Omega$ be an embedded $1$-dimensional smooth submanifold of $\mathbb R^2$, homeomorphic to $\mathbb R$, such that $J$ has no accumulation points in $\Omega$; that is, the topological ends of $J$ lie in $\partial \Omega$. Suppose that $J$ is contained in an open set $B\subset \Omega$ such that $J$ is the common boundary of two disjoint regions $B^+$ and $B^-$ with $B=B^+\cup B^-\cup J$. Moreover, let $u\in W^{1,p}(\Omega)$ be a monotone function with the following properties:
\begin{enumerate}[\upshape(i)]
\item \label{SmoothingArc:Lemma:sign} $u=0$ on $J$, $u>0$ on $B^+$, $u<0$ on $B^-$,
\item \label{SmoothingArc:Lemma:smooth} $u$ is smooth in $B^+\cup J$ and in $B^-\cup J$,
\item \label{SmoothingArc:Lemma:nabla} each point of $J$ has a neighborhood in $B^+\cup B^-$ in which $|\nabla u|$ is bounded away from $0$.
\end{enumerate}
Then for any open set $U\subset B$ with $U\supset J$ and each $\varepsilon>0$ there exists an open set $A\subset U$ containing $J$ and a monotone function $\widetilde u$ in $\Omega$ such that
\begin{enumerate}[\upshape(a)]
\item $\widetilde u$ agrees with $u$ in $\Omega\setminus A$ and is smooth in $B$,
\item $|\widetilde u- u|<\varepsilon$ in $A$,
\item $\|\nabla \widetilde u-\nabla u\|_{L^p(A)}<\varepsilon$, and
\item $\widetilde u-u\in W^{1,p}_{0}(A)$.
\end{enumerate}
\end{lemma}
\begin{proof}
By using a diffeomorphism $\phi$ defined in a neighborhood of $J$ in $B$, given by Theorem \ref{Classification:theorem}, we may straighten $J$ and assume that $J=\mathbb R=\{(x,y):y=0\}$, $B=\mathbb R^2$, $B^+=\{(x,y):y>0\}$, $B^-=\{(x,y):y<0\}$. For each $\varepsilon>0$ we will construct a smooth function $\widetilde u$ that agrees with $u$ outside an arbitrarily small neighborhood $A$ of $\mathbb R$, $|\widetilde u-u|<\varepsilon$ in $A$ and $\|\nabla \widetilde u -\nabla u\|_{L^q(A)}<\varepsilon$, where $q>p$ is a fixed exponent (e.g., $q=2p$). If the neighborhood $A$ is sufficiently thin, then $\widetilde u-u\in W^{1,q}_0(A)$. Pulling back $\widetilde u$ under the diffeomorphism $\phi$ and using H\"older's inequality will yield the desired conclusions in $\Omega$. We argue carefully for the monotonicity of $\widetilde u$ at the end of the proof.
For the construction of $\widetilde u$, we are, essentially, going to interpolate between the function $u$ away from $\mathbb R$ and the ``height function'' $(x,y)\mapsto y$ near $\mathbb R$. Several technicalities arise, however, in order to ensure that the new function is monotone.
By assumption \ref{SmoothingArc:Lemma:nabla}, for each $x\in \mathbb R$ there exists a constant $m>0$ such that $|\nabla u |>m$ in a neighborhood of $x$ in $\mathbb R^2\setminus \mathbb R$. Since $u=0$ on $\mathbb R$, we have $u_x=0$ on $\mathbb R$. Hence, using the smoothness of $u$ in $B^+\cup \mathbb R$ and in $B^-\cup \mathbb R$ we conclude that for each point $x\in \mathbb R$ there exist constants $m,M>0$ such that $M>|u_y|>m$ in a neighborhood of $x$ in $\mathbb R^2\setminus \mathbb R$. Thus, we may consider positive smooth functions $\gamma,\delta \colon \mathbb R \to \mathbb R$ such that $\delta(x)>|u_y(x,y)|>\gamma(x)>0$ for all $(x,y)$ lying in a small neighborhood of $\mathbb R$ in $\mathbb R^2\setminus \mathbb R$. By assumptions \ref{SmoothingArc:Lemma:sign} and \ref{SmoothingArc:Lemma:smooth}, we have $u_y\geq 0$ in a neighborhood of $\mathbb R$ in $\mathbb R^2\setminus \mathbb R$. Therefore,
\begin{align}\label{SmoothingArc:Lemma:gammadelta}
\delta(x)>u_y(x,y)>\gamma(x)>0
\end{align} for all $(x,y)$ lying in a neighborhood of $\mathbb R$ in $\mathbb R^2\setminus \mathbb R$. By choosing a possibly larger $\delta$ we may also have
\begin{align}\label{SmoothingArc:Lemma:gammadelta2}
\delta(x)>|u_x(x,y)|
\end{align}
in that neighborhood. We let $A$ be a sufficiently small open neighborhood of $\mathbb R$ in $\mathbb R^2$ so that the preceding inequalities hold. Later we will shrink $A$ even further.
For a positive smooth function $\beta\colon \mathbb R\to \mathbb R$ we define $V(\beta)=\{(x,y): |y|<\beta(x)\}$. Note that we may choose $\beta$ so that $\overline{V(\beta)}\subset A$. By scaling $\beta$, we may also have that
\begin{align}\label{SmoothingArc:Lemma:beta}
|\beta'|\leq 1.
\end{align}
Moreover, we consider a non-negative smooth function $\alpha\colon \mathbb R\to \mathbb R$ with $\alpha(t)=1$ for $t\geq 1$, $\alpha(t)=0$ for $t\leq 1/2$, and
\begin{align}\label{SmoothingArc:Lemma:alpha}
0\leq \alpha'\leq 4.
\end{align}
We now define $s=s(x,y)= |y|/\beta(x)$ and
\begin{align*}
\widetilde u(x,y)= \alpha(s)u(x,y)+ (1-\alpha(s))y\gamma(x).
\end{align*}
This function is smooth, agrees with $u$ outside $V(\beta)$ (where $s\geq 1$), and agrees with $(x,y)\mapsto y\gamma(x)$ in $V(\beta/2)$ (where $0\leq s<1/2$). We have
$|\widetilde u-u| \leq |1-\alpha(s)| |u-y\gamma|\leq |u|+ |y|\gamma$. By \eqref{SmoothingArc:Lemma:gammadelta} in $A$ we have
\begin{align}\label{SmoothingArc:Lemma:u bound}
|u(x,y)| = \left| \int_0^y u_y(x,t)dt \right| \leq |y|\delta(x).
\end{align}
Therefore, $|u|+|y|\gamma < |y|(\gamma+\delta)$.
If $A$ is so small that it is contained in the open set $\{(x,y): |y|<\varepsilon/(\gamma(x)+\delta(x))\}$, then we have $|\widetilde u-u|<\varepsilon$ in $A$, which proves claim (b).
We now compute the derivatives:
\begin{align*}
\widetilde u_x&= -\alpha'(s) \beta'(x) s^2 \frac{u-y\gamma}{|y|} +\alpha(s)u_x + (1-\alpha(s)) y\gamma' \,\,\,\,\, \textrm{and}\\
\widetilde u_y&= \alpha'(s)s \frac{u-y\gamma}{y}+ \alpha(s)u_y+(1-\alpha(s))\gamma.
\end{align*}
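For reference, the two expressions above follow from the chain rule applied to $s(x,y)=|y|/\beta(x)$; for $y\neq 0$ one has
\begin{align*}
s_x = -\frac{|y|\beta'(x)}{\beta(x)^2}= -\beta'(x)\,\frac{s^2}{|y|} \,\,\,\,\, \textrm{and} \,\,\,\,\, s_y=\frac{\operatorname{sgn}(y)}{\beta(x)}=\frac{s}{y}.
\end{align*}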
By \eqref{SmoothingArc:Lemma:beta}, \eqref{SmoothingArc:Lemma:alpha}, and \eqref{SmoothingArc:Lemma:u bound} we have
\begin{align*}
|\widetilde u_x| &\leq 4(\gamma+\delta)+\delta +|y||\gamma'| \,\,\,\,\, \textrm{and}\\
|\widetilde u_y| & \leq 4(\gamma+\delta)+ \delta+ \gamma
\end{align*}
in $V(\beta)\subset A$. Since the above bounds and \eqref{SmoothingArc:Lemma:gammadelta}, \eqref{SmoothingArc:Lemma:gammadelta2} do not depend on $A$ but only on the function $u$, if $A$ is sufficiently small, then $\|\nabla \widetilde u- \nabla u\|_{L^q(A)}\leq \|\nabla \widetilde u\|_{L^q(V(\beta))}+ \| \nabla u\|_{L^q(V(\beta))}$ can be made as small as we wish.
We next prove that $\widetilde u$ is monotone in $\mathbb R^2$. Note that $\frac{u-y\gamma}{y} \geq 0$ on $A$, since $u_y>\gamma$. Therefore,
\begin{align*}
\widetilde u_y \geq \gamma>0.
\end{align*}
This implies that $\nabla \widetilde u \neq 0$ in $A$, so $\widetilde u$ does not have any local maximum or minimum in $A$, and it is strictly monotone there. On the other hand, $\widetilde u =u$ outside $\overline{V(\beta)}\subset A$, and therefore $\widetilde u$ is monotone outside $V(\beta)$. By the Gluing Lemma \ref{GluingMonotonicity:Lemma} it follows that $\widetilde u$ is monotone in $\mathbb R^2$.
The argument of the previous paragraph and the use of Lemma \ref{GluingMonotonicity:Lemma} can be carried out in $\Omega$, after precomposing $\widetilde u$ with the straightening diffeomorphism $\phi^{-1}$. We denote the composition, for simplicity, by $\widetilde u$. So $\widetilde u$ is a function in $\Omega$ that is strictly monotone in $\phi^{-1}(A)$ and can be extended to agree with $u$ in $\Omega\setminus \overline{\phi^{-1}(V(\beta))}$; in particular, $\widetilde u$ is monotone in $\Omega\setminus \overline{\phi^{-1}(V(\beta))}$. If we ensure that $\overline{\phi^{-1}(V(\beta))}\cap \Omega \subset \phi^{-1}(A)$, then Lemma \ref{GluingMonotonicity:Lemma} can be applied to yield the monotonicity of $\widetilde u$ in $\Omega$. The latter can be achieved by shrinking $\beta$ (and thus the neighborhood $V(\beta)$) if necessary, using the assumption that the topological ends of $J$ lie in $\partial \Omega$ and not in $\Omega$.
\end{proof}
\begin{lemma}\label{SmoothingArc:Lemma2}
The conclusions of Lemma \ref{SmoothingArc:Lemma} hold if the assumptions \textup{(i)},\textup{(iii)} are replaced by the assumptions
\begin{enumerate}\upshape
\item[(i$'$)] $u=0$ on $J$, $u>0$ on $B^+$, $u=0$ on $B^-$, and
\item[(iii$'$)] each point of $J$ has a neighborhood in $B^+$ in which $|\nabla u|$ is bounded away from $0$,
\end{enumerate}
or if \textup{(i)} is replaced by the assumption
\begin{enumerate}\upshape
\item[(i$''$)] $u=0$ on $J$, $u>0$ on $B^+$, $u>0$ on $B^-$.
\end{enumerate}
\end{lemma}
\begin{proof}
In the first case, one argues as in the proof of Lemma \ref{SmoothingArc:Lemma}, by straightening the line $J$ to the real line with a diffeomorphism $\phi$. This time we interpolate between $u$ and the function $e^{-1/y}$ instead of the height function $y$:
\begin{align*}
\widetilde u(x,y)= \begin{cases}
\alpha(s)u(x,y)+ (1-\alpha(s)) e^{-1/y} \gamma(x) & y>0\\
u & y\leq 0.
\end{cases}
\end{align*}
Here $\gamma$, $\delta$, $\beta$, $s$, $\alpha(s)$ are as in the previous proof, but working only in the upper half plane, and $A$, $V(\beta)=\{(x,y):0<y<\beta(x)\}$ are appropriate neighborhoods of $\mathbb R$ in the upper half plane $\{y>0\}$. The function $\widetilde u$ is smooth and the claims (a)--(d) from Lemma \ref{SmoothingArc:Lemma} follow as in the previous proof. We only have to argue differently for the monotonicity of $\widetilde u$.
We compute for $y>0$
$$\widetilde u_y= \alpha'(s) s \frac{u-e^{-1/y}\gamma}{y}+ \alpha(s) u_y + (1-\alpha(s)) \frac{e^{-1/y}}{y^2}\gamma.$$
Since $u_y>\gamma$ on $A$, we have $\frac{u-e^{-1/y}\gamma}{y} \geq 0$ for all sufficiently small $y$. In particular this holds in $A$ if we shrink $A$. It follows that $\widetilde u_y>0$ on $A$. Thus, $\widetilde u$ is strictly monotone in $A$. Moreover, outside $\overline {V(\beta)}\cap \{y\neq 0\} \subset A$ the function $\widetilde u$ agrees with $u$. Summarizing, in the domain $\widetilde\Omega =\mathbb R^2\setminus \mathbb R=\{y\neq 0\}$ we have $\overline{V(\beta)} \cap \widetilde \Omega \subset A$ and the function $\widetilde u$ is strictly monotone in $A$, and monotone in $\widetilde \Omega\setminus \overline{V(\beta)}$. By Lemma \ref{GluingMonotonicity:Lemma} we have that $\widetilde u$ is monotone in $\widetilde \Omega$.
Arguing as in the previous proof, we transfer the conclusions to $\Omega$ and deduce that $\widetilde u$ is monotone in $\Omega\setminus J$. Note that $J$ is a connected closed subset of $\Omega$. Since $\widetilde u=0$ in $J$ and $J$ exits all compact subsets of $\Omega$, by Lemma \ref{GluingConstant:Lemma} we conclude that $\widetilde u$ is monotone in $\Omega$. This completes the proof under the assumptions (i$'$) and (iii$'$).
Now, we assume that (i$'')$ holds in the place of (i). After straightening $J$, we define
\begin{align*}
\widetilde u(x,y)= \begin{cases}
\alpha(s)u(x,y)+ (1-\alpha(s)) e^{-1/|y|} \gamma(x) & y\neq 0\\
0 & y= 0
\end{cases}
\end{align*}
This is a smooth function in $\mathbb R^2$. The conclusions (a)--(d) are again straightforward, so we only argue for the monotonicity. In $V(\beta)= \{(x,y): 0<|y|<\beta(x)\}$ and in a slightly larger open set $A\supset \overline{V(\beta)}$ we have $\widetilde u_y\neq 0$. This implies that $\widetilde u$ is strictly monotone in $A$. Using Lemma \ref{GluingMonotonicity:Lemma}, we have that $\widetilde u$ is monotone in $\mathbb R^2\setminus \mathbb R$.
We transfer the conclusions to $\Omega$ and we obtain a function $\widetilde u$ that is monotone in $\Omega\setminus J$. Since $J$ is closed, connected and exits all compact subsets of $\Omega$, and $\widetilde u=0$ on $J$, by Lemma \ref{GluingConstant:Lemma} we conclude that $\widetilde u$ is monotone in $\Omega$.
\end{proof}
\begin{lemma}[Smoothing along a Jordan curve] \label{SmoothingCurve:Lemma}
Let $\Omega\subset \mathbb R^2$ be an open set and let $J \subset \Omega$ be an embedded $1$-dimensional smooth submanifold of $\mathbb R^2$, homeomorphic to $S^1$. Suppose that $J$ is contained in an open set $B\subset \Omega$ such that $J$ is the common boundary of two disjoint regions $B^+$ and $B^-$ with $B=B^+\cup B^-\cup J$. Moreover, let $u\in W^{1,p}(\Omega)$ be a monotone function satisfying one of the following triples of conditions:
\begin{enumerate}[\upshape(i)]
\item $u=0$ on $J$, $u>0$ on $B^+$, $u<0$ on $B^-$,
\item $u$ is smooth in $B^+\cup J$ and in $B^-\cup J$,
\item each point of $J$ has a neighborhood in $B^+\cup B^-$ in which $|\nabla u|$ is bounded away from $0$,
\end{enumerate}
or
\begin{enumerate}[\upshape(i$'$)]
\item $u=0$ on $J$, $u>0$ on $B^+$, $u=0$ on $B^-$,
\item $u$ is smooth in $B^+\cup J$ and in $B^-\cup J$,
\item each point of $J$ has a neighborhood in $B^+$ in which $|\nabla u|$ is bounded away from $0$.
\end{enumerate}
Then for any open set $U\subset B$ with $U\supset J$ and each $\varepsilon>0$ there exists an open set $A\subset U$ containing $J$ and a monotone function $\widetilde u$ in $\Omega$ such that
\begin{enumerate}[\upshape(a)]
\item $\widetilde u$ agrees with $u$ in $\Omega\setminus A$ and is smooth in $B$,
\item $|\widetilde u- u|<\varepsilon$ in $A$,
\item $\|\nabla \widetilde u-\nabla u\|_{L^p(A)}<\varepsilon$, and
\item $\widetilde u-u \in W^{1,p}_0(A)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We will only sketch the differences from the proofs of Lemmas \ref{SmoothingArc:Lemma} and \ref{SmoothingArc:Lemma2}. By Theorem \ref{Classification:theorem} we may map $B$ with a diffeomorphism to a neighborhood of $S^1$. After shrinking $B$, we may assume that this neighborhood of $S^1$ is an annulus. Then, using logarithmic coordinates, we map $S^1$ to $\mathbb R$. Let $\phi$ denote the composition of the two maps described. By precomposing $u$ with $\phi^{-1}$, we obtain a periodic function, still denoted by $u$, in a strip $\{(x,y):|y|< c\}$. We consider a strip $V(\beta)= \{ |y|<\beta\}$ under assumption (i), or $V(\beta)=\{0<y<\beta\}$ under (i$'$), where $\beta$ is a constant rather than a function, and a strip $A \supset \overline {V(\beta)}$. We consider the function $\widetilde u$ with the same definition as in the proofs of Lemmas \ref{SmoothingArc:Lemma} and \ref{SmoothingArc:Lemma2}. The function $\widetilde u$ is also periodic and smooth, so it gives a function in the original domain $\Omega$, by extending it to be equal to $u$ outside $B$. We still denote this function by $\widetilde u$. The properties (a)--(d) are straightforward to obtain, upon shrinking the strip $A$. We only argue for the monotonicity.
Under the first set of assumptions, and in particular under (i), the function $\widetilde u$ is strictly monotone in $\phi^{-1}(A)$ and agrees with $u$ outside $\phi^{-1}(\overline{V(\beta)}) \subset A$. Hence, by Lemma \ref{GluingMonotonicity:Lemma}, $\widetilde u$ is monotone in $\Omega$.
Under the second set of assumptions, and in particular under (i$'$), we can only conclude that $\widetilde u$ is monotone in $\Omega\setminus J$; see also the proof of Lemma \ref{SmoothingArc:Lemma2}. However, now we cannot apply Lemma \ref{GluingConstant:Lemma}, since $J$ does not exit all compact subsets of $\Omega$. Instead, we are going to use Lemma \ref{Gluing:Lemma}. In $\phi^{-1}(V(\beta))$, which is precompact, by continuity we have $0<\widetilde u< t$ and $0<u<t$ for some $t>0$. We set $\Upsilon= u^{-1}((0,t))$, so we have $\phi^{-1}(V(\beta))\subset \Upsilon$. Observe that $\widetilde u$ is monotone in $\Upsilon \subset \Omega\setminus J$ and $0\leq \widetilde u\leq t$ in $\Upsilon$. By Lemma \ref{Gluing:Lemma} we conclude that the function
$$\widetilde u= \widetilde u \x_{\Upsilon} + u \x_{\Omega\setminus \Upsilon}$$
is monotone in $\Omega$.
\end{proof}
\section{A comparison of the original and improved matrix methods}\label{sec:comparison}
As an example of the comparison of the original and improved matrix methods, the estimation of the fake-lepton contributions in an inclusive cross-section measurement of top-quark pair production in association with a photon ($t\bar{t}\gamma$) with the ATLAS experiment~\cite{Aaboud_2019} is discussed in Section~\ref{sec:physics_example}. This measurement used data taken in proton--proton collisions at the Large Hadron Collider (LHC) in the years 2015 and 2016, corresponding to an integrated luminosity of \SI{36}{\per\femto\barn}. The matrix method was used to estimate the fake-lepton yield in the single-lepton channel, which mainly stems from multijet background processes.
Section~\ref{sec:limit_cases} is devoted to the limitations of the matrix method in certain regions of the phase space, as explained in Section~\ref{sec:classical_matrix_method}. To this end, five situations are discussed and the estimates of the original and improved matrix methods are compared.
The following studies are performed using the likelihood from Eq.~\eqref{eqn:lhmm_likelihood_2} and the Metropolis-Hastings algorithm with a sample size of \num{e6}.
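The text specifies only the algorithm and the sample size, not an implementation. As an illustration only, the following is a minimal random-walk Metropolis-Hastings sketch in Python; the toy posterior (a Gaussian truncated at zero, mimicking the physical requirement that the fake-event yield be non-negative), the step size and the seed are assumptions for this sketch, not the analysis code used in the paper.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=1.0, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler for a 1-D posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.gauss(0.0, 1.0)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp));
        # the tiny offset guards against log(0).
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: Gaussian with mean 2 and unit width, truncated at zero so
# that negative (unphysical) yields receive zero probability.
def log_post(x):
    return -0.5 * (x - 2.0) ** 2 if x >= 0.0 else -math.inf

chain = metropolis_hastings(log_post, x0=2.0, n_samples=100_000)
```

The chain here is shorter than the \num{e6} samples used in the studies, purely to keep the sketch fast; all sampled values respect the physical bound by construction.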
\subsection[Physics example: \texorpdfstring{$t\bar{t}\gamma$}{} cross-section measurement with the ATLAS detector]{Physics example: $\boldsymbol{t\bar{t}\gamma}$ cross-section measurement with the ATLAS detector}\label{sec:physics_example}
The efficiencies \ensuremath{\varepsilon^{\mathrm{r}}}\xspace and \ensuremath{\varepsilon^{\mathrm{f}}}\xspace were determined with the tag-and-probe technique, using leptons from $Z$ boson decays and control regions enriched with fake leptons for the real and fake efficiency, respectively~\cite{Aaboud_2019}.
Since Ref.~\cite{Aaboud_2019} only provides the estimated number of fake events, $\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace = \num{360(200)}$, and the total yield, $\ensuremath{N_{\mathrm{T}}}\xspace = \num{11750}$, but gives neither \ensuremath{N_{\mathrm{L}}}\xspace nor the real and fake efficiencies, the latter are estimated here in the following way:
The efficiencies are roughly estimated from Ref.~\cite{ATLAS-CONF-2014-058}. Although they are a function of various kinematic quantities and the uncertainties in each bin include a combination of systematic and statistical uncertainties, the efficiencies are taken to be $\ensuremath{\varepsilon^{\mathrm{r}}}\xspace = \num{0.8}$ and $\ensuremath{\varepsilon^{\mathrm{f}}}\xspace = \num{0.2}$. To reproduce the uncertainty on the fake-event yield $\sigma_{\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace} = \num{200}$, Eq.~\eqref{eqn:first_order_expansion} is used to estimate symmetric uncertainties on the efficiencies. For simplicity, the uncertainties on the efficiencies are assumed to be equal, so that $\sigma_{\ensuremath{\varepsilon^{\mathrm{r}}}\xspace} = \sigma_{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace} = \sigma_{\varepsilon}$. Hence, it follows that
\begin{equation}
\sigma_{\varepsilon} = \alpha\sigma_{\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace}\,,
\end{equation}
with a proportionality factor $\alpha$. The factor is calculated to be $\alpha \approx 0.00019$ using an arbitrary $\sigma_{\varepsilon}$ and the resulting uncertainty $\sigma_{\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace}$ computed with Eq.~\eqref{eqn:first_order_expansion}. Therefore, the efficiency uncertainty that reproduces the output uncertainty of the original matrix method, $\sigma_{\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace} = 200$, is computed to be $\sigma_{\varepsilon} \approx 0.038$.
\begin{figure}[H]
\centering
\includegraphics[width = 0.97\textwidth]{Plots/PhysicsExample/physics_example_md.pdf}
\caption{The marginalised distributions of the parameter set $\boldsymbol{\theta}$. The coloured regions belong to the smallest \SI{68.27}{\percent}, \SI{95.45}{\percent} and \SI{99.73}{\percent} intervals. The off-diagonals present two-dimensional histograms of the parameter combinations. They are mirrored on the diagonal, with the upper right plots showing heat maps and the lower left plots sharing the plot style and interval definitions of the diagonal histograms.}
\label{fig:physics_example_md}
\end{figure}
The missing value of \ensuremath{N_{\mathrm{L}}}\xspace is derived by solving Eq.~\eqref{eqn:N_tight_fake} for \ensuremath{N_{\mathrm{L}}}\xspace, which gives
\begin{equation}
\ensuremath{N_{\mathrm{L}}}\xspace = \frac{1}{\ensuremath{\varepsilon^{\mathrm{r}}}\xspace} \left(\ensuremath{N_{\mathrm{T}}}\xspace - \frac{\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace (\ensuremath{\varepsilon^{\mathrm{f}}}\xspace - \ensuremath{\varepsilon^{\mathrm{r}}}\xspace)}{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace}\right) = \num{16038} \,.
\end{equation}
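This transformation can be cross-checked numerically; the sketch below simply evaluates the right-hand side with the rounded inputs quoted above, giving $\ensuremath{N_{\mathrm{L}}}\xspace = 16037.5$, which rounds to the quoted \num{16038}.

```python
# Rounded inputs quoted in the text.
N_T, N_T_fake = 11750, 360   # total tight yield and estimated fake yield
eps_r, eps_f = 0.8, 0.2      # estimated real and fake efficiencies

N_L = (N_T - N_T_fake * (eps_f - eps_r) / eps_f) / eps_r
print(round(N_L, 1))  # 16037.5
```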
The marginalised distributions of the Bayesian model for the parameter set $\boldsymbol{\theta}$ are shown in Figure~\ref{fig:physics_example_md}. On the main diagonal, the posterior parameter distributions are shown with the corresponding smallest \SI{68.27}{\percent}, \SI{95.45}{\percent} and \SI{99.73}{\percent} intervals. The off-diagonals show two-dimensional histograms of the parameter configurations.
The posterior distribution of the fake-lepton estimate results from Eq.~\eqref{eqn:nu_tight_fake} and is shown in Figure~\ref{fig:physics_example} as a dark blue histogram. The smallest intervals are omitted here.
In Figure~\ref{fig:physics_example}, the distribution of the original matrix method is shown as well. The estimate with the original matrix method, shown in the orange histogram, leads to a non-negligible probability in the unphysical region where the fake yield is negative, while the Bayesian approach does not. The values of maximum probability (the mean for the original matrix method and the mode for the Bayesian approach) also differ. The results are summarised in Table~\ref{tab:physics_example}.
\begin{table}[H]
\small
\centering
\caption{Summary of the characteristic values (mean, mode and median) of the distributions in Figure~\ref{fig:physics_example}. The uncertainties refer to the smallest \SI{68.27}{\percent} intervals, since the distribution of the Bayesian model has an asymmetric shape.}
\begin{tabular}{ccc}
\toprule
Quantity & Bayesian model & Original matrix method \\
\midrule
mean & -- & $\num{360\pm200}$ \\
mode & $300^{+225}_{-175}$ & $\num{360\pm200}$ \\
median & $352$ & $360$ \\
\bottomrule
\end{tabular}
\label{tab:physics_example}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width = 0.8\textwidth]{Plots/PhysicsExample/physics_example.pdf}
\caption{Probability distribution function for the fake event estimator passing the \textit{tight}\xspace selection in the example of the $t\bar{t}\gamma$ cross-section measurement.}
\label{fig:physics_example}
\end{figure}
\subsection{Special cases of the original matrix method}\label{sec:limit_cases}
One of the main problems of the original formulation of the matrix method is the negative event-yield estimates in some regions of the phase space. Thus, five artificial scenarios, based only on the measured quantities \ensuremath{N_{\mathrm{L}}}\xspace, \ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace and \ensuremath{\varepsilon^{\mathrm{f}}}\xspace and emphasising the limitations of the matrix method, are presented below. Unless otherwise stated, $\ensuremath{N_{\mathrm{L}}}\xspace = 20$ and $\ensuremath{N_{\mathrm{T}}}\xspace = 10$.
\begin{description}[leftmargin = 1.6em, font=\normalfont]
\item[\Circled{1}] The first case targets the numerical instabilities of the matrix method when sampling in a region of the phase space with similar efficiencies. For this example, the efficiencies are chosen to be $\ensuremath{\varepsilon^{\mathrm{r}}}\xspace = \num{0.51\pm0.02}$ and $\ensuremath{\varepsilon^{\mathrm{f}}}\xspace = \num{0.50\pm0.02}$.
\item[\Circled{2}] The second case demonstrates how the estimation performs when the parameters of the matrix method are chosen so that $\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace \approx 0$. This is achieved for a very small fake efficiency or a very small $N_{\mathrm{L}}^{\mathrm{f}}$. For this example the fake efficiency takes on a value close to zero, so that $\ensuremath{\varepsilon^{\mathrm{r}}}\xspace = \num{0.75\pm0.02}$ and $\ensuremath{\varepsilon^{\mathrm{f}}}\xspace = \num{0.01\pm0.02}$.
\item[\Circled{3}] Since analytically there is a possibility that the matrix method estimates negative fake-event yields, the parameters of Eq.~\eqref{eqn:N_tight_fake} are chosen so that $\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace < 0$.
\item[\Circled{4}] This scenario discusses two aspects. With the efficiencies chosen to be $\ensuremath{\varepsilon^{\mathrm{r}}}\xspace = \num{0.99\pm0.02}$ and $\ensuremath{\varepsilon^{\mathrm{f}}}\xspace = \num{0.01\pm0.02}$, the very large difference $\mathrm{\Delta}\varepsilon = |\ensuremath{\varepsilon^{\mathrm{r}}}\xspace - \ensuremath{\varepsilon^{\mathrm{f}}}\xspace|$ is expected to lead to a stable estimate with the matrix method. On the other hand, the Bayesian model is evaluated at the prior bounds, as these are bounded below and above by zero and one (see Eq.~\eqref{eqn:truncated_normal}).
\item[\Circled{5}] The last case examines the effect of the efficiency uncertainty, hence $\sigma_{\varepsilon}$ is increased by a factor of \num{10} and the efficiencies are chosen to be $\ensuremath{\varepsilon^{\mathrm{r}}}\xspace = \num{0.75\pm0.2}$ and $\ensuremath{\varepsilon^{\mathrm{f}}}\xspace = \num{0.42\pm0.2}$.
\end{description}
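The central values of the original matrix method for these scenarios can be checked directly for the four cases whose efficiencies are fully specified. The sketch below assumes the standard closed-form estimator $\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace = \ensuremath{\varepsilon^{\mathrm{f}}}\xspace(\ensuremath{\varepsilon^{\mathrm{r}}}\xspace \ensuremath{N_{\mathrm{L}}}\xspace - \ensuremath{N_{\mathrm{T}}}\xspace)/(\ensuremath{\varepsilon^{\mathrm{r}}}\xspace - \ensuremath{\varepsilon^{\mathrm{f}}}\xspace)$:

```python
def n_tight_fake(N_L, N_T, eps_r, eps_f):
    """Standard closed-form matrix-method estimate of the tight fake yield."""
    return eps_f * (eps_r * N_L - N_T) / (eps_r - eps_f)

N_L, N_T = 20, 10
# Case label: (real efficiency, fake efficiency); case 3 is omitted,
# since its efficiencies are not fully specified in the text.
efficiencies = {1: (0.51, 0.50), 2: (0.75, 0.01),
                4: (0.99, 0.01), 5: (0.75, 0.42)}
for case, (eps_r, eps_f) in efficiencies.items():
    print(case, round(n_tight_fake(N_L, N_T, eps_r, eps_f), 2))
```

These closed-form values agree closely with the matrix-method means quoted for the corresponding cases; small residual differences are expected, since the quoted means come from sampling over the efficiency uncertainties.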
The results of the cases described above are presented as distributions in histograms in Figure~\ref{fig:limit_case_comparisons} and summarised in Table~\ref{tab:limit_case_table}.
\begin{table}[H]
\small
\centering
\caption{The quantitative results of the studies described above and illustrated in the histograms in Figure~\ref{fig:limit_case_comparisons}. The table shows the values of maximum probability (mode for the Bayesian approach and mean for the matrix method), as well as the median for the five limiting cases. The results for each of the cases are presented in columns. All uncertainties refer to the smallest \SI{68.27}{\percent} interval.}
\label{tab:limit_case_table}
\begin{tabular}{cc
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
}
\toprule
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{Quantity} & {\Circled{1}} & {\Circled{2}} & {\Circled{3}} & {\Circled{4}} & {\Circled{5}} \\
\midrule
\multirow{2}{*}{Bayesian model} & mode & $\num{4.00}^{+3.50}_{-4.00}$ & $\num{0.00}^{+0.18}_{-0.00}$ & $\num{0.00}^{+0.12}_{-0.00}$ & $\num{0.08}^{+0.20}_{-0.08}$ & $\num{1.80}^{+4.60}_{-1.80}$ \\
& median & $\num{5.25}$ & $\num{0.12}$ & $\num{0.79}$ & $\num{0.19}$ & $\num{4.41}$ \\ \midrule
\multirow{2}{*}{matrix method} & mean & 10.00(2041) & 0.07(014) & -5.09(099) & 0.10(020) & 6.37(700) \\
& median & $\num{10.00}$ & $\num{0.07}$ & $\num{-5.09}$ & $\num{0.10}$ & $\num{6.37}$ \\
\bottomrule
\end{tabular}
\end{table}
Each limit case is described in a column and the values of maximum probability of the distributions (mode for the Bayesian approach and mean for the matrix method) are given. The median is also given for both methods. Since the matrix method distribution follows a Gaussian distribution, mean value and median are identical. This is not the case for the Bayesian model, as the distributions presented in Figure~\ref{fig:limit_case_comparisons} are asymmetric. The uncertainties on the modal values refer to the upper and lower range of the smallest \SI{68.27}{\percent} intervals.
Each of the plots in Figure~\ref{fig:limit_case_comparisons} contains two histograms of the fake-event yield in the \textit{tight}\xspace region. One histogram shows the distribution estimated with the original matrix method in orange; the other shows the distribution of the Bayesian model in dark blue. The coloured regions indicating the smallest \SI{68.27}{\percent}, \SI{95.45}{\percent} and \SI{99.73}{\percent} intervals are omitted for clarity. The distributions computed with the original matrix method contain negative and thus unphysical contributions in all cases.
The first example in Figure~\ref{fig:case_1} shows the expected unstable fake-event estimate of the matrix method as a broad distribution. The $x$-axis is restricted to the range \numrange{-10}{30} to also clearly illustrate the distribution of the Bayesian model. The estimate of the latter method is more stable, although the Bayesian approach leads to a broad plateau from $\ensuremath{\nu_{\mathrm{T}}^{\mathrm{f}}}\xspace = 0$ up to $\ensuremath{\nu_{\mathrm{T}}^{\mathrm{f}}}\xspace \approx 7$.
In the second case, presented in Figure~\ref{fig:case_2}, the probability distribution of the matrix method decreases again towards zero. The Bayesian model does not show this feature and leads to a peak at zero but with a wider tail in the positive range.
Even in regions with limited validity of the original matrix method, the Bayesian model adequately describes the distribution of the fake leptons. An example for a region of limited validity is presented in Figure~\ref{fig:case_3}, where the distribution of the matrix method peaks in the negative range without any significant positive contribution. The Bayesian model shows a peak at zero with a steep slope, as expected.
In the fourth limit case in Figure~\ref{fig:case_4}, both methods produce narrow distributions with peaks and uncertainties of the same scale. Although the Bayesian model is evaluated at the prior bounds in this region, the estimate does not suffer. As expected, the distribution computed with the matrix method is very stable and accurate in terms of the mean value.
Since the efficiency uncertainties in the fifth case, given in Figure~\ref{fig:case_5}, are scaled with a factor of ten, the distributions of the two methods are wider, with a longer tail arising in the matrix method distribution. In this case, even with a positive estimated mean value of the matrix method, the values of maximum probability differ.
In general, the Bayesian approach results in a narrower probability density for the fake-lepton yield, with zero probability for negative yields.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width = 1.0\textwidth]{Plots/ComparisonPlots/case_study_1.pdf}
\subcaption{Case \Circled{1}.}
\label{fig:case_1}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width = 1.0\textwidth]{Plots/ComparisonPlots/case_study_2.pdf}
\subcaption{Case \Circled{2}.}
\label{fig:case_2}
\end{subfigure}\\
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width = 1.0\textwidth]{Plots/ComparisonPlots/case_study_3.pdf}
\subcaption{Case \Circled{3}.}
\label{fig:case_3}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width = 1.0\textwidth]{Plots/ComparisonPlots/case_study_4.pdf}
\subcaption{Case \Circled{4}.}
\label{fig:case_4}
\end{subfigure}\\
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[width = 0.48\textwidth]{Plots/ComparisonPlots/case_study_5.pdf}
\subcaption{Case \Circled{5}.}
\label{fig:case_5}
\end{subfigure}
\caption{Comparison of the original and improved matrix methods to estimate fake-event yields in the \textit{tight}\xspace region. All distributions are sampled with $10^6$ samples. The Metropolis-Hastings algorithm is used to draw the posterior samples of the Bayesian model.}
\label{fig:limit_case_comparisons}
\end{figure}
\FloatBarrier
\section*{Acknowledgements}
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through projects KR 4060/7-1 and KR 4060/13-1, the PUNCH4NFDI consortium supported by the DFG fund NFDI 39/1, the Studienstiftung des deutschen Volkes and the US Department of Energy (DoE).
\section{The matrix method and its limitations}
\label{sec:MM}
The matrix method is a data-driven method to estimate the fake-lepton contribution. While the matrix method works quite well in most use cases, there are some general limitations and restrictions in its application. The main challenges of the matrix method lie in the uncertainty calculation, which is based on a first-order Taylor-series approximation, and in the non-vanishing probability of estimating negative fake-event yields~\cite{Gillam_2014}. The derivations presented below refer to the single-lepton case, although the methods are applicable to multi-lepton final states as well.
\subsection{Matrix method in its original form} \label{sec:classical_matrix_method}
In the original formulation of the matrix method~\cite{PhysRevD.76.092007}, two lepton identification criteria are used to estimate the fake-lepton contribution. These criteria are referred to as \textit{loose}\xspace and \textit{tight}\xspace. While the \textit{tight}\xspace requirement corresponds to the region of interest (signal region), i.e.~where the fake-lepton contribution is estimated, the \textit{loose}\xspace region has relaxed requirements and is enriched in fake leptons. The leptons in the \textit{tight}\xspace region are a subset of those in the \textit{loose}\xspace region (Fig.~\ref{fig:MM}).
\begin{figure}[h]
\centering
\includegraphics[width = 0.25\textwidth]{Plots/Theory/loose_tight_tikz.pdf}
\caption{Sketch of the matrix method with the \textit{loose}\xspace and \textit{tight}\xspace regions highlighted in blue and red, respectively. The dashed line divides the regions further into real and fake lepton regions.
\label{fig:MM}}
\end{figure}
The corresponding numbers of events passing the \textit{loose}\xspace and \textit{tight}\xspace selection are called \ensuremath{N_{\mathrm{L}}}\xspace and \ensuremath{N_{\mathrm{T}}}\xspace, each containing both the contributions of real ($N_{\text{L}}^{\text{r}}$ and $N_{\text{T}}^{\text{r}}$) and fake leptons ($N_{\text{L}}^{\text{f}}$ and $N_{\text{T}}^{\text{f}}$). The total event numbers in the regions are then given by
\begin{align}
\begin{split}
\ensuremath{N_{\mathrm{L}}}\xspace &= N_{\text{L}}^{\text{r}} + N_{\text{L}}^{\text{f}}\,, \\
\ensuremath{N_{\mathrm{T}}}\xspace &= N_{\text{T}}^{\text{r}} + N_{\text{T}}^{\text{f}}\,.
\end{split}
\label{eqn:N_loose_tight}
\end{align}
The probability of migrating from one region to the other is determined by the efficiencies \ensuremath{\varepsilon^{\mathrm{r}}}\xspace and \ensuremath{\varepsilon^{\mathrm{f}}}\xspace for real and fake leptons, respectively\footnote{Differences of these efficiencies caused by lepton flavor, kinematic effects or different sources of fake leptons are neglected in this analysis.}. They are defined as the fraction of leptons passing the corresponding criteria
\begin{align}
\begin{split}
\ensuremath{\varepsilon^{\mathrm{r}}}\xspace &= \frac{N_{\text{T}}^{\text{r}}}{N_{\text{L}}^{\text{r}}}\,,\\
\ensuremath{\varepsilon^{\mathrm{f}}}\xspace &= \frac{N_{\text{T}}^{\text{f}}}{N_{\text{L}}^{\text{f}}}\,.
\end{split}
\label{eqn:epsilon_real_fake}
\end{align}
Then, the number of fake leptons fulfilling the \textit{tight}\xspace requirements\footnote{Note that, due to fluctuations in \ensuremath{N_{\mathrm{L}}}\xspace and \ensuremath{N_{\mathrm{T}}}\xspace, the resulting quantity \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace refers to the estimator of the fake-lepton yield in the tight region.} is
\begin{equation}
\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace = \frac{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace}{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace - \ensuremath{\varepsilon^{\mathrm{r}}}\xspace}\cdot\left(\ensuremath{N_{\mathrm{T}}}\xspace - \ensuremath{\varepsilon^{\mathrm{r}}}\xspace \ensuremath{N_{\mathrm{L}}}\xspace\right)\,.
\label{eqn:N_tight_fake}
\end{equation}
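As an illustrative sketch (the function name and numerical values below are not taken from the original analysis), Eq.~\eqref{eqn:N_tight_fake} and its failure modes can be reproduced in a few lines of Python:

```python
def n_tight_fake(n_loose, n_tight, eps_real, eps_fake):
    """Fake-lepton yield estimator in the tight region, Eq. (N_tight_fake)."""
    if eps_fake == eps_real:
        raise ZeroDivisionError("estimator undefined for equal efficiencies")
    return eps_fake / (eps_fake - eps_real) * (n_tight - eps_real * n_loose)

# Well-separated efficiencies: a finite, positive estimate (~171.4 here).
print(n_tight_fake(1000, 300, eps_real=0.9, eps_fake=0.2))

# Limitation 1: near-equal efficiencies blow up the 1/(eps_fake - eps_real)
# prefactor, so small fluctuations in the inputs produce a huge estimate.
print(n_tight_fake(1000, 320, eps_real=0.31, eps_fake=0.30))

# Limitation 2: the estimate turns negative for these inputs.
print(n_tight_fake(1000, 950, eps_real=0.9, eps_fake=0.2))
```

The second and third calls illustrate the numerical instability and the negative yields discussed below.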
This formulation of the matrix method leads to several limitations~\cite{erich_varnes}:
\begin{enumerate}
\item In the limit of very similar real and fake efficiencies, the estimate becomes numerically unstable, resulting in a large (positive or negative) estimate of \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace.
\item \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace becomes negative if the \textit{loose}\xspace region contains more events with real leptons than the total number of events in the \textit{tight}\xspace region.
\item The uncertainty estimation on the \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace yield is subject to limitations that are described in more detail in the next section (Section~\ref{sec:error_propagation}).
\end{enumerate}
In general, the estimation of the fake-lepton background is subject to several sources of systematic uncertainty. While a full discussion of these is beyond the scope of this paper, in principle these uncertainties arise due to potential variations in the values of \ensuremath{\varepsilon^{\mathrm{r}}}\xspace and \ensuremath{\varepsilon^{\mathrm{f}}}\xspace. These variations can be due to differences in the fake-lepton composition between the regions where the \ensuremath{\varepsilon^{\mathrm{f}}}\xspace-values are measured and the analysis region where they are applied, or due to variations in the Monte Carlo model for real-lepton contributions in the fake-lepton control region (since the simulated real-lepton sources are subtracted when measuring \ensuremath{\varepsilon^{\mathrm{f}}}\xspace).
\subsection{The limits of Gaussian uncertainty propagation} \label{sec:error_propagation}
The fake-event yield in the \textit{tight}\xspace region is subject to uncertainties in the input to Eq.~\eqref{eqn:N_tight_fake}. The uncertainty propagation is based on a first-order Taylor-series approximation (Gaussian uncertainty propagation). Since the efficiencies are given as uncertain parameters to Eq.~\eqref{eqn:N_tight_fake}, the first-order expansion of \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace results in
\begin{equation}
\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace \approx \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace(\ensuremath{N_{\mathrm{L}}}\xspace, \ensuremath{N_{\mathrm{T}}}\xspace, \bar{\varepsilon}^{\mathrm{r}}, \bar{\varepsilon}^{\mathrm{f}}) + \underbrace{\sum_{x_i \in \{\ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace\}} \frac{\partial \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace}{\partial x_i} (x_i - \bar{x}_i)}_{\approx\sigma_{\!\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace}}\,.
\label{eqn:first_order_expansion}
\end{equation}
This corresponds to a linearisation of the output distribution, and \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace is then assumed to follow a Gaussian distribution. Since Eq.~\eqref{eqn:N_tight_fake} is a non-linear function, a symmetric interval of $\pm\sigma_{\!\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace}$ around its mean does not always contain $\SI{68.27}{\percent}$ of the probability-density distribution. This is due to the fact that Gaussian uncertainty propagation neglects higher-order derivatives; since these do not vanish for Eq.~\eqref{eqn:N_tight_fake}, the accuracy of the uncertainty estimate is affected.
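The first-order propagation can be sketched as follows (an illustrative stdlib-Python example with assumed input values, not part of the original analysis); the Monte Carlo cross-check at the end reads off the central $\SI{68.27}{\percent}$ interval, which is in general asymmetric around the central value:

```python
import math
import random

def n_tight_fake(n_l, n_t, eps_r, eps_f):
    # Central-value estimator of Eq. (N_tight_fake).
    return eps_f / (eps_f - eps_r) * (n_t - eps_r * n_l)

def gaussian_sigma(n_l, n_t, eps_r, eps_f, sig_r, sig_f, h=1e-6):
    """First-order (Gaussian) propagation of the efficiency uncertainties;
    the partial derivatives are taken by central finite differences."""
    d_r = (n_tight_fake(n_l, n_t, eps_r + h, eps_f)
           - n_tight_fake(n_l, n_t, eps_r - h, eps_f)) / (2 * h)
    d_f = (n_tight_fake(n_l, n_t, eps_r, eps_f + h)
           - n_tight_fake(n_l, n_t, eps_r, eps_f - h)) / (2 * h)
    return math.hypot(d_r * sig_r, d_f * sig_f)

sigma = gaussian_sigma(1000, 300, eps_r=0.9, eps_f=0.2, sig_r=0.02, sig_f=0.02)

# Monte Carlo cross-check: sample the efficiencies, sort the resulting
# estimates and read off the central 68.27% interval.
random.seed(1)
samples = sorted(
    n_tight_fake(1000, 300, random.gauss(0.9, 0.02), random.gauss(0.2, 0.02))
    for _ in range(100_000)
)
lo = samples[int(0.15865 * len(samples))]
hi = samples[int(0.84135 * len(samples))]
```

Comparing $[lo, hi]$ with the symmetric interval $\pm\sigma$ exposes the effect of the neglected higher-order terms.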
\section{Conclusions}
\label{sec:conclusions}
The estimation of fake-lepton contributions is a key part of analyses that use prompt leptons. Since simulation-based estimations are impractical due to the small fake probabilities, data-driven techniques, such as the matrix method, are used instead.
Alternative approaches have been proposed~\cite{erich_varnes}, which make use of maximum-likelihood estimates that provide reliable and stable fake-lepton estimates. In this paper, the likelihood approach was reformulated in Bayesian reasoning. The formulation was derived from first principles, describing the collection of \textit{loose}\xspace leptons and their migration to the \textit{tight}\xspace region in a probabilistic fashion. The equality of the two approaches has been shown. As an example application, the fake-lepton estimation of a top-quark measurement has been used to illustrate that the original matrix method results in non-negligible probabilities for negative event yields, while the improved method only predicts positive values. In five examples, the two methods have been compared, demonstrating the strengths of the improved method over the original matrix method.
\section{An improved matrix method} \label{sec:statistical_model}
In the following, a maximum-likelihood approach is described, as motivated and explained in more detail in Ref.~\cite{erich_varnes}. The motivation for this method and the Bayesian derivation are presented afterwards. It is shown that the likelihood matrix method and the probabilistic derivation are mathematically identical; they are further linked with \textit{a-priori} knowledge of the likelihood's parameters in a Bayesian model.
\subsection{Likelihood ansatz} \label{sec:statistical_model_likelihood}
The likelihood matrix method as derived in Ref.~\cite{erich_varnes} describes leptons passing the \textit{loose}\xspace criteria as being subsequently divided into two orthogonal groups, referred to as \textit{tight}\xspace and \textit{\mbox{non-tight}}\xspace.
The sum of these two groups is given as the number of events in the \textit{loose}\xspace sample
\begin{equation}
\ensuremath{N_{\mathrm{L}}}\xspace = \ensuremath{N_{\mathrm{T}}}\xspace + \ensuremath{N_{\mathrm{nT}}}\xspace\,.
\end{equation}
It follows directly that the \textit{tight}\xspace events form a subset of the \textit{loose}\xspace events, i.e.~$\ensuremath{N_{\mathrm{T}}}\xspace\leq\ensuremath{N_{\mathrm{L}}}\xspace$. The original matrix-method formalism is used to describe the transition into the \textit{tight}\xspace and \textit{\mbox{non-tight}}\xspace region. Therefore, the entries in both regions are given by
\begin{alignat}{2}
\ensuremath{N_{\mathrm{T}}}\xspace &= N_{\mathrm{T}}^{\mathrm{r}} + N_{\mathrm{T}}^{\mathrm{f}}\label{eqn:Nt_lhmm} &&= \ensuremath{\varepsilon^{\mathrm{r}}}\xspace N_{\mathrm{L}}^{\mathrm{r}} + \ensuremath{\varepsilon^{\mathrm{f}}}\xspace N_{\mathrm{L}}^{\mathrm{f}}\,,\\
\ensuremath{N_{\mathrm{nT}}}\xspace &= \underbrace{\frac{1 - \ensuremath{\varepsilon^{\mathrm{r}}}\xspace}{\ensuremath{\varepsilon^{\mathrm{r}}}\xspace} N_{\mathrm{T}}^{\mathrm{r}} + \frac{1 - \ensuremath{\varepsilon^{\mathrm{f}}}\xspace}{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace} N_{\mathrm{T}}^{\mathrm{f}}\label{eqn:Nnt_lhmm}}_{\textit{tight}\xspace\text{\;frame}} &&= \underbrace{(1 - \ensuremath{\varepsilon^{\mathrm{r}}}\xspace) N_{\mathrm{L}}^{\mathrm{r}} + (1 - \ensuremath{\varepsilon^{\mathrm{f}}}\xspace) N_{\mathrm{L}}^{\mathrm{f}}\vphantom{\frac{1 - \ensuremath{\varepsilon^{\mathrm{r}}}\xspace}{\ensuremath{\varepsilon^{\mathrm{r}}}\xspace} N_{\mathrm{T}}^{\mathrm{r}} + \frac{1 - \ensuremath{\varepsilon^{\mathrm{f}}}\xspace}{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace} N_{\mathrm{T}}^{\mathrm{f}}} }_{\textit{loose}\xspace\text{\;frame}}\,.
\end{alignat}
Since both Eq.~\eqref{eqn:Nt_lhmm} and Eq.~\eqref{eqn:Nnt_lhmm} can be defined in the \textit{tight}\xspace and the \textit{loose}\xspace region, there are two different corresponding parameterisations for a maximum-likelihood approach~\cite{erich_varnes}. The efficiencies are parameterised to follow Gaussian distributions $\mathcal{N}$, hence the likelihood results in
\begin{equation}
p(\ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{N_{\mathrm{nT}}}\xspace, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace | \nu_{\mathrm{T}}, \nu_{\mathrm{nT}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}}) = p(\ensuremath{N_{\mathrm{T}}}\xspace | \nu_{\mathrm{T}}) \cdot p(\ensuremath{N_{\mathrm{nT}}}\xspace | \nu_{\mathrm{nT}}) \cdot \mathcal{N}(\ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \sigma_{\varepsilon^{\mathrm{r}}} | \hat{\varepsilon}^{\mathrm{r}}) \cdot \mathcal{N}(\ensuremath{\varepsilon^{\mathrm{f}}}\xspace, \sigma_{\varepsilon^{\mathrm{f}}} | \hat{\varepsilon}^{\mathrm{f}})\,,
\label{eqn:lhmm_likelihood_1}
\end{equation}
with Poissonian constraints on \ensuremath{N_{\mathrm{T}}}\xspace and \ensuremath{N_{\mathrm{nT}}}\xspace and corresponding estimators $\nu_{\mathrm{T}}$ and $\nu_{\mathrm{nT}}$. In order to simplify the fit, the uncertainties on the efficiencies are assumed to be negligible compared to the Poissonian uncertainties\footnote{It is possible to re-run the fit with variations in the efficiency to estimate a systematic uncertainty for the estimate.}, so the efficiencies are parameterised as $\delta$ distributions in Ref.~\cite{erich_varnes}. The likelihood from Eq.~\eqref{eqn:lhmm_likelihood_1} is then simplified to
\begin{equation}
p(\ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{N_{\mathrm{nT}}}\xspace, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace | \nu_{\mathrm{T}}, \nu_{\mathrm{nT}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}}) = p(\ensuremath{N_{\mathrm{T}}}\xspace | \nu_{\mathrm{T}}) \cdot p(\ensuremath{N_{\mathrm{nT}}}\xspace | \nu_{\mathrm{nT}})\,.
\label{eqn:lhmm_likelihood_2}
\end{equation}
In the following, the motivation of the likelihood matrix method is presented and the equality of the likelihood and Bayesian approaches is shown. The ansatz is made up of two parts, each returning a probability that the lepton is real or fake in the \textit{loose}\xspace and \textit{tight}\xspace regions. Beginning with the \textit{loose}\xspace contribution, the parameters of the likelihood that are used to express the estimators of \textit{loose}\xspace real and fake leptons are referred to as $\nu^{\mathrm{r}}$ and $\nu^{\mathrm{f}}$.
As the event yields are assumed to follow a Poisson distribution, the likelihood for the real and fake-event yields is given by
\begin{equation}
p(N_{\mathrm{L}}^{\mathrm{r/f}} | \nu^{\mathrm{r/f}}) = \mathrm{Poisson}(N_{\mathrm{L}}^{\mathrm{r/f}} | \nu^{\mathrm{r/f}})\,.
\end{equation}
Since neither $N_{\mathrm{L}}^{\mathrm{r}}$ nor $N_{\mathrm{L}}^{\mathrm{f}}$ are measured, but only their sum \ensuremath{N_{\mathrm{L}}}\xspace, the likelihood for the \textit{loose}\xspace region must contain a sum over the possible splits of \ensuremath{N_{\mathrm{L}}}\xspace into $N_{\mathrm{L}}^{\mathrm{r}}$ and $N_{\mathrm{L}}^{\mathrm{f}}$:
\begin{equation}
p(\ensuremath{N_{\mathrm{L}}}\xspace | \nu^{\mathrm{r}}, \nu^{\mathrm{f}}) = \sum_{x = 0}^{\ensuremath{N_{\mathrm{L}}}\xspace} p(\ensuremath{N_{\mathrm{L}}}\xspace - x | \nu^{\mathrm{r}}) \cdot p(x | \nu^{\mathrm{f}})\,.
\end{equation}
The sum index $x$ runs over all possible values of $N_{\mathrm{L}}^{\mathrm{f}}$, from $0$ up to \ensuremath{N_{\mathrm{L}}}\xspace.
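The sum over splits is the convolution of two Poisson distributions, which is again a Poisson distribution with mean $\nu^{\mathrm{r}} + \nu^{\mathrm{f}}$. This can be checked numerically with a short stdlib-Python sketch (function names and values illustrative):

```python
import math

def poisson(k, mu):
    # Poisson probability mass function.
    return math.exp(-mu) * mu**k / math.factorial(k)

def p_loose(n_l, nu_r, nu_f):
    # Sum over all splits of N_L into real (n_l - x) and fake (x) loose leptons.
    return sum(poisson(n_l - x, nu_r) * poisson(x, nu_f) for x in range(n_l + 1))

# The convolution of two Poissons is a Poisson with the summed mean:
assert abs(p_loose(12, nu_r=5.0, nu_f=3.0) - poisson(12, 8.0)) < 1e-12
```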
Each \textit{loose}\xspace lepton either passes or fails the \textit{tight}\xspace requirements, so the migration from \textit{loose}\xspace to \textit{tight}\xspace is a counting of successes. Therefore, a binomial distribution is used as the probability density function. The success probability is given by the efficiencies \ensuremath{\varepsilon^{\mathrm{r}}}\xspace and \ensuremath{\varepsilon^{\mathrm{f}}}\xspace for real and fake leptons, respectively. The corresponding estimators for the efficiencies are denoted $\hat{\varepsilon}^i$. The process is described via
\begin{equation}
p(N_{\mathrm{T}}^{\mathrm{r/f}} | \hat{\varepsilon}^{\mathrm{r/f}}, N_{\mathrm{L}}^{\mathrm{r/f}}) = \mathrm{Binomial}(N_{\mathrm{T}}^{\mathrm{r/f}} | \hat{\varepsilon}^{\mathrm{r/f}}, N_{\mathrm{L}}^{\mathrm{r/f}})\,.
\end{equation}
The number of \textit{loose}\xspace real and fake leptons entering the binomial distribution is the outcome of the Poisson distribution. Again, only the overall number of \textit{tight}\xspace leptons can be measured, and hence a second sum must be used to obtain the total probability for a set of data $\mathcal{D} = (\ensuremath{N_{\mathrm{L}}}\xspace, \ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace)$ given the parameter vector $\boldsymbol{\theta} = (\nu^{\mathrm{r}}, \nu^{\mathrm{f}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}})$:
\begin{equation}
p(\mathcal{D} | \boldsymbol{\theta}) = \sum_{x = 0}^{\ensuremath{N_{\mathrm{L}}}\xspace} \sum_{y = y_{\mathrm{min}}}^{y_{\mathrm{max}}} p(\ensuremath{N_{\mathrm{L}}}\xspace - x | \nu^{\mathrm{r}}) \cdot p(x | \nu^{\mathrm{f}}) \cdot p(\ensuremath{N_{\mathrm{T}}}\xspace - y | \hat{\varepsilon}^{\mathrm{r}}, \ensuremath{N_{\mathrm{L}}}\xspace - x) \cdot p(y | \hat{\varepsilon}^{\mathrm{f}}, x)\,.
\label{eqn:likelihood_first_principles}
\end{equation}
Since the binomial terms $p(\ensuremath{N_{\mathrm{T}}}\xspace - y | \hat{\varepsilon}^{\mathrm{r}}, \ensuremath{N_{\mathrm{L}}}\xspace - x)$ and $p(y | \hat{\varepsilon}^{\mathrm{f}}, x)$ describe the migration from the \textit{loose}\xspace to the \textit{tight}\xspace region, they are only affected by the measured values given the probability estimators $\hat{\varepsilon}^{\mathrm{r/f}}$.
The sum index $y$ counts up all possible values of \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace. The upper limit $y_{\mathrm{max}}$ reflects that $N_{\mathrm{T}}^{\mathrm{r/f}}$ can be greater than neither \ensuremath{N_{\mathrm{T}}}\xspace nor $N_{\mathrm{L}}^{\mathrm{r/f}}$, i.e.~the range is given as
\begin{equation}
N_{\mathrm{T}}^{\mathrm{r/f}} \leq \mathrm{min}(\ensuremath{N_{\mathrm{T}}}\xspace, N_{\mathrm{L}}^{\mathrm{r/f}}) = y_{\mathrm{max}}\,.
\label{eqn:sum_index_1}
\end{equation}
Since $N_{\mathrm{L/T}}^{\mathrm{r}} = N_{\mathrm{L/T}} - N_{\mathrm{L/T}}^{\mathrm{f}}$, Eq.~\eqref{eqn:sum_index_1} can be expressed as
\begin{equation}
N_{\mathrm{T}}^{\mathrm{r}} \leq \mathrm{min}(\ensuremath{N_{\mathrm{T}}}\xspace, N_{\mathrm{L}}^{\mathrm{r}}) = \mathrm{min}(\ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{N_{\mathrm{L}}}\xspace - N_{\mathrm{L}}^{\mathrm{f}})\,.
\label{eqn:sum_index_2}
\end{equation}
From Eq.~\eqref{eqn:sum_index_2}, \ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace has to be greater than or equal to $\ensuremath{N_{\mathrm{T}}}\xspace - \mathrm{min}(\ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{N_{\mathrm{L}}}\xspace - N_{\mathrm{L}}^{\mathrm{f}})$. This relation is equivalent to
\begin{equation}
\ensuremath{N_{\mathrm{T}}^{\mathrm{f}}}\xspace \geq \mathrm{max}(0, \ensuremath{N_{\mathrm{T}}}\xspace - \ensuremath{N_{\mathrm{L}}}\xspace + \underbrace{N_{\mathrm{L}}^{\mathrm{f}}}_{= x}) = y_{\mathrm{min}}\,.
\end{equation}
Since $y$ counts up the \textit{tight}\xspace fake leptons, the sum indices $y_{\mathrm{min}}$ and $y_{\mathrm{max}}$ are defined as
\begin{align}
\begin{split}
y_{\mathrm{min}} &= \mathrm{max}(0, \ensuremath{N_{\mathrm{T}}}\xspace - \ensuremath{N_{\mathrm{L}}}\xspace + x)\,,\\
y_{\mathrm{max}} &= \mathrm{min}(\ensuremath{N_{\mathrm{T}}}\xspace, x)\,.
\end{split}
\end{align}
Though Eq.~\eqref{eqn:lhmm_likelihood_2} and Eq.~\eqref{eqn:likelihood_first_principles} were derived with different approaches, they are in fact equivalent, as proven in Appendix~\ref{sec:appendix_proof}.
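The equivalence can also be verified numerically. The following stdlib-Python sketch (illustrative, not the implementation used later in the paper; input values are arbitrary) evaluates both Eq.~\eqref{eqn:likelihood_first_principles} and the \textit{loose}\xspace parameterisation of Eq.~\eqref{eqn:lhmm_likelihood_2} and checks that they agree:

```python
import math

def poisson(k, mu):
    return math.exp(-mu) * mu**k / math.factorial(k)

def binomial(k, p, n):
    if not 0 <= k <= n:
        return 0.0
    return math.comb(n, k) * p**k * (1.0 - p) ** (n - k)

def likelihood_first_principles(n_l, n_t, nu_r, nu_f, eps_r, eps_f):
    """Double-sum likelihood: Poisson splits of N_L times binomial migrations."""
    total = 0.0
    for x in range(n_l + 1):               # x = number of loose fake leptons
        y_min = max(0, n_t - n_l + x)
        y_max = min(n_t, x)
        for y in range(y_min, y_max + 1):  # y = number of tight fake leptons
            total += (poisson(n_l - x, nu_r) * poisson(x, nu_f)
                      * binomial(n_t - y, eps_r, n_l - x)
                      * binomial(y, eps_f, x))
    return total

def likelihood_lhmm(n_l, n_t, nu_r, nu_f, eps_r, eps_f):
    """Product of two Poissons, loose parameterisation of the LHMM."""
    nu_t = eps_r * nu_r + eps_f * nu_f
    nu_nt = (1 - eps_r) * nu_r + (1 - eps_f) * nu_f
    return poisson(n_t, nu_t) * poisson(n_l - n_t, nu_nt)

a = likelihood_first_principles(30, 12, nu_r=18.0, nu_f=9.0, eps_r=0.8, eps_f=0.2)
b = likelihood_lhmm(30, 12, nu_r=18.0, nu_f=9.0, eps_r=0.8, eps_f=0.2)
assert abs(a - b) < 1e-9 * b
```

The agreement holds for any choice of inputs, mirroring the algebraic proof in the appendix.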
\subsection{Bayesian model} \label{sec:Bayesian_model}
Prior knowledge is an essential part of Bayesian inference. Physical knowledge flows into the choice of the prior in order to obtain an appropriate posterior distribution.
Thus this choice leads to a certain degree of subjectivity.
It was shown that both likelihoods are mathematically identical; therefore, the formalism of the likelihood matrix method (see Eq.~\eqref{eqn:lhmm_likelihood_2}) is used due to its smaller computation time compared to the likelihood with the two sums in Eq.~\eqref{eqn:likelihood_first_principles}.
As mentioned before, the efficiencies are chosen to follow $\delta$ distributions in Ref.~\cite{erich_varnes}. Since the formulation of the model in Eq.~\eqref{eqn:likelihood_first_principles} considers the uncertainties as free parameters of the model, they are constrained by the prior below.
The prior distributions in the case of fake-lepton estimation are chosen to be as uninformative as possible, but at the same time physical. Here, $\nu^{\mathrm{r}}$ and $\nu^{\mathrm{f}}$ follow uniform distributions with range $[0, n_{\nu}\cdot\ensuremath{N_{\mathrm{L}}}\xspace]$, since there is a small but non-vanishing probability for $\nu^{\mathrm{r/f}}$ to be larger than \ensuremath{N_{\mathrm{L}}}\xspace. The concrete value of the scale factor $n_{\nu}$ for the prior range can be chosen specifically for each use case to account for the full posterior probability density of the parameters $\nu^{\mathrm{r/f}}$.
The efficiencies are assumed to follow a truncated Normal distribution~\cite{Robert_1995} $\mathcal{N}_{\!\text{tr}}$ from $a = 0$ to $b = 1$ with mean $\varepsilon$ and standard deviation $\sigma_{\varepsilon}$, defined by
\begin{equation}
\mathcal{N}_{\!\text{tr}}(x \,|\, \varepsilon, \sigma_{\varepsilon}) = \frac{\phi\left( \frac{x - \varepsilon}{\sigma_{\varepsilon}} \right)}{\mathrm{\Phi}\left(\frac{b - \varepsilon}{\sigma_{\varepsilon}}\right) - \mathrm{\Phi}\left(\frac{a - \varepsilon}{\sigma_{\varepsilon}}\right)}\,,
\label{eqn:truncated_normal}
\end{equation}
where the probability density function $\phi(\alpha)$ and the cumulative distribution function $\mathrm{\Phi}(\beta)$ are defined by
\begin{align}
\begin{split}
\phi(\alpha) &= \frac{1}{\sqrt{2\pi\sigma_{\varepsilon}^2}}\cdot\mathrm{e}^{-\frac{\alpha^2}{2}}\,,\\
\mathrm{\Phi}(\beta) &= \frac{1}{2}\cdot\left(1 + \mathrm{erf}\left(\frac{\beta}{\sqrt{2}}\right)\right)\,.
\end{split}
\end{align}
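The truncated normal density can be sketched in a few lines of stdlib Python (illustrative; note that, following the convention above, $\phi$ carries the $1/\sigma_{\varepsilon}$ normalisation). A midpoint Riemann sum checks that the density integrates to one on the truncation interval:

```python
import math

def trunc_normal_pdf(x, eps, sigma, a=0.0, b=1.0):
    """Normal density with mean eps and width sigma, restricted and
    renormalised to the interval [a, b] (Eq. truncated_normal)."""
    if not a <= x <= b:
        return 0.0
    # phi here includes the 1/sigma factor, as in the text's convention.
    phi = math.exp(-((x - eps) / sigma) ** 2 / 2) / math.sqrt(2 * math.pi * sigma**2)
    cdf = lambda z: 0.5 * (1 + math.erf((z - eps) / (sigma * math.sqrt(2))))
    return phi / (cdf(b) - cdf(a))

# Normalisation check with a midpoint Riemann sum over [0, 1]:
n = 20000
integral = sum(trunc_normal_pdf((i + 0.5) / n, eps=0.85, sigma=0.1) / n
               for i in range(n))
assert abs(integral - 1.0) < 1e-6
```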
Other choices for the prior distributions can of course be made.
To develop the likelihood matrix method as a Bayesian model, the likelihood is parameterised according to Eq.~\eqref{eqn:Nt_lhmm} and Eq.~\eqref{eqn:Nnt_lhmm} and the priors for the efficiencies follow the truncated Normal distributions. This results in two possible parameterisations
\begin{align}
\begin{split}
\boldsymbol{\theta}_{\mathrm{L}}^{\mathrm{LHMM}} &= (\nu^{\mathrm{r}}, \nu^{\mathrm{f}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}}) = \boldsymbol{\theta}\,, \\
\boldsymbol{\theta}_{\mathrm{T}}^{\mathrm{LHMM}} &= (\nu_{\mathrm{T}}^{\mathrm{r}}, \nu_{\mathrm{T}}^{\mathrm{f}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}})\,, \\
\end{split}
\end{align}
which refer to the parameters in the likelihood in Eq.~\eqref{eqn:lhmm_likelihood_2}.
With the additional consideration of uncertainties in the efficiencies, the final likelihood is updated to
\begin{equation}
p(\ensuremath{N_{\mathrm{T}}}\xspace, \ensuremath{N_{\mathrm{nT}}}\xspace, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace | \nu_{\mathrm{T}}, \nu_{\mathrm{nT}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}}) = p(\ensuremath{N_{\mathrm{T}}}\xspace | \nu_{\mathrm{T}}) p(\ensuremath{N_{\mathrm{nT}}}\xspace | \nu_{\mathrm{nT}})\mathcal{N}_{\!\text{tr}}(\ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \sigma_{\ensuremath{\varepsilon^{\mathrm{r}}}\xspace} | \hat{\varepsilon}^{\mathrm{r}})\mathcal{N}_{\!\text{tr}}(\ensuremath{\varepsilon^{\mathrm{f}}}\xspace, \sigma_{\ensuremath{\varepsilon^{\mathrm{f}}}\xspace} | \hat{\varepsilon}^{\mathrm{f}})\,.
\label{eqn:lhmm_likelihood_3}
\end{equation}
The count rate estimators $\nu$ are defined according to Eq.~\eqref{eqn:Nt_lhmm} and Eq.~\eqref{eqn:Nnt_lhmm} as
\begin{alignat}{2}
\nu_{\mathrm{T}} &= \nu_{\mathrm{T}}^{\mathrm{r}} + \nu_{\mathrm{T}}^{\mathrm{f}} &&= \hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}} + \hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}}\,,\\
\nu_{\mathrm{nT}} &= \underbrace{\frac{1 - \hat{\varepsilon}^{\mathrm{r}}}{\hat{\varepsilon}^{\mathrm{r}}} \nu_{\mathrm{T}}^{\mathrm{r}} + \frac{1 - \hat{\varepsilon}^{\mathrm{f}}}{\hat{\varepsilon}^{\mathrm{f}}} \nu_{\mathrm{T}}^{\mathrm{f}}}_{\boldsymbol{\theta}_{\mathrm{T}}^{\mathrm{LHMM}} \text{\,parameterisation}} &&= \underbrace{(1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}} + (1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}} \vphantom{\frac{1 - \hat{\varepsilon}^{\mathrm{r}}}{\hat{\varepsilon}^{\mathrm{r}}} \nu_{\mathrm{T}}^{\mathrm{r}} + \frac{1 - \hat{\varepsilon}^{\mathrm{f}}}{\hat{\varepsilon}^{\mathrm{f}}} \nu_{\mathrm{T}}^{\mathrm{f}}} }_{\boldsymbol{\theta}_{\mathrm{L}}^{\mathrm{LHMM}}\text{\,parameterisation}}\,.
\end{alignat}
Since the \textit{loose}\xspace parameterisation, indicated with the subscript $_{\mathrm{L}}$, is identical to the parameterisation of the likelihood in Eq.~\eqref{eqn:likelihood_first_principles} and their equality was shown before, the \textit{loose}\xspace parameterisation is chosen for further studies.
\par\bigskip
Bayesian inference now returns probabilistic statements on the parameters $\boldsymbol{\theta}$ by considering the data $\mathcal{D}$. These statements result from Bayes' theorem~\cite{Bayes_theorem_original}
\begin{equation}
p(\boldsymbol{\theta}|\mathcal{D}) = \frac{\overbrace{p(\mathcal{D}|\boldsymbol{\theta})}^{\text{Likelihood}} \overbrace{p(\boldsymbol{\theta})}^{\text{Prior}}}{\underbrace{\int p(\mathcal{D}|\boldsymbol{\theta}) p(\boldsymbol{\theta})\,\text{d}\boldsymbol{\theta}}_{\text{Evidence}}}\,.
\end{equation}
Knowledge about an individual parameter is obtained from the multidimensional posterior probability distribution~\cite{schulz2020batjl} by integrating over all nuisance parameters
\begin{equation}
p(\theta_i | \mathcal{D}) = \int p(\boldsymbol{\theta} | \mathcal{D}) \prod_{j\neq i}\,\text{d}\theta_j\,.
\end{equation}
The posterior probability for the \textit{tight}\xspace fake yield results from the product of the two marginalised distributions $p(\nu^{\mathrm{f}} | \mathcal{D})$ and $p(\hat{\varepsilon}^{\mathrm{f}} | \mathcal{D})$
\begin{equation}
p(\nu_{\mathrm{T}}^{\mathrm{f}} | \mathcal{D}) = p(\nu^{\mathrm{f}} | \mathcal{D}) \cdot p(\hat{\varepsilon}^{\mathrm{f}} | \mathcal{D})\,.
\label{eqn:nu_tight_fake}
\end{equation}
\subsection{Implementation in the Bayesian Analysis Toolkit}
\label{sec:implementation}
The posterior probability density is built in the \texttt{julia}~\cite{Julia-2017} package \texttt{BAT.jl}~\cite{schulz2020batjl}. It allows for the implementation of statistical models in a Bayesian framework and the inference of their free parameters. It provides a toolkit with numerical algorithms for sampling, optimisation and integration.
BAT.jl currently offers a choice of two main Markov chain Monte Carlo (MCMC) algorithms in addition to three importance samplers. The MCMC algorithms (Metropolis-Hastings~\cite{Metropolis:1953am, Hastings} and Hamiltonian Monte Carlo~\cite{Hamiltonian_MC}) are well suited for high-dimensional parameter-space sampling, while the importance samplers are an easy and fast-to-use alternative in low-dimensional parameter spaces. They are called via the \texttt{bat\_sample} function and provide adjustable, algorithm-specific arguments. Applied to the posterior density for estimating fake-event yields in the \textit{tight}\xspace region, the Metropolis-Hastings algorithm is the algorithm of choice for $\ensuremath{N_{\mathrm{L}}}\xspace > 50$, since the importance samplers provide too few samples to fully explore the parameter space with sufficient resolution.
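The sampling step can be illustrated with a minimal, self-contained Metropolis-Hastings sketch in stdlib Python (the actual analysis uses BAT.jl; the input values, step size and chain length here are illustrative, and the efficiencies are held fixed, corresponding to $\delta$ priors):

```python
import math
import random

def log_poisson(k, mu):
    # Log of the Poisson probability mass function.
    return k * math.log(mu) - mu - math.lgamma(k + 1)

def log_posterior(nu_r, nu_f, n_l, n_t, eps_r, eps_f, n_nu=3.0):
    # Uniform priors on [0, n_nu * N_L] for both rate parameters.
    if not (0.0 < nu_r < n_nu * n_l and 0.0 < nu_f < n_nu * n_l):
        return -math.inf
    nu_t = eps_r * nu_r + eps_f * nu_f
    nu_nt = (1 - eps_r) * nu_r + (1 - eps_f) * nu_f
    return log_poisson(n_t, nu_t) + log_poisson(n_l - n_t, nu_nt)

def metropolis_hastings(n_l, n_t, eps_r, eps_f, n_steps=20000, step=5.0, seed=7):
    rng = random.Random(seed)
    nu_r, nu_f = 0.7 * n_l, 0.3 * n_l     # arbitrary starting point
    lp = log_posterior(nu_r, nu_f, n_l, n_t, eps_r, eps_f)
    chain = []
    for _ in range(n_steps):
        prop_r = nu_r + rng.gauss(0.0, step)   # symmetric Gaussian proposal
        prop_f = nu_f + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop_r, prop_f, n_l, n_t, eps_r, eps_f)
        if math.log(rng.random()) < lp_prop - lp:
            nu_r, nu_f, lp = prop_r, prop_f, lp_prop
        chain.append((nu_r, nu_f))
    return chain[n_steps // 2:]                # drop burn-in

chain = metropolis_hastings(n_l=1000, n_t=950, eps_r=0.9, eps_f=0.2)
fakes = [0.2 * nu_f for _, nu_f in chain]      # tight fake yield eps_f * nu_f
```

For these inputs the original estimator of Eq.~\eqref{eqn:N_tight_fake} returns a negative yield, whereas every posterior sample of $\hat{\varepsilon}^{\mathrm{f}}\nu^{\mathrm{f}}$ is positive by construction of the prior support.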
\section{Introduction}
\label{sec:intro}
At hadron colliders, the detector signatures of prompt high-energy electrons and muons can be mimicked by other objects (so-called ``fake leptons''), in particular by particles that are produced in jets. These are for example non-prompt leptons, such as leptons from semileptonic $B$-meson decays, or---for electron fakes---jets with a high electromagnetic fraction and converted photons. The fake-lepton background contribution is difficult to estimate from simulations due to the small misidentification probabilities, meaning that a prohibitively large number of simulated events would need to be generated. It is also difficult to accurately model the fake-lepton contribution in the simulation. Therefore, data-driven techniques are often used instead. Several data-driven methods have been developed, including the matrix method~\cite{PhysRevD.76.092007, Aad:2001975, Aad:1312983}, the ABCD method~\cite{PhysRevD.44.29, Aad:1753190, Aaboud:2299430} and the fake-factor method~\cite{Aaboud:2631950, Aad:2750530, Aaboud:2640802}. In analyses, which often search for signal events, the number of estimated background events, including contributions from fake leptons, is subtracted from the observed event counts. Any bias in the fake-lepton estimation will therefore affect the estimate of signal events, especially if the number of observed events is small. In such cases, it is crucial to have an accurate statistical model to correctly estimate the uncertainty and avoid any bias.
Motivated by limitations of the matrix method, in particular the possibility that the predicted fake-lepton event yields can be negative, a maximum-likelihood approach was proposed in Ref.~\cite{erich_varnes}. In this paper, this improved method is reformulated in Bayesian reasoning, the equality of both approaches is shown and both formulations are implemented in the Bayesian Analysis Toolkit (BAT)~\cite{schulz2020batjl}, a multi-purpose software package for Bayesian inference.
The paper is structured as follows: In Section~\ref{sec:classical_matrix_method}, the matrix method is introduced in its original form, and its limitations are briefly presented in Section~\ref{sec:error_propagation}. The likelihood ansatz and the Bayesian ansatz are introduced and the equality of the two approaches is shown (Sections~\ref{sec:statistical_model_likelihood}--\ref{sec:Bayesian_model}). The implementation in BAT is described in Section~\ref{sec:implementation}. The paper closes with a discussion of a concrete physics example and a study of several limit cases where the original matrix method has problems that are addressed by the improved ansatz (Section~\ref{sec:comparison}). Conclusions are presented in Section~\ref{sec:conclusions}.
\section{Appendix}
\subsection{Equality of the two approaches}\label{sec:appendix_proof}
\label{sec:equality}
In the following paragraph, it is proven that the two likelihoods from Eq.~\eqref{eqn:likelihood_first_principles} and Eq.~\eqref{eqn:lhmm_likelihood_2} are mathematically identical in the \textit{loose}\xspace frame of the likelihood matrix method.
Starting with the binomial theorem
\begin{equation}
(x + y)^n = \sum_{k = 0}^{n} \binom{n}{k}x^{n-k}y^k\,,
\label{eqn:binomial_theorem}
\end{equation}
the relation is applied to Eq.~\eqref{eqn:Nt_lhmm} and Eq.~\eqref{eqn:Nnt_lhmm}, resulting in
\begin{align}
\begin{split}
p(\ensuremath{N_{\mathrm{T}}}\xspace\,|\,\hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}} + \hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}}) &= \frac{(\hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}} + \hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}})^{\ensuremath{N_{\mathrm{T}}}\xspace}}{\ensuremath{N_{\mathrm{T}}}\xspace!} \text{e}^{-(\hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}} + \hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}})} \\
&\stackrel{\eqref{eqn:binomial_theorem}}{=} \sum_{k = 0}^{\ensuremath{N_{\mathrm{T}}}\xspace}\binom{\ensuremath{N_{\mathrm{T}}}\xspace}{k} (\hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}})^{\ensuremath{N_{\mathrm{T}}}\xspace - k} (\hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}})^{k}\,\frac{\text{e}^{-(\hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}} + \hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}})}}{\ensuremath{N_{\mathrm{T}}}\xspace!}\,, \\
p(N_{\text{nT}}\,|\,(1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}} + (1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}}) &= \frac{((1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}} + (1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}})^{N_{\text{L}} - \ensuremath{N_{\mathrm{T}}}\xspace}}{(N_{\text{L}} - \ensuremath{N_{\mathrm{T}}}\xspace)!}\text{e}^{-((1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}} + (1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}})} \\
&\stackrel{\eqref{eqn:binomial_theorem}}{=} \sum_{n = 0}^{N_{\text{L}} - \ensuremath{N_{\mathrm{T}}}\xspace} \binom{N_{\text{L}} - \ensuremath{N_{\mathrm{T}}}\xspace}{n} ((1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}})^{N_{\text{L}} - \ensuremath{N_{\mathrm{T}}}\xspace - n} \\
&\hspace{1.9cm}\cdot((1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}})^n\,\frac{\text{e}^{-((1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}} + (1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}})}}{(N_{\text{L}} - \ensuremath{N_{\mathrm{T}}}\xspace)!}\,,
\end{split}
\label{eqn:derivation_1}
\end{align}
with the corresponding parameterisations of \ensuremath{N_{\mathrm{T}}}\xspace and \ensuremath{N_{\mathrm{nT}}}\xspace. More details on the parameterisation are presented in Section~\ref{sec:Bayesian_model}.
Since both sums are independent of each other, the terms in Eq.~\eqref{eqn:derivation_1} can be multiplied to obtain Eq.~\eqref{eqn:lhmm_likelihood_2}
\begin{align}
\begin{split}
p(N_{\mathrm{T}}, N_{\mathrm{nT}}, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace |\nu^{\mathrm{r}}, \nu^{\mathrm{f}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}}) &= \sum_{n = 0}^{N_{\text{L}} - N_{\text{T}}} \sum_{k = 0}^{N_{\text{T}}} \binom{N_{\text{T}}}{k}\binom{N_{\text{L}} - N_{\text{T}}}{n}(\hat{\varepsilon}^{\mathrm{r}} \nu^{\mathrm{r}})^{N_{\text{T}} - k} (\hat{\varepsilon}^{\mathrm{f}} \nu^{\mathrm{f}})^k \\
&\hspace{1.2cm}\cdot\frac{((1 - \hat{\varepsilon}^{\mathrm{r}})\nu^{\mathrm{r}})^{N_{\text{L}} - N_{\text{T}} - n} ((1 - \hat{\varepsilon}^{\mathrm{f}})\nu^{\mathrm{f}})^n \mathrm{e}^{-\nu^{\mathrm{r}} - \nu^{\mathrm{f}}}}{N_{\text{T}}! (N_{\text{L}} - N_{\text{T}})!} \\
&= \sum_{n = 0}^{N_{\text{L}} - N_{\text{T}}} \sum_{k = 0}^{N_{\text{T}}} (\hat{\varepsilon}^{\mathrm{r}})^{N_{\text{T}} - k}(\nu^{\mathrm{r}})^{N_{\text{L}} - n - k} (1 - \hat{\varepsilon}^{\mathrm{r}})^{N_{\text{L}} - N_{\text{T}} - n} \\
&\hspace{1.2cm}\cdot\frac{(1 - \hat{\varepsilon}^{\mathrm{f}})^n (\hat{\varepsilon}^{\mathrm{f}})^k (\nu^{\mathrm{f}})^{n+k} \mathrm{e}^{-\nu^{\mathrm{r}} - \nu^{\mathrm{f}}}}{k!\,n!\,(N_{\text{T}} - k)! (N_{\text{L}} - N_{\text{T}} - n)!}\,.
\end{split}
\label{eqn:derivation_2}
\end{align}
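That the double sum in Eq.~\eqref{eqn:derivation_2} indeed equals the product of the two Poisson terms can be verified numerically. The sketch below uses arbitrary placeholder values for the rates, efficiencies, and counts (assumptions, not from the text):

```python
from math import exp, factorial

# Placeholder parameter values (assumptions, not from the text).
nur, nuf = 2.0, 1.5      # nu^r, nu^f
er, ef = 0.8, 0.3        # hat-epsilon^r, hat-epsilon^f
NL, NT = 7, 4            # N_L, N_T

# Product of the two Poisson terms (as in the expansion above).
mu_T = er * nur + ef * nuf
mu_nT = (1 - er) * nur + (1 - ef) * nuf
product = (mu_T ** NT / factorial(NT) * exp(-mu_T)) \
    * (mu_nT ** (NL - NT) / factorial(NL - NT) * exp(-mu_nT))

# Double sum over n and k, second form of the derivation.
total = 0.0
for n in range(NL - NT + 1):
    for k in range(NT + 1):
        total += (er ** (NT - k) * nur ** (NL - n - k)
                  * (1 - er) ** (NL - NT - n)
                  * (1 - ef) ** n * ef ** k * nuf ** (n + k)
                  * exp(-nur - nuf)
                  / (factorial(k) * factorial(n)
                     * factorial(NT - k) * factorial(NL - NT - n)))

assert abs(product - total) < 1e-12
```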
Eq.~\eqref{eqn:derivation_2} can be compared to the likelihood resulting from the considerations in Eq.~\eqref{eqn:likelihood_first_principles}
\begin{align}
\begin{split}
p(N_{\mathrm{L}}, N_{\mathrm{T}}, \ensuremath{\varepsilon^{\mathrm{r}}}\xspace, \ensuremath{\varepsilon^{\mathrm{f}}}\xspace | \nu^{\mathrm{r}}, \nu^{\mathrm{f}}, \hat{\varepsilon}^{\mathrm{r}}, \hat{\varepsilon}^{\mathrm{f}}) &= \sum_{x = 0}^{N_{\text{L}}} \sum_{y = y_{\text{max}}}^{y_{\text{min}}} \binom{x}{y}\binom{N_{\text{L}} - x}{N_{\text{T}} - y} (\hat{\varepsilon}^{\mathrm{r}})^{N_{\text{T}} - y}(\nu^{\mathrm{r}})^{N_{\text{L}} - x} (\hat{\varepsilon}^{\mathrm{f}})^y (\nu^{\mathrm{f}})^x \\
&\hspace{1.2cm} \cdot\frac{(1 - \hat{\varepsilon}^{\mathrm{r}})^{N_{\text{L}} - N_{\text{T}} - x + y} (1 - \hat{\varepsilon}^{\mathrm{f}})^{x - y} \mathrm{e}^{-\nu^{\mathrm{r}} - \nu^{\mathrm{f}}}}{x! (N_{\text{L}} - x)!} \\
&= \sum_{x = 0}^{N_{\text{L}}} \sum_{y = y_{\text{max}}}^{y_{\text{min}}} (\hat{\varepsilon}^{\mathrm{r}})^{N_{\text{T}} - y}(\nu^{\mathrm{r}})^{N_{\text{L}} - x}(1 - \hat{\varepsilon}^{\mathrm{r}})^{N_{\text{L}} - N_{\text{T}} - x + y} \\
&\hspace{1.2cm} \cdot\frac{(1 - \hat{\varepsilon}^{\mathrm{f}})^{x - y} (\hat{\varepsilon}^{\mathrm{f}})^y (\nu^{\mathrm{f}})^x \mathrm{e}^{-\nu^{\mathrm{r}} - \nu^{\mathrm{f}}}}{y!\,(x-y)!(N_{\text{T}} - y)! (N_{\text{L}} - N_{\text{T}} - x + y)!}\,.
\end{split}
\label{eqn:derivation_3}
\end{align}
The comparison of the coefficients and the corresponding exponents of Eq.~\eqref{eqn:derivation_2} and Eq.~\eqref{eqn:derivation_3} leads to
\begin{align}
\begin{split}
y &= k\,,\\
n &= x - y \Leftrightarrow x = n + k\,.
\end{split}
\label{statmod/eqn:x_and_y}
\end{align}
With these substitutions, the limits of the sums in Eq.~\eqref{eqn:derivation_3} can be transformed. The first binomial coefficient $\binom{x}{y}$ becomes
\begin{equation}
\binom{x}{y} \stackrel{\eqref{statmod/eqn:x_and_y}}{\to} \binom{n + k}{k}\,.
\end{equation}
Since the binomial coefficient requires $n + k \geq k$~\cite{binomial_coefficient}, the lower limit for $n$ follows as
\begin{equation}
n \geq 0\,.
\end{equation}
The upper limit for $n$ is derived from the second binomial coefficient $\binom{N_{\mathrm{L}} - x}{N_{\mathrm{T}} - y}$, which becomes
\begin{equation}
\binom{N_{\mathrm{L}} - x}{N_{\mathrm{T}} - y} \stackrel{\eqref{statmod/eqn:x_and_y}}{\to} \binom{N_{\mathrm{L}} - n - k}{N_{\mathrm{T}} - k}\,.
\end{equation}
Using the same reasoning, this implies $N_{\mathrm{L}} - n - k \geq N_{\mathrm{T}} - k$, which leads to $n \leq N_{\mathrm{L}} - N_{\mathrm{T}}$.
The sum over $y$ is transformed using the bounds on $n$ derived above. The lower limit follows as
\begin{align}
\begin{split}
y = k &= \max(0, n + k + N_{\mathrm{T}} - N_{\mathrm{L}}) \quad|\;n\;\text{cannot be larger than}\;N_{\mathrm{L}} - N_{\mathrm{T}} \\
k &= \max(0, k) \\
&\Rightarrow k \geq 0\,,
\end{split}
\end{align}
while the upper limit follows as
\begin{align}
\begin{split}
y = k &= \min(n + k, N_{\mathrm{T}}) \quad|\;n\;\text{cannot be less than}\;0 \\
k &= \min(k, N_{\mathrm{T}}) \\
&\Rightarrow k \leq N_{\mathrm{T}}\,.
\end{split}
\end{align}
Therefore, the ranges of $n$ and $k$ are defined by $0 \leq n \leq N_{\mathrm{L}} - N_{\mathrm{T}}$ and $0 \leq k \leq N_{\mathrm{T}}$, respectively, as given in Eq.~\eqref{eqn:derivation_2}.
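The equivalence of the two double sums under the index substitution $y = k$, $x = n + k$ (including the transformed limits) can be confirmed numerically. A sketch with arbitrary placeholder parameters (assumptions, not from the text):

```python
from math import exp, factorial

# Placeholder parameter values (assumptions, not from the text).
nur, nuf = 2.0, 1.5
er, ef = 0.8, 0.3
NL, NT = 7, 4

def term(y, x):
    """Summand of the (x, y)-parameterised double sum."""
    return (er ** (NT - y) * nur ** (NL - x)
            * (1 - er) ** (NL - NT - x + y)
            * (1 - ef) ** (x - y) * ef ** y * nuf ** x
            * exp(-nur - nuf)
            / (factorial(y) * factorial(x - y)
               * factorial(NT - y) * factorial(NL - NT - x + y)))

# Sum over x and y with the max/min limits.
s3 = sum(term(y, x)
         for x in range(NL + 1)
         for y in range(max(0, x + NT - NL), min(x, NT) + 1))

# Sum over n and k after substituting x = n + k, y = k.
s2 = sum(term(k, n + k)
         for n in range(NL - NT + 1)
         for k in range(NT + 1))

assert abs(s2 - s3) < 1e-12
```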
\section{Extended version of the calculations at pp.~525--531}
If we multiply the first of the tight-binding equations (140) by
$g(\vec r-\vec R_A)e^{-i \vec K\cdot \vec R_A}$
and we sum it over $\vec R_A$, we find
\begin{eqnarray}
&& E\,\sum_{\vec R_A} g(\vec r-\vec R_A) F_A^{\vec K}(\vec R_A)\\
&&\qquad -E\,i \, e^{i \theta'}\,\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} F_A^{\vec K'}(\vec R_A)\nonumber\\
&&\qquad -\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A) F_A^{\vec K}(\vec R_A)\nonumber\\
&&\qquad +i \, e^{i \theta'}\,\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}
u(\vec R_A) F_A^{\vec K'}(\vec R_A)=\nonumber\\
&&\qquad -\gamma_0 \,i \, e^{i \theta'}
\sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
\sum_{\vec R_A} g(\vec r-\vec R_A) F_B^{\vec K}(\vec R_A-\vec\tau_l)\nonumber\\
&&\qquad -\gamma_0 \sum_{l=1}^3 e^{-i \vec K'\cdot \vec\tau_l}
\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} F_B^{\vec K'}(\vec R_A-\vec\tau_l);\nonumber
\end{eqnarray}
exploiting the property (144), it becomes
\vskip-5pt\noindent
\begin{eqnarray}
&& E\,\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right] F_A^{\vec K}(\vec r)-
E\,i \, e^{i \theta'}\,\left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}\right] F_A^{\vec K'}(\vec r)\\
&& -\left[\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A)\right]
F_A^{\vec K}(\vec r)\nonumber\\
&& +i \, e^{i \theta'}\,\left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}
u(\vec R_A)\right] F_A^{\vec K'}(\vec r)=\nonumber\\
&& -\gamma_0 \,i \, e^{i \theta'}
\sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]
F_B^{\vec K}(\vec r-\vec\tau_l)\nonumber\\
&& -\gamma_0 \sum_{l=1}^3 e^{-i \vec K'\cdot \vec\tau_l}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}\right] F_B^{\vec K'}(\vec r-\vec\tau_l).\nonumber
\end{eqnarray}
\vskip-5pt\noindent
For the quantities in the square brackets, we can use the properties
(141) and (143), together with the definitions
\vskip-5pt\noindent
\begin{equation}
\label{ua}
u_A(\vec r)=\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A),\qquad
u'_A(\vec r)=\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} u(\vec R_A),
\end{equation}
\vskip-5pt\noindent
obtaining:
\vskip-5pt\noindent
\begin{eqnarray}
\label{first}
&& E\,F_A^{\vec K}(\vec r)-u_A(\vec r)\,F_A^{\vec K}(\vec r)+
i \, e^{i \theta'}\,u'_A(\vec r)\,F_A^{\vec K'}(\vec r)=\\
&& -\gamma_0 \,i \, e^{i \theta'}
\sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
F_B^{\vec K}(\vec r-\vec\tau_l).\nonumber
\end{eqnarray}
\vskip-5pt\noindent
Expanding the smooth quantity $F_B^{\vec K} (\vec r-\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\vskip-5pt\noindent
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
F_B^{\vec K}(\vec r-\vec\tau_l)\simeq
\sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left[F_B^{\vec K} (\vec r)-
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_B^{\vec K} (\vec r)\right]=\\
&& \left\{\left(\sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}\right)
F_B^{\vec K} (\vec r)-\left[\sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_B^{\vec K} (\vec r)\right\}.\nonumber
\end{eqnarray}
\vskip-5pt\noindent
Let us now calculate the value of the sums which appear in the previous
expression
\vskip-5pt\noindent
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}=
1+e^{-i\frac{2\pi}{3}}+e^{i\frac{2\pi}{3}}=0;\\
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
1 \frac{a}{\sqrt{3}} \left(-\frac{\partial}{\partial x'}\right)\nonumber\\
&& +e^{-i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}-\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)+
e^{i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}+\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)=\nonumber\\
&& \frac{a}{\sqrt{3}}
\left(\left(-1+\frac{1}{2}e^{-i\frac{2\pi}{3}}+
\frac{1}{2}e^{i\frac{2\pi}{3}}\right)
\frac{\partial}{\partial x'}+\left(-\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}+
\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}} \right)
\frac{\partial}{\partial y'}\right).\nonumber
\end{eqnarray}
Since
\begin{equation*}
-1+\frac{1}{2}e^{-i\frac{2\pi}{3}}+\frac{1}{2}e^{i\frac{2\pi}{3}}=
-1+\frac{1}{2}(e^{-i\frac{2\pi}{3}}+e^{i\frac{2\pi}{3}})=
-1+\frac{1}{2}(-1)=-\frac{3}{2}
\end{equation*}
and
\begin{equation*}
-\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}+\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}=
\frac{\sqrt{3}}{2}(e^{i\frac{2\pi}{3}}-e^{-i\frac{2\pi}{3}})=
\frac{\sqrt{3}}{2}(i\sqrt{3})=i\frac{3}{2},
\end{equation*}
we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-\frac{a}{\sqrt{3}}\frac{3}{2}
\left(\frac{\partial}{\partial x'}-i\frac{\partial}{\partial y'}\right)=\\
&& -\frac{\sqrt{3}}{2}a (i\hat\kappa_{x'}+\hat\kappa_{y'})=
-i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'}),\nonumber
\end{eqnarray}
where we have defined
${\vec{\hat\kappa}}=-i\vec \nabla$ and thus
\begin{equation}
\quad
{\hat\kappa}_{x'}=-i\frac{\partial}{\partial x'}
\qquad \hbox{and} \qquad
{\hat\kappa}_{y'}=-i\frac{\partial}{\partial y'}.
\end{equation}
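The two phase sums evaluated above are easy to confirm numerically. The sketch below checks, with the three phases $1$, $e^{-i2\pi/3}$, $e^{i2\pi/3}$ read off from the evaluation of $e^{-i\vec K\cdot\vec\tau_l}$, that they sum to zero and that the coefficients of $\partial/\partial x'$ and $\partial/\partial y'$ equal $-3/2$ and $i\,3/2$:

```python
import cmath
import math

w = cmath.exp(-1j * 2 * math.pi / 3)   # e^{-i 2pi/3}
phases = [1, w, w.conjugate()]         # the three phases e^{-i K.tau_l}

# Sum of the three phases vanishes.
assert abs(sum(phases)) < 1e-12

# Coefficient of d/dx': -1 + (1/2) e^{-i 2pi/3} + (1/2) e^{i 2pi/3} = -3/2.
cx = -1 + 0.5 * w + 0.5 * w.conjugate()
# Coefficient of d/dy': (sqrt(3)/2) (e^{i 2pi/3} - e^{-i 2pi/3}) = i 3/2.
cy = (math.sqrt(3) / 2) * (w.conjugate() - w)

assert abs(cx - (-1.5)) < 1e-12
assert abs(cy - 1.5j) < 1e-12
```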
Substituting these results, eq.~(\ref{first}) becomes
\begin{eqnarray}
\label{first1}
&& E\,F_A^{\vec K}(\vec r)-u_A(\vec r)\,F_A^{\vec K}(\vec r)+
i \, e^{i \theta'}\,u'_A(\vec r)\,F_A^{\vec K'}(\vec r) \simeq\\
&& -\gamma_0 \,i \, e^{i \theta'}
\left(i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'})
F_B^{\vec K}(\vec r)\right)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{i \theta'}
(\hat\kappa_{x'}-i\hat\kappa_{y'}) F_B^{\vec K}(\vec r)=
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_B^{\vec K} (\vec r),\nonumber
\end{eqnarray}
where we have passed from the original reference frame
$\Sigma'=(\hbox{\boldmath{$\hat x$}}', \hbox{\boldmath{$\hat y$}}',
\hbox{\boldmath{$\hat z$}}')$ to a new frame
$\Sigma=(\hbox{\boldmath{$\hat x$}}, \hbox{\boldmath{$\hat y$}},
\hbox{\boldmath{$\hat z$}})$,
rotated, in the plane $(\hbox{\boldmath{$\hat x$}}',
\hbox{\boldmath{$\hat y$}}')$, around the origin by an angle
$\theta'$ (positive in the counterclockwise direction) with respect to the
original one (fig.~7) and we have used the fact that
\begin{eqnarray}
\label{diffcoord}
&& e^{i\theta'}({\hat\kappa}_{x'}-i{\hat\kappa}_{y'})=
(\cos\theta'+i\sin\theta')({\hat\kappa}_{x'}-i{\hat\kappa}_{y'})=\\
&& (\cos\theta' {\hat\kappa}_{x'}+\sin\theta' {\hat\kappa}_{y'})-
i(\cos\theta' {\hat\kappa}_{y'}-\sin\theta' {\hat\kappa}_{x'})=
{\hat\kappa}_x-i{\hat\kappa}_y\nonumber
\end{eqnarray}
(due to the relations between old and new coordinates), with
\begin{equation}
\label{kappa}
\quad
{\hat\kappa}_{x}=-i\frac{\partial}{\partial x}
\qquad \hbox{and} \qquad
{\hat\kappa}_{y}=-i\frac{\partial}{\partial y}.
\end{equation}
Indeed, it is a well-known result that, for a rotation by $\theta'$ of
the reference frame, the relations between the new and the old coordinates
are $x=x'\cos\theta'+y'\sin\theta'$
and $y=y'\cos\theta'-x'\sin\theta'$. Therefore we have that
\begin{equation}
\frac{\partial F(x,y)}{\partial x'} =
\frac{\partial F(x,y)}{\partial x} \frac{\partial x}{\partial x'}+
\frac{\partial F(x,y)}{\partial y} \frac{\partial y}{\partial x'}=
\frac{\partial F(x,y)}{\partial x} \cos\theta'-
\frac{\partial F(x,y)}{\partial y} \sin\theta'
\end{equation}
and that
\begin{equation}
\frac{\partial F(x,y)}{\partial y'} =
\frac{\partial F(x,y)}{\partial x} \frac{\partial x}{\partial y'}+
\frac{\partial F(x,y)}{\partial y} \frac{\partial y}{\partial y'}=
\frac{\partial F(x,y)}{\partial x} \sin\theta'+
\frac{\partial F(x,y)}{\partial y} \cos\theta'.
\end{equation}
As a consequence, we have that
\begin{eqnarray}
\qquad (\cos\theta' {\hat\kappa}_{x'}&+&\sin\theta' {\hat\kappa}_{y'}) F(x,y)=
\cos\theta' \left(-i\frac{\partial F(x,y)}{\partial x'}\right)+
\sin\theta' \left(-i\frac{\partial F(x,y)}{\partial y'}\right)=\\
&-& i\bigg[\frac{\partial F(x,y)}{\partial x} \cos^2\theta'-
\frac{\partial F(x,y)}{\partial y} \cos\theta'\sin\theta'\nonumber\\
&&\quad +\frac{\partial F(x,y)}{\partial x} \sin^2\theta'+
\frac{\partial F(x,y)}{\partial y} \sin\theta'\cos\theta'
\bigg]=\nonumber\\
&-& i\frac{\partial F(x,y)}{\partial x}
(\cos^2\theta'+\sin^2\theta')=
-i\frac{\partial F(x,y)}{\partial x}=\hat\kappa_x F(x,y)\nonumber
\end{eqnarray}
and that
\begin{eqnarray}
\qquad (\cos\theta' {\hat\kappa}_{y'}&-&\sin\theta' {\hat\kappa}_{x'}) F(x,y)=
\cos\theta' \left(-i\frac{\partial F(x,y)}{\partial y'}\right)-
\sin\theta' \left(-i\frac{\partial F(x,y)}{\partial x'}\right)=\\
&-& i\bigg[
\frac{\partial F(x,y)}{\partial x} \sin\theta'\cos\theta'+
\frac{\partial F(x,y)}{\partial y} \cos^2\theta'-\nonumber\\
&&\quad -\frac{\partial F(x,y)}{\partial x} \cos\theta'\sin\theta'+
\frac{\partial F(x,y)}{\partial y} \sin^2\theta'
\bigg]=\nonumber\\
&-& i\frac{\partial F(x,y)}{\partial y}
(\cos^2\theta'+\sin^2\theta')=
-i\frac{\partial F(x,y)}{\partial y}=\hat\kappa_y F(x,y),\nonumber
\end{eqnarray}
from which we obtain eq.~(\ref{diffcoord}).
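The relation $e^{i\theta'}({\hat\kappa}_{x'}-i{\hat\kappa}_{y'})={\hat\kappa}_x-i{\hat\kappa}_y$ can also be verified symbolically. A minimal sketch with SymPy, representing the partial derivatives of $F$ with respect to the rotated coordinates by free symbols:

```python
import sympy as sp

th = sp.symbols("theta", real=True)
# F_x, F_y stand for dF/dx and dF/dy in the rotated frame.
Fx, Fy = sp.symbols("F_x F_y")

# Chain rule for a frame rotated counterclockwise by theta:
# d/dx' = cos(theta) d/dx - sin(theta) d/dy
# d/dy' = sin(theta) d/dx + cos(theta) d/dy
dFdxp = sp.cos(th) * Fx - sp.sin(th) * Fy
dFdyp = sp.sin(th) * Fx + sp.cos(th) * Fy

I = sp.I
# kappa = -i d/d(coordinate), so (kappa_x' - i kappa_y') F = -i dFdxp - dFdyp.
lhs = sp.exp(I * th) * (-I * dFdxp - dFdyp)
rhs = -I * Fx - Fy          # (kappa_x - i kappa_y) F

# Rewrite e^{i theta} in terms of cos and sin, then simplify to zero.
assert sp.simplify(sp.expand((lhs - rhs).rewrite(sp.cos))) == 0
```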
$\theta'$ is the angle, taken counterclockwise, from the vector
$\vec a_1+\vec a_2$ to the axis $\hbox{\boldmath{$\hat x$}}$ of the new frame.
We have also defined the quantity $\gamma=(\sqrt{3}/2)\gamma_0 a$.
Note that in the new reference frame
$\Sigma=(\hbox{\boldmath{$\hat x$}}, \hbox{\boldmath{$\hat y$}},
\hbox{\boldmath{$\hat z$}})$
\begin{eqnarray}
&\vec a_1 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{a}{2}
\left[\begin{array}{c}
\displaystyle \sqrt{3}\cos\theta'+\sin\theta'\\[3pt]
\displaystyle \cos\theta'-\sqrt{3}\sin\theta'\\[3pt]
0
\end{array}\right],\quad&
\vec a_2 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{a}{2}
\left[\begin{array}{c}
\displaystyle \sqrt{3}\cos\theta'-\sin\theta'\\[3pt]
\displaystyle -\cos\theta'-\sqrt{3}\sin\theta'\\[3pt]
0
\end{array}\right],\\
&\vec b_1 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{2\pi}{\sqrt{3}a}
\left[\begin{array}{c}
\displaystyle \cos\theta'+\sqrt{3}\sin\theta'\\ [3pt]
\displaystyle \sqrt{3}\cos\theta'-\sin\theta'\\[3pt]
0
\end{array}\right],\quad&
\vec b_2 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{2\pi}{\sqrt{3}a}
\left[\begin{array}{c}
\displaystyle \cos\theta'-\sqrt{3}\sin\theta'\\
\noalign{\vskip3pt}
\displaystyle -\sqrt{3}\cos\theta'-\sin\theta'\\
\noalign{\vskip3pt}
0
\end{array}\right],\nonumber\\
&\vec K \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{4\pi}{3a}
\left[\begin{array}{c}
-\sin\theta'\\
-\cos\theta'\\
0
\end{array}\right],\quad&
\vec K' \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{4\pi}{3a}
\left[\begin{array}{c}
\sin\theta'\\
\cos\theta'\\
0
\end{array}\right].\nonumber
\end{eqnarray}
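As a consistency check, the rotated lattice and reciprocal-lattice vectors listed above should satisfy $\vec a_i\cdot\vec b_j = 2\pi\,\delta_{ij}$ for any rotation angle. The sketch below verifies this numerically, with arbitrary values of $a$ and $\theta'$ (assumptions, not from the text):

```python
import numpy as np

a, th = 1.0, 0.37                       # arbitrary lattice constant and angle
c, s = np.cos(th), np.sin(th)
r3 = np.sqrt(3)

# In-plane components of the rotated vectors listed above.
a1 = (a / 2) * np.array([r3 * c + s,  c - r3 * s])
a2 = (a / 2) * np.array([r3 * c - s, -c - r3 * s])
b1 = (2 * np.pi / (r3 * a)) * np.array([c + r3 * s,  r3 * c - s])
b2 = (2 * np.pi / (r3 * a)) * np.array([c - r3 * s, -r3 * c - s])

# Matrix of dot products a_i . b_j should be 2*pi times the identity.
M = np.array([[a1 @ b1, a1 @ b2],
              [a2 @ b1, a2 @ b2]])
assert np.allclose(M, 2 * np.pi * np.eye(2))
```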
Analogously, if we multiply the second of the tight-binding equations
(140) by
$g(\vec r-\vec R_B)(-i\, e^{-i\theta'} e^{-i \vec K\cdot \vec R_B})$
and we sum it over $\vec R_B$, we find
\begin{eqnarray}
&& E\,\sum_{\vec R_B} g(\vec r-\vec R_B) F_B^{\vec K} (\vec R_B)-
E\,i\,e^{-i\theta'}\,
\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K'-\vec K) \cdot \vec R_B} F_B^{\vec K'} (\vec R_B)\nonumber\\
&&\qquad -\sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B) F_B^{\vec K} (\vec R_B)\nonumber\\
&&\qquad +i\,e^{-i\theta'}\,\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K'-\vec K) \cdot \vec R_B} u(\vec R_B) F_B^{\vec K'} (\vec R_B)=\nonumber\\
&&\qquad \gamma_0 \,i\, e^{-i\theta'} \sum_{l=1}^3
e^{i \vec K \cdot \vec \tau_l} \sum_{\vec R_B} g(\vec r-\vec R_B)
F_A^{\vec K} (\vec R_B+\vec \tau_l)\nonumber\\
&&\qquad +\gamma_0 \, \sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i\,(\vec K'-\vec K) \cdot \vec R_B}
F_A^{\vec K'} (\vec R_B+\vec \tau_l);\nonumber
\end{eqnarray}
exploiting the property (144), it becomes
\begin{eqnarray}
&& E\,\left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right] F_B^{\vec K} (\vec r)-
E\,i\,e^{-i\theta'}\,
\left[\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K'-\vec K) \cdot \vec R_B}\right]
F_B^{\vec K'} (\vec r)\nonumber\\
&& -\left[\sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B)\right]
F_B^{\vec K} (\vec r)\nonumber\\
&& +i\,e^{-i\theta'}\,\left[\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K'-\vec K) \cdot \vec R_B} u(\vec R_B)\right]
F_B^{\vec K'} (\vec r)=\nonumber\\
&& \gamma_0 \,i\, e^{-i\theta'} \sum_{l=1}^3
e^{i \vec K \cdot \vec \tau_l} \left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right]
F_A^{\vec K} (\vec r+\vec \tau_l)\nonumber\\
&& +\gamma_0 \, \sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i\,(\vec K'-\vec K) \cdot \vec R_B}
\right] F_A^{\vec K'} (\vec r+\vec \tau_l).\nonumber
\end{eqnarray}
For the quantities in the square brackets, we can use the properties
(141) and (143), together with the definitions
\begin{equation}
\label{ub}
u_B(\vec r)=\sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B),\qquad
u'_B(\vec r)=\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B} u(\vec R_B),
\end{equation}
obtaining
\begin{eqnarray}
\label{second}
&& E\,F_B^{\vec K} (\vec r)-u_B(\vec r)\,F_B^{\vec K} (\vec r)+
i\,e^{-i\theta'}\, u'_B(\vec r)\,F_B^{\vec K'} (\vec r)=\\
&& \gamma_0 \,i\, e^{-i\theta'} \sum_{l=1}^3
e^{i \vec K \cdot \vec \tau_l} F_A^{\vec K} (\vec r+\vec \tau_l).\nonumber
\end{eqnarray}
Expanding the smooth quantity $F_A^{\vec K} (\vec r+\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K \cdot \vec \tau_l}
F_A^{\vec K} (\vec r+\vec \tau_l)\simeq
\sum_{l=1}^3 e^{i \vec K \cdot \vec \tau_l}
\left[F_A^{\vec K} (\vec r)+
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_A^{\vec K} (\vec r)\right]=\\
&& \left(\sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}\right)
F_A^{\vec K} (\vec r)+\left[\sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_A^{\vec K} (\vec r).\nonumber
\end{eqnarray}
Let us now calculate the value of the sums which appear in the previous
expression
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}=
1+e^{i\frac{2\pi}{3}}+e^{-i\frac{2\pi}{3}}=0;\\
&& \sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
1 \frac{a}{\sqrt{3}} \left(-\frac{\partial}{\partial x'}\right)\nonumber\\
&& +e^{i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}-\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)+
e^{-i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}+\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)=\nonumber\\
&& \frac{a}{\sqrt{3}}
\left(\left(-1+\frac{1}{2}e^{i\frac{2\pi}{3}}+
\frac{1}{2}e^{-i\frac{2\pi}{3}}\right)
\frac{\partial}{\partial x'}+\left(-\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}+
\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}} \right)
\frac{\partial}{\partial y'}\right).\nonumber
\end{eqnarray}
Since
\begin{equation*}
-1+\frac{1}{2}e^{i\frac{2\pi}{3}}+\frac{1}{2}e^{-i\frac{2\pi}{3}}=
-1+\frac{1}{2}(e^{i\frac{2\pi}{3}}+e^{-i\frac{2\pi}{3}})=
-1+\frac{1}{2}(-1)=-\frac{3}{2}
\end{equation*}
and
\begin{equation*}
-\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}+\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}=
-\frac{\sqrt{3}}{2}(e^{i\frac{2\pi}{3}}-e^{-i\frac{2\pi}{3}})=
-\frac{\sqrt{3}}{2}(i\sqrt{3})=-i \frac{3}{2},
\end{equation*}
we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-\frac{a}{\sqrt{3}}\frac{3}{2}
\left(\frac{\partial}{\partial x'}+i\frac{\partial}{\partial y'}\right)=\nonumber\\
&& -\frac{\sqrt{3}}{2}a (i{\hat\kappa}_{x'}-{\hat\kappa}_{y'})=
-i\frac{\sqrt{3}}{2}a ({\hat\kappa}_{x'}+i{\hat\kappa}_{y'}).\nonumber
\end{eqnarray}
Substituting these results, eq.~(\ref{second}) becomes
\begin{eqnarray}
\label{second1}
&& E\,F_B^{\vec K} (\vec r)-u_B(\vec r)\,F_B^{\vec K} (\vec r)+
i\,e^{-i\theta'}\, u'_B(\vec r)\,F_B^{\vec K'} (\vec r)\simeq\\
&& \gamma_0 \,i\, e^{-i\theta'}
\left( -i\frac{\sqrt{3}}{2}a ({\hat\kappa}_{x'}+i{\hat\kappa}_{y'})
\right) F_A^{\vec K} (\vec r)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{-i \theta'}
(\hat\kappa_{x'}+i\hat\kappa_{y'}) F_A^{\vec K}(\vec r)=
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_A^{\vec K} (\vec r),\nonumber
\end{eqnarray}
where we have made use of the relation
\begin{eqnarray}
\label{sumcoord}
&& e^{-i\theta'}({\hat\kappa}_{x'}+i{\hat\kappa}_{y'})=
(\cos\theta'-i\sin\theta')({\hat\kappa}_{x'}+i{\hat\kappa}_{y'})=\\
&& (\cos\theta' {\hat\kappa}_{x'}+\sin\theta' {\hat\kappa}_{y'})+
i(\cos\theta' {\hat\kappa}_{y'}-\sin\theta' {\hat\kappa}_{x'})=
{\hat\kappa}_x+i{\hat\kappa}_y.\nonumber
\end{eqnarray}
Next, if we multiply the first of the tight-binding equations
(140) by
$g(\vec r-\vec R_A)\times (i\, e^{-i\theta'} e^{-i \vec K'\cdot \vec R_A})$
and we sum it over $\vec R_A$, we find
\begin{eqnarray}
&& E\, i\, e^{-i\theta'} \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A} F_A^{\vec K} (\vec R_A)+
E \sum_{\vec R_A} g(\vec r-\vec R_A) F_A^{\vec K'} (\vec R_A)\nonumber\\
&&\qquad -i\, e^{-i\theta'} \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A} u(\vec R_A) F_A^{\vec K} (\vec R_A)\nonumber\\
&&\qquad -\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A) F_A^{\vec K'} (\vec R_A)=\nonumber\\
&&\qquad \gamma_0 \sum_{l=1}^3
e^{-i \vec K \cdot \vec \tau_l} \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A}
F_B^{\vec K} (\vec R_A-\vec \tau_l)\nonumber\\
&&\qquad -\gamma_0\,i\, e^{-i\theta'} \sum_{l=1}^3
e^{-i \vec K' \cdot \vec \tau_l}
\sum_{\vec R_A} g(\vec r-\vec R_A) F_B^{\vec K'} (\vec R_A-\vec \tau_l);\nonumber
\end{eqnarray}
exploiting the property (144), it becomes
\begin{eqnarray}
&& E\,i\, e^{-i\theta'} \left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A}\right] F_A^{\vec K} (\vec r)+
E \left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right] F_A^{\vec K'} (\vec r)\nonumber\\
&& -i\, e^{-i\theta'} \left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A} u(\vec R_A)\right]
F_A^{\vec K} (\vec r)\nonumber\\
&& -\left[\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A)\right]
F_A^{\vec K'} (\vec r)=\nonumber\\
&& \gamma_0 \sum_{l=1}^3
e^{-i \vec K \cdot \vec \tau_l} \left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A}\right]
F_B^{\vec K} (\vec r-\vec \tau_l)\nonumber\\
&& -\gamma_0\,i\, e^{-i\theta'} \sum_{l=1}^3
e^{-i \vec K' \cdot \vec \tau_l}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]
F_B^{\vec K'} (\vec r-\vec \tau_l).\nonumber
\end{eqnarray}
For the quantities in the square brackets, we can use the properties
(141) and (143), in the form
\begin{equation*}
\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K-\vec K') \cdot \vec R_A}=
\Big(\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i\,(\vec K'-\vec K) \cdot \vec R_A}\Big)^*=0,
\end{equation*}
obtaining
\begin{eqnarray}
\label{third}
&& E F_A^{\vec K'} (\vec r)-i\, e^{-i\theta'} {u'_A}^*(\vec r)
F_A^{\vec K} (\vec r)-u_A(\vec r) F_A^{\vec K'} (\vec r)=\\
&& -\gamma_0\,i\, e^{-i\theta'} \sum_{l=1}^3
e^{-i \vec K' \cdot \vec \tau_l} F_B^{\vec K'} (\vec r-\vec \tau_l).\nonumber
\end{eqnarray}
Expanding the smooth quantity $F_B^{\vec K'} (\vec r-\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
F_B^{\vec K'} (\vec r-\vec \tau_l)\simeq
\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left[F_B^{\vec K'} (\vec r)-
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_B^{\vec K'} (\vec r)\right]=\\
&& \left(\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}\right)
F_B^{\vec K'} (\vec r)-\left[\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_B^{\vec K'} (\vec r).\nonumber
\end{eqnarray}
Let us now calculate the value of the sums which appear in the previous
expression
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}=
1+e^{i\frac{2\pi}{3}}+e^{-i\frac{2\pi}{3}}=0;\\
&& \sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
1 \frac{a}{\sqrt{3}} \left(-\frac{\partial}{\partial x'}\right)\nonumber\\
&& +e^{i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}-\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)+
e^{-i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}+\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)=\nonumber\\
&& \frac{a}{\sqrt{3}}
\left(\left(-1+\frac{1}{2}e^{i\frac{2\pi}{3}}+
\frac{1}{2}e^{-i\frac{2\pi}{3}}\right)
\frac{\partial}{\partial x'}+\left(-\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}+
\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}\right)
\frac{\partial}{\partial y'}\right).\nonumber
\end{eqnarray}
Since
\begin{equation*}
-1+\frac{1}{2}e^{i\frac{2\pi}{3}}+\frac{1}{2}e^{-i\frac{2\pi}{3}}=
-1+\frac{1}{2}(e^{i\frac{2\pi}{3}}+e^{-i\frac{2\pi}{3}})=
-1+\frac{1}{2}(-1)=-\frac{3}{2}
\end{equation*}
and
\begin{equation*}
-\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}+\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}=
-\frac{\sqrt{3}}{2}(e^{i\frac{2\pi}{3}}-e^{-i\frac{2\pi}{3}})=
-\frac{\sqrt{3}}{2}(i\sqrt{3})=-i \frac{3}{2},
\end{equation*}
we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-\frac{a}{\sqrt{3}}\frac{3}{2}
\left(\frac{\partial}{\partial x'}+i\frac{\partial}{\partial y'}\right)=\nonumber\\
&& -\frac{\sqrt{3}}{2}a (i\hat\kappa_{x'}-\hat\kappa_{y'})=
-i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}+i\hat\kappa_{y'}).\nonumber
\end{eqnarray}
Substituting these results, eq.~(\ref{third}) becomes
\begin{eqnarray}
\label{third1}
&& E F_A^{\vec K'} (\vec r)-i\, e^{-i\theta'} {u'_A}^*(\vec r)
F_A^{\vec K} (\vec r)-u_A(\vec r) F_A^{\vec K'} (\vec r)\simeq\\
&& -\gamma_0\,i\, e^{-i\theta'}
\left(i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}+i\hat\kappa_{y'})
F_B^{\vec K'} (\vec r)\right)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{-i \theta'}
(\hat\kappa_{x'}+i\hat\kappa_{y'}) F_B^{\vec K'}(\vec r)=
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_B^{\vec K'} (\vec r),\nonumber
\end{eqnarray}
where we have exploited the relation (\ref{sumcoord}).
Finally, if we multiply the second of the tight-binding equations
(140) by
$g(\vec r-\vec R_B) \times e^{-i \vec K'\cdot \vec R_B}$
and we sum it over $\vec R_B$, we find
\begin{eqnarray}
&& E\, i\,e^{i\theta'}\,
\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i\,(\vec K-\vec K') \cdot \vec R_B}
F_B^{\vec K} (\vec R_B)+
E\,\sum_{\vec R_B} g(\vec r-\vec R_B) F_B^{\vec K'} (\vec R_B)\nonumber\\
&&\qquad -i\,e^{i\theta'}\,\sum_{\vec R_B}
g(\vec r-\vec R_B) e^{i\,(\vec K-\vec K') \cdot \vec R_B}
u(\vec R_B) F_B^{\vec K} (\vec R_B)\nonumber\\
&&\qquad -\sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B) F_B^{\vec K'} (\vec R_B)=\nonumber\\
&&\qquad -\gamma_0 \,\sum_{l=1}^3
e^{i \vec K \cdot \vec \tau_l} \sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K-\vec K') \cdot \vec R_B}
F_A^{\vec K} (\vec R_B+\vec \tau_l)\nonumber\\
&&\qquad +\gamma_0\,i\,e^{i\theta'}\,
\sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
\sum_{\vec R_B} g(\vec r-\vec R_B) F_A^{\vec K'} (\vec R_B+\vec \tau_l);\nonumber
\end{eqnarray}
exploiting the property (144) it becomes
\begin{eqnarray}
&& E\,i\,e^{i\theta'}\,
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i\,(\vec K-\vec K') \cdot \vec R_B}
\right] F_B^{\vec K} (\vec r)+
E\,\left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right]
F_B^{\vec K'} (\vec r)\nonumber\\
&& -i\,e^{i\theta'}\,\left[\sum_{\vec R_B}
g(\vec r-\vec R_B) e^{i\,(\vec K-\vec K') \cdot \vec R_B} u(\vec R_B)\right]
F_B^{\vec K} (\vec r)\nonumber\\
&& -\left[\sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B)\right]
F_B^{\vec K'} (\vec r)=\nonumber\\
&& -\gamma_0 \,\sum_{l=1}^3
e^{i \vec K \cdot \vec \tau_l} \left[\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K-\vec K') \cdot \vec R_B}\right]
F_A^{\vec K} (\vec r+\vec \tau_l)\nonumber\\
&& +\gamma_0\,i\,e^{i\theta'}\,
\sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) \right]
F_A^{\vec K'} (\vec r+\vec \tau_l).\nonumber
\end{eqnarray}
For the quantities in the square brackets, we can use the properties
(141) and (143), in the form
\begin{equation*}
\label{phase2}
\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K-\vec K') \cdot \vec R_B}=
\Big(\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i\,(\vec K'-\vec K) \cdot \vec R_B}\Big)^*=0,
\end{equation*}
obtaining
\begin{eqnarray}
\label{fourth}
&& E\,F_B^{\vec K'} (\vec r)-i\,e^{i\theta'}\,
{u'_B}^*(\vec r)\,F_B^{\vec K} (\vec r)-
u_B(\vec r)\,F_B^{\vec K'} (\vec r)=\\
&& \gamma_0 \,i\, e^{i\theta'} \sum_{l=1}^3
e^{i \vec K' \cdot \vec \tau_l} F_A^{\vec K'} (\vec r+\vec \tau_l).\nonumber
\end{eqnarray}
Expanding the smooth quantity $F_A^{\vec K'} (\vec r+\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
F_A^{\vec K'} (\vec r+\vec \tau_l)\simeq
\sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
\left[F_A^{\vec K'} (\vec r)+
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_A^{\vec K'} (\vec r)\right]=\\
&& \left(\sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}\right)
F_A^{\vec K'} (\vec r)+\left[\sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_A^{\vec K'} (\vec r).\nonumber
\end{eqnarray}
Let us now calculate the value of the sums which appear in the previous
expression
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}=
1+e^{-i\frac{2\pi}{3}}+e^{i\frac{2\pi}{3}}=0;\\
&& \sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
1 \frac{a}{\sqrt{3}} \left(-\frac{\partial}{\partial x'}\right)\nonumber\\
&& +e^{-i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}-\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)+
e^{i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}+\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)=\nonumber\\
&& \frac{a}{\sqrt{3}}
\left(\left(-1+\frac{1}{2}e^{-i\frac{2\pi}{3}}+
\frac{1}{2}e^{i\frac{2\pi}{3}}\right)
\frac{\partial}{\partial x'}+\left(-\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}+
\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}} \right)
\frac{\partial}{\partial y'}\right).\nonumber
\end{eqnarray}
Since
\begin{equation*}
-1+\frac{1}{2}e^{-i\frac{2\pi}{3}}+\frac{1}{2}e^{i\frac{2\pi}{3}}=
-1+\frac{1}{2}(e^{-i\frac{2\pi}{3}}+e^{i\frac{2\pi}{3}})=
-1+\frac{1}{2}(-1)=-\frac{3}{2}
\end{equation*}
and
\begin{equation*}
-\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}+\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}=
\frac{\sqrt{3}}{2}(e^{i\frac{2\pi}{3}}-e^{-i\frac{2\pi}{3}})=
\frac{\sqrt{3}}{2}(i\sqrt{3})=i \frac{3}{2},
\end{equation*}
we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-\frac{a}{\sqrt{3}}\frac{3}{2}
\left(\frac{\partial}{\partial x'}-i\frac{\partial}{\partial y'}\right)=\nonumber\\
&& -\frac{\sqrt{3}}{2}a (i\hat\kappa_{x'}+\hat\kappa_{y'})=
-i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'}).\nonumber
\end{eqnarray}
Substituting these values, eq.~(\ref{fourth}) becomes
\begin{eqnarray}
\label{fourth1}
&& E\,F_B^{\vec K'} (\vec r)-i\,e^{i\theta'}\,
{u'_B}^*(\vec r)\,F_B^{\vec K} (\vec r)-
u_B(\vec r)\,F_B^{\vec K'} (\vec r)=\\
&& \gamma_0 \,i\, e^{i\theta'}
\left( -i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'})
\right) F_A^{\vec K'} (\vec r)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{i \theta'}
(\hat\kappa_{x'}-i\hat\kappa_{y'}) F_A^{\vec K'}(\vec r)=
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_A^{\vec K'} (\vec r),\nonumber
\end{eqnarray}
where we have exploited the relation (154).
In this way, we have obtained the four equations (153),
(165), (170) and (174), which we can summarize as
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
u_A(\vec r)\,F_A^{\vec K}(\vec r)+
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_B^{\vec K} (\vec r)-
i\, e^{i \theta'}\,u'_A(\vec r)\,F_A^{\vec K'}(\vec r)=
E\,F_A^{\vec K}(\vec r),\\[5pt]
\displaystyle
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_A^{\vec K} (\vec r)+
u_B(\vec r)\,F_B^{\vec K} (\vec r)-
i\, e^{-i\theta'}\, u'_B(\vec r)\,F_B^{\vec K'} (\vec r)=
E\,F_B^{\vec K} (\vec r),\\[5pt]
\displaystyle
i\, e^{-i\theta'} {u'_A}^*(\vec r) F_A^{\vec K} (\vec r)+
u_A(\vec r) F_A^{\vec K'} (\vec r)+
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_B^{\vec K'} (\vec r)=
E F_A^{\vec K'} (\vec r),\\[5pt]
\displaystyle
i\, e^{i\theta'}\,{u'_B}^*(\vec r)\,F_B^{\vec K} (\vec r)+
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_A^{\vec K'} (\vec r)+
u_B(\vec r)\,F_B^{\vec K'} (\vec r)=
E\,F_B^{\vec K'} (\vec r),
\end{array} \right.
\end{equation}
and write in matrix form
\begin{eqnarray}
\label{diracpot}
\qquad && \left[\begin{array}{cccc}
u_A(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
-i \, e^{i \theta'}\,u'_A(\vec r) &
0 \\[3pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
u_B(\vec r) &
0 &
-i\, e^{-i\theta'}\, u'_B(\vec r) \\[3pt]
i\, e^{-i\theta'}\,{u'_A}^*(\vec r) &
0 &
u_A(\vec r) &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) \\[3pt]
0 &
i\, e^{i\theta'}\,{u'_B}^*(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
u_B(\vec r)
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right],\nonumber
\end{eqnarray}
which is the $\vec k \cdot \vec p$ equation of graphene.
\newpage
\setcounter{equation}{201}
\section{Extended version of the calculations at p.~537}
If we move to the momentum representation (see eq.~(111))
and enforce
\begin{equation}
\det\left\{\left[\begin{array}{cc}
0 & \gamma (\kappa_x+i\kappa_y)\\
\gamma (\kappa_x-i\kappa_y) & 0
\end{array}\right]-E
\left[\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right]\right\}=0,
\end{equation}
we find the dispersion relations
\begin{equation}
E_s^{\vec K'}(\vec\kappa)=s \gamma \sqrt{\kappa_x^2+\kappa_y^2}=
s \gamma|\vec \kappa|,
\end{equation}
where $s$ can assume the values $+1$ or $-1$.
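As a quick cross-check of this dispersion relation, one can diagonalize the $2\times 2$ matrix numerically; this is a sketch with arbitrary sample values of $\gamma$, $\kappa_x$ and $\kappa_y$ (ours, not from the text):

```python
import numpy as np

gamma = 1.0          # sample value of the band parameter gamma (ours)
kx, ky = 0.3, -0.4   # sample components of the wave vector kappa (ours)

H = gamma * np.array([[0.0, kx + 1j * ky],
                      [kx - 1j * ky, 0.0]])
energies = np.linalg.eigvalsh(H)   # real eigenvalues, ascending order
k_mod = np.hypot(kx, ky)           # |kappa|
```

The two eigenvalues come out as $\pm\gamma|\vec\kappa|$, i.e. $s\gamma|\vec\kappa|$ with $s=\pm 1$.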
The corresponding envelope functions are
\begin{equation}
\label{fk1}
\vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle},
\end{equation}
with $\tilde\phi_s (\vec\kappa)$ an arbitrary phase factor and
\begin{equation}
\tilde{| s \rangle} =\frac{1}{\sqrt{2}}\left[\begin{array}{c}
is\\
1
\end{array}\right].
\end{equation}
This result is easily verified by noting that

\begin{eqnarray}
&& \gamma \left[\begin{array}{cc}
0 & \hat\kappa_x+i\hat\kappa_y\\
\hat\kappa_x-i\hat\kappa_y & 0
\end{array}\right] \vec F_{s \vec\kappa}^{\vec K'}(\vec r)=\nonumber\\
&& \gamma \left[\begin{array}{cc}
0 & \hat\kappa_x+i\hat\kappa_y\\
\hat\kappa_x-i\hat\kappa_y & 0
\end{array}\right]
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s} R(\alpha (\vec\kappa)) \tilde{| s \rangle}=\nonumber\\
&& \gamma \left[\begin{array}{cc}
0 & \kappa_x+i\kappa_y\\
\kappa_x-i\kappa_y & 0
\end{array}\right]
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s} R(\alpha (\vec\kappa)) \tilde{| s \rangle}=\nonumber\\
&& \gamma \left[\begin{array}{cc}
0 & i|\vec\kappa|e^{i \alpha}\\
-i|\vec\kappa|e^{-i\alpha} & 0
\end{array}\right]
\left(\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s} \left[\begin{array}{cc}
e^{i\frac{\alpha}{2}} & 0\\
0 & e^{-i\frac{\alpha}{2}}
\end{array}\right]
\frac{1}{\sqrt{2}}\left[\begin{array}{c}
is\\
1
\end{array}\right] \right)=\nonumber\\
&& \frac{1}{2\sqrt{\Omega}}\gamma e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s} \!\left[\begin{array}{cc}
0 & i|\vec\kappa|e^{i \frac{\alpha}{2}}\\
-i|\vec\kappa|e^{-i\frac{\alpha}{2}} & 0
\end{array}\right]\!
\left[\begin{array}{c}
is\\
1
\end{array}\right]\!=\nonumber\\
&& \frac{1}{2\sqrt{\Omega}}\gamma e^{i(\vec\kappa\cdot\vec r+\tilde\phi_s)}
\!\left[\begin{array}{c}
i |\vec\kappa| e^{i \frac{\alpha}{2}}\\
|\vec\kappa| s e^{-i\frac{\alpha}{2}}
\end{array}\right]\nonumber
\end{eqnarray}
and that also
\begin{eqnarray}
&& E_s^{\vec K'} \vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
s \gamma |\vec \kappa| \left(\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s} \left[\begin{array}{cc}
e^{i\frac{\alpha}{2}} & 0\\
0 & e^{-i\frac{\alpha}{2}}
\end{array}\right]
\frac{1}{\sqrt{2}}\left[\begin{array}{c}
is\\
1
\end{array}\right] \right)=\nonumber\\
&& s \gamma |\vec \kappa| \frac{1}{2\sqrt{\Omega}}
e^{i(\vec\kappa\cdot\vec r+\tilde\phi_s)}
\left[\begin{array}{c}
i s e^{i \frac{\alpha}{2}}\\
e^{-i\frac{\alpha}{2}}
\end{array}\right]=
\frac{1}{2\sqrt{\Omega}} \gamma e^{i(\vec\kappa\cdot\vec r+\tilde\phi_s)}
\left[\begin{array}{c}
i s^2 |\vec \kappa| e^{i \frac{\alpha}{2}}\\
|\vec \kappa| s e^{-i\frac{\alpha}{2}}
\end{array}\right]=\nonumber\\
&& \frac{1}{2\sqrt{\Omega}}\gamma e^{i(\vec\kappa\cdot\vec r+\tilde\phi_s)}
\left[\begin{array}{c}
i |\vec\kappa| e^{i \frac{\alpha}{2}}\\
|\vec\kappa| s e^{-i\frac{\alpha}{2}}
\end{array}\right].\nonumber
\end{eqnarray}
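The verification above can also be performed numerically: apart from the common plane-wave prefactor, $R(\alpha)\tilde{|s\rangle}$ must be an eigenvector of the matrix with eigenvalue $s\gamma|\vec\kappa|$. A minimal sketch (sample values are ours):

```python
import numpy as np

gamma, kx, ky = 1.0, 0.3, -0.4               # sample values (ours)
k_mod = np.hypot(kx, ky)
alpha = np.angle(kx + 1j * ky) - np.pi / 2   # kappa_x + i kappa_y = i |kappa| e^{i alpha}

H = gamma * np.array([[0.0, kx + 1j * ky],
                      [kx - 1j * ky, 0.0]])
R = np.diag([np.exp(1j * alpha / 2), np.exp(-1j * alpha / 2)])

checks = []
for s in (+1, -1):
    ket = np.array([1j * s, 1.0]) / np.sqrt(2)   # the spinor |s~>
    F = R @ ket                                  # plane-wave prefactor dropped
    checks.append(np.allclose(H @ F, s * gamma * k_mod * F))
```

Both eigenvector checks pass for $s=\pm 1$.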
From these functions $F_A^{\vec K}$, $F_B^{\vec K}$, $F_A^{\vec K'}$
and $F_B^{\vec K'}$, we can find the functions $\psi_A$ and $\psi_B$
and thus the electron wave function $\psi$ in the absence of an external
potential, using the relations (134) and (120).
\newpage
\setcounter{equation}{209}
\section{Extended version of the calculations at pp.~539--540}
However, the terms containing the phase factors
$e^{i (\vec K'-\vec K) \cdot \vec R_A}$,
$e^{i (\vec K'-\vec K) \cdot \vec R_B}$, or their
complex conjugates are negligible with respect to the others.
Indeed, using the smoothing function $g(\vec r)$, we know from
the property (141) with $\vec r=\vec R_A$ that
$\sum_{\vec R_A'} g(\vec R_A-\vec R_A')=1$. Therefore we
can insert this sum into the term
\begin{equation}
\sum_{\vec R_A} \Big[e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)\Big],
\end{equation}
obtaining
\begin{equation}
\sum_{\vec R_A} \left\{\left[\sum_{\vec R_A'} g(\vec R_A-\vec R_A')\right]
e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)\right\},
\end{equation}
that can be rewritten, as a result of the point-symmetry of the function $g$
with respect to its center and thus of the fact that $g(\vec R_A-\vec R_A')=
g(-(\vec R_A-\vec R_A'))$, in this way:
\begin{equation}
\sum_{\vec R_A} \sum_{\vec R_A'} g(\vec R_A'-\vec R_A)
e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A).
\end{equation}
If then we use the property (144) with $\vec r=\vec R_A'$
and in particular the fact that
\begin{equation}
g(\vec R_A'-\vec R_A) {F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)
=g(\vec R_A'-\vec R_A) {F_A^{\vec K}}^*(\vec R_A') F_A^{\vec K'}(\vec R_A')
\end{equation}
(due to the smoothness of the envelope functions), the term becomes
\begin{equation}
\sum_{\vec R_A'} \Big[ \sum_{\vec R_A} g(\vec R_A'-\vec R_A)
e^{i (\vec K'-\vec K) \cdot \vec R_A} \Big]
{F_A^{\vec K}}^*(\vec R_A') F_A^{\vec K'}(\vec R_A')
\end{equation}
and, by way of the property (143) with $\vec r=\vec R_A'$, we conclude
that the quantities between square brackets, and thus the overall term,
are very small.
Analogously, we can see that the terms
\begin{eqnarray}
&& \sum_{\vec R_A} \Big[e^{-i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K'}}^*(\vec R_A) F_A^{\vec K}(\vec R_A)\Big]= \nonumber\\
&& \sum_{\vec R_A} \Big\{\Big[\sum_{\vec R_A'} g(\vec R_A-\vec R_A')\Big]
e^{-i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K'}}^*(\vec R_A) F_A^{\vec K}(\vec R_A)\Big\}= \nonumber\\
&& \sum_{\vec R_A} \sum_{\vec R_A'} g(\vec R_A'-\vec R_A)
e^{-i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K'}}^*(\vec R_A) F_A^{\vec K}(\vec R_A)= \nonumber\\
&& \sum_{\vec R_A'} \Big[ \sum_{\vec R_A} g(\vec R_A'-\vec R_A)
e^{-i (\vec K'-\vec K) \cdot \vec R_A} \Big]
{F_A^{\vec K'}}^*(\vec R_A') F_A^{\vec K}(\vec R_A'),\nonumber
\end{eqnarray}
\begin{eqnarray}
&& \sum_{\vec R_B} \Big[e^{i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K}}^*(\vec R_B) F_B^{\vec K'}(\vec R_B)\Big]= \nonumber\\
&& \sum_{\vec R_B} \Big\{\Big[\sum_{\vec R_B'} g(\vec R_B-\vec R_B')\Big]
e^{i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K}}^*(\vec R_B) F_B^{\vec K'}(\vec R_B)\Big\}= \nonumber\\
&& \sum_{\vec R_B} \sum_{\vec R_B'} g(\vec R_B'-\vec R_B)
e^{i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K}}^*(\vec R_B) F_B^{\vec K'}(\vec R_B)= \nonumber\\
&& \sum_{\vec R_B'} \Big[ \sum_{\vec R_B} g(\vec R_B'-\vec R_B)
e^{i (\vec K'-\vec K) \cdot \vec R_B} \Big]
{F_B^{\vec K}}^*(\vec R_B') F_B^{\vec K'}(\vec R_B'),\nonumber
\end{eqnarray}
and
\begin{eqnarray}
&& \sum_{\vec R_B} \Big[e^{-i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K'}}^*(\vec R_B) F_B^{\vec K}(\vec R_B)\Big]= \nonumber\\
&& \sum_{\vec R_B} \Big\{\Big[\sum_{\vec R_B'} g(\vec R_B-\vec R_B')\Big]
e^{-i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K'}}^*(\vec R_B) F_B^{\vec K}(\vec R_B)\Big\}= \nonumber\\
&& \sum_{\vec R_B} \sum_{\vec R_B'} g(\vec R_B'-\vec R_B)
e^{-i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K'}}^*(\vec R_B) F_B^{\vec K}(\vec R_B)= \nonumber\\
&& \sum_{\vec R_B'} \Big[ \sum_{\vec R_B} g(\vec R_B'-\vec R_B)
e^{-i (\vec K'-\vec K) \cdot \vec R_B} \Big]
{F_B^{\vec K'}}^*(\vec R_B') F_B^{\vec K}(\vec R_B')\nonumber
\end{eqnarray}
are negligible. Since $g(\vec r)$ has non-negligible values only within
a few lattice constants of its center, the previous considerations remain
approximately valid if we limit the sums to the atoms contained in
the area $S$.
\newpage
\setcounter{equation}{250}
\section{Extended version of the calculations at pp.~548--549}
Multiplying the second equation of (245) by
$g(\vec r-\vec R_B) (-i e^{-i\theta'}e^{-i \vec K\cdot \vec R_B})$,
summing it over $\vec R_B$ and then using the properties of the function
$g$, we find analogously
\begin{eqnarray}
&& e^{i \vec K \cdot \vec C_h} \sum_{\vec R_B} g(\vec r-\vec R_B)
F_B^{\vec K}(\vec R_B+\vec C_h)\\
&& -i e^{-i\theta'} e^{i \vec K'\cdot \vec C_h}
\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K'-\vec K) \cdot \vec R_B}
F_B^{\vec K'} (\vec R_B+\vec C_h)=\nonumber\\
&& \sum_{\vec R_B} g(\vec r-\vec R_B) F_B^{\vec K}(\vec R_B)
-i e^{-i\theta'}
\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K'-\vec K) \cdot \vec R_B}
F_B^{\vec K'} (\vec R_B) \Rightarrow\nonumber\\
&& e^{i \vec K \cdot \vec C_h} \left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right]
F_B^{\vec K}(\vec r+\vec C_h)\nonumber\\
&& -i e^{-i\theta'} e^{i \vec K'\cdot \vec C_h}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K'-\vec K) \cdot \vec R_B}
\right] F_B^{\vec K'} (\vec r+\vec C_h)=\nonumber\\
&& \left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right] F_B^{\vec K}(\vec r)
-i e^{-i\theta'}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K'-\vec K) \cdot \vec R_B}
\right] F_B^{\vec K'} (\vec r) \Rightarrow\nonumber\\
&& e^{i \vec K\cdot \vec C_h} F_B^{\vec K}(\vec r+\vec C_h)=
F_B^{\vec K}(\vec r).\nonumber
\end{eqnarray}
Substituting the value of $e^{i \vec K\cdot \vec C_h}$, we can rewrite
this boundary condition in the form
\begin{equation}
e^{i \frac{2\pi\nu}{3}}F_B^{\vec K}(\vec r+\vec C_h)=F_B^{\vec K}(\vec r)
\end{equation}
or, equivalently
\begin{equation}
F_B^{\vec K}(\vec r+\vec C_h)=e^{-i \frac{2\pi\nu}{3}} F_B^{\vec K}(\vec r).
\end{equation}
\newpage
\setcounter{equation}{272}
\section{Extended version of the calculations at pp.~551--553}
We can proceed analogously for the boundary conditions near $\vec K'$.
Indeed, multiplying the first equation of (245) by
$g(\vec r-\vec R_A)
(i e^{-i\theta'} e^{-i \vec K'\cdot \vec R_A})$,
summing it over $\vec R_A$ and then using the properties of the function
$g$, we find
\begin{eqnarray}
&& i e^{-i\theta'} e^{i \vec K\cdot \vec C_h}
\sum_{\vec R_A} g(\vec r-\vec R_A) e^{i (\vec K-\vec K') \cdot \vec R_A}
F_A^{\vec K}(\vec R_A+\vec C_h)\\
&& +e^{i \vec K'\cdot \vec C_h}
\sum_{\vec R_A} g(\vec r-\vec R_A)
F_A^{\vec K'}(\vec R_A+\vec C_h)=\nonumber\\
&& i e^{-i\theta'}
\sum_{\vec R_A} g(\vec r-\vec R_A) e^{i (\vec K-\vec K') \cdot \vec R_A}
F_A^{\vec K}(\vec R_A)+
\sum_{\vec R_A} g(\vec r-\vec R_A)
F_A^{\vec K'}(\vec R_A) \Rightarrow\nonumber\\
&& i e^{-i\theta'} e^{i \vec K\cdot \vec C_h}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A) e^{i (\vec K-\vec K') \cdot \vec R_A}
\right] F_A^{\vec K}(\vec r+\vec C_h)\nonumber\\
&& +e^{i \vec K'\cdot \vec C_h}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]
F_A^{\vec K'}(\vec r+\vec C_h)=\nonumber\\
&& i e^{-i\theta'}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A) e^{i (\vec K-\vec K') \cdot \vec R_A}
\right] F_A^{\vec K}(\vec r)+
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]
F_A^{\vec K'}(\vec r) \Rightarrow\nonumber\\
&& e^{i \vec K'\cdot \vec C_h} F_A^{\vec K'}(\vec r+\vec C_h)=
F_A^{\vec K'}(\vec r).\nonumber
\end{eqnarray}
The scalar product between $\vec K'$ and $\vec C_h$ is equal to
\begin{equation}
\label{k1ch}
\vec K'\cdot \vec C_h=-\frac{2\pi}{3}(m-n)=
-2\pi\tilde N-\frac{2\pi\nu}{3},
\end{equation}
where we have used the previously introduced relation
$m-n=3 \tilde N +\nu$ with $\nu=0$ or $\pm 1$ and $\tilde N$ a suitable
integer. Thus we have that
\begin{equation*}
e^{i \vec K'\cdot \vec C_h}=e^{-i 2\pi \tilde N}
e^{-i \frac{2\pi\nu}{3}}=e^{-i \frac{2\pi\nu}{3}}
\end{equation*}
and consequently the boundary condition near $\vec K'$ is
\begin{equation}
e^{-i \frac{2\pi\nu}{3}}F_A^{\vec K'}(\vec r+\vec C_h)=
F_A^{\vec K'}(\vec r),
\end{equation}
or, equivalently
\begin{equation}
F_A^{\vec K'}(\vec r+\vec C_h)=
e^{i \frac{2\pi\nu}{3}}F_A^{\vec K'}(\vec r).
\end{equation}
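The phase relation $e^{i\vec K'\cdot\vec C_h}=e^{-i\frac{2\pi\nu}{3}}$ can be spot-checked numerically from $\vec K'\cdot\vec C_h=-\frac{2\pi}{3}(m-n)$. The chiral indices below are arbitrary samples (ours):

```python
import cmath
import math

results = []
for m, n in [(10, 10), (9, 3), (8, 3), (7, 3)]:   # sample chiral indices (ours)
    nu = (m - n + 1) % 3 - 1                      # nu in {-1, 0, +1}, m - n = 3*Ntilde + nu
    lhs = cmath.exp(-1j * 2 * math.pi / 3 * (m - n))   # e^{i K'.C_h}
    rhs = cmath.exp(-1j * 2 * math.pi * nu / 3)
    results.append(abs(lhs - rhs) < 1e-12)
```

The equality holds for every sample, independently of the integer $\tilde N$.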
On the other hand, multiplying the second equation of (245) by
$g(\vec r-\vec R_B) e^{-i \vec K'\cdot \vec R_B}$,
summing it over $\vec R_B$ and then using the properties of the function
$g$, we find
\begin{eqnarray}
&& i \, e^{i \theta'} e^{i \vec K\cdot \vec C_h}
\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K-\vec K') \cdot \vec R_B}
F_B^{\vec K} (\vec R_B+\vec C_h)\\
&& +e^{i \vec K'\cdot \vec C_h}
\sum_{\vec R_B} g(\vec r-\vec R_B)
F_B^{\vec K'}(\vec R_B+\vec C_h)=\nonumber\\
&& i \, e^{i \theta'}
\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K-\vec K') \cdot \vec R_B}
F_B^{\vec K} (\vec R_B)+
\sum_{\vec R_B} g(\vec r-\vec R_B)
F_B^{\vec K'}(\vec R_B) \Rightarrow\nonumber\\
&& i \, e^{i \theta'} e^{i \vec K\cdot \vec C_h}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K-\vec K') \cdot \vec R_B}
\right] F_B^{\vec K} (\vec r+\vec C_h)\nonumber\\
&& +e^{i \vec K'\cdot \vec C_h}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right]
F_B^{\vec K'}(\vec r+\vec C_h)=\nonumber\\
&& i \, e^{i \theta'}
\left[\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K-\vec K') \cdot \vec R_B}
\right] F_B^{\vec K} (\vec r)+
\left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right]
F_B^{\vec K'}(\vec r) \Rightarrow\nonumber\\
&& e^{i \vec K'\cdot \vec C_h} F_B^{\vec K'}(\vec r+\vec C_h)=
F_B^{\vec K'}(\vec r).\nonumber
\end{eqnarray}
Substituting the value of $e^{i \vec K'\cdot \vec C_h}$, we can rewrite this
second boundary condition near $\vec K'$ in the form
\begin{equation}
e^{-i \frac{2\pi\nu}{3}}F_B^{\vec K'}(\vec r+\vec C_h)=
F_B^{\vec K'}(\vec r),
\end{equation}
or, equivalently
\begin{equation}
F_B^{\vec K'}(\vec r+\vec C_h)=
e^{i \frac{2\pi\nu}{3}}F_B^{\vec K'}(\vec r).
\end{equation}
Thus the overall periodic boundary condition near $\vec K'$ is
\begin{equation}
\left[\begin{array}{c}
F_A^{\vec K'}(\vec r+\vec C_h)\\[5pt]
F_B^{\vec K'}(\vec r+\vec C_h)
\end{array}\right]=e^{i \frac{2\pi\nu}{3}}
\left[\begin{array}{c}
F_A^{\vec K'}(\vec r)\\[5pt]
F_B^{\vec K'}(\vec r)
\end{array}\right],
\end{equation}
which can be written in a compact form
\begin{equation}
\vec F^{\vec K'}(\vec r+\vec C_h)=e^{i \frac{2\pi\nu}{3}}
\vec F^{\vec K'}(\vec r).
\end{equation}
Substituting the form that, in the absence of an external potential,
the envelope functions have near $\vec K'$ (eq.~(204))
\begin{equation}
\vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
\frac{1}{\sqrt{2 L \ell}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle} =
\frac{1}{\sqrt{2 L \ell}}e^{i (\kappa_x x +\kappa_y y )}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle},
\end{equation}
the periodic boundary condition becomes
\begin{equation}
\frac{1}{\sqrt{2 L \ell}}
e^{i\vec\kappa\cdot(\vec r+\vec C_h)}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle} =
e^{i \frac{2\pi\nu}{3}}
\frac{1}{\sqrt{2 L \ell}}
e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle},
\end{equation}
or, equivalently
\begin{equation}
e^{i\vec\kappa\cdot\vec C_h}=e^{i \frac{2\pi\nu}{3}}.
\end{equation}
This can be rewritten in the form
\begin{equation}
e^{i \kappa_x L}=e^{i \frac{2\pi\nu}{3}}\cdot 1
=e^{i \frac{2\pi\nu}{3}}e^{i 2\pi \overline{n}},
\end{equation}
or, equivalently
\begin{equation}
\kappa_x L=\frac{2\pi\nu}{3}+2\pi \overline{n}
\end{equation}
and thus
\begin{equation}
\kappa_x =\frac{2\pi}{L}\left(\overline{n}+\frac{\nu}{3}\right)=
\tilde\kappa_{\nu} (\overline{n}),
\end{equation}
with $\overline{n}$ an integer.
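This quantization condition can be verified directly: every $\kappa_x$ of the form $\frac{2\pi}{L}(\overline{n}+\nu/3)$ satisfies $e^{i\kappa_x L}=e^{i\frac{2\pi\nu}{3}}$. A sketch with an arbitrary circumference $L$ (ours):

```python
import cmath
import math

L = 1.0                                   # sample circumference (ours)
ok = True
for nu in (-1, 0, 1):
    for n_bar in range(-3, 4):
        kappa_x = 2 * math.pi / L * (n_bar + nu / 3)
        ok &= abs(cmath.exp(1j * kappa_x * L)
                  - cmath.exp(1j * 2 * math.pi * nu / 3)) < 1e-12
```

The check passes for all three values of $\nu$ and a range of integers $\overline{n}$.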
Analogously to what we have done near $\vec K$, this condition on
$\kappa_x$ can be found also setting
\begin{equation*}
e^{i \vec k\cdot \vec C_h}=1
\end{equation*}
or, equivalently
\begin{equation*}
\vec k\cdot \hbox{\boldmath{$\hat C$}}_h=
k_x=(\vec K')_x+\kappa_x=\frac{2\pi}{L} \overline{m},
\end{equation*}
which (using eq.~(274)) becomes
\begin{eqnarray}
\kappa_x &=& \frac{2\pi}{L} \overline{m}-(\vec K')_x=
\frac{2\pi}{L} \overline{m}-\frac{\vec K'\cdot \vec C_h}{L}=
\frac{2\pi}{L} \overline{m}+\frac{2\pi}{L}\tilde N+\frac{2\pi}{3 L}\nu=\nonumber\\
&=& \frac{2\pi}{L} \left(\overline{m}+\tilde N+\frac{\nu}{3}\right)=
\frac{2\pi}{L} \left(\overline{n}+\frac{\nu}{3}\right)=
\tilde\kappa_{\nu} (\overline{n})\nonumber
\end{eqnarray}
(with $\overline{n} \equiv \overline{m}+\tilde N$).
If we substitute this condition on $\kappa_x$ in the dispersion
relations of graphene, we find
\begin{equation}
E_{s,\overline{n}}^{\vec K'} (\kappa_y)=s\gamma|\vec\kappa|=
s\gamma\sqrt{\kappa_x^2+\kappa_y^2}=
s\gamma\sqrt{\tilde\kappa_{\nu} (\overline{n})^2+\kappa_y^2},
\end{equation}
where $k_y$ now plays the role of the wave vector $k$ of the nanotube,
and $\kappa_y$ is the difference between $k$
and the component of $\vec K'$ along $y$.
On the other hand, if, starting from eq.~(204), we choose as arbitrary
phase $\tilde\phi_s=\alpha /2$ and then we enforce the condition on $\kappa_x$,
we find as envelope functions in the carbon nanotube near $\vec K'$:
\begin{eqnarray}
&& \vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
\frac{1}{\sqrt{2 L \ell}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s} \left[\begin{array}{cc}
e^{i\frac{\alpha}{2}} & 0\\
0 & e^{-i\frac{\alpha}{2}}
\end{array}\right]
\frac{1}{\sqrt{2}}\left[\begin{array}{c}
is\\
1
\end{array}\right]=\\
&& \frac{1}{2 \sqrt{L \ell}}
e^{i(\kappa_x x+\kappa_y y)}e^{i\tilde\phi_s}
\left[\begin{array}{c}
i s e^{i \frac{\alpha}{2}}\\
e^{-i\frac{\alpha}{2}}
\end{array}\right]=
\frac{1}{2 \sqrt{L \ell}} e^{i(\kappa_x x+\kappa_y y)}
\left[\begin{array}{c}
i s e^{i \alpha}\\
1
\end{array}\right]=\nonumber\\
&& \frac{1}{2 \sqrt{L \ell}}
\left[\begin{array}{c}
s e^{i (\frac{\pi}{2}+\alpha)}\\
1
\end{array}\right]
e^{i\kappa_x x+i\kappa_y y}=\nonumber\\
&& \frac{1}{2 \sqrt{L \ell}}
\left[\begin{array}{c}
s \tilde b_{\nu}(\overline{n},\kappa_y)\\
1
\end{array}\right]
e^{i \tilde\kappa_{\nu}(\overline{n}) x+i\kappa_y y}=
\vec F_{s \overline{n} \kappa_y}^{\vec K'}(\vec r),\nonumber
\end{eqnarray}
where (using the definition of the angle $\alpha$: see eq.~(193))
\begin{equation}
\tilde b_{\nu}(\overline{n},\kappa_y)=e^{i\left(\frac{\pi}{2}+\alpha\right)}=
\frac{\kappa_x+i \kappa_y}{\sqrt{\kappa_x^2+\kappa_y^2}}=
\frac{\tilde\kappa_{\nu}(\overline{n})+i \kappa_y}
{\sqrt{\tilde\kappa_{\nu}(\overline{n})^2+\kappa_y^2}}.
\end{equation}
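One can verify numerically that the spinor $(s\,\tilde b_\nu, 1)$ is indeed an eigenvector of the $2\times 2$ matrix near $\vec K'$ with eigenvalue $s\gamma|\vec\kappa|$; the sample values of $\gamma$, $\kappa_x=\tilde\kappa_\nu(\overline{n})$ and $\kappa_y$ below are ours:

```python
import numpy as np

gamma, kx, ky = 1.0, 0.25, 0.6     # sample values (ours)
k_mod = np.hypot(kx, ky)
b = (kx + 1j * ky) / k_mod         # b~_nu(n_bar, kappa_y)

H = gamma * np.array([[0.0, kx + 1j * ky],
                      [kx - 1j * ky, 0.0]])
checks = [np.allclose(H @ np.array([s * b, 1.0]),
                      s * gamma * k_mod * np.array([s * b, 1.0]))
          for s in (+1, -1)]
```

Both checks pass, since $(\kappa_x-i\kappa_y)(\kappa_x+i\kappa_y)=|\vec\kappa|^2$ and $s^2=1$.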
\newpage
\setcounter{equation}{324}
\section{Extended version of the calculations at pp.~564--565}
Let us now enforce the boundary conditions on $\Phi_B^{\vec K'}(y)$ and
$\Phi_A^{\vec K'}(y)$:
\begin{eqnarray}
\label{realk1}
&& \Phi_B^{\vec K'}(0)=0 \Rightarrow
C+D=0\Rightarrow D=-C;\\
&& \Phi_A^{\vec K'}(\tilde W)=0 \Rightarrow
\frac{\gamma}{E}\left((\kappa'_x+z') C e^{z' \tilde W}+
(\kappa'_x-z') D e^{-z' \tilde W}\right)=0 \Rightarrow\nonumber\\
&& (\kappa'_x+z') C e^{z' \tilde W}-
(\kappa'_x-z') C e^{-z' \tilde W}=0 \Rightarrow\nonumber\\
&& (\kappa'_x+z') C e^{z' \tilde W}=
(\kappa'_x-z') C e^{-z' \tilde W} \Rightarrow\nonumber\\
&& e^{-2 z' \tilde W}=\frac{\kappa'_x+z'}{\kappa'_x-z'}=
\frac{(-\kappa'_x)-z'}{(-\kappa'_x)+z'},\nonumber
\end{eqnarray}
which is equal to eq.~(307) if we substitute $\kappa_x$ with
$-\kappa'_x$.
Here we consider again real values of $z'$.
If we graphically represent (fig.~11, with $z$ substituted with $z'$,
$\kappa_x$ with $-\kappa'_x$, and $-\kappa_x$ with $\kappa'_x$)
the two functions $f_1 (z')=e^{-2 z' \tilde W}$ and
$f_2 (z')=((-\kappa'_x)-z')/((-\kappa'_x)+z')$, we see that (apart from $z'=0$,
which corresponds to identically null $\Phi$'s)
there is an intersection between $f_1$ and $f_2$ for a real value of $z'$
(and thus eq.~(325) has a {\em real solution} $z'$) only if
$-\kappa'_x>0$ ({\em i.e.} if $\kappa'_x<0$) and if $f_1 (z')$ is steeper than
$f_2 (z')$ in $z'=0$, {\em i.e.} if
\begin{eqnarray}
&& \left|\left[\frac{d}{dz'}f_1(z')\right]_{z'=0}\right|>
\left|\left[\frac{d}{dz'}f_2(z')\right]_{z'=0}\right| \Rightarrow\nonumber\\
&& \left|\left[-2 \tilde W e^{-2 z' \tilde W}\right]_{z'=0}\right|>
\left|\left[-\frac{1}{(-\kappa'_x)+z'}-
\frac{(-\kappa'_x)-z'}{((-\kappa'_x)+z')^2}\right]_{z'=0}\right|=\nonumber\\
&& \left|\left[-\frac{(-\kappa'_x)+z'+(-\kappa'_x)-z'}
{((-\kappa'_x)+z')^2}\right]_{z'=0}\right|=
\left|\left[-\frac{2 (-\kappa'_x)}{((-\kappa'_x)+z')^2}\right]_{z'=0}\right|
\Rightarrow\nonumber\\
&& 2 \tilde W>\frac{2 (-\kappa'_x)}{(-\kappa'_x)^2} \Rightarrow
\tilde W>\frac{1}{-\kappa'_x} \Rightarrow
-\kappa'_x>\frac{1}{\tilde W} \Rightarrow
\kappa'_x<-\frac{1}{\tilde W}.\nonumber
\end{eqnarray}
If instead $\kappa'_x>-1/\tilde W$, eq.~(325) does not have real
solutions $z'$ (apart from $z'=0$).
In the case of real $z'$, from eq.~(325) we can find that
\begin{eqnarray}
&& e^{-2 z' \tilde W}=\frac{(-\kappa'_x)-z'}{(-\kappa'_x)+z'}\Rightarrow
(-\kappa'_x) e^{-2 z' \tilde W}+z' e^{-2 z' \tilde W}=(-\kappa'_x)-z'
\Rightarrow\\
&& (-\kappa'_x) (1-e^{-2 z' \tilde W})=z' (1+e^{-2 z' \tilde W})
\Rightarrow\nonumber\\
&& -\kappa'_x =z'\,\frac{1+e^{-2 z' \tilde W}}{1-e^{-2 z' \tilde W}}=
z'\,\frac{e^{z' \tilde W}+e^{-z' \tilde W}}{e^{z' \tilde W}-e^{-z' \tilde W}}=
\frac{z'}{\tanh(z' \tilde W)}
\Rightarrow\nonumber\\
&& \kappa'_x =-\frac{z'}{\tanh(z' \tilde W)}\nonumber
\end{eqnarray}
($z'=0$ does not have to be considered) and thus
\vskip-3pt\noindent
\begin{eqnarray}
\label{esinh1}
&& \left(\frac{E}{\gamma}\right)^2={\kappa'_x}^2-{z'}^2=
\frac{{z'}^2}{\tanh^2(z' \tilde W)}-{z'}^2=
{z'}^2 \left(\frac{\cosh^2(z' \tilde W)}{\sinh^2(z' \tilde W)}-1\right)=\\
&& {z'}^2 \left(\frac{\cosh^2(z' \tilde W)-\sinh^2(z' \tilde W)}
{\sinh^2(z' \tilde W)}\right)=
\frac{{z'}^2}{\sinh^2(z' \tilde W)} \Rightarrow
\left|\frac{E}{\gamma}\right|=\left|\frac{z'}{\sinh(z' \tilde W)}\right|.\nonumber
\end{eqnarray}
Since (for the properties of the hyperbolic sine function)
$|\sinh(z' \tilde W)|>|z' \tilde W|=|z'|\tilde W$, we see that in this case
\begin{equation*}
\left|\frac{E}{\gamma}\right|<\frac{|z'|}{|z'| \tilde W}=
\frac{1}{\tilde W}.
\end{equation*}
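These relations between $\kappa'_x$, $z'$ and $E$ can be checked numerically: starting from an arbitrary real $z'\neq 0$ (sample values are ours), the transcendental equation and the bound on $|E/\gamma|$ both follow.

```python
import math

W = 2.0                          # sample ribbon width W~ (ours)
z = 0.8                          # sample real solution z' != 0 (ours)
kx = -z / math.tanh(z * W)       # kappa'_x from the relation above

# e^{-2 z' W~} = (kappa'_x + z')/(kappa'_x - z') holds
eq_holds = abs(math.exp(-2 * z * W) - (kx + z) / (kx - z)) < 1e-12

E_over_gamma = abs(z / math.sinh(z * W))   # |E/gamma| = |z'/sinh(z' W~)|
bound_holds = (kx < -1 / W) and (E_over_gamma < 1 / W)
```

The same numbers also satisfy the dispersion relation $(E/\gamma)^2={\kappa'_x}^2-{z'}^2$.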
We can write (exploiting what we have found from the boundary
conditions) that
\begin{eqnarray}
&& \Phi_A^{\vec K'}(y)=
\frac{\gamma}{E}
\left((\kappa'_x+z') C e^{z'y}+(\kappa'_x-z') D e^{-z'y}\right)=\\
&& \frac{\gamma}{E}
\left((\kappa'_x+z') C e^{z'y}-(\kappa'_x-z') C e^{-z'y}\right)=\nonumber\\
&& \frac{\gamma}{E} C
\left(\kappa'_x (e^{z'y}-e^{-z'y})+z' (e^{z'y}+e^{-z'y})\right)=\nonumber\\
&& \frac{\gamma}{E} 2 C \left(\kappa'_x \sinh(z'y)+z' \cosh(z'y)\right)=\nonumber\\
&& 2 C \frac{\gamma}{E}
\left(-\frac{z'}{\tanh(z' \tilde W)} \sinh(z'y)+z' \cosh(z'y)\right)=\nonumber\\
&& 2 C \frac{\gamma}{E} z' \,
\frac{-\cosh(z' \tilde W)\sinh(z'y)+\sinh(z' \tilde W)\cosh(z'y)}
{\sinh(z' \tilde W)}=\nonumber\\
&& 2 C \frac{\gamma}{E} z' \,
\frac{\cosh(z' \tilde W)\sinh(-z'y)+\sinh(z' \tilde W)\cosh(-z'y)}
{\sinh(z' \tilde W)}=\nonumber\\
&& 2 C \left(\frac{\gamma}{E} \frac{z'}{\sinh(z' \tilde W)}\right)
\sinh(z'(\tilde W-y))=\nonumber\\
&& 2 C \,{\rm sign}\!\left(\frac{E}{\gamma} \frac{z'}{\sinh(z' \tilde W)}\right)
\sinh(z'(\tilde W-y)),\nonumber
\end{eqnarray}
where in the last step we have exploited the fact that, due to
eq.~(327), the product between $\gamma/E$ and $z'/\sinh(z' \tilde W)$
can only be equal to $+1$ (if the two quantities have the same sign) or
$-1$ (if they have opposite signs).
Moreover we have that
\begin{equation*}
\Phi_B^{\vec K'}(y)=C e^{z'y}+D e^{-z'y}=C e^{z'y}-C e^{-z'y}=
C (e^{z'y}-e^{-z'y})=
2 C \sinh(z'y).
\end{equation*}
These are edge states, each one exponentially localized on one edge
of the ribbon.
Also in this case, these edge states correspond to bands flattened towards
$E=0$. Since the Dirac point $\vec K'$, folded into the Brillouin zone
$(-\pi/a,\pi/a)$ of the zigzag nanoribbon, corresponds to
$k_x=4\pi/(3a)-2\pi/a=-2\pi/(3a)$,
the condition $\kappa'_x<-1/\tilde W$ is equivalent to
$k'_x=K'_x+\kappa'_x<-2\pi/(3a)-1/\tilde W$. Therefore also in the region
$-\pi/a<k_x<-2\pi/(3a)-1/\tilde W$ we have two bands flattened towards $E=0$,
which confirms the metallic nature of zigzag nanoribbons.
Let us now instead consider the {\em imaginary solutions} $z'=i \kappa'_n$
(with $\kappa'_n$ real) of eq.~(325). In this case the dispersion
relation $E=\pm \gamma \sqrt{{\kappa'_x}^2-{z'}^2}$ becomes
$E=\pm \gamma \sqrt{{\kappa'_x}^2+{\kappa'_n}^2}$.
The solutions are given by
\begin{eqnarray}
&& e^{-2 z' \tilde W}=\frac{\kappa'_x+z'}{\kappa'_x-z'}
\Rightarrow\\
&& e^{-i 2 \kappa'_n \tilde W}=
\frac{\kappa'_x+i \kappa'_n}{\kappa'_x-i \kappa'_n}=
\frac
{\sqrt{{\kappa'_x}^2+{\kappa'_n}^2}\,e^{i \angle (\kappa'_x+i \kappa'_n)}}
{\sqrt{{\kappa'_x}^2+{\kappa'_n}^2}\,e^{-i \angle (\kappa'_x+i \kappa'_n)}}=\nonumber\\
&& e^{i 2 \angle (\kappa'_x+i \kappa'_n)}=
e^{i 2 \angle (\kappa'_x+i \kappa'_n)} e^{i 2 \pi m}
\Rightarrow\nonumber\\
&& \kappa'_n \tilde W=-\angle (\kappa'_x+i \kappa'_n)-\pi m
\Rightarrow\
\tan (\kappa'_n \tilde W)=-\frac{\kappa'_n}{\kappa'_x}\Rightarrow
\kappa'_x=-\frac{\kappa'_n}{\tan (\kappa'_n \tilde W)}\nonumber
\end{eqnarray}
\vskip-3pt\noindent
(with $m$ integer); $\kappa'_n=0$ corresponds to identically null $\Phi$'s
and thus does not have to be considered. We have that
\begin{eqnarray}
\label{esin1}
&& \left(\frac{E}{\gamma}\right)^2={\kappa'_x}^2+{\kappa'_n}^2=
\left(-\frac{\kappa'_n}{\tan (\kappa'_n \tilde W)}\right)^2+{\kappa'_n}^2=
\left(\frac{\cos^2 (\kappa'_n \tilde W)}{\sin^2 (\kappa'_n \tilde W)}+1\right)
{\kappa'_n}^2=\\
&& \frac{\cos^2 (\kappa'_n \tilde W)+\sin^2 (\kappa'_n \tilde W)}
{\sin^2 (\kappa'_n \tilde W)} {\kappa'_n}^2=
\frac{{\kappa'_n}^2}{\sin^2 (\kappa'_n \tilde W)}\Rightarrow
\left|\frac{E}{\gamma}\right|=
\left|\frac{\kappa'_n}{\sin(\kappa'_n \tilde W)}\right|;\nonumber
\end{eqnarray}
\vskip-3pt\noindent
since (for the properties of the sine function)
$|\sin (\kappa'_n \tilde W)|<|\kappa'_n \tilde W|=|\kappa'_n|\tilde W$,
we see that now
\vskip-3pt\noindent
\begin{equation*}
\left|\frac{E}{\gamma}\right|>\frac{|\kappa'_n|}{|\kappa'_n| \tilde W}=
\frac{1}{\tilde W}.
\end{equation*}
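The analogous relations for the imaginary solutions $z'=i\kappa'_n$ can be checked the same way; with a sample $\kappa'_n$ (ours), the transcendental equation holds and $|E/\gamma|$ now exceeds $1/\tilde W$:

```python
import cmath
import math

W = 2.0                          # sample ribbon width W~ (ours)
kn = 1.0                         # sample kappa'_n != 0 (ours)
kx = -kn / math.tan(kn * W)      # kappa'_x from the relation above
z = 1j * kn                      # imaginary solution z' = i kappa'_n

# e^{-2 z' W~} = (kappa'_x + z')/(kappa'_x - z') holds
eq_holds = abs(cmath.exp(-2 * z * W) - (kx + z) / (kx - z)) < 1e-12

E_over_gamma = abs(kn / math.sin(kn * W))  # |E/gamma| = |kappa'_n/sin(kappa'_n W~)|
bound_holds = E_over_gamma > 1 / W
```

These numbers also satisfy the dispersion relation $(E/\gamma)^2={\kappa'_x}^2+{\kappa'_n}^2$.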
In this case we can write (exploiting what we have found from the boundary
conditions) that
\begin{eqnarray}
&& \Phi_A^{\vec K'}(y)=
\frac{\gamma}{E} \left((\kappa'_x+i\kappa'_n) C e^{i\kappa'_n y}+
(\kappa'_x-i\kappa'_n) D e^{-i\kappa'_n y}\right)=\\
&& \frac{\gamma}{E} \left((\kappa'_x+i\kappa'_n) C e^{i\kappa'_n y}-
(\kappa'_x-i\kappa'_n) C e^{-i\kappa'_n y}\right)=\nonumber\\
&& \frac{\gamma}{E} C \left(\kappa'_x (e^{i\kappa'_n y}-e^{-i\kappa'_n y})+
i\kappa'_n (e^{i\kappa'_n y}+e^{-i\kappa'_n y})\right)=\nonumber\\
&& \frac{\gamma}{E} 2 i C
\left(\kappa'_x \sin(\kappa'_n y)+\kappa'_n \cos(\kappa'_n y)\right)=\nonumber\\
&& 2 i C \frac{\gamma}{E}
\left(-\frac{\kappa'_n}{\tan(\kappa'_n \tilde W)} \sin(\kappa'_n y)+
\kappa'_n \cos(\kappa'_n y)\right)=\nonumber\\
&& 2 i C \frac{\gamma}{E} \kappa'_n \,
\frac{-\cos(\kappa'_n \tilde W)\sin(\kappa'_n y)+
\sin(\kappa'_n \tilde W)\cos(\kappa'_n y)}
{\sin(\kappa'_n \tilde W)}=\nonumber\\
&& 2 i C \left(\frac{\gamma}{E} \frac{\kappa'_n}{\sin(\kappa'_n \tilde W)}\right)
\sin(\kappa'_n (\tilde W-y))=\nonumber\\
&& 2 i C \,{\rm sign}\!\left(\frac{E}{\gamma}
\frac{\kappa'_n}{\sin(\kappa'_n \tilde W)}\right)
\sin(\kappa'_n (\tilde W-y)),\nonumber
\end{eqnarray}
where in the last step we have taken advantage of the fact that, due to
eq.~(330), the product between $\gamma/E$ and
$\kappa'_n/\sin(\kappa'_n \tilde W)$
can only be equal to $+1$ (if the two quantities have the same sign) or
$-1$ (if they have opposite signs).
Moreover we have that
\begin{eqnarray}
&& \Phi_B^{\vec K'}(y)=C e^{i\kappa'_n y}+D e^{-i\kappa'_n y}=
C e^{i\kappa'_n y}-C e^{-i\kappa'_n y}=\nonumber\\
&& C (e^{i\kappa'_n y}-e^{-i\kappa'_n y})=
2 i C \sin(\kappa'_n y).\nonumber
\end{eqnarray}
These are confined states extending all over the ribbon.
Obviously, once the expressions of the functions $\Phi$ have been obtained,
the overall wave function is given by the equations (296),
(297) and (299).
\newpage
\setcounter{equation}{378}
\section{Extended version of the calculations at pp.~576--577}
\noindent
{\bf Case II-C}\hfill\break
Finally, eqs.~(360) are satisfied also if
\begin{equation}
\left\{ \begin{array}{l}
B=0,\\
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation}
If we separately equate to zero the real and imaginary part of the second
equation, we find
\begin{equation*}
\left\{ \begin{array}{l}
B=0,\\
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)=0,\\
\cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation*}
Since the hyperbolic cosine is never equal to zero, these become
\begin{equation*}
\left\{ \begin{array}{l}
B=0,\\
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)=0,\\
\sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation*}
However, when the sine of an angle is equal to zero, the cosine of that angle
is certainly different from zero; therefore the previous equations become
\begin{equation*}
\left\{ \begin{array}{l}
B=0,\\
\sinh(\kappa_{ni} \tilde W)=0,\\
\sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation*}
Since the hyperbolic sine is null only when its argument is null, we
conclude that in this case:
\begin{equation}
\left\{ \begin{array}{l}
B=0,\\
\kappa_{ni}=0,\\
\sin((\kappa_{nr}-K)\tilde W)=0,
\end{array} \right.
\Rightarrow
\left\{ \begin{array}{l}
B=0,\\
\kappa_{n}\hbox{ real},\\
\sin((\kappa_n-K)\tilde W)=0.
\end{array} \right.
\end{equation}
Since $B=0$, we also have $C=-iB=0$ (while $D=-iA$).
The condition on $\sin((\kappa_n-K)\tilde W)$, instead, implies
\begin{eqnarray}
&& \sin((\kappa_n-K)\tilde W)=0 \Rightarrow
(\kappa_n-K)\tilde W=n\pi \Rightarrow\\
&& \kappa_n-K=n\frac{\pi}{\tilde W} \Rightarrow
\kappa_n=n\frac{\pi}{\tilde W}+K.\nonumber
\end{eqnarray}
In this case the $\Phi$ functions (346) are equal to
\begin{equation}
\left\{ \begin{array}{r@{\ }l}
\displaystyle
\Phi_A^{\vec K}(y)
& \displaystyle
=\frac{\gamma}{E}\left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}+
(\kappa_x+i\kappa_n) B e^{-i\kappa_n y}\right)=\\[5pt]
& \displaystyle
=\frac{\gamma}{E}(\kappa_x-i\kappa_n) A e^{i\kappa_n y}=
\frac{\gamma}{E}(\kappa_x+i\tilde\kappa_n) A e^{-i\tilde\kappa_n y},\\[5pt]
\displaystyle
\Phi_B^{\vec K}(y)
& \displaystyle
=A e^{i\kappa_n y}+B e^{-i\kappa_n y}=
A e^{i\kappa_n y}=A e^{-i\tilde\kappa_n y},\\[5pt]
\displaystyle
\Phi_A^{\vec K'}(y)
& \displaystyle
=\frac{\gamma}{E}\left((\kappa_x+i\kappa_n) C e^{i\kappa_n y}+
(\kappa_x-i\kappa_n) D e^{-i\kappa_n y}\right)=\\[5pt]
& \displaystyle
=-\frac{\gamma}{E}(\kappa_x-i\kappa_n) iA e^{-i\kappa_n y}=
-\frac{\gamma}{E}(\kappa_x+i\tilde\kappa_n) iA e^{i\tilde\kappa_n y},\\[5pt]
\displaystyle
\Phi_B^{\vec K'}(y)
& \displaystyle
=C e^{i\kappa_n y}+D e^{-i\kappa_n y}=
-iA e^{-i\kappa_n y}=-iA e^{i\tilde\kappa_n y},
\end{array} \right.
\end{equation}
with
\begin{equation}
\tilde\kappa_n=-\kappa_n=-\left(n\frac{\pi}{\tilde W}+K\right)=
-n\frac{\pi}{\tilde W}-K=\tilde n\frac{\pi}{\tilde W}-K
\end{equation}
(where $\tilde n=-n$ is an integer).
Clearly, if $\kappa_n$ satisfies
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$, also
$\tilde\kappa_n=-\kappa_n$ satisfies
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\tilde\kappa_n}^2}$.
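The quantization condition and the $\kappa_n \to -\kappa_n$ symmetry above can be checked numerically; the following minimal sketch uses assumed (dimensionless) values for $\gamma$, $\kappa_x$, $K$ and $\tilde W$, which are illustrative only.

```python
import numpy as np

# Check that kappa_n = n*pi/W + K satisfies sin((kappa_n - K)*W) = 0,
# and that the dispersion E = gamma*sqrt(kx^2 + kappa_n^2) is unchanged
# under kappa_n -> -kappa_n (all parameter values assumed).
gamma, kx, K, W = 1.0, 0.3, 0.7, 5.0

for n in range(-3, 4):
    kappa_n = n * np.pi / W + K
    assert abs(np.sin((kappa_n - K) * W)) < 1e-12
    E = gamma * np.hypot(kx, kappa_n)
    E_tilde = gamma * np.hypot(kx, -kappa_n)   # same energy for -kappa_n
    assert abs(E - E_tilde) < 1e-12
print("quantization and kappa_n -> -kappa_n symmetry verified")
```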
\end{document}
\endinput
\section{Introduction}
To understand the physical properties of semiconductors it is necessary to
know their electronic band structure, {\em i.e.} the behavior of energy
as a function of the wave vector $\vec k$ in the reciprocal lattice of
the crystal. Several numerical methods can be successfully applied to find the
band structure and the corresponding wave functions, such as the tight
binding, the pseudopotential, the orthogonalized plane wave, the augmented
plane wave, the Green's function and the cellular methods
\cite{callaway1,grosso,martin}. These methodologies can yield the desired
results throughout the {$\vec k$}-space.
Many phenomena, for example in the study of electrical transport
(due to both electrons and holes) and of optical properties (such as
absorption or gain due to electronic transitions caused by an incident
optical wave), involve only the top of the valence band and the bottom
of the conduction band. Indeed, low-energy electrons and holes are situated
in these regions and also electronic transitions occur near the band edges
of direct band gap semiconductors. Therefore a technique to obtain the band
structure in such regions is of great interest.
The $\vec k \cdot \vec p$ method \cite{voon,wenckebach,chuang,tsidilkovski,
yu,li,kane1,kane2,kane3,pidgeon,bir,ivchenko,anselm,callaway2,singh,long,
kroemer,bassani,enderlein,haug,harrison,winkler,datta1,marconcini1}
makes it possible to extrapolate the band structure of materials from the
knowledge of a restricted set of parameters (a limited number of energy gaps
and of momentum matrix elements between band lattice functions), evaluated
at a single point $\vec k_0$ of the reciprocal space,
which are generally treated as fitting parameters that can be obtained
from experiments or {\em ab initio} calculations.
Even though, considering quite a large number of bands and thus of
parameters, the $\vec k \cdot \vec p$ method can be used to obtain the
band structure all over the Brillouin zone of the material
\cite{cardona,cavasillas,radhia,richard,michelini},
its primary use is to explore in great detail the dispersion relations
around the considered point $\vec k_0$. In particular, it makes it possible
to obtain the band structure of materials in the regions of the reciprocal
space near the band extrema, expanding the eigenvalues and eigenvectors
of the single-electron Hamiltonian as a function of $\vec k$ around the
wave vector $\vec k_0$ corresponding to the band maximum or minimum.
The method has also proved very useful for studying structures containing
a large number of atoms, for which atomistic approaches would be
computationally too expensive.
This method, first introduced by J.~Bardeen \cite{bardeen} and F.~Seitz
\cite{seitz}, was developed and adopted by W.~Shockley \cite{shockley}
and G.~Dresselhaus, A.~F.~Kip and C.~Kittel \cite{dresselhaus} in well-known
papers on the energy band structures of semiconductors. It received a general
formulation with E.~O.~Kane \cite{kane1,kane2,kane3,kane4,kane5,kane6}
and with J.~M.~Luttinger and W.~Kohn \cite{luttinger1,luttinger2}.
It was later applied to strained materials (by G.~E.~Pikus and G.~L.~Bir
\cite{bir}) and to heterostructures (for example by G.~Bastard
\cite{bastard1,bastard2,bastard3}, M.~Altarelli
\cite{altarelli1,altarelli2,altarelli3} and M.~G. Burt
\cite{burt1,burt2,burt3,burt4,burt5,foreman}), proving to be a very useful
and straightforward way to study the local properties of materials.
In the last few years, a significant theoretical and experimental effort
has been devoted to the study of graphene and graphene-related materials,
which appear as promising candidates for many technological applications
and are characterized by very unusual and interesting physical properties.
In particular, the application of the $\vec k \cdot \vec p$ method to the
study of the electronic properties of graphene, systematically pursued
by T.~Ando \cite{ajiki,ando1,ando2} and other authors, results in a
description of the graphene properties in terms of the Dirac equation,
the same relation that describes the relativistic behavior of elementary
spin-(1/2) particles. This underlies the experiments aiming to
observe in graphene, at non-relativistic speeds, analogues of purely
relativistic quantum phenomena
\cite{katsnelson1,katsnelson2,katsnelson3,geim,beenakker}.
The application of proper boundary conditions to the relations found for
a sheet of graphene makes it possible to obtain the electronic properties of
carbon nanotubes and graphene nanoribbons, materials which (contrary
to unconfined graphene)
can exhibit (depending on their geometrical details) a non-zero energy gap.
The first part of this review is a short introduction to the
$\vec k \cdot \vec p$ method in some of its most common formulations.
In particular, sect.~{\bf 2} describes the application of the
$\vec k \cdot \vec p$ method to homogeneous crystals, where, due to
the periodicity of the material, the electron wave functions are Bloch
functions and thus the unperturbed Bloch lattice functions are adopted
as a basis for the method. We first describe (following W. T. Wenckebach
\cite{wenckebach}) how the $\vec k \cdot \vec p$ approach can be derived
by just applying the standard perturbation theory to the Schr\"odinger-Bloch
equation and how this formulation can be adopted to study the
dispersion relations of semiconductors with the diamond or zincblende
structure. Then we briefly summarize the alternative formulation by Kane,
consisting in the exact diagonalization of the Schr\"odinger-Bloch
Hamiltonian for a subset of bands, and in the inclusion of the effect
of the other energy bands with the L\"owdin perturbation theory.
In sect.~{\bf 3}, instead, we describe how the $\vec k \cdot \vec p$
method can be applied to the case of non-periodic systems, where the
phase factor (involving the wave vector measured from the considered
extremum point) of the Bloch lattice functions
has to be replaced by properly defined envelope functions. Following
J.~M.~Luttinger and W.~Kohn, we derive the single-band and multi-band
envelope function equations, and then we briefly outline the main
approaches followed in the application of the envelope function theory to
the study of heterostructures.
The second part of the review is devoted to the application of the
$\vec k \cdot \vec p$ method, and in particular of the envelope function
approach, to graphene, carbon nanotubes and graphene nanoribbons.
In sect.~{\bf 4}, following T. Ando, we perform a first-order expansion
of a simple tight-binding description of graphene, obtaining the Dirac
equation for the envelope functions (corresponding to the
two degeneracy points of graphene) in the presence of a generic external
potential, and we analytically solve this equation for the case of null
potential. Starting from this formulation, we also derive
general expressions for the probability density and for the probability
current density in graphene, and we compare them with those
used, adopting a valley-isotropic representation, by C.~W.~J.~Beenakker
{\sl et al.} \cite{akhmerov1,beenakker}.
In sect.~{\bf 5} we extend the previous treatment to the study
of carbon nanotubes, enforcing a periodic boundary condition along the chiral
vector, which uniquely characterizes these tubules. In particular, we show
how this periodic condition on the overall wave function translates into a
condition on the envelope functions, and we analytically solve the
Dirac problem in the
absence of an external potential, obtaining the conditions under which
nanotubes have a semiconducting or a metallic behavior.
Finally, in sect.~{\bf 6} we discuss the case of graphene nanoribbons.
Adapting the approach adopted by L.~Brey and H.~A.~Fertig \cite{brey1,brey2}
to the mathematical formulation of graphene proposed by T. Ando, we study
both zigzag and armchair nanoribbons, obtaining an analytical solution in
the absence of an external potential, and recovering the fundamental
properties of these structures.
\section{The $\vec k \cdot \vec p$ method in homogeneous crystals:
derivation based on the standard perturbation theory and Kane's model}
We begin our overview of the $\vec k \cdot \vec p$ method describing its
formulation in the case of homogeneous crystals.
In a pure crystal an electron is subject to a periodic potential energy
\begin{equation}
U_L(\vec r)=U_L(\vec r+\vec R),
\end{equation}
with $\vec R$ any linear combination of the lattice vectors, and thus also
the Hamiltonian is invariant under translation by the lattice vectors.
Therefore, if $\psi^n_{\vec k} (\vec r)$ is the wave function of an electron
moving in the crystal, also $\psi^n_{\vec k} (\vec r+\vec R)$ will be
a solution of the Schr\"odinger equation and therefore will coincide with
$\psi^n_{\vec k} (\vec r)$, apart from a constant with unit modulus
(otherwise the wave function could grow to infinity, if we repeated the
translation $\vec R$ indefinitely). Thus the
general form of the electron wave functions will be
\begin{equation}
\psi^n_{\vec k} (\vec r)=e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r),
\end{equation}
where $\psi^n_{\vec k} (\vec r)$ is usually called ``Bloch function'',
while $u^n_{\vec k} (\vec r)$ is called ``Bloch lattice function'' and is
periodic with the lattice periodicity
\begin{equation}
u^n_{\vec k} (\vec r+\vec R)=u^n_{\vec k} (\vec r)
\end{equation}
(Bloch's theorem) \cite{bloch}.
Starting from the Schr\"odinger equation (in the absence of a magnetic field)
for $\psi^n_{\vec k} (\vec r)$
\begin{equation}
H^{(0)}\psi^n_{\vec k} (\vec r)=
E^n_{\vec k}\psi^n_{\vec k} (\vec r),
\end{equation}
with (in the absence of a magnetic field)
\begin{equation}
H^{(0)}=-\frac{\hbar^2}{2\,m_e}\nabla^2+U_L(\vec r)
\end{equation}
(where $m_e$ is the electron mass and $\hbar$ is the reduced Planck
constant) and substituting $\psi^n_{\vec k} ( \vec r)$ with the generic
expression of the Bloch function, we obtain
\begin{eqnarray}
&&\left(-\frac{\hbar^2}{2\,m_e}\nabla^2+U_L(\vec r)\right)
e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)=\\
&&\quad -\frac{\hbar^2}{2\,m_e}\vec\nabla\cdot
\left(e^{i\,\vec k \cdot \vec r}(\vec\nabla u^n_{\vec k} (\vec r))+
(\vec\nabla e^{i\,\vec k \cdot \vec r})u^n_{\vec k} (\vec r)\right)+
U_L(\vec r)e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)=\nonumber\\
&&\quad -\frac{\hbar^2}{2\,m_e}\vec\nabla\cdot\left(e^{i\,\vec k \cdot \vec r}
(\vec\nabla u^n_{\vec k} (\vec r)+i\,\vec k u^n_{\vec k} (\vec r))\right)+
U_L(\vec r)e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)=\nonumber\\
&&\quad-\frac{\hbar^2}{2\,m_e}\Big(e^{i\,\vec k \cdot \vec r}\vec\nabla\cdot
(\vec\nabla u^n_{\vec k} (\vec r)+i\,\vec k u^n_{\vec k} (\vec r))\nonumber\\
&&\qquad {}+(\vec\nabla e^{i\,\vec k \cdot \vec r})\cdot
(\vec\nabla u^n_{\vec k} (\vec r)+i\,\vec k u^n_{\vec k} (\vec r))\Big)+
U_L(\vec r)e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)=\nonumber\\
&&\quad -\frac{\hbar^2}{2\,m_e}\Big(e^{i\,\vec k \cdot \vec r}
(\nabla^2 u^n_{\vec k} (\vec r)+i\vec k \cdot \vec\nabla u^n_{\vec k}
(\vec r))\nonumber\\
&&\qquad {}+(i\,\vec k\, e^{i\,\vec k \cdot \vec r})\cdot
(\vec\nabla u^n_{\vec k} (\vec r)+i\,\vec k u^n_{\vec k} (\vec r))\Big)+
U_L(\vec r)e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)=\nonumber\\
&&\quad -\frac{\hbar^2}{2\,m_e}e^{i\,\vec k \cdot \vec r}
\!\left(\nabla^2 u^n_{\vec k}
(\vec r)+i\vec k \cdot \vec\nabla u^n_{\vec k} (\vec r)
+i\vec k \cdot\vec\nabla u^n_{\vec k} (\vec r)-k^2u^n_{\vec k} (\vec r)\right)\nonumber\\
&&\qquad {}+U_L(\vec r)e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)\!=\nonumber\\
&&\quad e^{i\,\vec k \cdot \vec r}\left(\left(-\frac{\hbar^2}{2\,m_e}
\nabla^2 +U_L(\vec r)\right)
-\frac{i\,\hbar^2}{m_e}\vec k \cdot\vec\nabla+
\frac{\hbar^2 k^2}{2\,m_e}\right)u^n_{\vec k}(\vec r)=\nonumber\\
&&\quad e^{i\,\vec k \cdot \vec r}(H^{(0)}+H^{(1)}) u^n_{\vec k}(\vec r)=
e^{i\,\vec k \cdot \vec r}E^n_{\vec k}u^n_{\vec k}(\vec r)\nonumber
\end{eqnarray}
and thus
\begin{equation}
H u^n_{\vec k}(\vec r)=
(H^{(0)}+H^{(1)})u^n_{\vec k}(\vec r)=
E^n_{\vec k}u^n_{\vec k}(\vec r),
\end{equation}
with
\begin{equation}
H^{(1)}=-\frac{i\,\hbar^2}{m_e}\vec k \cdot\vec\nabla+
\frac{\hbar^2 k^2}{2\,m_e}
\end{equation}
(where $k=|\vec k|$). What we have just obtained is clearly an equation for
the Bloch lattice functions (the Schr\"odinger-Bloch equation), which needs
to be solved only for a single primitive cell with the boundary condition
that the function $u^n_{\vec k}(\vec r)$ must be periodic with the lattice
periodicity. For each value of $\vec k$ this equation has a periodic solution
only for selected values $E^n_{\vec k}$ of the energy $E$. Noting
that $H^{(1)}(\vec r)$ reduces to zero when $\vec k$ approaches $0$
and thus that this part of the Hamiltonian can be treated as a perturbation
around $\vec k=0$, we can locally solve this equation using
time-independent perturbation theory, assuming that we know the
eigenfunctions and eigenvalues of $H^{(0)}(\vec r)$, {\em i.e.} the Bloch
lattice functions and the energy band values for $\vec k=0$.
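A minimal numerical illustration of the Schr\"odinger-Bloch equation: in one dimension, with dimensionless units ($\hbar=m_e=1$, lattice constant $1$) and an assumed potential $U_L(x)=2U_0\cos(2\pi x)$, the periodic solutions $u_k(x)$ can be expanded in plane waves; the resulting matrix eigenvalue problem yields the bands $E^n_k$, and near $k=0$ the lowest band is parabolic, as the perturbative treatment predicts. All parameter values below are assumed.

```python
import numpy as np

# 1D "central equation" in a plane-wave basis (toy model):
# u_k(x) = sum_G c_G exp(iGx), G = 2*pi*n, with assumed potential
# U_L(x) = 2*U0*cos(2*pi*x), i.e. Fourier components U(+-2*pi) = U0.
def bloch_bands(k, U0=0.2, nG=15):
    G = 2 * np.pi * np.arange(-nG, nG + 1)
    H = np.diag(0.5 * (k + G) ** 2)              # kinetic term (hbar = m_e = 1)
    H = H + U0 * (np.eye(len(G), k=1) + np.eye(len(G), k=-1))  # couples G, G +- 2*pi
    return np.linalg.eigvalsh(H)                 # energies of the periodic solutions

E0 = bloch_bands(0.0)      # band edges at the extremum k = 0
Ek = bloch_bands(0.05)     # slightly away from it
# Near k = 0 the lowest band is parabolic, E(k) ~ E(0) + k^2/(2 m*):
m_eff = 0.05 ** 2 / (2.0 * (Ek[0] - E0[0]))
print(E0[:3], m_eff)
```

For this weak assumed potential the extracted effective mass stays close to the free-electron value, since the remote bands lie far above the lowest one.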
For most semiconductors the maximum of the valence band is at the
\hbox{$\Gamma$-point} (the center of the first Brillouin zone represented with
the Wigner-Seitz method) and therefore corresponds to $\vec k=0$;
the minimum of the conduction band, instead, is at $\vec k=0$ only for
direct-gap semiconductors. When the extremum point of the energy band
(and thus the interesting region) is for a generic $\vec k_0$, we can easily
extend this argument observing that, if we define the value of $H$ in
$\vec k_0$ as
\begin{equation}
H_{\vec k_0}=H^{(0)}-\frac{i\,\hbar^2}{m_e}\vec k_0
\cdot\vec\nabla+\frac{\hbar^2 {k_0}^2}{2\,m_e},
\end{equation}
we have that the value of $H$ in $\vec k$ is
\begin{eqnarray}
& H= & H^{(0)}+H^{(1)}=
H_{\vec k_0}+\left[-\frac{i\,\hbar^2}{m_e}(\vec k-\vec k_0)
\cdot\vec\nabla+\frac{\hbar^2 (k^2-{k_0}^2)}{2\,m_e}\right]=\\
& H_{\vec k_0} & +\Big[-\frac{i\,\hbar^2}{m_e}(\vec k-\vec k_0)
\!\cdot\!\vec\nabla+\!\frac{\hbar^2 (k^2-{k_0}^2)}{2\,m_e}\nonumber\\
&&+\frac{\hbar}{m_e}(\vec k-\vec k_0)\!\cdot\! \hbar \vec k_0-
\!\frac{\hbar}{m_e}(\vec k-\vec k_0)\!\cdot\! \hbar \vec k_0 \Big]\!=\nonumber\\
& H_{\vec k_0} & +\left[\frac{\hbar}{m_e}(\vec k-\vec k_0)
\cdot(\hbar \vec k_0-i\,\hbar\vec\nabla)+\frac{\hbar^2}{2\,m_e}
(k^2-{k_0}^2-2\vec k\cdot\vec k_0+2{k_0}^2)\right]=\nonumber\\
& H_{\vec k_0} & +\left[\frac{\hbar}{m_e}(\vec k-\vec k_0)
\cdot(\hbar \vec k_0-i\,\hbar\vec\nabla)+\frac{\hbar^2}{2\,m_e}
(k^2-2\vec k\cdot\vec k_0+{k_0}^2)\right]=\nonumber\\
& H_{\vec k_0} & +\left[\frac{\hbar}{m_e}(\vec k-\vec k_0)
\cdot(\hbar \vec k_0-i\,\hbar\vec\nabla)+\frac{\hbar^2}{2\,m_e}
(\vec k-\vec k_0)\cdot(\vec k-\vec k_0)\right]=\nonumber\\
& H_{\vec k_0} & +\left[\frac{\hbar}{m_e}(\vec k-\vec k_0)
\cdot(\hbar \vec k_0-i\,\hbar\vec\nabla)+\frac{\hbar^2}{2\,m_e}
|\vec k-\vec k_0|^2 \right]\nonumber
\end{eqnarray}
and for $\vec k$ near $\vec k_0$ the term between square brackets can be
treated as a perturbation of $H_{\vec k_0}$ \cite{kane1}.
For the sake of simplicity, in the following we will consider
$\vec k_0=0$.
An important point to notice is that, for any selected $\vec k$, the
functions $u^n_{\vec k}(\vec r)$ form an orthogonal and complete set (in
the restricted sense that any function with the lattice periodicity can be
expanded in terms of the Bloch lattice functions corresponding to the
selected $\vec k$).
To describe the main results of time-independent perturbation theory
\cite{perturbation,baym}, we have to distinguish the case in which the
unperturbed energy levels are non-degenerate from the case in which they
are degenerate (in the following we will use the notation of
W.~T.~Wenckebach \cite{perturbation}).
Let us begin with the first case. The problem we have to solve is
\begin{equation}
[H^{(0)}+H^{(1)}]|n \rangle =E_n|n \rangle,
\end{equation}
where $H^{(0)}$ is the unperturbed Hamiltonian and $H^{(1)}$ the perturbation.
If we expand the eigenvalues $E_n$ and the eigenfunctions $|n \rangle $:
\begin{eqnarray}
E_n &=& E_n^{(0)}+E_n^{(1)}+E_n^{(2)}+\ldots,\\
|n \rangle &=& |n \rangle ^{(0)}+|n \rangle ^{(1)}+|n \rangle ^{(2)}+\ldots,\nonumber
\end{eqnarray}
we insert these expressions into the eigenvalue equation, and we enforce the
identity between terms of the same order, we find
\begin{eqnarray}
H^{(0)}|n \rangle ^{(0)} &=& E_n^{(0)}|n \rangle ^{(0)},\\
H^{(0)}|n \rangle ^{(1)}+H^{(1)}|n \rangle ^{(0)} &=&
E_n^{(0)}|n \rangle ^{(1)}+
E_n^{(1)}|n \rangle ^{(0)},\nonumber\\
H^{(0)}|n \rangle ^{(2)}+H^{(1)}|n \rangle ^{(1)} &=&
E_n^{(0)}|n \rangle ^{(2)}+
E_n^{(1)}|n \rangle ^{(1)}+E_n^{(2)}|n \rangle ^{(0)},\nonumber\\
&\ldots\ .\nonumber
\end{eqnarray}
The first equation corresponds to the unperturbed eigenvalue equation,
the solutions of which, $E_n^{(0)} \equiv E_0^n$ and
$|n \rangle ^{(0)} \equiv |n0 \rangle $, are assumed to be known.
From the other equations, instead, we can obtain the corrections to these
values produced by the perturbation $H^{(1)}$. In particular, if we stop at
the first-order corrections for the eigenfunctions and at the second-order
corrections for the eigenvalues, we find
\begin{equation}
|n \rangle \simeq |n0 \rangle +|n \rangle ^{(1)}=|n0 \rangle +\sum_{m\ne n}
\left(|m0 \rangle \frac{\langle m0|H^{(1)}|n0 \rangle}{E_0^n-E_0^m}\right)
\end{equation}
(choosing $ \langle n0|n \rangle ^{(1)}=0$) and
\begin{eqnarray}
E_n\simeq E_0^n+E_n^{(1)}+E_n^{(2)}
&=& E_0^n+ \langle n0|H^{(1)}|n0 \rangle\\
&&{}+\sum_{m\ne n}
\left(\frac{\langle n0|H^{(1)}|m0 \rangle \langle m0|H^{(1)}|n0 \rangle}
{E_0^n-E_0^m} \right).\nonumber
\end{eqnarray}
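These first- and second-order formulas can be verified on a small toy problem (assumed random Hermitian matrices, not a physical Hamiltonian), comparing the perturbative estimate with exact diagonalization:

```python
import numpy as np

# Toy check of non-degenerate perturbation theory (assumed matrices).
rng = np.random.default_rng(0)
dim, eps = 5, 1e-3
E0 = np.arange(dim, dtype=float)             # non-degenerate unperturbed levels
H0 = np.diag(E0)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H1 = eps * (A + A.conj().T)                  # small Hermitian perturbation

n = 2                                        # level under study; |n0> = basis vector
E1 = H1[n, n].real                           # first order: <n0|H1|n0>
E2 = sum(abs(H1[m, n]) ** 2 / (E0[n] - E0[m])
         for m in range(dim) if m != n)      # second order
approx = E0[n] + E1 + E2
exact = np.linalg.eigvalsh(H0 + H1)[n]       # ordering preserved for small eps
print(exact - approx)                        # agreement up to O(eps^3)
```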
When we examine degenerate unperturbed states, the expressions we have just
found diverge and thus we have to modify our treatment. In particular, if the
degenerate energy level $E_0^n$ corresponds to a multiplet of degenerate
states $|na0 \rangle $ (with $a=1,2,\ldots,g_n$, where $g_n$ is the degeneracy)
and we have to solve the perturbed problem
\begin{equation}
H|\psi \rangle =[H^{(0)}+H^{(1)}]|\psi \rangle =E|\psi \rangle,
\end{equation}
we can express the new generic eigenfunction $|\psi \rangle $ as
\begin{equation}
|\psi \rangle =\sum_{a=1}^{g_n}|na \rangle \langle na|\psi \rangle,
\end{equation}
where the $|na \rangle $'s are states which are related to the
unperturbed eigenvectors $|na0 \rangle $'s by the perturbation matrix
elements between different multiplets (as we will see in
eq.~(\ref{multiplet})).
If we define
\begin{equation}
H_{ab}^n= \langle na|H|nb \rangle =
\langle na|[H^{(0)}+H^{(1)}]|nb \rangle,
\end{equation}
we can express our perturbed equation in the following way:
\begin{equation}
\sum_{b=1}^{g_n}H_{ab}^n \langle nb|\psi \rangle =
E \langle na|\psi \rangle.
\end{equation}
Noting that the definition of the $H_{ab}^n$s can be equivalently expressed
in this way
\begin{equation}
[H^{(0)}+H^{(1)}]|nb \rangle =\sum_{a=1}^{g_n}|na \rangle H_{ab}^n,
\end{equation}
inserting into this equation the expansions
\begin{eqnarray}
H_{ab}^n &=& (H_{ab}^n)^{(0)}+(H_{ab}^n)^{(1)}+(H_{ab}^n)^{(2)}+\ldots,\\
|na \rangle &=& |na \rangle ^{(0)}+|na \rangle ^{(1)}+|na \rangle ^{(2)}+
\ldots,\nonumber
\end{eqnarray}
and enforcing the identity of the terms of the same order, we find
\begin{eqnarray}
H^{(0)}|nb \rangle ^{(0)} &=&
\sum_{a=1}^{g_n}|na \rangle ^{(0)}(H_{ab}^n)^{(0)},\\
H^{(0)}|nb \rangle ^{(1)}+H^{(1)}|nb \rangle ^{(0)} &=&
\sum_{a=1}^{g_n}|na \rangle ^{(1)}(H_{ab}^n)^{(0)}+
\sum_{a=1}^{g_n}|na \rangle ^{(0)}(H_{ab}^n)^{(1)},\nonumber\\
H^{(0)}|nb \rangle ^{(2)}+H^{(1)}|nb \rangle ^{(1)} &=&
\sum_{a=1}^{g_n}|na \rangle ^{(2)}(H_{ab}^n)^{(0)}\nonumber\\
&&+ \sum_{a=1}^{g_n}|na \rangle ^{(1)}(H_{ab}^n)^{(1)}+
\sum_{a=1}^{g_n}|na \rangle ^{(0)}(H_{ab}^n)^{(2)},\nonumber\\
\ldots\ .\nonumber
\end{eqnarray}
The first equation corresponds, noting that
$(H_{ab}^n)^{(0)}=E_0^n \delta_{ab}$, to the unperturbed eigenvalue equation,
the solutions of which, $E_0^n$ and $|na \rangle ^{(0)}=|na0 \rangle $,
are assumed to be known.
From the other equations, instead, we can obtain the corrections to these
values produced by the perturbation. In particular, if we
stop at the first-order corrections for the eigenstates and at the
second-order corrections for the eigenvalues, we find
\begin{equation}
\label{multiplet}
|nb \rangle \simeq |nb0 \rangle +|nb \rangle ^{(1)}=|nb0 \rangle +
\sum_{m\ne n}\sum_{c=1}^{g_m}
\left(|mc0 \rangle \frac{\langle mc0|H^{(1)}|nb0 \rangle}
{E_0^n-E_0^m}\right)
\end{equation}
(choosing $ \langle nc0|nb \rangle ^{(1)}=0$) and
\begin{eqnarray}
H_{cb}^n &\simeq& (H_{cb}^n)^{(0)}+(H_{cb}^n)^{(1)}+(H_{cb}^n)^{(2)}=
E_0^n\delta_{cb}+ \langle nc0|H^{(1)}|nb0 \rangle\\
&&{}+\sum_{m\ne n}\sum_{a=1}^{g_m}
\left(\frac{\langle nc0|H^{(1)}|ma0 \rangle \langle ma0|H^{(1)}|nb0 \rangle}
{E_0^n-E_0^m} \right).\nonumber
\end{eqnarray}
Once the $H_{cb}^n$ have been found, we can obtain the energy levels $E$
by solving the equation
\begin{equation}
\sum_{b=1}^{g_n}H_{ab}^n \langle nb|\psi \rangle =
E \langle na|\psi \rangle,
\end{equation}
or, equivalently, by finding the eigenvalues of the
$g_n\times g_n$ matrix $H^n$ (with elements $H_{ab}^n$),
i.e. by solving
\begin{equation}
\det\,(H^n-E I)=0
\end{equation}
(with $I$ the $g_n\times g_n$ unit matrix).
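As a sketch of the degenerate case, the following toy example (assumed random matrices, not a physical Hamiltonian) builds the effective $g_n\times g_n$ matrix $H^n$ up to second order and compares its eigenvalues with exact diagonalization:

```python
import numpy as np

# Toy check of degenerate perturbation theory: three degenerate states
# (indices 0,1,2) plus two remote levels (all matrices assumed).
rng = np.random.default_rng(1)
eps = 1e-3
E0 = np.array([0.0, 0.0, 0.0, 3.0, 5.0])
H0 = np.diag(E0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H1 = eps * (A + A.conj().T)

deg = [0, 1, 2]                              # the degenerate multiplet, g_n = 3
Heff = np.zeros((3, 3), dtype=complex)
for i, c in enumerate(deg):
    for j, b in enumerate(deg):
        Heff[i, j] = E0[c] * (c == b) + H1[c, b]              # zeroth + first order
        Heff[i, j] += sum(H1[c, m] * H1[m, b] / (E0[c] - E0[m])
                          for m in range(5) if m not in deg)  # second order
approx = np.sort(np.linalg.eigvalsh(Heff))
exact = np.linalg.eigvalsh(H0 + H1)[:3]      # the three lowest exact levels
print(np.max(np.abs(exact - approx)))        # agreement up to O(eps^3)
```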
We note that, by also computing the eigenvectors $ \langle na|\psi \rangle $
of this matrix and combining them with the $|nb \rangle $'s computed
before up to first order, it is also possible to obtain the
eigenfunctions $|\psi \rangle $ of the perturbed problem.
In the case of the $\vec k \cdot \vec p$ Hamiltonian that we have found before
\cite{wenckebach}, we can use the $u^n_0 (\vec r)$ ($u^n_{\vec k} (\vec r)$
for $\vec k=0$) as $|n0 \rangle $ and we have that
\begin{eqnarray}
\langle m0|H^{(1)}|n0 \rangle &=&
\langle m0|\left[-\frac{i\,\hbar^2}{m_e}(\vec k \cdot\vec\nabla)\right]
|n0 \rangle + \langle m0|\frac{\hbar^2 k^2}{2\,m_e}|n0 \rangle =\\
&&\frac{\hbar \vec k}{m_e} \cdot \langle m0|(-i\,\hbar \vec\nabla) |n0 \rangle
+ \langle m0|\frac{\hbar^2 k^2}{2\,m_e}|n0 \rangle .\nonumber
\end{eqnarray}
The second term clearly gives only diagonal matrix elements, because it is
equal to $(\hbar^2 k^2/(2\,m_e)) \delta_{nm}$.
The first term, instead, gives only non-diagonal matrix elements
because it is known \cite{velocity} that
\begin{equation}
\langle n\vec k_0|(-i\hbar \vec\nabla)|n\vec k_0 \rangle +
\hbar \vec k_0=m_e \vec v_n=
\frac{m_e}{\hbar}\vec \nabla_{\vec k} E^n_{\vec k}
\end{equation}
(where $\vec v_n$ is the expectation value of the velocity of the Bloch
waves, and in our considerations we are assuming $\vec k_0=0$) and
$\vec \nabla_{\vec k} E^n_{\vec k}=0$ in the band extrema.
Then, if the unperturbed energy bands are non-degenerate, we can write that
\begin{eqnarray}
E_{\vec k}^n &=& E_0^n+\frac{\hbar^2 k^2}{2\,m_e}+\frac{\hbar^2}{{m_e}^2}
\sum_{m\ne n}\frac{\langle n0|\vec k \cdot (-i\,\hbar\,\vec\nabla)|m0 \rangle
\langle m0|\vec k \cdot (-i\,\hbar\,\vec\nabla)|n0 \rangle}
{E_0^n-E_0^m}=\\
&& E_0^n+\frac{\hbar^2}{2}\sum_{\mu,\nu}\frac{k_{\mu}k_{\nu}}{m_{\mu\nu}^*},\nonumber
\end{eqnarray}
where $\mu,\nu=x,y,z$, while $m_{\mu\nu}^*$ is the effective-mass tensor
defined by
\begin{equation}
\frac{1}{m_{\mu\nu}^*}=\frac{1}{m_e}\delta_{\mu\nu}+
\frac{2}{{m_e}^2}\sum_{m\ne n}\frac{P_{\mu}^{nm}P_{\nu}^{mn}}
{E_0^n-E_0^m}
\end{equation}
and the momentum matrix elements at the band extremum are
\begin{equation}
P_{\mu}^{nm}= \langle n0|(-i\,\hbar\,\nabla_{\mu})|m0 \rangle.
\end{equation}
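The assembly of the effective-mass tensor from momentum matrix elements can be sketched as follows, with toy band energies and matrix elements (assumed values, units $\hbar=m_e=1$):

```python
import numpy as np

# Toy assembly of the effective-mass tensor (assumed values, hbar = m_e = 1).
E0 = np.array([0.0, 2.0, 3.5])                # band-edge energies; band n = 0
n = 0
P = np.zeros((3, 3, 3), dtype=complex)        # P[mu][n][m] = <n0| -i d_mu |m0>
P[0, 0, 1] = P[1, 0, 1] = 0.8                 # assumed couplings to band 1
P[2, 0, 2] = 0.5                              # assumed coupling to band 2
for mu in range(3):
    P[mu] = P[mu] + P[mu].conj().T            # Hermiticity of the momentum operator

inv_m = np.eye(3, dtype=complex)              # delta_{mu,nu}/m_e term
for mu in range(3):
    for nu in range(3):
        inv_m[mu, nu] += 2.0 * sum(P[mu, n, m] * P[nu, m, n] / (E0[n] - E0[m])
                                   for m in range(3) if m != n)

def E_band(k):
    """Quadratic k.p dispersion around the extremum of band n."""
    return E0[n] + 0.5 * k @ inv_m.real @ k

print(inv_m.real.diagonal(), E_band(np.array([0.1, 0.0, 0.0])))
```

Coupling to lower-lying bands would enter with the opposite sign of the denominator, increasing rather than decreasing the curvature.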
\noindent
If the unperturbed energy bands are degenerate, instead, we have
\begin{eqnarray}
(H_{\vec k}^n)_{cb} &=& E_0^n \delta_{cb}+\frac{\hbar^2 k^2}{2\,m_e}\delta_{cb}+
\frac{\hbar}{m_e} \langle nc0|\vec k \cdot (-i\,\hbar\,\vec\nabla)|nb0 \rangle
\\
&&{}+\frac{\hbar^2}{{m_e}^2}\sum_{m\ne n}\sum_{a=1}^{g_m}
\frac{\langle nc0|\vec k \cdot (-i\,\hbar\,\vec\nabla)|ma0 \rangle
\langle ma0|\vec k \cdot (-i\,\hbar\,\vec\nabla)|nb0 \rangle}
{E_0^n-E_0^m}=\nonumber\\
&& E_0^n \delta_{cb}+\frac{\hbar}{m_e}\sum_{\mu}k_{\mu}(P_{\mu})_{cb}^{nn}+
\frac{\hbar^2}{2}\sum_{\mu,\nu}\frac{k_{\mu}k_{\nu}}{m_{\mu\nu}^{cb}},\nonumber
\end{eqnarray}
where $\mu,\nu=x,y,z$, while $m_{\mu\nu}^{cb}$ is the effective-mass tensor
defined by
\begin{equation}
\frac{1}{m_{\mu\nu}^{cb}}=\frac{1}{m_e}\delta_{cb}\delta_{\mu\nu}+
\frac{2}{{m_e}^2}\sum_{m\ne n}\sum_{a=1}^{g_m}
\frac{(P_{\mu})_{ca}^{nm}(P_{\nu})_{ab}^{mn}}
{E_0^n-E_0^m}
\end{equation}
and the momentum matrix elements at the band extremum are
\begin{equation}
(P_{\mu})_{cb}^{nm}= \langle nc0|(-i\,\hbar\,\nabla_{\mu})|mb0 \rangle.
\end{equation}
In most cases all the $(P_{\mu})_{cb}^{nn}$ vanish, and the linear term in
$k_{\mu}$ disappears.
The energy levels are then found by solving
\begin{equation}
\det\,(H_{\vec k}^n-E I)=0.
\end{equation}
Thus, in principle, to perform a calculation of the energy bands we would
have to know the $| n0 \rangle$'s (the Bloch lattice functions at $\vec k=0$).
Since the Hamiltonian $H^{(0)}$ and its eigenfunctions $| n0 \rangle$ have
the periodicity of the lattice, the problem can be solved inside
a single primitive cell, enforcing periodic boundary conditions
at the surface of the cell. Most semiconductors of interest have the diamond
or zincblende crystal structure; for these materials we can choose as the
lattice primitive cell a Wigner-Seitz cell centered on an atomic site
(the one with the strongest potential, in the case of the zincblende
structure, whose atoms are not all identical), with four other atoms at
four vertices of the cell, forming a tetrahedron whose center coincides
with the center of the primitive cell (fig.~\ref{f1}).
We can use a central force model (the same results can be obtained
using group theory), considering the potential inside the primitive cell
as due only to the attraction of the nucleus of the central atom, shielded
by its electrons~\cite{wenckebach}. We find that the Bloch lattice
functions at $\vec k=0$ exhibit symmetry properties analogous to
those of atomic orbitals: we have completely symmetric $s$-type states
$\rho_{\nu s}(r)$, and $p$-type states antisymmetric with respect to a
coordinate and symmetric with respect to the others, {\em i.e.} of the form
$\rho_{\nu x}(r)x$, $\rho_{\nu y}(r)y$, and $\rho_{\nu z}(r)z$
(where $r=\sqrt{x^2+y^2+z^2}$).
Then, treating the electrostatic potential of the cores at the vertices
of the primitive cell as a perturbation, we see that, to first order, this
potential does not change the eigenfunctions but shifts the energy levels
and in particular breaks the degeneracy between each $s$-type state and
the corresponding three $p$-type states (which remain mutually degenerate).
As a result, we find that at $\vec k=0$ the top of the valence band
can be described with three degenerate states: $|vx0 \rangle =\rho_v(r)x$,
$|vy0 \rangle =\rho_v(r)y$ and $|vz0 \rangle =\rho_v(r)z$,
while in most cases the bottom of the conduction band is described
by a non-degenerate symmetric state $|c0 \rangle =\rho_c(r)$ (with the
important exception of silicon, where at $\vec k=0$ also the bottom of
the conduction band is characterized by three states $|cx0 \rangle$,
$|cy0 \rangle$ and $|cz0 \rangle$).
\begin{figure}
\centering
\includegraphics[width=.45\textwidth,angle=0]{zincrev.eps}
\caption{Wigner-Seitz primitive cell for the diamond or zincblende
structure (adapted from \cite{wenckebach}).}
\label{f1}
\end{figure}\noindent
Therefore, if we treat the conduction band as a non-degenerate band, we obtain
\begin{equation}
E_{\vec k}^c=
E_0^c+\frac{\hbar^2}{2}\sum_{\mu,\nu}\frac{k_{\mu}k_{\nu}}{m_{\mu\nu}^*},
\end{equation}
where $\mu,\nu=x,y,z$ and
\begin{equation}
\frac{1}{m_{\mu\nu}^*}=\frac{1}{m_e}\delta_{\mu\nu}+
\frac{2}{{m_e}^2}\sum_{m\ne n}
\frac{\langle c0|(-i\,\hbar\,\nabla_{\mu})|m0 \rangle
\langle m0|(-i\,\hbar\,\nabla_{\nu})|c0 \rangle}
{E_0^c-E_0^m}.
\end{equation}
The largest contribution to the sum comes from the bands $m$ for
which $| E_0^c-E_0^m |$ is smallest, {\em i.e.} from the three valence bands.
If we compute the momentum matrix elements between the valence bands and
the conduction band, we find that, due to the symmetry properties of the Bloch
lattice functions,
\begin{equation}
\langle v\mu 0|(-i\,\hbar\,\nabla_{\nu})|c0 \rangle =
- \langle c0|(-i\,\hbar\,\nabla_{\nu})|v\mu 0 \rangle =
-i\,\hbar\,P\,\delta_{\mu\nu}
\end{equation}
with $\mu,\nu=x,y,z$ and $P= \langle v\mu 0|\nabla_{\mu}|c0 \rangle $ a
non-zero quantity, which multiplied by $\hbar$ has the dimensions of a
momentum.
Consequently, the effective mass in the conduction band that we find is
isotropic and equal to
\begin{equation}
\frac{1}{m_{\mu\nu}^*}=\frac{1}{m_c^*}\delta_{\mu\nu}=
\left(\frac{1}{m_e}+\frac{2\,\hbar^2 P^2}{m_e^2 E_g^0}\right)\delta_{\mu\nu},
\end{equation}
with $E_g^0=E_0^c-E_0^v$.
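As a numerical illustration, with assumed, roughly GaAs-like parameter values for the Kane energy $E_P=2\hbar^2P^2/m_e$ and the gap $E_g^0$ (illustrative, not fitted), the formula gives an effective mass of a few percent of $m_e$:

```python
# Hypothetical, roughly GaAs-like parameters (assumed, not fitted):
E_P = 22.7   # eV, Kane energy E_P = 2*hbar^2*P^2/m_e
E_g = 1.52   # eV, zero-temperature gap E_g^0
# In units of the free-electron mass: 1/m_c* = 1 + E_P/E_g
m_c = 1.0 / (1.0 + E_P / E_g)
print(f"m_c*/m_e = {m_c:.3f}")   # a few percent of m_e, typical of III-V materials
```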
As for the valence band, we must use degenerate perturbation theory
and, with reasoning analogous to that used in the study of the
conduction band, we can consider only the interaction between the three
degenerate valence bands and the conduction band, which is the nearest
energy band. Thus, using the previous results, we have that
\begin{equation}
(H_{\vec k}^v)_{\alpha\beta}=E_0^v\delta_{\alpha\beta}+
\frac{\hbar^2}{2}\sum_{\mu,\nu}
\frac{k_{\mu}k_{\nu}}{m_{\mu\nu}^{\alpha\beta}},
\end{equation}
with
\begin{eqnarray}
\frac{1}{m_{\mu\nu}^{\alpha\beta}}\! &=& \!\frac{1}{m_e}\delta_{\alpha\beta}
\delta_{\mu\nu}+
\frac{2}{{m_e}^2}\!\sum_{m\ne v}\sum_{a=1}^{g_m}\!
\frac{\langle v \alpha 0|(-i\,\hbar\,\nabla_{\mu})|ma0 \rangle
\langle ma0|(-i\,\hbar\,\nabla_{\nu})|v \beta 0 \rangle}
{E_0^v-E_0^m}\!=\\
&& \frac{1}{m_e}\delta_{\alpha\beta}
\delta_{\mu\nu}+\frac{2}{{m_e}^2}
\frac{\langle v \alpha 0|(-i\,\hbar\,\nabla_{\mu})|c0 \rangle
\langle c0|(-i\,\hbar\,\nabla_{\nu})|v \beta 0 \rangle}
{E_0^v-E_0^c}=\nonumber\\
&& \frac{1}{m_e}\delta_{\alpha\beta}\delta_{\mu\nu}
-\frac{2\,\hbar^2 P^2}{{m_e}^2 E_g^0}
\delta_{\alpha\mu}\delta_{\beta\nu}\nonumber
\end{eqnarray}
and thus the valence energy bands near the extremum can be obtained by
finding the eigenvalues of the matrix
\begin{equation}
H_{\vec k}^v=\left(E_0^v+\frac{\hbar^2 k^2}{2\,m_e}\right)I-
\frac{\hbar^4 P^2}{m_e^2 E_g^0}
\left[\begin{array}{ccc}
k_x^2 & k_x k_y & k_x k_z\\
k_y k_x & k_y^2 & k_y k_z\\
k_z k_x & k_z k_y & k_z^2
\end{array}\right].
\end{equation}
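The structure of this matrix is transparent: the term in square brackets is the outer product $\vec k\,\vec k^{\,T}$, whose eigenvalues are $k^2,0,0$, so one valence band is lowered while the other two keep the free-electron curvature. A minimal numerical check (assumed units $\hbar=m_e=1$, $E_0^v=0$, and an assumed value for the coefficient $\hbar^4P^2/(m_e^2E_g^0)$):

```python
import numpy as np

# Assumed units hbar = m_e = 1, E_0^v = 0; c plays the role of the
# coefficient hbar^4 P^2 / (m_e^2 E_g^0) (hypothetical value).
c = 0.4
k = np.array([0.1, 0.2, 0.05])
k2 = k @ k
Hv = (0.5 * k2) * np.eye(3) - c * np.outer(k, k)   # the matrix of the text
E = np.sort(np.linalg.eigvalsh(Hv))
# two degenerate bands at k^2/2 and one lowered to (1/2 - c) k^2
print(E)
```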
\noindent
So far we have not considered the effect of the so-called spin-orbit
interaction, which often has a non-negligible influence on the energy
bands. The physical phenomenon is the following~\cite{jackson,spinorbit}.
An electron has an intrinsic magnetic moment
\begin{equation}
\vec \mu=-\gamma_e \frac{\hbar}{2} \,\vec\sigma =
-g_e \gamma_L \frac{\hbar}{2} \,\vec\sigma=
-g_e \frac{e}{2\,m_e} \frac{\hbar}{2} \,\vec\sigma=
-g_e \mu_B \,\frac{\vec \sigma}{2},
\end{equation}
where $e$ is the modulus of the electron charge,
${\vec \sigma}$ is a vector whose three components are
the Pauli spin matrices:
\begin{equation}
\label{pauli}
\sigma_x=\left(\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right),\quad
\sigma_y=\left(\begin{array}{cc}
0 & -i\\
i & 0
\end{array}\right),\quad
\sigma_z=\left(\begin{array}{cc}
1 & 0\\
0 & -1
\end{array}\right),
\end{equation}
$\gamma_e$ is the intrinsic gyromagnetic ratio of the electron,
$\gamma_L$ is its orbital gyromagnetic ratio,
$g_e=\gamma_e / \gamma_L$ is its intrinsic g-factor
and $\mu_B=e \hbar / (2\,m_e)$ is the Bohr magneton. When an electron moves in
a system (such as the atom) where the charge distribution (for example the
nucleus charge) produces an electric field $\vec E$, according to the
theory of relativity this electric field appears as a magnetic field in the
rest frame of the electron. In particular, if the motion of the electron
were uniform the equivalent magnetic field would be equal to
$\vec B=-(\vec v \times \vec E)/c^2$. The fact that the electron (and its
frame of reference) is rotating halves such an equivalent magnetic
field~\cite{jackson,spinorbit}.
Thus the Hamiltonian of the electron will have an additional part
\begin{equation}
H_{SO}=\mu_B\,\vec\sigma \cdot
\left(\frac{\vec E \times \vec v}{2\,c^2}\right)=
\frac{e\,\hbar}{4\,m_e c^2}\vec\sigma \cdot (\vec E \times \vec v)=
\frac{\hbar}{4\,m_e c^2}\vec\sigma \cdot ((\vec\nabla U_L) \times \vec v)
\end{equation}
(with $U_L$ the potential energy),
which in the absence of an external magnetic field can be written also as
\begin{equation}
H_{SO}=\frac{\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot
((\vec\nabla U_L) \times \vec p).
\end{equation}
However, if we insert this additional term into the original Schr\"odinger
equation for the wave function $\psi^n_{\vec k} (\vec r)=
e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)$, we obtain
\begin{eqnarray}
&& H_{SO}\psi^n_{\vec k} (\vec r) =
\frac{\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot
\left((\vec\nabla U_L) \times (-i\,\hbar\,\vec\nabla)\right)
e^{i\,\vec k \cdot \vec r}u^n_{\vec k} (\vec r)=\\
&& \frac{\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot
\left((\vec\nabla U_L) \times \left((\hbar \vec k
e^{i\,\vec k \cdot \vec r})u^n_{\vec k} (\vec r)+
e^{i\,\vec k \cdot \vec r}
(-i\,\hbar\,\vec\nabla u^n_{\vec k} (\vec r))\right)\right)=\nonumber\\
&& e^{i\,\vec k \cdot \vec r}
\left(\frac{\hbar^2}{4\,m_e^2 c^2}\vec\sigma \cdot
((\vec\nabla U_L) \times \vec k)+
\frac{\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot
((\vec\nabla U_L) \times (-i\,\hbar\,\vec\nabla))\right)
u^n_{\vec k} (\vec r).\nonumber
\end{eqnarray}
If we repeat the procedure used to move from the Schr\"odinger
equation for the wave functions $\psi^n_{\vec k} (\vec r)$ to the
Schr\"odinger-Bloch equation for the Bloch lattice functions
$u^n_{\vec k} (\vec r)$, we obtain that in the Hamiltonian of this last
equation there will be two additional terms:
\begin{eqnarray}
&& \frac{\hbar^2}{4\,m_e^2 c^2}\vec\sigma \cdot
((\vec\nabla U_L) \times \vec k)+
\frac{\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot
((\vec\nabla U_L) \times (-i\,\hbar\,\vec\nabla))=\\
&& \frac{\hbar^2}{4\,m_e^2 c^2}\vec\sigma \cdot
((\vec\nabla U_L) \times \vec k)+H_{SO}.\nonumber
\end{eqnarray}
Near $\vec k=0$ the first term is small compared with the second one;
thus only the second term is usually considered.
The second term in the case of a potential energy with (locally) spherical
symmetry (and thus of a radial electric field) becomes
\begin{eqnarray}
H_{SO}&=&\frac{e\,\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot (\vec E \times \vec p)=
\frac{e\,\hbar}{4\,m_e^2 c^2}\vec\sigma \cdot
\frac{E_r}{r}(\vec r \times \vec p)=\\
&& -i\left(\frac{e\,\hbar^2 E_r}{4\,m_e^2 c^2 r}\right)\vec\sigma \cdot
(\vec r \times \vec\nabla) \equiv
-i \frac{\Lambda}{2} \vec\sigma \cdot (\vec r \times \vec\nabla).\nonumber
\end{eqnarray}
In order to calculate the influence that the spin-orbit term has on the
valence bands, we need to calculate the matrix elements on the basis states
$|vx0 \rangle $, $|vy0 \rangle $, $|vz0 \rangle $ and $|c0 \rangle $.
Due to the symmetry properties of such states, we see that the only
non-zero elements are the off-diagonal elements between valence-band states
\begin{eqnarray}
\langle vy0|H_{SO}|vx0 \rangle &=& - \langle vx0|H_{SO}|vy0 \rangle =
i\,\lambda\sigma_z,\\
\langle vz0|H_{SO}|vy0 \rangle &=& - \langle vy0|H_{SO}|vz0 \rangle =
i\,\lambda\sigma_x,\nonumber\\
\langle vx0|H_{SO}|vz0 \rangle &=& - \langle vz0|H_{SO}|vx0 \rangle =
i\,\lambda\sigma_y,\nonumber
\end{eqnarray}
with $\lambda$ a non-zero quantity given by (if $V_c$ is the volume of the
lattice unit cell)
\begin{equation}
\lambda=\frac{\Lambda}{2}\frac{1}{V_c}
\int_{V_c} x^2 \rho_v^2 (r) d\,\vec r.
\end{equation}
Therefore, considering also the spin-orbit coupling, the matrix
$H_{\vec k}^v$ becomes
\begin{eqnarray}
H_{\vec k}^v &=& \left(E_0^v+\frac{\hbar^2 k^2}{2\,m_e}\right)I-
\frac{\hbar^4 P^2}{m_e^2 E_g^0}
\left[\begin{array}{ccc}
k_x^2 & k_x k_y & k_x k_z\\
k_y k_x & k_y^2 & k_y k_z\\
k_z k_x & k_z k_y & k_z^2
\end{array}\right]\\
&&+i\,\lambda\left[\begin{array}{ccc}
0 & -\sigma_z & \sigma_y\\
\sigma_z & 0 & -\sigma_x\\
-\sigma_y & \sigma_x & 0
\end{array}\right],\nonumber
\end{eqnarray}
where $\sigma_x$, $\sigma_y$ and $\sigma_z$ are the Pauli spin matrices
(\ref{pauli}), which do not commute with one another.
If we consider the special case $\vec k \parallel \hbox{\boldmath{$\hat z$}}$
we can quite easily find the eigenvalues of this matrix, arriving at a
third-order equation in the energy, the solutions of which represent the
dispersion relations of the three valence
bands, each one degenerate with respect to the spin. In particular, if
we make the approximation $(\hbar^4 P^2 k^2 /(m_e^2 E_g^0)) \ll \lambda$,
we find the solutions~\cite{wenckebach}
\begin{eqnarray}
E_{hh} &=& E_0^v+\lambda+\frac{\hbar^2}{2}\frac{1}{m_e}k^2,\\
E_{lh} &=& E_0^v+\lambda+\frac{\hbar^2}{2}\frac{1}{m_e}
\left(1-\frac{4\,\hbar^2\,P^2}{3\,m_e E_g^0}\right)k^2,\nonumber\\
E_{\lambda h} &=& E_0^v-2\,\lambda+\frac{\hbar^2}{2}\frac{1}{m_e}
\left(1-\frac{2\,\hbar^2\,P^2}{3\,m_eE_g^0}\right)k^2.\nonumber
\end{eqnarray}
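As a numerical cross-check of this structure, the sketch below builds the full $6\times 6$ matrix out of the $3\times 3$ orbital block and the spin-orbit block quoted above (again with $\hbar=m_e=1$ and illustrative parameters) and verifies that at $\vec k=0$ the six levels split into a four-fold level at $E_0^v+\lambda$ and a two-fold level at $E_0^v-2\lambda$, a splitting of $3\lambda$:

```python
import numpy as np

# Units with hbar = m_e = 1; all parameter values are illustrative.
E0v, P, Eg0, lam = 0.0, 0.5, 1.0, 0.05

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def Hv_so(k):
    """6x6 valence Hamiltonian: orbital 3x3 block (x) spin + spin-orbit."""
    k = np.asarray(k, dtype=float)
    orb = (E0v + k @ k / 2.0) * np.eye(3) - (P**2 / Eg0) * np.outer(k, k)
    so = 1j * lam * np.block([[Z, -sz, sy],
                              [sz, Z, -sx],
                              [-sy, sx, Z]])
    return np.kron(orb, np.eye(2)) + so

# At k = 0 the six levels split into E0v + lam (4-fold: heavy- and
# light-hole states) and E0v - 2 lam (2-fold: split-off states).
E = np.sort(np.linalg.eigvalsh(Hv_so(np.zeros(3))))
print(np.allclose(E, [E0v - 2*lam]*2 + [E0v + lam]*4))  # True
```

The four-fold and two-fold levels correspond to the $j=3/2$ and $j=1/2$ multiplets of the combined orbital and spin angular momenta.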
Thus, considering the effect of the spin-orbit interaction, we have obtained
(fig.~\ref{f2}) two valence bands (the heavy-hole band and the light-hole
band) degenerate at $\vec k=0$, where they have an energy
$E_g=E_c^0-(E_v^0+\lambda )=E_g^0-\lambda$
lower than the conduction band, and one valence band (the spin-orbit band)
which for $\vec k=0$ has an energy $\Delta=3\,\lambda$ lower than
the other two valence bands. We notice that,
while the light-hole band and the spin-orbit band have a negative effective
mass of the same order of magnitude as the effective mass of the electrons
in the conduction band, the heavy-hole band is characterized by a much larger
effective mass (the fact that the obtained heavy-hole effective mass is
positive is instead an artifact that disappears with a more refined
treatment: obviously the effective mass of the electrons in the valence
bands has to be negative, which corresponds to a positive effective mass
for the holes).
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth,angle=0]{bandsrev.eps}
\caption{Band structure near $\vec k=0$ obtained with the simplified
model described in the text (the heavy-hole band has the wrong curvature)
(adapted from \cite{wenckebach}).}
\label{f2}
\end{figure}\noindent
This simplified model is amenable to several refinements.
As to the conduction band, we can include in the calculation the
spin-orbit splitting of the valence band and the effect of the higher
conduction bands. In particular, with the first change we obtain
a better expression for the effective mass in the conduction band:
\begin{equation}
\frac{1}{m_c^*}=
\frac{1}{m_e}+\frac{2}{{m_e}^2}
\left[\frac{2\,\hbar^2P^2}{3\,E_g}+\frac{\hbar^2P^2}{3(E_g+\Delta)}\right],
\end{equation}
where $E_g=E_g^0-\lambda $.
Also in the treatment of the valence bands we can consider the effect of the
higher conduction bands; one of the effects is that the resulting valence
bands lose their isotropy and exhibit a complex orientation dependence in
the reciprocal space (``band warping'').
It is important to notice that the expressions found for the band structure
depend on a small number of parameters, for example $E_g$,
$\Delta$ and $m_c^*$ (from which we can calculate the parameter $P$ using
the expression found for the effective mass of the conduction band). From
a practical point of view, these quantities are commonly obtained from
{\em a priori} band structure calculations or, better, experimentally:
in particular the bandgap values $E_g$ and $\Delta$ are accurately known
from optical experiments, while $m_c^*$ is known from cyclotron resonance
experiments.
The approach based on the ``traditional'' perturbation theory, that we
have reported in this first part following the description of
T.~Wenckebach \cite{wenckebach}, differs from the method proposed by
E.~O.~Kane \cite{kane1,kane2,kane3,kane4,kane5,kane6}.
Starting from the consideration that the Bloch lattice functions can
be expanded in terms of the complete, infinite set of the unperturbed Bloch
lattice functions, Kane computes this expansion in an approximate way,
considering only a finite set of bands. In particular he considers only
the three valence bands and the conduction band (not including the
effects of the other bands) and diagonalizes exactly the Hamiltonian
of the Schr\"odinger-Bloch equation in the presence of spin-orbit
interaction \cite{kane5}, written taking as a basis the following set,
made up of a linear combination with constant coefficients of the
$u^n_0 (\vec r)$ considered in the absence of spin-orbit ({\em i.e.} of the
functions $|c0 \rangle $, $|vx0 \rangle $, $|vy0 \rangle $ and
$|vz0 \rangle $ taken with spin-up and spin-down):
\begin{eqnarray}
&& i |c0 \rangle |\!\downarrow \rangle,\quad
1/\sqrt{2}(|vx0 \rangle -i\,|vy0 \rangle )|\!\uparrow \rangle,\quad
|vz0 \rangle |\!\downarrow \rangle,\quad
-1/\sqrt{2}(|vx0 \rangle +i\,|vy0 \rangle )|\!\uparrow \rangle,\\
&& i |c0 \rangle |\!\uparrow \rangle,\quad
-1/\sqrt{2}(|vx0 \rangle +i\,|vy0 \rangle )|\!\downarrow \rangle,\quad
|vz0 \rangle |\!\uparrow \rangle,\quad
1/\sqrt{2}(|vx0 \rangle -i\,|vy0 \rangle )|\!\downarrow \rangle \nonumber
\end{eqnarray}
(where $|\!\uparrow \rangle$ and $|\!\downarrow \rangle$ are, respectively,
the spin-up and spin-down unit spinors).
From this diagonalization he finds, for small values of $k^2$, the
following expressions for the considered bands (choosing the zero
of energy at the top of the light-hole and heavy-hole bands and
defining the various quantities as before):
\begin{eqnarray}
E_c &=& E_g+\frac{\hbar^2}{2}\frac{1}{m_e}
\left(1+\frac{4\,\hbar^2\,P^2}{3\,m_e E_g}+
\frac{2\,\hbar^2\,P^2}{3\,m_e (E_g+\Delta)}\right)k^2,\\
E_{hh} &=& \frac{\hbar^2}{2}\frac{1}{m_e}k^2,\nonumber\\
E_{lh} &=& \frac{\hbar^2}{2}\frac{1}{m_e}
\left(1-\frac{4\,\hbar^2\,P^2}{3\,m_e E_g}\right)k^2,\nonumber\\
E_{\lambda h} &=& -\Delta+\frac{\hbar^2}{2}\frac{1}{m_e}
\left(1-\frac{2\,\hbar^2\,P^2}{3\,m_e (E_g+\Delta)}\right)k^2.\nonumber
\end{eqnarray}
These expressions are very similar to those obtained with the
previously described simplified model, but clearly show the dual effect
that each mutual interaction has on the corresponding pair of bands.
As before, these results give an incorrect effective mass for the
heavy-hole band.
From the diagonalization Kane also finds the Bloch lattice functions
$u^n_{\vec k} (\vec r)$ that diagonalize the Hamiltonian of the
Schr\"odinger-Bloch equation in the presence of spin-orbit interaction
({\em i.e.} the eigenfunctions of this Hamiltonian) as linear combinations of the
$u^n_0 (\vec r)$ considered in the absence of spin-orbit;
in particular for vanishing $k$ they are (in the simplest case in which
$\vec k \parallel \hbox{\boldmath{$\hat z$}}$):
\begin{eqnarray}
\label{pertu}
& i\,|c0 \rangle |\!\downarrow \rangle,\qquad
i\,|c0 \rangle |\!\uparrow \rangle, &\\
& -1/\sqrt{2}\,(|vx0 \rangle +i\,|vy0 \rangle )|\!\uparrow \rangle,\qquad
1/\sqrt{2}\,(|vx0 \rangle -i\,|vy0 \rangle )|\!\downarrow \rangle,& \nonumber\\
& 1/\sqrt{6}\,(|vx0 \rangle -i\,|vy0 \rangle )|\!\uparrow \rangle+
\sqrt{2/3}\,|vz0 \rangle |\!\downarrow \rangle,& \nonumber\\
& -1/\sqrt{6}\,(|vx0 \rangle +i\,|vy0 \rangle )|\!\downarrow \rangle+
\sqrt{2/3}\,|vz0 \rangle |\!\uparrow \rangle,& \nonumber\\
& 1/\sqrt{3}\,(|vx0 \rangle -i\,|vy0 \rangle )|\!\uparrow \rangle-
1/\sqrt{3}\,|vz0 \rangle |\!\downarrow \rangle,& \nonumber\\
& 1/\sqrt{3}\,(|vx0 \rangle +i\,|vy0 \rangle )|\!\downarrow \rangle+
1/\sqrt{3}\,|vz0 \rangle |\!\uparrow \rangle.& \nonumber
\end{eqnarray}
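These eight combinations can be checked to be orthonormal; the sketch below does so numerically, collecting the coefficients on the basis $|c\rangle, |vx0\rangle, |vy0\rangle, |vz0\rangle$ with the two spins (the basis ordering is only a bookkeeping choice made for this check):

```python
import numpy as np

# Basis ordering (a bookkeeping choice for this check):
# |c up>, |c dn>, |x up>, |x dn>, |y up>, |y dn>, |z up>, |z dn>.
s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
c23 = np.sqrt(2/3)
states = np.array([
    [0, 1j, 0, 0, 0, 0, 0, 0],                    # i|c0>|dn>
    [1j, 0, 0, 0, 0, 0, 0, 0],                    # i|c0>|up>
    [0, 0, -1/s2, 0, -1j/s2, 0, 0, 0],            # -(|x>+i|y>)|up>/sqrt(2)
    [0, 0, 0, 1/s2, 0, -1j/s2, 0, 0],             # (|x>-i|y>)|dn>/sqrt(2)
    [0, 0, 1/s6, 0, -1j/s6, 0, 0, c23],           # (|x>-i|y>)|up>/sqrt(6)+sqrt(2/3)|z>|dn>
    [0, 0, 0, -1/s6, 0, -1j/s6, c23, 0],          # -(|x>+i|y>)|dn>/sqrt(6)+sqrt(2/3)|z>|up>
    [0, 0, 1/s3, 0, -1j/s3, 0, 0, -1/s3],         # (|x>-i|y>)|up>/sqrt(3)-|z>|dn>/sqrt(3)
    [0, 0, 0, 1/s3, 0, 1j/s3, 1/s3, 0],           # (|x>+i|y>)|dn>/sqrt(3)+|z>|up>/sqrt(3)
], dtype=complex)
# The Gram matrix of the eight states should be the 8x8 identity.
print(np.allclose(states.conj() @ states.T, np.eye(8)))  # True
```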
In order to take into account the effect of higher and lower bands on
the considered ones, Kane uses the L\"owdin perturbation theory
\cite{lowdin1,lowdin2}. Following this method, one can divide all the bands
into two sets $A$ and $B$: $A$ is the set we want to treat exactly and $B$
contains all the other bands. At the lowest order of perturbation theory the
coupling between the set $A$ and the set $B$ can be removed introducing
the perturbed functions
\begin{equation}
u_i'=u_i+\sum_n^B \frac{H_{ni}u_{n}}{(H_{ii}-H_{nn})},
\end{equation}
where $i$ is in $A$ and $n$ is in $B$.
The renormalized interactions connecting $u_i'$ and $u_j'$
are given by
\begin{equation}
H_{ij}'=H_{ij}+\sum_n^B \frac{H_{in}H_{nj}}{\displaystyle
{\left(\frac{H_{ii}+H_{jj}}{2}-H_{nn}\right)}}
\end{equation}
(with $i$, $j$ in $A$).
In this way we can reduce the Hamiltonian matrix, which in principle
connects all the possible bands, to a Hamiltonian matrix relating
only the bands of interest, but in which, however, the interactions with
the non-considered bands are included. The method is accurate as long
as $|H_{in}| \ll |H_{ii}-H_{nn}|$, with $i$ in $A$ and $n$ in $B$, and
thus the set $A$ has to be selected in order to satisfy this relation
(for example, also states degenerate with those in which we are
interested have to be considered inside the set $A$).
Note that the L\"owdin perturbation theory reduces to the ordinary
perturbation theory when only a single band is considered in the set $A$.
Kane applies this perturbation method, starting from the Bloch lattice
functions (\ref{pertu}) of the set $A$ of considered conduction and valence
bands and from the unperturbed Bloch lattice functions of the set $B$
of the higher and lower bands, obtaining a better approximation of the
actual dispersion relations of the considered bands.
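The L\"owdin downfolding step can be illustrated on a small toy matrix. The sketch below (illustrative numbers only, with a set $A$ of two low-lying states weakly coupled to a far-away set $B$, so that $|H_{in}| \ll |H_{ii}-H_{nn}|$) applies the $H'_{ij}$ formula above and compares the result with exact diagonalization:

```python
import numpy as np

# Toy 4x4 real symmetric Hamiltonian (illustrative numbers): A = states
# {0, 1}, B = states {2, 3} lying far away in energy.
H = np.array([[0.00, 0.30, 0.05, 0.02],
              [0.30, 0.10, 0.03, 0.04],
              [0.05, 0.03, 5.00, 0.00],
              [0.02, 0.04, 0.00, 6.00]])
A, B = [0, 1], [2, 3]

# Lowdin-renormalized 2x2 block: H'_ij = H_ij + sum_n H_in H_nj /
# ((H_ii + H_jj)/2 - H_nn), with i, j in A and n in B.
Hp = np.zeros((2, 2))
for a, i in enumerate(A):
    for b, j in enumerate(A):
        Hp[a, b] = H[i, j] + sum(
            H[i, n] * H[n, j] / ((H[i, i] + H[j, j]) / 2 - H[n, n])
            for n in B)

exact = np.sort(np.linalg.eigvalsh(H))[:2]    # two lowest exact levels
folded = np.sort(np.linalg.eigvalsh(Hp))      # downfolded 2x2 levels
bare = np.sort(np.linalg.eigvalsh(H[:2, :2])) # neglecting B entirely
# Downfolding reproduces the exact low levels better than simply
# truncating away the coupling to the far-away states.
print(np.max(np.abs(folded - exact)) < np.max(np.abs(bare - exact)))
```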
An exact diagonalization of the Hamiltonian has also been performed
(originally by M.~Cardona and F.~H.~Pollak \cite{cardona}, more
recently by other authors \cite{cavasillas,radhia,richard,michelini}),
extending the number of considered bands (and thus the number of
involved parameters) to reproduce the band structure all over the
Brillouin zone to a reasonable degree of accuracy (for example, in
their original paper M.~Cardona and F.~H.~Pollak consider 15 bands,
with 10 parameters, to reproduce the energy band structure of
germanium and silicon).
\section{The $\vec k \cdot \vec p$ method in non-periodic systems:
envelope function theory and application to heterostructures}
Up to now, we have considered the applications of the $\vec k \cdot \vec p$
method to periodic, homogeneous crystals, using a description of the
electron wave function in terms of Bloch functions. However, in the
presence of a generic external potential the periodicity of the potential
inside the crystal breaks down and thus the electron wave functions are far
from periodic. Since the Bloch functions $|n\,\vec k \rangle =
e^{i\,\vec k \cdot \vec r} u^n_{\vec k} (\vec r)/\sqrt{(2\pi)^3}$,
considered as functions of $\vec r$ and $\vec k$, are a complete set of
orthonormal functions, also in this case the generic wave function can be
expanded on the basis of Bloch functions as
\begin{equation}
\psi (\vec r)=\sum_{n} \int d\vec k A_{n}(\vec k) |n\,\vec k \rangle,
\end{equation}
(where the sum over the bands, together with the integral over
the Brillouin zone, corresponds to an integral over all the reciprocal space).
However, in general a large number of Bloch functions, evaluated over a large
range of wave vectors, would be necessary in this expansion. Therefore in
this case it is convenient to replace the Bloch phase factor, involving the
wave vector measured from the reference extremum point, with an envelope
function, and thus to use a different formulation of the $\vec k \cdot \vec p$
method, based on the concept of envelope functions
\footnote{Notice that there is also an alternative approach to the
envelope function theory, due to Wannier~\cite{wannier}
and Slater~\cite{slater}, based on Wannier orbitals. See also
\cite{burt5,adams,young}.}.
In order to introduce this concept, we can make a very approximate calculation
\cite{envelope,mitin} in the hypothesis that the external potential energy
$U(\vec r)$ (``external'' here meaning ``not due to the periodic structure
of the lattice'') is slowly varying on an atomic scale and the $n$-th energy
band that we are considering is non-degenerate (thus with unique independent
Bloch lattice function $u^n_{\vec k} (\vec r)$). In this case, the
Schr\"odinger equation (in the absence of a magnetic field) for the electron
wave function $\psi(\vec r)$
\begin{equation}
\label{es}
\left(-\frac{\hbar^2}{2\,m_e}\nabla^2+U_L (\vec r)\right)\psi (\vec r)+
U(\vec r)\psi (\vec r)=H^{(0)} \psi (\vec r)+
U(\vec r)\psi (\vec r)=E \psi (\vec r)
\end{equation}
(where $U_L (\vec r)$ is the periodic lattice potential energy and
$H^{(0)}$ is the Hamiltonian in the absence of the external potential
energy $U(\vec r)$) is equivalent to the equation
\begin{equation}
\label{efe}
E_n(-i\,\vec\nabla) F(\vec r)+
U(\vec r) F(\vec r)=E F(\vec r),
\end{equation}
where $E_n(-i\,\vec\nabla)$ represents the operator obtained replacing,
in the dispersion relation $E_n(\vec k)$ describing the $n$-th
energy band in the absence of the external potential, each component of
$\vec k$ with the corresponding component of $-i\,\vec\nabla$,
and $F(\vec r)$ is the envelope function, a slowly varying
function that, when we consider only the $n$-th band, multiplied by the
fast varying Bloch lattice function $u_0^n(\vec r)$
(considered in $\vec k=0$) gives the electron wave function.
Indeed, if we expand $\psi(\vec r)$ in the orthogonal basis set
$|\nu\vec k \rangle =
e^{i\,\vec k \cdot \vec r}u^{\nu}_{\vec k} (\vec r)/\sqrt{V}$
(with $V$ the crystal volume)
\begin{equation}
\psi(\vec r)=\sum_{\nu,\vec k}a_\nu (\vec k)|\nu\vec k \rangle,
\end{equation}
we can re-write the Schr\"odinger equation (\ref{es}) in matrix form using
the basis $|\nu\vec k \rangle $
\begin{eqnarray}
&& \sum_{\nu',\vec k'}\left(
\langle \nu\vec k|H^{(0)}+U(\vec r)|\nu'\vec k' \rangle
a_{\nu'}(\vec k')\right) = E a_{\nu} (\vec k) \Rightarrow\\
&& E_{\nu}(\vec k)a_{\nu} (\vec k)+
\sum_{\nu',\vec k'}
\left( \langle \nu\vec k|U(\vec r)|\nu'\vec k' \rangle
a_{\nu'}(\vec k')\right) = E a_{\nu} (\vec k),\nonumber
\end{eqnarray}
where we have used the fact that, since $|\nu\vec k \rangle$ is an
eigenfunction of $H^{(0)}$ with eigenvalue $E_{\nu}(\vec k)$,
\begin{equation}
\langle \nu\vec k|H^{(0)}|\nu'\vec k' \rangle =E_{\nu'}(\vec k')
\langle \nu\vec k|\nu'\vec k' \rangle =
E_\nu(\vec k)\delta_{\nu\,\nu'}\delta_{\vec k\,\vec k'}.
\end{equation}
In particular, for $\nu=n$ we have that
\begin{equation}
\label{mpschr}
E_n(\vec k)a_n (\vec k)+
\sum_{\nu',\vec k'}
\left( \langle n\vec k|U(\vec r)|\nu'\vec k' \rangle a_{\nu'}(\vec k')\right)=
E a_n (\vec k).
\end{equation}
If instead we expand the envelope function equation in the orthogonal
set of plane waves $|\vec k \rangle =e^{i\,\vec k \cdot \vec r}/\sqrt{V}$
\begin{equation}
F(\vec r)=\sum_{\vec k}a(\vec k)|\vec k \rangle,
\end{equation}
we can re-write the envelope function equation (\ref{efe}) in matrix form using
the basis $|\vec k \rangle $
\begin{eqnarray}
\label{mpenv}
&& \sum_{\vec k'}\left( \langle \vec k|E_n(-i\,\vec\nabla)+
U(\vec r)|\vec k' \rangle
a(\vec k')\right) = E a(\vec k)\Rightarrow\\
&& E_n(\vec k)a(\vec k)+
\sum_{\vec k'}\left( \langle \vec k|U(\vec r)|\vec k' \rangle a(\vec k')\right)
= E a(\vec k),\nonumber
\end{eqnarray}
using the fact that
\begin{equation}
(-i\,\nabla_{\nu})^p |\vec k' \rangle =
(-i\,\nabla_{\nu})^p (e^{i\,\vec k' \cdot \vec r}/ \sqrt{V})=
(k'_{\nu})^p (e^{i\,\vec k' \cdot \vec r}/ \sqrt{V})=
(k'_{\nu})^p |\vec k' \rangle,
\end{equation}
with $\nu =x, y, z$
and thus
\begin{equation}
E_n(-i\,\vec\nabla)|\vec k' \rangle = E_n(\vec k')|\vec k' \rangle
\end{equation}
(since $E_n(-i\,\vec\nabla)$ is an operator
made up of operators of the type $(-i\,\nabla_{\nu})^p$)
and then exploiting the orthogonality relation
$ \langle \vec k|\vec k' \rangle =\delta_{\vec k\,\vec k'}$.
The two equations (\ref{mpschr}) and (\ref{mpenv}), obtained from the
Schr\"odinger equation and from the envelope function equation are
exactly equal if
\begin{equation}
\langle n\vec k|U(\vec r)|\nu'\vec k' \rangle =
\delta_{n\,\nu'} \langle \vec k|U(\vec r)|\vec k' \rangle,
\end{equation}
{\em i.e.} if the matrix elements of the external potential $U(\vec r)$
between states from different bands are negligible. This is what happens
if $U$ is slowly varying on an atomic scale. Indeed, in this case we
have that
\begin{eqnarray}
&& \langle n\vec k|U(\vec r)|\nu'\vec k' \rangle =
\frac{1}{V} \sum_{j=1}^N \int_{V_j}d \vec r\, {u_{\vec k}^n}^*(\vec r)
u_{\vec k'}^{\nu'}(\vec r)e^{i\,(\vec k'-\vec k)
\cdot \vec r}U(\vec r)\simeq\\
&& \sum_{j=1}^N e^{i\,(\vec k'-\vec k) \cdot \vec r_j}U(\vec r_j)
\frac{1}{V}
\int_{V_j} d \vec r \,{u_{\vec k}^n}^*(\vec r)u_{\vec k'}^{\nu'}(\vec r)
\simeq\nonumber\\
&& \sum_{j=1}^N e^{i\,(\vec k'-\vec k) \cdot \vec r_j}U(\vec r_j)
\delta_{n\,\nu'}\frac{1}{N}\simeq
\delta_{n\,\nu'}\int_V d \vec r \,
\frac{e^{i\,(\vec k'-\vec k) \cdot \vec r}}{V} U(\vec r)=
\delta_{n\,\nu'} \langle \vec k|U(\vec r)|\vec k' \rangle,\nonumber
\end{eqnarray}
where $V$ is the crystal volume, $V_j$ the volume of the $j$-th unit cell,
$\vec r_j$ the coordinate of its center and
$N$ the number of unit cells. We have assumed that $U(\vec r)$
and $e^{i\,(\vec k'-\vec k) \cdot \vec r}$ are approximately constant over
a unit cell and $u_{\vec k'}^{n}(\vec r)\simeq u_{\vec k}^{n}(\vec r)$ over
the range of values of $|\vec k'-\vec k|$ for which
$ \langle \vec k|U(\vec r)|\vec k' \rangle $ is not negligible.
Note that usually for functions with the translation symmetry of the crystal
lattice the scalar product is defined as
\begin{equation}
\label{scalarproduct}
\langle \Psi_1 | \Psi_2 \rangle=
\frac{1}{V_c} \int_{V_c} d \vec r \,\Psi_1^* (\vec r) \Psi_2 (\vec r)
\end{equation}
(with $V_c$ the volume of the unit cell); in particular
$u_{\vec k}^{\nu} (\vec r)$ and $e^{i\,\vec k \cdot \vec r}u^{\nu}_{\vec k}$
are normalized with respect to this scalar product.
If the two equations (\ref{mpschr}) and (\ref{mpenv}) are identical, they have
the same solutions $a_{n}(\vec k)$ and $a(\vec k)$.
Thus (assuming that $a_\nu(\vec k)$ is non-zero only for the particular
band $n$, consistently with our hypothesis that there is no mixing between
the bands) we can write that
\begin{eqnarray}
\psi(\vec r)&=&\sum_{\nu,\vec k}a_\nu (\vec k)|\nu\vec k \rangle =
\sum_{\vec k}a_n(\vec k)
\frac{e^{i\,\vec k \cdot \vec r}}{\sqrt{V}}
u^n_{\vec k} (\vec r)\simeq\\
&& u^n_0 (\vec r) \sum_{\vec k} a_n(\vec k)
\frac{e^{i\,\vec k \cdot \vec r}}{\sqrt{V}}=
u^n_0 (\vec r) \sum_{\vec k} a_n(\vec k) |\vec k \rangle =
u^n_0 (\vec r) F(\vec r),\nonumber
\end{eqnarray}
where we have assumed that $u^n_{\vec k}(\vec r)$ does not vary very much
with $\vec k$ (note that the main $\vec k$'s have to be quite close to
$\vec k_0=0$ for the previous derivation to be consistent).
We notice that if we express $E_n(\vec k)$ as
\begin{equation}
E_n(\vec k)=E_0^n+\frac{\hbar^2}{2}\sum_{\mu ,\nu}
\frac{k_{\mu}k_{\nu}}{m_{\mu\,\nu}^*}
\end{equation}
(with $\mu,\nu=x,y,z$) the envelope function equation becomes
\begin{equation}
-\frac{\hbar^2}{2}\sum_{\mu ,\nu}
\frac{\nabla_{\mu}\nabla_{\nu}}{m_{\mu\,\nu}^*}
F(\vec r)+(E_0^n+U(\vec r))F(\vec r)=
E F(\vec r)
\end{equation}
and when the effective mass is isotropic
($(1/m_{\mu\,\nu}^*)=(1/m^*)\delta_{\mu\,\nu}$) we have the well-known
equation
\begin{equation}
-\frac{\hbar^2}{2\,m^*}\nabla^2 F(\vec r)
+(E_0^n+U(\vec r))F(\vec r)=E F(\vec r).
\end{equation}
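As a minimal illustration of how this equation is used in practice (anticipating the application to heterostructures), the sketch below solves it by finite differences for a hard-wall one-dimensional well, with $\hbar=1$ and illustrative values of $m^*$ and of the well width, and compares the ground level with the analytic particle-in-a-box result:

```python
import numpy as np

# Illustrative values (hbar = 1): a light conduction-band effective mass
# and an arbitrary well width. The grid endpoints impose F = 0 (hard walls).
mstar, L, N = 0.067, 10.0, 500
x = np.linspace(0.0, L, N + 2)[1:-1]   # interior grid points (Dirichlet)
h = x[1] - x[0]

# Kinetic operator -(1/2 m*) d^2/dx^2 from the 3-point finite difference.
T = (np.diag(np.full(N, 1.0 / (mstar * h**2)))
     + np.diag(np.full(N - 1, -1.0 / (2 * mstar * h**2)), 1)
     + np.diag(np.full(N - 1, -1.0 / (2 * mstar * h**2)), -1))
U = np.zeros_like(x)                   # flat well bottom: U(x) = 0 inside
E = np.linalg.eigvalsh(T + np.diag(U)) # envelope levels E - E_0^n

E1_exact = np.pi**2 / (2 * mstar * L**2)   # particle-in-a-box ground level
print(abs(E[0] - E1_exact) / E1_exact < 1e-4)  # True for this grid
```

The point of the effective-mass equation is precisely this: the microscopic lattice potential has disappeared, leaving a Schr\"odinger-like problem for the slowly varying envelope alone.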
Luttinger and Kohn in a famous paper~\cite{luttinger1} have given an
alternative derivation of the single-band envelope function equation,
which has the advantage of being easily generalized to more complicated
cases. The starting equation is again the Schr\"odinger
equation $(H^{(0)}+U)\psi=E\psi$, with $H^{(0)}$ being the Hamiltonian of
the electron in the periodic lattice potential and $U$ an additional
potential which is assumed not to vary significantly over each unit cell.
They show that the functions $|n\,\vec k \rangle =\chi_{n \vec k}=
e^{i\,\vec k \cdot \vec r} u^n_0 (\vec r)/\sqrt{(2\pi)^3}$
(where $u^n_0 (\vec r)$ are the Bloch lattice functions in the absence of
the external potential, evaluated for $\vec k=0$)
are a complete orthonormal set, if considered as functions of
$\vec r$ and $\vec k$ (exactly as the functions
$e^{i\,\vec k \cdot \vec r} u^n_{\vec k} (\vec r)/\sqrt{(2\pi)^3}$,
which, contrary to the $\chi_{n \vec k}$, are eigenfunctions of $H^{(0)}$).
This means that
\begin{equation}
\langle \chi_{n \vec k} | \chi_{n' \vec k'} \rangle=
\delta_{n n'}\delta(\vec k-\vec k').
\end{equation}
Therefore, they can expand the wave function $\psi$ over the complete
orthonormal set of functions $|n\,\vec k \rangle$ in this way:
\begin{equation}
\label{eqp}
\psi=\sum_{n'} \int d\vec k' A_{n'}(\vec k')\chi_{n'\vec k'},
\end{equation}
and, considering this basis, they can rewrite the Schr\"odinger equation
in the following form:
\begin{equation}
\label{eqa}
\sum_{n'} \int d\vec k' \langle n \vec k | H^{(0)}+U | n' \vec k' \rangle
A_{n'}(\vec k')=E A_{n}(\vec k).
\end{equation}
After some calculations, they obtain that
\begin{eqnarray}
\langle n \vec k | H^{(0)} | n' \vec k' \rangle &=&
\left(E^n_0+\frac{\hbar^2k^2}{2\,m_e}\right)
\delta_{n n'}\delta(\vec k-\vec k')+
\sum_{\alpha=x,y,z}\frac{\hbar k_{\alpha} P_{\alpha}^{n n'}}{m_e}
\delta(\vec k-\vec k')\equiv\\
&&\langle n \vec k | H_a | n' \vec k' \rangle+
\langle n \vec k | H_b | n' \vec k' \rangle,\nonumber
\end{eqnarray}
where the momentum matrix elements at $\vec k=0$
\begin{equation}
P_{\alpha}^{n n'}=\frac{1}{V_c}\int_{V_c} {u^n_0}^* (-i\hbar \nabla_{\alpha})
u^{n'}_0 d\vec r
\end{equation}
are characterized by the following properties: $P_{\alpha}^{n n}=0$ if the
point $\vec k=0$ around which we are working is an extremum point of the
dispersion relations, and
$P_{\alpha}^{n n'}=P_{\alpha}^{n' n}=(P_{\alpha}^{n n'})^*$ if a center
of symmetry exists in the crystal.
Moreover, if $U$ is a ``gentle'' potential, with a very small variation over a
unit cell,
\begin{equation}
\langle n \vec k | U | n' \vec k' \rangle=
\mathcal{U}(\vec k-\vec k')\delta_{n n'},
\end{equation}
where $\mathcal{U}(\vec k)$ is the Fourier transform of $U$
\begin{equation}
\mathcal{U}(\vec k)=\frac{1}{(2\pi)^3}\int d\vec r
e^{-i\vec k\cdot \vec r} U(\vec r).
\end{equation}
As a consequence, eq.~(\ref{eqa}) becomes
\begin{eqnarray}
\label{eqa1}
&& \left(E^n_0+\frac{\hbar^2 k^2}{2\,m_e}\right) A_{n}(\vec k)+\sum_{\alpha=x,y,z}
\sum_{\scriptstyle n' \atop \scriptstyle n'\ne n}
\frac{\hbar k_{\alpha} P_{\alpha}^{n n'}}{m_e} A_{n'}(\vec k)\\
&& {}+\int d\vec k' \mathcal{U}(\vec k-\vec k') A_{n}(\vec k')=E A_{n}(\vec k).\nonumber
\end{eqnarray}
In order to decouple the equation corresponding to the band $n$ from the
other bands, the terms involving $P_{\alpha}^{n n'}$, which couple the bands,
have to be removed to first order. Luttinger and Kohn obtain this result
by applying a suitable canonical transformation $T$:
\begin{equation}
A_n(\vec k)=\sum_{n'}\int d \vec k' \langle n \vec k | T | n' \vec k' \rangle
B_{n'}(\vec k'),
\end{equation}
which corresponds, more abstractly, to $\vec A=T \vec B$.
Writing $T=e^S$ and applying this transformation to the equation~(\ref{eqa1}),
which can be rewritten as $H \vec A=E \vec A$, with
$H=H_a+H_b+U$, we obtain
$(e^{-S} H e^S) \vec B=E \vec B$. After some calculations, it can be proved
that, choosing $S$ in such a way that $H_b+[H_a,S]=0$ (the square brackets
denoting the commutator), {\em i.e.}
\begin{equation}
\langle n \vec k | S | n' \vec k' \rangle=
\left\{ \begin{array}{ll}
\displaystyle
-\frac{\hbar \vec k \cdot \vec P^{n n'} \delta(\vec k-\vec k')}
{m_e(E_0^n-E_0^{n'})},
&\textrm{if $n \ne n'$,}\\
\displaystyle
0, &\textrm{if $n=n'$,}
\end{array}\right.
\end{equation}
and neglecting the terms of order $k^3$ and higher and the terms which assume
very small values for a ``gentle'' potential $U$, this equation becomes
\begin{eqnarray}
&& \left(E_0^n+\frac{\hbar^2 k^2}{2\,m_e}+
\frac{\hbar^2}{{m_e}^2}\sum_{\alpha,\beta=x,y,z}
k_{\alpha}k_{\beta}\sum_{\scriptstyle n'' \atop \scriptstyle n''\ne n}
\frac{P_{\alpha}^{n n''} P_{\beta}^{n'' n}}{E_0^n-E_0^{n''}}\right)
B_n(\vec k)\\
&& {}+\int \mathcal{U}(\vec k-\vec k') B_n (\vec k') d\vec k'=E B_n (\vec k),\nonumber
\end{eqnarray}
which can be written more briefly in this form:
\begin{equation}
\label{eqb}
E_n (\vec k) B_n(\vec k)+\int \mathcal{U}(\vec k-\vec k')B_n (\vec k')
d\vec k'=E B_n (\vec k),
\end{equation}
where $E_n (\vec k)$ is the dispersion relation in the absence of
$U(\vec r)$ expanded to second order in $\vec k$.
Converting eq.~(\ref{eqb}) from the momentum space to the position space
and defining the envelope function in this way
\begin{equation}
\label{eqf}
F_n(\vec r)=
\frac{1}{\sqrt{(2\pi)^3}}\int e^{i\vec k \cdot \vec r} B_n(\vec k)
d\vec k,
\end{equation}
the single band envelope function equation is obtained
\begin{equation}
\label{eqf1}
(E_n(-i\,\vec\nabla)+U(\vec r)) F_n(\vec r)=E F_n(\vec r),
\end{equation}
with $E_n(-i\,\vec\nabla)$ obtained expanding $E_n(\vec k)$ (the dispersion
relation in the absence of $U(\vec r)$) to second order in $\vec k$ around
$\vec k=0$ with non-degenerate perturbation theory and substituting
each component of $\vec k$ with the corresponding component of
$-i\,\vec\nabla$.
Since $F_n(\vec r)$ is a smooth function, it has significant Fourier
components only for small values of $\vec k$. As $S$ is also small for
small values of $\vec k$, for these components
$A_n(\vec k)=e^S B_n(\vec k)\simeq B_n(\vec k)$
and thus, exploiting the eqs.~(\ref{eqp}) and (\ref{eqf}), we have
\begin{equation}
\psi\simeq \sum_n \int d\vec k B_n(\vec k)
e^{i\,\vec k \cdot \vec r} \frac{u^n_0 (\vec r)}{\sqrt{(2\pi)^3}}=
\sum_n F_n(\vec r) u^n_0 (\vec r)
\end{equation}
and, noting that eq.~(\ref{eqf1}) contains no interband coupling,
\begin{equation}
\psi=F_n(\vec r) u^n_0 (\vec r)
\end{equation}
(as already seen). If the external
potential changes considerably within a cell in some region, the equation
we have derived is no longer valid there, but it remains valid in
regions of space sufficiently distant from that region.
\vskip2pt\noindent
Luttinger and Kohn then adopt an analogous procedure starting from the
Schr\"odinger equation written in the presence of an external magnetic
field. In this way, they demonstrate that in such a case
the envelope function satisfies an equation similar to the
one in the absence of a magnetic field, the only difference being that
the new Hamiltonian is obtained replacing, in the expansion of
$E_n (\vec k)$ to quadratic terms, each $k_\alpha$ by the operator
$-i\nabla_{\alpha}+e A_{\alpha}/\hbar$ (using the MKS system of units)
with $A_{\alpha}$ the $\alpha$-th component of the vector potential.
Moreover in the expansion of $E_n (\vec k)$ to the
second order any arising product of non-commuting factors has to be
interpreted as the symmetrized product.
In the case in which the extremum is at $\vec k=\vec k_0 \ne 0$, the
derivations (both with and without an external
magnetic field) can be repeated by just replacing $u_0^n (\vec r)$
(the eigenfunctions of the Hamiltonian for $\vec k=0$ in the absence
of $U (\vec r)$ and of an external magnetic field) with
$\phi_{n \vec k_0}\equiv
e^{i\,\vec k_0 \cdot \vec r}u_{\vec k_0}^n (\vec r)$
(the eigenfunctions of the Hamiltonian for $\vec k=\vec k_0$ in the absence
of $U (\vec r)$ and of an external magnetic field). Indeed, it can be seen
that the functions $\varphi_{n \vec \kappa}\equiv
e^{i\,\vec \kappa \cdot \vec r} (e^{i\,\vec k_0 \cdot \vec r}
u^n_0 (\vec r)/\sqrt{(2\pi)^3})$ have properties analogous to those
previously seen for the $\chi_{n \vec k}=e^{i\,\vec k \cdot \vec r}
u^n_0 (\vec r)/\sqrt{(2\pi)^3}$: considered as functions of $\vec r$
and $\vec \kappa$, they are a complete orthonormal set of functions (such
that $\langle \varphi_{n \vec \kappa} | \varphi_{n' \vec \kappa'} \rangle=
\delta_{n n'}\delta(\vec \kappa-\vec \kappa')$)
and the momentum matrix elements computed at $\vec k_0$, defined as
\begin{equation}
P_{\alpha}^{n n'}=\frac{1}{V_c}\int_{V_c} {u^n_{\vec k_0}}^*
(\hbar k_{0_{\alpha}}-i\hbar \nabla_{\alpha})
u^{n'}_{\vec k_0} d\vec r,
\end{equation}
have properties analogous to those seen in the case in which
$\vec k_0=0$.
In this case the relation between the wave function and the
envelope function is
\begin{equation}
\label{k0}
\psi=F_n(\vec r)(e^{i\,\vec k_0 \cdot \vec r}u_{\vec k_0}^n (\vec r))
\end{equation}
and the envelope function equation is
\begin{equation}
\left[E_n(\vec k_0-i\vec\nabla)+U\right] F_n=E F_n,
\end{equation}
in the absence of a magnetic field, and
\begin{equation}
\left[E_n \left(\vec k_0-i\vec\nabla+\frac{e \vec A}{\hbar}\right)+U\right]
F_n=E F_n,
\end{equation}
in the presence of a magnetic field. As before, in these expressions
an expansion of $E_n$ around $\vec k_0$ to second-order terms in
$-i\vec\nabla$ and in $-i\vec\nabla+e\vec A/\hbar$, respectively, is meant.
If there are extrema at several different values of $\vec k_0$ within the
band, we obtain an envelope function equation for each of them; if the
solutions corresponding to the different $\vec k_0$ values have different
energies, the corresponding wave functions represent independent solutions
of the Schr\"odinger equation; otherwise the correct wave function will be
a linear combination of those from the different extrema associated with
the same energy.
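For a concrete feel of eq.~(\ref{eqf1}), consider a non-degenerate band with an isotropic effective mass $m^*$, so that $E_n(-i\,\vec\nabla)+U$ reduces in one dimension to $-(\hbar^2/2m^*)\,d^2/dz^2+U(z)$. The following numerical sketch (not part of Luttinger and Kohn's treatment; units with $\hbar=m^*=1$ and a harmonic test potential are assumptions chosen so that the exact eigenvalues $n+1/2$ are known) diagonalizes a finite-difference discretization of this operator:

```python
import numpy as np

# Finite-difference sketch of the effective-mass envelope equation
#   -(1/2) F''(z) + U(z) F(z) = E F(z)   (hbar = m* = 1, an assumption)
# with the illustrative harmonic potential U(z) = z^2 / 2.
N, L = 1000, 20.0
z = np.linspace(-L / 2, L / 2, N)
dz = z[1] - z[0]
U = 0.5 * z**2

# Tridiagonal Hamiltonian: kinetic term from the 3-point second derivative.
diag = 1.0 / dz**2 + U
off = -0.5 / dz**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
print(E)  # close to the exact oscillator levels 0.5, 1.5, 2.5
```

The same discretization, with a realistic $m^*$ and band edge, gives the bound states of eq.~(\ref{eqf1}) for any slowly varying $U$.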
When the band of interest is degenerate, Luttinger and Kohn, using
a similar calculation, arrive at a set of coupled second-order equations
which correspond to the effective mass equation found in the case of
non-degenerate bands.
In particular (assuming for simplicity that the degeneracy occurs at
$\vec k=0$) they assume that, at $\vec k=0$, there are $r$ unperturbed
degenerate Bloch lattice functions corresponding to the same unperturbed
energy $E_0^j$ (where ``unperturbed'' means for $\vec k=0$ and in
the absence of $U (\vec r)$ and of an external magnetic field) and they define
them as $\phi_j$ (with $j=1,\ldots , r$, where $r$ is the
degeneracy), {\em i.e.}
\begin{equation}
H^{(0)} \phi_j=E_0^j \phi_j
\end{equation}
(notice that the $\phi_j$'s, {\em i.e.} the $u^n_0$'s, can be seen as Bloch functions
$e^{i\,\vec k \cdot \vec r} u^n_{\vec k}$ for $\vec k=0$ and thus they
have to satisfy the Schr\"odinger equation for $\vec k=0$).
They indicate instead as $\phi_i$ (with $i\ne 1,\ldots , r$) the
unperturbed Bloch lattice functions at $\vec k=0$ corresponding to
the other bands, which are not degenerate with the $\phi_j$'s.
If the crystal has a center of symmetry, it can be proved that the
momentum matrix elements between different $\phi_j$'s vanish, {\em i.e.}
$P_{\alpha}^{j j'}=0$.
Luttinger and Kohn introduce the complete set of functions
$| n \vec k \rangle=\phi_{n \vec k}=e^{i\vec k \cdot \vec r}\phi_n
/\sqrt{(2\pi)^3}$
(where $\phi_n$ indicates both the $\phi_j$'s and the $\phi_i$'s).
Using this basis, they can expand the wave function in this way:
\begin{equation}
\psi=\sum_n \int d\vec k A_n(\vec k) \phi_{n \vec k}
\end{equation}
and rewrite the Schr\"odinger equation as
\begin{equation}
\sum_{n'} \int d\vec k' \langle n \vec k | H^{(0)}+U | n' \vec k' \rangle
A_{n'}(\vec k')=E A_{n}(\vec k),
\end{equation}
thus obtaining:
\vskip2pt\noindent
\begin{eqnarray}
&& \left(E_0^j+\frac{\hbar^2 k^2}{2\,m_e}\right) A_j(\vec k)+\sum_{\alpha=x,y,z}
\sum_i \frac{\hbar k_{\alpha} P_{\alpha}^{j i}}{m_e} A_i (\vec k)\\
&& {}+\int d\vec k' \mathcal{U}(\vec k-\vec k') A_j (\vec k')=E A_j (\vec k)\nonumber
\end{eqnarray}
\vskip2pt\noindent
(writing only the equations corresponding to the degenerate states $j$).
In order to decouple the equations corresponding to the states $j$ from those
of the states $i$, a proper canonical transformation $A=T B=e^S B$ is again
applied, with
\vskip2pt\noindent
\begin{equation}
\langle n \vec k | S | n' \vec k' \rangle=
\left\{ \begin{array}{ll}
\displaystyle
-\frac{\hbar \vec k \cdot \vec P^{n n'} \delta(\vec k-\vec k')}
{m_e (E_0^n-E_0^{n'})},
&\textrm{if $n$ or $n' \notin [1,r]$,}\\
\displaystyle
0, &\textrm{if $n$ and $n'\in [1,r]$.}
\end{array}\right.
\end{equation}
\vskip2pt\noindent
In this way Luttinger and Kohn obtain, to second-order terms in $k$, the
following set of equations for the $r$ degenerate states:
\begin{eqnarray}
&& \sum_{j'=1}^r \left(E_0^j \delta_{j j'}+
\sum_{\alpha,\beta=x,y,z} \left(D_{jj'}^{\alpha\beta}
k_{\alpha} k_{\beta}\right)\right) B_{j'} (\vec k)\\
&& {}+\int \mathcal{U}(\vec k-\vec k') B_j (\vec k') d\vec k'=E B_j (\vec k),\nonumber
\end{eqnarray}
with
\begin{equation}
D_{jj'}^{\alpha\beta}=\frac{\hbar^2}{2\,m_e}\delta_{j\,j'}
\delta_{\alpha\,\beta}+\frac{\hbar^2}{{m_e}^2}\sum_i
\frac{P_{\alpha}^{j\,i} P_{\beta}^{i\,j'}}{(E_0^j-E_0^i)}.
\end{equation}
Therefore, introducing again the envelope functions
\begin{equation}
F_j (\vec r)=
\frac{1}{\sqrt{(2\pi)^3}}\int e^{i\vec k \cdot \vec r} B_j (\vec k)
d\vec k,
\end{equation}
Luttinger and Kohn arrive at the conclusion that the $r$ envelope
functions $F_j (\vec r)$ corresponding to the originally degenerate
energy bands satisfy the $r$ coupled differential equations
\begin{equation}
\label{coupled}
\sum_{j'=1}^r \left(E_0^j \delta_{j j'}+
\sum_{\alpha, \beta=x,y,z} \left(D_{jj'}^{\alpha\beta}
(-i\nabla_{\alpha})(-i\nabla_{\beta})\right)
+U (\vec r)\delta_{j\,j'}\right) F_{j'} (\vec r)=E F_j (\vec r)
\end{equation}
(if the energy zero is set at $E_0^j$ the term $E_0^j \delta_{j j'}$
disappears).
Analogously to what happens in the non-degenerate case, for small
values of $\vec k$, $A_n(\vec k)\simeq B_n(\vec k)$ and thus
\begin{equation}
\label{deg}
\psi\simeq \sum_n \int d\vec k B_n(\vec k)
e^{i\,\vec k \cdot \vec r} \frac{\phi_n (\vec r)}{\sqrt{(2\pi)^3}}=
\sum_n F_n(\vec r) \phi_n (\vec r)\simeq
\sum_{j=1}^r F_j (\vec r) \phi_j (\vec r),
\end{equation}
since in eq.~(\ref{coupled}) no coupling remains between
the states $j$ and the states $i$.
The coefficients $D_{jj'}^{\alpha\beta}$ play the same role in the case
of degenerate bands as $\hbar^2 / ( 2\,m_{\alpha\beta}^*)$ does
for a non-degenerate band.
As before, in the presence of a magnetic field the components of
$-i\vec\nabla$ appearing in the envelope function equations will be
replaced with the corresponding components of $-i\vec\nabla+e\vec A/\hbar$.
In the presence of spin-orbit coupling, Luttinger and Kohn adopt the same
treatment, considering the spin-orbit contribution as
part of the unperturbed Hamiltonian (therefore the total unperturbed
Hamiltonian will be $H^{(0)}+H_{SO}$) and assuming the Bloch lattice functions
and the corresponding energies for $\vec k=0$ of $H^{(0)}+H_{SO}$ as known
quantities. Thus the $u^n_0$ are replaced with the $\overline{u}^n_0$
(the spinorial Bloch lattice functions for $\vec k=0$ in the presence of
spin-orbit interaction), $E_n (\vec k)$ by $\overline{E}_n (\vec k)$ (the
dispersion relations in the presence of spin-orbit interaction) and the
$P_{\alpha}^{n\,n'}$ by
\begin{equation}
(\pi_{\alpha})_{n\,n'}= \langle \overline{u}_0^n|
\left(-i\,\hbar\nabla_{\alpha}
+\frac{\hbar}{4\,m_ec^2}(\vec\sigma\times\vec\nabla V)_{\alpha}
\right)|\overline{u}_0^{n'} \rangle,
\end{equation}
where the extra term arises from the fact that the spin-orbit coupling
contains the differential operator $\vec p$.
When we treat energy bands which are degenerate in the absence of spin-orbit
interaction, we have to remember that (as seen previously) the spin-orbit
coupling can lift, at least partially, the degeneracy. In such a case, we have
to consider that the validity of the adopted theory rests on
the assumption that the interband separations are large compared with the
energies involved in the solution of the envelope function equation. Thus
we have to evaluate whether the external potential $U$ or the magnetic field are
sufficiently small to produce no appreciable mixing of the bands, the
degeneracy of which has been lifted by the spin-orbit coupling. If they
are sufficiently small, we can obtain a different set of coupled envelope
function equations for each set of bands that have remained degenerate;
otherwise we will have to deal with the full set of coupled equations for all
the bands that are degenerate in the absence of spin-orbit coupling.
We can introduce a matrix $D$, the elements of which are
\begin{equation}
D_{jj'}=\sum_{\alpha, \beta} D_{jj'}^{\alpha\beta}k_{\alpha}k_{\beta}.
\end{equation}
If in these matrix elements we replace each component of the vector
$\vec k$ with the corresponding component of the operator
$-i\,\vec\nabla+e \vec A/\hbar$, we obtain the terms which appear in the
envelope function coupled equations.
In particular, the envelope function coupled equations written in
the absence of an external perturbation read (if we set the energy zero
at $E_0^j$)
\begin{equation}
\sum_{j'=1}^r \sum_{\alpha, \beta}(D_{jj'}^{\alpha\beta}
(-i\nabla_{\alpha})(-i\nabla_{\beta}))
F_{j'} (\vec r)=E F_j (\vec r).
\end{equation}
If we convert them from the position
representation to the momentum representation, we obtain
\begin{eqnarray}
\label{momentum}
&& \sum_{j'=1}^r \sum_{\alpha, \beta} (D_{jj'}^{\alpha\beta}
k_{\alpha}k_{\beta}) B_{j'} (\vec k) = E B_j (\vec k)\Rightarrow \\
&& \sum_{j'=1}^r D_{jj'}B_{j'} (\vec k) = E B_j (\vec k)\Rightarrow
D \vec B = E \vec B,\nonumber
\end{eqnarray}
from which it is evident that the dispersion relations $E(\vec k)$ near
the extremum can be obtained by finding the eigenvalues of the matrix $D$.
We notice that this clearly corresponds to what happens in the
non-degenerate case, in which (as we have seen) the envelope function
equation contains $E_n(-i\,\vec\nabla)$ (the dispersion
relation in the absence of external potential energy or magnetic field,
where each component of $\vec k$ is replaced with the corresponding
component of $-i\,\vec\nabla$).
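This diagonalization is straightforward to carry out numerically. The sketch below uses a hypothetical $2\times 2$ matrix $D(\vec k)$ with made-up coefficients (an illustration only, not a real material) and extracts the two dispersion branches as its eigenvalues along a line in $\vec k$-space:

```python
import numpy as np

# Toy D(k) for two coupled bands; a, b, c are illustrative numbers.
a, b, c = 1.0, 0.4, 0.3

def D(kx, ky):
    # quadratic diagonal terms plus an off-diagonal k-dependent coupling
    return np.array([[a * (kx**2 + ky**2), c * kx * ky],
                     [c * kx * ky,         b * (kx**2 + ky**2)]])

ks = np.linspace(0.0, 1.0, 5)
branches = np.array([np.linalg.eigvalsh(D(k, k)) for k in ks])
print(branches)  # two E(k) branches, degenerate at k = 0, split away from it
```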
In order to determine the number of independent parameters which appear
in the matrix $D$, the symmetry properties of the considered lattices
are exploited.
In \cite{luttinger2} Luttinger proposes a different
way to obtain an explicit expression for $D$, based only on symmetry
arguments. He writes this matrix for diamond-type semiconductors using
group theory, in particular considering that the Hamiltonian $D$ should
be invariant under the operations of the cubic group (so that the
Hamiltonian will give us results which transform correctly with respect
to the transformations of the cubic group, which is the symmetry group
of $\vec k$) and thus writing $D$ as a linear combination of the invariants
obtained by combining angular momentum matrices and components of $\vec k$.
The elements of such a matrix are polynomials in the components of
$\vec k$, at most of the second order, and involve
parameters characteristic of the materials, which have been experimentally
determined and are available for most common semiconductors \cite{lawaetz}.
For example, in the case of the $4\times 4$ matrix $D$ corresponding to the
light-hole and heavy-hole bands (the extra factor of $2$ coming from spin)
they are $\gamma_1$, $\gamma_2$, $\gamma_3$, $\kappa$ (which is useful
in the presence of an external magnetic field) and $q$ (which approaches zero
as the spin-orbit coupling does).
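As an illustration of how these parameters enter the dispersion, along the $[001]$ direction the $4\times 4$ light/heavy-hole problem decouples into two doubly degenerate branches $E=-(\hbar^2 k^2/2m_e)(\gamma_1\mp 2\gamma_2)$, giving band-edge masses $m_{hh}=m_e/(\gamma_1-2\gamma_2)$ and $m_{lh}=m_e/(\gamma_1+2\gamma_2)$ (a standard result; the numerical values below are placeholders, not data for a specific semiconductor):

```python
# Hole masses along [001] from the Luttinger parameters (placeholder values).
gamma1, gamma2 = 6.0, 2.0                 # illustrative, not a real material
m_hh = 1.0 / (gamma1 - 2.0 * gamma2)      # heavy hole, in units of m_e
m_lh = 1.0 / (gamma1 + 2.0 * gamma2)      # light hole, in units of m_e
print(m_hh, m_lh)  # 0.5 0.1
```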
Bir and Pikus \cite{bir} have shown that in uniformly strained semiconductors,
for which the periodicity of the structure is preserved, the strain introduces
into the dispersion relation of non-degenerate bands an extra term of
the form
\begin{equation}
a_c(\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz})
\end{equation}
and in the Hamiltonian of degenerate bands additional terms of the form
\begin{equation}
\sum_{\alpha ,\beta}\hat D_{j\,j'}^{\alpha\beta}
\epsilon_{\alpha\beta},
\end{equation}
where $\alpha ,\beta=x, y, z$ and $\epsilon_{\alpha\,\beta}$ is the generic
component of the strain matrix.
\begin{figure}
\centering
\includegraphics[width=.4\textwidth,angle=0]{hetrev.eps}
\caption{Heterojunction between two semiconductors $A$ and $B$.}
\label{f3}
\end{figure}\noindent
Bastard \cite{bastard1,bastard2,bastard3} uses the envelope function
method to study heterostructures, for example made up of two materials $A$
and $B$ (fig.~\ref{f3}).
In particular, he assumes that the two materials are perfectly lattice-matched
and crystallize with the same crystallographic structure, so that the
functions $u_0^n (\vec r)$ in the two materials can be considered identical.
With this hypothesis, if in each material the wave functions are written as
\begin{equation}
\psi^{(A, B)}=\sum_n F_n^{(A, B)}(\vec r) u^n_0 (\vec r),
\end{equation}
it is evident that, since the $u^n_0$ are linearly independent and
the wave function has to be continuous at the interface, the envelope
functions also have to be continuous at the interface. For the derivative of
the envelope functions, Bastard finds, by enforcing the continuity of the
probability current at the interface, a general condition \cite{taylor},
which, in the simple case of two materials that are both characterized
by non-degenerate parabolic and isotropic bands but with different effective
masses $m^*_{(A)}$ and $m^*_{(B)}$, reduces to enforcing the continuity of
\begin{equation}
\frac{1}{m^*} \frac{\partial F_n}{\partial z}
\end{equation}
(where we have assumed the $\hbox{\boldmath{$\hat z$}}$ axis orthogonal to
the interface).
This can be easily obtained in this case by enforcing the continuity of the
$z$ component of the probability current density, which is equal to
\begin{equation}
j_z=-\frac{i\,\hbar}{2\,m}\left(\psi^* \frac{\partial \psi}{\partial z}
- \psi \frac{\partial \psi^*}{\partial z}\right)
\end{equation}
and noting that the continuity of the envelope function has already been
enforced. As to the asymptotic behavior of the envelope functions far
from the interface, it depends on the heterostructure under consideration.
For example, for superlattices the $z$-dependent part of the envelope
function will be a Bloch wave, due to the periodicity of the structure in
that direction, while for the bound states of a quantum well it should
tend to zero for large $|z|$. Thus the envelope functions in the overall
structure can be found by solving the envelope function equations in
the different materials, using the known asymptotic behavior far from the
interface and enforcing the correct boundary conditions at the interface.
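Bastard's matching rule can also be checked numerically. The sketch below (assumptions: $\hbar=1$ and illustrative well width, depth and masses) discretizes the position-dependent-mass operator $-\frac{1}{2}\frac{d}{dz}\frac{1}{m(z)}\frac{dF}{dz}+V(z)F$ with a flux-conserving scheme, which keeps $(1/m)\,dF/dz$ continuous across the mass step by construction:

```python
import numpy as np

# Flux-conserving discretization of -(1/2) d/dz [ (1/m(z)) dF/dz ] + V(z),
# the operator consistent with continuity of F and of (1/m) dF/dz.
def hamiltonian(z, m, V):
    N = len(z)
    dz = z[1] - z[0]
    m_half = 0.5 * (m[:-1] + m[1:])   # effective mass at midpoints i + 1/2
    H = np.zeros((N, N))
    for i in range(N):
        if i > 0:
            H[i, i - 1] = -1.0 / (2 * dz**2 * m_half[i - 1])
        if i < N - 1:
            H[i, i + 1] = -1.0 / (2 * dz**2 * m_half[i])
        H[i, i] = V[i] - (H[i, i - 1] if i > 0 else 0.0) \
                       - (H[i, i + 1] if i < N - 1 else 0.0)
    return H

# Illustrative quantum well: |z| < 3 with m* = 0.3, barriers with m* = 0.5.
z = np.linspace(-10.0, 10.0, 800)
m = np.where(np.abs(z) < 3, 0.3, 0.5)
V = np.where(np.abs(z) < 3, 0.0, 1.0)
E = np.linalg.eigvalsh(hamiltonian(z, m, V))
print(E[0])  # ground bound state: above the well bottom, below the barrier
```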
Bastard has also made an extensive analysis of the applications
of this method \cite{bastard4}.
M. Altarelli has also made important contributions to the
development of the envelope function method \cite{altarelli4} and to
its applications to the study of heterostructures \cite{altarelli1,
altarelli2,altarelli3}.
M. G. Burt \cite{burt1,burt2,burt3,burt4,burt5} has pointed out the
errors deriving from the assumption, normally made in the application
of the envelope function method to heterostructures, that the
$u^n_0 (\vec r)$ in the two materials are the same and from the
boundary condition enforced on the derivative of the envelope function
at the interface. In a series of interesting and detailed articles he
has developed an alternative envelope function theory,
expanding the wave function in the overall structure on the same periodic
basis functions $U_n (\vec r)$ throughout, even though they are not
necessarily eigenstates of the constituent crystals, without making
any hypothesis about the real eigenstates $u^n_0 (\vec r)$:
\begin{equation}
\psi (\vec r)=\sum_n F_n (\vec r) U_n (\vec r).
\end{equation}
The envelope functions $F_n (\vec r)$ uniquely defined in this way,
and all their derivatives, are certainly continuous everywhere,
including at the interface. Using this approach, he has first derived exact
envelope function equations, then, for local potentials and slowly varying
envelope functions (but without any assumption on the rate of variation
of the composition), he has formulated approximate envelope function
equations, and finally, with the assumption of the dominance
of one envelope function, he has arrived at an effective-mass equation that
includes also the effect of the differences in the $u^n_0 (\vec r)$ between
the two materials. At each step the associated approximations are accurately
described, so that it is possible to estimate the error.
A more detailed description of the applications of the $\vec k \cdot \vec p$
method to materials with a diamond, zincblende and wurtzite lattice,
both in the periodic and in the non-periodic case, can be found (besides
in the other books and in the original publications reported in the list of
references of this review) in the recent book by L.~C.~Lew Yan Voon and
M.~Willatzen~\cite{voon}.
\section{Application of the $\vec k \cdot \vec p$ method to graphene}
In recent years the $\vec k \cdot \vec p$ method, and in particular the
formulation (described in the previous section) based on the envelope
functions, has been successfully applied to the analysis of the electronic
properties of graphene and graphene-related structures, such as carbon
nanotubes and graphene nanoribbons.
In this section we will begin the description of this particular application
deriving the $\vec k \cdot \vec p$ relations for a simple sheet of graphene.
\begin{figure}
\centering
\includegraphics[width=.65\textwidth,angle=0]{rev1c.eps}
\caption{The graphene lattice in the real space (a) and in the reciprocal
space (b).}
\label{f4}
\end{figure}\noindent
A graphene sheet is a hexagonal lattice of carbon atoms. In fig.~\ref{f4}(a)
we show its structure in the real space and, in particular, its unit cell
as a dashed rhombus, containing two inequivalent carbon atoms $A$ and $B$,
while in fig.~\ref{f4}(b) we show the lattice in the reciprocal space with
the Brillouin zone as a shaded hexagon. The lattice unit vectors are
$\vec a_1$ and $\vec a_2$ in the real space, and $\vec b_1$ and $\vec b_2$
in the reciprocal space. If we define
$a=|\vec a_1|=|\vec a_2|=a_{C-C}\,\sqrt{3}$
(with $a_{C-C}$ the distance between nearest-neighbor carbon atoms),
the coordinates of these vectors in the right-handed reference frame
$\Sigma'=(\hbox{\boldmath{$\hat x$}}',\hbox{\boldmath{$\hat y$}}',
\hbox{\boldmath{$\hat z$}}')$ are (observe that we have taken
$\hbox{\boldmath{$\hat x$}}'$ along the vector $\vec a_1+\vec a_2$)
\begin{equation}
\vec a_1 \mathrel{\mathop\equiv_{\Sigma'}} \left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}a\\
\noalign{\vskip3pt}
\displaystyle \frac{a}{2}\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec a_2 \mathrel{\mathop\equiv_{\Sigma'}} \left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}a\\
\noalign{\vskip3pt}
\displaystyle -\frac{a}{2}\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec b_1 \mathrel{\mathop\equiv_{\Sigma'}} \left[\begin{array}{c}
\displaystyle \frac{2\pi}{\sqrt{3}a}\\
\noalign{\vskip3pt}
\displaystyle \frac{2\pi}{a}\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec b_2 \mathrel{\mathop\equiv_{\Sigma'}} \left[\begin{array}{c}
\displaystyle \frac{2\pi}{\sqrt{3}a}\\
\noalign{\vskip3pt}
\displaystyle -\frac{2\pi}{a}\\
\noalign{\vskip3pt}
0
\end{array}\right]
\end{equation}
(following the conventions used by R.~Saito, G.~Dresselhaus and
M.~S.~Dresselhaus \cite{saito}), which (being
$\vec b_1=2\pi (\vec a_2 \times \hbox{\boldmath{$\hat z$}}')/
(\vec a_1 \cdot (\vec a_2 \times \hbox{\boldmath{$\hat z$}}'))$ and
$\vec b_2=2\pi (\hbox{\boldmath{$\hat z$}}' \times \vec a_1)/
(\vec a_1 \cdot (\vec a_2 \times \hbox{\boldmath{$\hat z$}}'))$)
fulfill the well-known relation
$\vec a_i \cdot \vec b_j=2 \pi \delta_{ij}$
between lattice unit vectors in the real space and in the reciprocal space.
Note that the letter written under the symbol ``$\equiv$'' indicates the
adopted reference frame.
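These relations are easy to verify numerically (with the illustrative choice $a=1$):

```python
import numpy as np

# Check that the graphene lattice vectors above satisfy a_i . b_j = 2 pi d_ij.
a = 1.0
a1 = np.array([np.sqrt(3) / 2 * a,  a / 2, 0.0])
a2 = np.array([np.sqrt(3) / 2 * a, -a / 2, 0.0])
b1 = np.array([2 * np.pi / (np.sqrt(3) * a),  2 * np.pi / a, 0.0])
b2 = np.array([2 * np.pi / (np.sqrt(3) * a), -2 * np.pi / a, 0.0])

G = np.array([[np.dot(ai, bj) for bj in (b1, b2)] for ai in (a1, a2)])
print(G / (2 * np.pi))  # the 2x2 identity matrix
```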
The most relevant graphene dispersion relations for transport and other
solid-state properties are the two $\pi$-bands (an upper anti-bonding band
and a lower bonding band), which are degenerate at the points
(considering the point $\Gamma$ at the center of the hexagonal Brillouin zone
of graphene as the origin of the reciprocal space)
\begin{equation}
\label{diracpoints}
\vec K=\frac{1}{3}(\vec b_2-\vec b_1) \mathrel{\mathop\equiv_{\Sigma'}}
\frac{4\pi}{3a}
\left[\begin{array}{c}
0\\
-1\\
0
\end{array}\right]
\quad\hbox{and}\quad
\vec K'=\frac{1}{3}(\vec b_1-\vec b_2) \mathrel{\mathop\equiv_{\Sigma'}}
\frac{4\pi}{3a}
\left[\begin{array}{c}
0\\
1\\
0
\end{array}\right]
\end{equation}
and obviously at their equivalents in the reciprocal space
(as we can see from fig.~\ref{f5}, which has been obtained with a
nearest-neighbor tight-binding approach limited to the $2 p^z$ atomic
orbitals, with nonzero nearest-neighbor overlap integral).
Thus we can use the $\vec k \cdot \vec p$ method to find the dispersion
relations of graphene near these extrema points (called Dirac points),
following T.~Ando's approach \cite{ajiki,ando1,ando2}.
However, in our description we will continue to use the conventions of
ref.~\cite{saito} and we will consider the pair (\ref{diracpoints}) of
Dirac points (which will simplify the treatment of zigzag and armchair
graphene nanoribbons in the last section of this review).
Other articles where a $\vec k \cdot \vec p$ treatment of graphene is
introduced are refs.~\cite{wallace,mcclure,slonczewski,divincenzo,
semenoff,kanemele}.
\begin{figure}
\centering
\includegraphics[width=.55\textwidth,angle=0]{rev2b.eps}
\caption{The energy dispersion relations of graphene inside its
hexagonal Brillouin zone.}
\label{f5}
\end{figure}\noindent
We start from a simple tight-binding model, in which we use as basis
functions the $2 p^z$ orbitals of all the carbon atoms of the graphene
sheet, which are the orbitals leading to the $\pi$-bonds and
thus to the above-mentioned two $\pi$-bands.
The generic eigenfunction in the material can be expressed \cite{ando1,ando2}
as a linear combination (with coefficients $\psi_A (\vec R_A)$ and
$\psi_B (\vec R_B)$) of these atomic orbitals $\varphi(\vec r -\vec R_A)$ and
$\varphi(\vec r -\vec R_B)$ (centered on atoms of type $A$ and $B$,
respectively)
\begin{equation}
\label{wavefunction}
\psi (\vec r)=
\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B),
\end{equation}
where the first (second) sum runs over all the positions of the atoms of
type $A$ ($B$) in the lattice.
Using the eigenvalue equation of the Hamiltonian operator
\begin{equation}
\label{hamiltonian}
H |\psi \rangle =E |\psi \rangle,
\end{equation}
we have that \cite{saito}
\begin{equation}
\langle \psi| H |\psi \rangle =E \langle \psi |\psi \rangle
\end{equation}
and thus (using $j$ and $j'$ to indicate the type of the atoms and $n$ and
$m$ to specify the particular atoms)
\begin{eqnarray}
\label{defe}
\qquad E &=& \frac{\langle \psi| H |\psi \rangle}
{\langle \psi |\psi \rangle }=\\
&& \frac{\displaystyle
\left<\sum_{j=A, B}\sum_{\vec R_{jn}}\psi_j (\vec R_{jn})
\varphi(\vec r -\vec R_{jn}) \Bigg|
H \Bigg|\sum_{j'=A, B}\sum_{\vec R_{j'm}}
\psi_{j'} (\vec R_{j'm})\varphi(\vec r -\vec R_{j'm})\right>}
{\displaystyle
\left<\sum_{j=A, B}\sum_{\vec R_{jn}}\psi_j (\vec R_{jn})
\varphi(\vec r -\vec R_{jn}) \Bigg|
\sum_{j'=A, B}\sum_{\vec R_{j'm}}
\psi_{j'} (\vec R_{j'm})\varphi(\vec r -\vec R_{j'm})\right>}=\nonumber\\
&& \frac{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm})
\langle \varphi(\vec r -\vec R_{jn})| H |\varphi(\vec r -\vec R_{j'm}) \rangle}
{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm})
\langle \varphi(\vec r -\vec R_{jn})|\varphi(\vec r -\vec R_{j'm}) \rangle }=\nonumber\\
&& \frac{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm}) h_{\vec R_{jn},\vec R_{j'm}}}
{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm})
s_{\vec R_{jn},\vec R_{j'm}}},\nonumber
\end{eqnarray}
where we have introduced the transfer integrals $h_{\vec R_{jn},\vec R_{j'm}}$
and the overlap integrals $s_{\vec R_{jn},\vec R_{j'm}}$ between atomic
orbitals. Now we can minimize $E$ (to obtain the actual physical state)
by enforcing (for each coefficient, and thus for each atom)
\begin{eqnarray}
\frac{\partial E}{\partial \psi_j^* (\vec R_{jn})} &=&
\frac{\displaystyle
\sum_{j'=A, B}\sum_{\vec R_{j'm}} \psi_{j'} (\vec R_{j'm})
h_{\vec R_{jn},\vec R_{j'm}}}
{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm}) s_{\vec R_{jn},\vec R_{j'm}}}\\
&&{}-\frac{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm}) h_{\vec R_{jn},\vec R_{j'm}}}
{\displaystyle
\Big( \sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm})
s_{\vec R_{jn},\vec R_{j'm}}\Big)^2}\nonumber\\
&&{}\cdot\sum_{j'=A, B}\sum_{\vec R_{j'm}}
\psi_{j'} (\vec R_{j'm}) s_{\vec R_{jn},\vec R_{j'm}}=0.\nonumber
\end{eqnarray}
Multiplying both sides by the denominator of eq.~(\ref{defe}) and
rearranging, we find:
\begin{eqnarray}
&& \sum_{j'=A, B}\sum_{\vec R_{j'm}} \psi_{j'} (\vec R_{j'm})
h_{\vec R_{jn},\vec R_{j'm}}=\\
&& \frac{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm}) h_{\vec R_{jn},\vec R_{j'm}}}
{\displaystyle
\sum_{j,j'=A, B}\sum_{\vec R_{jn}}\sum_{\vec R_{j'm}}
\psi_j^* (\vec R_{jn})\psi_{j'} (\vec R_{j'm}) s_{\vec R_{jn},\vec R_{j'm}}}
\sum_{j'=A, B}\sum_{\vec R_{j'm}}
\psi_{j'} (\vec R_{j'm}) s_{\vec R_{jn},\vec R_{j'm}}\nonumber
\end{eqnarray}
and recognizing that the fraction on the right-hand side is the expression
of $E$, we have
\begin{equation}
\sum_{j'=A, B}\sum_{\vec R_{j'm}} \psi_{j'} (\vec R_{j'm})
h_{\vec R_{jn},\vec R_{j'm}}
=E \sum_{j'=A, B}\sum_{\vec R_{j'm}}
\psi_{j'} (\vec R_{j'm}) s_{\vec R_{jn},\vec R_{j'm}}.
\end{equation}
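In matrix form the last relation is the generalized eigenvalue problem $h\,\psi=E\,s\,\psi$, with $h$ and $s$ the Hermitian transfer and overlap matrices. A toy numerical check (random $4\times 4$ matrices, an assumption purely for illustration) confirms that the minimizer of the Rayleigh quotient of eq.~(\ref{defe}) satisfies exactly this condition:

```python
import numpy as np

# Toy check that minimizing the Rayleigh quotient E = (psi' h psi)/(psi' s psi)
# leads to the generalized eigenproblem h psi = E s psi, solved here by
# reducing it to an ordinary one with s^{-1/2}.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
h = A + A.T                         # Hermitian "transfer integral" matrix
B = rng.standard_normal((4, 4))
s = B @ B.T + 4 * np.eye(4)         # positive-definite "overlap" matrix

w, V = np.linalg.eigh(s)
s_inv_half = V @ np.diag(w**-0.5) @ V.T
E, Y = np.linalg.eigh(s_inv_half @ h @ s_inv_half)
psi = s_inv_half @ Y[:, 0]          # lowest generalized eigenvector

# h psi = E s psi, and E equals the Rayleigh quotient at its minimum
print(np.allclose(h @ psi, E[0] * (s @ psi)))
print(np.isclose(psi @ h @ psi / (psi @ s @ psi), E[0]))
```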
Let us expand this result for the coefficients (and thus for the atoms)
with $j=A$ and for those with $j=B$:
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\sum_{\vec R_{Am}} \psi_A (\vec R_{Am}) h_{\vec R_{An},\vec R_{Am}}+
\sum_{\vec R_{Bm}} \psi_B (\vec R_{Bm}) h_{\vec R_{An},\vec R_{Bm}}=\\
\displaystyle
\qquad
E \left(\sum_{\vec R_{Am}}\psi_A (\vec R_{Am}) s_{\vec R_{An},\vec R_{Am}}
\!+\!\sum_{\vec R_{Bm}}\psi_B (\vec R_{Bm})
s_{\vec R_{An},\vec R_{Bm}}\right);\\
\displaystyle
\sum_{\vec R_{Am}} \psi_A (\vec R_{Am}) h_{\vec R_{Bn},\vec R_{Am}}+
\sum_{\vec R_{Bm}} \psi_B (\vec R_{Bm}) h_{\vec R_{Bn},\vec R_{Bm}}=\\
\displaystyle
\qquad
E \left(\sum_{\vec R_{Am}}\psi_A (\vec R_{Am}) s_{\vec R_{Bn},\vec R_{Am}}
\!+\!\sum_{\vec R_{Bm}}\psi_B (\vec R_{Bm}) s_{\vec R_{Bn},\vec R_{Bm}}\right).
\end{array}\right.
\end{equation}
We consider non-negligible only the integrals between each atom and
itself and between each atom and its nearest neighbors (which are the
nearest three $B$ atoms for an $A$ atom, while they are the nearest
three $A$ atoms for a $B$ atom).
Therefore, if (in order to simplify the notation) we rename $\vec R_{An}$ as
$\vec R_A$ and $\vec R_{Bn}$ as $\vec R_B$ and we use the index $l$ to
indicate the nearest three atoms, we can rewrite these equations in the
following way:
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec R_A) h_{\vec R_A,\vec R_A}+
\sum_{l=1}^3 \psi_B (\vec R_{B_l}) h_{\vec R_A,\vec R_{B_l}}=\\
\displaystyle
\qquad\qquad\qquad
E \left(\psi_A (\vec R_A) s_{\vec R_A,\vec R_A}
+\sum_{l=1}^3\psi_B (\vec R_{B_l}) s_{\vec R_A,\vec R_{B_l}}\right),\\
\displaystyle
\sum_{l=1}^3 \psi_A (\vec R_{A_l}) h_{\vec R_B,\vec R_{A_l}}+
\psi_B (\vec R_B) h_{\vec R_B,\vec R_B}=\\
\displaystyle
\qquad\qquad\qquad
E \left(\sum_{l=1}^3 \psi_A (\vec R_{A_l}) s_{\vec R_B,\vec R_{A_l}}
+\psi_B (\vec R_B) s_{\vec R_B,\vec R_B}\right).
\end{array}\right.
\end{equation}
In particular, we consider
\begin{eqnarray}
\label{integr}
\quad h_{\vec R_{jn},\vec R_{j'm}}&=&\left\{ \begin{array}{ll}
\displaystyle
\epsilon_{\vec R_{jn}}=u(\vec R_{jn}) &
\textrm{if $\vec R_{jn}=\vec R_{j'm}$,}\\
\displaystyle
-\gamma_0 &\textrm{if $\vec R_{jn}\ne \vec R_{j'm}$ and
$\vec R_{jn}$ and $\vec R_{j'm}$ are}\\
&\qquad\qquad\qquad\qquad\quad\textrm{nearest neighbors,}\\
\displaystyle
0 &\textrm{otherwise,}
\end{array}\right.\\
s_{\vec R_{jn},\vec R_{j'm}}&=&\left\{ \begin{array}{ll}
\displaystyle
1 &\textrm{if $\vec R_{jn}=\vec R_{j'm}$,}\\
\displaystyle
0 &\textrm{if $\vec R_{jn}\ne \vec R_{j'm}$.}
\end{array}\right.\nonumber
\end{eqnarray}
Here $\gamma_0$ is the modulus of the nearest-neighbor transfer integral,
while $\epsilon_{\vec R_{jn}}$ is the on-site energy, which we take as the
zero of the energy in the absence of an external ({\em i.e.} not due to the
periodic structure of the lattice) potential energy; if the external
potential energy is not zero, we have to consider the term $u(\vec R_{jn})$,
which represents the value of this external potential energy at the position
$\vec R_{jn}$.
Note that the reason for the values of the overlap integrals reported in
eq.~(\ref{integr}) is that we consider atomic orbitals orthonormalized
using the L\"owdin procedure~\cite{lowdin3,slaterkoster,datta2}.
Thus the tight-binding relations become
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
-\gamma_0 \sum_{l=1}^3 \psi_B (\vec R_{B_l})=
(E-u(\vec R_A))\,\psi_A (\vec R_A),\\
\displaystyle
-\gamma_0 \sum_{l=1}^3 \psi_A (\vec R_{A_l})=
(E-u(\vec R_B))\,\psi_B (\vec R_B).
\end{array}\right.
\end{equation}
If we introduce the vectors (fig.~\ref{f4}(a))
\begin{equation}
\vec\tau_1 \mathrel{\mathop\equiv_{\Sigma'}}
\frac{a}{\sqrt{3}}\left[\begin{array}{c}
-1\\
\noalign{\vskip3pt}
0\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec\tau_2 \mathrel{\mathop\equiv_{\Sigma'}}
\frac{a}{\sqrt{3}}\left[\begin{array}{c}
\displaystyle \frac{1}{2}\\
\noalign{\vskip3pt}
\displaystyle -\frac{\sqrt{3}}{2}\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec\tau_3 \mathrel{\mathop\equiv_{\Sigma'}}
\frac{a}{\sqrt{3}}\left[\begin{array}{c}
\displaystyle \frac{1}{2}\\
\noalign{\vskip3pt}
\displaystyle \frac{\sqrt{3}}{2}\\
\noalign{\vskip3pt}
0
\end{array}\right]
\end{equation}
(with respect to the frame $\Sigma'=(\hbox{\boldmath{$\hat x$}}',
\hbox{\boldmath{$\hat y$}}',\hbox{\boldmath{$\hat z$}}')$),
we can write the positions of the nearest-neighbor atoms in this way:
\begin{eqnarray}
\vec R_{B_1} &=& \vec R_A-\vec \tau_1,\\
\vec R_{B_2} &=& \vec R_A-\vec \tau_2,\nonumber\\
\vec R_{B_3} &=& \vec R_A-\vec \tau_3,\nonumber\\
\vec R_{A_1} &=& \vec R_B+\vec \tau_1,\nonumber\\
\vec R_{A_2} &=& \vec R_B+\vec \tau_2,\nonumber\\
\vec R_{A_3} &=& \vec R_B+\vec \tau_3,\nonumber
\end{eqnarray}
and thus we can rewrite the tight-binding relations in the following form:
\begin{equation}
\label{tightbinding}
\left\{ \begin{array}{l}
\displaystyle
-\gamma_0 \sum_{l=1}^3 \psi_B (\vec R_A-\vec \tau_l)=
(E-u(\vec R_A))\,\psi_A (\vec R_A),\\
\displaystyle
-\gamma_0 \sum_{l=1}^3 \psi_A (\vec R_B+\vec \tau_l)=
(E-u(\vec R_B))\,\psi_B (\vec R_B).
\end{array} \right.
\end{equation}
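As a quick consistency check on the geometry used above, the following minimal Python sketch (with the lattice constant $a$ set to 1, an arbitrary choice of units) builds the three vectors $\vec\tau_l$ and verifies that each bond has length $a/\sqrt{3}$ and that the three B neighbors of an A atom sit at that distance:

```python
import numpy as np

a = 1.0  # lattice constant; an arbitrary choice of units

# The vectors tau_l (Sigma' frame) joining an A atom to its three B neighbors
tau = (a / np.sqrt(3)) * np.array([
    [-1.0, 0.0],
    [0.5, -np.sqrt(3) / 2],
    [0.5,  np.sqrt(3) / 2],
])

bond = a / np.sqrt(3)                  # carbon-carbon distance
lengths = np.linalg.norm(tau, axis=1)  # all equal to a/sqrt(3)

# The three B neighbors of an A atom at R_A are R_A - tau_l
R_A = np.array([0.3, 0.7])
neighbors_B = R_A - tau
```

The three vectors also sum to zero, reflecting the threefold symmetry of the bonds around each atom.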
Now let us consider what happens near the points $\vec K$ and $\vec K'$.
Let us assume that we can write
\begin{equation}
\label{assumptions}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec R_A)=e^{i \vec K\cdot \vec R_A} F_A^{\vec K}(\vec R_A)
-i \, e^{i \theta'} e^{i \vec K'\cdot \vec R_A} F_A^{\vec K'}(\vec R_A),\\
\\
\displaystyle
\psi_B (\vec R_B)=i \, e^{i \theta'}
e^{i \vec K\cdot \vec R_B} F_B^{\vec K} (\vec R_B)+
e^{i \vec K'\cdot \vec R_B} F_B^{\vec K'} (\vec R_B)
\end{array} \right.
\end{equation}
(the angle $\theta'$ will be properly chosen later).
If $\vec k$ is the wave vector of $\psi_A$ and $\psi_B$, the functions
$F_A^{\vec K}$ and $F_B^{\vec K}$ have a wave vector $\vec\kappa=\vec k-\vec K$
and thus are slowly-varying functions (with small $\vec\kappa$) near the
point $\vec K$; analogously the functions $F_A^{\vec K'}$ and $F_B^{\vec K'}$
have a wave vector $\vec\kappa=\vec k-\vec K'$ and thus are slowly-varying
functions (with small $\vec\kappa$) near the point $\vec K'$ (note that
throughout this review we use $\vec k$ for the total wave vector and
$\vec\kappa$ for the wave vector measured from the reference extremum point).
Incidentally, with these assumptions, if we define $\alpha_A^{\vec K}=1$,
$\alpha_A^{\vec K'}=-i \, e^{i \theta'}$,
$\alpha_B^{\vec K}=i \, e^{i \theta'}$,
and $\alpha_B^{\vec K'}=1$, we have that
\begin{eqnarray}
\psi (\vec r) &=&
\sum_{i=A,B}\sum_{\vec R_i}
\psi_i (\vec R_i)\varphi(\vec r -\vec R_i)=\\
&& \sum_{i=A,B}\sum_{\vec R_i}\sum_{\vec K_j=\vec K,\vec K'}
\alpha_i^{\vec K_j}\,e^{i \vec K_j\cdot \vec R_i}
F_i^{\vec K_j}(\vec R_i) \varphi(\vec r -\vec R_i)\simeq\nonumber\\
&& \sum_{i=A,B}\sum_{\vec R_i}\sum_{\vec K_j=\vec K,\vec K'}
\alpha_i^{\vec K_j}\,e^{i \vec K_j\cdot \vec R_i}
F_i^{\vec K_j}(\vec r) \varphi(\vec r -\vec R_i)=\nonumber\\
&& \sum_{i=A,B}\sum_{\vec K_j=\vec K,\vec K'}
F_i^{\vec K_j}(\vec r)\, e^{i \vec K_j\cdot \vec r}
\left[\alpha_i^{\vec K_j}\sum_{\vec R_i} \varphi(\vec r -\vec R_i)\,
e^{-i \vec K_j\cdot (\vec r-\vec R_i)} \right]=\nonumber\\
&& \sum_{i=A,B}\sum_{\vec K_j=\vec K,\vec K'}
F_i^{\vec K_j}(\vec r)\, e^{i \vec K_j\cdot \vec r}\,
\tilde u^i_{\vec K_j}(\vec r),\nonumber
\end{eqnarray}
where we have replaced $F_i^{\vec K_j}(\vec R_i)$ with
$F_i^{\vec K_j}(\vec r)$, using the fact that $F_i^{\vec K_j}$ is a
slowly varying function of $\vec r$ near $\vec K_j$, while the atomic
orbital $\varphi$ has significant values only near the corresponding atom.
The quantity between square brackets (which we have denoted by
$\tilde u^i_{\vec K_j}$) is periodic with the periodicity of the lattice,
since, if $\vec a_\ell$ is a lattice unit vector (and thus, if $\vec R_i$ is
the position of a lattice point, also $\vec R_i^0=\vec R_i-\vec a_\ell$ is the
position of a lattice point), then
\begin{eqnarray}
\tilde u^i_{\vec K_j} (\vec r+\vec a_\ell) &=&
\alpha_i^{\vec K_j}\sum_{\vec R_i} \varphi((\vec r+\vec a_\ell)-\vec R_i)\,
e^{-i \vec K_j\cdot ((\vec r+\vec a_\ell)-\vec R_i)}=\\
&& \alpha_i^{\vec K_j}\sum_{\vec R_i} \varphi(\vec r-(\vec R_i-\vec a_\ell))\,
e^{-i \vec K_j\cdot (\vec r-(\vec R_i-\vec a_\ell))}=\nonumber\\
&& \alpha_i^{\vec K_j}\sum_{\vec R_i^0} \varphi(\vec r-\vec R_i^0)\,
e^{-i \vec K_j\cdot (\vec r-\vec R_i^0)}=
\tilde u^i_{\vec K_j} (\vec r).\nonumber
\end{eqnarray}
Therefore, since $\tilde u^i_{\vec K_j}$ has the lattice periodicity
and $\vec K_j$ is an extremum point (different from $0$) of
the dispersion relations, from the relation between $\psi (\vec r)$,
$\tilde u^i_{\vec K_j}(\vec r)$ and $F_i^{\vec K_j}(\vec r)$ we conclude
that the 4 functions $F_i^{\vec K_j}$ can be seen as the electron
envelope functions corresponding to the 2 extremum points $\vec K_j$ where
the 2 considered bands of graphene are degenerate (see eq.~(\ref{k0}),
the related discussion, and eq.~(\ref{deg})).
Let us point out that this whole procedure does not require a particular choice
of scalar product and normalization: they merely have to be chosen consistently
with each other.
However, one may find it desirable to normalize the periodic function
$\tilde u^i_{\vec K_j}$ according to the scalar product defined in
(\ref{scalarproduct}), as is generally done in envelope function theory.
Following this particular criterion, one should have (if $\Omega_0$ is the
area of a graphene unit cell, while $\Omega$ is the area of the overall
graphene sheet)
\begin{eqnarray}
\label{prenorm}
\quad 1 &=& \langle\tilde u^i_{\vec K_j}(\vec r)|\tilde u^i_{\vec K_j}(\vec r)\rangle=
\frac{1}{\Omega_0}
\int_{\Omega_0} |\tilde u^i_{\vec K_j}(\vec r)|^2 d \vec r=
\frac{1}{\Omega}
\int_{\Omega} |\tilde u^i_{\vec K_j}(\vec r)|^2 d \vec r=\\
&& \frac{1}{\Omega} \int_{\Omega}
\Big|\alpha_i^{\vec K_j}\sum_{\vec R_i} \varphi(\vec r -\vec R_i)\,
e^{-i \vec K_j\cdot (\vec r-\vec R_i)}\Big|^2 d \vec r=\nonumber\\
&& \frac{1}{\Omega}
\int_{\Omega}\Big(\sum_{\vec R_i} \varphi(\vec r -\vec R_i)\,
e^{-i \vec K_j\cdot (\vec r-\vec R_i)}\Big)^*
\Big(\sum_{\vec R_i'} \varphi(\vec r -\vec R_i')\,
e^{-i \vec K_j\cdot (\vec r-\vec R_i')}\Big) d \vec r=\nonumber\\
&& \frac{1}{\Omega}
\int_{\Omega} \sum_{\vec R_i} |\varphi(\vec r -\vec R_i)|^2 d \vec r\nonumber\\
&&\quad{}+\frac{1}{\Omega}\sum_{\scriptstyle \vec R_i,\vec R_i' \atop
\scriptstyle \vec R_i \ne \vec R_i'}
\left[\int_{\Omega} \varphi^*(\vec r -\vec R_i)\varphi(\vec r -\vec R_i')
d \vec r \right] e^{i \vec K_j\cdot (\vec R_i'-\vec R_i)}=\nonumber\\
&& \frac{1}{\Omega}
\int_{\Omega} \sum_{\vec R_i} |\varphi(\vec r -\vec R_i)|^2 d \vec r=
\frac{1}{\Omega} \sum_{\vec R_i}
\int_{\Omega} |\varphi(\vec r -\vec R_i)|^2 d \vec r \simeq\nonumber\\
&& \frac{1}{\Omega} \frac{\Omega}{\Omega_0}
\int_{\Omega} |\varphi(\vec r -\vec R_i)|^2 d \vec r=
\frac{1}{\Omega_0}
\int_{\Omega} |\varphi(\vec r -\vec R_i)|^2 d \vec r.\nonumber
\end{eqnarray}
Here we have exploited the following properties of the involved
functions. First of all, integrating a function with the lattice
periodicity over the whole graphene sheet and dividing the result by its
area is equivalent to integrating it over
the lattice unit cell and dividing by the corresponding area.
Moreover, each atomic orbital $\varphi$ (orthonormalized using the L\"owdin
procedure) has a non-zero overlap only with itself. Finally, since each
atomic orbital has significant values only near the corresponding atom,
the integral of the square modulus over the whole graphene sheet is nearly
the same for all the considered atomic orbitals, and thus the sum
of all the integrals is approximately equal to a single integral
multiplied by the number $\Omega/\Omega_0$ of orbitals.
Therefore, adopting this particular normalization for $\tilde u^i_{\vec K_j}$,
the atomic orbital $\varphi$ should be normalized in such a way that
\begin{equation}
\label{norm}
\frac{1}{\Omega_0}
\int_{\Omega}|\varphi(\vec r -\vec R_i)|^2 d \vec r =1 \Rightarrow
\int_{\Omega}|\varphi(\vec r -\vec R_i)|^2 d \vec r = \Omega_0,
\end{equation}
and thus we should consider atomic orbitals $\sqrt{\Omega_0}$ times larger
than those obtained from the usual normalization over the whole graphene sheet.
The corresponding scalar product
\begin{equation}
\label{scalarproduct1}
\langle \varphi_1 |\varphi_2 \rangle=
\frac{1}{\Omega_0} \int_{\Omega} \varphi_1^* (\vec r) \varphi_2 (\vec r)\,
d \vec r
\end{equation}
should be used in all the calculations involving atomic orbitals.
If we introduce the assumptions (\ref{assumptions}) into the tight-binding
equations (\ref{tightbinding}), we obtain
\begin{equation}
\label{tightbinding1}
\left\{ \begin{array}{l}
\displaystyle
(E-u(\vec R_A))\,\left[e^{i \vec K\cdot \vec R_A} F_A^{\vec K}(\vec R_A)
-i \, e^{i \theta'} e^{i \vec K'\cdot \vec R_A} F_A^{\vec K'}(\vec R_A)
\right]=\\
\displaystyle
-\gamma_0 \sum_{l=1}^3 \left[i \, e^{i \theta'}
e^{i \vec K\cdot (\vec R_A-\vec \tau_l)} F_B^{\vec K} (\vec R_A-\vec \tau_l)+
e^{i \vec K'\cdot (\vec R_A-\vec \tau_l)} F_B^{\vec K'} (\vec R_A-\vec \tau_l)
\right];\\
\displaystyle
(E-u(\vec R_B))\,\left[i \, e^{i \theta'}
e^{i \vec K\cdot \vec R_B} F_B^{\vec K} (\vec R_B)+
e^{i \vec K'\cdot \vec R_B} F_B^{\vec K'} (\vec R_B)\right]=\\
\displaystyle
-\gamma_0 \sum_{l=1}^3 \left[
e^{i \vec K\cdot (\vec R_B+\vec \tau_l)} F_A^{\vec K}(\vec R_B+\vec \tau_l)
-i \, e^{i \theta'} e^{i \vec K'\cdot (\vec R_B+\vec \tau_l)}
F_A^{\vec K'}(\vec R_B+\vec \tau_l)\right].
\end{array} \right.
\end{equation}
It is useful to introduce \cite{ando1,ando2} a smoothing function $g(\vec r)$,
{\em i.e.} a real function which varies smoothly around its center, has
non-negligible values only within a range of a few lattice constants of the
center, and decays rapidly at larger distances.
This function (point-symmetric around its center) is chosen in such a way as to
satisfy the conditions
\begin{equation}
\label{sum}
\sum_{\vec R_A} g(\vec r-\vec R_A)=\sum_{\vec R_B} g(\vec r-\vec R_B)=1
\end{equation}
and
\begin{equation}
\int_{\Omega} d\vec r \, g(\vec r-\vec R_A)=
\int_{\Omega} d\vec r \, g(\vec r-\vec R_B)=\Omega_0
\end{equation}
(where $\Omega_0=\sqrt{3} a^2 /2$ is the area of a graphene unit cell,
while $\Omega$ is the area of the overall graphene sheet); moreover it has
to satisfy the relations
\begin{equation}
\label{phase}
\sum_{\vec R_A} g(\vec r-\vec R_A) e^{i (\vec K'-\vec K)\cdot \vec R_A}
=\sum_{\vec R_B} g(\vec r-\vec R_B) e^{i (\vec K'-\vec K)\cdot \vec R_B}
\simeq 0.
\end{equation}
Due to its locality, when this function is multiplied by a generic smooth
function $f(\vec r)$ (such as the envelope functions $F$ we have defined),
we clearly have that
\begin{equation}
\label{smooth}
f(\vec r) g(\vec r-\vec R)\simeq f(\vec R) g(\vec r-\vec R)
\end{equation}
(for positions $\vec r$ for which $g(\vec r-\vec R)$ is not
negligible, the smooth function $f(\vec r)$ is approximately equal to
$f(\vec R)$, while for positions $\vec r$ farther from $\vec R$, where
$f(\vec r)$ differs significantly from $f(\vec R)$, the function
$g(\vec r-\vec R)$ vanishes).
In fig.~\ref{f6} we show a possible smoothing function $g(\vec r)$,
which approximately satisfies all the previous relations\footnote
{In detail, we have represented the function defined as $106.5307 \,
\exp(-\frac{5.7677}{1-(|\vec r|/(0.355~{\rm nm}))^2})$
for $|\vec r|<0.355~{\rm nm}$, and $0$ for $|\vec r| \ge 0.355~{\rm nm}$,
but better approximations for the smoothing function $g(\vec r)$ can be found.}.
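The candidate smoothing function of the footnote can be sketched numerically as follows (the constants are those quoted in the footnote; whether the lattice-sum conditions (\ref{sum}) and (\ref{phase}) are satisfied to sufficient accuracy must then be checked on the actual lattice):

```python
import numpy as np

# Constants taken from the footnote; R0 = 0.355 nm is the support radius
C, B, R0 = 106.5307, 5.7677, 0.355

def g(r):
    """Candidate smoothing function of the distance |r| (in nm): a
    compactly supported bump, identically zero for |r| >= R0."""
    r = abs(r)
    if r >= R0:
        return 0.0
    return C * np.exp(-B / (1.0 - (r / R0) ** 2))
```

By construction $g$ is point-symmetric, monotonically decreasing with distance, and goes to zero smoothly (with all derivatives) at the cutoff.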
\begin{figure}
\centering
\includegraphics[width=.8\textwidth,angle=0]{smoothing.eps}
\caption{A candidate smoothing function $g(\vec r)$.}
\label{f6}
\end{figure}\noindent
If we multiply the first of the tight-binding equations
(\ref{tightbinding1}) by
$g(\vec r-\vec R_A)e^{-i \vec K\cdot \vec R_A}$
and we sum it over $\vec R_A$, we find
\begin{eqnarray}
&& E\,\sum_{\vec R_A} g(\vec r-\vec R_A) F_A^{\vec K}(\vec R_A)\\
&&\qquad -E\,i \, e^{i \theta'}\,\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} F_A^{\vec K'}(\vec R_A)\nonumber\\
&&\qquad -\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A) F_A^{\vec K}(\vec R_A)\nonumber\\
&&\qquad +i \, e^{i \theta'}\,\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}
u(\vec R_A) F_A^{\vec K'}(\vec R_A)=\nonumber\\
&&\qquad -\gamma_0 \,i \, e^{i \theta'}
\sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
\sum_{\vec R_A} g(\vec r-\vec R_A) F_B^{\vec K}(\vec R_A-\vec\tau_l)\nonumber\\
&&\qquad -\gamma_0 \sum_{l=1}^3 e^{-i \vec K'\cdot \vec\tau_l}
\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} F_B^{\vec K'}(\vec R_A-\vec\tau_l);\nonumber
\end{eqnarray}
exploiting the property (\ref{smooth}) it becomes
\vskip-5pt\noindent
\begin{eqnarray}
&& E\,\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right] F_A^{\vec K}(\vec r)-
E\,i \, e^{i \theta'}\,\left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}\right] F_A^{\vec K'}(\vec r)\\
&& -\left[\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A)\right]
F_A^{\vec K}(\vec r)\nonumber\\
&& +i \, e^{i \theta'}\,\left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}
u(\vec R_A)\right] F_A^{\vec K'}(\vec r)=\nonumber\\
&& -\gamma_0 \,i \, e^{i \theta'}
\sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]
F_B^{\vec K}(\vec r-\vec\tau_l)\nonumber\\
&& -\gamma_0 \sum_{l=1}^3 e^{-i \vec K'\cdot \vec\tau_l}
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}\right] F_B^{\vec K'}(\vec r-\vec\tau_l).\nonumber
\end{eqnarray}
\vskip-5pt\noindent
For the quantities in the square brackets, we can use the properties
(\ref{sum}) and (\ref{phase}), together with the definitions
\vskip-5pt\noindent
\begin{equation}
\label{ua}
u_A(\vec r)=\sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A),\qquad
u'_A(\vec r)=\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} u(\vec R_A),
\end{equation}
\vskip-5pt\noindent
obtaining
\vskip-5pt\noindent
\begin{eqnarray}
\label{first}
&& E\,F_A^{\vec K}(\vec r)-u_A(\vec r)\,F_A^{\vec K}(\vec r)+
i \, e^{i \theta'}\,u'_A(\vec r)\,F_A^{\vec K'}(\vec r)=\\
&& -\gamma_0 \,i \, e^{i \theta'}
\sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
F_B^{\vec K}(\vec r-\vec\tau_l).\nonumber
\end{eqnarray}
\vskip-5pt\noindent
Expanding the smooth quantity $F_B^{\vec K} (\vec r-\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\vskip-5pt\noindent
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec\tau_l}
F_B^{\vec K}(\vec r-\vec\tau_l)\simeq
\sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left[F_B^{\vec K} (\vec r)-
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_B^{\vec K} (\vec r)\right]=\\
&& \left\{\left(\sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}\right)
F_B^{\vec K} (\vec r)-\left[\sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_B^{\vec K} (\vec r)\right\}.\nonumber
\end{eqnarray}
\vskip-5pt\noindent
Let us now calculate the value of the sums which appear in the previous
expression
\vskip-5pt\noindent
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}=
1+e^{-i\frac{2\pi}{3}}+e^{i\frac{2\pi}{3}}=0;\\
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
1 \frac{a}{\sqrt{3}} \left(-\frac{\partial}{\partial x'}\right)\nonumber\\
&& +e^{-i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}-\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)+
e^{i\frac{2\pi}{3}} \frac{a}{\sqrt{3}}
\left(\frac{1}{2}\frac{\partial}{\partial x'}+\frac{\sqrt{3}}{2}
\frac{\partial}{\partial y'}\right)=\nonumber\\
&& \frac{a}{\sqrt{3}}
\left(\left(-1+\frac{1}{2}e^{-i\frac{2\pi}{3}}+
\frac{1}{2}e^{i\frac{2\pi}{3}}\right)
\frac{\partial}{\partial x'}+\left(-\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}+
\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}} \right)
\frac{\partial}{\partial y'}\right).\nonumber
\end{eqnarray}
Since \ $\displaystyle
-1+\frac{1}{2}e^{-i\frac{2\pi}{3}}+\frac{1}{2}e^{i\frac{2\pi}{3}}=
-\frac{3}{2}$ \ and \
$\displaystyle
-\frac{\sqrt{3}}{2}e^{-i\frac{2\pi}{3}}+\frac{\sqrt{3}}{2}e^{i\frac{2\pi}{3}}=
i\frac{3}{2}$, \ we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-\frac{a}{\sqrt{3}}\frac{3}{2}
\left(\frac{\partial}{\partial x'}-i\frac{\partial}{\partial y'}\right)=\\
&& -\frac{\sqrt{3}}{2}a (i\hat\kappa_{x'}+\hat\kappa_{y'})=
-i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'}),\nonumber
\end{eqnarray}
where we have defined
${\vec{\hat\kappa}}=-i\vec \nabla$ and thus
\begin{equation}
\quad
{\hat\kappa}_{x'}=-i\frac{\partial}{\partial x'}
\qquad \hbox{and} \qquad
{\hat\kappa}_{y'}=-i\frac{\partial}{\partial y'}.
\end{equation}
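The two sums evaluated above lend themselves to a direct numerical check; in the following sketch we take $a=1$ and express $\vec K$ in the $\Sigma'$ frame as $(0,-4\pi/(3a))$, an assumption consistent with the phase factors $1$, $e^{-i 2\pi/3}$, $e^{i 2\pi/3}$ found above:

```python
import numpy as np

a = 1.0
tau = (a / np.sqrt(3)) * np.array([[-1.0, 0.0],
                                   [0.5, -np.sqrt(3) / 2],
                                   [0.5,  np.sqrt(3) / 2]])
K = (4 * np.pi / (3 * a)) * np.array([0.0, -1.0])  # K in the Sigma' frame

phases = np.exp(-1j * tau @ K)   # the three factors e^{-i K . tau_l}
s0 = phases.sum()                # sum of the phase factors: vanishes
s1 = phases @ tau                # vector coefficient of the gradient term
# Expected: s1 = -(sqrt(3)/2) a (1, -i), i.e. the operator
# -(sqrt(3)/2) a (d/dx' - i d/dy') = -i (sqrt(3)/2) a (k_x' - i k_y')
```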
Substituting these results, eq.~(\ref{first}) becomes
\begin{eqnarray}
\label{first1}
&& E\,F_A^{\vec K}(\vec r)-u_A(\vec r)\,F_A^{\vec K}(\vec r)+
i \, e^{i \theta'}\,u'_A(\vec r)\,F_A^{\vec K'}(\vec r) \simeq\\
&& -\gamma_0 \,i \, e^{i \theta'}
\left(i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'})
F_B^{\vec K}(\vec r)\right)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{i \theta'}
(\hat\kappa_{x'}-i\hat\kappa_{y'}) F_B^{\vec K}(\vec r)=
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_B^{\vec K} (\vec r),\nonumber
\end{eqnarray}
where we have passed from the original reference frame
$\Sigma'=(\hbox{\boldmath{$\hat x$}}', \hbox{\boldmath{$\hat y$}}',
\hbox{\boldmath{$\hat z$}}')$ to a new frame
$\Sigma=(\hbox{\boldmath{$\hat x$}}, \hbox{\boldmath{$\hat y$}},
\hbox{\boldmath{$\hat z$}})$,
rotated, in the plane $(\hbox{\boldmath{$\hat x$}}',
\hbox{\boldmath{$\hat y$}}')$, around the origin by an angle
$\theta'$ (positive in the counterclockwise direction) with respect to the
original one (fig.~\ref{f7}) and we have used the fact that
\begin{eqnarray}
\label{diffcoord}
&& e^{i\theta'}({\hat\kappa}_{x'}-i{\hat\kappa}_{y'})=
(\cos\theta'+i\sin\theta')({\hat\kappa}_{x'}-i{\hat\kappa}_{y'})=\\
&& (\cos\theta' {\hat\kappa}_{x'}+\sin\theta' {\hat\kappa}_{y'})-
i(\cos\theta' {\hat\kappa}_{y'}-\sin\theta' {\hat\kappa}_{x'})=
{\hat\kappa}_x-i{\hat\kappa}_y\nonumber
\end{eqnarray}
(due to the relations between old and new coordinates), with
\begin{equation}
\label{kappa}
\quad
{\hat\kappa}_{x}=-i\frac{\partial}{\partial x}
\qquad \hbox{and} \qquad
{\hat\kappa}_{y}=-i\frac{\partial}{\partial y}.
\end{equation}
Indeed, it is a well-known result that, for a rotation by $\theta'$ of
the reference frame, the relations between the new and the old coordinates
are $x=x'\cos\theta'+y'\sin\theta'$
and $y=y'\cos\theta'-x'\sin\theta'$. Therefore we have that
\begin{equation}
\frac{\partial F(x,y)}{\partial x'} =
\frac{\partial F(x,y)}{\partial x} \frac{\partial x}{\partial x'}+
\frac{\partial F(x,y)}{\partial y} \frac{\partial y}{\partial x'}=
\frac{\partial F(x,y)}{\partial x} \cos\theta'-
\frac{\partial F(x,y)}{\partial y} \sin\theta'
\end{equation}
and that
\begin{equation}
\frac{\partial F(x,y)}{\partial y'} =
\frac{\partial F(x,y)}{\partial x} \frac{\partial x}{\partial y'}+
\frac{\partial F(x,y)}{\partial y} \frac{\partial y}{\partial y'}=
\frac{\partial F(x,y)}{\partial x} \sin\theta'+
\frac{\partial F(x,y)}{\partial y} \cos\theta'.
\end{equation}
As a consequence, we have that
\begin{eqnarray}
\qquad (\cos\theta' {\hat\kappa}_{x'}&+&\sin\theta' {\hat\kappa}_{y'}) F(x,y)=
\cos\theta' \left(-i\frac{\partial F(x,y)}{\partial x'}\right)+
\sin\theta' \left(-i\frac{\partial F(x,y)}{\partial y'}\right)=\\
&-& i\bigg[\frac{\partial F(x,y)}{\partial x} \cos^2\theta'-
\frac{\partial F(x,y)}{\partial y} \cos\theta'\sin\theta'\nonumber\\
&&\quad +\frac{\partial F(x,y)}{\partial x} \sin^2\theta'+
\frac{\partial F(x,y)}{\partial y} \sin\theta'\cos\theta'
\bigg]=\nonumber\\
&-& i\frac{\partial F(x,y)}{\partial x}
(\cos^2\theta'+\sin^2\theta')=
-i\frac{\partial F(x,y)}{\partial x}=\hat\kappa_x F(x,y)\nonumber
\end{eqnarray}
and that
\begin{eqnarray}
\qquad (\cos\theta' {\hat\kappa}_{y'}&-&\sin\theta' {\hat\kappa}_{x'}) F(x,y)=
\cos\theta' \left(-i\frac{\partial F(x,y)}{\partial y'}\right)-
\sin\theta' \left(-i\frac{\partial F(x,y)}{\partial x'}\right)=\\
&-& i\bigg[
\frac{\partial F(x,y)}{\partial x} \sin\theta'\cos\theta'+
\frac{\partial F(x,y)}{\partial y} \cos^2\theta'\nonumber\\
&&\quad -\frac{\partial F(x,y)}{\partial x} \cos\theta'\sin\theta'+
\frac{\partial F(x,y)}{\partial y} \sin^2\theta'
\bigg]=\nonumber\\
&-& i\frac{\partial F(x,y)}{\partial y}
(\cos^2\theta'+\sin^2\theta')=
-i\frac{\partial F(x,y)}{\partial y}=\hat\kappa_y F(x,y),\nonumber
\end{eqnarray}
from which we obtain eq.~(\ref{diffcoord}).
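Acting on a plane-wave envelope $F \propto e^{i(\kappa_{x'} x' + \kappa_{y'} y')}$, the operators $\hat\kappa_{x'}$, $\hat\kappa_{y'}$ reduce to multiplication by the wave-vector components, so the identities (\ref{diffcoord}) and (\ref{sumcoord}) become purely algebraic and can be checked numerically (the angle and components below are arbitrary choices):

```python
import numpy as np

theta = 0.37            # arbitrary frame rotation angle
kxp, kyp = 1.3, -0.8    # arbitrary wave-vector components in Sigma'

# Components in the rotated frame Sigma
kx = kxp * np.cos(theta) + kyp * np.sin(theta)
ky = kyp * np.cos(theta) - kxp * np.sin(theta)

diff_lhs = np.exp(1j * theta) * (kxp - 1j * kyp)   # e^{i theta'}(k_x' - i k_y')
diff_rhs = kx - 1j * ky
sum_lhs = np.exp(-1j * theta) * (kxp + 1j * kyp)   # e^{-i theta'}(k_x' + i k_y')
sum_rhs = kx + 1j * ky
```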
\begin{figure}
\centering
\includegraphics[width=.7\textwidth,angle=0]{rev3def2.eps}
\caption{The reference frames used in the calculations ($\vec C_h$ and
$\theta$ will be used for carbon nanotubes in the next section: this
figure corresponds to a $(4,2)$ nanotube).}
\label{f7}
\end{figure}\noindent
The angle $\theta'$ is taken counterclockwise from the vector
$\vec a_1+\vec a_2$ to the axis \hbox{\boldmath{$\hat x$}} of the new frame.
We have also defined the quantity $\gamma=(\sqrt{3}/2)\gamma_0 a$.
Note that in the new reference frame
$\Sigma=(\hbox{\boldmath{$\hat x$}}, \hbox{\boldmath{$\hat y$}},
\hbox{\boldmath{$\hat z$}})$
\begin{eqnarray}
&\vec a_1 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{a}{2}
\left[\begin{array}{c}
\displaystyle \sqrt{3}\cos\theta'+\sin\theta'\\[3pt]
\displaystyle \cos\theta'-\sqrt{3}\sin\theta'\\[3pt]
0
\end{array}\right],\quad&
\vec a_2 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{a}{2}
\left[\begin{array}{c}
\displaystyle \sqrt{3}\cos\theta'-\sin\theta'\\[3pt]
\displaystyle -\cos\theta'-\sqrt{3}\sin\theta'\\[3pt]
0
\end{array}\right],\\
&\vec b_1 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{2\pi}{\sqrt{3}a}
\left[\begin{array}{c}
\displaystyle \cos\theta'+\sqrt{3}\sin\theta'\\ [3pt]
\displaystyle \sqrt{3}\cos\theta'-\sin\theta'\\[3pt]
0
\end{array}\right],\quad&
\vec b_2 \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{2\pi}{\sqrt{3}a}
\left[\begin{array}{c}
\displaystyle \cos\theta'-\sqrt{3}\sin\theta'\\[3pt]
\displaystyle -\sqrt{3}\cos\theta'-\sin\theta'\\[3pt]
0
\end{array}\right],\nonumber\\
&\vec K \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{4\pi}{3a}
\left[\begin{array}{c}
-\sin\theta'\\
-\cos\theta'\\
0
\end{array}\right],\quad&
\vec K' \mathrel{\mathop\equiv_{\Sigma}} \displaystyle \frac{4\pi}{3a}
\left[\begin{array}{c}
\sin\theta'\\
\cos\theta'\\
0
\end{array}\right].\nonumber
\end{eqnarray}
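As a sanity check, the rotated expressions above still satisfy the defining duality $\vec a_i \cdot \vec b_j = 2\pi\,\delta_{ij}$ for any angle $\theta'$; a minimal numerical sketch (with arbitrary values of $a$ and $\theta'$):

```python
import numpy as np

a, theta = 1.0, 0.37   # arbitrary lattice constant and frame angle
c, s = np.cos(theta), np.sin(theta)

# Direct and reciprocal lattice vectors in the rotated frame Sigma
a1 = (a / 2) * np.array([np.sqrt(3) * c + s,  c - np.sqrt(3) * s])
a2 = (a / 2) * np.array([np.sqrt(3) * c - s, -c - np.sqrt(3) * s])
b1 = (2 * np.pi / (np.sqrt(3) * a)) * np.array([c + np.sqrt(3) * s,
                                                np.sqrt(3) * c - s])
b2 = (2 * np.pi / (np.sqrt(3) * a)) * np.array([c - np.sqrt(3) * s,
                                                -np.sqrt(3) * c - s])

# a_i . b_j = 2 pi delta_ij, independently of theta'
dots = np.array([[a1 @ b1, a1 @ b2],
                 [a2 @ b1, a2 @ b2]])
```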
Analogously, if we multiply the second of the tight-binding equations
(\ref{tightbinding1}) by
$g(\vec r-\vec R_B)(-i\, e^{-i\theta'} e^{-i \vec K\cdot \vec R_B})$
and we sum it over $\vec R_B$, using again the properties
(\ref{smooth}), (\ref{sum}) and (\ref{phase}), together with the definitions
\begin{equation}
\label{ub}
u_B(\vec r)=\sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B),\qquad
u'_B(\vec r)=\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B} u(\vec R_B),
\end{equation}
we obtain~\cite{supplem}
\begin{eqnarray}
\label{second}
&& E\,F_B^{\vec K} (\vec r)-u_B(\vec r)\,F_B^{\vec K} (\vec r)+
i\,e^{-i\theta'}\, u'_B(\vec r)\,F_B^{\vec K'} (\vec r)=\\
&& \gamma_0 \,i\, e^{-i\theta'} \sum_{l=1}^3
e^{i \vec K \cdot \vec \tau_l} F_A^{\vec K} (\vec r+\vec \tau_l).\nonumber
\end{eqnarray}
Expanding the smooth quantity $F_A^{\vec K} (\vec r+\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K \cdot \vec \tau_l}
F_A^{\vec K} (\vec r+\vec \tau_l)\simeq
\sum_{l=1}^3 e^{i \vec K \cdot \vec \tau_l}
\left[F_A^{\vec K} (\vec r)+
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_A^{\vec K} (\vec r)\right]=\\
&& \left(\sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}\right)
F_A^{\vec K} (\vec r)+\left[\sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_A^{\vec K} (\vec r).\nonumber
\end{eqnarray}
Since
\begin{equation}
\sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}=0\quad\hbox{and}\quad
\sum_{l=1}^3 e^{i \vec K\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-i\frac{\sqrt{3}}{2}a ({\hat\kappa}_{x'}+i{\hat\kappa}_{y'}),
\end{equation}
eq.~(\ref{second}) becomes
\begin{eqnarray}
\label{second1}
&& E\,F_B^{\vec K} (\vec r)-u_B(\vec r)\,F_B^{\vec K} (\vec r)+
i\,e^{-i\theta'}\, u'_B(\vec r)\,F_B^{\vec K'} (\vec r)\simeq\\
&& \gamma_0 \,i\, e^{-i\theta'}
\left( -i\frac{\sqrt{3}}{2}a ({\hat\kappa}_{x'}+i{\hat\kappa}_{y'})
\right) F_A^{\vec K} (\vec r)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{-i \theta'}
(\hat\kappa_{x'}+i\hat\kappa_{y'}) F_A^{\vec K}(\vec r)=
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_A^{\vec K} (\vec r),\nonumber
\end{eqnarray}
where we have made use of the relation
\begin{eqnarray}
\label{sumcoord}
&& e^{-i\theta'}({\hat\kappa}_{x'}+i{\hat\kappa}_{y'})=
(\cos\theta'-i\sin\theta')({\hat\kappa}_{x'}+i{\hat\kappa}_{y'})=\\
&& (\cos\theta' {\hat\kappa}_{x'}+\sin\theta' {\hat\kappa}_{y'})+
i(\cos\theta' {\hat\kappa}_{y'}-\sin\theta' {\hat\kappa}_{x'})=
{\hat\kappa}_x+i{\hat\kappa}_y.\nonumber
\end{eqnarray}
Instead, if we multiply the first of the tight-binding equations
(\ref{tightbinding1}) by
$g(\vec r-\vec R_A)\,(i\, e^{-i\theta'} e^{-i \vec K'\cdot \vec R_A})$
and we sum it over $\vec R_A$, we obtain (exploiting the properties
(\ref{smooth}), (\ref{sum})
and (\ref{phase}))~\cite{supplem}
\begin{eqnarray}
\label{third}
&& E F_A^{\vec K'} (\vec r)-i\, e^{-i\theta'} {u'_A}^*(\vec r)
F_A^{\vec K} (\vec r)-u_A(\vec r) F_A^{\vec K'} (\vec r)=\\
&& -\gamma_0\,i\, e^{-i\theta'} \sum_{l=1}^3
e^{-i \vec K' \cdot \vec \tau_l} F_B^{\vec K'} (\vec r-\vec \tau_l).\nonumber
\end{eqnarray}
Expanding the smooth quantity $F_B^{\vec K'} (\vec r-\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
F_B^{\vec K'} (\vec r-\vec \tau_l)\simeq
\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left[F_B^{\vec K'} (\vec r)-
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_B^{\vec K'} (\vec r)\right]=\\
&& \left(\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}\right)
F_B^{\vec K'} (\vec r)-\left[\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_B^{\vec K'} (\vec r),\nonumber
\end{eqnarray}
with
\begin{equation}
\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}=0\quad\hbox{and}\quad
\sum_{l=1}^3 e^{-i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}+i\hat\kappa_{y'}).
\end{equation}
Therefore eq.~(\ref{third}) becomes
\begin{eqnarray}
\label{third1}
&& E F_A^{\vec K'} (\vec r)-i\, e^{-i\theta'} {u'_A}^*(\vec r)
F_A^{\vec K} (\vec r)-u_A(\vec r) F_A^{\vec K'} (\vec r)\simeq\\
&& -\gamma_0\,i\, e^{-i\theta'}
\left(i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}+i\hat\kappa_{y'})
F_B^{\vec K'} (\vec r)\right)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{-i \theta'}
(\hat\kappa_{x'}+i\hat\kappa_{y'}) F_B^{\vec K'}(\vec r)=
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_B^{\vec K'} (\vec r),\nonumber
\end{eqnarray}
where we have exploited the relation (\ref{sumcoord}).
Finally, if we multiply the second of the tight-binding equations
(\ref{tightbinding1}) by
$g(\vec r-\vec R_B)\, e^{-i \vec K'\cdot \vec R_B}$
and we sum it over $\vec R_B$, we obtain (using the properties
(\ref{smooth}), (\ref{sum})
and (\ref{phase}))~\cite{supplem}
\begin{eqnarray}
\label{fourth}
&& E\,F_B^{\vec K'} (\vec r)-i\,e^{i\theta'}\,
{u'_B}^*(\vec r)\,F_B^{\vec K} (\vec r)-
u_B(\vec r)\,F_B^{\vec K'} (\vec r)=\\
&& \gamma_0 \,i\, e^{i\theta'} \sum_{l=1}^3
e^{i \vec K' \cdot \vec \tau_l} F_A^{\vec K'} (\vec r+\vec \tau_l).\nonumber
\end{eqnarray}
Expanding the smooth quantity $F_A^{\vec K'} (\vec r+\vec \tau_l)$ to the
first order in $\vec \tau_l$, we have that
\begin{eqnarray}
&& \sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
F_A^{\vec K'} (\vec r+\vec \tau_l)\simeq
\sum_{l=1}^3 e^{i \vec K' \cdot \vec \tau_l}
\left[F_A^{\vec K'} (\vec r)+
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}
\right) F_A^{\vec K'} (\vec r)\right]=\\
&& \left(\sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}\right)
F_A^{\vec K'} (\vec r)+\left[\sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)\right]
F_A^{\vec K'} (\vec r).\nonumber
\end{eqnarray}
Since
\begin{equation}
\sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}=0\quad\hbox{and}\quad
\sum_{l=1}^3 e^{i \vec K'\cdot \vec \tau_l}
\left(\vec \tau_l\cdot\frac{\partial}{\partial\vec r}\right)=
-i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'}),
\end{equation}
eq.~(\ref{fourth}) becomes
\begin{eqnarray}
\label{fourth1}
&& E\,F_B^{\vec K'} (\vec r)-i\,e^{i\theta'}\,
{u'_B}^*(\vec r)\,F_B^{\vec K} (\vec r)-
u_B(\vec r)\,F_B^{\vec K'} (\vec r)=\\
&& \gamma_0 \,i\, e^{i\theta'}
\left( -i\frac{\sqrt{3}}{2}a (\hat\kappa_{x'}-i\hat\kappa_{y'})
\right) F_A^{\vec K'} (\vec r)=\nonumber\\
&& \frac{\sqrt{3}}{2}\gamma_0 a\, e^{i \theta'}
(\hat\kappa_{x'}-i\hat\kappa_{y'}) F_A^{\vec K'}(\vec r)=
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_A^{\vec K'} (\vec r),\nonumber
\end{eqnarray}
where the relation (\ref{diffcoord}) has been used.
In this way, we have obtained the four equations (\ref{first1}),
(\ref{second1}), (\ref{third1}) and (\ref{fourth1}), which we can
summarize as
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
u_A(\vec r)\,F_A^{\vec K}(\vec r)+
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_B^{\vec K} (\vec r)-
i\, e^{i \theta'}\,u'_A(\vec r)\,F_A^{\vec K'}(\vec r)=
E\,F_A^{\vec K}(\vec r),\\[5pt]
\displaystyle
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_A^{\vec K} (\vec r)+
u_B(\vec r)\,F_B^{\vec K} (\vec r)-
i\, e^{-i\theta'}\, u'_B(\vec r)\,F_B^{\vec K'} (\vec r)=
E\,F_B^{\vec K} (\vec r),\\[5pt]
\displaystyle
i\, e^{-i\theta'} {u'_A}^*(\vec r) F_A^{\vec K} (\vec r)+
u_A(\vec r) F_A^{\vec K'} (\vec r)+
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) F_B^{\vec K'} (\vec r)=
E F_A^{\vec K'} (\vec r),\\[5pt]
\displaystyle
i\, e^{i\theta'}\,{u'_B}^*(\vec r)\,F_B^{\vec K} (\vec r)+
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) F_A^{\vec K'} (\vec r)+
u_B(\vec r)\,F_B^{\vec K'} (\vec r)=
E\,F_B^{\vec K'} (\vec r),
\end{array} \right.
\end{equation}
and write in matrix form
\begin{eqnarray}
\label{diracpot}
\quad && \left[\begin{array}{cccc}
u_A(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
-i \, e^{i \theta'}\,u'_A(\vec r) &
0 \\[3pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
u_B(\vec r) &
0 &
-i\, e^{-i\theta'}\, u'_B(\vec r) \\[3pt]
i\, e^{-i\theta'}\,{u'_A}^*(\vec r) &
0 &
u_A(\vec r) &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) \\[3pt]
0 &
i\, e^{i\theta'}\,{u'_B}^*(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
u_B(\vec r)
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right],\nonumber
\end{eqnarray}
which is the $\vec k \cdot \vec p$ equation of graphene.
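For a vanishing potential $u=0$ and plane-wave envelopes $F \propto e^{i\vec\kappa\cdot\vec r}$, the operators $\hat\kappa_x$, $\hat\kappa_y$ in eq.~(\ref{diracpot}) act as multiplication, and the equation reduces to a $4\times 4$ eigenvalue problem whose spectrum is the conical (Dirac) dispersion $E=\pm\gamma|\vec\kappa|$, each branch twice degenerate (once per Dirac point). A minimal numerical sketch, with $\gamma$ and the components of $\vec\kappa$ set to arbitrary values:

```python
import numpy as np

gamma = 1.0          # gamma = (sqrt(3)/2) gamma_0 a, in arbitrary units
kx, ky = 0.6, -0.3   # arbitrary envelope wave-vector components

kp = gamma * (kx + 1j * ky)   # gamma (k_x + i k_y)
km = gamma * (kx - 1j * ky)   # gamma (k_x - i k_y)

# Matrix of eq. (diracpot) with u_A = u_B = u'_A = u'_B = 0,
# in the basis (F_A^K, F_B^K, F_A^K', F_B^K')
H = np.array([[0,  km, 0,  0],
              [kp, 0,  0,  0],
              [0,  0,  0,  kp],
              [0,  0,  km, 0]])

E = np.linalg.eigvalsh(H)   # ascending: -g|k|, -g|k|, +g|k|, +g|k|
```

The double degeneracy of each branch reflects the two inequivalent Dirac points $\vec K$ and $\vec K'$.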
Incidentally, if we repeat all the previous calculations considering
the following different pair of reference Dirac points:
\begin{equation}
\vec K=\left[\begin{array}{c}
\displaystyle \frac{2\pi}{\sqrt{3}a}\\
\noalign{\vskip3pt}
\displaystyle \frac{2\pi}{3a}\\
\noalign{\vskip3pt}
0
\end{array}
\right]
,\quad
\vec K'=\left[\begin{array}{c}
\displaystyle \frac{2\pi}{\sqrt{3}a}\\
\noalign{\vskip3pt}
\displaystyle -\frac{2\pi}{3a}\\
\noalign{\vskip3pt}
0
\end{array}
\right]
\end{equation}
(equivalent, in the reciprocal space, to the pair (\ref{diracpoints})
of Dirac points), we have to replace (\ref{assumptions}) with
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec R_A)=e^{i \vec K\cdot \vec R_A} F_A^{\vec K}(\vec R_A)
+e^{i\eta} e^{i \vec K'\cdot \vec R_A} F_A^{\vec K'}(\vec R_A),\\[5pt]
\displaystyle
\psi_B (\vec R_B)=-e^{i\frac{2\pi}{3}}\,e^{i\eta}
e^{i \vec K\cdot \vec R_B} F_B^{\vec K} (\vec R_B)+
e^{i \vec K'\cdot \vec R_B} F_B^{\vec K'} (\vec R_B),
\end{array} \right.
\end{equation}
where $\eta=\pi/6+\theta'$, and we obtain (instead of
eq.~(\ref{diracpot}))
\begin{eqnarray}
\qquad && \left[\begin{array}{@{\ }c@{\ }c@{\ \ \ }c@{\ }c@{\ }}
u_A(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
e^{i\eta}\,u'_A(\vec r) &
0 \\[3pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
u_B(\vec r) &
0 &
-e^{-i\frac{2\pi}{3}}\,e^{-i\eta}\, u'_B(\vec r) \\[3pt]
e^{-i\eta}\,{u'_A}^*(\vec r) &
0 &
u_A(\vec r) &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) \\[3pt]
0 &
-e^{i\frac{2\pi}{3}}\,e^{i\eta}\,{u'_B}^*(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
u_B(\vec r)
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right],\nonumber
\end{eqnarray}
as found by Ando~\cite{ando1,ando2}.
Summarizing, we have that the overall wave function is given by
(see (\ref{wavefunction}))
\begin{equation}
\psi (\vec r)=
\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B),
\end{equation}
with (see (\ref{assumptions}))
\begin{equation}
\label{assumptions2}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec r)=e^{i \vec K\cdot \vec r} F_A^{\vec K}(\vec r)
-i \, e^{i \theta'} e^{i \vec K'\cdot \vec r} F_A^{\vec K'}(\vec r),\\
\displaystyle
\psi_B (\vec r)=i \, e^{i \theta'}
e^{i \vec K\cdot \vec r} F_B^{\vec K} (\vec r)+
e^{i \vec K'\cdot \vec r} F_B^{\vec K'} (\vec r),
\end{array} \right.
\end{equation}
where the envelope functions $F$ satisfy eq.~(\ref{diracpot}).
We can treat two limiting cases for the external potential, depending on its range~\cite{shon,ando3}.
If the potential range is much smaller than the lattice constant
(short-range case), we can consider the external potential as different from
zero only on one carbon atom.
If it is non-zero only on an atom of type A (in position $\vec R_{A_0}$), {\em i.e.}
$u (\vec R_{A_0}) \ne 0$, $u (\vec R_A)=0$ for $\vec R_A \ne \vec R_{A_0}$ and
$u (\vec R_B)=0$ for every $\vec R_B$, recalling eq.~(\ref{ua}) and
(\ref{ub}),
we have that
\vskip-3pt\noindent
\begin{eqnarray}
u_A(\vec r) &=& \sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A)=
g(\vec r-\vec R_{A_0}) u(\vec R_{A_0}),\\
u'_A(\vec r) &=& \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} u(\vec R_A)=\nonumber\\
&& g(\vec r-\vec R_{A_0})
e^{i (\vec K'-\vec K)\cdot \vec R_{A_0}} u(\vec R_{A_0})=
u_A(\vec r) e^{i (\vec K'-\vec K)\cdot \vec R_{A_0}},\nonumber\\
u_B(\vec r) &=& \sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B)=0,\nonumber\\
u'_B(\vec r) &=& \sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B} u(\vec R_B)=0.\nonumber
\end{eqnarray}
\vskip-3pt\noindent
Instead, if it is nonzero only on an atom of type B (in position
$\vec R_{B_0}$), i.e.
$u (\vec R_{B_0}) \ne 0$, $u (\vec R_B)=0$ for $\vec R_B \ne \vec R_{B_0}$ and
$u (\vec R_A)=0$ for every $\vec R_A$, we have that
\vskip-3pt\noindent
\begin{eqnarray}
u_A(\vec r) &=& \sum_{\vec R_A} g(\vec r-\vec R_A) u(\vec R_A)=0,\\
u'_A(\vec r) &=& \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} u(\vec R_A)=0,\nonumber\\
u_B(\vec r) &=& \sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B)=
g(\vec r-\vec R_{B_0}) u(\vec R_{B_0}),\nonumber\\
u'_B(\vec r) &=& \sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B} u(\vec R_B)=\nonumber\\
&& g(\vec r-\vec R_{B_0})
e^{i (\vec K'-\vec K)\cdot \vec R_{B_0}} u(\vec R_{B_0})=
u_B(\vec r) e^{i (\vec K'-\vec K)\cdot \vec R_{B_0}}.\nonumber
\end{eqnarray}
\vskip-3pt\noindent
If instead the potential range is much larger than the lattice constant
(long-range case), using eq.~(\ref{sum}), (\ref{phase}) and (\ref{smooth}),
we have that
\vskip-3pt\noindent
\begin{eqnarray}
\label{longrange}
\qquad\quad u_A(\vec r)\! &=& \!\sum_{\vec R_A}\! g(\vec r-\vec R_A) u(\vec R_A)\!\simeq\!
\sum_{\vec R_A}\! g(\vec r-\vec R_A) u(\vec r)\!=\!
\left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]\! u(\vec r)\!=\!u(\vec r),\\
u'_A(\vec r) &=& \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} u(\vec R_A)\simeq
\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A} u(\vec r)=\nonumber\\
&& \left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K)\cdot \vec R_A}\right] u(\vec r)=0,\nonumber\\
u_B(\vec r) &=& \sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec R_B)\simeq\nonumber\\
&& \sum_{\vec R_B} g(\vec r-\vec R_B) u(\vec r)=
\left[\sum_{\vec R_B} g(\vec r-\vec R_B)\right] u(\vec r)=u(\vec r)=
u_A(\vec r),\nonumber\\
u'_B(\vec r) &=& \sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B} u(\vec R_B)\simeq
\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B} u(\vec r)=\nonumber\\
&& \left[\sum_{\vec R_B} g(\vec r-\vec R_B)
e^{i (\vec K'-\vec K)\cdot \vec R_B}\right] u(\vec r)=0.\nonumber
\end{eqnarray}
Here we have first used the property (\ref{smooth}) (exploiting the hypothesis that the external potential
is a very smooth function in comparison with $g(\vec r)$) and then (for the quantities inside the square brackets)
the properties (\ref{sum}) and (\ref{phase}) of the function $g(\vec r)$.
In this last case the effect of the external potential on the Hamiltonian
matrix is only to add the same quantity, $u(\vec r)$, to all the
diagonal elements of the matrix, as expected from the $\vec k \cdot \vec p$
theory (see eq.~(\ref{coupled}), where the external potential was
assumed slowly variable):
\vskip5pt\noindent
\begin{eqnarray}
\label{longrange2}
&& \left[\begin{array}{cccc}
u(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
0 &
0 \\[6pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
u(\vec r) &
0 &
0 \\[6pt]
0 &
0 &
u(\vec r) &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) \\[6pt]
0 &
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
u(\vec r)
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)
\end{array}\right].\nonumber
\end{eqnarray}
\vskip5pt\noindent
Let us note that by reordering the elements of the envelope function vector,
we can rewrite this equation in the form
\vskip5pt\noindent
\begin{eqnarray}
&& \left[\begin{array}{cccc}
u(\vec r) &
0 &
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) \\[6pt]
0 &
u(\vec r) &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
0 \\[6pt]
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
u(\vec r) &
0 \\[6pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
0 &
0 &
u(\vec r)
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)
\end{array}\right],\nonumber
\end{eqnarray}
\vskip5pt\noindent
which can be more compactly written as
\vskip5pt\noindent
\begin{equation}
\left[\begin{array}{cc}
u(\vec r)I &
\gamma \vec\sigma\cdot\vec{\hat\kappa}\\[6pt]
\gamma \vec\sigma\cdot\vec{\hat\kappa} &
u(\vec r)I
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)
\end{array}\right]
\end{equation}
\vskip5pt\noindent
(where $I$ is the $2 \times 2$ identity matrix and $\vec\sigma$ is the
vector whose components are the Pauli spin matrices $\sigma_x$ and $\sigma_y$
(\ref{pauli})).
This equation is analytically equivalent to the Dirac equation for
massless particles (Weyl's equation) of relativistic quantum
mechanics~\footnotemark{}; therefore eq.~(\ref{longrange2}) is commonly called
the Dirac equation for graphene. Since charge carriers in graphene obey a
relation identical to that describing the relativistic behavior of massless
spin-$1/2$ elementary particles, transport in graphene exhibits many phenomena,
such as Klein tunneling, analogous to those predicted in relativistic
quantum mechanics~\cite{katsnelson1,katsnelson2,katsnelson3,geim,beenakker}.
\footnotetext{For example, compare this equation with eq.~(3.62) of
ref.~\cite{sakurai}, with $m=0$, $\vec A=0$,
$e A_0=u(\vec r)$, $c$ substituted by $v_F=\gamma/\hbar$,
$\vec \psi_A$ substituted by $[F_A^{\vec K}, \ F_A^{\vec K'}]^{\, T}$,
and $\vec \psi_B$ substituted by $[F_B^{\vec K'}, \ F_B^{\vec K}]^{\, T}$.}
Note that in the presence of a magnetic field the operator
$\vec{\hat\kappa}=-i\vec\nabla$ which appears in the equation has to be
replaced by $-i\vec\nabla+e\vec A/\hbar$, as we have shown in the general
introduction on the $\vec k \cdot \vec p$ method.
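As a quick numerical sanity check of the compact block form above, the following sketch diagonalizes the $4\times 4$ long-range Hamiltonian in the momentum representation and verifies that a constant potential merely shifts the Dirac-cone eigenvalues $\pm\gamma|\vec\kappa|$ rigidly by $u$. The numerical values of $\gamma$, $u$ and $\vec\kappa$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative parameter values (assumed, for the check only)
gamma = 6.46          # band parameter, eV*Angstrom (sample value)
u = 0.1               # constant long-range potential, eV (sample value)
kx, ky = 0.03, -0.02  # wave vector measured from the Dirac point, 1/Angstrom

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# Momentum representation: sigma . kappa
sk = kx * sigma_x + ky * sigma_y
H = np.block([[u * I, gamma * sk],
              [gamma * sk, u * I]])

evals = np.sort(np.linalg.eigvalsh(H))
k = np.hypot(kx, ky)
expected = np.sort([u - gamma * k, u - gamma * k,
                    u + gamma * k, u + gamma * k])
assert np.allclose(evals, expected)
```

Each cone energy $\pm\gamma|\vec\kappa|$ appears twice (once per valley), shifted by $u$, consistent with the degenerate block structure of the matrix.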
In the absence of an external potential, the quantities $u_A$, $u'_A$,
$u_B$ and $u'_B$ are null and thus the matrix equation becomes
\begin{eqnarray}
\label{absence}
&& \left[\begin{array}{cccc}
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
0 &
0 \\[3pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
0 &
0 &
0 \\[3pt]
0 &
0 &
0 &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) \\[3pt]
0 &
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right].\nonumber
\end{eqnarray}
Since in this case the part of the equation corresponding to the point $\vec K$
is decoupled from that corresponding to the point $\vec K'$, we can consider
the two parts separately.
In particular, the part of the equation corresponding to the point $\vec K$ is
\begin{equation}
\left[\begin{array}{cc}
0 & \gamma ({\hat\kappa}_x-i{\hat\kappa}_y)\\[3pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) & 0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)
\end{array}\right],
\end{equation}
or (using the Pauli spin matrices (\ref{pauli}))
\begin{equation}
\gamma({\hat\kappa}_x \sigma_x+
{\hat\kappa}_y \sigma_y)\vec F^{\vec K} (\vec r)=
\gamma({\vec{\hat\kappa}}\cdot\vec\sigma)
\vec F^{\vec K} (\vec r)=E\vec F^{\vec K} (\vec r).
\end{equation}
This $\vec k \cdot \vec p$ Hamiltonian matrix, converted into the momentum
representation (see eq.~(\ref{momentum})), has as eigenvalues the dispersion
relations of the two degenerate energy bands $E_s^{\vec K}(\vec \kappa)$ and
as eigenvectors the corresponding electron envelope functions
$F_{s \vec\kappa}^{\vec K} (\vec r)$.
In particular, if we set
\begin{equation}
\det\left\{\left[\begin{array}{cc}
0 & \gamma (\kappa_x-i\kappa_y)\\[3pt]
\gamma (\kappa_x+i\kappa_y) & 0
\end{array}\right]-E
\left[\begin{array}{cc}
1 & 0\\[3pt]
0 & 1
\end{array}\right]\right\}=0,
\end{equation}
we find the dispersion relations
\begin{equation}
E_s^{\vec K}(\vec\kappa)=s \gamma \sqrt{\kappa_x^2+\kappa_y^2}=
s \gamma |\vec \kappa|,
\end{equation}
where $s$ can assume the values $+1$ or $-1$.
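The linear dispersion relation can be checked numerically by diagonalizing the $2\times 2$ $\vec K$-point Hamiltonian in the momentum representation; the values of $\gamma$ and $\vec\kappa$ below are illustrative assumptions only.

```python
import numpy as np

gamma = 6.46          # assumed sample value of the band parameter, eV*Angstrom
kx, ky = 0.05, 0.02   # sample wave vector components, 1/Angstrom

# K-point k.p Hamiltonian in the momentum representation
H_K = gamma * np.array([[0, kx - 1j * ky],
                        [kx + 1j * ky, 0]])

E = np.sort(np.linalg.eigvalsh(H_K))   # s = -1 and s = +1 branches
k = np.hypot(kx, ky)
assert np.allclose(E, [-gamma * k, +gamma * k])
```

The two eigenvalues are $E_s^{\vec K}=s\gamma|\vec\kappa|$ with $s=\pm 1$, as derived above.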
If we define the angle $\alpha$ in such a way that
\begin{equation}
\label{alpha}
\kappa_x+i\kappa_y=|\vec\kappa|e^{i(\frac{\pi}{2}+\alpha)}=
i |\vec\kappa| e^{i \alpha}
\end{equation}
and thus
\begin{equation}
\kappa_x-i\kappa_y=(\kappa_x+i\kappa_y)^*=
|\vec\kappa|e^{i(-\frac{\pi}{2}-\alpha)}=
-i |\vec\kappa| e^{-i \alpha},
\end{equation}
we have that the corresponding envelope functions (properly normalized,
as we will see) are
\begin{equation}
\label{fk}
\vec F_{s \vec\kappa}^{\vec K}(\vec r)=
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s (\vec\kappa)} R(-\alpha (\vec\kappa)) |s \rangle,
\end{equation}
with
\begin{equation}
|s \rangle =\frac{1}{\sqrt{2}}\left[\begin{array}{c}
-is\\
1
\end{array}\right],
\end{equation}
where $\Omega$ is the considered surface area, $\phi_s (\vec\kappa)$ is an
arbitrary phase factor and $R(\alpha)$ is a spin-rotation operator, given by
\begin{equation}
R(\alpha)=\left[\begin{array}{cc}
e^{i\frac{\alpha}{2}} & 0\\
0 & e^{-i\frac{\alpha}{2}}
\end{array}\right].
\end{equation}
This can be easily verified noting that
\begin{eqnarray}
\label{ver1}
&& \gamma \left[\begin{array}{cc}
0 & \hat\kappa_x-i\hat\kappa_y\\
\hat\kappa_x+i\hat\kappa_y & 0
\end{array}\right] \vec F_{s \vec\kappa}^{\vec K}(\vec r)=\\
&& \gamma \left[\begin{array}{cc}
0 & \hat\kappa_x-i\hat\kappa_y\\
\hat\kappa_x+i\hat\kappa_y & 0
\end{array}\right]
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s} R(-\alpha (\vec\kappa)) |s \rangle=\nonumber\\
&& \gamma \left[\begin{array}{cc}
0 & \kappa_x-i\kappa_y\\
\kappa_x+i\kappa_y & 0
\end{array}\right]
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s} R(-\alpha (\vec\kappa)) |s \rangle=\nonumber\\
&& \gamma \left[\begin{array}{cc}
0 & -i|\vec\kappa|e^{-i \alpha}\\
i|\vec\kappa|e^{i\alpha} & 0
\end{array}\right]
\left(\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s} \left[\begin{array}{cc}
e^{-i\frac{\alpha}{2}} & 0\\
0 & e^{i\frac{\alpha}{2}}
\end{array}\right]
\frac{1}{\sqrt{2}}\left[\begin{array}{c}
-is\\
1
\end{array}\right] \right)=\nonumber\\
&& \frac{1}{2\sqrt{\Omega}}\gamma e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s} \!\left[\begin{array}{cc}
0 & -i|\vec\kappa|e^{-i \frac{\alpha}{2}}\\
i|\vec\kappa|e^{i\frac{\alpha}{2}} & 0
\end{array}\right]\!
\left[\begin{array}{c}
-is\\
1
\end{array}\right]\!=\nonumber\\
&& \frac{1}{2\sqrt{\Omega}}\gamma e^{i(\vec\kappa\cdot\vec r+\phi_s)}
\!\left[\begin{array}{c}
-i |\vec\kappa| e^{-i \frac{\alpha}{2}}\\
|\vec\kappa| s e^{i\frac{\alpha}{2}}
\end{array}\right]\nonumber
\end{eqnarray}
and also
\begin{eqnarray}
\label{ver2}
&& E_s^{\vec K} \vec F_{s \vec\kappa}^{\vec K}(\vec r)=
s \gamma |\vec \kappa| \left(\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s} \left[\begin{array}{cc}
e^{-i\frac{\alpha}{2}} & 0\\
0 & e^{i\frac{\alpha}{2}}
\end{array}\right]
\frac{1}{\sqrt{2}}\left[\begin{array}{c}
-is\\
1
\end{array}\right] \right)=\\
&& s \gamma |\vec \kappa| \frac{1}{2\sqrt{\Omega}}
e^{i(\vec\kappa\cdot\vec r+\phi_s)}
\left[\begin{array}{c}
-i s e^{-i \frac{\alpha}{2}}\\
e^{i\frac{\alpha}{2}}
\end{array}\right]=
\frac{1}{2\sqrt{\Omega}} \gamma e^{i(\vec\kappa\cdot\vec r+\phi_s)}
\left[\begin{array}{c}
-i s^2 |\vec \kappa| e^{-i \frac{\alpha}{2}}\\
|\vec \kappa| s e^{i\frac{\alpha}{2}}
\end{array}\right]=\nonumber\\
&& \frac{1}{2\sqrt{\Omega}}\gamma e^{i(\vec\kappa\cdot\vec r+\phi_s)}
\left[\begin{array}{c}
-i |\vec\kappa| e^{-i \frac{\alpha}{2}}\\
|\vec\kappa| s e^{i\frac{\alpha}{2}}
\end{array}\right]\nonumber
\end{eqnarray}
(where we have used the fact that $s^2=(\pm 1)^2=1$).
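The eigenvector verification carried out analytically in eqs.~(\ref{ver1})-(\ref{ver2}) can also be reproduced numerically: the sketch below builds the spinor $R(-\alpha)|s\rangle$ (omitting the plane-wave and arbitrary phase factors, which cancel in the eigenvalue equation) and checks that it satisfies $H_{\vec K}\vec F = s\gamma|\vec\kappa|\vec F$. The parameter values are illustrative assumptions.

```python
import numpy as np

gamma = 6.46          # assumed sample value of the band parameter
kx, ky = 0.04, 0.07   # sample wave vector components
k = np.hypot(kx, ky)

# alpha is defined by kx + i*ky = i*k*exp(i*alpha)
alpha = np.angle((kx + 1j * ky) / 1j)

H_K = gamma * np.array([[0, kx - 1j * ky],
                        [kx + 1j * ky, 0]])

for s in (+1, -1):
    ket_s = np.array([-1j * s, 1]) / np.sqrt(2)
    # Spin-rotation operator R(-alpha)
    R = np.diag([np.exp(-1j * alpha / 2), np.exp(1j * alpha / 2)])
    F = R @ ket_s   # plane-wave factor and arbitrary phase omitted
    assert np.allclose(H_K @ F, s * gamma * k * F)
```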
Instead, the part of the equation corresponding to the point $\vec K'$ is
\begin{equation}
\left[\begin{array}{cc}
0 & \gamma (\hat\kappa_x+i\hat\kappa_y)\\
\gamma (\hat\kappa_x-i\hat\kappa_y) & 0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\
F_B^{\vec K'} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\
F_B^{\vec K'} (\vec r)
\end{array}\right],
\end{equation}
or equivalently (using the Pauli spin matrices (\ref{pauli}))
\begin{equation}
\gamma(\hat\kappa_x \sigma_x-\hat\kappa_y \sigma_y)\vec F^{\vec K'} (\vec r)=
\gamma\left(\left[\begin{array}{c}
\hat\kappa_x \\
-\hat\kappa_y \\
0
\end{array}\right]
\cdot\vec\sigma\right) \vec F^{\vec K'} (\vec r)=E\vec F^{\vec K'} (\vec r).
\end{equation}
If we move to the momentum representation (see eq.~(\ref{momentum}))
and set
\begin{equation}
\det\left\{\left[\begin{array}{cc}
0 & \gamma (\kappa_x+i\kappa_y)\\
\gamma (\kappa_x-i\kappa_y) & 0
\end{array}\right]-E
\left[\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right]\right\}=0,
\end{equation}
we find the dispersion relations
\begin{equation}
E_s^{\vec K'}(\vec\kappa)=s \gamma \sqrt{\kappa_x^2+\kappa_y^2}=
s \gamma|\vec \kappa|,
\end{equation}
where $s$ can assume the values $+1$ or $-1$.
The corresponding envelope functions are
\begin{equation}
\label{fk1}
\vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
\frac{1}{\sqrt{2\Omega}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle},
\end{equation}
with $\tilde\phi_s (\vec\kappa)$ an arbitrary phase factor and
\begin{equation}
\tilde{| s \rangle} =\frac{1}{\sqrt{2}}\left[\begin{array}{c}
is\\
1
\end{array}\right].
\end{equation}
\noindent
This result is easily verified in a way completely analogous to
eqs.~(\ref{ver1})-(\ref{ver2})~\cite{supplem}.
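A numerical check of the $\vec K'$ eigenvectors, analogous to the one for $\vec K$, is sketched below: the spinor $R(\alpha)\tilde{|s\rangle}$ is verified to satisfy $H_{\vec K'}\vec F = s\gamma|\vec\kappa|\vec F$. Again, all numerical parameter values are illustrative assumptions.

```python
import numpy as np

gamma = 6.46           # assumed sample value of the band parameter
kx, ky = -0.03, 0.05   # sample wave vector components
k = np.hypot(kx, ky)
alpha = np.angle((kx + 1j * ky) / 1j)   # kx + i*ky = i*k*exp(i*alpha)

# K'-point k.p Hamiltonian in the momentum representation
H_Kp = gamma * np.array([[0, kx + 1j * ky],
                         [kx - 1j * ky, 0]])

for s in (+1, -1):
    ket_s_tilde = np.array([1j * s, 1]) / np.sqrt(2)
    # Spin-rotation operator R(+alpha)
    R = np.diag([np.exp(1j * alpha / 2), np.exp(-1j * alpha / 2)])
    F = R @ ket_s_tilde   # plane-wave factor and arbitrary phase omitted
    assert np.allclose(H_Kp @ F, s * gamma * k * F)
```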
From these functions $F_A^{\vec K}$, $F_B^{\vec K}$, $F_A^{\vec K'}$
and $F_B^{\vec K'}$, we can find the functions $\psi_A$ and $\psi_B$
and thus the electron wave function $\psi$ in the absence of an external
potential, using the relations (\ref{assumptions}) and (\ref{wavefunction}).
We notice that the energy dispersion relations found in this way near
$\vec K$ and $\vec K'$ are identical to those obtained by first computing
the dispersion relations in the absence of an external potential with the
nearest-neighbor tight-binding technique, and then expanding them near
the band extrema.
Let us now find an expression for the probability density and for the
probability current density in graphene.
The probability to find an electron in a region of area $S$ is equal to
\begin{eqnarray}
&& \int_{S} |\psi(\vec r)|^2 d \vec r=
\int_{S} \psi^*(\vec r) \psi(\vec r) d \vec r=\\
&& \int_{S} \Big[\sum_{\vec R_A}\psi_A^*(\vec R_A)\varphi^*(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B^*(\vec R_B)\varphi^*(\vec r -\vec R_B)\Big]\nonumber\\
&& \cdot\Big[\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B)\Big] d \vec r=\nonumber\\
&& \sum_{\vec R_A \in S} |\psi_A(\vec R_A)|^2
\int_{S} |\varphi(\vec r -\vec R_A)|^2 d \vec r+
\sum_{\vec R_B \in S} |\psi_B(\vec R_B)|^2
\int_{S} |\varphi(\vec r -\vec R_B)|^2 d \vec r\simeq\nonumber\\
&& \sum_{\vec R_A \in S} |\psi_A(\vec R_A)|^2
\int_{\Omega} |\varphi(\vec r -\vec R_A)|^2 d \vec r+
\sum_{\vec R_B \in S} |\psi_B(\vec R_B)|^2
\int_{\Omega} |\varphi(\vec r -\vec R_B)|^2 d \vec r \nonumber
\end{eqnarray}
($\Omega$ is the area of the whole graphene sheet),
where we have exploited the fact that each atomic orbital $\varphi$
has a non-zero overlap only with itself (since we use L\"owdin orthonormalized
atomic orbitals) and has significant values only
near the atom on which it is centered. If the atomic orbital $\varphi$ is
normalized according to (\ref{norm}), the integral of its square modulus on
$\Omega$ is equal to the unit cell area $\Omega_0$ (otherwise, if the usual
normalization for $\varphi$ is adopted, this integral is equal to 1 and
the following results just have to be divided by the constant $\Omega_0$).
Therefore, in this case we have that
\begin{equation}
\int_{S} |\psi(\vec r)|^2 d \vec r\simeq
\Omega_0 \sum_{\vec R_A \in S} |\psi_A(\vec R_A)|^2+
\Omega_0 \sum_{\vec R_B \in S} |\psi_B(\vec R_B)|^2.
\end{equation}
Using the relations (\ref{assumptions}), we have that
\begin{eqnarray}
&& \sum_{\vec R_A} |\psi_A(\vec R_A)|^2=
\sum_{\vec R_A} \psi_A^*(\vec R_A) \psi_A(\vec R_A)=\\
&& \sum_{\vec R_A}
\Big\{\Big[e^{-i \vec K\cdot \vec R_A} {F_A^{\vec K}}^*(\vec R_A)
+i \, e^{-i \theta'} e^{-i \vec K'\cdot \vec R_A}
{F_A^{\vec K'}}^*(\vec R_A)\Big]\nonumber\\
&& \cdot\Big[e^{i \vec K\cdot \vec R_A} F_A^{\vec K}(\vec R_A)
-i \, e^{i \theta'} e^{i \vec K'\cdot \vec R_A}
F_A^{\vec K'}(\vec R_A)\Big]\Big\}=\nonumber\\
&& \sum_{\vec R_A} |F_A^{\vec K}(\vec R_A)|^2
+\sum_{\vec R_A} |F_A^{\vec K'}(\vec R_A)|^2\nonumber\\
&& -i \, e^{i \theta'} \sum_{\vec R_A}
\Big[e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)\Big]\nonumber\\
&& +i \, e^{-i \theta'} \sum_{\vec R_A}
\Big[e^{-i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K'}}^*(\vec R_A) F_A^{\vec K}(\vec R_A)\Big]\nonumber
\end{eqnarray}
and that
\begin{eqnarray}
&& \sum_{\vec R_B} |\psi_B(\vec R_B)|^2=
\sum_{\vec R_B} \psi_B^*(\vec R_B) \psi_B(\vec R_B)=\\
&& \sum_{\vec R_B}
\Big\{\Big[-i \, e^{-i \theta'}
e^{-i \vec K\cdot \vec R_B} {F_B^{\vec K}}^*(\vec R_B)+
e^{-i \vec K'\cdot \vec R_B} {F_B^{\vec K'}}^*(\vec R_B)
\Big]\nonumber\\
&& \cdot\Big[i \, e^{i \theta'}
e^{i \vec K\cdot \vec R_B} F_B^{\vec K} (\vec R_B)+
e^{i \vec K'\cdot \vec R_B} F_B^{\vec K'} (\vec R_B)
\Big]\Big\}=\nonumber\\
&& \sum_{\vec R_B} |F_B^{\vec K}(\vec R_B)|^2
+\sum_{\vec R_B} |F_B^{\vec K'}(\vec R_B)|^2\nonumber\\
&& -i \, e^{-i \theta'} \sum_{\vec R_B}
\Big[e^{i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K}}^*(\vec R_B) F_B^{\vec K'}(\vec R_B)\Big]\nonumber\\
&& +i \, e^{i \theta'} \sum_{\vec R_B}
\Big[e^{-i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K'}}^*(\vec R_B) F_B^{\vec K}(\vec R_B)\Big].\nonumber
\end{eqnarray}
However, the terms containing the phase factors
$e^{i (\vec K'-\vec K) \cdot \vec R_A}$,
$e^{i (\vec K'-\vec K) \cdot \vec R_B}$, or their
complex conjugates are negligible with respect to the others.
Indeed, using the smoothing function $g(\vec r)$, we know from
the property (\ref{sum}) with $\vec r=\vec R_A$ that
$\sum_{\vec R_A'} g(\vec R_A-\vec R_A')=1$. Therefore we
can insert this sum into the term
\begin{equation}
\sum_{\vec R_A} \Big[e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)\Big],
\end{equation}
obtaining
\begin{equation}
\sum_{\vec R_A} \left\{\left[\sum_{\vec R_A'} g(\vec R_A-\vec R_A')\right]
e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)\right\},
\end{equation}
which can be rewritten, exploiting the symmetry of the function $g$
about its center (so that $g(\vec R_A-\vec R_A')=
g(\vec R_A'-\vec R_A)$), in this way:
\begin{equation}
\sum_{\vec R_A} \sum_{\vec R_A'} g(\vec R_A'-\vec R_A)
e^{i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A).
\end{equation}
If then we use the property (\ref{smooth}) with $\vec r=\vec R_A'$
and in particular the fact that
\begin{equation}
g(\vec R_A'-\vec R_A) {F_A^{\vec K}}^*(\vec R_A) F_A^{\vec K'}(\vec R_A)
=g(\vec R_A'-\vec R_A) {F_A^{\vec K}}^*(\vec R_A') F_A^{\vec K'}(\vec R_A')
\end{equation}
(due to the smoothness of the envelope functions), the term becomes
\begin{equation}
\sum_{\vec R_A'} \Big[ \sum_{\vec R_A} g(\vec R_A'-\vec R_A)
e^{i (\vec K'-\vec K) \cdot \vec R_A} \Big]
{F_A^{\vec K}}^*(\vec R_A') F_A^{\vec K'}(\vec R_A')
\end{equation}
and, by way of the property (\ref{phase}) with $\vec r=\vec R_A'$, we conclude
that the quantities between square brackets, and thus the overall term,
are very small.
Analogously, we can see that the terms
\begin{eqnarray}
&& \sum_{\vec R_A} \Big[e^{-i (\vec K'-\vec K) \cdot \vec R_A}
{F_A^{\vec K'}}^*(\vec R_A) F_A^{\vec K}(\vec R_A)\Big],\nonumber\\
&& \sum_{\vec R_B} \Big[e^{i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K}}^*(\vec R_B) F_B^{\vec K'}(\vec R_B)\Big]\nonumber
\end{eqnarray}
and
\begin{equation}
\sum_{\vec R_B} \Big[e^{-i (\vec K'-\vec K) \cdot \vec R_B}
{F_B^{\vec K'}}^*(\vec R_B) F_B^{\vec K}(\vec R_B)\Big]\nonumber
\end{equation}
are negligible~\cite{supplem}.
Since $g(\vec r)$ has non-negligible values only within
a few lattice constants from its center, the previous considerations are
approximately valid also if we limit the sums to the atoms contained in
the area $S$.
We conclude that
\begin{eqnarray}
&& \int_{S} |\psi(\vec r)|^2 d \vec r \simeq
\Omega_0 \sum_{\vec R_A \in S} |\psi_A(\vec R_A)|^2+
\Omega_0 \sum_{\vec R_B \in S} |\psi_B(\vec R_B)|^2\simeq\\
&& \Omega_0 \sum_{\vec R_A \in S} |F_A^{\vec K}(\vec R_A)|^2+
\Omega_0 \sum_{\vec R_A \in S} |F_A^{\vec K'}(\vec R_A)|^2\nonumber\\
&& +\Omega_0 \sum_{\vec R_B \in S} |F_B^{\vec K}(\vec R_B)|^2+
\Omega_0 \sum_{\vec R_B \in S} |F_B^{\vec K'}(\vec R_B)|^2 \simeq \nonumber\\
&& \int_{S} \Big[ |F_A^{\vec K}(\vec r)|^2+|F_A^{\vec K'}(\vec r)|^2+
|F_B^{\vec K}(\vec r)|^2+|F_B^{\vec K'}(\vec r)|^2 \Big] d \vec r,\nonumber
\end{eqnarray}
where we have exploited the fact that the envelope functions $F$ are
smooth functions, which are nearly constant over a unit cell. Therefore
we can consider
\begin{equation}
\label{p}
P=|F_A^{\vec K}(\vec r)|^2+|F_A^{\vec K'}(\vec r)|^2+
|F_B^{\vec K}(\vec r)|^2+|F_B^{\vec K'}(\vec r)|^2
\end{equation}
as a probability density, and the correct normalization condition is
\begin{equation}
\label{normalization}
\int_{\Omega} \left( |F_A^{\vec K}(\vec r)|^2+|F_A^{\vec K'}(\vec r)|^2+
|F_B^{\vec K}(\vec r)|^2+|F_B^{\vec K'}(\vec r)|^2 \right) d \vec r=1.
\end{equation}
We now follow a procedure similar to that used in relativistic quantum
mechanics \cite{sakurai} to find the expression of the probability current
density.
Let us consider the envelope function equation in the case of long-range
external potential (eq.~(\ref{longrange2})), writing explicitly the operators
${\hat\kappa}_x$ and ${\hat\kappa}_y$ (see eq.~(\ref{kappa})). Let us
consider the time-dependent wave function $\psi(\vec r,t)$ and thus the
time-dependent envelope functions $\vec F(\vec r,t)$
($\vec F$ will be the column vector
$[F_A^{\vec K}, \
F_B^{\vec K}, \
F_A^{\vec K'}, \
F_B^{\vec K'}]^{\, T}$).
We now convert the time-independent envelope function equation into a
time-dependent one, substituting in the r.h.s.\ of eq.~(\ref{longrange2})
the quantity $E \vec F(\vec r)$ with
$i\,\hbar\,(\partial \vec F(\vec r,t) / \partial t)$
(for stationary states $\psi(\vec r,t)=\psi(\vec r) e^{-iEt/\hbar}$ and
$\vec F(\vec r,t)=\vec F(\vec r) e^{-iEt/\hbar}$, so that the time-dependent
equation is clearly equivalent to the time-independent one).
Therefore we can write
\begin{eqnarray}
&& \gamma \left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0 &
0 \\[3pt]
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} &
0 &
0 &
0 \\[3pt]
0 &
0 &
0 &
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} \\[3pt]
0 &
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K}\\[3pt]
F_B^{\vec K}\\[3pt]
F_A^{\vec K'}\\[3pt]
F_B^{\vec K'}
\end{array}\right]\\
&& +u(\vec r)
\left[\begin{array}{c}
F_A^{\vec K}\\[3pt]
F_B^{\vec K}\\[3pt]
F_A^{\vec K'}\\[3pt]
F_B^{\vec K'}
\end{array}\right]=
i\,\hbar\,\frac{\partial}{\partial t}\left[\begin{array}{c}
F_A^{\vec K}\\[3pt]
F_B^{\vec K}\\[3pt]
F_A^{\vec K'}\\[3pt]
F_B^{\vec K'}
\end{array}\right].\nonumber
\end{eqnarray}
Dividing by $\gamma$ and using the Pauli matrices (\ref{pauli}), we can
rewrite the equation in this form (in the following we will indicate
with $I$ the $2 \times 2$ identity matrix):
\begin{eqnarray}
\label{eqcu1}
&& \left[\begin{array}{cc}
-i\,\sigma_x & 0 \\[3pt]
0 & -i\,\sigma_x
\end{array}\right] \left(\frac{\partial}{\partial x} \vec F \right)+
\left[\begin{array}{cc}
-i\,\sigma_y & 0 \\[3pt]
0 & i\,\sigma_y
\end{array}\right] \left(\frac{\partial}{\partial y} \vec F \right)\\
&& -\frac{i\,\hbar}{\gamma}
\left[\begin{array}{cc}
I & 0 \\[3pt]
0 & I
\end{array}\right] \left(\frac{\partial}{\partial t} \vec F \right)+
\frac{u(\vec r)}{\gamma}
\left[\begin{array}{cc}
I & 0 \\[3pt]
0 & I
\end{array}\right] \vec F =0\nonumber
\end{eqnarray}
which, if we define
\begin{equation}
A=\left[\begin{array}{cc}
i\,\sigma_x & 0 \\[3pt]
0 & i\,\sigma_x
\end{array}\right]
,\quad
B=\left[\begin{array}{cc}
i\,\sigma_y & 0 \\[3pt]
0 & -i\,\sigma_y
\end{array}\right],
\end{equation}
can be rewritten in this compact way:
\begin{equation}
-A \left(\frac{\partial}{\partial x} \vec F \right)
-B \left(\frac{\partial}{\partial y} \vec F \right)
-\frac{i\,\hbar}{\gamma}
\left(\frac{\partial}{\partial t} \vec F \right)
+\frac{u(\vec r)}{\gamma} \vec F =0.
\end{equation}
If we left-multiply this equation by the row vector $\vec F^\dagger$
(the conjugate transpose of $\vec F$), we obtain:
\begin{equation}
\label{eqcu2}
-\vec F^\dagger A \left(\frac{\partial}{\partial x} \vec F \right)
-\vec F^\dagger B \left(\frac{\partial}{\partial y} \vec F \right)
-\frac{i\,\hbar}{\gamma}
\vec F^\dagger \left(\frac{\partial}{\partial t} \vec F \right)
+\frac{u(\vec r)}{\gamma} \vec F^\dagger \vec F =0.
\end{equation}
Instead, if we consider the conjugate transpose of eq.~(\ref{eqcu1}) we
obtain
\begin{eqnarray}
&& \left(\frac{\partial}{\partial x} \vec F^\dagger \right)
\left[\begin{array}{cc}
i\,\sigma_x^\dagger & 0 \\[3pt]
0 & i\,\sigma_x^\dagger
\end{array}\right]+
\left(\frac{\partial}{\partial y} \vec F^\dagger \right)
\left[\begin{array}{cc}
i\,\sigma_y^\dagger & 0 \\[3pt]
0 & -i\,\sigma_y^\dagger
\end{array}\right]\\
&& +\frac{i\,\hbar}{\gamma}
\left(\frac{\partial}{\partial t} \vec F^\dagger \right)
\left[\begin{array}{cc}
I & 0 \\[3pt]
0 & I
\end{array}\right]+
\frac{u(\vec r)}{\gamma}
\vec F^\dagger
\left[\begin{array}{cc}
I & 0 \\[3pt]
0 & I
\end{array}\right] =0,\nonumber
\end{eqnarray}
which, since $\sigma_x^\dagger=\sigma_x$ and $\sigma_y^\dagger=\sigma_y$,
is equal to
\begin{equation}
\left(\frac{\partial}{\partial x} \vec F^\dagger \right) A+
\left(\frac{\partial}{\partial y} \vec F^\dagger \right) B+
\frac{i\,\hbar}{\gamma}
\left(\frac{\partial}{\partial t} \vec F^\dagger \right)+
\frac{u(\vec r)}{\gamma} \vec F^\dagger =0.
\end{equation}
If we right-multiply this equation by the column vector $\vec F$, we obtain
\begin{equation}
\label{eqcu3}
\left(\frac{\partial}{\partial x} \vec F^\dagger \right) A \vec F+
\left(\frac{\partial}{\partial y} \vec F^\dagger \right) B \vec F+
\frac{i\,\hbar}{\gamma}
\left(\frac{\partial}{\partial t} \vec F^\dagger \right)\vec F+
\frac{u(\vec r)}{\gamma} \vec F^\dagger \vec F=0.
\end{equation}
Subtracting (\ref{eqcu2}) from (\ref{eqcu3}), we find
\begin{eqnarray}
&& \left[\left(\frac{\partial}{\partial x} \vec F^\dagger \right) A \vec F+
\vec F^\dagger A \left(\frac{\partial}{\partial x} \vec F \right)\right]+
\left[\left(\frac{\partial}{\partial y} \vec F^\dagger \right) B \vec F+
\vec F^\dagger B \left(\frac{\partial}{\partial y} \vec F \right)\right]\\
&& +\frac{i\,\hbar}{\gamma}
\left[\left(\frac{\partial}{\partial t} \vec F^\dagger \right)\vec F+
\vec F^\dagger \left(\frac{\partial}{\partial t} \vec F \right)\right]=0
\Rightarrow\nonumber\\
&& \frac{\partial}{\partial x}(\vec F^\dagger A \vec F)+
\frac{\partial}{\partial y}(\vec F^\dagger B \vec F)+
\frac{i\,\hbar}{\gamma}
\frac{\partial}{\partial t}(\vec F^\dagger \vec F)=0.\nonumber
\end{eqnarray}
Since $\vec F^\dagger \vec F=P$ (probability density), we have that
(defining $v_F=\gamma/\hbar$)
\begin{eqnarray}
&& -\frac{\partial}{\partial t} P=
-\frac{\partial}{\partial t}(\vec F^\dagger \vec F)=
-i\left(\frac{\gamma}{\hbar}\right)
\vec\nabla \cdot \left[(\vec F^\dagger A \vec F) \hbox{\boldmath{$\hat x$}}+
(\vec F^\dagger B \vec F) \hbox{\boldmath{$\hat y$}} \right]= \\
&& \vec\nabla \cdot \left[(-i\,v_F\,\vec F^\dagger A \vec F)
\hbox{\boldmath{$\hat x$}}+
(-i\,v_F\,\vec F^\dagger B \vec F) \hbox{\boldmath{$\hat y$}} \right]=
\vec\nabla\cdot\vec J,\nonumber
\end{eqnarray}
which is the well-known continuity equation, if we define as probability
current density the vector
\begin{equation}
\vec J=\left[\begin{array}{c}
J_x \\
J_y
\end{array}\right]=
\left[\begin{array}{c}
-i\,v_F\,\vec F^\dagger A \vec F \\
-i\,v_F\,\vec F^\dagger B \vec F
\end{array}\right].
\end{equation}
In particular, we have that
\begin{eqnarray}
\label{jx}
J_x &=&-i\,v_F\,\vec F^\dagger A \vec F=\\
&& -i\,v_F\,
\left[\begin{array}{cccc}
{F_A^{\vec K}}^* &
{F_B^{\vec K}}^* &
{F_A^{\vec K'}}^* &
{F_B^{\vec K'}}^*
\end{array}\right]
\left[\begin{array}{cccc}
0 & i & 0 & 0 \\[3pt]
i & 0 & 0 & 0 \\[3pt]
0 & 0 & 0 & i \\[3pt]
0 & 0 & i & 0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} \\[3pt]
F_B^{\vec K} \\[3pt]
F_A^{\vec K'} \\[3pt]
F_B^{\vec K'}
\end{array}\right]= \nonumber\\
&& -i\,v_F\,
\left[\begin{array}{cccc}
{F_A^{\vec K}}^* &
{F_B^{\vec K}}^* &
{F_A^{\vec K'}}^* &
{F_B^{\vec K'}}^*
\end{array}\right]
\left[\begin{array}{c}
i\,F_B^{\vec K} \\[3pt]
i\,F_A^{\vec K} \\[3pt]
i\,F_B^{\vec K'} \\[3pt]
i\,F_A^{\vec K'}
\end{array}\right]= \nonumber\\
&& v_F\,\left({F_A^{\vec K}}^* F_B^{\vec K}+
{F_B^{\vec K}}^* F_A^{\vec K}+
{F_A^{\vec K'}}^* F_B^{\vec K'}+
{F_B^{\vec K'}}^* F_A^{\vec K'}\right) \nonumber
\end{eqnarray}
and that
\begin{eqnarray}
\label{jy}
\quad J_y &=& -i\,v_F\,\vec F^\dagger B \vec F=\\
&& -i\,v_F\,
\left[\begin{array}{cccc}
{F_A^{\vec K}}^* &
{F_B^{\vec K}}^* &
{F_A^{\vec K'}}^* &
{F_B^{\vec K'}}^*
\end{array}\right]
\left[\begin{array}{cccc}
0 & 1 & 0 & 0 \\[3pt]
-1 & 0 & 0 & 0 \\[3pt]
0 & 0 & 0 & -1 \\[3pt]
0 & 0 & 1 & 0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} \\[3pt]
F_B^{\vec K} \\[3pt]
F_A^{\vec K'} \\[3pt]
F_B^{\vec K'}
\end{array}\right]= \nonumber\\
&& -i\,v_F\,
\left[\begin{array}{cccc}
{F_A^{\vec K}}^* &
{F_B^{\vec K}}^* &
{F_A^{\vec K'}}^* &
{F_B^{\vec K'}}^*
\end{array}\right]
\left[\begin{array}{c}
F_B^{\vec K} \\[3pt]
-F_A^{\vec K} \\[3pt]
-F_B^{\vec K'} \\[3pt]
F_A^{\vec K'}
\end{array}\right]= \nonumber\\
&& -i\,v_F\,\left({F_A^{\vec K}}^* F_B^{\vec K}-
{F_B^{\vec K}}^* F_A^{\vec K}-
{F_A^{\vec K'}}^* F_B^{\vec K'}+
{F_B^{\vec K'}}^* F_A^{\vec K'}\right).\nonumber
\end{eqnarray}
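As a numerical sanity check (a sketch in Python, with $v_F$ set to an arbitrary value and a randomly chosen envelope vector, both our assumptions for illustration only), one can verify that the matrix expressions for $J_x$ and $J_y$ reduce to the expanded forms (\ref{jx}) and (\ref{jy}), and that both components are real, as a physical current density must be:

```python
import numpy as np

v_F = 1.0  # arbitrary units; only the algebraic identity is checked

# Matrices A and B appearing in J_x = -i v_F F^dag A F and J_y = -i v_F F^dag B F
A = np.array([[0, 1j, 0, 0],
              [1j, 0, 0, 0],
              [0, 0, 0, 1j],
              [0, 0, 1j, 0]])
B = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, -1],
              [0, 0, 1, 0]])

rng = np.random.default_rng(0)
F = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # [F_A^K, F_B^K, F_A^K', F_B^K']
FAK, FBK, FAK1, FBK1 = F

Jx_matrix = -1j * v_F * F.conj() @ A @ F
Jy_matrix = -1j * v_F * F.conj() @ B @ F

# Expanded expressions (jx) and (jy)
Jx_expanded = v_F * (FAK.conjugate()*FBK + FBK.conjugate()*FAK
                     + FAK1.conjugate()*FBK1 + FBK1.conjugate()*FAK1)
Jy_expanded = -1j * v_F * (FAK.conjugate()*FBK - FBK.conjugate()*FAK
                           - FAK1.conjugate()*FBK1 + FBK1.conjugate()*FAK1)

assert np.isclose(Jx_matrix, Jx_expanded)
assert np.isclose(Jy_matrix, Jy_expanded)
# Both components are real (tiny imaginary part is numerical noise)
assert abs(Jx_matrix.imag) < 1e-12 and abs(Jy_matrix.imag) < 1e-12
```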
We note that a different ordering of the elements inside the envelope function
vector is often used~\cite{akhmerov1,beenakker}, in which, instead of
$\vec F$, the vector
$\vec{\tilde F}=
[F_A^{\vec K} (\vec r), \
F_B^{\vec K} (\vec r), \
F_B^{\vec K'} (\vec r), \
F_A^{\vec K'} (\vec r)]^{\, T}$ is considered.
Consequently, the $\vec k \cdot \vec p$ equation in the case of long-range
external potential (\ref{longrange2}) can be rewritten in this way:
\begin{eqnarray}
&& \left[\begin{array}{cccc}
u(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
0 &
0 \\[3pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
u(\vec r) &
0 &
0 \\[3pt]
0 &
0 &
u(\vec r) &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) \\[3pt]
0 &
0 &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
u(\vec r)
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)
\end{array}\right]=\\
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)
\end{array}\right],\nonumber
\end{eqnarray}
which is the so-called valley-isotropic representation of the Dirac equation,
characterized by two identical $2 \times 2$ submatrices corresponding to the
two valleys $\vec K$ and $\vec K'$.
Following this representation, the previously obtained expressions for
the probability current density can be compactly restated in this form:
\begin{equation}
\vec J=v_F\,\vec{\tilde F}^\dagger (I\otimes\vec\sigma) \vec{\tilde F},\nonumber
\end{equation}
where $I\otimes\vec\sigma$ is the Kronecker product between the $2 \times 2$
identity matrix $I$ and the vector $\vec\sigma$ of Pauli matrices.
Indeed, the resulting $x$ and $y$ components of $\vec J$ are
\begin{eqnarray}
J_x &=& v_F\,\vec{\tilde F}^\dagger (I\otimes\vec\sigma_x) \vec{\tilde F}=\\
&& v_F \,\left[\begin{array}{cccc}
{F_A^{\vec K}}^* &
{F_B^{\vec K}}^* &
{F_B^{\vec K'}}^* &
{F_A^{\vec K'}}^*
\end{array}\right]
\left[\begin{array}{cccc}
0 & 1 & 0 & 0 \\[6pt]
1 & 0 & 0 & 0 \\[6pt]
0 & 0 & 0 & 1 \\[6pt]
0 & 0 & 1 & 0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} \\[6pt]
F_B^{\vec K} \\[6pt]
F_B^{\vec K'} \\[6pt]
F_A^{\vec K'}
\end{array}\right]= \nonumber\\[6pt]
&& v_F\,\left({F_A^{\vec K}}^* F_B^{\vec K}+
{F_B^{\vec K}}^* F_A^{\vec K}+
{F_B^{\vec K'}}^* F_A^{\vec K'}+
{F_A^{\vec K'}}^* F_B^{\vec K'}\right);\nonumber\\[6pt]
J_y &=& v_F\,\vec{\tilde F}^\dagger (I\otimes\vec\sigma_y) \vec{\tilde F}=\nonumber\\[6pt]
&& v_F \,\left[\begin{array}{cccc}
{F_A^{\vec K}}^* &
{F_B^{\vec K}}^* &
{F_B^{\vec K'}}^* &
{F_A^{\vec K'}}^*
\end{array}\right]
\left[\begin{array}{cccc}
0 & -i & 0 & 0 \\[6pt]
i & 0 & 0 & 0 \\[6pt]
0 & 0 & 0 & -i \\[6pt]
0 & 0 & i & 0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} \\[6pt]
F_B^{\vec K} \\[6pt]
F_B^{\vec K'} \\[6pt]
F_A^{\vec K'}
\end{array}\right]= \nonumber\\[6pt]
&& -i\,v_F\,\left({F_A^{\vec K}}^* F_B^{\vec K}-
{F_B^{\vec K}}^* F_A^{\vec K}+
{F_B^{\vec K'}}^* F_A^{\vec K'}-
{F_A^{\vec K'}}^* F_B^{\vec K'}\right),\nonumber
\end{eqnarray}
\vskip6pt\noindent
which coincide with eqs.~(\ref{jx})-(\ref{jy}).
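This coincidence can also be checked numerically: reordering a randomly chosen $\vec F$ into $\vec{\tilde F}$ and building $I\otimes\sigma_x$ and $I\otimes\sigma_y$ with a Kronecker product reproduces $-i\,v_F\,\vec F^\dagger A \vec F$ and $-i\,v_F\,\vec F^\dagger B \vec F$ (a minimal sketch; the matrices are copied from the equations above and the test vector is arbitrary):

```python
import numpy as np

v_F = 1.0
A = np.array([[0, 1j, 0, 0], [1j, 0, 0, 0], [0, 0, 0, 1j], [0, 0, 1j, 0]])
B = np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

rng = np.random.default_rng(1)
F = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # [F_A^K, F_B^K, F_A^K', F_B^K']
Ft = F[[0, 1, 3, 2]]                                      # [F_A^K, F_B^K, F_B^K', F_A^K']

# Valley-isotropic expressions for the current components
Jx_iso = v_F * Ft.conj() @ np.kron(I2, sx) @ Ft
Jy_iso = v_F * Ft.conj() @ np.kron(I2, sy) @ Ft

# They must coincide with the original matrix expressions
assert np.isclose(Jx_iso, -1j * v_F * F.conj() @ A @ F)
assert np.isclose(Jy_iso, -1j * v_F * F.conj() @ B @ F)
```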
It is useful to notice that the Dirac equation in the absence of an
external potential is satisfied not only by the eigenvector
$\vec F (\vec r)=[F_A^{\vec K} (\vec r), \
F_B^{\vec K} (\vec r), \
F_A^{\vec K'} (\vec r), \
F_B^{\vec K'} (\vec r)]^{\, T}$
with eigenvalue $E$ (as we see in (\ref{absence})), but also by the
eigenvector
$\vec F_1 (\vec r)=[F_A^{\vec K} (\vec r), \
-F_B^{\vec K} (\vec r), \
F_A^{\vec K'} (\vec r), \
-F_B^{\vec K'} (\vec r)]^{\, T}$
with eigenvalue $-E$, since (\ref{absence}) is equivalent to
\vskip6pt\noindent
\begin{eqnarray}
&& \left[\begin{array}{cccc}
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
0 &
0 \\[6pt]
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) &
0 &
0 &
0 \\[6pt]
0 &
0 &
0 &
\gamma ({\hat\kappa}_x+i{\hat\kappa}_y) \\[6pt]
0 &
0 &
\gamma ({\hat\kappa}_x-i{\hat\kappa}_y) &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
-F_B^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
-F_B^{\vec K'} (\vec r)
\end{array}\right]=\\[6pt]
&& -E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
-F_B^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
-F_B^{\vec K'} (\vec r)
\end{array}\right].\nonumber
\end{eqnarray}
\vskip6pt\noindent
The wave functions $\psi (\vec r)$ and $\psi_1 (\vec r)$ corresponding
to the envelope functions $\vec F (\vec r)$ and $\vec F_1 (\vec r)$
therefore have opposite energies; thus, since they are eigenfunctions of the
Hermitian operator $H$ (see eq.~(\ref{hamiltonian})) corresponding to
different eigenvalues, they are orthogonal. But, due to the
form of $\vec F (\vec r)$ and $\vec F_1 (\vec r)$ and to
eq.~(\ref{assumptions2}), we see that $\psi (\vec r)$ and $\psi_1 (\vec r)$
have the same $\psi_A (\vec r)$ but opposite $\psi_B (\vec r)$.
Therefore, if we write the orthogonality relation between $\psi (\vec r)$ and
$\psi_1 (\vec r)$, we have that
\begin{eqnarray}
\quad 0 &=& \int_\Omega \psi (\vec r)^* \psi_1 (\vec r) d \vec r=\\
&& \int_\Omega \Big[\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B)\Big]^* \nonumber\\
&& \cdot\Big[\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)-
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B)\Big] d \vec r=\nonumber\\
&& \sum_{\vec R_A} |\psi_A (\vec R_A)|^2
\int_\Omega |\varphi(\vec r -\vec R_A)|^2 d \vec r-
\sum_{\vec R_B} |\psi_B (\vec R_B)|^2
\int_\Omega |\varphi(\vec r -\vec R_B)|^2 d \vec r=\nonumber\\
&& \sum_{\vec R_A} |\psi_A (\vec R_A)|^2 \Omega_0-
\sum_{\vec R_B} |\psi_B (\vec R_B)|^2 \Omega_0 \Rightarrow\nonumber\\
&& \Omega_0 \sum_{\vec R_A} |\psi_A (\vec R_A)|^2 =
\Omega_0 \sum_{\vec R_B} |\psi_B (\vec R_B)|^2,\nonumber
\end{eqnarray}
where we have exploited the fact that each atomic wave function $\varphi$
has a non-zero overlap only with itself and has been normalized according to
(\ref{norm}).
Since (as we have seen)
\begin{eqnarray}
&& \Omega_0 \sum_{\vec R_A} |\psi_A (\vec R_A)|^2\simeq
\Omega_0 \sum_{\vec R_A} |F_A^{\vec K} (\vec R_A)|^2+
\Omega_0 \sum_{\vec R_A} |F_A^{\vec K'} (\vec R_A)|^2\simeq\\
&& \int_{\Omega} \left(
|F_A^{\vec K} (\vec r)|^2+|F_A^{\vec K'} (\vec r)|^2
\right)d \vec r, \nonumber\\
&& \Omega_0 \sum_{\vec R_B} |\psi_B (\vec R_B)|^2\simeq
\Omega_0 \sum_{\vec R_B} |F_B^{\vec K} (\vec R_B)|^2+
\Omega_0 \sum_{\vec R_B} |F_B^{\vec K'} (\vec R_B)|^2\simeq\nonumber\\
&& \int_{\Omega} \left(
|F_B^{\vec K} (\vec r)|^2+|F_B^{\vec K'} (\vec r)|^2
\right)d \vec r,\nonumber
\end{eqnarray}
we conclude that
\begin{equation}
\int_{\Omega} \left(
|F_A^{\vec K} (\vec r)|^2+|F_A^{\vec K'} (\vec r)|^2
\right)d \vec r=
\int_{\Omega} \left(
|F_B^{\vec K} (\vec r)|^2+|F_B^{\vec K'} (\vec r)|^2
\right)d \vec r
\end{equation}
and this means that in the absence of an external potential the
normalization (\ref{normalization}) is equivalent to
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\int_{\Omega} \left(
|F_A^{\vec K} (\vec r)|^2+|F_A^{\vec K'} (\vec r)|^2
\right)d \vec r=\frac{1}{2},\\
\\
\displaystyle
\int_{\Omega} \left(
|F_B^{\vec K} (\vec r)|^2+|F_B^{\vec K'} (\vec r)|^2
\right)d \vec r=\frac{1}{2}
\end{array}\right.
\end{equation}
(the expressions of the envelope functions previously written for graphene
in the absence of an external potential satisfy this normalization
criterion).
\section{Application of the $\vec k \cdot \vec p$ method to carbon
nanotubes}
A single-wall carbon nanotube can be described as a graphite sheet rolled,
along one of its lattice translational vectors (the vector $\vec C_h$
shown in fig.~\ref{f7}),
into a cylindrical shape~\cite{saito}. In particular, it is completely
specified by the so-called chiral vector $\vec C_h$, which corresponds
to a section of the nanotube perpendicular to the nanotube axis: it
connects two points of the graphene sheet which coincide in the nanotube,
and thus its length equals the nanotube circumference. This vector can
be expressed as a linear combination of the real space unit vectors of
graphene with integer coefficients $n$ and $m$
\begin{equation}
\vec C_h=n\vec a_1 +m\vec a_2 \mathrel{\mathop\equiv_{\Sigma'}}
n a \left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}\\[7pt]
\noalign{\vskip3pt}
\displaystyle \frac{1}{2}\\[7pt]
\noalign{\vskip3pt}
0
\end{array}\right]+
m a \left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}\\[7pt]
\noalign{\vskip3pt}
\displaystyle -\frac{1}{2}\\[7pt]
\noalign{\vskip3pt}
0
\end{array}\right]=
a \left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}(n+m)\\[7pt]
\noalign{\vskip3pt}
\displaystyle \frac{1}{2}(n-m)\\[7pt]
\noalign{\vskip3pt}
0
\end{array}\right].
\end{equation}
The corresponding carbon nanotube will be indicated as $(n, m)$.
We define the chiral angle $\theta$ of the nanotube (with
$-\pi /6 < \theta \leq \pi /6$, due to the hexagonal symmetry of the graphene
lattice) as the angle (positive in the clockwise direction) between
$\vec a_1$ and $\vec C_h$ (see fig.~\ref{f7}) or, equivalently, as the tilt
angle of the edges of the hexagons constituting the graphene sheet with
respect to the direction of the nanotube axis. This angle can be found
from the values of $n$ and $m$ by noting that
\begin{equation}
\cos \theta=\frac{\vec C_h \cdot \vec a_1}{|\vec C_h||\vec a_1|}=
\frac{2n+m}{2\sqrt{n^2+m^2+nm}}
\end{equation}
and
\begin{equation}
\sin \theta=\frac{(\vec C_h \times \vec a_1)\cdot\hbox{\boldmath{$\hat z$}}'}
{|\vec C_h||\vec a_1|}=\frac{\sqrt{3} m}{2\sqrt{n^2+m^2+nm}},
\end{equation}
where the right-hand reference frame $\Sigma'=
(\hbox{\boldmath{$\hat x$}}',\hbox{\boldmath{$\hat y$}}',
\hbox{\boldmath{$\hat z$}}')$ is
that already used in the calculations on graphene.
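The two relations above determine $\theta$ uniquely; a small Python helper (the function name \texttt{chiral\_angle} is ours, introduced for illustration) recovers the expected limiting values for zigzag ($m=0$, $\theta=0$) and armchair ($n=m$, $\theta=\pi/6$) nanotubes:

```python
import numpy as np

def chiral_angle(n, m):
    """Chiral angle theta (radians) of an (n, m) nanotube, from
    cos(theta) = (2n+m)/(2 sqrt(n^2+m^2+nm)) and
    sin(theta) = sqrt(3) m/(2 sqrt(n^2+m^2+nm))."""
    d = 2.0 * np.sqrt(n**2 + m**2 + n*m)
    return np.arctan2(np.sqrt(3.0) * m / d, (2.0*n + m) / d)

# Limiting cases: zigzag (m = 0) -> theta = 0; armchair (n = m) -> theta = pi/6
assert np.isclose(chiral_angle(10, 0), 0.0)
assert np.isclose(chiral_angle(5, 5), np.pi/6)
```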
In the following expressions we will identify the previously introduced
angle $\theta'$ with $\theta'=(\pi / 6)-\theta$ (the angle between
$\vec C_h$ and the axis $\hbox{\boldmath{$\hat x$}}'$), as shown in
fig.~\ref{f7}, and thus we will take the axis $\hbox{\boldmath{$\hat x$}}$
along $\vec C_h$.
Following Ando's approach~\cite{ando1,ando2},
the dispersion relations and the electron wave functions of a carbon
nanotube can be obtained from those of graphene, enforcing
for the electron wave function the following periodic boundary condition in
the circumferential direction:
\begin{equation}
\psi (\vec r+\vec C_h)=\psi (\vec r)
\end{equation}
(in the calculations we will not consider curvature
effects\footnotemark{}).
\footnotetext{For the effects of the finite curvature on the electronic
properties of carbon nanotubes see, for example, ref.~\cite{reich} and the
references therein.}
Remembering that using the tight-binding technique the electron wave function
can be expressed as
\begin{equation}
\psi (\vec r)=\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B),
\end{equation}
the boundary condition can be written as
\begin{eqnarray}
&& \psi (\vec r+\vec C_h)=\\
&& \sum_{\vec R_A}\psi_A (\vec R_A)\varphi((\vec r+\vec C_h) -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi((\vec r+\vec C_h) -\vec R_B)=\nonumber\\
&& \sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -(\vec R_A-\vec C_h))+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -(\vec R_B-\vec C_h))=\nonumber\\
&& \!\!\sum_{\vec R_A}\!\psi_A ((\vec R_A-\vec C_h)+\vec C_h)
\varphi(\vec r -(\vec R_A-\vec C_h))\nonumber\\
&& +\!\sum_{\vec R_B}\!\psi_B ((\vec R_B-\vec C_h)+\vec C_h)
\varphi(\vec r -(\vec R_B-\vec C_h))\!\!=\nonumber\\
&& \sum_{\vec R^*_A}\psi_A (\vec R^*_A+\vec C_h) \varphi(\vec r -\vec R^*_A)+
\sum_{\vec R^*_B}\psi_B (\vec R^*_B+\vec C_h) \varphi(\vec r -\vec R^*_B)=\nonumber\\
&& \psi (\vec r)=
\sum_{\vec R^*_A}\psi_A (\vec R^*_A) \varphi(\vec r -\vec R^*_A)+
\sum_{\vec R^*_B}\psi_B (\vec R^*_B) \varphi(\vec r -\vec R^*_B)\nonumber
\end{eqnarray}
(where we have used the fact that, since $\vec C_h$ is a linear combination
with integer coefficients of the real space lattice unit vectors,
$\vec R_A-\vec C_h$ and $\vec R_B-\vec C_h$ are also atomic positions, which
we denote $\vec R^*_A$ and $\vec R^*_B$). Thus the boundary condition is equivalent
to the two conditions
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec R^*_A+\vec C_h)=\psi_A (\vec R^*_A),\\[7pt]
\displaystyle
\psi_B (\vec R^*_B+\vec C_h)=\psi_B (\vec R^*_B).
\end{array} \right.
\end{equation}
\noindent
If we use the expressions (\ref{assumptions}) for $\psi_A (\vec r)$
and $\psi_B (\vec r)$ (and we define again the generic atomic position
$\vec R_A$ and $\vec R_B$, instead of $\vec R^*_A$ and $\vec R^*_B$), these
conditions can be rewritten in the following form:
\begin{equation}
\label{conditions}
\left\{ \begin{array}{l}
\displaystyle
e^{i \vec K\cdot (\vec R_A+\vec C_h)} F_A^{\vec K}(\vec R_A+\vec C_h)
-i \, e^{i \theta'} e^{i \vec K'\cdot (\vec R_A+\vec C_h)}
F_A^{\vec K'}(\vec R_A+\vec C_h)=\\[7pt]
\qquad\qquad\qquad\qquad e^{i \vec K\cdot \vec R_A} F_A^{\vec K}(\vec R_A)
-i \, e^{i \theta'} e^{i \vec K'\cdot \vec R_A}
F_A^{\vec K'}(\vec R_A),\\[7pt]
\displaystyle
i \, e^{i \theta'}
e^{i \vec K\cdot (\vec R_B+\vec C_h)} F_B^{\vec K} (\vec R_B+\vec C_h)+
e^{i \vec K'\cdot (\vec R_B+\vec C_h)} F_B^{\vec K'} (\vec R_B+\vec C_h)=\\[7pt]
\qquad\qquad\qquad\qquad i \, e^{i \theta'}
e^{i \vec K\cdot \vec R_B} F_B^{\vec K} (\vec R_B)+
e^{i \vec K'\cdot \vec R_B} F_B^{\vec K'} (\vec R_B).
\end{array} \right.
\end{equation}
\noindent
Multiplying the first equation of (\ref{conditions}) by
$g(\vec r-\vec R_A) e^{-i \vec K\cdot \vec R_A}$,
summing it over $\vec R_A$ and then using the properties of the function
$g$ (defined in eqs.~(\ref{sum}), (\ref{phase}) and (\ref{smooth})), we find
\begin{eqnarray}
&& e^{i \vec K\cdot \vec C_h} \sum_{\vec R_A} g(\vec r-\vec R_A)
F_A^{\vec K}(\vec R_A+\vec C_h)\\
&& -i \, e^{i \theta'} e^{i \vec K'\cdot \vec C_h} \sum_{\vec R_A}
g(\vec r-\vec R_A) e^{i (\vec K'-\vec K) \cdot \vec R_A}
F_A^{\vec K'}(\vec R_A+\vec C_h)=\nonumber\\
&& \sum_{\vec R_A} g(\vec r-\vec R_A) F_A^{\vec K}(\vec R_A)
-i \, e^{i \theta'} \sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K) \cdot \vec R_A}
F_A^{\vec K'}(\vec R_A) \Rightarrow\nonumber\\
&& e^{i \vec K\cdot \vec C_h} \left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right]
F_A^{\vec K}(\vec r+\vec C_h)\nonumber\\
&& -i \, e^{i \theta'} e^{i \vec K'\cdot \vec C_h} \left[\sum_{\vec R_A}
g(\vec r-\vec R_A) e^{i (\vec K'-\vec K) \cdot \vec R_A}\right]
F_A^{\vec K'}(\vec r+\vec C_h)=\nonumber\\
&& \left[\sum_{\vec R_A} g(\vec r-\vec R_A)\right] F_A^{\vec K}(\vec r)
-i \, e^{i \theta'} \left[\sum_{\vec R_A} g(\vec r-\vec R_A)
e^{i (\vec K'-\vec K) \cdot \vec R_A}\right]
F_A^{\vec K'}(\vec r) \Rightarrow\nonumber\\
&& e^{i \vec K\cdot \vec C_h} F_A^{\vec K}(\vec r+\vec C_h)=
F_A^{\vec K}(\vec r).\nonumber
\end{eqnarray}
If we calculate the scalar product between $\vec K$ and $\vec C_h$
we obtain
\begin{equation}
\label{kch}
\vec K\cdot \vec C_h =
\frac{2\pi}{3}(m-n)=2\pi\tilde N+\frac{2\pi\nu}{3},
\end{equation}
where $m-n=3 \tilde N +\nu$, with $\nu=0$ or $\pm 1$ and $\tilde N$ a proper
integer. Therefore we have that
\begin{equation}
e^{i \vec K\cdot \vec C_h}=e^{i 2\pi\tilde N} e^{i \frac{2\pi\nu}{3}}=
e^{i \frac{2\pi\nu}{3}}
\end{equation}
and thus the first boundary condition near $\vec K$ is
\begin{equation}
e^{i \frac{2\pi\nu}{3}}F_A^{\vec K}(\vec r+\vec C_h)=F_A^{\vec K}(\vec r),
\end{equation}
or equivalently
\begin{equation}
F_A^{\vec K}(\vec r+\vec C_h)=e^{-i \frac{2\pi\nu}{3}}F_A^{\vec K}(\vec r).
\end{equation}
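Since the value of $\nu$ enters all the boundary conditions that follow, it can be useful to check numerically that $e^{i \vec K\cdot \vec C_h}$ indeed depends only on $(m-n)$ modulo 3 (a minimal Python sketch; the helper names are our own):

```python
import cmath

def nu_of(n, m):
    """nu in {0, +1, -1} such that m - n = 3*N_tilde + nu for integer N_tilde."""
    return ((m - n) + 1) % 3 - 1

def phase_K_Ch(n, m):
    """e^{i K.C_h}, using K.C_h = (2 pi/3)(m - n) from eq. (kch)."""
    return cmath.exp(1j * 2.0 * cmath.pi / 3.0 * (m - n))

# The phase equals e^{i 2 pi nu / 3} for every chirality tested
for (n, m) in [(10, 0), (9, 0), (6, 5), (5, 5), (7, 3)]:
    nu = nu_of(n, m)
    assert cmath.isclose(phase_K_Ch(n, m),
                         cmath.exp(1j * 2.0 * cmath.pi * nu / 3.0))
```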
Multiplying the second equation of (\ref{conditions}) by
$g(\vec r-\vec R_B) (-i e^{-i\theta'}e^{-i \vec K\cdot \vec R_B})$,
summing it over $\vec R_B$ and then using the properties of the function
$g$, we find analogously~\cite{supplem}
\begin{equation}
e^{i \vec K\cdot \vec C_h} F_B^{\vec K}(\vec r+\vec C_h)=
F_B^{\vec K}(\vec r).
\end{equation}
Substituting the value of $e^{i \vec K\cdot \vec C_h}$, we can rewrite
this boundary condition in the form
\begin{equation}
e^{i \frac{2\pi\nu}{3}}F_B^{\vec K}(\vec r+\vec C_h)=F_B^{\vec K}(\vec r),
\end{equation}
or, equivalently
\begin{equation}
F_B^{\vec K}(\vec r+\vec C_h)=e^{-i \frac{2\pi\nu}{3}} F_B^{\vec K}(\vec r).
\end{equation}
Thus the periodic boundary condition near $\vec K$ is
\begin{equation}
\left[\begin{array}{c}
F_A^{\vec K}(\vec r+\vec C_h)\\[5pt]
F_B^{\vec K}(\vec r+\vec C_h)
\end{array}\right]=e^{-i \frac{2\pi\nu}{3}}
\left[\begin{array}{c}
F_A^{\vec K}(\vec r)\\[5pt]
F_B^{\vec K}(\vec r)
\end{array}\right],
\end{equation}
which can be written in this compact way:
\begin{equation}
\vec F^{\vec K}(\vec r+\vec C_h)=e^{-i \frac{2\pi\nu}{3}}
\vec F^{\vec K}(\vec r).
\end{equation}
On the other hand, as we have previously seen (eq.~(\ref{fk})), in the
absence of an external potential the envelope functions have the following form:
\begin{eqnarray}
\vec F_{s \vec\kappa}^{\vec K}(\vec r) &=&
\frac{1}{\sqrt{2 L \ell}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s (\vec\kappa)} R(-\alpha (\vec\kappa)) |s \rangle =\\
&& \frac{1}{\sqrt{2 L \ell}}e^{i (\kappa_x x +\kappa_y y )}
e^{i\phi_s (\vec\kappa)} R(-\alpha (\vec\kappa)) |s \rangle,\nonumber
\end{eqnarray}
with the surface area $\Omega=L \ell$, where $L=|\vec C_h|$ and $\ell$ is the
length of the nanotube. Thus the periodic boundary condition
becomes
\begin{equation}
\frac{1}{\sqrt{2 L \ell}}
e^{i\vec\kappa\cdot(\vec r+\vec C_h)}
e^{i\phi_s (\vec\kappa)} R(-\alpha (\vec\kappa)) |s \rangle =
e^{-i \frac{2\pi\nu}{3}}
\frac{1}{\sqrt{2 L \ell}}
e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s (\vec\kappa)} R(-\alpha (\vec\kappa)) |s \rangle,
\end{equation}
or equivalently
\begin{equation}
e^{i\vec\kappa\cdot\vec C_h}=e^{-i \frac{2\pi\nu}{3}}.
\end{equation}
This condition can be written also in the following way:
\begin{equation}
e^{i \kappa_x L}=e^{-i \frac{2\pi\nu}{3}}\cdot 1=
e^{-i \frac{2\pi\nu}{3}}e^{i 2\pi \tilde n},
\end{equation}
or, equivalently
\begin{equation}
\kappa_x L=-\frac{2\pi\nu}{3}+2\pi \tilde n
\end{equation}
and thus
\begin{equation}
\kappa_x =\frac{2\pi}{L}\left(\tilde n-\frac{\nu}{3}\right)=
\kappa_{\nu} (\tilde n),
\end{equation}
with $\tilde n$ integer.
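The quantization rule for $\kappa_x$ can be sketched as a small Python helper (the function name and the numerical lattice constant are our assumptions, not part of the derivation):

```python
import numpy as np

def kappa_x_allowed(n, m, a, n_tilde):
    """Allowed transverse wave vector kappa_nu(n_tilde) = (2 pi/L)(n_tilde - nu/3)
    near K, with L = |C_h| = a sqrt(n^2 + m^2 + n m) and m - n = 3 N_tilde + nu."""
    L = a * np.sqrt(n**2 + m**2 + n*m)
    nu = ((m - n) + 1) % 3 - 1          # nu in {0, +1, -1} with 3 | (m - n - nu)
    return 2.0 * np.pi / L * (n_tilde - nu / 3.0)

a = 0.246  # graphene lattice constant in nm (assumed value)

# (9,0): m - n = -9, nu = 0 -> kappa_x = 0 is allowed (metallic nanotube)
assert np.isclose(kappa_x_allowed(9, 0, a, 0), 0.0)

# (10,0): m - n = -10, nu = -1 -> smallest |kappa_x| is 2 pi/(3 L)
L = a * np.sqrt(10**2)
assert np.isclose(kappa_x_allowed(10, 0, a, 0), 2.0 * np.pi / (3.0 * L))
```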
This condition on $\kappa_x$ can also be obtained in a different way,
by enforcing the boundary condition on the overall wave vector $\vec k$.
In order to do this, we have to observe that, considering only the periodic
lattice potential inside the graphene sheet, the wave function
$\psi (\vec r)$ has to be a Bloch function
$u(\vec k, \vec r) e^{i \vec k\cdot \vec r}$, where
$u(\vec k, \vec r)$ has the periodicity of the lattice.
Thus the boundary condition
\begin{equation}
\psi (\vec r+\vec C_h)=\psi (\vec r)
\end{equation}
is equivalent to
\begin{equation}
u(\vec k, \vec r+\vec C_h) e^{i \vec k\cdot (\vec r+\vec C_h)}=
u(\vec k, \vec r) e^{i \vec k\cdot \vec r}.
\end{equation}
Since $u(\vec k, \vec r)$ has the lattice periodicity
and thus $u(\vec k, \vec r+\vec C_h)=u(\vec k, \vec r)$
($\vec C_h$ being a linear combination with integer coefficients of the
lattice unit vectors), the boundary condition can also be written as
\begin{equation}
e^{i \vec k\cdot \vec C_h}=1,
\end{equation}
or, equivalently
\begin{equation}
\vec k\cdot \vec C_h=2\pi \tilde m.
\end{equation}
Thus the boundary condition is
(since $\hbox{\boldmath{$\hat C$}}_h=\vec C_h /|\vec C_h|=\vec C_h /L$)
\begin{equation}
\vec k\cdot \hbox{\boldmath{$\hat C$}}_h=\vec k\cdot
\hbox{\boldmath{$\hat x$}}=
k_x=(\vec K)_x+\kappa_x=\frac{2\pi}{L} \tilde m
\end{equation}
and (using eq.~(\ref{kch}))
\begin{eqnarray}
\kappa_x &=& \frac{2\pi}{L} \tilde m-(\vec K)_x=
\frac{2\pi}{L} \tilde m-\frac{\vec K\cdot \vec C_h}{L}=
\frac{2\pi}{L} \tilde m-\frac{2\pi}{L}\tilde N-\frac{2\pi}{3 L}\nu=\\
&& \frac{2\pi}{L} \left(\tilde m-\tilde N-\frac{\nu}{3}\right)=
\frac{2\pi}{L} \left(\tilde n-\frac{\nu}{3}\right)=
\kappa_{\nu} (\tilde n)\nonumber
\end{eqnarray}
(with $\tilde n \equiv \tilde m-\tilde N$),
which is equal to the previously found expression.
If we substitute this condition on $\kappa_x$ in the dispersion
relations of graphene, we find
\begin{equation}
E_{s,\tilde n}^{\vec K} (\kappa_y)=s\gamma|\vec\kappa|=
s\gamma\sqrt{\kappa_x^2+\kappa_y^2}=
s\gamma\sqrt{\kappa_{\nu} (\tilde n)^2+\kappa_y^2},
\end{equation}
where $s=+1$ and $s=-1$ indicate the conduction and valence bands,
respectively.
We notice that now $k_y$ is the wave vector $k$ of the nanotube, which,
being an essentially one-dimensional material,
has a one-dimensional Brillouin zone of width $2 \pi /T$ (where $T$
is the length of the unit cell of the nanotube along its axis, which
can be easily found from the numbers $n$ and $m$ characterizing the
nanotube \cite{saito}). Correspondingly, $\kappa_y$ is the difference between
the wave vector $k$ of the nanotube and the component of $\vec K$
along $y$.
As to the envelope functions near $\vec K$, if, starting from eq.~(\ref{fk}),
we choose as value of the arbitrary phase $\phi_s=-\alpha /2$ and then
we enforce the condition on $\kappa_x$, we can write
\begin{eqnarray}
&& \vec F_{s \vec\kappa}^{\vec K}(\vec r)=
\frac{1}{\sqrt{2 L \ell}}e^{i\vec\kappa\cdot\vec r}
e^{i\phi_s} \left[\begin{array}{cc}
e^{-i\frac{\alpha}{2}} & 0\\
0 & e^{i\frac{\alpha}{2}}
\end{array}\right]
\frac{1}{\sqrt{2}}\left[\begin{array}{c}
-is\\
1
\end{array}\right]=\\
&& \frac{1}{2\sqrt{L \ell}}
e^{i(\kappa_x x+\kappa_y y)}e^{i\phi_s}
\left[\begin{array}{c}
-i s e^{-i \frac{\alpha}{2}}\\
e^{i\frac{\alpha}{2}}
\end{array}\right]=
\frac{1}{2\sqrt{L \ell}} e^{i(\kappa_x x+\kappa_y y)}
\left[\begin{array}{c}
-i s e^{-i \alpha}\\
1
\end{array}\right]=\nonumber\\
&& \frac{1}{2\sqrt{L \ell}}
\left[\begin{array}{c}
s e^{-i (\frac{\pi}{2}+\alpha)}\\
1
\end{array}\right]
e^{i\kappa_x x+i\kappa_y y}=\nonumber\\
&& \frac{1}{2\sqrt{L \ell}}
\left[\begin{array}{c}
s b_{\nu}(\tilde n,\kappa_y)\\
1
\end{array}\right]
e^{i \kappa_{\nu}(\tilde n) x+i\kappa_y y}=
\vec F_{s \tilde n \kappa_y}^{\vec K}(\vec r).\nonumber
\end{eqnarray}
The function $b_{\nu}(\tilde n,\kappa_y)=e^{-i (\frac{\pi}{2}+\alpha)}$
can be found noting that $\alpha$ has been defined (see eq.~(\ref{alpha}))
in such a way that
\begin{equation}
\kappa_x+i \kappa_y=|\vec\kappa|e^{i\left(\frac{\pi}{2}+\alpha\right)};
\end{equation}
this means that
\begin{equation}
e^{i\left(\frac{\pi}{2}+\alpha\right)}=
\frac{{\kappa_x+i \kappa_y}}{\sqrt{\kappa_x^2+\kappa_y^2}}
\end{equation}
and thus
\begin{eqnarray}
b_{\nu}(\tilde n,\kappa_y) &=& e^{-i\left(\frac{\pi}{2}+\alpha\right)}=
\left(e^{i\left(\frac{\pi}{2}+\alpha\right)}\right)^*=\\
&& \left(\frac{\kappa_x+i \kappa_y}{\sqrt{\kappa_x^2+\kappa_y^2}}\right)^*=
\frac{\kappa_x-i \kappa_y}{\sqrt{\kappa_x^2+\kappa_y^2}}=
\frac{\kappa_{\nu}(\tilde n)-i \kappa_y}
{\sqrt{\kappa_{\nu}(\tilde n)^2+\kappa_y^2}}.\nonumber
\end{eqnarray}
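A quick numerical check (with a hypothetical helper \texttt{b\_nu} and arbitrary test values of $\kappa_x$ and $\kappa_y$) confirms that $b_{\nu}$ is a pure phase factor consistent with the definition of $\alpha$:

```python
import numpy as np

def b_nu(kx, ky):
    """b = (kappa_x - i kappa_y)/sqrt(kappa_x^2 + kappa_y^2) = e^{-i(pi/2 + alpha)}."""
    return (kx - 1j * ky) / np.sqrt(kx**2 + ky**2)

kx, ky = 0.7, -1.3   # arbitrary test values
b = b_nu(kx, ky)

assert np.isclose(abs(b), 1.0)  # a pure phase factor
# Consistency with kappa_x + i kappa_y = |kappa| e^{i(pi/2 + alpha)}:
assert np.isclose(b.conjugate() * np.hypot(kx, ky), kx + 1j * ky)
```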
We can proceed analogously for the boundary conditions near $\vec K'$.
Indeed, multiplying the first equation of (\ref{conditions}) by
$g(\vec r-\vec R_A)
(i e^{-i\theta'} e^{-i \vec K'\cdot \vec R_A})$,
summing it over $\vec R_A$ and then using the properties of the function
$g$, we find~\cite{supplem}
\begin{equation}
e^{i \vec K'\cdot \vec C_h} F_A^{\vec K'}(\vec r+\vec C_h)=
F_A^{\vec K'}(\vec r).
\end{equation}
The scalar product between $\vec K'$ and $\vec C_h$ is equal to
\begin{equation}
\label{k1ch}
\vec K'\cdot \vec C_h=-\frac{2\pi}{3}(m-n)=
-2\pi\tilde N-\frac{2\pi\nu}{3},
\end{equation}
where we have used the previously introduced relation
$m-n=3 \tilde N +\nu$ with $\nu=0$ or $\pm 1$ and $\tilde N$ a proper
integer. Thus we have that
\begin{equation}
e^{i \vec K'\cdot \vec C_h}=e^{-i 2\pi \tilde N}
e^{-i \frac{2\pi\nu}{3}}=e^{-i \frac{2\pi\nu}{3}}
\end{equation}
and consequently the boundary condition near $\vec K'$ is
\begin{equation}
e^{-i \frac{2\pi\nu}{3}}F_A^{\vec K'}(\vec r+\vec C_h)=
F_A^{\vec K'}(\vec r),
\end{equation}
or, equivalently
\begin{equation}
F_A^{\vec K'}(\vec r+\vec C_h)=
e^{i \frac{2\pi\nu}{3}}F_A^{\vec K'}(\vec r).
\end{equation}
On the other hand, multiplying the second equation of (\ref{conditions}) by
$g(\vec r-\vec R_B) e^{-i \vec K'\cdot \vec R_B}$,
summing it over $\vec R_B$ and then using the properties of the function
$g$, we find~\cite{supplem}
\begin{equation}
e^{i \vec K'\cdot \vec C_h} F_B^{\vec K'}(\vec r+\vec C_h)=
F_B^{\vec K'}(\vec r).
\end{equation}
Substituting the value of $e^{i \vec K'\cdot \vec C_h}$, we can rewrite this
second boundary condition near $\vec K'$ in the form
\begin{equation}
e^{-i \frac{2\pi\nu}{3}}F_B^{\vec K'}(\vec r+\vec C_h)=
F_B^{\vec K'}(\vec r),
\end{equation}
or, equivalently
\begin{equation}
F_B^{\vec K'}(\vec r+\vec C_h)=
e^{i \frac{2\pi\nu}{3}}F_B^{\vec K'}(\vec r).
\end{equation}
Thus the overall periodic boundary condition near $\vec K'$ is
\begin{equation}
\left[\begin{array}{c}
F_A^{\vec K'}(\vec r+\vec C_h)\\[5pt]
F_B^{\vec K'}(\vec r+\vec C_h)
\end{array}\right]=e^{i \frac{2\pi\nu}{3}}
\left[\begin{array}{c}
F_A^{\vec K'}(\vec r)\\[5pt]
F_B^{\vec K'}(\vec r)
\end{array}\right],
\end{equation}
which can be written in a compact form
\begin{equation}
\vec F^{\vec K'}(\vec r+\vec C_h)=e^{i \frac{2\pi\nu}{3}}
\vec F^{\vec K'}(\vec r).
\end{equation}
Substituting the form that, in the absence of an external potential,
the envelope functions have near $\vec K'$ (eq.~(\ref{fk1}))
\begin{equation}
\vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
\frac{1}{\sqrt{2 L \ell}}e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle} =
\frac{1}{\sqrt{2 L \ell}}e^{i (\kappa_x x +\kappa_y y )}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle},
\end{equation}
the periodic boundary condition becomes
\begin{equation}
\frac{1}{\sqrt{2 L \ell}}
e^{i\vec\kappa\cdot(\vec r+\vec C_h)}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle} =
e^{i \frac{2\pi\nu}{3}}
\frac{1}{\sqrt{2 L \ell}}
e^{i\vec\kappa\cdot\vec r}
e^{i\tilde\phi_s (\vec\kappa)} R(\alpha (\vec\kappa)) \tilde{| s \rangle},
\end{equation}
or, equivalently
\begin{equation}
e^{i\vec\kappa\cdot\vec C_h}=e^{i \frac{2\pi\nu}{3}}.
\end{equation}
This can be rewritten in the form
\begin{equation}
e^{i \kappa_x L}=e^{i \frac{2\pi\nu}{3}}\cdot 1
=e^{i \frac{2\pi\nu}{3}}e^{i 2\pi \overline{n}},
\end{equation}
or, equivalently
\begin{equation}
\kappa_x L=\frac{2\pi\nu}{3}+2\pi \overline{n}
\end{equation}
and thus
\begin{equation}
\kappa_x =\frac{2\pi}{L}\left(\overline{n}+\frac{\nu}{3}\right)=
\tilde\kappa_{\nu} (\overline{n}),
\end{equation}
with $\overline{n}$ integer.
Analogously to what we have done near $\vec K$, this condition on
$\kappa_x$ can also be found~\cite{supplem} by setting
$\displaystyle e^{i \vec k\cdot \vec C_h}=1$.
If we substitute this condition on $\kappa_x$ in the dispersion
relations of graphene, we find
\begin{equation}
E_{s,\overline{n}}^{\vec K'} (\kappa_y)=s\gamma|\vec\kappa|=
s\gamma\sqrt{\kappa_x^2+\kappa_y^2}=
s\gamma\sqrt{\tilde\kappa_{\nu} (\overline{n})^2+\kappa_y^2},
\end{equation}
where $k_y$ is now the wave vector $k$ of the nanotube and
$\kappa_y$ is the difference between the wave vector $k$ of the nanotube
and the component of $\vec K'$ along $y$.
On the other hand, if, starting from eq.~(\ref{fk1}), we choose as arbitrary
phase $\tilde\phi_s=\alpha /2$ and then we enforce the condition on $\kappa_x$,
we find~\cite{supplem} as envelope functions in the carbon nanotube near
$\vec K'$
\begin{equation}
\vec F_{s \vec\kappa}^{\vec K'}(\vec r)=
\frac{1}{2 \sqrt{L \ell}}
\left[\begin{array}{c}
s \tilde b_{\nu}(\overline{n},\kappa_y)\\
1
\end{array}\right]
e^{i \tilde\kappa_{\nu}(\overline{n}) x+i\kappa_y y}=
\vec F_{s \overline{n} \kappa_y}^{\vec K'}(\vec r),
\end{equation}
where (using the definition of the angle $\alpha$: see eq.~(\ref{alpha}))
\begin{equation}
\tilde b_{\nu}(\overline{n},\kappa_y)=e^{i\left(\frac{\pi}{2}+\alpha\right)}=
\frac{\kappa_x+i \kappa_y}{\sqrt{\kappa_x^2+\kappa_y^2}}=
\frac{\tilde\kappa_{\nu}(\overline{n})+i \kappa_y}
{\sqrt{\tilde\kappa_{\nu}(\overline{n})^2+\kappa_y^2}}.
\end{equation}
\noindent
If $m-n$ is a multiple of 3 and thus $\nu=0$, for $\tilde n=0$ and
$\overline{n}=0$ we have that
$\kappa_{\nu} (\tilde n)=0$ and $\tilde\kappa_{\nu} (\overline{n})=0$,
and consequently $E_{s}=s \gamma |\kappa_y|$,
which vanishes for $\kappa_y=0$, so that $E_{+}=E_{-}=0$.
This means that when $m-n$ is a multiple of 3 the points $\vec K$
and $\vec K'$, where the upper and lower bands of graphene are degenerate,
are among the values of $\vec k$ allowed by the periodic boundary condition,
and thus the nanotube is metallic.
Instead, if $m-n$ is not a multiple of 3 and thus $\nu=\pm 1$,
the allowed $\vec k$s nearest to $\vec K$ and $\vec K'$
correspond to $\tilde n=0$ and $\overline{n}=0$, for which
$\kappa_{\nu} (\tilde n)=\mp 2\pi /(3L)$ and
$\tilde\kappa_{\nu} (\overline{n})=\pm 2\pi /(3L)$,
and consequently
\begin{equation}
E_{s}=s\gamma\sqrt{\left(\frac{2\pi}{3 L}\right)^2+\kappa_y^2}.
\end{equation}
In particular, the minimum and maximum values of the nanotube bands are
obtained with the further position $\kappa_y=0$ and therefore are equal to
\begin{equation}
E_{s}=s\gamma \frac{2\pi}{3 L};
\end{equation}
thus the bandgap of the nanotube is
\begin{equation}
E_g=E_{+}-E_{-}=2 \gamma \frac{2\pi}{3 L}=\frac{4\pi\gamma}{3 L}=
\frac{4\pi}{3 L}\frac{\sqrt{3}a\gamma_0}{2}=
2 \frac{\pi}{L}\frac{a}{\sqrt{3}}\gamma_0=\frac{2\gamma_0\, a_{C-C}}{d_t},
\end{equation}
where $d_t=L / \pi$ is the nanotube diameter. Therefore the
bandgap of the nanotube is inversely proportional to the nanotube diameter.
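As an illustration (a sketch under assumed parameter values: the hopping energy $\gamma_0 \approx 2.7\,$eV is a typical literature value not fixed in this text, and $a_{C-C}=0.142\,$nm), the gap formula $E_g = 2\gamma_0\, a_{C-C}/d_t$ can be evaluated in Python, together with the metallicity criterion $\nu=0$:

```python
import numpy as np

gamma0 = 2.7   # nearest-neighbor hopping energy in eV (assumed typical value)
a_cc = 0.142   # carbon-carbon distance in nm

def bandgap(n, m):
    """E_g = 2 gamma0 a_CC / d_t for a semiconducting (n, m) nanotube,
    with diameter d_t = L/pi, L = a sqrt(n^2 + m^2 + n m), a = sqrt(3) a_CC."""
    if (m - n) % 3 == 0:
        return 0.0  # nu = 0: metallic nanotube, no gap
    a = np.sqrt(3.0) * a_cc
    d_t = a * np.sqrt(n**2 + m**2 + n*m) / np.pi
    return 2.0 * gamma0 * a_cc / d_t   # in eV

assert bandgap(9, 0) == 0.0            # m - n multiple of 3: metallic
assert bandgap(5, 5) == 0.0            # armchair tubes are always metallic
assert bandgap(10, 0) > 0.0            # semiconducting
# The gap scales as 1/d_t: doubling (n, m) halves the gap
assert np.isclose(bandgap(20, 0), bandgap(10, 0) / 2.0)
```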
\begin{figure}
\centering
\includegraphics[width=.75\textwidth,angle=0]{rev4.eps}
\caption{The nanotube (10,0) and its dispersion relations, obtained both by
means of the tight-binding method (solid lines) and (for the bands
corresponding to the smallest values of $|\kappa_{\nu} (\tilde n)|$ and
$|\tilde\kappa_{\nu} (\overline{n})|$) by means of the $\vec k\cdot\vec p$
method (dashed lines).}
\label{f8}
\vskip5pt
\centering
\includegraphics[width=.52\textwidth,angle=0]{rev5.eps}
\caption{The density of states per unit length of the nanotube (10,0),
obtained both by means of the tight-binding method (solid lines) and
(in a smaller region around $E=0$) by means of the $\vec k\cdot\vec p$
method (dashed lines).}
\label{f9}
\end{figure}
We can observe that the approximate approach for the computation of the
density of states in carbon nanotubes proposed by J.~W.~Mintmire and
C.~T.~White~\cite{mintmire}, being based on a linear approximation of the
dispersion relations of graphene near the extrema points, can be
seen as a consequence of a $\vec k \cdot \vec p$ study of
the nanotube energy bands.
In fig.~\ref{f8} we compare the dispersion relations that we have obtained
for the same carbon nanotube using the nearest-neighbor tight-binding
method and the $\vec k \cdot \vec p$ method (without considering curvature
effects)~\cite{marconcini1,marconcini2,marconcini3,marconcini4}. We see that
the $\vec k \cdot \vec p$ method gives a good approximation for the portions of
energy bands of the nanotube deriving from the graphene dispersion relations
around $\vec K$ and $\vec K'$.
In fig.~\ref{f9}, instead, for the same nanotube we show both the
density of states that we have obtained by properly differentiating
the tight-binding dispersion relations, and the
density of states deriving from the Mintmire-White approach
\cite{marconcini1,marconcini2}. We see that this last approximation gives
good results near $E=0$, thus in the region corresponding to the
graphene dispersion relations around $\vec K$ and $\vec K'$.
\section{Application of the $\vec k \cdot \vec p$ method to graphene
nanoribbons}
A graphene sheet can be laterally confined (along the $y$-direction) to
form a graphene nanoribbon (extending in the $x$-direction). The properties
of the nanoribbon strongly depend on the characteristics of the boundary.
Here we will consider nanoribbons with perfect zigzag and armchair edges, that
can be easily studied using the Dirac equation and enforcing the correct
boundary conditions~\cite{brey1,brey2,wakabayashi1,castroneto,wurm,tworzydlo,
wakabayashi2}.
An analysis of the boundary conditions that have to be enforced in
nanoribbons with more general terminations can be found in
ref.~\cite{akhmerov2}.
In particular, we will perform the analytical calculations in the absence
of an external potential following Brey and Fertig's approach
\cite{brey1,brey2}, but using the representation adopted in the previous
sections.
While inside the nanoribbon each atom has 3
nearest-neighbor atoms, for the atoms on the edges of the
ribbon some of the nearest-neighbor lattice sites are outside the
ribbon and thus are not occupied by a carbon atom. These lattice sites are
instead occupied by passivation atoms (such as hydrogen atoms), which saturate
the dangling bonds. The correct boundary condition to be enforced in
our calculations is the vanishing of the wave function at
these lattice sites (let us call them ``boundary lattice sites'').
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth,angle=0]{nanoribbon-zigzag.eps}
\caption{Sketch of a zigzag nanoribbon with $N$ zigzag lines (the black
atoms are carbon atoms, while the grey atoms are passivation atoms).}
\label{f10}
\end{figure}
\subsection{Zigzag nanoribbons}
In the case of zigzag nanoribbons (fig.~\ref{f10}), the graphene sheet has
been cut at an angle of $30^\circ$ with respect to the nearest-neighbor
carbon bonds, and therefore the edges have a zigzag shape. In order to
simplify the following calculations, we can choose (see fig.~\ref{f10})
the graphene lattice vectors in the real space $\vec a_1$ and $\vec a_2$
(and consequently those in the reciprocal space $\vec b_1$ and $\vec b_2$)
in this way (we express them in the reference frame
$\Sigma=(\hbox{\boldmath{$\hat x$}},\hbox{\boldmath{$\hat y$}},
\hbox{\boldmath{$\hat z$}})$):
\begin{eqnarray}
& \vec a_1 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle \frac{a}{2}\\
\noalign{\vskip3pt}
\displaystyle -\frac{\sqrt{3}}{2}\,a\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
& \vec a_2 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle -\frac{a}{2}\\
\noalign{\vskip3pt}
\displaystyle -\frac{\sqrt{3}}{2}\,a\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\\
& \vec b_1 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle \frac{2\pi}{a}\\
\noalign{\vskip3pt}
\displaystyle -\frac{2\pi}{\sqrt{3}a}\\
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
& \vec b_2 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle -\frac{2\pi}{a}\\
\noalign{\vskip3pt}
\displaystyle -\frac{2\pi}{\sqrt{3}a}\\
\noalign{\vskip3pt}
0
\end{array}
\right]\nonumber
\end{eqnarray}
(which, being
$\vec b_1=2\pi (\vec a_2 \times \hbox{\boldmath{$\hat z$}})/
(\vec a_1 \cdot (\vec a_2 \times \hbox{\boldmath{$\hat z$}}))$ and
$\vec b_2=2\pi (\hbox{\boldmath{$\hat z$}} \times \vec a_1)/
(\vec a_1 \cdot (\vec a_2 \times \hbox{\boldmath{$\hat z$}}))$,
fulfill the relation $\vec a_i \cdot \vec b_j=2 \pi \delta_{ij}$).
Consequently we have that
\begin{eqnarray}
\label{kz}
&& \vec K =\frac{1}{3}(\vec b_2-\vec b_1) \mathrel{\mathop\equiv_{\Sigma}}
\frac{4\pi}{3a}
\left[\begin{array}{c}
-1\\
\noalign{\vskip3pt}
0 \\
\noalign{\vskip3pt}
0
\end{array}\right]=
\left[\begin{array}{c}
-K\\
\noalign{\vskip3pt}
0 \\
\noalign{\vskip3pt}
0
\end{array}\right],\\
&& \vec K' =\frac{1}{3}(\vec b_1-\vec b_2) \mathrel{\mathop\equiv_{\Sigma}}
\frac{4\pi}{3a}
\left[\begin{array}{c}
1\\
\noalign{\vskip3pt}
0\\
\noalign{\vskip3pt}
0
\end{array}\right]=
\left[\begin{array}{c}
K\\
\noalign{\vskip3pt}
0 \\
\noalign{\vskip3pt}
0
\end{array}\right],\nonumber
\end{eqnarray}
where we have defined $K=4\pi/(3a)$. For our choice of $\vec a_1$
and $\vec a_2$, the angle $\theta'$ from the vector $\vec a_1+\vec a_2$ ({\em i.e.}
from the axis $\hbox{\boldmath{$\hat x$}}'$ used in previous calculations)
to the axis $\hbox{\boldmath{$\hat x$}}$ (taken in the longitudinal direction)
is equal to $\pi/2$.
Therefore the total wave function is given by (eq.~(\ref{wavefunction}))
\begin{equation}
\label{wavefunctionbisz}
\psi (\vec r)=
\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B),
\end{equation}
with (eq.~(\ref{assumptions2}) with $\theta'=\pi/2$)
\begin{equation}
\label{envelopez}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec r)=
e^{i \vec K\cdot \vec r} F_A^{\vec K}(\vec r)+
e^{i \vec K'\cdot \vec r} F_A^{\vec K'}(\vec r),\\[5pt]
\displaystyle
\psi_B (\vec r)=
-e^{i \vec K\cdot \vec r} F_B^{\vec K} (\vec r)+
e^{i \vec K'\cdot \vec r} F_B^{\vec K'} (\vec r),
\end{array} \right.
\end{equation}
where (using eq.~(\ref{kz})), if we write
$\displaystyle \vec r \mathrel{\mathop\equiv_{\Sigma}} [x,y,0]^T$ we
have that $\vec K\cdot \vec r=-Kx$ and that $\vec K'\cdot \vec r=Kx$.
In the absence of an external potential, the envelope functions
satisfy the usual Dirac equation (eq.~(\ref{absence}))
\begin{eqnarray}
&& \gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0 &
0 \\[3pt]
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} &
0 &
0 &
0 \\[3pt]
0 &
0 &
0 &
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} \\[3pt]
0 &
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\[3pt]
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)\\[3pt]
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right].\nonumber
\end{eqnarray}
Due to the translational invariance along the $x$-direction, we can write
the envelope functions as the product of a propagating part along the
longitudinal direction $x$ and of a confined part along the transverse
direction $y$. Therefore we can assume that
\begin{equation}
\label{phiz}
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[5pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=
e^{i \kappa_x x}
\left[\begin{array}{c}
\Phi_A^{\vec K} (y)\\[5pt]
\Phi_B^{\vec K} (y)
\end{array}\right],
\ \hbox{and that}\
\left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\[5pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=
e^{i \kappa'_x x}
\left[\begin{array}{c}
\Phi_A^{\vec K'} (y)\\[5pt]
\Phi_B^{\vec K'} (y)
\end{array}\right].
\end{equation}
We have to enforce that the overall wave function vanishes in correspondence
with the ``boundary lattice sites'' on the lower and upper edges of the ribbon.
Let us define as $W$ the real width of the nanoribbon, {\em i.e.} the distance
between the lowest row of carbon atoms (all of type $A$) and
the highest row of carbon atoms (all of type $B$); if the ribbon
has $N$ zigzag lines across its width, we have that $W=(3N-2)a_{C-C}/2$.
If we take $y=0$ at the row of ``boundary lattice sites''
on the lower edge, the row of ``boundary lattice sites'' on the
upper edge will be at $y=\tilde W=W+2 a_{C-C}={(3N+2)a_{C-C}/2}$.
The proper boundary condition thus implies that, for every $x$,
$\psi (x,y=0)=\psi (x,y=\tilde W)=0$.
Since in the zigzag nanoribbon all the ``boundary lattice sites''
on the lower edge belong to the $B$ sublattice, while all those on the upper
edge belong to the $A$ sublattice, looking at
eq.~(\ref{wavefunctionbisz}) and observing that the atomic orbitals
$\varphi$ are strongly localized around the atom on which they are centered,
the boundary condition on the wave function is equivalent to setting,
for every $x$, $\psi_B (x,y=0)=\psi_A (x,y=\tilde W)=0$.
Using eq.~(\ref{envelopez}), we have that
\begin{eqnarray}
&& \psi_B (x,y=0)=0 \ \ \forall x \Rightarrow\
-e^{-iKx} F_B^{\vec K} (x,y=0)+e^{iKx} F_B^{\vec K'} (x,y=0)=\\[5pt]
&& -e^{-iKx} e^{i \kappa_x x} \Phi_B^{\vec K}(0)+
e^{iKx} e^{i \kappa'_x x} \Phi_B^{\vec K'}(0)=0\ \forall x
\Rightarrow\nonumber\\[5pt]
&& \Phi_B^{\vec K}(0)=0,\quad\Phi_B^{\vec K'}(0)=0\nonumber
\end{eqnarray}
and that
\begin{eqnarray}
&& \psi_A (x,y=\tilde W)=0 \ \ \forall x \Rightarrow\
e^{-iKx} F_A^{\vec K} (x,y=\tilde W)+e^{iKx} F_A^{\vec K'} (x,y=\tilde W)=\\[5pt]
&& e^{-iKx} e^{i \kappa_x x} \Phi_A^{\vec K}(\tilde W)+
e^{iKx} e^{i \kappa'_x x} \Phi_A^{\vec K'}(\tilde W)=0\ \forall x
\Rightarrow\nonumber\\[5pt]
&& \Phi_A^{\vec K}(\tilde W)=0,\quad
\Phi_A^{\vec K'}(\tilde W)=0.\nonumber
\end{eqnarray}
As we can see, in zigzag nanoribbons the boundary conditions do not couple
the envelope functions relative to the Dirac points $\vec K$ and $\vec K'$.
First let us make the calculation {\em around the point} {\boldmath $\vec K$}.
The corresponding part of the Dirac equation is:
\begin{eqnarray}
\label{systemza}
&& \gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} \\[5pt]
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[5pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[5pt]
F_B^{\vec K} (\vec r)
\end{array}\right]
\Rightarrow\\[5pt]
&& \gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} \\[5pt]
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
\Phi_A^{\vec K}(y)e^{i \kappa_x x}\\[5pt]
\Phi_B^{\vec K}(y)e^{i \kappa_x x}
\end{array}\right]=\nonumber\\[5pt]
&& \gamma
\left[\begin{array}{c}
\kappa_x \Phi_B^{\vec K}(y) e^{i \kappa_x x}-
e^{i \kappa_x x}\frac{d}{d\,y}\Phi_B^{\vec K}(y)\\[5pt]
\kappa_x \Phi_A^{\vec K}(y) e^{i \kappa_x x}+
e^{i \kappa_x x}\frac{d}{d\,y} \Phi_A^{\vec K}(y)
\end{array}\right]=\nonumber\\[5pt]
&& \gamma
\left[\begin{array}{cccc}
0 &
\kappa_x-\frac{d}{d\,y} \\[5pt]
\kappa_x+\frac{d}{d\,y} &
0
\end{array}\right]
\left[\begin{array}{c}
\Phi_A^{\vec K}(y)\\[5pt]
\Phi_B^{\vec K}(y)
\end{array}\right] e^{i \kappa_x x}=\nonumber\\[5pt]
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[5pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
\Phi_A^{\vec K}(y)\\[5pt]
\Phi_B^{\vec K}(y)
\end{array}\right] e^{i \kappa_x x}
\Rightarrow\nonumber\\[5pt]
&& \left[\begin{array}{cccc}
0 &
\kappa_x-\frac{d}{d\,y} \\[5pt]
\kappa_x+\frac{d}{d\,y} &
0
\end{array}\right]
\left[\begin{array}{c}
\Phi_A^{\vec K}(y)\\[5pt]
\Phi_B^{\vec K}(y)
\end{array}\right]=
\frac{E}{\gamma} \left[\begin{array}{c}
\Phi_A^{\vec K}(y)\\[5pt]
\Phi_B^{\vec K}(y)
\end{array}\right],\nonumber
\end{eqnarray}
which can be rewritten as
\begin{equation}
\label{systemzb}
\left\{ \begin{array}{l}
\displaystyle
\left(\kappa_x-\frac{d}{d\,y}\right)
\Phi_B^{\vec K}(y)=\frac{E}{\gamma}\Phi_A^{\vec K}(y),\\[10pt]
\displaystyle
\left(\kappa_x+\frac{d}{d\,y}\right)
\Phi_A^{\vec K}(y)=\frac{E}{\gamma}\Phi_B^{\vec K}(y).
\end{array} \right.
\end{equation}
Obtaining $\Phi_B^{\vec K}(y)$ from the second of (\ref{systemzb})
and then substituting $\Phi_A^{\vec K}(y)$ from the first of
(\ref{systemzb}), we find:
\begin{eqnarray}
\label{solaz}
&& \Phi_B^{\vec K}(y)=\frac{\gamma}{E}
\left(\kappa_x+\frac{d}{d\,y}\right)
\Phi_A^{\vec K}(y)=\left(\frac{\gamma}{E}\right)^2
\left(\kappa_x+\frac{d}{d\,y}\right)
\left(\kappa_x-\frac{d}{d\,y}\right)
\Phi_B^{\vec K}(y)=\\[5pt]
&& \left(\frac{\gamma}{E}\right)^2
\left(\kappa_x^2-\kappa_x\frac{d}{d\,y}
+\kappa_x\frac{d}{d\,y}-
\frac{d^2}{d\,y^2}\right)
\Phi_B^{\vec K}(y)=\nonumber\\[5pt]
&& \left(\frac{\gamma}{E}\right)^2
\left(\kappa_x^2-
\frac{d^2}{d\,y^2}\right)
\Phi_B^{\vec K}(y)\Rightarrow\nonumber\\[5pt]
&& \left(-\frac{d^2}{d\,y^2}+\kappa_x^2\right)
\Phi_B^{\vec K}(y)=\left(\frac{E}{\gamma}\right)^2
\Phi_B^{\vec K}(y),\nonumber
\end{eqnarray}
the solution of which is
\begin{eqnarray}
\label{solbz}
&& \Phi_B^{\vec K}(y)=A e^{zy}+B e^{-zy},\\
&& \hbox{with}\quad
z=\sqrt{\kappa_x^2-\left(\frac{E}{\gamma}\right)^2}
\quad\left(\;\hbox{and thus}\quad
E=\pm \gamma \sqrt{\kappa_x^2-z^2}\;\right).\nonumber
\end{eqnarray}
Substituting $\Phi_B^{\vec K}(y)$ back into the first of (\ref{systemzb}),
we obtain that
\begin{eqnarray}
\label{solcz}
\Phi_A^{\vec K}(y) &=& \frac{\gamma}{E}
\left(\kappa_x-\frac{d}{d\,y}\right)
\Phi_B^{\vec K}(y)=\\
&& \frac{\gamma}{E}\left(\kappa_x A e^{zy}+\kappa_x B e^{-zy}-
z A e^{zy}+z B e^{-zy}\right)=\nonumber\\
&& \frac{\gamma}{E}\left((\kappa_x-z) A e^{zy}+(\kappa_x+z) B e^{-zy}\right).\nonumber
\end{eqnarray}
Let us now enforce the boundary conditions on $\Phi_B^{\vec K}(y)$ and
$\Phi_A^{\vec K}(y)$
\begin{eqnarray}
\label{realk}
&& \Phi_B^{\vec K}(0)=0 \Rightarrow
A+B=0 \Rightarrow B=-A;\\
&& \Phi_A^{\vec K}(\tilde W)=0 \Rightarrow
\frac{\gamma}{E}\left((\kappa_x-z) A e^{z \tilde W}+
(\kappa_x+z) B e^{-z \tilde W}\right)=0 \Rightarrow\nonumber\\
&& (\kappa_x-z) A e^{z \tilde W}-
(\kappa_x+z) A e^{-z \tilde W}=0 \Rightarrow\nonumber\\
&& (\kappa_x-z) A e^{z \tilde W}=
(\kappa_x+z) A e^{-z \tilde W} \Rightarrow\nonumber\\
&& e^{-2 z \tilde W}=\frac{\kappa_x-z}{\kappa_x+z}.\nonumber
\end{eqnarray}
As we can see, in zigzag nanoribbons the longitudinal and the transverse
wave vectors are coupled.
Incidentally, note that, instead of eq.~(\ref{realk}), an equivalent equation can be
used~\cite{wakabayashi2}; indeed, being $E=\pm \gamma \sqrt{\kappa_x^2-z^2}$
and thus $(E/\gamma)^2=\kappa_x^2-z^2$, we have that
\begin{eqnarray}
&& e^{-2 z \tilde W}=\frac{\kappa_x-z}{\kappa_x+z}=
\frac{(\kappa_x-z)(\kappa_x+z)}{(\kappa_x+z)^2}=
\frac{\kappa_x^2-z^2}{(\kappa_x+z)^2}=
\frac{(E/\gamma)^2}{(\kappa_x+z)^2}\Rightarrow\\
&& \frac{E}{\gamma}=\pm (\kappa_x+z) e^{-z \tilde W}.\nonumber
\end{eqnarray}
Here we consider real values of $\kappa_x$.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth,angle=0]{functions.eps}
\caption{Graphical solution (in the real domain) of eq.~(\ref{realk})
(the dotted lines are the asymptotes of $f_2 (z)$).}
\label{f11}
\end{figure}\noindent
If we graphically represent (fig.~\ref{f11}) the two functions
$f_1 (z)=e^{-2 z \tilde W}$ and
$f_2 (z)=(\kappa_x-z)/(\kappa_x+z)$, we see
that (apart from $z=0$, which corresponds to identically null $\Phi$'s)
there is an intersection between $f_1$ and $f_2$ for a
real value of $z$ (and thus eq.~(\ref{realk}) has a {\em real solution} $z$)
only if $\kappa_x>0$ and if $f_1 (z)$ is steeper than $f_2 (z)$ at $z=0$,
{\em i.e.} if
\begin{eqnarray}
&& \left|\left[\frac{d}{dz}f_1(z)\right]_{z=0}\right|>
\left|\left[\frac{d}{dz}f_2(z)\right]_{z=0}\right|\Rightarrow\\
&& \left|\left[-2 \tilde W e^{-2 z \tilde W}\right]_{z=0}\right|>
\left|\left[-\frac{1}{\kappa_x+z}-
\frac{\kappa_x-z}{(\kappa_x+z)^2}\right]_{z=0}\right|=\nonumber\\
&& \left|\left[-\frac{\kappa_x+z+\kappa_x-z}{(\kappa_x+z)^2}\right]_{z=0}\right|=
\left|\left[-\frac{2 \kappa_x}{(\kappa_x+z)^2}\right]_{z=0}\right|
\Rightarrow\nonumber\\
&& 2 \tilde W>\frac{2 \kappa_x}{\kappa_x^2}\Rightarrow
\tilde W>\frac{1}{\kappa_x}\Rightarrow
\kappa_x>\frac{1}{\tilde W}.\nonumber
\end{eqnarray}
\vskip-5pt\noindent
If instead $\kappa_x<1/\tilde W$, eq.~(\ref{realk}) does not have real
solutions $z$ (apart from $z=0$).
In the case of real $z$, from eq.~(\ref{realk}) we can find that
\vskip-5pt\noindent
\begin{eqnarray}
&& e^{-2 z \tilde W}=\frac{\kappa_x-z}{\kappa_x+z}\Rightarrow\\
&& \kappa_x e^{-2 z \tilde W}+z e^{-2 z \tilde W}=\kappa_x-z
\Rightarrow
\kappa_x (1-e^{-2 z \tilde W})=z (1+e^{-2 z \tilde W})
\Rightarrow\nonumber\\
&& \kappa_x =z\,\frac{1+e^{-2 z \tilde W}}{1-e^{-2 z \tilde W}}=
z\,\frac{e^{z \tilde W}+e^{-z \tilde W}}{e^{z \tilde W}-e^{-z \tilde W}}=
\frac{z}{\tanh(z \tilde W)}\nonumber
\end{eqnarray}
\vskip-5pt\noindent
($z=0$ does not have to be considered) and thus
\vskip-5pt\noindent
\begin{eqnarray}
\label{esinh}
&& \left(\frac{E}{\gamma}\right)^2=\kappa_x^2-z^2=
\frac{z^2}{\tanh^2(z \tilde W)}-z^2=
z^2 \left(\frac{\cosh^2(z \tilde W)}{\sinh^2(z \tilde W)}-1\right)=\\
&& z^2 \left(\frac{\cosh^2(z \tilde W)-\sinh^2(z \tilde W)}
{\sinh^2(z \tilde W)}\right)=
\frac{z^2}{\sinh^2(z \tilde W)} \Rightarrow
\left|\frac{E}{\gamma}\right|=\left|\frac{z}{\sinh(z \tilde W)}\right|.\nonumber
\end{eqnarray}
\vskip-5pt\noindent
Since (for the properties of the hyperbolic sine function)
$|\sinh(z \tilde W)|>|z \tilde W|=|z|\tilde W$, we see that in this case
\begin{equation}
\left|\frac{E}{\gamma}\right|<\frac{|z|}{|z| \tilde W}=
\frac{1}{\tilde W}.
\end{equation}
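The real-$z$ branch can also be explored numerically. In the following minimal sketch (assuming, purely for illustration, $a_{C-C}=0.142$ nm, $N=45$ zigzag lines and a longitudinal wave vector $\kappa_x=3/\tilde W>1/\tilde W$, none of which are fixed by the derivation), the relation $\kappa_x=z/\tanh(z\tilde W)$ is solved by bisection, and both the original transcendental condition and the bound $|E/\gamma|<1/\tilde W$ are verified.

```python
import math

a_cc = 0.142e-9                    # m, assumed C-C distance (illustrative)
N = 45                             # assumed number of zigzag lines
W_t = (3 * N + 2) * a_cc / 2.0     # effective width W~ between boundary sites

kappa_x = 3.0 / W_t                # longitudinal wave vector, > 1/W~

# f(z) = kappa_x - z/tanh(z W~) decreases from kappa_x - 1/W~ > 0 at z -> 0+
# to a negative value at z = kappa_x, so the root is bracketed in between.
def f(z):
    return kappa_x - z / math.tanh(z * W_t)

lo, hi = 1e-9 / W_t, kappa_x
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        lo = mid
    else:
        hi = mid
z = 0.5 * (lo + hi)

# The root also satisfies the original transcendental condition ...
lhs = math.exp(-2.0 * z * W_t)
rhs = (kappa_x - z) / (kappa_x + z)
assert abs(lhs - rhs) < 1e-9

# ... and the corresponding edge-state energy is flattened below gamma/W~.
E_over_gamma = abs(z / math.sinh(z * W_t))
assert E_over_gamma < 1.0 / W_t
```

As expected from the graphical solution, the root $z$ lies just below $\kappa_x$, so the edge-state energy is very small.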
We can write (exploiting what we have found from the boundary
conditions) that
\begin{eqnarray}
\Phi_A^{\vec K}(y) &=&
\frac{\gamma}{E} \left((\kappa_x-z) A e^{zy}+(\kappa_x+z) B e^{-zy}\right)=\\
&& \frac{\gamma}{E} \left((\kappa_x-z) A e^{zy}-(\kappa_x+z) A e^{-zy}\right)=\nonumber\\
&& \frac{\gamma}{E} A \left(\kappa_x (e^{zy}-e^{-zy})-z (e^{zy}+e^{-zy})\right)=\nonumber\\
&& \frac{\gamma}{E} 2 A \left(\kappa_x \sinh(zy)-z \cosh(zy)\right)=\nonumber\\
&& 2 A \frac{\gamma}{E}
\left(\frac{z}{\tanh(z \tilde W)} \sinh(zy)-z \cosh(zy)\right)=\nonumber\\
&& 2 A \frac{\gamma}{E} z \,
\frac{\cosh(z \tilde W)\sinh(zy)-\sinh(z \tilde W)\cosh(zy)}
{\sinh(z \tilde W)}=\nonumber\\
&& -2 A \frac{\gamma}{E} z \,
\frac{\cosh(z \tilde W)\sinh(-zy)+\sinh(z \tilde W)\cosh(-zy)}
{\sinh(z \tilde W)}=\nonumber\\
&& -2 A \left(\frac{\gamma}{E} \frac{z}{\sinh(z \tilde W)}\right)
\sinh(z(\tilde W-y))=\nonumber\\
&& -2 A \,{\rm sign}\!\left(\frac{E}{\gamma} \frac{z}{\sinh(z \tilde W)}\right)
\sinh(z(\tilde W-y)),\nonumber
\end{eqnarray}
where in the last step we have taken advantage of the fact that, due to
eq.~(\ref{esinh}), the product between $\gamma/E$ and $z/\sinh(z \tilde W)$
can only be equal to $+1$ (if the two quantities have the same sign) or
$-1$ (if they have opposite signs).
Moreover we have that
\begin{equation}
\Phi_B^{\vec K}(y)=A e^{zy}+B e^{-zy}=A e^{zy}-A e^{-zy}=A (e^{zy}-e^{-zy})=
2 A \sinh(zy).
\end{equation}
These are edge states, each one exponentially localized on one edge
of the ribbon.
These edge states correspond to bands flattened towards $E=0$, as we can
see both from the graphical solution of eq.~(\ref{realk}) (where we observe
that we have an intersection between $f_1$ and $f_2$ for a $z$ coordinate
very close to $\kappa_x$ and thus the energy
$E=\pm \gamma \sqrt{\kappa_x^2-z^2}$ has a very small value), and from our
previous analytical conclusion that $|E/\gamma|<1/\tilde W$ in this case. Since
the Dirac point $\vec K$, folded into the Brillouin zone $(-\pi/a,\pi/a)$
of the zigzag nanoribbon (the unit cell of which is of length $a$), corresponds
to $k_x=-4\pi/(3a)+2\pi/a=2\pi/(3a)$, the condition $\kappa_x>1/\tilde W$
(under which we have a real solution and thus the edge states) is equivalent to
$k_x=K_x+\kappa_x>2\pi/(3a)+1/\tilde W$ (note the difference between the total
wave vectors $k$ and the wave vectors $\kappa$ measured from the Dirac points).
Therefore in the region $2\pi/(3a)+1/\tilde W<k_x<\pi/a$ we have two bands
flattened towards $E=0$; this means that the zigzag nanoribbons are always
metallic~\cite{nakada}.
However, further studies~\cite{son,fujita1,fujita2} have shown that actual
zigzag nanoribbons have a non-zero gap deriving from a staggered sublattice
potential due to edge magnetization.
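The folding arithmetic and the extent of the flat-band region can be checked with a short sketch (again with illustrative values of $a_{C-C}$ and $N$, which the argument itself does not fix):

```python
import math

a_cc = 0.142e-9                 # m, assumed C-C distance (illustrative)
a = math.sqrt(3.0) * a_cc       # graphene lattice constant
N = 45                          # assumed number of zigzag lines
W_t = (3 * N + 2) * a_cc / 2.0  # effective ribbon width W~

# K sits at kx = -4*pi/(3a); folding into (-pi/a, pi/a) adds 2*pi/a.
K_folded = -4.0 * math.pi / (3.0 * a) + 2.0 * math.pi / a
assert abs(K_folded - 2.0 * math.pi / (3.0 * a)) < 1e-9 / a

# Edge states (real z) live at kx in (2*pi/(3a) + 1/W~, pi/a): for a wide
# ribbon 1/W~ << pi/(3a), so just under a third of the positive-kx half
# zone carries the flat bands.
frac = (math.pi / a - (K_folded + 1.0 / W_t)) / (math.pi / a)
assert 0.0 < frac < 1.0 / 3.0
print(frac)                     # just below 1/3 for a wide ribbon
```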
Let us now instead consider the {\em imaginary solutions} $z=i \kappa_n$ (with
$\kappa_n$ real) of\break
eq.~(\ref{realk}). In this case the dispersion
relation $E=\pm \gamma \sqrt{\kappa_x^2-z^2}$ becomes $E=$\break
$\pm \gamma \sqrt{\kappa_x^2+\kappa_n^2}$, from which we see more clearly
that $\kappa_x$ and $\kappa_n=-i z$ have the meaning of longitudinal and
transverse components of the wave vector, measured from the Dirac point.
The solutions are given by
\begin{eqnarray}
&& e^{-2 z \tilde W}=\frac{\kappa_x-z}{\kappa_x+z}
\Rightarrow\\
&& e^{-i 2 \kappa_n \tilde W}=
\frac{\kappa_x-i \kappa_n}{\kappa_x+i \kappa_n}=
\frac{\sqrt{\kappa_x^2+\kappa_n^2}\,e^{-i \angle (\kappa_x+i \kappa_n)}}
{\sqrt{\kappa_x^2+\kappa_n^2}\,e^{i \angle (\kappa_x+i \kappa_n)}}=\nonumber\\
&& e^{-i 2 \angle (\kappa_x+i \kappa_n)}=
e^{-i 2 \angle (\kappa_x+i \kappa_n)} e^{i 2 \pi m}
\Rightarrow\nonumber\\
&& \kappa_n \tilde W=\angle (\kappa_x+i \kappa_n)-\pi m \Rightarrow
\tan (\kappa_n \tilde W)=\frac{\kappa_n}{\kappa_x}\Rightarrow
\kappa_x=\frac{\kappa_n}{\tan (\kappa_n \tilde W)}\nonumber
\end{eqnarray}
(with $m$ integer); $\kappa_n=0$ corresponds to identically null $\Phi$'s
and thus does not have to be considered. We have that
\begin{eqnarray}
\label{esin}
\left(\frac{E}{\gamma}\right)^2 &=& \kappa_x^2+\kappa_n^2=
\left(\frac{\kappa_n}{\tan (\kappa_n \tilde W)}\right)^2+\kappa_n^2=
\left(\frac{\cos^2 (\kappa_n \tilde W)}{\sin^2 (\kappa_n \tilde W)}+1\right)
\kappa_n^2=\\
&& \frac{\cos^2 (\kappa_n \tilde W)+\sin^2 (\kappa_n \tilde W)}
{\sin^2 (\kappa_n \tilde W)} \kappa_n^2=
\frac{\kappa_n^2}{\sin^2 (\kappa_n \tilde W)}\Rightarrow
\left|\frac{E}{\gamma}\right|=
\left|\frac{\kappa_n}{\sin(\kappa_n \tilde W)}\right|;\nonumber
\end{eqnarray}
since (for the properties of the sine function)
$|\sin (\kappa_n \tilde W)|<|\kappa_n \tilde W|=|\kappa_n|\tilde W$,
we see that in this case
\begin{equation}
\left|\frac{E}{\gamma}\right|>\frac{|\kappa_n|}{|\kappa_n| \tilde W}=
\frac{1}{\tilde W}.
\end{equation}
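The confined branch lends itself to the same kind of numerical check. In the sketch below (illustrative $a_{C-C}$ and $N$ again, with $\kappa_x<1/\tilde W$ so that only imaginary-$z$ solutions exist), the allowed $\kappa_n$ are located as the sign changes of $\kappa_x\sin(\kappa_n\tilde W)-\kappa_n\cos(\kappa_n\tilde W)$, a pole-free rewriting of $\tan(\kappa_n\tilde W)=\kappa_n/\kappa_x$, and the bound $|E/\gamma|>1/\tilde W$ is verified together with $|E/\gamma|=\sqrt{\kappa_x^2+\kappa_n^2}$.

```python
import math

a_cc = 0.142e-9                  # m, assumed C-C distance (illustrative)
N = 45                           # assumed number of zigzag lines
W_t = (3 * N + 2) * a_cc / 2.0   # effective ribbon width W~

kappa_x = 0.5 / W_t              # below 1/W~: only confined solutions

# Pole-free form of tan(kn W~) = kn/kappa_x.
def g(kn):
    return kappa_x * math.sin(kn * W_t) - kn * math.cos(kn * W_t)

# Locate sign changes on a fine grid, then refine each root by bisection.
roots = []
grid = [i * 0.001 * math.pi / W_t for i in range(1, 10000)]
for lo, hi in zip(grid, grid[1:]):
    if g(lo) * g(hi) < 0.0:
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))

# Each confined mode satisfies |E/gamma| = |kn/sin(kn W~)|, which matches
# sqrt(kappa_x^2 + kn^2) and always exceeds 1/W~.
for kn in roots[:5]:
    e1 = abs(kn / math.sin(kn * W_t))
    e2 = math.sqrt(kappa_x ** 2 + kn ** 2)
    assert abs(e1 - e2) < 1e-6 * e2
    assert e1 > 1.0 / W_t
```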
We can write (exploiting what we have found from the boundary
conditions) that
\begin{eqnarray}
\Phi_A^{\vec K}(y) &=&
\frac{\gamma}{E} \left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}+
(\kappa_x+i\kappa_n) B e^{-i\kappa_n y}\right)=\\
&& \frac{\gamma}{E} \left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}-
(\kappa_x+i\kappa_n) A e^{-i\kappa_n y}\right)=\nonumber\\
&& \frac{\gamma}{E} A \left(\kappa_x (e^{i\kappa_n y}-e^{-i\kappa_n y})-
i\kappa_n (e^{i\kappa_n y}+e^{-i\kappa_n y})\right)=\nonumber\\
&& \frac{\gamma}{E} 2 i A
\left(\kappa_x \sin(\kappa_n y)-\kappa_n \cos(\kappa_n y)\right)=\nonumber\\
&& 2 i A \frac{\gamma}{E}
\left(\frac{\kappa_n}{\tan(\kappa_n \tilde W)} \sin(\kappa_n y)-
\kappa_n \cos(\kappa_n y)\right)=\nonumber\\
&& 2 i A \frac{\gamma}{E} \kappa_n \,
\frac{\cos(\kappa_n \tilde W)\sin(\kappa_n y)-
\sin(\kappa_n \tilde W)\cos(\kappa_n y)}
{\sin(\kappa_n \tilde W)}=\nonumber\\
&& -2 i A \left(\frac{\gamma}{E} \frac{\kappa_n}{\sin(\kappa_n \tilde W)}\right)
\sin(\kappa_n (\tilde W-y))=\nonumber\\
&& -2 i A \,{\rm sign}\!\left(\frac{E}{\gamma}
\frac{\kappa_n}{\sin(\kappa_n \tilde W)}\right)
\sin(\kappa_n (\tilde W-y)),\nonumber
\end{eqnarray}
where in the last step we have taken advantage of the fact that, due to
eq.~(\ref{esin}), the product between $\gamma/E$ and
$\kappa_n/\sin(\kappa_n \tilde W)$
can only be equal to $+1$ (if the two quantities have the same sign) or
$-1$ (if they have opposite signs).
Moreover we have that
\begin{eqnarray}
\Phi_B^{\vec K}(y) &=& A e^{i\kappa_n y}+B e^{-i\kappa_n y}=
A e^{i\kappa_n y}-A e^{-i\kappa_n y}=\\
&& A (e^{i\kappa_n y}-e^{-i\kappa_n y})=
A 2 i \sin(\kappa_n y).\nonumber
\end{eqnarray}
These are clearly confined states extending all over the ribbon.
The calculations {\em around the point} {\boldmath $\vec K'$} are completely
analogous.
The corresponding part of the Dirac equation is
\begin{eqnarray}
\label{systemz1a}
&& \gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} \\
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\
F_B^{\vec K'} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\
F_B^{\vec K'} (\vec r)
\end{array}\right]
\Rightarrow\\
&& \gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} \\
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
\Phi_A^{\vec K'}(y)e^{i \kappa'_x x}\\
\Phi_B^{\vec K'}(y)e^{i \kappa'_x x}
\end{array}\right]=\nonumber\\
&& \gamma
\left[\begin{array}{c}
\kappa'_x \Phi_B^{\vec K'}(y) e^{i \kappa'_x x}+
e^{i \kappa'_x x}\frac{d}{d\,y}\Phi_B^{\vec K'}(y)\\
\kappa'_x \Phi_A^{\vec K'}(y) e^{i \kappa'_x x}-
e^{i \kappa'_x x}\frac{d}{d\,y} \Phi_A^{\vec K'}(y)
\end{array}\right]=\nonumber\\
&& \gamma
\left[\begin{array}{cccc}
0 &
\kappa'_x+\frac{d}{d\,y} \\
\kappa'_x-\frac{d}{d\,y} &
0
\end{array}\right]
\left[\begin{array}{c}
\Phi_A^{\vec K'}(y)\\
\Phi_B^{\vec K'}(y)
\end{array}\right] e^{i \kappa'_x x}=
E \left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\
F_B^{\vec K'} (\vec r)
\end{array}\right]=\nonumber\\
&& E \left[\begin{array}{c}
\Phi_A^{\vec K'}(y)\\
\Phi_B^{\vec K'}(y)
\end{array}\right] e^{i \kappa'_x x}
\Rightarrow\nonumber\\
&& \left[\begin{array}{cccc}
0 &
\kappa'_x+\frac{d}{d\,y} \\
\kappa'_x-\frac{d}{d\,y} &
0
\end{array}\right]
\left[\begin{array}{c}
\Phi_A^{\vec K'}(y)\\
\Phi_B^{\vec K'}(y)
\end{array}\right]=
\frac{E}{\gamma} \left[\begin{array}{c}
\Phi_A^{\vec K'}(y)\\
\Phi_B^{\vec K'}(y)
\end{array}\right],\nonumber
\end{eqnarray}
which can be rewritten as
\begin{equation}
\label{systemz1b}
\left\{ \begin{array}{l}
\displaystyle
\left(\kappa'_x+\frac{d}{d\,y}\right)
\Phi_B^{\vec K'}(y)=\frac{E}{\gamma}\Phi_A^{\vec K'}(y),\\
\displaystyle
\left(\kappa'_x-\frac{d}{d\,y}\right)
\Phi_A^{\vec K'}(y)=\frac{E}{\gamma}\Phi_B^{\vec K'}(y).
\end{array} \right.
\end{equation}
Obtaining $\Phi_B^{\vec K'}(y)$ from the second of (\ref{systemz1b})
and then substituting $\Phi_A^{\vec K'}(y)$ from the first of
(\ref{systemz1b}), we find
\begin{eqnarray}
\label{solaz1}
\quad \Phi_B^{\vec K'}(y) &=& \frac{\gamma}{E}
\left(\kappa'_x-\frac{d}{d\,y}\right)
\Phi_A^{\vec K'}(y)=\left(\frac{\gamma}{E}\right)^2
\left(\kappa'_x-\frac{d}{d\,y}\right)
\left(\kappa'_x+\frac{d}{d\,y}\right)
\Phi_B^{\vec K'}(y)=\\
&& \left(\frac{\gamma}{E}\right)^2
\left({\kappa'_x}^2+\kappa'_x\frac{d}{d\,y}
-\kappa'_x\frac{d}{d\,y}-
\frac{d^2}{d\,y^2}\right)
\Phi_B^{\vec K'}(y)=\nonumber\\
&& \left(\frac{\gamma}{E}\right)^2
\left({\kappa'_x}^2-
\frac{d^2}{d\,y^2}\right)
\Phi_B^{\vec K'}(y)\Rightarrow\nonumber\\
&& \left(-\frac{d^2}{d\,y^2}+{\kappa'_x}^2\right)
\Phi_B^{\vec K'}(y)=\left(\frac{E}{\gamma}\right)^2
\Phi_B^{\vec K'}(y),\nonumber
\end{eqnarray}
the solution of which is
\begin{eqnarray}
\label{solbz1}
&& \Phi_B^{\vec K'}(y)=C e^{z'y}+D e^{-z'y},\\
&& \hbox{with}\quad
z'=\sqrt{{\kappa'_x}^2-\left(\frac{E}{\gamma}\right)^2}
\quad\left(\;\hbox{and thus}\quad
E=\pm \gamma \sqrt{{\kappa'_x}^2-{z'}^2}\;\right).\nonumber
\end{eqnarray}
Substituting $\Phi_B^{\vec K'}(y)$ back into the first of
(\ref{systemz1b}), we obtain that
\begin{eqnarray}
\label{solcz1}
\Phi_A^{\vec K'}(y) &=& \frac{\gamma}{E}
\left(\kappa'_x+\frac{d}{d\,y}\right)
\Phi_B^{\vec K'}(y)=\\
&& \frac{\gamma}{E}\left(\kappa'_x C e^{z'y}+\kappa'_x D e^{-z'y}+
z' C e^{z'y}-z' D e^{-z'y}\right)=\nonumber\\
&& \frac{\gamma}{E}
\left((\kappa'_x+z') C e^{z'y}+(\kappa'_x-z') D e^{-z'y}\right).\nonumber
\end{eqnarray}
Let us now enforce the boundary conditions on $\Phi_B^{\vec K'}(y)$ and
$\Phi_A^{\vec K'}(y)$:
\begin{eqnarray}
\label{realk1}
\Phi_B^{\vec K'}(0) &=& 0 \Rightarrow
C+D=0\quad\Rightarrow\quad D=-C;\\
\Phi_A^{\vec K'}(\tilde W) &=& 0 \Rightarrow
\frac{\gamma}{E}\left((\kappa'_x+z') C e^{z' \tilde W}+
(\kappa'_x-z') D e^{-z' \tilde W}\right)=0 \Rightarrow\nonumber\\
&& (\kappa'_x+z') C e^{z' \tilde W}-
(\kappa'_x-z') C e^{-z' \tilde W}=0 \Rightarrow\nonumber\\
&& (\kappa'_x+z') C e^{z' \tilde W}=
(\kappa'_x-z') C e^{-z' \tilde W} \Rightarrow\nonumber\\
e^{-2 z' \tilde W} &=& \frac{\kappa'_x+z'}{\kappa'_x-z'}=
\frac{(-\kappa'_x)-z'}{(-\kappa'_x)+z'},\nonumber
\end{eqnarray}
which is equal to eq.~(\ref{realk}) if we substitute $\kappa_x$ with
$-\kappa'_x$. Therefore the calculations are completely
analogous to those seen around the point $\vec K$.
We consider again real values of $\kappa'_x$.
We conclude~\cite{supplem} that (apart from $z'=0$,
which corresponds to identically null $\Phi$'s) eq.~(\ref{realk1}) has a
{\em real solution} $z'$ only if $\displaystyle -\kappa'_x>1/\tilde W$,
{\em i.e.} if $\displaystyle \kappa'_x<-1/\tilde W$.
If instead $\kappa'_x>-1/\tilde W$, eq.~(\ref{realk1}) does not have real
solutions $z'$ (apart from $z'=0$).
In the case of real $z'$, from eq.~(\ref{realk1}) we can find that
\cite{supplem}
\begin{equation}
\kappa'_x =-\frac{z'}{\tanh(z' \tilde W)}
\end{equation}
($z'=0$ does not have to be considered) and thus~\cite{supplem}
\begin{equation}
\label{esinh1}
\left(\frac{E}{\gamma}\right)^2={\kappa'_x}^2-{z'}^2=
\frac{{z'}^2}{\sinh^2(z' \tilde W)} \Rightarrow
\left|\frac{E}{\gamma}\right|=\left|\frac{z'}{\sinh(z' \tilde W)}\right|
<\frac{|z'|}{|z'| \tilde W}=\frac{1}{\tilde W}.
\end{equation}
The corresponding $\Phi$ functions are
\cite{supplem}
\begin{eqnarray}
\Phi_A^{\vec K'}(y) &=&
\frac{\gamma}{E}
\left((\kappa'_x+z') C e^{z'y}+(\kappa'_x-z') D e^{-z'y}\right)=\\
&& 2 C \,{\rm sign}\!\left(\frac{E}{\gamma} \frac{z'}{\sinh(z' \tilde W)}\right)
\sinh(z'(\tilde W-y));\nonumber\\
\Phi_B^{\vec K'}(y) &=& C e^{z'y}+D e^{-z'y}=2 C \sinh(z'y).\nonumber
\end{eqnarray}
These are edge states, each one exponentially localized on one edge
of the ribbon.
Also in this case, these edge states correspond to bands flattened towards
$E=0$. Since the Dirac point $\vec K'$, folded into the Brillouin zone
$(-\pi/a,\pi/a)$ of the zigzag nanoribbon, corresponds to
$k_x=4\pi/(3a)-2\pi/a=-2\pi/(3a)$,
the condition $\kappa'_x<-1/\tilde W$ is equivalent to
$k'_x=K'_x+\kappa'_x<-2\pi/(3a)-1/\tilde W$. Therefore also in the region
$-\pi/a<k_x<-2\pi/(3a)-1/\tilde W$ we have two bands flattened towards $E=0$,
which confirms the metallic nature of zigzag nanoribbons.
Let us now instead consider the {\em imaginary solutions} $z'=i \kappa'_n$
(with $\kappa'_n$ real) of eq.~(\ref{realk1}). The dispersion
relation $E=\pm \gamma \sqrt{{\kappa'_x}^2-{z'}^2}$ becomes
$E=\pm \gamma \sqrt{{\kappa'_x}^2+{\kappa'_n}^2}$.
The solutions are given by~\cite{supplem}
\begin{equation}
\kappa'_x=-\frac{\kappa'_n}{\tan (\kappa'_n \tilde W)}
\end{equation}
($\kappa'_n=0$ corresponds to identically null $\Phi$'s
and thus does not have to be considered) and thus~\cite{supplem}
\begin{equation}
\label{esin1}
\left(\frac{E}{\gamma}\right)^2={\kappa'_x}^2+{\kappa'_n}^2=
\frac{{\kappa'_n}^2}{\sin^2 (\kappa'_n \tilde W)}\Rightarrow
\left|\frac{E}{\gamma}\right|=
\left|\frac{\kappa'_n}{\sin(\kappa'_n \tilde W)}\right|
>\frac{|\kappa'_n|}{|\kappa'_n| \tilde W}=\frac{1}{\tilde W}.
\end{equation}
The corresponding $\Phi$ functions are~\cite{supplem}
\begin{eqnarray}
\Phi_A^{\vec K'}(y) &=&
\frac{\gamma}{E} \left((\kappa'_x+i\kappa'_n) C e^{i\kappa'_n y}+
(\kappa'_x-i\kappa'_n) D e^{-i\kappa'_n y}\right)=\\
&& 2 i C \,{\rm sign}\!\left(\frac{E}{\gamma}
\frac{\kappa'_n}{\sin(\kappa'_n \tilde W)}\right)
\sin(\kappa'_n (\tilde W-y));\nonumber\\
\Phi_B^{\vec K'}(y) &=& C e^{i\kappa'_n y}+D e^{-i\kappa'_n y}=
C 2 i \sin(\kappa'_n y).\nonumber
\end{eqnarray}
These are confined states extending all over the ribbon.
Obviously, once the expressions of the functions $\Phi$ have been obtained,
the overall wave function is given by the equations (\ref{wavefunctionbisz}),
(\ref{envelopez}) and (\ref{phiz}).
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth,angle=0]{zigbands1.eps}
\caption{Bands of a zigzag nanoribbon with $N=45$ zigzag lines (a)
and with $N=50$ zigzag lines (b), computed both with a
simple tight-binding model not including edge magnetization effects
(thick dotted lines) and with the $\vec k \cdot \vec p$ method
(thin solid lines). The two dashed lines correspond to the energy values
$\pm\gamma/\tilde W$; the dispersion relations in the region between the
two dashed lines are obtained for real values of $z$, while those outside
this region correspond to purely imaginary values of $z$.}
\label{f12}
\end{figure}\noindent
In fig.~\ref{f12} we show the bands of a zigzag nanoribbon with $N=45$ zigzag
lines and of a zigzag nanoribbon with $N=50$ zigzag lines, which we have
computed both with a simple tight-binding model not including edge
magnetization effects (thick dotted lines) and with the $\vec k \cdot \vec p$
(Dirac equation) method (thin solid lines). For low energy values and for not
too narrow ribbons the results obtained with the two techniques are very
similar.
In both cases, the presence of the two bands flattened towards zero and
corresponding to the edge states can be clearly seen.
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth,angle=0]{nanoribbon-armchair.eps}
\caption{Sketch of an armchair nanoribbon with $N$ dimer lines (the black
atoms are carbon atoms, while the grey atoms are passivation atoms).}
\label{f13}
\end{figure}\noindent
\subsection{Armchair nanoribbons}
In the case of armchair nanoribbons (fig.~\ref{f13}), instead, the graphene
sheet has been cut along the direction of the nearest-neighbor carbon bonds,
and therefore the edges have an armchair shape. In order to
simplify the following calculations, we can choose (see fig.~\ref{f13})
the graphene lattice vectors in the real space $\vec a_1$ and $\vec a_2$
(and consequently those in the reciprocal space $\vec b_1$ and $\vec b_2$)
in this way (we express them in the reference frame
$\Sigma=(\hbox{\boldmath{$\hat x$}},\hbox{\boldmath{$\hat y$}},
\hbox{\boldmath{$\hat z$}})$):
\begin{equation}
\vec a_1 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}\,a \\[5pt]
\noalign{\vskip3pt}
\displaystyle \frac{a}{2} \\[5pt]
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec a_2 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle \frac{\sqrt{3}}{2}\,a \\[5pt]
\noalign{\vskip3pt}
\displaystyle -\frac{a}{2} \\[5pt]
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec b_1 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle \frac{2\pi}{\sqrt{3}a}\\ [5pt]
\noalign{\vskip3pt}
\displaystyle \frac{2\pi}{a}\\[5pt]
\noalign{\vskip3pt}
0
\end{array}\right]
,\quad
\vec b_2 \mathrel{\mathop\equiv_{\Sigma}}
\left[\begin{array}{c}
\displaystyle \frac{2\pi}{\sqrt{3}a}\\[5pt]
\noalign{\vskip3pt}
\displaystyle -\frac{2\pi}{a}\\[5pt]
\noalign{\vskip3pt}
0
\end{array}\right]
\end{equation}
(these vectors, since
$\vec b_1=2\pi (\vec a_2 \times \hbox{\boldmath{$\hat z$}})/
(\vec a_1 \cdot (\vec a_2 \times \hbox{\boldmath{$\hat z$}}))$ and
$\vec b_2=2\pi (\hbox{\boldmath{$\hat z$}} \times \vec a_1)/
(\vec a_1 \cdot (\vec a_2 \times \hbox{\boldmath{$\hat z$}}))$,
fulfill the relation $\vec a_i \cdot \vec b_j=2 \pi \delta_{ij}$).
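The duality relation $\vec a_i \cdot \vec b_j = 2\pi\delta_{ij}$ for these vectors can be verified directly. A minimal sketch (Python, with $a=1$ as an illustrative value):

```python
import math

a = 1.0
s3 = math.sqrt(3.0)
a1 = (s3/2*a,  a/2)                      # in-plane components of a_1
a2 = (s3/2*a, -a/2)                      # in-plane components of a_2
b1 = (2*math.pi/(s3*a),  2*math.pi/a)    # reciprocal vector b_1
b2 = (2*math.pi/(s3*a), -2*math.pi/a)    # reciprocal vector b_2

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

# a_i . b_j = 2*pi*delta_ij
for ai, bj, expected in [(a1, b1, 2*math.pi), (a1, b2, 0.0),
                         (a2, b1, 0.0),       (a2, b2, 2*math.pi)]:
    assert abs(dot(ai, bj) - expected) < 1e-12
```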
Consequently we have that
\begin{eqnarray}
\label{ka}
&& \vec K =\frac{1}{3}(\vec b_2-\vec b_1) \mathrel{\mathop\equiv_{\Sigma}}
\frac{4\pi}{3a}
\left[\begin{array}{c}
0\\[6pt]
\noalign{\vskip3pt}
-1\\[6pt]
\noalign{\vskip3pt}
0
\end{array}\right]=
\left[\begin{array}{c}
0\\[6pt]
\noalign{\vskip3pt}
-K\\[6pt]
\noalign{\vskip3pt}
0
\end{array}\right],\\[6pt]
&& \vec K' =\frac{1}{3}(\vec b_1-\vec b_2) \mathrel{\mathop\equiv_{\Sigma}}
\frac{4\pi}{3a}
\left[\begin{array}{c}
0\\[6pt]
\noalign{\vskip3pt}
1\\[6pt]
\noalign{\vskip3pt}
0
\end{array}\right]=
\left[\begin{array}{c}
0\\[6pt]
\noalign{\vskip3pt}
K\\[6pt]
\noalign{\vskip3pt}
0
\end{array}\right],\nonumber
\end{eqnarray}
with $K=4\pi/(3a)$. For our choice of $\vec a_1$
and $\vec a_2$, the angle $\theta'$ from the vector $\vec a_1+\vec a_2$ (i.e.
from the axis $\hbox{\boldmath{$\hat x$}}'$ used in previous calculations)
to the axis $\hbox{\boldmath{$\hat x$}}$ (taken in the longitudinal direction)
is equal to $0$.
Therefore the total wave function is given by (eq.~\ref{wavefunction})
\begin{equation}
\label{wavefunctionbisa}
\psi (\vec r)=
\sum_{\vec R_A}\psi_A (\vec R_A)\varphi(\vec r -\vec R_A)+
\sum_{\vec R_B}\psi_B (\vec R_B)\varphi(\vec r -\vec R_B),
\end{equation}
with (eq.~(\ref{assumptions2}) with $\theta'=0$)
\begin{equation}
\label{envelopea}
\left\{ \begin{array}{l}
\displaystyle
\psi_A (\vec r)=
e^{i \vec K\cdot \vec r} F_A^{\vec K}(\vec r)-
i\,e^{i \vec K'\cdot \vec r} F_A^{\vec K'}(\vec r),\\[6pt]
\displaystyle
\psi_B (\vec r)=
i\,e^{i \vec K\cdot \vec r} F_B^{\vec K} (\vec r)+
e^{i \vec K'\cdot \vec r} F_B^{\vec K'} (\vec r),
\end{array} \right.
\end{equation}
where (using eq.~(\ref{ka})), if we write
$\displaystyle \vec r \mathrel{\mathop\equiv_{\Sigma}} [x,y,0]^T$ we
have that $\vec K\cdot \vec r=-Ky$ and that $\vec K'\cdot \vec r=Ky$.
In the absence of an external potential the envelope functions
satisfy the usual Dirac equation (eq.~(\ref{absence}))
\begin{eqnarray}
&& \gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0 &
0 \\[6pt]
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} &
0 &
0 &
0 \\[6pt]
0 &
0 &
0 &
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} \\[6pt]
0 &
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=\\[6pt]
&& E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[6pt]
F_B^{\vec K} (\vec r)\\[6pt]
F_A^{\vec K'} (\vec r)\\[6pt]
F_B^{\vec K'} (\vec r)
\end{array}\right].\nonumber
\end{eqnarray}
Due to the translational invariance along the $x$-direction, we can write
the envelope functions as the product of a propagating part along the
longitudinal direction $x$ and a confined part along the transverse
direction $y$. Here we must take the same longitudinal component
$\kappa_x$ for the wave vectors measured from $\vec K$ and from $\vec K'$,
because if we took $\kappa'_x \ne \kappa_x$ the boundary conditions would be
satisfied for every $x$ only by the identically null wave function.
Therefore we can assume that
\begin{equation}
\label{phia}
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\
F_B^{\vec K} (\vec r)
\end{array}\right]=
e^{i \kappa_x x}
\left[\begin{array}{c}
\Phi_A^{\vec K} (y)\\
\Phi_B^{\vec K} (y)
\end{array}\right]
\ \hbox{and that}\
\left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\
F_B^{\vec K'} (\vec r)
\end{array}\right]=
e^{i \kappa_x x}
\left[\begin{array}{c}
\Phi_A^{\vec K'} (y)\\
\Phi_B^{\vec K'} (y)
\end{array}\right].
\end{equation}
We have to enforce that the overall wave function vanishes at the
``boundary lattice sites'' on the lower and upper edges of the ribbon.
Let us define as $W$ the real width of the nanoribbon, {\em i.e.} the distance
between the bottom row and the top row of carbon atoms of the ribbon; if the
ribbon has $N$ dimer lines across its width, we have that $W=(N-1)a/2$.
If we set $y=0$ at the row of ``boundary lattice sites'' on the lower edge of
the ribbon, the row of ``boundary lattice sites'' on the upper edge will be at
$y=\tilde W=W+2 \, a/2=W+a=(N+1)a/2$.
Therefore, for every $x$, we must have $\psi (x,y=0)=\psi (x,y=\tilde W)=0$.
We notice that in an armchair nanoribbon the ``boundary lattice sites''
on the lower and upper edges belong to both the $A$ and the $B$ sublattices.
Therefore, looking at eq.~(\ref{wavefunctionbisa}) and observing that
the atomic orbitals $\varphi$ are strongly localized around the atom on which
they are centered, the boundary condition on the wave function is equivalent
to setting, for every $x$,
$\psi_A (x,y=0)=\psi_B (x,y=0)=\psi_A (x,y=\tilde W)=\psi_B (x,y=\tilde W)=0$.
Using eq.~(\ref{envelopea}) we obtain the following 4 boundary conditions:
\begin{eqnarray}
\label{b1}
&& \psi_A (x,y=0)=0 \quad\forall x \Rightarrow
e^{-iK0} F_A^{\vec K} (x,y=0)-i e^{iK0} F_A^{\vec K'} (x,y=0)=\\
&& F_A^{\vec K} (x,y=0)-i F_A^{\vec K'} (x,y=0)=
e^{i \kappa_x x} \Phi_A^{\vec K}(0)-i e^{i \kappa_x x} \Phi_A^{\vec K'}(0)=
0\quad\forall x \Rightarrow\nonumber\\
&& \Phi_A^{\vec K}(0)-i \Phi_A^{\vec K'}(0)=0;\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{b2}
&& \psi_B (x,y=0)=0 \quad\forall x \Rightarrow
i e^{-iK0} F_B^{\vec K} (x,y=0)+e^{iK0} F_B^{\vec K'} (x,y=0)=\\
&& i F_B^{\vec K} (x,y=0)+F_B^{\vec K'} (x,y=0)=
i e^{i \kappa_x x} \Phi_B^{\vec K}(0)+
e^{i \kappa_x x} \Phi_B^{\vec K'}(0)=0\quad\forall x
\Rightarrow\nonumber\\
&& i \Phi_B^{\vec K}(0)+\Phi_B^{\vec K'}(0)=0;\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{b3}
&& \psi_A (x,y=\tilde W)=0 \quad\forall x \Rightarrow
e^{-iK \tilde W} F_A^{\vec K} (x,y=\tilde W)-
i e^{iK \tilde W} F_A^{\vec K'} (x,y=\tilde W)=\\
&& e^{-iK \tilde W} e^{i \kappa_x x} \Phi_A^{\vec K}(\tilde W)-
i e^{iK \tilde W} e^{i \kappa_x x} \Phi_A^{\vec K'}(\tilde W)=
0\quad\forall x \Rightarrow\nonumber\\
&& e^{-iK \tilde W} \Phi_A^{\vec K}(\tilde W)-
i e^{iK \tilde W} \Phi_A^{\vec K'}(\tilde W)=0;\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{b4}
&& \psi_B (x,y=\tilde W)=0 \quad\forall x \Rightarrow
i e^{-iK \tilde W} F_B^{\vec K} (x,y=\tilde W)+
e^{iK \tilde W} F_B^{\vec K'} (x,y=\tilde W)=\\
&& i e^{-iK \tilde W} e^{i \kappa_x x} \Phi_B^{\vec K}(\tilde W)+
e^{iK \tilde W} e^{i \kappa_x x} \Phi_B^{\vec K'}(\tilde W)=
0\quad\forall x \Rightarrow\nonumber\\
&& i e^{-iK \tilde W} \Phi_B^{\vec K}(\tilde W)+
e^{iK \tilde W} \Phi_B^{\vec K'}(\tilde W)=0.\nonumber
\end{eqnarray}
As we can see, in armchair nanoribbons the boundary conditions couple
the envelope functions relative to the Dirac points $\vec K$ and $\vec K'$.
We can solve the part of the Dirac equation around the point $\vec K$, that is
\begin{equation}
\label{systema}
\gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} \\[3pt]
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K} (\vec r)\\[3pt]
F_B^{\vec K} (\vec r)
\end{array}\right],
\end{equation}
repeating the calculations made for zigzag nanoribbons
(eqs.~(\ref{systemza})-(\ref{solcz})) and obtaining that
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\Phi_A^{\vec K}(y)=
\frac{\gamma}{E}\left((\kappa_x-z) A e^{zy}+(\kappa_x+z) B e^{-zy}\right),\\[5pt]
\displaystyle
\Phi_B^{\vec K}(y)=A e^{zy}+B e^{-zy},
\end{array} \right.
\end{equation}
with $z=\sqrt{\kappa_x^2-\left(\frac{E}{\gamma}\right)^2}$
and thus $E=\pm \gamma \sqrt{\kappa_x^2-z^2}$.
Analogously, we can solve the part of the Dirac equation around the point
$\vec K'$, that is
\begin{equation}
\gamma
\left[\begin{array}{cccc}
0 &
-i\,\frac{\partial}{\partial x}+\frac{\partial}{\partial y} \\[3pt]
-i\,\frac{\partial}{\partial x}-\frac{\partial}{\partial y} &
0
\end{array}\right]
\left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right]=
E \left[\begin{array}{c}
F_A^{\vec K'} (\vec r)\\[3pt]
F_B^{\vec K'} (\vec r)
\end{array}\right],
\end{equation}
repeating the calculations made for zigzag nanoribbons
(eqs.~(\ref{systemz1a})-(\ref{solcz1}), with the difference that $\kappa'_x$
and $z'$ here have to be replaced by $\kappa_x$ and $z$) and obtaining
that
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\Phi_A^{\vec K'}(y)=
\frac{\gamma}{E}\left((\kappa_x+z) C e^{zy}+(\kappa_x-z) D e^{-zy}\right),\\[5pt]
\displaystyle
\Phi_B^{\vec K'}(y)=C e^{zy}+D e^{-zy},
\end{array} \right.
\end{equation}
with (as written before)
$z=\sqrt{{\kappa_x}^2-\left(\frac{E}{\gamma}\right)^2}$
and thus $E=\pm \gamma \sqrt{{\kappa_x}^2-{z}^2}$.
Let us define $z=i \kappa_n$. In this case the dispersion relation becomes
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$; therefore $\kappa_x$ and
$\kappa_n=-i z$ are the longitudinal and transverse components of the
wave vector, measured from the Dirac point.
The functions $\Phi$ become
\begin{equation}
\label{phiaa}
\left\{ \begin{array}{l}
\displaystyle
\Phi_A^{\vec K}(y)=
\frac{\gamma}{E}\left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}+
(\kappa_x+i\kappa_n) B e^{-i\kappa_n y}\right),\\[5pt]
\displaystyle
\Phi_B^{\vec K}(y)=A e^{i\kappa_n y}+B e^{-i\kappa_n y},\\[5pt]
\displaystyle
\Phi_A^{\vec K'}(y)=
\frac{\gamma}{E}\left((\kappa_x+i\kappa_n) C e^{i\kappa_n y}+
(\kappa_x-i\kappa_n) D e^{-i\kappa_n y}\right),\\[5pt]
\displaystyle
\Phi_B^{\vec K'}(y)=C e^{i\kappa_n y}+D e^{-i\kappa_n y}.
\end{array} \right.
\end{equation}
Now we can enforce the 4 boundary conditions (\ref{b1})-(\ref{b4}),
obtaining:
\begin{eqnarray}
\label{bc1}
\qquad && \Phi_A^{\vec K}(0)-i \Phi_A^{\vec K'}(0)=0 \Rightarrow\\[5pt]
&& (\kappa_x-i\kappa_n)A+(\kappa_x+i\kappa_n)B
-i(\kappa_x+i\kappa_n)C-i(\kappa_x-i\kappa_n)D=0 \Rightarrow\nonumber\\[5pt]
&& \kappa_x\,(A+B-iC-iD)+i\kappa_n\,(-A+B-iC+iD)=0;\nonumber\\[5pt]
\label{bc2}
&& i \Phi_B^{\vec K}(0)+\Phi_B^{\vec K'}(0)=0 \Rightarrow
i(A+B)+(C+D)=0 \Rightarrow\\[5pt]
&& A+B-iC-iD=0;\nonumber\\[5pt]
\label{bc3}
&& e^{-iK \tilde W} \Phi_A^{\vec K}(\tilde W)
-i e^{iK \tilde W} \Phi_A^{\vec K'}(\tilde W)=0 \Rightarrow\\[5pt]
&& e^{-iK \tilde W} (\kappa_x-i\kappa_n) A e^{i\kappa_n \tilde W}
+e^{-iK \tilde W} (\kappa_x+i\kappa_n) B e^{-i\kappa_n \tilde W}\nonumber\\[5pt]
&& -i e^{iK \tilde W} (\kappa_x+i\kappa_n) C e^{i\kappa_n \tilde W}
-i e^{iK \tilde W} (\kappa_x-i\kappa_n) D e^{-i\kappa_n \tilde W}=\nonumber\\[5pt]
&& \kappa_x
\left(A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-iC e^{i(\kappa_n+K)\tilde W}-iD e^{-i(\kappa_n-K)\tilde W}\right)\nonumber\\[5pt]
&& +i\kappa_n
\left(-A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-iC e^{i(\kappa_n+K)\tilde W}+iD e^{-i(\kappa_n-K)\tilde W}\right)=0;\nonumber\\[5pt]
\label{bc4}
&& i e^{-iK \tilde W} \Phi_B^{\vec K}(\tilde W)
+e^{iK \tilde W} \Phi_B^{\vec K'}(\tilde W)=0 \Rightarrow\\[5pt]
&& i e^{-iK \tilde W} (A e^{i\kappa_n \tilde W}+B e^{-i\kappa_n \tilde W})
+e^{iK \tilde W} (C e^{i\kappa_n \tilde W}+D e^{-i\kappa_n \tilde W})=0
\Rightarrow\nonumber\\[5pt]
&& iA e^{i(\kappa_n-K)\tilde W}+iB e^{-i(\kappa_n+K)\tilde W}
+C e^{i(\kappa_n+K)\tilde W}+D e^{-i(\kappa_n-K)\tilde W}=0
\Rightarrow\nonumber\\[5pt]
&& A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-iC e^{i(\kappa_n+K)\tilde W}-iD e^{-i(\kappa_n-K)\tilde W}=0.\nonumber
\end{eqnarray}
In the following we examine the different cases in which all of these 4
boundary conditions are satisfied.
\noindent
{\bf Case I}
\noindent
If $\kappa_n=0$ the condition (\ref{bc1}) is equivalent to the condition
(\ref{bc2}), and the condition (\ref{bc3}) is equivalent to the condition
(\ref{bc4}).
But the condition (\ref{bc2}) is satisfied if
\begin{equation}
A+B-iC-iD=0 \Rightarrow
A+B=iC+iD \Rightarrow
\left\{ \begin{array}{l}
A+B=G\\[5pt]
C+D=-iG
\end{array} \right.
\end{equation}
(where we have defined $A+B\equiv G$).
The condition (\ref{bc4}) instead is satisfied if (exploiting the fact that
$\kappa_n=0$)
\begin{eqnarray}
&& A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-iC e^{i(\kappa_n+K)\tilde W}-iD e^{-i(\kappa_n-K)\tilde W}=0
\Rightarrow\\[5pt]
&& A e^{-iK\tilde W}+B e^{-iK\tilde W}
-iC e^{iK\tilde W}-iD e^{iK\tilde W}=0
\Rightarrow\nonumber\\[5pt]
&& (A+B) e^{-iK\tilde W}-i(C+D) e^{iK\tilde W}=0
\Rightarrow
G e^{-iK\tilde W}-G e^{iK\tilde W}=0
\Rightarrow\nonumber\\[5pt]
&& -G (e^{iK\tilde W}-e^{-iK\tilde W})=0
\Rightarrow
-G 2 i \sin(K\tilde W)=0.\nonumber
\end{eqnarray}
Since in this case ($\kappa_n=0$) all the $\Phi$ functions (\ref{phiaa})
would become identically null for $G=0$, we must take $G \ne 0$;
therefore this equation can be satisfied only if $\sin(K\tilde W)=0$.
But, since $K=4\pi/(3a)$ and $\tilde W=(N+1)a/2$, we have that
\begin{equation}
\sin(K\tilde W)=0 \Rightarrow
\sin\left(\frac{4\pi}{3a}(N+1)\frac{a}{2}\right)=0 \Rightarrow
\sin\left(\frac{2\pi}{3}(N+1)\right)=0
\end{equation}
and this is true only if $N+1$ is a multiple of 3, {\em i.e.} if $N+1=3M$ (with
$M$ integer) and thus $N=3M-1$. In this case we have that
\begin{eqnarray}
K\tilde W &=& \frac{2\pi}{3}(N+1)=\frac{2\pi}{3}(3M)=2\pi M \Rightarrow\\[5pt]
K &=& 2M\frac{\pi}{\tilde W}\Rightarrow
2M\frac{\pi}{\tilde W}-K=0(=\kappa_n)\nonumber
\end{eqnarray}
and the nanoribbon is metallic (as we will see). Since $\kappa_n=0$, the
$\Phi$ functions (\ref{phiaa}) are equal to
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\Phi_A^{\vec K}(y)=
\frac{\gamma}{E}(\kappa_x A+\kappa_x B)=\frac{\gamma}{E}\kappa_x (A+B)=
\frac{\gamma}{E}\kappa_x G,\\[8pt]
\displaystyle
\Phi_B^{\vec K}(y)=A+B=G,\\[8pt]
\displaystyle
\Phi_A^{\vec K'}(y)=
\frac{\gamma}{E}(\kappa_x C+\kappa_x D)=
\frac{\gamma}{E} \kappa_x (C+D)=
-\frac{\gamma}{E} \kappa_x i G,\\[8pt]
\displaystyle
\Phi_B^{\vec K'}(y)=C+D=-iG.
\end{array} \right.
\end{equation}
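The condition $\sin(2\pi(N+1)/3)=0$, which selects the metallic family $N=3M-1$, can be checked over a range of ribbon widths. A short sketch (Python):

```python
import math

def sin_KW(N):
    """sin(K * W-tilde) = sin(2*pi*(N+1)/3) for an armchair ribbon with N dimer lines."""
    return math.sin(2*math.pi*(N + 1)/3)

for N in range(3, 60):
    vanishes = abs(sin_KW(N)) < 1e-9
    # sin(K*W-tilde) = 0 exactly when N + 1 is a multiple of 3, i.e. N = 3M - 1
    assert vanishes == ((N + 1) % 3 == 0)
```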
\vskip10pt\noindent
{\bf Case II}
\vskip5pt\noindent
The other possibility is to satisfy the conditions (\ref{bc1})-(\ref{bc2})
in this way:
\begin{equation}
\label{firsttwo}
\left\{ \begin{array}{l}
A+B-iC-iD=0\\[5pt]
-A+B-iC+iD=0
\end{array} \right.
\Rightarrow
\left\{ \begin{array}{l}
2B-2iC=0\\[5pt]
2A-2iD=0
\end{array} \right.
\Rightarrow
\left\{ \begin{array}{l}
C=-iB\\[5pt]
D=-iA
\end{array} \right.
\end{equation}
(where in the first step we have summed and subtracted the two equations of
the system), and to satisfy the conditions (\ref{bc3})-(\ref{bc4}) enforcing
\begin{equation}
\left\{ \begin{array}{l}
A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-iC e^{i(\kappa_n+K)\tilde W}-iD e^{-i(\kappa_n-K)\tilde W}=0,\\[5pt]
-A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-iC e^{i(\kappa_n+K)\tilde W}+iD e^{-i(\kappa_n-K)\tilde W}=0.
\end{array} \right.
\end{equation}
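That $C=-iB$ and $D=-iA$ from eq.~(\ref{firsttwo}) solve the first two boundary conditions for arbitrary $A$ and $B$ can be checked numerically. A minimal sketch (Python, with randomly chosen complex coefficients):

```python
import random

random.seed(0)
A = complex(random.random(), random.random())
B = complex(random.random(), random.random())
C = -1j*B   # from eq. (firsttwo)
D = -1j*A

# the two combined boundary conditions at y = 0
assert abs(A + B - 1j*C - 1j*D) < 1e-12    # A + B - iC - iD = 0
assert abs(-A + B - 1j*C + 1j*D) < 1e-12   # -A + B - iC + iD = 0
```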
Using (\ref{firsttwo}), we can write these equations in the following form:
\begin{equation}
\left\{ \begin{array}{l}
A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-B e^{i(\kappa_n+K)\tilde W}-A e^{-i(\kappa_n-K)\tilde W}=0,\\[5pt]
-A e^{i(\kappa_n-K)\tilde W}+B e^{-i(\kappa_n+K)\tilde W}
-B e^{i(\kappa_n+K)\tilde W}+A e^{-i(\kappa_n-K)\tilde W}=0.
\end{array} \right.
\end{equation}
If now we separate the real and imaginary part of $\kappa_n$
({\em i.e.} we write $\kappa_n$ as $\kappa_n=\kappa_{nr}+i\kappa_{ni}$) we have
that
\begin{eqnarray}
&& \left\{ \begin{array}{l}
A e^{-\kappa_{ni} \tilde W} e^{i(\kappa_{nr}-K)\tilde W}
+B e^{\kappa_{ni} \tilde W} e^{-i(\kappa_{nr}+K)\tilde W}\\[4pt]
-B e^{-\kappa_{ni} \tilde W} e^{i(\kappa_{nr}+K)\tilde W}
-A e^{\kappa_{ni} \tilde W} e^{-i(\kappa_{nr}-K)\tilde W}=0,\\[4pt]
-A e^{-\kappa_{ni} \tilde W} e^{i(\kappa_{nr}-K)\tilde W}
+B e^{\kappa_{ni} \tilde W} e^{-i(\kappa_{nr}+K)\tilde W}\\[4pt]
-B e^{-\kappa_{ni} \tilde W} e^{i(\kappa_{nr}+K)\tilde W}
+A e^{\kappa_{ni} \tilde W} e^{-i(\kappa_{nr}-K)\tilde W}=0,
\end{array} \right.\Rightarrow\\[4pt]
&& \left\{ \begin{array}{l}
\big[A e^{-\kappa_{ni} \tilde W} \cos((\kappa_{nr}-K)\tilde W)
+B e^{\kappa_{ni} \tilde W} \cos((\kappa_{nr}+K)\tilde W)\\[4pt]
-B e^{-\kappa_{ni} \tilde W} \cos((\kappa_{nr}+K)\tilde W)
-A e^{\kappa_{ni} \tilde W} \cos((\kappa_{nr}-K)\tilde W)\big]\\[4pt]
+i\big[A e^{-\kappa_{ni} \tilde W} \sin((\kappa_{nr}-K)\tilde W)
-B e^{\kappa_{ni} \tilde W} \sin((\kappa_{nr}+K)\tilde W)\\[4pt]
-B e^{-\kappa_{ni} \tilde W} \sin((\kappa_{nr}+K)\tilde W)
+A e^{\kappa_{ni} \tilde W} \sin((\kappa_{nr}-K)\tilde W)\big]=0,\\[4pt]
\big[-A e^{-\kappa_{ni} \tilde W} \cos((\kappa_{nr}-K)\tilde W)
+B e^{\kappa_{ni} \tilde W} \cos((\kappa_{nr}+K)\tilde W)\\[4pt]
-B e^{-\kappa_{ni} \tilde W} \cos((\kappa_{nr}+K)\tilde W)
+A e^{\kappa_{ni} \tilde W} \cos((\kappa_{nr}-K)\tilde W)\big]\\[4pt]
+i\big[-A e^{-\kappa_{ni} \tilde W} \sin((\kappa_{nr}-K)\tilde W)
-B e^{\kappa_{ni} \tilde W} \sin((\kappa_{nr}+K)\tilde W)\\[4pt]
-B e^{-\kappa_{ni} \tilde W} \sin((\kappa_{nr}+K)\tilde W)
-A e^{\kappa_{ni} \tilde W} \sin((\kappa_{nr}-K)\tilde W)\big]=0,
\end{array} \right.\Rightarrow\nonumber\\[4pt]
&& \left\{ \begin{array}{l}
-(e^{\kappa_{ni} \tilde W}-e^{-\kappa_{ni} \tilde W})
\big[A \cos((\kappa_{nr}-K)\tilde W)-B \cos((\kappa_{nr}+K)\tilde W)\big]\\[4pt]
+i(e^{\kappa_{ni} \tilde W}+e^{-\kappa_{ni} \tilde W})
\big[A \sin((\kappa_{nr}-K)\tilde W)-B \sin((\kappa_{nr}+K)\tilde W)\big]=0,\\[4pt]
(e^{\kappa_{ni} \tilde W}-e^{-\kappa_{ni} \tilde W})
\big[A \cos((\kappa_{nr}-K)\tilde W)+B \cos((\kappa_{nr}+K)\tilde W)\big]\\[4pt]
-i(e^{\kappa_{ni} \tilde W}+e^{-\kappa_{ni} \tilde W})
\big[A \sin((\kappa_{nr}-K)\tilde W)+B \sin((\kappa_{nr}+K)\tilde W)\big]=0,
\end{array} \right.\Rightarrow\nonumber\\[4pt]
&& \left\{ \begin{array}{l}
-2\sinh(\kappa_{ni} \tilde W)
\big[A \cos((\kappa_{nr}-K)\tilde W)-B \cos((\kappa_{nr}+K)\tilde W)\big]\\[4pt]
+i2\cosh(\kappa_{ni} \tilde W)
\big[A \sin((\kappa_{nr}-K)\tilde W)-B \sin((\kappa_{nr}+K)\tilde W)\big]=0,\\[4pt]
2\sinh(\kappa_{ni} \tilde W)
\big[A \cos((\kappa_{nr}-K)\tilde W)+B \cos((\kappa_{nr}+K)\tilde W)\big]\\[4pt]
-i2\cosh(\kappa_{ni} \tilde W)
\big[A \sin((\kappa_{nr}-K)\tilde W)+B \sin((\kappa_{nr}+K)\tilde W)\big]=0.
\end{array} \right.\nonumber
\end{eqnarray}
If we sum and subtract the two equations, we obtain
\begin{eqnarray}
\label{secondtwo}
&& \left\{ \begin{array}{l}
4\sinh(\kappa_{ni} \tilde W) B \cos((\kappa_{nr}+K)\tilde W)\\[4pt]
-i4\cosh(\kappa_{ni} \tilde W) B \sin((\kappa_{nr}+K)\tilde W)=0,\\[4pt]
-4\sinh(\kappa_{ni} \tilde W) A \cos((\kappa_{nr}-K)\tilde W)\\[4pt]
+i4\cosh(\kappa_{ni} \tilde W) A \sin((\kappa_{nr}-K)\tilde W)=0,
\end{array} \right.\Rightarrow\\[4pt]
&& \left\{ \begin{array}{l}
B \big[\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}+K)\tilde W)\big]=0,\\[4pt]
A \big[\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}-K)\tilde W)\big]=0.
\end{array} \right.\nonumber
\end{eqnarray}
Apart from the case $A=B=0$, which (since also $C=-iB$ and $D=-iA$) gives
identically null $\Phi$ functions, both of these equations are satisfied
in 3 cases, which we will indicate with II-A, II-B and II-C.
\noindent
{\bf Case II-A}
\vskip5pt
\noindent
The eqs.~(\ref{secondtwo}) are satisfied if
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation}
If we separately equate to zero the real and imaginary parts, we find
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)=0,\\[6pt]
\cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation}
Since the hyperbolic cosine is never equal to zero, these become
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\sin((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)=0,\\[6pt]
\sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation}
However, when the sine of an angle is equal to zero, the cosine of
that angle is certainly different from zero; therefore the previous
equations become
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W)=0,\\[6pt]
\sin((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation}
Since the hyperbolic sine is null only when its argument is null, we
conclude that in this case:
\begin{equation}
\left\{ \begin{array}{l}
\kappa_{ni}=0,\\[6pt]
\sin((\kappa_{nr}+K)\tilde W)=0,\\[6pt]
\sin((\kappa_{nr}-K)\tilde W)=0,
\end{array} \right.
\Rightarrow
\left\{ \begin{array}{l}
\kappa_{n}\hbox{ real},\\[6pt]
\sin((\kappa_n+K)\tilde W)=0,\\[6pt]
\sin((\kappa_n-K)\tilde W)=0.
\end{array} \right.
\end{equation}
From the condition on $\sin((\kappa_n+K)\tilde W)$ it follows that
\begin{eqnarray}
\sin((\kappa_n+K)\tilde W)=0 &\Rightarrow&
(\kappa_n+K)\tilde W=n\pi \Rightarrow\\[6pt]
\kappa_n+K=n\frac{\pi}{\tilde W} &\Rightarrow&
\kappa_n=n\frac{\pi}{\tilde W}-K \nonumber
\end{eqnarray}
(with $n$ integer).
Then from the condition on $\sin((\kappa_n-K)\tilde W)$, substituting what
we have just found and then remembering that $K=4\pi/(3a)$
and that $\tilde W=(N+1)a/2$, we obtain that
\begin{eqnarray}
&& \sin((\kappa_n-K)\tilde W)=0 \Rightarrow
\sin\left(\left(n\frac{\pi}{\tilde W}-K-K\right)\tilde W\right)=
\sin\left(n\pi-2K \tilde W\right)=\\
&& \sin\left(n\pi-2\frac{4\pi}{3a}(N+1)\frac{a}{2}\right)=
\sin\left(\pi\left(n-4\,\frac{N+1}{3}\right)\right)=0 \Rightarrow\nonumber\\
&& n-4\,\frac{N+1}{3}\ \hbox{is an integer.}\nonumber
\end{eqnarray}
This is true only if $N+1$ is a multiple of 3, {\em i.e.} if $N+1=3M$ (with $M$
integer) and thus $N=3M-1$; this means that the nanoribbon is metallic
(as we will see).
In this case the $\Phi$ functions (\ref{phiaa}) are equal to
\begin{equation}
\left\{ \begin{array}{r@{\ }l}
\displaystyle
\Phi_A^{\vec K}(y)=
& \displaystyle
\frac{\gamma}{E}\left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}+
(\kappa_x+i\kappa_n) B e^{-i\kappa_n y}\right),\\[6pt]
\displaystyle
\Phi_B^{\vec K}(y)=
& \displaystyle
A e^{i\kappa_n y}+B e^{-i\kappa_n y},\\[6pt]
\displaystyle
\Phi_A^{\vec K'}(y)=
& \displaystyle
\frac{\gamma}{E}\left((\kappa_x+i\kappa_n) C e^{i\kappa_n y}+
(\kappa_x-i\kappa_n) D e^{-i\kappa_n y}\right)=\\[6pt]
& \displaystyle
-i\frac{\gamma}{E}\left((\kappa_x+i\kappa_n) B e^{i\kappa_n y}+
(\kappa_x-i\kappa_n) A e^{-i\kappa_n y}\right),\\[6pt]
\displaystyle
\Phi_B^{\vec K'}(y)=
& \displaystyle
C e^{i\kappa_n y}+D e^{-i\kappa_n y}=
-i\left(B e^{i\kappa_n y}+A e^{-i\kappa_n y}\right),
\end{array} \right.
\end{equation}
that can be written as a superposition of the modes
\begin{equation}
\left\{ \begin{array}{l}
\displaystyle
\!\!\Phi_A^{\vec K}(y)=
\frac{\gamma}{E}(\kappa_x+i\tilde\kappa_n) A e^{-i\tilde\kappa_n y},\\[6pt]
\displaystyle
\!\!\Phi_B^{\vec K}(y)=A e^{-i\tilde\kappa_n y},\\[6pt]
\displaystyle
\!\!\!\Phi_A^{\vec K'}(y)=
-\frac{\gamma}{E}(\kappa_x+i\tilde\kappa_n) iA e^{i\tilde\kappa_n y},\\[6pt]
\displaystyle
\!\!\Phi_B^{\vec K'}(y)=-iA e^{i\tilde\kappa_n y},
\end{array} \right.
\ \hbox{and}\
\left\{ \begin{array}{l}
\displaystyle
\!\!\Phi_A^{\vec K}(y)=
\frac{\gamma}{E}(\kappa_x+i\kappa_n) B e^{-i\kappa_n y},\\[6pt]
\displaystyle
\!\!\Phi_B^{\vec K}(y)=B e^{-i\kappa_n y},\\[6pt]
\displaystyle
\!\!\Phi_A^{\vec K'}(y)=
-\frac{\gamma}{E}(\kappa_x+i\kappa_n) iB e^{i\kappa_n y},\\[6pt]
\displaystyle
\!\!\Phi_B^{\vec K'}(y)=-iB e^{i\kappa_n y},
\end{array} \right.
\end{equation}
with $\kappa_n=(n\pi/\tilde W)-K$ and $\tilde\kappa_n=-\kappa_n$.
We notice that, since in this case $N+1=3M$, we have that
\begin{equation}
K\tilde W=\frac{4\pi}{3a}(N+1)\frac{a}{2}=\frac{2\pi}{3}(N+1)=
\frac{2\pi}{3}(3M)=2\pi M \Rightarrow
K=2M\frac{\pi}{\tilde W}
\end{equation}
and therefore $\tilde\kappa_n$ can be written as
\begin{eqnarray}
\tilde\kappa_n &=& -\kappa_n=-\left(n\frac{\pi}{\tilde W}-K\right)=
\left(-n\frac{\pi}{\tilde W}+2K\right)-K=\\
&& \left(-n\frac{\pi}{\tilde W}+4M\frac{\pi}{\tilde W}\right)-K=
(4M-n)\frac{\pi}{\tilde W}-K=\tilde n\frac{\pi}{\tilde W}-K,\nonumber
\end{eqnarray}
with $\tilde n=4M-n$ integer.
Clearly, if $\kappa_n$ satisfies
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$, also
$\tilde\kappa_n=-\kappa_n$ satisfies
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\tilde\kappa_n}^2}$.
It can be observed that in the particular case in which $\kappa_n=0$
we find again Case I.
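The relabeling $\tilde\kappa_n=\tilde n\,\pi/\tilde W-K$ with $\tilde n=4M-n$ can be verified numerically. A short sketch (Python; the value of $M$ is an arbitrary illustrative choice):

```python
import math

M = 5                      # arbitrary choice in the metallic family N = 3M - 1
N = 3*M - 1
a = 1.0                    # lattice constant (illustrative value)
W = (N + 1)*a/2            # effective width W-tilde
K = 4*math.pi/(3*a)

assert abs(K - 2*M*math.pi/W) < 1e-12   # K = 2M*pi/W-tilde for N = 3M - 1

for n in range(-10, 11):
    kappa_n = n*math.pi/W - K
    n_tilde = 4*M - n
    kappa_n_tilde = n_tilde*math.pi/W - K
    assert abs(kappa_n_tilde + kappa_n) < 1e-12   # kappa-tilde_n = -kappa_n
```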
\vskip5pt\noindent
{\bf Case II-B}
\noindent
Equations~(\ref{secondtwo}) are also satisfied if
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
A=0.
\end{array} \right.
\end{equation}
If we separately equate to zero the real and imaginary parts of the first
equation, we find
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
\cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
A=0.
\end{array} \right.
\end{equation}
Since the hyperbolic cosine is never equal to zero, these become
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
\sin((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
A=0.
\end{array} \right.
\end{equation}
However, when the sine of an angle is equal to zero, the cosine of that
angle is certainly different from zero; therefore the previous equations become
\begin{equation}
\left\{ \begin{array}{l}
\sinh(\kappa_{ni} \tilde W)=0,\\[5pt]
\sin((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
A=0,
\end{array} \right.
\end{equation}
Since the hyperbolic sine is null only when its argument is null, we
conclude that in this case:
\begin{equation}
\left\{ \begin{array}{l}
\kappa_{ni}=0,\\[5pt]
\sin((\kappa_{nr}+K)\tilde W)=0,\\[5pt]
A=0,
\end{array} \right.
\Rightarrow
\left\{ \begin{array}{l}
\kappa_{n}\hbox{ real},\\[5pt]
\sin((\kappa_n+K)\tilde W)=0,\\[5pt]
A=0.
\end{array} \right.
\end{equation}
Since $A=0$, also $D=-iA=0$ (while $C=-iB$).
The condition on $\sin((\kappa_n+K)\tilde W)$ instead implies
\begin{eqnarray}
\sin((\kappa_n+K)\tilde W)=0 &\Rightarrow&
(\kappa_n+K)\tilde W=n\pi \Rightarrow\\
\kappa_n+K=n\frac{\pi}{\tilde W} &\Rightarrow&
\kappa_n=n\frac{\pi}{\tilde W}-K.\nonumber
\end{eqnarray}
In this case the $\Phi$ functions (\ref{phiaa}) are equal to
\begin{equation}
\left\{ \begin{array}{r@{\ }l}
\displaystyle
\Phi_A^{\vec K}(y)=
& \displaystyle
\frac{\gamma}{E}\left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}+
(\kappa_x+i\kappa_n) B e^{-i\kappa_n y}\right)=\\[8pt]
& \displaystyle
\frac{\gamma}{E}(\kappa_x+i\kappa_n) B e^{-i\kappa_n y},\\[8pt]
\displaystyle
\Phi_B^{\vec K}(y)=
& \displaystyle
A e^{i\kappa_n y}+B e^{-i\kappa_n y}=
B e^{-i\kappa_n y},\\[8pt]
\displaystyle
\Phi_A^{\vec K'}(y)=
& \displaystyle
\frac{\gamma}{E}\left((\kappa_x+i\kappa_n) C e^{i\kappa_n y}+
(\kappa_x-i\kappa_n) D e^{-i\kappa_n y}\right)=\\[8pt]
& \displaystyle
-\frac{\gamma}{E}(\kappa_x+i\kappa_n) iB e^{i\kappa_n y},\\[8pt]
\displaystyle
\Phi_B^{\vec K'}(y)=
& \displaystyle
C e^{i\kappa_n y}+D e^{-i\kappa_n y}=
-iB e^{i\kappa_n y}.
\end{array} \right.
\end{equation}
\noindent
{\bf Case II-C}
\noindent
Finally, eqs.~(\ref{secondtwo}) are also satisfied if
\begin{equation}
\left\{ \begin{array}{l}
B=0,\\[5pt]
\sinh(\kappa_{ni} \tilde W) \cos((\kappa_{nr}-K)\tilde W)
-i \cosh(\kappa_{ni} \tilde W) \sin((\kappa_{nr}-K)\tilde W)=0.
\end{array} \right.
\end{equation}
With calculations analogous to Case II-B, we conclude~\cite{supplem}
that in this case:
\begin{equation}
\left\{ \begin{array}{l}
B=0,\\[5pt]
\kappa_{n}\hbox{ real},\\[5pt]
\sin((\kappa_n-K)\tilde W)=0.
\end{array} \right.
\end{equation}
Since $B=0$, also $C=-iB=0$ (while $D=-iA$).
The condition on $\sin((\kappa_n-K)\tilde W)$ instead implies
\begin{eqnarray}
\sin((\kappa_n-K)\tilde W)=0 &\Rightarrow&
(\kappa_n-K)\tilde W=n\pi \Rightarrow\\
\kappa_n-K=n\frac{\pi}{\tilde W} &\Rightarrow&
\kappa_n=n\frac{\pi}{\tilde W}+K.\nonumber
\end{eqnarray}
In this case the $\Phi$ functions (\ref{phiaa}) are equal to
\begin{equation}
\left\{ \begin{array}{r@{\ }l}
\displaystyle
\Phi_A^{\vec K}(y)=
& \displaystyle
\frac{\gamma}{E}\left((\kappa_x-i\kappa_n) A e^{i\kappa_n y}+
(\kappa_x+i\kappa_n) B e^{-i\kappa_n y}\right)=\\[8pt]
& \displaystyle
\frac{\gamma}{E}(\kappa_x-i\kappa_n) A e^{i\kappa_n y}=
\frac{\gamma}{E}(\kappa_x+i\tilde\kappa_n) A e^{-i\tilde\kappa_n y},\\[8pt]
\displaystyle
\Phi_B^{\vec K}(y)=
& \displaystyle
A e^{i\kappa_n y}+B e^{-i\kappa_n y}=
A e^{i\kappa_n y}=A e^{-i\tilde\kappa_n y},\\[8pt]
\displaystyle
\Phi_A^{\vec K'}(y)=
& \displaystyle
\frac{\gamma}{E}\left((\kappa_x+i\kappa_n) C e^{i\kappa_n y}+
(\kappa_x-i\kappa_n) D e^{-i\kappa_n y}\right)=\\[8pt]
& \displaystyle
-\frac{\gamma}{E}(\kappa_x-i\kappa_n) iA e^{-i\kappa_n y}=
-\frac{\gamma}{E}(\kappa_x+i\tilde\kappa_n) iA e^{i\tilde\kappa_n y},\\[8pt]
\displaystyle
\Phi_B^{\vec K'}(y)=
& \displaystyle
C e^{i\kappa_n y}+D e^{-i\kappa_n y}=
-iA e^{-i\kappa_n y}=-iA e^{i\tilde\kappa_n y},
\end{array} \right.
\end{equation}
with
\begin{equation}
\tilde\kappa_n=-\kappa_n=-\left(n\frac{\pi}{\tilde W}+K\right)=
-n\frac{\pi}{\tilde W}-K=\tilde n\frac{\pi}{\tilde W}-K
\end{equation}
(where $\tilde n=-n$ is an integer).
Clearly, if $\kappa_n$ satisfies
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$, then
$\tilde\kappa_n=-\kappa_n$ also satisfies
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\tilde\kappa_n}^2}$.
In conclusion, in all cases we have that
\begin{equation}
\label{concluphia}
\left\{ \begin{array}{l}
\displaystyle
\Phi_A^{\vec K}(y)=
\frac{\gamma}{E}(\kappa_x+i\kappa_n) A e^{-i\kappa_n y},\\[8pt]
\displaystyle
\Phi_B^{\vec K}(y)=A e^{-i\kappa_n y},\\[8pt]
\displaystyle
\Phi_A^{\vec K'}(y)=
-\frac{\gamma}{E}(\kappa_x+i\kappa_n) iA e^{i\kappa_n y},\\[8pt]
\displaystyle
\Phi_B^{\vec K'}(y)=-iA e^{i\kappa_n y},
\end{array} \right.
\end{equation}
with $A$ being a proper normalization constant,
$\kappa_n=(n\pi/\tilde W)-K$ and
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$.
Consequently, for eq.~(\ref{envelopea}) we have that
\begin{eqnarray}
&& \psi_A (\vec r)=
e^{i \vec K\cdot \vec r} F_A^{\vec K}(\vec r)-
i\,e^{i \vec K'\cdot \vec r} F_A^{\vec K'}(\vec r)=\\[3pt]
&& e^{-iKy} \Phi_A^{\vec K}(y) e^{i\kappa_x x}-
i\,e^{iKy} \Phi_A^{\vec K'}(y) e^{i\kappa_x x}=\nonumber\\[3pt]
&& \left(e^{-iKy} \Phi_A^{\vec K}(y)-i\,e^{iKy} \Phi_A^{\vec K'}(y)\right)
e^{i\kappa_x x}=\nonumber\\[3pt]
&& \frac{\gamma}{E}
\left(e^{-iKy} (\kappa_x+i\kappa_n) A e^{-i\kappa_n y}+
i e^{iKy} (\kappa_x+i\kappa_n) iA e^{i\kappa_n y}\right)
e^{i\kappa_x x}=\nonumber\\[3pt]
&& -\frac{\gamma}{E}
(\kappa_x+i\kappa_n) A \left(e^{i(\kappa_n+K) y}-e^{-i(\kappa_n+K) y}\right)
e^{i\kappa_x x}=\nonumber\\[3pt]
&& -\frac{\gamma}{E}
(\kappa_x+i\kappa_n) A 2 i \sin\left((\kappa_n+K) y\right)e^{i\kappa_x x}\nonumber
\end{eqnarray}
and that
\begin{eqnarray}
&& \psi_B (\vec r)=
i\,e^{i \vec K\cdot \vec r} F_B^{\vec K}(\vec r)+
e^{i \vec K'\cdot \vec r} F_B^{\vec K'} (\vec r)=\\[3pt]
&& i\,e^{-iKy} \Phi_B^{\vec K}(y) e^{i\kappa_x x}+
e^{iKy} \Phi_B^{\vec K'}(y) e^{i\kappa_x x}=\nonumber\\[3pt]
&& \left(i\,e^{-iKy} \Phi_B^{\vec K}(y)+
e^{iKy} \Phi_B^{\vec K'}(y)\right) e^{i\kappa_x x}=\nonumber\\[3pt]
&& \left(i\,e^{-iKy} A e^{-i\kappa_n y}-
e^{iKy} iA e^{i\kappa_n y}\right) e^{i\kappa_x x}=\nonumber\\[3pt]
&& -iA \left(e^{i(\kappa_n+K) y}-e^{-i(\kappa_n+K) y}\right) e^{i\kappa_x x}=\nonumber\\[3pt]
&& -iA 2i \sin\left((\kappa_n+K) y\right) e^{i\kappa_x x}=\nonumber\\[3pt]
&& 2A \sin\left((\kappa_n+K) y\right) e^{i\kappa_x x}.\nonumber
\end{eqnarray}
We observe that in large ribbons the lowest-energy modes will have
values of $\kappa_n$ much smaller than $K$ and thus their wave functions
will be characterized by a transverse wave vector approximately equal to
$K$ and by a transverse wavelength approximately equal to
$2\pi/K=2\pi\,(3a/(4\pi))=3a/2$, {\em i.e.} of the order of the lattice constant.
No edge state exists in armchair nanoribbons.
Using the relations $K=4\pi/(3a)$ and $\tilde W=(N+1)a/2$, we have that
\begin{equation}
\label{kn}
\kappa_n=n\frac{\pi}{\tilde W}-K=\frac{n 2 \pi}{(N+1)a}-\frac{4\pi}{3a}=
\frac{2\pi(3n-2(N+1))}{3(N+1)a}.
\end{equation}
Since $E_n=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$, we have a double
band degeneracy if, for any integer $n$, another integer $n'$ exists such
that $\kappa_{n'}=-\kappa_n$ and thus $E_{n'}=E_n$. This happens if
\begin{eqnarray}
3n'-2(N+1) &=& -(3n-2(N+1))\Rightarrow
3n'=-3n+4(N+1)\Rightarrow\\
n' &=& -n+\frac{4(N+1)}{3},\nonumber
\end{eqnarray}
with $n$ and $n'$ integer, which means that $N+1$ has to be a multiple
of 3, {\em i.e.} $N+1=3M$ with $M$ integer, or equivalently $N=3M-1$
(so that $n'=-n+4M$).
We also observe that among the allowed $\kappa_n$'s (given by eq.~(\ref{kn}))
we have $\kappa_n=0$ if an integer $n$ exists, such that
\begin{equation}
3n-2(N+1)=0\Rightarrow
n=\frac{2(N+1)}{3},
\end{equation}
which again means that $N+1$ has to be a multiple of 3, {\em i.e.} $N+1=3M$
with $M$ integer, or equivalently $N=3M-1$ (so that $n=2M$).
Therefore an armchair nanoribbon has a double band degeneracy and has
$\kappa_n=0$ among the allowed values of $\kappa_n$ only if it has a
number of dimer lines $N=3M-1$ (with $M$ an integer). In this case
for $\kappa_n=0$ we have $E=\pm \gamma |\kappa_x|$ which vanishes
for $\kappa_x\to0$ and thus the nanoribbon is metallic. For
$N \ne 3M-1$, instead, the armchair nanoribbon is not metallic and has
non-degenerate bands.
This conclusion is consistent with the fact that the dispersion relations of
an armchair nanoribbon can be obtained from those of graphene enforcing the
Dirichlet boundary conditions at $y=0$ and $y=\tilde W$; this means that there
has to be an integer number of transverse half-wavelengths $\lambda_y/2$
inside $\tilde W$; thus it must happen that
\begin{equation}
\tilde W=n \frac{\lambda_y}{2} \Rightarrow
k_y=\frac{2\pi}{\lambda_y}=n\frac{\pi}{\tilde W}
\end{equation}
(where $k_y$ is the transverse component of the total wave vector, measured
from the origin of the reciprocal space). Therefore the bands of the ribbon
can be obtained by cross-sectioning those of graphene along the parallel lines
$k_y=n\pi/\tilde W$, and then folding them into the Brillouin zone
$(-\pi/(\sqrt{3}a),\pi/(\sqrt{3}a))$ of the armchair nanoribbon (the unit cell
of which has a length $3a_{C-C}=\sqrt{3}a$). There are bands of the nanoribbon with
a zero gap, and thus the nanoribbon is metallic, only if some of the lines with
$k_y=n\pi/\tilde W$ pass through a Dirac point of graphene (where the
graphene dispersion relations have a zero gap). But, since
\begin{equation}
\tilde W=(N+1)\frac{a}{2} \Rightarrow
a=\frac{2 \tilde W}{N+1} \Rightarrow
K=\frac{4\pi}{3a}=\frac{4\pi}{3}\frac{N+1}{2 \tilde W}=
2\,\frac{N+1}{3}\frac{\pi}{\tilde W},
\end{equation}
this is possible only if $N+1$ is a multiple of 3, {\em i.e.} $N+1=3M$ with
$M$ integer, or equivalently $N=3M-1$.
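As a numerical illustration (our sketch, not part of the original analysis; the helper names `kappa_n` and `is_metallic` are ours), the metallicity criterion can be checked directly from eq.~(\ref{kn}):

```python
from fractions import Fraction

def kappa_n(n, N):
    # Allowed transverse wave vectors of eq. (kn), expressed in units of
    # 2*pi/(3*a):  kappa_n = 2*pi*(3n - 2(N+1)) / (3(N+1)a)
    return Fraction(3 * n - 2 * (N + 1), N + 1)

def is_metallic(N):
    # kappa_n = 0 for some integer n  <=>  3n = 2(N+1)  <=>  N+1 = 3M
    return any(kappa_n(n, N) == 0 for n in range(3 * (N + 1)))

# N = 98: N+1 = 99 = 3*33, metallic (kappa_n = 0 at n = 2M = 66)
# N = 99: N+1 = 100, not a multiple of 3, semiconducting
```

This reproduces the classification of the two ribbons compared later in fig.~\ref{f14}.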
A more exact tight-binding analysis (taking into account the reduction
of the carbon-carbon bond lengths parallel to the dimer lines at the edges
with respect to the bond lengths in the core of the ribbon) leads to the
appearance of a small gap also in this subset of armchair nanoribbons,
which are therefore more correctly considered
quasi-metallic~\cite{son,fujita1,fujita2}.
In fig.~\ref{f14} we show the bands of an armchair nanoribbon with
$N=98$ dimer lines (metallic) and of an armchair nanoribbon with
$N=99$ dimer lines (semiconducting), both computed with a
tight-binding method not including the reduction of the bond lengths at the
edges (thick dotted lines) and with the $\vec k \cdot \vec p$ (Dirac equation)
method (thin solid lines). As can be seen, for low energies and for not
too narrow ribbons the two methods nearly coincide.
\begin{figure}
\centering
\includegraphics[width=\textwidth,angle=0]{armbands1.eps}
\caption{Bands of an armchair nanoribbon with $N=98$ dimer lines (a)
and with $N=99$ dimer lines (b), computed both with a tight-binding
method not including the reduction of the bond lengths at the
edges (thick dotted lines) and with the $\vec k \cdot \vec p$ method
(thin solid lines).}
\label{f14}
\end{figure}\noindent
All previous considerations are valid both for real values of
$\kappa_x$ (propagating modes), and for purely imaginary values of
$\kappa_x$ (evanescent modes).
As an application of the relations (\ref{jx}) and (\ref{jy}) to
the case of an armchair nanoribbon in the absence of an external potential,
we can observe, using eqs.~(\ref{concluphia}) (with $\kappa_n$ real and
$\kappa_x$ real or purely imaginary), that
\begin{eqnarray}
J_x &=& v_F\,\left({F_A^{\vec K}}^* F_B^{\vec K}+
{F_B^{\vec K}}^* F_A^{\vec K}+
{F_A^{\vec K'}}^* F_B^{\vec K'}+
{F_B^{\vec K'}}^* F_A^{\vec K'}\right)=\\[5pt]
&& v_F\,\frac{\gamma}{E}\,\Big(
(\kappa_x^*-i\kappa_n) A^* e^{i\kappa_n y} e^{-i\kappa_x^* x}
A e^{-i\kappa_n y} e^{i\kappa_x x}\nonumber\\[5pt]
&& +A^* e^{i\kappa_n y} e^{-i\kappa_x^* x}
(\kappa_x+i\kappa_n) A e^{-i\kappa_n y} e^{i\kappa_x x}\nonumber\\[5pt]
&& +(\kappa_x^*-i\kappa_n) i A^* e^{-i\kappa_n y} e^{-i\kappa_x^* x}
(-i) A e^{i\kappa_n y} e^{i\kappa_x x}\nonumber\\[5pt]
&& +iA^* e^{-i\kappa_n y} e^{-i\kappa_x^* x}
(\kappa_x+i\kappa_n) (-i)A e^{i\kappa_n y} e^{i\kappa_x x}\Big)=\nonumber\\[5pt]
&& v_F\,\frac{\gamma}{E} |A|^2 e^{i(\kappa_x-\kappa_x^*) x}
(\kappa_x^*-i\kappa_n+\kappa_x+i\kappa_n+
\kappa_x^*-i\kappa_n+\kappa_x+i\kappa_n)=\nonumber\\[5pt]
&& 2 v_F\,\frac{\gamma}{E} |A|^2 e^{i(\kappa_x-\kappa_x^*) x}
(\kappa_x+\kappa_x^*),\nonumber
\end{eqnarray}
which if $\kappa_x$ is real (and thus $\kappa_x^*=\kappa_x$) is equal to
(remembering that $v_F=\gamma/\hbar$)
\begin{equation}
\label{jxa}
J_x=4\,v_F\,\frac{\gamma}{E}\,|A|^2 \kappa_x=
4|A|^2\,\frac{\gamma^2}{\hbar E}\,\kappa_x,
\end{equation}
while if $\kappa_x$ is purely imaginary (and thus $\kappa_x^*=-\kappa_x$)
it vanishes.
Note that (using (\ref{p}) and (\ref{concluphia})) if $\kappa_x$ is real
the probability density is equal to
\begin{eqnarray}
P &=& |F_A^{\vec K}(\vec r)|^2+|F_A^{\vec K'}(\vec r)|^2+
|F_B^{\vec K}(\vec r)|^2+|F_B^{\vec K'}(\vec r)|^2=\\
&& \left(\frac{\gamma}{E}\right)^2 |\kappa_x+i \kappa_n|^2 |A|^2+|A|^2+
\left(\frac{\gamma}{E}\right)^2 |\kappa_x+i \kappa_n|^2 |A|^2+|A|^2=\nonumber\\
&& 2 |A|^2
\left(1+\left(\frac{\gamma}{E}\right)^2 |\kappa_x+i \kappa_n|^2\right)=
2 |A|^2 \left(1+\frac{\gamma^2 (\kappa_x^2+\kappa_n^2)}{E^2}\right)=\nonumber\\
&& 2 |A|^2 \left(1+\frac{E^2}{E^2}\right)=2 |A|^2 2=4 |A|^2.\nonumber
\end{eqnarray}
Moreover, since in this case the energy dispersion relations are
$E=\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}$,
the mean velocity of the electrons is
\begin{eqnarray}
v_x &=& \frac{1}{\hbar}\frac{\partial E}{\partial k_x}=
\frac{1}{\hbar} \left(\pm \gamma \frac{1}{2}
\frac{1}{\sqrt{{\kappa_x}^2+{\kappa_n}^2}} 2 \kappa_x \right)=\\
&& \pm \frac{\gamma}{\hbar} \frac{\kappa_x}{\sqrt{{\kappa_x}^2+{\kappa_n}^2}}=
\frac{\gamma^2}{\hbar}
\frac{\kappa_x}{\pm \gamma \sqrt{{\kappa_x}^2+{\kappa_n}^2}}=
\frac{\gamma^2}{\hbar E} \kappa_x.\nonumber
\end{eqnarray}
Therefore if $\kappa_x$ is real we have that $J_x=P v_x$, as expected.
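The identity $J_x=P\,v_x$ for real $\kappa_x$ can also be verified symbolically; the following is a minimal sketch (ours, using sympy, with $A$ taken real and the positive-energy branch chosen for definiteness):

```python
import sympy as sp

kx, kn, gamma, hbar, A = sp.symbols('kappa_x kappa_n gamma hbar A', positive=True)
E = gamma * sp.sqrt(kx**2 + kn**2)      # dispersion relation, positive branch
vF = gamma / hbar                       # Fermi velocity

Jx = 4 * vF * (gamma / E) * A**2 * kx   # longitudinal current, eq. (jxa)
P = 4 * A**2                            # probability density found above
vx = sp.diff(E, kx) / hbar              # mean velocity (1/hbar) dE/dk_x

assert sp.simplify(Jx - P * vx) == 0    # J_x = P v_x
```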
As for the transverse part of the probability current density, we have that
\begin{eqnarray}
\qquad J_y &=& -i\,v_F\,\left({F_A^{\vec K}}^* F_B^{\vec K}-
{F_B^{\vec K}}^* F_A^{\vec K}-
{F_A^{\vec K'}}^* F_B^{\vec K'}+
{F_B^{\vec K'}}^* F_A^{\vec K'}\right)=\\
&& -i\,v_F\,\frac{\gamma}{E}\,\Big(
(\kappa_x^*-i\kappa_n) A^* e^{i\kappa_n y} e^{-i\kappa_x^* x}
A e^{-i\kappa_n y} e^{i\kappa_x x}\nonumber\\
&& -A^* e^{i\kappa_n y} e^{-i\kappa_x^* x}
(\kappa_x+i\kappa_n) A e^{-i\kappa_n y} e^{i\kappa_x x}\nonumber\\
&& -(\kappa_x^*-i\kappa_n) iA^* e^{-i\kappa_n y} e^{-i\kappa_x^* x}
(-i) A e^{i\kappa_n y} e^{i\kappa_x x}\nonumber\\
&& +iA^* e^{-i\kappa_n y} e^{-i\kappa_x^* x}
(\kappa_x+i\kappa_n) (-i)A e^{i\kappa_n y} e^{i\kappa_x x}\Big)=\nonumber\\
&& -i\,v_F\,\frac{\gamma}{E}\,|A|^2\,e^{i(\kappa_x-\kappa_x^*) x}
(\kappa_x^*-i\kappa_n-\kappa_x-i\kappa_n
-\kappa_x^*+i\kappa_n+\kappa_x+i\kappa_n)=0,\nonumber
\end{eqnarray}
as expected in a transversally confined structure.
\section{Conclusion}
The $\vec k \cdot \vec p$ method and the related envelope function method are
widely used to study the physical properties of materials within a continuum
approach, without having to resort to an atomistic analysis, which requires
(in the case of large structures) a prohibitive computational effort. These
methods have been developed in many and sometimes quite different ways by
several authors and have been successfully applied to a multitude of different
problems. This explains the great variety and inhomogeneity of the related
literature. In this review, we have briefly described the basics of these
methodologies, dwelling upon the treatments that we consider most
useful for an easy comprehension. For a detailed explanation of the different
approaches, the interested reader can resort to the many papers and
books on the topic, some of which are listed in the references.
In particular, we have focused on the application of the $\vec k \cdot \vec p$
method to graphene and graphene-related materials, where it results in a
description of the electronic properties in terms of the Dirac equation.
We have compared the different formulations adopted in the literature and
we have shown how this continuum approach makes it possible to quickly obtain
the most relevant electrical properties of graphene, carbon nanotubes and
graphene nanoribbons.
\acknowledgments
We would like to thank Prof.~\textsc{T.~Ando}, Prof.~\textsc{P.~Lugli} and
Dr.~\textsc{G.~Scarpa} for useful discussions and suggestions.
We also gratefully acknowledge support from the EU\break
FP7 IST Project GRAND
(contract number 215752) via the IUNET consortium.
\addtocontents{toc}{\SkipTocEntry}
\section{\bf Introduction}
The theory of strong interaction physics is well understood
within perturbative Quantum Chromodynamics (PQCD) \cite{MUTA}.
PQCD describes the dynamics of quarks and gluons at very high energies.
Laboratory experiments involving
hadrons either in the initial or in the final state can be understood
by treating these quarks and gluons as asymptotic
states and by safely using perturbative techniques, thanks to
asymptotic freedom. In other words,
the hadronic cross sections can be well expressed in terms of partonic
cross sections. The Parton Model (PM) \cite{ALT} has been a successful
model for tying together these perturbatively
calculable parton-level cross sections and the experimentally measured
cross sections. This involves the introduction of certain probability
distributions which cannot be calculated
in PQCD but are just inputs of the model.
Though they are not calculable, they are found to be universal in the
sense that they are process independent, and their evolution with the scale
is well known. In other words, if these probability distributions are measured
in an experiment at a scale, say $Q^2$ which is above the
$\Lambda_{QCD}$ where
the QCD perturbation theory is reliable, then these distributions can
be used as inputs to predict the rates or cross sections of different
experiments which involve these distributions.
In this sense they are something to do with the hadrons
participating in the process and are process independent.
Also, using the well known Altarelli-Parisi evolution equation satisfied
by these distributions, one can find out how these distributions change as
the scale changes and thereby use them appropriately for the experiments done
at various energies.
There are two types of such probability distributions one comes across
in hadron physics. The well understood among them is the parton probability
distribution. It is generally denoted by $f_{a(h) /H(s)}(x,Q^2)$ and the
meaning of it is the probability of finding a parton of type $a$ with
polarisation $h$ inside the hadron $H$ of polarisation $s$ with momentum
fraction $x$ of the hadron and the scale $Q^2$. These distributions
are usually measured in Deep Inelastic Scattering (DIS) experiments.
The universality of these distributions and the predicted evolution
in terms of the scale $Q^2$ have been verified experimentally \cite{ALT}. The other
distribution is nothing but the fragmentation function.
The fragmentation functions are the mirror image of the parton probability
distributions. These functions are generally denoted by
$D_{a(h)}^{H(s)}(x,Q^2)$. They measure
the probability that a parton of type $a$ with polarisation $h$ fragments
into a hadron of type $H$ with polarisation $s$ and carries away
$x$ fraction of the parton's momentum at a scale $Q^2$ \cite{BF}.
These functions are
usually measured in $e^-$ $e^+$ $ \rightarrow H~X$ experiments.
The universality and the $Q^2$ evolution of unpolarised functions
(which does not contain any information about the spin of the hadrons
produced) are well understood.
In recent years, there have been several interesting works to understand
these fragmentation functions.
These include the measurement of the fragmentation functions for
charged and neutral pions and kaons $\cite {MAT}$.
The QCD-inspired Parton Model analysis, supplemented with
Altarelli-Parisi evolution equations for these functions,
has been shown to explain the pion and kaon production rates at leading and
next-to-leading order \cite{BKK}. Another field of current
interest is the inclusive $\Lambda$ production at $e^-$ $e^+$ scattering
experiments $\cite {LUN}$. This is important as
$\Lambda$s are produced copiously and are easy to detect. On the theoretical side,
the study of polarisation effects in the production cross sections
is more interesting. As we know, at low energies the unpolarised
$e^-$ $e^+$ scattering will not produce longitudinally
polarised $\Lambda~$s (we do not attempt to review the
transversely polarised $\Lambda~$s produced
in unpolarised scattering experiments) but
when the center of mass energy crosses the $Z$ threshold, then
due to parity violating effects, the produced $\Lambda~$s can
naturally be polarised. Recall that these are produced due to
the fragmentation of polarised partons produced in the scattering.
Unfortunately, it is not clear how these partons fragment into
these hadrons. There exist only model calculations which can
tell us how much of the parton spin is transferred to the $\Lambda$.
In the naive quark model, the only contribution to polarised
$\Lambda$ is due to strange quark and the contribution
coming from other partons is identically zero. As we know
this naive picture breaks down due to complicated
QCD effects of both perturbative and non-perturbative origins.
Along this line of thought there has been an interesting work
by Burkardt and Jaffe $\cite {BUR}$ who have charted
out an experimental programme
to measure polarised $u$ and $d$ quark fragmentation functions
in addition to $s$ quark fragmentation function. The non-zero
value for $u$ and $d$ quark
fragmentation functions will invalidate the quark model picture.
More recently, references \cite{RAV1,RAV2} discuss the
importance of gluons in polarised $\Lambda$ production.
The analysis in the reference $\cite{RAV1}$ is purely
based on the AP evolution equations satisfied by
the polarised fragmentation functions of quarks and gluons.
It has been shown that the gluons play a significant role
when the scale at which the experiment is done is very high.
In the massive gluon scheme, it has been demonstrated \cite {RAV2}
that the gluons contribute to polarised lambda production.
Since the fragmentation functions are defined in this scheme,
the gluonic contribution to polarised $\Lambda$ is
scheme dependent. The situation is very similar to
the gluonic contribution to polarised structure function
measured in DIS. This paper extends the previous analysis \cite {RAV2}
to include the $Z$ boson exchange in order to completely extract
various parton fragmentation functions.
The paper is organised as follows. In section 2 we discuss the
importance of polarised gluons in $\Lambda$ production
using the AP evolution equations. We then systematically compute
the QCD corrections to the asymmetries defined below,
which are useful to extract the various partonic contributions to
$\Lambda$ production. The results are presented in section 3.
We finally conclude in section 4.
The appendix contains the relevant details of the
results presented in section 3.
\section{\bf The $Q^2$ Evolution of the Polarised Fragmentation Functions}
In the parton model, the production cross section for $\Lambda$
with polarisation $s$ in $e^-$ $e^+$ scattering is related to
that of a parton and the probability
that the parton fragments into $\Lambda$. In other words,
\begin{equation}
{d \sigma^{\Lambda (s)} (s') \over dx d \Omega}(x,Q^2) = \sum_{a,\lambda} \int_x^1 {dy \over y} {d \hat \sigma^{ a(\lambda)} (s') \over dy d \Omega} D_{a(\lambda)}^{\Lambda(s)}
(x/y,Q^2)
\end{equation}
The left hand side is
the hadronic cross section for the production
of polarised $\Lambda$ within a solid angle $\Omega$ with respect to the
beam direction in $e^-$ $e^+$ scattering. Here $s$, $s'$ and $\lambda$
are the helicities of $\Lambda$, electron and parton respectively.
The Bjorken like variables $x=2 p_{\Lambda}.q/Q^2$ and $y=2 p_a.q/Q^2$ are
scaling variables in the hadron and parton levels respectively
with $q^2=Q^2$,
$p_{\Lambda}$,$p_a$ and $q$ being the momenta of produced hadron,
parton and the intermediate vector boson($\gamma$ or $Z$) respectively.
The production cross section for a parton with helicity
$\lambda$ (right hand side of the equation)
is completely calculable in PQCD. On the other hand, the
probability that a parton of helicity $\lambda$ fragments into
$\Lambda$ of helicity $s$, $D_{a(\lambda)}^{\Lambda(s)}(x/y,Q^2)$,
is not calculable due to non-perturbative
mechanism involved in the fragmentation region. Hence, these
fragmentation functions are just inputs of the model
which can be extracted from the experiment. Though they are
not calculable within the realm of PQCD, they are universal in the
sense that they are process independent. More interestingly
their evolution in terms of $Q^2$ and $x$ are completely
governed by the well known Altarelli-Parisi evolution equations.
In the polarised experiments, the hadronic cross section is directly
proportional to polarised parton fragmentation functions defined as
\begin{equation}
\Delta D_a^\Lambda(x,Q^2)= D_{a(\uparrow)}^{\Lambda(\uparrow)}(x,Q^2)-
D_{a(\uparrow)}^{\Lambda(\downarrow)}(x,Q^2)
\nonumber
\end{equation}
Let us first analyse the $Q^2$ evolution of these fragmentation
functions using Altarelli-Parisi evolution equations.
The evolution equations are well known and are given by
\begin{equation}
{d \over dt} \Delta D_{q_i}^\Lambda(x,t) = {\alpha_s(t) \over 2 \pi}
\int_x^1 {dy \over y} \left[ \Delta D_{q_i}^\Lambda(y,t) \Delta P_{qq}(x/y)
+ \Delta D_g^\Lambda(y,t) \Delta P_{gq}(x/y)\right]
\label{apeqnq}
\end{equation}
\begin{equation}
{d \over dt} \Delta D_g^\Lambda(x,t)\! =\! {\alpha_s(t) \over 2 \pi}
\!\int_x^1\!\! {dy \over y} \left[ \sum_{j=1}^{2f}\!\Delta D_{q_j}^\Lambda(y,t)
\Delta P_{qg}(x/y)\!\! +\!\! \Delta D_g^\Lambda(y,t) \Delta P_{gg}(x/y)\right]
\label{apeqn}
\end{equation}
where $t=\log(Q^2/\Lambda^2)$ and $\alpha_s(t)$ is the strong coupling
constant. Here, $\alpha_s(t) \Delta P_{ab}(y) dt$
is the probability density of finding
a parton of type $a$ at the scale $t+dt$ with momentum fraction
$y$ inside the parton of type $b$ at a scale $t$.
The splitting functions are given by
\begin{eqnarray}
\Delta P_{qq}(z)&=&C_2(R)\left({1+z^2 \over (1-z)_+}+ {3 \over 2} \delta(1-z)
\right)
\nonumber\\
\Delta P_{gq}(z)&=&C_2(R) \left( {1- (1-z)^2 \over z}\right)
\nonumber\\
\Delta P_{qg}(z)&=&{1\over 2} (z^2-(1-z)^2)
\nonumber \\
\Delta P_{gg}(z)&=&C_2(G)\left( (1+z^4) \left({1\over z} + {1\over (1-z)_+}
\right) \right. \nonumber \\
&&\left. -{(1-z)^3 \over z} + \left( {11 \over 6} - {2 \over 3}
{T(R) \over C_2(G)}\right) \delta(1-z) \right)
\nonumber
\end{eqnarray}
Here, $C_2(R)= (N^2-1)/2N$, $C_2(G)=N$ and $T(R)=f/2$ with $N=3$ for
$SU(3)$ and $f$ being the number of flavours $\cite {MUTA}$. In
the above equations, usual $+$ prescription has been used
to regulate $z \rightarrow 1$ singularity.
Notice that the above equations are similar to the AP equations
satisfied by the polarised parton distribution functions but for
the interchange in $\Delta p_{qg}$ and $\Delta p_{gq}$.
The reason for this is as follows:
The emission of a quark(gluon) from a quark
(gluon) only affects the probability of quark (gluon) fragmenting
into hadron. Hence the splitting functions
$\Delta P_{qq}$ and $\Delta P_{gg}$ are unaffected.
On the other hand, the emission of a quark from a gluon
changes the probability of the gluon fragmenting into hadron.
Similarly, the emission of a gluon from a quark affects the probability
of quark fragmenting into hadron. That is why,
the splitting functions $\Delta P_{qg}$ and $\Delta P_{gq}$ are interchanged.
As these equations can easily be solved in
the Mellin space, we define
\begin{eqnarray}
\Delta D_a^\Lambda(n,t)&=&\int_0^1 x^{n-1} \Delta D_a^\Lambda(x,t) dx
\nonumber \\
\Delta P_{ab}(n)&=&\int_0^1 x^{n-1} \Delta P_{ab}(x) dx
\end{eqnarray}
The complete solution for the $n$th moment is not illuminating.
Recall that the first moments of the polarised parton distributions
are interesting as they are related to the spin content of the
polarised hadron and the measurement of them will tell us
how the hadron spin is shared among the partons.
In the same spirit, we here look at the first moment of polarised
parton fragmentation functions.
The measurement of it for various hadrons will tell us how the
parton helicity is distributed among the produced hadrons.
But it is experimentally a hard task.
From eqn.(\ref {apeqn}), we find that
the first moment of the polarised gluon fragmentation function to
order $\alpha_s(t)$ satisfies a simple first order differential
equation, that is
\begin{equation}
{ d \over dt} \Delta D_g^\Lambda(t) = \alpha_s(t) \beta_0 \Delta D_g^\Lambda(t)
\label{apg}
\end{equation}
where $\beta_0=(11 C_2(G)- 4 T(R))/12 \pi$.
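Eq.~(\ref{apg}) relies on the first moment of $\Delta P_{gg}$ being equal to $2\pi\beta_0$; the following is a quick symbolic check (ours, not part of the original derivation), using the splitting function given above:

```python
import sympy as sp

z, f = sp.symbols('z f', positive=True)
CA, TR = 3, f / 2                        # C_2(G) = N = 3, T(R) = f/2

# The 1/(1-z)_+ piece is integrated with the plus prescription:
# int_0^1 g(z)/(1-z)_+ dz = int_0^1 (g(z) - g(1))/(1 - z) dz
plus = sp.integrate(((1 + z**4) - 2) / (1 - z), (z, 0, 1))
rest = sp.integrate(((1 + z**4) - (1 - z)**3) / z, (z, 0, 1))
delta = sp.Rational(11, 6) - sp.Rational(2, 3) * TR / CA

first_moment = CA * (plus + rest + delta)   # first moment of Delta P_gg
two_pi_beta0 = (11 * CA - 4 * TR) / 6       # 2*pi*beta_0

assert sp.simplify(first_moment - two_pi_beta0) == 0
```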
The solution to the above equation can be found very easily
using renormalisation group(RG) equation for the QCD coupling constant,
\begin{equation}
{d \over dt}\alpha_s(t)= - \beta_0 \alpha_s^2(t)
\label{alrg}
\end{equation}
From eqns.(\ref {apg}, \ref {alrg}),
we obtain an interesting behaviour of first moment of gluon
fragmentation function: the product of the first moment of
polarised gluon fragmentation function times the strong coupling constant
is scale independent to order $\alpha_s^2(t)$,
\begin{equation}
{d \over dt} (\alpha_s(t) \Delta D_g^\Lambda(t))= {\cal O}(\alpha_s^3(t))
\label{aldg}
\end{equation}
In other words, to order $\alpha_s^2(t)$, $\Delta D_g^\Lambda$ increases
with the scale $t$, i.e.
\begin{equation}
\Delta D_g^\Lambda(t) = K \log \left({Q^2 \over \Lambda^2}\right)
\end{equation}
where $K$ is some constant. It is worth recalling that the
counter part of such a relation for polarised gluon
distribution function exists and has opened up a better understanding of
the spin structure of the proton $\cite {ANS}$. That is,
\begin{equation}
{d \over dt} (\alpha_s(t) \Delta g(t))= {\cal O}(\alpha_s^2(t))
\end{equation}
where $\Delta g(t)$ is the first moment of polarised gluon
distribution function. From the above equation
it is clear that polarised gluonic contribution to
proton spin could be important at very high energies. But this equation
does not say anything about the absolute value of gluonic contribution.
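The scaling behaviour of the gluon fragmentation function can be verified in a few lines; the sketch below (ours, using the leading-order running coupling $\alpha_s(t)=1/(\beta_0 t)$ that solves eq.~(\ref{alrg})) checks that $\Delta D_g^\Lambda(t)=Kt$ solves eq.~(\ref{apg}) and that the product $\alpha_s(t)\Delta D_g^\Lambda(t)$ is scale independent:

```python
import sympy as sp

t, beta0, K = sp.symbols('t beta_0 K', positive=True)

alpha_s = 1 / (beta0 * t)   # leading-order solution of the RG equation (alrg)
Dg = K * t                  # candidate solution Delta D_g = K log(Q^2/Lambda^2)

# Dg solves eq. (apg): d/dt Dg = alpha_s(t) * beta0 * Dg
assert sp.simplify(sp.diff(Dg, t) - alpha_s * beta0 * Dg) == 0
# and the product alpha_s * Dg is scale independent
assert sp.diff(alpha_s * Dg, t) == 0
```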
Now let us consider the first moment of
the polarised quark fragmentation function into polarised hadron.
From eqn.(\ref {apeqnq}), it turns out
\begin{equation}
{ d \over dt} \Delta D_q^\Lambda(t)={1 \over \pi}
\alpha_s(t) \Delta D_g^\Lambda(t)
\label{aldq}
\end{equation}
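The coefficient $\alpha_s(t)/\pi$ in eq.~(\ref{aldq}) follows from the first moments of the splitting functions: those of $\Delta P_{qq}$ and $\Delta P_{qg}$ vanish, while that of $\Delta P_{gq}$ equals $2$. A brief symbolic check (our sketch, not from the paper):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
CF = sp.Rational(4, 3)                   # C_2(R) = (N^2-1)/2N for N = 3

# Plus prescription: int (1+z^2)/(1-z)_+ dz = int ((1+z^2) - 2)/(1-z) dz
mom_qq = CF * (sp.integrate(((1 + z**2) - 2) / (1 - z), (z, 0, 1))
               + sp.Rational(3, 2))      # delta-function term
mom_gq = CF * sp.integrate((1 - (1 - z)**2) / z, (z, 0, 1))
mom_qg = sp.integrate((z**2 - (1 - z)**2) / 2, (z, 0, 1))

assert mom_qq == 0                       # no quark self-contribution
assert mom_qg == 0
assert mom_gq == 2                       # (alpha_s/2pi) * 2 = alpha_s/pi
```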
From eqns. (\ref {aldg}) and (\ref {aldq}), we find that
$\Delta D_{q_i}^\Lambda(t)$ grows as $t$.
This is to be compared with the first moment of the polarised quarks
in polarised hadron which is scale independent.
Hence the relations(eqns.(\ref {aldg},\ref {aldq}))
suggest that at very high
energies the polarised gluon fragmentation into polarised
hadrons(say $\Lambda$) is as significant as polarised quark fragmentation.
This point is very important when one is interested in
the analysis of integrated asymmetries coming from various partons.
Also the first moment of the fragmentation
functions will tell us given a parton of definite polarisation,
how much of its polarisation is
transferred into the produced hadron.
This is to be compared with the interpretation of the
first moment of the polarised parton
distributions, which measures how much of the hadron's (say, the proton's)
helicity is shared by the parton. If we sum over all these contributions,
it will turn out to be the helicity of the hadron.
Similarly in the fragmentation sector if we sum
over all the hadron fragmentation functions coming from a specific
polarised parton with their appropriate polarisations then
it will coincide with the polarisation of the parton.
\section{Asymmetries}
The analysis in the last chapter shows that the gluonic contribution to
the polarised $\Lambda$ is as important as the quark contribution.
Hence the QCD corrections to the processes which involve
the extraction of these fragmentation functions are important.
Recently, Burkardt and Jaffe \cite{BUR} have discussed a method of
extracting the various polarised quark fragmentation functions into polarised
$\Lambda$ by measuring some specific asymmetries in both unpolarised
and polarised $e^-$ $e^+$ scattering experiments.
As a preliminary effort, in references \cite{RAV1,RAV2}
various QCD corrections have been computed for the cross
section asymmetry when there is no $Z$ exchange
(purely electromagnetic ($em$)). The factorisation of IR singularities
has also been verified to order $\alpha_s(Q^2)$ \cite{RAV1}.
In the following, we compute the QCD corrections to those asymmetries
(discussed by Burkardt and Jaffe)
which involve both $\gamma$ and $Z$ vector bosons as intermediate particles.
\subsection{\bf The Unpolarised $e^-~e^+~$ Scattering}
First we compute the following asymmetry $\cite {BUR}$.
\begin{equation}
{d \sigma^{\Lambda(\downarrow)} \over dx d \Omega}=\sum_a \int_x^1 {dy \over y } \left[{d \hat \sigma^{a(\downarrow)} \over dy d \Omega}\right]
\Delta D_a^{\Lambda} (x/y,Q^2)
\label{DSC}
\end{equation}
In the above asymmetry the spins of the initial leptons are averaged.
This asymmetry is identically zero when there is
only $\gamma$ exchange. At very high
energies, the $Z$ exchange is also possible. This will make
the asymmetry non-zero thanks to parity violation. In the above
equation $\Omega$ is the solid angle within which the produced
parton fragments into $\Lambda$. The arrows in the parenthesis of
$\Lambda$ and the partons denote their helicities with respect to
the beam direction. The scale $Q^2$ is the invariant mass of the photon
or $Z$ produced (i.e., $q^2=Q^2$) at the $e^+e^-$ vertex. The sum is taken over
all the partons such as quarks, anti-quarks and gluons fragmenting into
$\Lambda$. Recall that the kinematic scaling variables $x$ and $y$ are
defined as $x=2 p_\Lambda.q/Q^2$ and $y=2 p_a.q/Q^2$ respectively.
The parton level asymmetry given in the eqn.(\ref {DSC}) is factorisable as
\begin{equation}
{d \hat \sigma^{a(\downarrow)} \over dy d \Omega} = {1 \over 4 Q} \sum_{I=Z,\gamma Z} {\cal L}^{(I)}_{\mu \nu}(Q^2)
{\cal D}_{(I)}(Q^2) \sum_{j=1,3,4} {\cal T}_j^{\mu \nu} H^{(I)a}_j(y,Q^2)
\label{GZ}
\end{equation}
where ${\cal D}_{(I)}(Q^2)$ are propagators given by
\begin{equation} \begin{array}{rl}
{\cal D}_{(Z)}(Q^2)={\displaystyle 1 \over \displaystyle (Q^2 - M_Z^2)^2} \quad & \quad
{\cal D}_{(\gamma Z)}(Q^2)={\displaystyle 1 \over\displaystyle
Q^2 (Q^2 - M_Z^2)}
\end{array}
\end{equation}
The tensors ${\cal T}_j^{\mu \nu}$ are constructed by examining the
symmetry properties of the amplitudes for the direct $Z$ and $\gamma Z$
interference contributions. These two amplitudes are individually
gauge invariant in the massless limit, as can be checked explicitly. Hence
these tensors are parametrised as
\begin{equation} \begin{array}{rl}
{\cal T}_1^{\mu \nu} = {\displaystyle i \over\displaystyle p_a.q}
\epsilon^{\mu \nu \lambda \sigma} q_\lambda
p_{a\sigma} \quad& \quad
{\cal T}_3^{\mu \nu} = -g^{\mu \nu} + {\displaystyle q^\mu\displaystyle q^\nu
\over \displaystyle Q^2}
\nonumber
\end{array}
\end{equation}
\begin{equation}
{\cal T}_4^{\mu \nu} ={\displaystyle 1 \over p_a \cdot q }
\left(p_a^\mu - q^\mu {
\displaystyle p_a.q \over \displaystyle Q^2}\right)
\left(p_a^\nu - q^\nu { \displaystyle p_a.q \over \displaystyle Q^2}\right)
\label{tengz}
\end{equation}
The tensors proportional to $q^\mu$ in ${\cal T}_3$ and ${\cal T}_4$
are immaterial
as the leptonic tensors are individually gauge invariant.
The leptonic tensors ${\cal L}_{(I)}^{\mu \nu}$ can be easily worked out
and are found to be
\begin{eqnarray}
{\cal L}_{(Z)}^{\mu \nu}&=& { \pi \alpha \over Sin^2\theta_W Cos^2\theta_W}
\left [(v_e^2+a_e^2) l^{\mu \nu} - 2 i v_e a_e \tilde l^{\mu \nu}
\right] \nonumber \\
{\cal L}_{(\gamma Z)}^{\mu \nu}&=& { 4 \pi \alpha \over Sin\theta_W
Cos\theta_W}
\left [v_e l^{\mu \nu} - i a_e \tilde l^{\mu \nu}
\right]
\label{lepgz}
\end{eqnarray}
where
\begin{equation} \begin{array}{rl}
{ l}^{\mu \nu} = q_1^\mu q_2^\nu + q_1^\nu q_2^\mu - g^{\mu \nu} q_1.q_2
\quad &
\tilde { l}^{\mu \nu} = \epsilon^{\mu \nu \alpha \beta} q_{1 \alpha}
q_{2 \beta}
\end{array}
\nonumber
\end{equation}
with $q_1$ and $q_2$ being the momenta of incoming positron and electron
respectively. Here $\alpha$ is the fine structure constant,
$v_e$ and $a_e$ are the vector and axial vector couplings
in the $e^+ e^- Z$ vertex and $\theta_W$ is the Weinberg angle\cite{PDB}.
It is clear from the above tensors that the terms proportional
to $q^\mu$ or $q^\nu$ in the tensors ${\cal T}_3$ and ${\cal T}_4$ when
contracted with the leptonic tensors give zero contribution.
Substituting the tensors (eqn.(\ref {tengz})) and the leptonic
tensors (eqn.(\ref {lepgz})) in the eqn.(\ref {GZ}), we obtain
\begin{eqnarray}
{d \hat \sigma^{a(\downarrow)} \over dy\, d\Omega}\!\! \!&=&\!\!\!{ 1 \over 2 Q^2} { \pi \alpha \over Sin\theta_W Cos\theta_W}
{1 \over (Q^2 - M_Z^2)}
\left [ Cos\theta {\cal H}_1^a
+ {\cal H}_3^a + {y \over 4} (1- Cos^2 \theta)
{\cal H}_4^a \right]
\end{eqnarray}
where the general form of ${\cal H}_i^a$ for $i=3,4$ is given by
\begin{equation}
{\cal H}_i^a= {Q^3 \over Q^2 -M_Z^2} {1 \over 2 Sin\theta_W Cos\theta_W}
(v_e^2+a_e^2) H_i^{(Z)a} + 2 Q v_e H_i^{(\gamma Z)a}
\end{equation}
and that of ${\cal H}_1^a$ is given by
\begin{equation}
{\cal H}_1^a={Q^3 \over Q^2 - M_Z^2 }{ 1 \over Sin\theta_W Cos\theta_W}
v_e a_e H_1^{(Z)a} + 2 Q a_e H_1^{(\gamma Z)a}
\end{equation}
where $v_q$ and $a_q$ are the vector
and axial vector couplings in the $q \bar q Z$ vertex \cite {PDB}.
The superscript in $H$ denotes
the origin of these contributions, viz., direct $Z$ exchange or the interference
between the $\gamma$ and $Z$ contributions.
The contributions to $H_i^{(I)a}$ come from processes at lowest order
($\alpha_s^{(0)}$) in $\alpha_s$, with no gluon emission (see fig.1), as well as
from processes involving single gluon emission (see fig.2)
and virtual contributions (see fig.3)
at first order in $\alpha_s$. The evaluation
of the asymmetries at lowest order is very simple, as it involves only
a two body phase space. When a gluon accompanies the quark and anti-quark
pair, the evaluation is cumbersome due to the three body
phase space integral. We give below a simple looking formula
to compute the three body phase space after performing most of
the integrals using the delta functions.
The $H_i^{(I)a}~$s are related to the matrix elements as follows:
\begin{equation}
H_i^{(I)a}={\displaystyle Q \over \displaystyle 32 (2 \pi)^3} \int dx_1
{\cal P}^{\mu \nu}_i {\displaystyle 1 \over\displaystyle 4 \pi} |M_I^a|^2_{\mu \nu}
(\downarrow -\uparrow)
\end{equation}
where the projectors are given by
\begin{eqnarray}
{\cal P}_1^{\mu \nu}&=& i \epsilon^{\mu \nu \lambda \sigma}{\displaystyle
p_{a\lambda} q_\sigma \over \displaystyle 2 p_a.q}\nonumber\\
{\cal P}_3^{\mu \nu}&=&-{1 \over 2} \left( g^{\mu \nu}+ 4{\displaystyle
p_a^\mu p_a^\nu \over Q^2 y^2} \right)\nonumber \\
{\cal P}_4^{\mu \nu}&=&{1 \over y} \left( g^{\mu \nu}+12{\displaystyle
p_a^\mu p_a^\nu \over Q^2 y^2} \right)
\end{eqnarray}
The terms $H_i^{(I)a}$ can be computed from fig.2 and are found to
be of the form
\begin{equation} \begin{array}{rl}
H_i^{(Z)a}= 3 {\displaystyle \alpha Q \over\displaystyle 64 \pi Sin^2\theta_W Cos^2\theta_W} C_i^{(Z)a};
&
H_i^{(\gamma Z)a}= -3 {\displaystyle \alpha e_q Q \over \displaystyle 16 \pi Sin\theta_W Cos\theta_W}
C_i^{(\gamma Z)a}
\label{HI}
\end{array}
\end{equation}
where
\begin{equation} \begin{array}{rl}
C_1^{(Z)a} = {1 \over 2} (v_q^2+a_q^2) {\cal C}_1^{a} \quad
& \quad C_1^{(\gamma Z)a} = v_q ~{\cal C}_1^{a} \\
\quad \quad C_i^{(Z)a} = \eta_q v_q a_q ~{\cal C}_i^{a} \quad \quad
& \quad C_i^{(\gamma Z)a} = \eta_q a_q ~{\cal C}_i^{a} \\
\end{array}
\label{CI}
\end{equation}
with $i=3,4$ and $\eta_q=1$ for quarks and $-1$ for anti-quarks.
The factor $3$ appearing
in the eqn.(\ref {HI}) is due to the number of colours. The functions
${\cal C}_i^{a}$ are computed in the appendix.
As we have already mentioned, at lowest order the functions ${\cal C}_i^a$
get contributions from the $Z$ or the $\gamma Z$ decaying
into a polarised quark and antiquark pair. This asymmetry is just
proportional to $\delta(1-y)$, apart from charge and other group
factors. To order $\alpha_s$, the ${\cal C}_i^a~$s get contributions from
two types of processes: 1. polarised quark or antiquark production
with real gluon emission, together with virtual gluon corrections to the quark and
anti-quark (self energy and vertex corrections); 2. polarised
gluon emission from an unpolarised quark and anti-quark pair.
These processes suffer from
infrared (IR) divergences when the masses of the quarks and gluons
are taken to be zero. In order to regulate these divergences we give the
gluons a small mass $m_g$ and finally take the limit $m_g \to 0$.
We compute these cross sections in the limit of $p_a.q$ and $Q^2$
tending to infinity with their ratio fixed. This is analogous to
the DIS limit, but the scaling variable here is the inverse of the Bjorken
scaling variable.
\begin{eqnarray}
{\cal C}_3^a&=&\delta_{a,q/\bar q} \delta(1-y) + {4 \over 3}
{\alpha_s \over 2 \pi} \left ( C_G^a- {4 \over y^2} C_P^a\right)\nonumber\\
{\cal C}_4^a&=&-2 \delta_{a,q/\bar q} \delta(1-y) + {4 \over 3}
{\alpha_s \over 2 \pi} \left (-{2 \over y}C_G^a+ {24 \over y^3} C_P^a\right)
\nonumber \\
{\cal C}_1^a&=&\delta_{a,q/\bar q} \delta(1-y) + {4 \over 3}
{\alpha_s \over 2 \pi} C_1^a
\end{eqnarray}
Notice that the delta function is absent for gluons.
The functions $C_i^a$ for quarks are given below,
\begin{eqnarray}
C_1^q&=& \left ( {1+ y^2 \over 1-y} \right)_+ \log
\left ({Q^2 \over m_g^2}\right)
+ (1+ y^2) \left ( {\log(1-y) \over 1-y} \right)_+ +
{1+y^2 \over 1-y}\log(y) \nonumber \\
&&
-{3 \over 2} (1-y)
-{3 \over 2} \left( 1 \over 1-y\right)_+ -\left({9\over 4}-
{\pi^2 \over 3} \right) \delta(1-y) \nonumber \\
C_G^q&=&C_1^q + 2-y \nonumber \\
C_P^q&=&{y^2 \over 4}
\end{eqnarray}
Notice that the above expressions are well defined in the soft limit.
As is shown in the appendix, the soft singularities coming from the real
gluon emission diagrams are exactly cancelled by those
coming from the virtual diagrams, namely the self energy and vertex corrections.
The term $\log (m_g^2)$ comes purely from the collinear divergence,
which cannot be avoided in the massless limit. This ill-defined
term will finally be absorbed into the fragmentation functions at the
level of $\Lambda$ production cross section. Hence the fragmentation
functions are defined in what is usually called the massive gluon scheme.
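Since the $(\cdots)_+$ distributions appearing in $C_1^q$ above are defined only under an integral, a small numerical sketch (illustrative only, using the standard plus prescription and a stdlib midpoint rule) may clarify how they are handled in practice: $\int_0^1 dy\,[f(y)]_+\,g(y)=\int_0^1 dy\,f(y)\,(g(y)-g(1))$, so every plus-distribution integrates to zero against a constant test function.

```python
def plus_integral(f, g, n=20000):
    """Midpoint-rule estimate of int_0^1 dy [f(y)]_+ g(y), using the
    standard plus prescription: int_0^1 dy f(y) * (g(y) - g(1))."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        y = (k + 0.5) * h
        total += f(y) * (g(y) - g(1.0)) * h
    return total

# Against g = 1 the plus-distribution integrates to exactly zero:
assert abs(plus_integral(lambda y: 1.0 / (1.0 - y), lambda y: 1.0)) < 1e-12

# Against g(y) = y the integrand is (y-1)/(1-y) = -1, so the result is -1:
val = plus_integral(lambda y: 1.0 / (1.0 - y), lambda y: y)
assert abs(val - (-1.0)) < 1e-6
```

The subtraction at $y=1$ is what makes the expressions for $C_1^q$ finite in the soft limit, as discussed above.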
Similarly for gluons we have
\begin{eqnarray}
C_1^g&=& 2 (2-y) \log \left ({Q^2 y^2 \over m_g^2}\right) - 4(2-y)
\nonumber\\
C_G^g&=&0
\nonumber\\
C_P^g&=&0
\label{glco}
\end{eqnarray}
The asymmetry given in eqn.(\ref {DSC}) is expressed in terms of the
${\cal C}_i^{a}~$s as
\begin{eqnarray}
{d \sigma^{\Lambda(\downarrow)} \over dx\, d\Omega}\!\!\!\!\!&=&\!\!\! 3 { \alpha^2 \over 2 Q^2}\!\sum_{a=q,\bar q, g} \left[
-2 \chi_1\!v_e a_q e_q \eta_q \left({\cal C}_3^{a} \!+\!
{x \over 4} (1\!-\! Cos^2\theta) {\cal C}_4^{a}
\right)\right. \nonumber \\
&&\left. +2 \chi_2 (v_e^2+a_e^2) v_q a_q \eta_q
\left({\cal C}_3^{a} + {x \over 4} (1- Cos^2\theta)
{\cal C}_4^{a}\right)\right. \nonumber \\
&&\left. -2 Cos\theta \left(\chi_1 e_q v_q a_e {\cal C}_1^{a} -
\chi_2 a_e v_e (v_q^2\!+\!a_q^2) {\cal C}_1^{a}\right)\right] \otimes
\Delta D_a^{\Lambda}(x) \nonumber \\
\label{asymone}
\end{eqnarray}
where
\begin{equation} \begin{array}{rl}
\chi_1 ={\displaystyle Q^2 \over \displaystyle Q^2-M_Z^2}
{\displaystyle 1 \over\displaystyle 16 Sin^2\theta_W Cos^2\theta_W};
&
\chi_2 = {\displaystyle Q^4 \over\displaystyle (Q^2-M_Z^2)^2}
{\displaystyle 1 \over \displaystyle 256 Sin^4\theta_W
Cos^4\theta_W}
\end{array}
\end{equation}
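From the definitions above one sees immediately that $\chi_2=\chi_1^2$. A short numerical sketch (illustrative only; the PDG-like values of $M_Z$ and $\sin^2\theta_W$ below are assumed inputs, not quoted in the text) makes this explicit.

```python
import math

# Assumed PDG-like inputs (not taken from the paper):
M_Z2 = 91.19 ** 2     # Z mass squared in GeV^2
s2 = 0.231            # sin^2(theta_W)
c2 = 1.0 - s2         # cos^2(theta_W)

def chi1(Q2):
    """chi_1 = (Q^2 / (Q^2 - M_Z^2)) / (16 sin^2 cos^2)."""
    return (Q2 / (Q2 - M_Z2)) / (16.0 * s2 * c2)

def chi2(Q2):
    """chi_2 = (Q^4 / (Q^2 - M_Z^2)^2) / (256 sin^4 cos^4)."""
    return (Q2 / (Q2 - M_Z2)) ** 2 / (256.0 * s2**2 * c2**2)

# chi_2 is exactly the square of chi_1 -- a quick consistency check:
Q2 = 60.0 ** 2
assert math.isclose(chi2(Q2), chi1(Q2) ** 2, rel_tol=1e-12)
```

Below the $Z$ pole $\chi_1$ is negative while $\chi_2$ stays positive, which controls the relative signs of the interference and direct $Z$ terms in eqn.(\ref{asymone}).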
For a $Z$ propagator with finite width, $\chi_1$ and $\chi_2$
are modified; the modified forms can be found in ref.~\cite{PDB}.
The convolution of two functions, $f(x)\otimes g(x)$, means
\begin{equation}
f(x) \otimes g(x) = \int_x^1 {dy \over y} f(y) g(x/y)
\end{equation}
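A direct numerical transcription of this convolution (an illustrative stdlib sketch, not the authors' code) is straightforward:

```python
def convolve(f, g, x, n=20000):
    """Midpoint-rule estimate of f(x) (x) g(x) = int_x^1 (dy/y) f(y) g(x/y)."""
    h = (1.0 - x) / n
    total = 0.0
    for k in range(n):
        y = x + (k + 0.5) * h
        total += f(y) * g(x / y) / y * h
    return total

# Sanity check: with f(y) = y and g = 1 the integrand is just 1,
# so the result is int_x^1 dy = 1 - x.
x = 0.3
assert abs(convolve(lambda y: y, lambda z: 1.0, x) - (1.0 - x)) < 1e-6
```

In eqn.(\ref{asymone}) the role of $g$ is played by the fragmentation function $\Delta D_a^{\Lambda}$ and the role of $f$ by the coefficient functions ${\cal C}_i^a$.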
\subsection{\bf The Polarised $e^-~e^+$ Scattering}
Now let us find out the following asymmetry \cite{BUR}, where one of the
initial leptons and the produced parton are polarised
\begin{equation}
\dsg=\sum_a \int_x^1 {dy \over y } \left[\dsf\right]
\Delta D_a^{\Lambda} (x/y,Q^2)
\label{DSG}
\end{equation}
Following a similar procedure, we can decompose the parton
level asymmetry in terms of leptonic (${\cal L}^{\mu \nu}$) and
gauge invariant hadronic (${\cal T}^{\mu \nu}$) tensors as
\begin{equation}
\dsf={1\over 2 Q} \sum_{I=\gamma,Z,\gamma Z} {\cal L}^{(I)}_{\mu \nu}
(\downarrow)
{\cal D}_{(I)}(Q^2) \sum_{j=1,3,4} {\cal T}_j^{\mu \nu} H^{(I)a}_j(y,Q^2)
\label{GGZ}
\end{equation}
where the new ${\cal D}_{(\gamma)}(Q^2)= 1/Q^4$. In the above equation,
$I=\gamma$ corresponds to the diagrams where only photons are the
intermediate vector bosons. This is an extra contribution which is
absent in the previous asymmetry; it is possible because of
the polarisation of the initial lepton. This diagram is gauge invariant as it is.
The diagrams for $I=Z,\gamma Z$ have been discussed already.
Notice that the diagrams are split in such a way that for each $I$
the diagrams are individually gauge invariant, hence the simple
tensor decomposition.
The leptonic tensors ${\cal L}_{(I)}^{\mu \nu}(\downarrow)$ for the polarised
electron are found to be
\begin{eqnarray}
{\cal L}_{(\gamma)}^{\mu \nu}(\downarrow) &=& 8 \pi \alpha
\left[ l^{\mu \nu} -i \tilde l^{\mu \nu} \right]\nonumber \\
{\cal L}_{(Z)}^{\mu \nu}(\downarrow) &=& {\displaystyle \pi \alpha
\over \displaystyle 2 Sin^2\theta_W Cos^2\theta_W}(v_e+a_e)^2
\left[ l^{\mu \nu} -i \tilde l^{\mu \nu} \right]\nonumber \\
{\cal L}_{(\gamma Z)}^{\mu \nu}(\downarrow) &=& {\displaystyle 2 \pi \alpha
\over \displaystyle Sin\theta_W Cos\theta_W}(v_e+a_e)
\left[ l^{\mu \nu} -i \tilde l^{\mu \nu} \right]
\label{lepggz}
\end{eqnarray}
where $l^{\mu \nu}$ and $\tilde l^{\mu \nu}$ are already given.
Substituting the above
leptonic tensors in eqn.(\ref{GGZ}) we find,
\begin{eqnarray}
\dsf\!\!\!\!&=&\!\!\! {\displaystyle 1 \over \displaystyle 2 Q^2}
{\displaystyle \pi \alpha \over \displaystyle Sin\theta_W Cos\theta_W}
{\displaystyle (v_e+ a_e) \over \displaystyle Q^2-M_Z^2}
\left[{\cal H}_3^a(\downarrow)\!+\!Cos\theta {\cal H}_1^a(\downarrow)
\right.\nonumber \\
&&\!\!\left. +{y \over 4}(1- Cos^2\theta){\cal H}_4^a(\downarrow)\right]
+ Cos\theta {\displaystyle 4 \pi \alpha \over Q^3} {\cal H}_{\gamma}^a
(\downarrow)
\label{DSE}
\end{eqnarray}
The general form of ${\cal H}_i^a(\downarrow)$ is found to be
\begin{equation}
{\cal H}_i^a(\downarrow)= {\displaystyle Q^3 \over \displaystyle
Q^2 -M_Z^2} {\displaystyle 1 \over \displaystyle 2 Sin\theta_W
Cos\theta_W} (v_e+a_e) H_i^{(Z)a} + 2 Q H_i^{(\gamma Z)a}
\label{HDA}
\end{equation}
The functions $H_i^{(I)a}$ for $i=1,3,4$ are given in the
eqns.(\ref {HI}). The new ${\cal H}_{\gamma}^a(\downarrow)$
is given by
\begin{equation}
{\cal H}^a_{\gamma}(\downarrow)=3 {\displaystyle \alpha Q \over
\displaystyle 8 \pi} e_q^2 {\cal C}^{a}_1
\label{NHG}
\end{equation}
Substituting eqn.(\ref {DSE}) in eqn.(\ref {DSG}) and using
the eqns.(\ref {HDA},\ref{NHG},\ref{HI},\ref{CI}) we find
\begin{eqnarray}
\dsg\!\!\!\!&=&\!\!\!3 {\alpha^2 \over 2 Q^2}\! \sum_{q,\bar q,g}
\left[\!- \chi_1 (v_e\!\!+\!a_e)
a_q e_q \eta_q\! \left (2{\cal C}_3^{a}
\!\!+\!\!{x \over 2} (1\!-\! Cos^2\theta){\cal C}_4^{a}
\right) \right. \nonumber \\
&& \left. +~ \chi_2~ (v_e+a_e)^2 v_q a_q \eta_q \left(2{\cal C}_3^{a}
+{x \over 2} (1- Cos^2\theta){\cal C}_4^{a} \right) \right.
\nonumber \\
&&\left. -2~ \chi_1~ (v_e+a_e) v_q e_q~ Cos\theta~ {\cal C}_1^{a}
\right. \nonumber\\
&&\left. + \chi_2~ (v_e+a_e)^2 (v_q^2+a_q^2)~ Cos\theta~~ {\cal C}_1^{a}
\right. \nonumber \\
&& \left. +~e_q^2~ Cos\theta ~{\cal C}^{a}_1
\right]\otimes \Delta D_a^\Lambda(x)
\label{asymtwo}
\end{eqnarray}
From eqns.(\ref {asymone}, \ref {asymtwo}) we find that the
asymmetries reduce to those given in Burkardt and Jaffe's (BJ) paper
\cite {BUR} when we put the strong coupling
constant $\alpha_s(Q^2)$ equal to zero, confirming
the correctness of the results presented in this paper.
Also, we can reproduce the results for the asymmetries in the absence of
$Z$ exchange diagrams by putting
both $\chi_1$ and $\chi_2$ equal to zero. Hence our results presented
here are consistent with the BJ asymmetries. The extraction
of both the quark and gluon fragmentation functions now becomes complicated
because of the presence
of the gluonic fragmentation functions. Notice that the expressions
(eqns. (\ref {asymone}), (\ref {asymtwo})) look very similar to
the BJ asymmetries, with the quark and antiquark fragmentation
functions replaced by the parton fragmentation functions convoluted with
the complicated looking functions ${\cal C}_i^a$.
The simple fact that the structures are identical
might help us in the extraction of
the combination of fragmentation functions with the ${\cal C}_i^a$
instead of just the fragmentation functions. From this combination
and an independent measurement of the polarised gluonic contribution,
one should be able to disentangle both the quark and gluon fragmentation
functions. Once these functions are measured in the laboratory,
they have a nice physical interpretation: their first moments
tell us the polarisation contribution coming from the various partons to
a specific hadron, in the sense described earlier. As is seen
from the analysis of the $Q^2$ evolution of
these fragmentation functions using the AP evolution equations,
the first moments of the
quark and gluon fragmentation functions will play a crucial role
at high energies in the production of polarised $\Lambda$ hyperons
in $e^-~e^+$ annihilation processes.
\section{Conclusion}
In this paper we have extensively studied the polarised $\Lambda$
production in $e^+~e^-$ annihilation process. This involves the
measurement of what are called polarised fragmentation functions
of quarks and gluons into polarised $\Lambda$.
The extraction of the fragmentation functions of different
flavours is very complicated due to the charge factors multiplying
them. Burkardt and Jaffe were successful in disentangling
these distributions by looking at various asymmetries at different
kinematical regions. In this paper, using the Altarelli-Parisi
evolution equation, we have shown
that the polarised gluon fragmentation is also important when one
is interested in finding spin or helicity
coming from various partons to the produced polarised $\Lambda$.
In fact we find that the first moment of the polarised gluon fragmentation
function rises logarithmically as $Q^2$ increases. So at high
energies gluons play a crucial role, invalidating the naive expectations
based on simple coupling constant arguments. This is similar
to the behaviour one encounters in the study of the spin content of
the proton or any other hadron. In fact the behaviour in the fragmentation sector
is much stronger than that in the structure function sector
due to the vanishing of the first moment of $\Delta P_{qg}(x)$.
Since the gluons can contribute only from order $\alpha_s(Q^2)$,
we have considered the full QCD corrections to the asymmetries
discussed earlier. This of course, further complicates the
extraction procedure. The phenomenological implications
of these QCD corrections and the extraction of gluonic fragmentation
function are under investigation.
It is worthwhile to recall the status of the gluonic contribution to the polarised
structure function measured in DIS experiments. The status
here is very similar to that in DIS due to the scheme dependence of
these gluonic contributions. Hence, a scheme independent
measurement of the polarised gluonic fragmentation function
would substantiate our analysis. The extraction of these distributions
is very important primarily for two reasons. One of them is that
it might tell us more about quark and gluon dynamics in the fragmentation
sector. In particular, the measurement of the non strange quark contributions
to polarised $\Lambda$ production will be a test of existing models of
polarised $\Lambda$ fragmentation. Also, the importance of polarised
gluons in $\Lambda$ production can be experimentally tested.
I would like to thank Prof. M.V.N. Murthy for his constant encouragement.
It is a pleasure to thank Prof. R.L. Jaffe for his critical
comments on the part of the work presented here.
\section{INTRODUCTION}
The totally asymmetric simple exclusion process \cite{sz,derrida,schuetz} is
the simplest model of non-equilibrium systems of interacting self-driven
particles. Various extensions of this model have been reported in the
last few years for capturing the essential features of the collective
spatio-temporal organizations in wide varieties of systems, including
those in vehicular traffic \cite{css,helbing,Schad,nagatanirev,cnss}. Traffic
of buses and bicycles have also been modeled following similar approaches
\cite{oloan,jiang}. A simple bus route model \cite{oloan} exhibits
clustering of the buses along the route and the quantitative features
of the coarsening of the clusters have strong similarities with
coarsening phenomena in many other physical systems. Under normal
circumstances, such clustering of buses is undesirable in any real
bus route as the efficiency of the transport system is adversely
affected by clustering. The main aim of this paper is to introduce a
traffic control system into the bus route model in such a way that
helps in suppressing this tendency of clustering of the buses. This
new model exhibits a competition between the two opposing tendencies
of clustering and de-clustering which is interesting from the point
of view of fundamental physical principles. However, the model may
also find application in developing adaptive traffic control systems
for public conveyance systems.
In some of the earlier bus-route models, the movement of the buses was monitored
on coarse time intervals so that the details of the dynamics of the
buses in between two successive bus stops were not described explicitly.
Instead, the movement of the bus from one stop to the next was captured
only through probabilities of hopping from one stop to the next; hopping
takes place with the lower probability if passengers are waiting at the
approaching bus stop \cite{oloan}. An alternative interpretation of the
model is as follows: the passengers can board the bus whenever and
wherever they stop a bus by raising their hand; this is called the
{\it hail-and-ride} system.
Several possible extensions of the bus route model have been reported
in the past \cite{cd,nagatani,Chi}. For example, in \cite{cd},
in order to elucidate the connection between the bus route model with
parallel updating and the Nagel-Schreckenberg model, two alternative
extensions of the latter model with space-/time-dependent hopping
rates are proposed. If a bus does not stop at a bus stop, the
waiting passengers have to wait further for the next bus; such
scenarios were captured in one of the earlier bus route models
\cite{nagatani}, using modified car-following model. In \cite{Chi},
the bus capacity, as well as the number of passengers getting on and
off at each stop, were introduced to make the model more realistic.
Interestingly, it has been claimed that the distribution of the time
gaps between the arrival of successive buses is described well by the
Gaussian Unitary Ensemble of random matrices \cite{Mex}.
In this paper, by extending the model in \cite{oloan}, we suggest a new
public conveyance model (PCM). Although we refer to each of the public
vehicles in this model as a ``bus'', the model is equally applicable
to train traffic on a given route. In this PCM we can set up arbitrary
number of bus stops on the given route. The {\it hail-and-ride} system
turns out to be a special case of the general PCM. Moreover, in
the PCM the duration of the halt of a bus at any arbitrary bus stop
depends on the number of waiting passengers. As we shall demonstrate
in this paper, the delay in the departure of the buses from crowded
bus stops leads to the tendency of the buses to cluster on the route.
Furthermore, in the PCM, we also introduce a traffic control system that
exploits the information on the number of buses in the ``segments''
in between successive bus stops; this traffic control system helps
in reducing the undesirable tendency of clustering by dispersing the
buses more or less uniformly along the route.
In this study we introduce two different quantitative measures of
the efficiency of the bus transport system, and calculate these
quantities, both numerically and analytically, to determine the
conditions under which the system would operate optimally.
This paper is organized as follows: in Sec.~$2$ the PCM is introduced,
and we show several simulation results in Sec.~$3$.
The average speed and the number of waiting
passengers are studied by mean field
analysis in Sec.~$4$, and conclusions are given in Sec.~$5$.
\section{A STOCHASTIC CA MODEL FOR PUBLIC CONVEYANCE}
In this section, we explain the PCM in detail. For the sake of simplicity,
we impose periodic boundary conditions. Let us imagine that the road is
partitioned into $L$ identical cells such that each cell can accommodate
at most one bus at a time. Moreover, a total of $S$ ($0\le S \le L$)
{\it equispaced} cells are identified in the beginning as bus stops. Note
that, the special case $S=L$ corresponds to the {\it hail-and-ride} system.
At any given time step, a passenger arrives in the system with probability
$f$. Here, we assume that a given passenger is equally likely to
arrive at any one of the bus stops with a probability $1/S$. Thus, the
average number of passengers that arrive at each bus stop per unit time
is given by $f/S$. In contrast to this model, in ref.~\cite{cgns,kjnsc}
the passengers were assumed to arrive with probability $f$ at all the
bus stops in every time step.
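The arrival rule above implies a mean arrival rate of $f/S$ per stop, which can be checked with a tiny Monte Carlo sketch (illustrative only, not the authors' code):

```python
import random

# Simulate the PCM arrival rule: with probability f one passenger arrives,
# landing at a stop chosen uniformly from the S stops.
f, S, T = 0.6, 5, 200000   # arrival prob., number of stops, time steps
counts = [0] * S
for _ in range(T):
    if random.random() < f:
        counts[random.randrange(S)] += 1

# The empirical per-stop arrival rate should be close to f/S = 0.12.
mean_rate = sum(counts) / (S * T)
assert abs(mean_rate - f / S) < 0.01
```

This is the quantity that distinguishes the present model from ref.~\cite{cgns,kjnsc}, where every stop receives passengers with probability $f$ in every time step.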
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{modelAB.eps}
\caption{Schematic illustration of the PCM. In the model A, the hopping
probability to the bus stop does not depend on the number of waiting
passengers. In contrast, in the model B the hopping probability to the
bus stop depends on the number of waiting passengers: if the number of
waiting passengers increases, the hopping probability to the bus stop
decreases.}
\label{modelAB}
\end{center}
\end{figure}
The model A corresponds to those situations where, because of
sufficiently large number of broad doors, the time interval during
which the doors of the bus remain open after halting at a stop, is
independent of the size of waiting crowd of passengers. In contrast,
the model B captures those situations where a bus has to halt
for a longer period to pick up a larger crowd of waiting passengers.
The symbol $H$ is used to denote the hopping probability of a bus
entering into a cell that has been designated as a bus stop. We consider
two different forms of $H$ in the two versions of our model which are
named as model A and model B. In the model A we assume the form
\begin{equation}
H=\left\{\begin{array}{cl}
Q & {\rm \,\,\,\,\,\, no \,\, waiting \,\, passengers }\\
q & {\rm \,\,\,\,\,\, waiting \,\, passengers\,\, exist }
\end{array}
\right.
\label{ant}
\end{equation}
where both $Q$ and $q$ ($Q > q$) are constants independent of the number
of waiting passengers. The form (\ref{ant}) was used in the original
formulation of the bus route model by O'Loan et al.\ \cite{oloan}.
In contrast to most of the earlier bus route models, we assume
in the model B that the maximum number of passengers that can get
into one bus at a bus stop is $N_{\rm max}$. Suppose, $N_i$ denotes
the number of passengers waiting at the bus stop $i$ $(i=1,\cdots,S)$
at the instant of time when a bus arrives there. In contrast to the
form (\ref{ant}) for $H$ in model A, we assume in model B the form
\begin{equation}
H=\frac{Q}{\min(N_i,N_{\rm max})+1}
\label{wait}
\end{equation}
where $\min(N_i,N_{\rm max})$ is the number of passengers who can get
into a bus which arrives at the bus stop $i$ at the instant of time
when the number of passengers waiting there is $N_i$. The form
(\ref{wait}) is motivated by the common expectation that the time
needed for the passengers boarding a bus is proportional to their
number. FIG.~$\ref{modelAB}$ depicts the hopping probabilities in
the two models A and B schematically.
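The two hopping rules (\ref{ant}) and (\ref{wait}) can be transcribed directly; the sketch below (illustrative only, with the parameter values $Q$, $q$, $N_{\rm max}$ quoted later in the simulations) makes the contrast between the two models explicit.

```python
# Hopping probabilities of models A and B, using the simulation parameters.
Q, q, N_MAX = 0.9, 0.5, 60

def hop_prob_A(n_waiting):
    """Model A: constant q whenever passengers wait, Q otherwise."""
    return q if n_waiting > 0 else Q

def hop_prob_B(n_waiting):
    """Model B: H = Q / (min(N_i, N_max) + 1), decreasing with the crowd."""
    return Q / (min(n_waiting, N_MAX) + 1)

assert hop_prob_A(0) == hop_prob_B(0) == Q     # empty stop: both give Q
assert hop_prob_A(10) == hop_prob_A(50) == q   # model A is crowd-blind
assert hop_prob_B(120) == Q / (N_MAX + 1)      # model B saturates at N_max
```

The saturation at $Q/(N_{\rm max}+1)$ reflects the maximum boarding capacity: once $N_i\ge N_{\rm max}$, a larger crowd no longer delays the bus further.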
The hopping probability of a bus to the cells that are not designated
as bus stops is $Q$; this is already captured by the expressions
(\ref{ant}) and (\ref{wait}) since no passenger ever waits at those
locations.
In principle, the hopping probability $H$ for a real bus would depend
also on the number of passengers who get off at the bus stop; in the
extreme situations where no passenger waits at a bus stop the hopping
probability $H$ would be solely decided by the disembarking passengers.
However, in order to keep the model theoretically simple and tractable,
we ignore the latter situation and assume that passengers get off only
at those stops where waiting passengers get into the bus and that the
time taken by the waiting passengers to get into the bus is always
adequate for the disembarking passengers to get off the bus.
Note that $N_{\rm max}$ is the {\it maximum boarding capacity} at each bus
stop rather than the {\it maximum carrying capacity} of each bus.
The PCM model reported here can be easily extended to incorporate an
additional dynamical variable associated with each bus to account for
the instantaneous number of passengers in it. But, for the sake of
simplicity, such an extension of the model is not reported here.
Instead, in the simple version of the PCM model reported here, $N_{\rm max}$
can be interpreted as the maximum carrying capacity of each bus if we
assume that all of the passengers on the bus get off whenever it stops.
The model is updated according to the following rules. In steps
$2$--$4$, these rules are applied in {\it parallel} to all
buses and passengers, respectively:
\begin{enumerate}
\item {\it Arrival of a passenger}\\
A bus stop $i$ ($i=1,\cdots,S$) is picked randomly, with probability
$1/S$, and then the corresponding number of waiting passengers is
increased by unity, i.e. $N_i$ $\rightarrow$ $N_i+1$, with probability
$f$ to account for the arrival of a passenger at the selected bus stop.
\item {\it Bus motion}\\
If the cell in front of a bus is not occupied by another bus,
each bus hops to the next cell with the probability $H$.
Specifically, if no passengers are waiting in the next cell, the
hopping probability equals $Q$ in both model A and model B, because
$N_i$ equals 0. If passengers are waiting in the next cell,
the hopping probability equals $q$ in the model A, whereas
in the model B the corresponding hopping probability equals
$Q/(\min(N_i,N_{\rm max})+1)$. Note that, when a bus is
loaded with passengers to its maximum boarding capacity
$N_{\rm max}$, the hopping probability in the model B equals
$Q/(N_{\rm max}+1)$, the smallest allowed hopping probability.
\item {\it Boarding a bus}\\
When a bus arrives at the $i$-th ($i=1,\cdots,S$) bus stop cell, the
corresponding number $N_i$ of waiting passengers is updated to
$\max(N_i-N_{\rm max},0)$ to account for the passengers boarding the bus.
Once the door is closed, no more waiting passenger can get into the bus
at the same bus stop although the bus may remain stranded at the same
stop for a longer period of time either because of the unavailability
of the next bus stop or because of the traffic control rule explained
next.
\item {\it Bus information update}\\
Every bus stop has information $I_j$ ($j=1,\cdots,S$) which is the
number of buses in the segment of the route between the stop $j$ and
the next stop $j+1$ at that instant of time. This information is
updated at each time step. When a bus leaves the $j$-th bus stop,
$I_j$ is increased to $I_j+1$. On the other hand, when a bus leaves
$(j+1)$-th bus stop, $I_j$ is reduced to $I_j-1$. The desirable value
of $I_j$ is $I_0 = m/S$, where $m$ is the total number of buses,
for all $j$ so that buses are not clustered
in any segment of the route. We implement a traffic control rule
based on the information $I_j$: a bus remains stranded at a stop $j$
as long as $I_j$ exceeds $I_0$.
\end{enumerate}
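The update rules $1$--$4$ above can be condensed into a toy simulation sketch. The following is one possible reading (not the authors' code): the ring size and bus number are arbitrary scaled-down values, and the segment occupancies $I_j$ are tracked per cell, a simplification of the departure-event bookkeeping of rule $4$.

```python
import random

# Toy sketch of the PCM on a ring of L cells with S equispaced stops,
# model B hopping and the I_j-based control rule (assumed parameters).
L, S, M_BUSES = 100, 5, 6          # cells, stops, buses
Q, F, N_MAX = 0.9, 0.6, 60         # hopping prob., arrival prob., capacity
I0 = M_BUSES / S                   # desired number of buses per segment

stops = [k * (L // S) for k in range(S)]       # bus-stop cell indices
N = {c: 0 for c in stops}                      # waiting passengers per stop
buses = random.sample(range(L), M_BUSES)       # distinct bus positions
I = [0] * S                                    # buses per segment

def segment(cell):
    """Index j of the segment [stop j, stop j+1) containing this cell."""
    return (cell // (L // S)) % S

for b in buses:
    I[segment(b)] += 1

def step():
    """One parallel update: passenger arrival, then bus motion/boarding."""
    global buses
    if random.random() < F:                    # rule 1: passenger arrival
        N[random.choice(stops)] += 1
    occupied = set(buses)
    moved = []
    for b in buses:
        nxt = (b + 1) % L
        # rule 4: a bus at a stop stays stranded while the segment ahead
        # holds more than I0 buses
        if b in stops and I[segment(nxt)] > I0:
            moved.append(b)
            continue
        h = Q / (min(N.get(nxt, 0), N_MAX) + 1)    # model B hopping prob.
        if nxt not in occupied and random.random() < h:
            if nxt in N:                           # rule 3: boarding
                N[nxt] = max(N[nxt] - N_MAX, 0)
            if segment(b) != segment(nxt):         # rule 4: update I_j
                I[segment(b)] -= 1
                I[segment(nxt)] += 1
            moved.append(nxt)
        else:
            moved.append(b)
    buses = moved

for _ in range(1000):
    step()
assert len(set(buses)) == M_BUSES and sum(I) == M_BUSES  # exclusion holds
```

Dropping the stranded branch in `step()` corresponds to the uncontrolled dynamics whose clustering tendency is illustrated in the upper panels of FIG.~\ref{spatemp}.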
We use the average speed $\langle V \rangle$ of the buses and the
number of the waiting passengers $\langle N \rangle$ at a bus stop
as two quantitative measures of the efficiency of the public conveyance
system under consideration; a higher $\langle V \rangle$ and smaller
$\langle N \rangle$ correspond to an efficient transportation system.
\section{COMPUTER SIMULATIONS OF PCM}
In the simulations we set $L=500, Q=0.9, q=0.5$ and $N_{\rm max}=60$.
The main parameters of this model, which we varied, are the number of
buses ($m$), the number of bus stops ($S$) and the probability ($f$)
of arrival of passengers. The number density of buses is defined by
$\rho=m/L$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.8]{spatio.eps}
\caption{Space-time plots in the model B for the parameter values
$f=0.6, S=5, m=30$. The upper two figures correspond to the case
where no traffic control system based on the information $\{I\}$
is operational. The upper left figure corresponds to the initial
stage (from $t=1000$ to $t=1500$) whereas the upper right plot
corresponds to the late stages (from $t=4000$ to $t=4500$). The
lower figures correspond to the case where the information ($\{I\}$)
based bus-traffic control system is operational (left figure
shows data from $t=1000$ to $t=1500$ while the right figure
corresponds to $t=4000$ to $t=4500$). Clearly, information-based
traffic control system disperses the buses which, in the absence
of this control system, would have a tendency to cluster.
}
\label{spatemp}
\end{center}
\end{figure}
Typical space-time plots of the model B are given in FIG.~\ref{spatemp}.
If no information-based traffic control system exists, the buses have a
tendency to cluster; this phenomenon is very similar to that observed
in the ant-trail model \cite{cgns,kjnsc}. However, implementation of
the information-based traffic control system restricts the size of such
clusters to a maximum of $I_0$ buses in a segment of the route in between
two successive bus stops. We study the effects of this control system
below by comparing the characteristics of two traffic systems one of
which includes the information-based control system while the other
does not.
\subsection{PCM without information-based traffic control}
In FIG.~\ref{S=5_Rulefalse_noinfo} -- FIG.~\ref{S=500_Ruletrue_noinfo},
we plot $\langle V \rangle$ and $\langle N \rangle$ against the density
of buses for several different values of $f$.
Note that FIG.~\ref{S=500_Rulefalse_noinfo} and
FIG.~\ref{S=500_Ruletrue_noinfo} correspond to the hail-and-ride system
for models A and B, respectively.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=5_Q=0.9_q=0.5_Rule=false_info=false.eps}
\includegraphics[scale=0.52]{aveN_S=5_Q=0.9_q=0.5_Rule=false_info=false.eps}
\caption{The average speed and the average number of waiting passengers
in the model A are plotted against the density for the parameters $S=5$
and $f=0.3$, 0.6 and 0.9.}
\label{S=5_Rulefalse_noinfo}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=50_Q=0.9_q=0.5_Rule=false_info=false.eps}
\includegraphics[scale=0.52]{aveN_S=50_Q=0.9_q=0.5_Rule=false_info=false.eps}
\caption{The plot of $\langle V \rangle$ and $\langle N \rangle$ of the
model A for $S=50$ and $f=0.3$, 0.6 and 0.9.}
\label{S=50_Rulefalse_noinfo}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=500_Q=0.9_q=0.5_Rule=false_info=false.eps}
\includegraphics[scale=0.52]{aveN_S=500_Q=0.9_q=0.5_Rule=false_info=false.eps}
\caption{The plot of $\langle V \rangle$ and $\langle N \rangle$ of the
model A for $S=500(=L)$ and $f=0.3$, 0.6 and 0.9.}
\label{S=500_Rulefalse_noinfo}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=5_Q=0.9_q=0.5_Rule=true_info=false.eps}
\includegraphics[scale=0.5]{aveN_S=5_Q=0.9_q=0.5_Rule=true_info=false.eps}
\caption{The plot of $\langle V \rangle$ and $\langle N \rangle$ of the
model B for $S=5$ and $f=0.3$, 0.6 and 0.9.}
\label{S=5_Ruletrue_noinfo}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=50_Q=0.9_q=0.5_Rule=true_info=false.eps}
\includegraphics[scale=0.5]{aveN_S=50_Q=0.9_q=0.5_Rule=true_info=false.eps}
\caption{The plot of $\langle V \rangle$ and $\langle N \rangle$ of the
model B for $S=50$ and $f=0.3$, 0.6 and 0.9.}
\label{S=50_Ruletrue_noinfo}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=500_Q=0.9_q=0.5_Rule=true_info=false.eps}
\includegraphics[scale=0.5]{aveN_S=500_Q=0.9_q=0.5_Rule=true_info=false.eps}
\caption{The plot of $\langle V \rangle$ and $\langle N \rangle$ of the
model B for $S=500(=L)$ and $f=0.3$, 0.6 and 0.9.}
\label{S=500_Ruletrue_noinfo}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.9]{ZIPrankB50S50f6.eps}
\caption{The distribution of waiting passengers is plotted against
all bus stops for the parameters $f=0.6$, $B=50$, $S=50$.
The horizontal axis shows the rank, obtained by arranging the bus stops
in descending order of $\langle N \rangle$.
}
\label{zip}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{velQf9s50.eps}
\includegraphics[scale=0.52]{waitQf9s50.eps}
\caption{The average speed and the average number of waiting passengers
in the model B are plotted against the density for the parameters
$f=0.9, S=50$; the hopping parameters are $Q=0.8$ and $Q=1.0$.
}
\label{FD4}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.8]{opt.eps}
\caption{The optimal density of buses in the model B is plotted
against $Q$. The parameters are
$f=0.9, S=5$ (normal line), $f=0.6, S=5$ (finer broken line),
$f=0.9, S=50$ (bold broken line),
$f=0.6, S=50$ (longer broken line).
}
\label{opt}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.7]{capaV.eps}
\includegraphics[scale=0.7]{capaN.eps}
\includegraphics[scale=0.7]{capaUNIT.eps}
\caption{Comparison between the case of bus capacity $60$ with
bus capacity $120$.
The parameters are $Q=0.9$, $S=10$, $f=0.6$ in the model B without information.
The top figure shows the average velocity, the center figure shows
waiting passengers and the bottom figure shows the number of conveyed
passengers per unit bus, i.e. this number is calculated by (total number of
on-boarding passengers on all buses)/(the number of buses),
against the bus density up to $0.5$.
In each figure, the horizontal axis shows the density; the numbers
without parentheses denote the number densities in the case $N_{\rm max}=60$,
whereas the numbers in the parentheses denote the number densities in
the case $N_{\rm max}=120$.}
\label{hiki}
\end{center}
\end{figure}
These figures demonstrate that the average speed $\langle V \rangle$,
which is a measure of the efficiency of the bus traffic system,
exhibits a {\it maximum} at around $\rho=0.2\sim0.3$, especially in the
model B. Comparing FIG.~\ref{S=5_Rulefalse_noinfo} with
FIG.~\ref{S=5_Ruletrue_noinfo}, we see that the model B
(FIG.~\ref{S=5_Ruletrue_noinfo}) reflects the bus bunching more clearly
than the model A (FIG.~\ref{S=5_Rulefalse_noinfo}), especially at large
$f$ and small $\rho$.
The average number of waiting passengers $\langle N \rangle$, whose
inverse is another measure of the efficiency of the bus traffic system,
is vanishingly small in the region $0.3 < \rho < 0.7$; $\langle N
\rangle$ increases with decreasing (increasing) $\rho$ in the regime
$\rho < 0.3$ ($\rho > 0.7$).
The average velocity of the model A becomes smaller as $S$ increases in
the low-density region (see
FIG.~\ref{S=5_Rulefalse_noinfo}, FIG.~\ref{S=50_Rulefalse_noinfo} and
FIG.~\ref{S=500_Rulefalse_noinfo}).
In contrast, in the model B (FIG.~\ref{S=50_Ruletrue_noinfo} and
FIG.~\ref{S=500_Ruletrue_noinfo})
we observe that there is no significant difference in the average
velocity.
Note that the number of waiting passengers is calculated as (total
waiting passengers)/(number of bus stops). The total number of waiting
passengers in this system is almost the same for the case $S=50$ and
for the hail-and-ride system $S=L$ in both models.
When the parameter $S$ is small (comparing FIG.~\ref{S=5_Rulefalse_noinfo}
and FIG.~\ref{S=5_Ruletrue_noinfo}), the number of waiting passengers is
larger and the average velocity is smaller in the model B than in the
model A, since the effect of the delay in getting on a bus is taken into
account. In the model B (comparing FIG.~\ref{S=5_Ruletrue_noinfo},
FIG.~\ref{S=50_Ruletrue_noinfo} and FIG.~\ref{S=500_Ruletrue_noinfo}),
the case $S=50$ is more efficient than $S=5$, i.e. the system tends to
become more efficient as $S$ increases. However, we do not find any
significant difference between $S=50$ and $S=500$. When $S$ is small,
the system becomes more efficient as the number of bus stops increases.
Once the number of bus stops exceeds $50$, there is little
further variation of the efficiency as $S$ is increased up to the maximum
value $500$.
FIG.~\ref{zip} shows the distribution of $\langle N \rangle$ over all the
bus stops in the system.
We see that the distribution does not follow Zipf's law, which is
sometimes observed in natural and social phenomena: the frequency of word
usage \cite{word}, the population of a city \cite{population}, the number of
accesses to a web site \cite{web}, and intervals between successive
transit times of cars in traffic flow \cite{musha}.
Next, we investigate the optimal density of buses at which the average
velocity becomes maximum. The optimal density depends on $Q$ and is
$\rho=0.3$ for $Q=0.8$ (FIG.~\ref{FD4}, see also FIG.~\ref{opt}).
In FIG.~\ref{FD4}, it is shown that the density corresponding to the
maximum velocity shifts to higher values as $Q$ becomes larger.
FIG.~\ref{opt} shows the optimal density of buses in the model B
without the information-based control system. From this figure, we find
that the optimal density for the case $S=50$ is smaller than that for
$S=5$. Moreover, for a given $S$, the optimal density decreases with
decreasing $f$. However, for both $S=5$ and $S=50$, the optimal
density corresponding to $Q=1.0$ is higher for $f=0.6$ than for
$f=0.9$.
Which is the more effective way of increasing the efficiency of the public
conveyance system on a given route: increasing the number of buses
without increasing the carrying capacity of each bus, or increasing
the carrying capacity of each bus without recruiting more buses? Or,
are these two prescriptions for enhancing the efficiency of the public
conveyance system equally effective? In order to address these questions,
we make a comparative study of two situations on the same route: for
example, in the first situation the number of buses is $10$ and each has
a capacity of $60$, whereas in the second the number of buses is $5$ and
each has a capacity of $120$. Note that the total carrying capacity
of all the buses together is $600$ ($60\times 10$ and $120\times 5$ in
the two situations), i.e., the same in both situations. But the number
density of the buses in the second situation is just half of that in
the first, as the length of the bus route is the same in both situations.
In FIG.~\ref{hiki}, the results for these two cases are plotted; the
different scales of density used along the $X$-axis arise from the
differences in the number densities mentioned above.
From FIG.~\ref{hiki}, we conclude that, at sufficiently low
densities, the average velocity is higher for $N_{\rm max}=60$ than
for $N_{\rm max}=120$. But, in the same regime of the number density
of buses, a larger number of passengers wait at the bus stops when the bus
capacity is smaller. Thus, in the region $\rho<0.05$, system
administrators face a dilemma: if they give priority to the average
velocity and decide to choose buses with $N_{\rm max}=60$, the number of
passengers waiting at the bus stops increases. On the other hand if they
decide to make the passengers happy by reducing their waiting time at
the bus stops and, therefore, choose buses with $N_{\rm max}=120$, the
travel time of the passengers after boarding a bus becomes longer.
However, at densities $\rho>0.05$, the system administrators can satisfy
both the criteria, namely, fewer waiting passengers and shorter travel
times, by one single choice. In this region of density, the public
conveyance system with $N_{\rm max}=60$ is more efficient than that with
$N_{\rm max}=120$ because the average velocity is higher and the number of
waiting passengers is smaller for $N_{\rm max} = 60$ than for $N_{\rm max}=120$.
Thus, in this regime of bus density, efficiency of the system is enhanced
by reducing the capacity of individual buses and increasing their number
on the same bus route.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.52]{aveV_S=5_Q=0.9_q=0.5_Rule=true_info=true.eps}
\includegraphics[scale=0.52]{aveN_S=5_Q=0.9_q=0.5_Rule=true_info=true.eps}
\caption{The plot of $\langle V \rangle$ and $\langle N \rangle$ of the
model B with information ($S=5$ and $f=0.3$, 0.6 and 0.9)}
\label{S=5_Ruletrue_info}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.75]{aveV_S=5_Q=0.9_q=0.5_Rule=true_f=0.9.eps}
\includegraphics[scale=0.75]{aveN_S=5_Q=0.9_q=0.5_Rule=true_f=0.9.eps}
\caption{The model B with $S=5$ and $f=0.9$.
The left vertical dash line is $\rho=0.28$ and the right is $\rho=0.73$
in the two figures.}
\label{S=5_info}
\end{center}
\end{figure}
\subsection{PCM with information-based traffic control}
The results for the PCM with the information-based traffic control system
are shown in FIG.~\ref{S=5_Ruletrue_info} and FIG.~\ref{S=5_info}.
In FIG.~\ref{S=5_Ruletrue_info} we plot $\langle V \rangle$ and
$\langle N \rangle$ against the density of buses for the parameter
$S=5$. The density corresponding to the peak of the average velocity
shifts to lower values when the information-based traffic control
system is switched on.
The data shown in FIG.~\ref{S=5_info} establish that implementation
of the information-based traffic control system does not
always improve the efficiency of the public conveyance system. In
fact, in the region $0.3 < \rho < 0.7$, the average velocity of the
buses is higher if the information-based control system is switched
off. Comparing $\langle V \rangle$ and $\langle N \rangle$ in
FIG.~\ref{S=5_info}, we find that the information-based traffic control
system can improve the efficiency by reducing the crowd of waiting
passengers. But, in the absence of waiting passengers, introduction
of the information-based control system adversely affects the
efficiency of the public conveyance system by holding up the buses
at bus stops when the number of buses in the next segment of the
route exceeds $I_0$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.75]{headrankB50S10f9.eps}
\caption{Distribution of headway distance for
$S=10$, $m=50$, $f=0.9$ in model B. This figure shows the plot of headway
distance against the ranking.}
\label{headrank}
\end{center}
\end{figure}
Finally, FIG.~\ref{headrank} shows the distribution of headway distances
against the ranking, where the buses are arranged in descending order
of headway distance. From this figure
we find that the headway distribution is dispersed by the effect of
the information. The average headway distance with the information-based
traffic control is equal to $8.34$, in contrast to a much shorter value
of $0.66$ when that control system is switched off. Thus we confirm that
the availability of the information $I_j$ and implementation of the
traffic control system based on this information, significantly reduces
the undesirable clustering of buses.
\section{MEAN FIELD ANALYSIS}
Let us estimate $\langle V \rangle$ theoretically in the low density
limit $\rho \to 0$. Suppose $T$ is the average time taken by a bus
to complete one circuit of the route. In the model A, the number of
hops made by a bus with hopping probability $q$ during the time $T$ is $S$,
i.e. the total number of bus stops. Therefore the average period $T$
for a bus in the model A is well approximated by
\begin{equation}
T = \frac{L-S}{Q}+\frac{S}{q}
\label{time}
\end{equation}
and hence,
\begin{equation}
\langle V \rangle=\frac{L}{T}=\frac{LQq}{q(L-S)+QS}\,.
\label{av}
\end{equation}
In model B, in the low density limit where $m$ buses
run practically unhindered and are distributed uniformly in the system
without correlations, the average number of passengers $N$ waiting
at a bus stop, just before the arrival of the next bus, is
\begin{equation}
N =\frac{f}{S}\left(\frac{\frac{L}{S}-1}{Q}+\frac{1}{q}\right)\frac{S}{m}.
\label{N}
\end{equation}
The first factor $f/S$ on the right hand side of equation (\ref{N})
is the probability of arrival of passengers per unit time. The second
factor on the right hand side of (\ref{N}) is an estimate of the average
time taken by a bus to traverse one segment of the route, i.e. the part
of the route between successive bus stops. The last factor in the same
equation is the average number of segments of the route in between two
successive buses on the same route. Instead of the constant $q$ used in
(\ref{av}) for the evaluation of $\langle V \rangle$ in the model A, we
use
\begin{equation}
\bar q =\frac{Q}{N+1}
\label{barq}
\end{equation}
in eq.~(\ref{av}) and eq.~(\ref{N}) for the model B. Then, for the model
B, the hopping probability $Q$ is estimated self-consistently by solving
\begin{equation}
\langle V \rangle = Q - \frac{f}{m},
\label{waitH}
\end{equation}
(\ref{av}) and (\ref{barq}) simultaneously.
We also obtain, for the model B, the average number of passengers
$\langle N \rangle$ waiting at a bus stop in the $\rho \to 0$ limit.
The average time for moving from one bus stop to the next is
$\Delta t=(L/S-1)/Q+1/{\bar q}$ and, therefore, we have
\begin{eqnarray}
\langle N \rangle &=& (f/S)\cdot(\Delta t + 2 \Delta t + \cdots + (S-1)\Delta
t)/S\nonumber\\
&=&\frac{f(S-1)({\bar q}(L-S)+SQ)}{2S^2Q{\bar q}}.
\label{an}
\end{eqnarray}
As long as the number of waiting passengers does not exceed $N_{\rm max}$,
we have observed reasonably good agreement between the analytical estimates
(\ref{av}), (\ref{an}) and the corresponding numerical data obtained from
computer simulations. For example, in the model A, we get the estimates
$\langle V \rangle=0.85$ and $\langle N \rangle=1.71$ from the approximate
mean field theory for the parameter set $S=50$, $m=1$, $Q=0.9$, $q=0.5$,
$f=0.3$. The corresponding numbers obtained from direct computer
simulations of the model A version of PCM are 0.84 and 1.78, respectively.
Similarly, in the model B under the same conditions, we get $\langle V
\rangle=0.60$ and $\langle N \rangle=2.45$ from the mean field theory,
while the corresponding numerical values are 0.60 and 2.51, respectively.
If we take sufficiently small values of $f$, then the mean-field estimates
agree almost perfectly with the corresponding simulation data. However, our
mean-field analysis breaks down when a bus cannot pick up all the passengers
waiting at a bus stop.
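The model B mean-field estimate quoted above can be reproduced numerically; the sketch below iterates eqs.~(\ref{N}) and (\ref{barq}), with $\bar q$ in place of $q$, to a fixed point and then evaluates eqs.~(\ref{av}) and (\ref{an}). The plain fixed-point iteration is our own choice of solution method:

```python
# Self-consistent mean-field estimate for model B in the low-density limit.
# Iterate  N    = (f/S) * ((L/S - 1)/Q + 1/qbar) * (S/m)   (eq. "N")
#          qbar = Q / (N + 1)                              (eq. "barq")
# to a fixed point, then evaluate <V> (eq. "av") and <N> (eq. "an") with qbar.
L = 500
S, m, Q, f = 50, 1, 0.9, 0.3   # parameter set used in the text

qbar = Q                        # initial guess
for _ in range(200):            # plain fixed-point iteration (our choice)
    N = (f / S) * ((L / S - 1) / Q + 1 / qbar) * (S / m)
    qbar = Q / (N + 1)

V = L * Q * qbar / (qbar * (L - S) + Q * S)                            # eq. "av"
Nbar = f * (S - 1) * (qbar * (L - S) + S * Q) / (2 * S**2 * Q * qbar)  # eq. "an"

# Reproduces the mean-field values quoted in the text for model B:
assert abs(V - 0.60) < 0.01
assert abs(Nbar - 2.45) < 0.01
```

For this parameter set the iteration converges to $\bar q = 0.15$ exactly, giving $\langle V \rangle = 0.60$ and $\langle N \rangle = 2.45$.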
\section{CONCLUDING DISCUSSIONS}
In this paper, we have proposed a public conveyance model (PCM) using
stochastic CA. In our PCM, several realistic elements are introduced: e.g.,
the carrying capacity of a bus,
an arbitrary number of bus stops, the halt time of a bus that depends
on the number of waiting passengers, and an information-based bus traffic
control system which reduces clustering of the buses on the given route.
We have obtained quantitative results by using both computer simulations
and analytical calculations. In particular, we have introduced two
different quantitative measures of the efficiency of the public conveyance
system. We have found that the bus system works efficiently in a region
of moderate number density of buses; too many or too few buses drastically
reduce the efficiency of the bus-transport system. If the density of the
buses is lower than optimal, not only are a large number of passengers kept
waiting at the stops for a long duration, but the passengers in the
buses also get a slow ride, because the buses are slowed down
at each stop to pick up the waiting passengers. On the other hand, if the
density of the buses is higher than optimal, the mutual hindrance created
by the buses in the overcrowded route also lowers the efficiency of the
transport system. Moreover, we have found that the average velocity
increases, and the number of waiting passengers decreases, when the
information-based bus traffic control system is switched on. However,
this enhancement of efficiency of the conveyance system takes place
only over a particular range of density; the information-based bus traffic
control system does not necessarily improve the efficiency of the system
in all possible situations.
We have compared two situations where the second situation is obtained
from the first one by doubling the carrying capacity of each bus and
reducing their number to half the original number on the same route.
In the density region $\rho > 0.05$, the system with $N_{\rm max}=60$ is more
efficient than that with $N_{\rm max}=120$. However, at small densities
($\rho < 0.05$), doubling the carrying capacity from $N_{\rm max}=60$ to
$N_{\rm max} = 120$ reduces the number of waiting passengers but also
reduces the average velocity. Hence, bus-transport system
administrators would face a dilemma in this region of small density.
Finally, in our PCM, the effect of the disembarking passengers on the
halt time of the buses has not been captured explicitly. Moreover,
this study is restricted to periodic boundary conditions. The clustering
of particles occurs not only in a ring-like bus route, but also in
shuttle services of buses and trains. Thus it would be interesting to
investigate the effects of the information-based traffic control system
also on such public transport systems. In a future work, we intend to
report the results of our investigations of the model under non-periodic
boundary conditions.
We hope our model will help in understanding the
mechanism of congestion in public conveyance systems and will provide
insight into possible ways to reduce undesirable clustering of the
vehicles.
\vspace{0.5cm}
\noindent{\bf Acknowledgments}: Work of one of the authors (DC) has been
supported, in part, by the Council of Scientific and Industrial Research
(CSIR), government of India.
\section{Introduction}\label{Sect:Introduction}
\begin{center}
\textit{The anxiety, stress, financial strife, grief, and general uncertainty of this time will undoubtedly lead to behavioral health crises.} - Coe and Enomoto, McKinsey on the COVID-19 crisis \cite{Mckinsey_Returning}.
\end{center}
COVID-19 has become a paradigm-shifting phenomenon across domains and disciplines, affecting billions of people worldwide directly or indirectly. The current Coronavirus pandemic disaster has led to escalating emotional and mental health issues with significant consequences, and this presents a serious challenge to reopening and recovery initiatives \cite{goldmann2014mental}. In the United States (US) alone, COVID-19 has infected well over 1.25 million people and killed over 80,000, and continues to spread and claim more lives \cite{worldometers}. Moreover, as a result of the 'Lockdown' (the present research uses Lockdown as the term representing the conditions resulting from state and federal government Novel Coronavirus response regulations and advisories restricting government, organizational, personal, travel and business functionalities), over 30 million people lost their jobs, along with a multi-trillion dollar economic impact \cite{US_Labor}; furthermore, these alarming numbers are growing every day. There is tremendous dissatisfaction among common people due to the continued physical, material and mental health challenges presented by the Lockdown, evidenced by the growing number of protests across the US \cite{Tension_over_restriction}. There appears to be a significant sentiment, and a strong desire in people, to go back to work, satisfy basic physical, mental and social needs, and an eagerness to earn money, as shown in the exploratory public sentiment graph in Fig. \ref{Fig:Reopneing_sentiment}. However, there is also a significant sentiment to stay safe, and many prefer the stay-at-home Lockdown measures to ensure lower spread of the Coronavirus \cite{CDC_What_can_do}. While politicians may have vested interests, state governments and the federal government cannot ignore public sentiment. Therefore, to a considerable degree, the reopening of America, and of its states and regions, will be influenced by perceived public sentiment.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\linewidth]{Figures/reop_senti_graph.pdf}
\caption{Reopening public sentiment graph.}
\label{Fig:Reopneing_sentiment}
\end{figure}
Hence, it is important to address the question: what is the dominant public sentiment in America concerning the reopening and, in some form, going forward into a New Normal? 'New normal' is a term being used to represent the potentially transformed socioeconomic systems \cite{Mckinsey_Beyond}, as a result of COVID-19 issues, amidst the likelihood of multiple future waves of the Coronavirus pandemic. There is a fear that the current Coronavirus wave will see a spike in COVID-19 infections with reopening and the associated loosening of Lockdown restrictions. The present study analyzed public sentiment using Tweets from the first nine days of the month of May, 2020, with the keyword "reopen" from users with country denoted as USA, to gauge public sentiment along eight key dimensions of anger, anticipation, disgust, fear, joy, sadness, surprise and trust, as visualized in Fig. \ref{Fig:Reopneing_sentiment}. Twitter data and Tweets text corpora have been widely used in academic research and by practitioners in multiple disciplines, including education, healthcare, expert and intelligent systems, and information systems \cite{mohammadi2018academic, sinnenberg2017twitter, ghiassi2016targeted, lim2019mining, visvizi2019tweeting}. Sentiment analysis with Tweets presents a rich research opportunity, as hundreds of millions of Twitter users express their messages, ideas, opinions, feelings, understanding and beliefs through Twitter posts. Sentiment analysis has gained prominence in research with the development of advanced linguistic modeling frameworks, and can be performed using multiple well recognized methods and tools, such as R with readily available libraries and functions, and also through the use of custom textual analytics programming to identify dominant sentiment, behavior or characteristic traits \cite{saif2016contextual, samuel2014automating, pandey2017twitter}.
Tweets analytics have also been used to study, and provide valuable insights on a diverse range of topics from public sentiment, politics and pandemics to stock markets \cite{ansari2020analysis, samuel2020covid, kretinin2018going}.
There are strong circumstantial motivations, along with urgent time-sensitivity, driving this research article. Firstly, in the US alone, we have seen over 89,000 deaths and over 1.4 million COVID-19 cases at the point of writing this manuscript, and the numbers continue to increase. Secondly, there have been significant economic losses and mass psychological distress due to unprecedented job loss for tens of millions of people. These circumstantial motivations led us to our main research motivation: discovery of public sentiment towards reopening the US economy. The key question we seek to address is, what are the public sentiments and feelings about reopening the US economy? Insights into public sentiment trends would be very useful to gauge popular support, or the absence thereof, for any and all state-level or federal reopening initiatives.
\\
Scientists, researchers and physicians are suggesting that the reopening process should proceed on a controlled, phased and watchful basis. The general recommendation is that it should be done in three phases \cite{ReopenGuardian04_29}: \textit{Phase 1}: slow down the virus spread through complete Lockdown and very strict measures (for instance, mandatory stay-at-home orders, which have been active in many states). \textit{Phase 2}: reopen on a state-by-state, business-by-business, and block-by-block basis with caution. People would still be required to maintain social distancing, use PPE (personal protective equipment) and observe strict public hygiene \cite{Christopher2020Testing}. All of these would need to be implemented while protecting the most vulnerable sections of the population, using potential strategies such as the 'relax-at-home' or 'safe-at-home' mode. Moreover, the states would be expected to increase testing, tracing and tracking of COVID-19 cases, and ensure ample PPE supplies. \textit{Phase 3}: It would be possible for people to go back to near-normal pre-Coronavirus lifestyles when proven vaccines and/or antibodies become commonly available and mass-adoption sets in. Early research shows that 90\% of transmissions have taken place in closed environments, namely home, workplace, restaurants, social gatherings and public transport, and such prevalence in critical social spaces contributed to increased fear sentiment \cite{KnowRisk_Erin,Qian2020Indoor}.
\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\linewidth]{Figures/wordcloud.jpeg}
\caption{Word cloud about public sentiment.}
\label{Fig:Reopening_wordcloud}
\end{figure}
From a current sentiments analysis perspective, since a large proportion of people need to start work, return to their businesses and jobs, and restart the economy, a quick reopening is perceived as being strongly desirable. However, there is fear regarding a potential second outbreak of the COVID-19 pandemic, and this presents a cognitive dilemma with implications for mental health and emotional conditions. Information and information formats have an impact on human sentiment, behavior and performance, and it is possible to associate an underlying feeling or belief with expressed performance and communication \cite{samuel2017information}. The present research addresses the COVID-19 public sentiment trend identification challenge by generating insights on popular sentiment about reopening the economy, using publicly available Tweets from users across the United States. These publicly downloadable Tweets were filtered with 'reopen' as the keyword, and insights were generated using textual data visualization, starting with the word cloud in Figure \ref{Fig:Reopening_wordcloud} to gain a quick overview of the textual corpus, as such textual visualizations have the potential to provide both cross-sectional and narrative perspectives of data \cite{Conner2019picture}. Textual analytics were used to discover the most frequently used words, phrases, and prominent public sentiment categories.
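The lexicon-based tagging described above can be illustrated with a minimal sketch; the two tiny emotion lexicons and the example text below are hypothetical stand-ins (an actual analysis would use a full emotion lexicon via R or Python packages):

```python
# Minimal lexicon-based sentiment tagging sketch.  The tiny lexicons and the
# sample tweet are hypothetical illustrations, not the study's actual data.
from collections import Counter

LEXICON = {
    "fear": {"afraid", "scared", "outbreak", "virus"},
    "trust": {"safe", "recovery", "reopen", "support"},
}

def emotion_counts(text: str) -> Counter:
    """Count lexicon hits per emotion category in one tweet."""
    words = set(text.lower().split())
    return Counter({emo: len(words & vocab) for emo, vocab in LEXICON.items()})

c = emotion_counts("Scared of the virus but ready to reopen and support recovery")
assert c["fear"] == 2 and c["trust"] == 3
```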
The present research is exploratory in nature and applied in its focus. Given the urgency of the COVID-19 situation in the United States and worldwide, this study has reluctantly forfeited the intellectual luxury of the time-consuming adoption and integration of theoretical frameworks, which would satisfy academic rigor but would forgo contributing practically to critically time-sensitive and desperately needed COVID-19 recovery and reopening solutions. Consequently, this research is part of our COVID-19 solutions research stream, which has enthusiastically embraced the generation of insights using rapid exploratory analytics and logical induction, with timely and significantly practical implications for policy makers, local, state and federal governments, organizations and leaders to understand and prepare for sentiment-sensitive scenarios. This research has thus aimed to discover the most critical insights, optimized to time constraints, through sentiment-sensitive reopening scenarios analysis. The rest of the paper is organized as follows. Section \ref{Sect2:Scenario_analysis} highlights the scenario analysis of past and current events and public sentiment. Section \ref{Sect3:Methods} demonstrates the adopted method for analyzing public sentiment. Section \ref{Sect4:Discussion} is dedicated to an in-depth discussion, pointing out the limitations and opportunities of this research. Finally, Section \ref{Sect5:Conclusion} concludes this paper.
\section{Scenario Analysis: COVID-19 Sentiment Fallout}\label{Sect2:Scenario_analysis}
Extant research has illustrated the dramatic growth in public fear sentiment using textual analytics for identifying dominant sentiments in Coronavirus Tweets \cite{samuel2020covid}. Public fear sentiment was driven by a number of alarming facts, as described in the subsections below. To start with, we list a number of initiatives aimed at understanding, explaining and predicting various aspects of the Coronavirus phenomena. A number of works made early predictions of the spread of COVID-19 \cite{Zhong2020Early,Aboelkassem2020Pandemic}. Zhong et al. \cite{Zhong2020Early} made an early prediction of the spread of the Coronavirus using a simple epidemic model, the SIR (Susceptible-Infected-Removed) dynamic model. The model works with two key parameters: the infection rate and the removal rate. The infection rate is the number of infections caused by one infective per unit time, and the removal rate is the ratio of the removed number to the number of infectives, where the removed number includes recovered and dead infectives. Y. Aboelkassem \cite{Aboelkassem2020Pandemic} used a Hill-type mathematical model to predict the number of infections and deaths and the possible re-opening dates for a number of countries. The model works based on three main estimated parameters: the steady state number (the saturation point), the number of days at which the cases attain half of the projected maximum, and the Hill-exponent value.
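The SIR dynamics mentioned above can be sketched with a simple Euler integration; the infection rate $\beta$ and removal rate $\gamma$ used below are arbitrary illustrative choices, not fitted COVID-19 parameters:

```python
# Minimal SIR (Susceptible-Infected-Removed) integration by Euler's method.
# beta (infection rate) and gamma (removal rate) are illustrative values only.
def sir(beta, gamma, s0, i0, r0, dt=0.1, steps=2000):
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i * dt    # new infections in this step
        new_rem = gamma * i * dt       # removals (recovered + dead) in this step
        s, i, r = s - new_inf, i + new_inf - new_rem, r + new_rem
    return s, i, r

s, i, r = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
assert abs(s + i + r - 1.0) < 1e-9   # total population is conserved
assert i < 0.01                      # the epidemic has waned by the end
```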
Li et al. \cite{Li2020Characterizing} categorized COVID-19 related social media posts into seven situational information categories. They also identified some key features for predicting the reposted amount of each category of information. The prediction accuracies of the different supervised learning methods differ: the accuracies of SVM (Support Vector Machine), Naive Bayes, and Random Forest are 54\%, 45\% and 65\%, respectively.
\subsection{COVID-19 people sentiment-impact: Fear}
\begin{figure}[htbp]
\centering
\subfloat[COVID-19 Emergency room visits.]{\includegraphics[width=0.5\linewidth]{Figures/COVID_like_illiness.pdf}\label{Fig:COVID_illness}}
\subfloat[Hospitalizations per 100,000 of population.]{\includegraphics[width=0.5\linewidth]{Figures/Hospitalization.pdf}\label{Fig:Hospitalization}}
\caption{Impact of COVID-19 outbreak on medical resources in US (source, CDC \cite{CDC_CLI_ILI}).}
\label{Fig:COVID_illness_hospitalization}
\end{figure}
In the US, concurrent with the physical healthcare problem, extant research observed mass fear sentiment about the Coronavirus and COVID-19 phenomena on a significant growth curve from the time the study started tracking the sentiment in February, climbing steeply towards March, 2020 \cite{samuel2020covid}. According to the Centers for Disease Control and Prevention (CDC), this was also around the time that a massive number of people started seeking medical attention for COVID-19 conditions \cite{CDC_CLI_ILI}. Fig. \ref{Fig:COVID_illness} shows the percentage of patient visits for COVID-19-like illnesses (CLI) and Influenza-like illnesses (ILI), relative to the total number of emergency department visits, from December 1, 2019 to April 26, 2020 in the United States (source, CDC \cite{CDC_CLI_ILI}). It clearly indicates that a significant number of people began using emergency facilities for CLI and ILI from around March of 2020, and this corresponds with the growth of fear and panic sentiments associated with COVID-19. However, by late April, we see the emergency visits curve relaxing along with a decline in the number of new infections in many US states. Fig. \ref{Fig:Hospitalization} exhibits COVID-19-associated weekly hospitalizations per 100,000 of the US population among different age categories from March 04 to May 02, 2020. It shows that from the second week of March, a significant number of people aged over 65 years needed to be hospitalized, and the hospitalization count peaked by the middle of April. This age group is clearly amongst the most vulnerable to the COVID-19 outbreak. The 50-64 years age group follows as the second most impacted, and the 18-49 years age group as the third. However, COVID-19 had a very limited impact on the 0-17 years age group. Healthcare experts and researchers continue to monitor, develop and apply solutions that are helping physical recovery.
From a current mental healthcare and sentiment tracking perspective, we did not find any reports highlighting changes to early stage COVID-19 fear and panic sentiment, nor clear updates on sentiment trends from extant literature. Hence, having identified an important gap, this present research identifies changes in public sentiment by collecting and analyzing Twitter data for the first part of May 2020, as described in the methods section.
\begin{figure}[htbp]
\centering
\subfloat[Death cases.]{\includegraphics[width=0.5\linewidth]{Figures/Death_map.pdf}\label{Fig:Death_map}}
\subfloat[Deaths of despair.]{\includegraphics[width=0.5\linewidth]{Figures/Death_of_despair_map.pdf}\label{Fig:Death_of_despair_map}}
\caption{COVID-19 outbreak by states as of May 7, 2020 (source, worldometers \cite{worldometers}).}
\label{Fig:cases_death_despair_map}
\end{figure}
\subsection{Societal sentiment-impact: Despair}
\textit{The whole is often greater than the sum of its parts} - this adage holds true regarding the adverse collective psychological, sentimental and mental healthcare impact of the Coronavirus and COVID-19 phenomena on human society and societal structures. While there was significant growth in fear and anxiety at the individual level, collectively as a society these took the shape of panic and despair, as evidenced in panic-buying driven shortages of items in supermarkets for which there were no supply chain disruptions, indicating that these shortages were driven by adverse public sentiment. Certain American states and regions were more drastically impacted than others. COVID-19 had a severe impact on New York and New Jersey, followed by Massachusetts, Illinois, Michigan and California, as shown in Fig. \ref{Fig:cases_death_despair_map} (source, worldometers \cite{worldometers}). So far, New York and New Jersey have seen the maximum number of deaths (Fig. \ref{Fig:Death_map}). Interestingly, in the deaths of despair prediction for 2020-2029 shown in Fig. \ref{Fig:Death_of_despair_map}, New York and New Jersey are not on the most affected list; instead, some states, which perhaps have fewer job opportunities and older populations, will suffer the most (source, Well Being Trust \& The Robert Graham Center \cite{Death_of_despair}). The prediction shows that the states most likely to bear a negative impact in the long run will be New Mexico, Nevada, Wyoming, Oregon, Florida and West Virginia, due to the looming socioeconomic fallout of COVID-19. This implies a longer-term and more complex mental healthcare challenge as states begin their journey to recovery - it will be vital for governments and relevant organizations to track, understand and be sensitive to shifts in public sentiment at the local and national levels.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/Unemployment_number.pdf}
\caption{Unemployment situation in US due to COVID-19 (source, US Labor \cite{US_Labor}).}
\label{Fig:Unemployment}
\end{figure}
\begin{comment}
\begin{figure}[htbp]
\centering
\subfloat[Unemployment rate.]
{\includegraphics[width=0.5\linewidth]{Figures/Unemployment_Rate}
\label{Fig:Unemployment_rate}}
\subfloat[Unemployment count.]
{\includegraphics[width=0.5\linewidth]{Figures/Unemployment_number}
\label{Fig:Unemployment_count}}
\par
\subfloat[Unemployment by states.]
{\includegraphics[width=0.55\linewidth]{Figures/Unemployment_map.pdf}
\label{Fig:Unemployment_map}}
\caption{Unemployment situation in US due to COVID-19 (source, US Labor \cite{US_Labor}).}
\label{Fig:Unemployment}
\end{figure}
\end{comment}
\subsection{Economic-downturn sentiment-impact: Confusion}
The Coronavirus pandemic has caused significant global disruptions, leading to continuing losses valued at trillions of US dollars due to the closure of many businesses and entire industries, such as hospitality and air travel. However, confusion is rampant due to counter-efforts by governments, nationalized monetary policy interventions, and large-scale financial assistance to individual citizens, organizations and businesses. For example, the US government has provided multi-trillion dollar stimulus checks, small business assistance and a wide range of concessions and equity market support mechanisms. Such initiatives make it difficult for investors and individuals to gauge the fundamental value of many assets, as government policy intervention can support systemic market shifts and disrupt forecasts. Risks in the global financial market have elevated substantially due to investors' anxiety in this pandemic crisis and asset price variability \cite{coronavirusworld, papadamou2020direct, zhang2020financial}. Data analysis of daily COVID-19 cases and stock market returns for 64 countries demonstrated strongly negative returns with increasing confirmed cases \cite{ashrafstock}. The stock market reacted more strongly in the early days of confirmed cases than in later periods. Moreover, the reaction of individual stock markets is closely related to the severity of the local outbreak, which leads to economic losses, high volatility and unpredictable financial markets, and Tweet textual analytics have been used to gauge the associated sentiment \cite{zhang2020financial, kretinin2018going}. All of these COVID-19 effects lead to a delicate market equilibrium, immersed in a fair degree of confusion, positively supported by the expectation of government intervention, and limited by the negative consequences of COVID-19.
Also, COVID-19 has caused a severe interruption in the functioning of businesses and institutions, leading to loss of employment, diminished income opportunities and disruption of labor markets, causing a significant increase in distress \cite{adams2020inequality}. A prolonged pandemic may lead to mass unemployment and irreversible business failures \cite{zhang2020financial}. According to the International Labor Organization (ILO), the global unemployment rate will increase by 3\%$\sim$13\%, which will in turn increase underemployment and reduce economic activities such as global tourism \cite{figari2020welfare}. Adams-Prassl et al. \cite{adams2020inequality} studied comparative job losses and found that about 18\% and 15\% of people lost their jobs in early April in the US and UK, respectively, whereas only 5\% of people lost their jobs in Germany. They also predicted that the probability of an individual losing their job in May is about 37\% in the US, 32\% in the UK and 25\% in Germany. Evidently, people are worried and anxious about their livelihoods and future income, driving a sense of urgency for reopening the economy. Fig. \ref{Fig:Unemployment} demonstrates how the unemployment situation in the US has been worsened by the COVID-19 pandemic (source, US Department of Labor \cite{US_Labor}). By mid-March, 2020, many people had started losing their jobs, and by late April around 16\% of the population had become unemployed. As of now, over 30 million people have lost their jobs, and this number continues to grow as the Lockdown continues.
The global economic downturn caused by COVID-19 is estimated to be the worst since the Great Recession of 2008 \cite{figari2020welfare}. The COVID-19 related shutdown is expected to cause a 20\%-25\% output decline in advanced economies (e.g., US, Canada, Germany, Italy, France), with a 33\% drop in consumer expenditure and around a 2\% decline in the annual GDP growth rate \cite{oecd2020initialimpact}. It is projected that annual global GDP growth will be reduced to 2.4\% in 2020, with a negative growth rate in the first quarter of 2020, down from an already weak rate of 2.9\% in 2019 \cite{coronavirusworld}. Compared to the pre-COVID-19 period, it is estimated that about 22\% of the US economy would be threatened, 24\% of jobs would be in danger and total wage income could be reduced by around 17\% \cite{del2020supply}.
\section{Method and Current Sentiment Analytics}\label{Sect3:Methods}
Thus far, this study has used secondary data and extant research to motivate, inform and direct the research focus towards current sentiment analytics on the subject of reopening the US economy. Previous sections have summarized key aspects of the extensive socioeconomic damage caused by the Coronavirus pandemic, and highlighted associated psychological and sentiment behavior challenges. For the purposes of the main data analysis, this research uses a unique Twitter dataset of public data specifically collected for this study using a custom date range, filtered to be most relevant to the reopening discussion. While public sentiment changed from apathy, disregard and humor in the earliest stages of the Coronavirus pandemic, to fear in February and March of 2020, and despair in March and April of 2020, there is a lack of clarity on the nature of public sentiment surrounding reopening of the economy. The analysis of Twitter data uses textual analytics methods that include the discovery of high-frequency key words, phrases and word sequences that reflect public thinking on the topic of reopening. These publicly posted key words, phrases and word sequences also allow us to peek into the direction of evolving public sentiment. Anecdotal Tweets provide insights into special cases, influential Tweets and logical inflection points. In the final parts of the data analysis, the study provides insights into dominant current sentiment through sentiment analysis using the R programming language, associated sentiment analysis libraries (R packages) and lexicons.
\begin{table}[htbp]
\centering
\caption{Twitter data features: Mentions \& Hashtags.}\label{Table:endogenous_features}
\subfloat[Mention count.]{
\begin{tabular}{ll}
\hline
\multicolumn{1}{l}{\textbf{Tagged}} & \multicolumn{1}{l}{\textbf{Rank}} \\ \hline
realDonaldTrump & 1 \\
GovMikeDeWine & 2 \\
NYGovCuomo & 3 \\
GavinNewsom & 4 \\
GovLarryHogan & 5 \\
\hline
\end{tabular}}
\hspace{1.5cm}
\subfloat[Hashtag count.]{
\begin{tabular}{ll}
\hline
\textbf{Hashtag} & \textbf{Rank} \\
\hline
COVID19 & 1 \\
BREAKING & 2 \\
Covid\_19 & 3 \\
coronavirus & 4 \\
Texas & 5 \\
\hline
\end{tabular}}
\end{table}
\subsection{Method} \label{Method}
The present study used Twitter data from May 2020 to gauge sentiment associated with reopening. A total of 293,597 Tweets, with 90 variables, were downloaded for the date range 04/30/2020 to 05/08/2020 using the rTweet package in R and the associated Twitter API, with the keyword "reopen". This follows a standard process for topic-specific data acquisition, and the dataset was saved in .rds and .csv formats for exploratory data analysis \cite{pepin2017visual, samuel2020beyond}. The complete data acquisition and analysis were performed using the R programming language and relevant packages. The dataset was cleaned, filtered of potential bot activity and subset by country to ensure a final subset of clean Tweets tagged as originating in the US. It is possible that this process omitted other Tweets from the US which were not tagged by country. Furthermore, for ethical and moral purposes, we used custom code to replace all identifiable abusive words with the unique word text "abuvs" appended with numbers, to ensure a distinct sequence of characters. While we wanted to avoid the display of abusive words in an academic manuscript, we believed it useful to retain the indication that abusive words were used, for further analysis. After the filtering, cleaning and stopword-removal processes, a final dataset consisting of 2507 Tweets and twenty-nine variables was used for all the Twitter data analysis, textual analytics and textual data visualization in this paper. A visual summary of the key words in the textual corpus of the filtered Tweets is provided in the word cloud visualization in Fig. \ref{Fig:Reopening_wordcloud}.
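The abusive-word replacement step can be illustrated as follows. The word list, example Tweets and function name below are hypothetical stand-ins for the study's custom R code, shown here as a Python sketch:

```python
import re

# Hypothetical abusive-word list and example Tweets, for illustration only;
# the study's actual list and custom R code are not reproduced here.
ABUSIVE = ["darnword", "badword"]


def mask_abusive(texts, words=ABUSIVE):
    """Replace each abusive word with a distinct token 'abuvs<N>'.

    The replacement preserves the fact (but not the content) of abusive
    usage, and numbering keeps every replacement a distinct character
    sequence, as described in the method.
    """
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, words)) + r")\b", re.IGNORECASE
    )
    counter = 0

    def repl(match):
        nonlocal counter
        counter += 1
        return f"abuvs{counter}"

    return [pattern.sub(repl, t) for t in texts]


tweets = ["Reopen now you badword!", "This darnword lockdown..."]
print(mask_abusive(tweets))  # ['Reopen now you abuvs1!', 'This abuvs2 lockdown...']
```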
\subsubsection{N-Grams Word Associations}\label{Subsect:Word_Association}
The text component of the dataset, consisting of filtered Tweets only, was used to create a text corpus for analysis and data visualization. Word frequency and N-gram analyses were used to study the text corpus of all the Tweets to discover dominant patterns, summarized in Fig. \ref{Fig:N_grams}. Word frequency analysis revealed an anticipated array of high-frequency words including economy, states, businesses, COVID, open, back, work, country, reopening, plan and governor. N-grams, which identify frequently used word pairs and word sequences, revealed interesting patterns. The most frequent Bigrams (two-word sequences) included: open economy, reopen country, social distancing, time reopen, states reopen and want reopen. These largely indicate positive sentiment towards reopening. The most frequent Trigrams (three-word sequences) included: get back work, people want reopen, stay home order and want reopen country. These Trigrams also indicate medium to strong support for reopening. The most frequent "Quadgrams" (four-word sequences) included: can't happen forever, goin worse lets get, and constitutional rights must stop. Quadgrams reveal more complex sentiment, with positive but weak support for reopening. For example, 'can't happen forever' most likely implies that the Lockdown cannot last indefinitely, and 'constitutional rights must stop' most likely implies that intrusive Lockdown measures are not appreciated.
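The N-gram counting behind this analysis can be sketched in a few lines. The example Tweets are invented for illustration; the study's actual corpus and R tooling are not reproduced:

```python
import re
from collections import Counter


def ngrams(texts, n):
    """Count n-word sequences across a list of (cleaned) Tweet texts."""
    counts = Counter()
    for t in texts:
        # Simple tokenization keeping letters and apostrophes (e.g. can't).
        tokens = re.findall(r"[a-z']+", t.lower())
        counts.update(
            " ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)
        )
    return counts


tweets = ["time to reopen the economy", "states reopen the economy now"]
bigrams = ngrams(tweets, 2)
print(bigrams.most_common(2))  # 'reopen the' and 'the economy' each appear twice
```

The same function yields Trigrams and Quadgrams by passing `n=3` or `n=4`.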
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\linewidth]{Figures/nGrams.pdf}
\caption{N-Grams.}
\label{Fig:N_grams}
\end{figure}
\subsubsection{Descriptive Analysis of Tweets}\label{Subsect: Describe}
Descriptive analysis was used to explore the data, and Tables \ref{Table:endogenous_features} and \ref{Tab:Locations} summarize the reopening Twitter data features associated with sentiment analysis. Table \ref{Table:endogenous_features}(a) ranks the dominant Twitter user names mentioned in the reopening Tweets data, and Table \ref{Table:endogenous_features}(b) ranks the leading hashtags used in the data. The text of the Tweets was also analyzed in conjunction with other variables in the dataset to gain insights into behavioral patterns, including grouping by technology (device used to post the Tweet) and analysis of key word usage by such technological classification. We grouped Tweets into two technology-user classes, iPhone users and Android users, to explore potential behavioral aspects. Extant research supports such a classification of technology users for analyzing sentiment, psychological factors, individual differences and technology switching \cite{miller2012smartphone, samuel2020covid, shaw2016individual, lin2014understanding}. We identified 1794 Twitter for iPhone users and 621 Twitter for Android users in our dataset, and ignored smaller classes, such as users of web client technologies. For the purposes of relative analysis, we normalized the technological groupings so as to compare ratios intrinsic to each group, and to avoid the distortion caused by the unequal number of users in the two technology-user classes. Our analysis and grouped data visualizations revealed some interesting patterns, as summarized in Fig. \ref{Fig:AndroidvsIphone}. Twitter for iPhone users had a marginally higher ratio of 'reopen' mentions, while they were at par with Twitter for Android users in their references to 'business' and 'time' urgency words. Twitter for iPhone users tended to use more abusive words, while Twitter for Android users tended to post more Tweets referencing 'work', 'Trump', 'politics', 'COVID-19' and 'economy'.
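The within-group normalization described above can be sketched as follows. The keyword counts below are hypothetical; the group sizes (1794 iPhone users and 621 Android users) are from the dataset:

```python
# Hypothetical keyword mention counts per device group; normalizing to
# within-group per-user ratios avoids distortion from unequal group sizes.
iphone_counts = {"reopen": 420, "work": 150, "economy": 130}
android_counts = {"reopen": 140, "work": 70, "economy": 60}


def normalize(counts, group_size):
    """Express each keyword count as a per-user ratio within its group."""
    return {word: c / group_size for word, c in counts.items()}


iphone_ratios = normalize(iphone_counts, 1794)
android_ratios = normalize(android_counts, 621)
```

Comparing `iphone_ratios` and `android_ratios` rather than the raw counts is what allows, for example, the smaller Android group to show a higher relative rate of 'work' references.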
Negative sentiment is usually tagged to the use of abusive words, positive sentiment is currently associated with reopening words, and either positive or negative sentiment could be associated with political, work, business and economy words, subject to context and timing. This reveals that technology user groups may differ in their sentiment towards reopening, which by itself merits additional research focus.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/AndroidvsIphone.pdf}
\caption{Tweets grouped by device type.}
\label{Fig:AndroidvsIphone}
\end{figure}
\subsubsection{Illustrative Tweets and Sentiment}\label{Subsect: Anecdote}
\begin{center}
\textit{"Reopen everything now!!!!"} 5/3/2020, 21:46 Hrs, Twitter for iPhone\\
\textit{"NO state is ready to reopen"} 5/2/2020, 23:55 Hrs, Twitter for Android\\
\end{center}
It is important to connect textual analytics discoveries, such as word frequencies and N-gram sequences, to context and demonstrate potential ways in which the popular words and word sequences are being used. To that purpose, this section presents and discusses a few notable Tweets associated with the N-grams and descriptive textual analytics. Name mentions, abusive words and hashtags in the Tweets displayed in this paper have been deleted or replaced to maintain confidentiality, research ethics and relevance; spellings, though incorrect, have been retained as posted originally. We observed Tweets with high positive sentiment and emotional appeal, such as "What a beautiful morning to reopen the economy!" and "Ready to use common sense and reopen our country". Some of the Tweets focused on humor: "More importantly when do the bars reopen", "Day 50: I find it amusing that skinheads are the ones calling to reopen hair salons", "The first weekend the bars reopen is going to kill more people than the coronavirus" and "First karaoke song when the bars reopen will be a whole new world". A large number of Tweets referenced jobs, work and businesses, such as: "Then tell the states to reopen. That is the only way to create jobs", "How about you just reopen the state?", "WE ARE BEGGING YOU TO OPEN UP THE ECONOMY BUT YOU DONT CARE! Our jobs won't be there if you keep this going!" and "I don't want to get it but we must all reopen and get back to work".
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/4Senti.pdf}
\caption{Different sentiment types as time progresses.}
\label{Fig:4_sentiments}
\end{figure}
Many Tweets were political: "History will also show that during the pandemic the Democrats did nothing to reopen the country or economy but continued to collect a paycheck while 30\% was unemployed", "Trump thinks he needs the states to reopen for his reelection", and celebrities were not spared the negative sentiment: "Melinda Gates, multi-billionaire hanging out comfortably at home, insists America not reopen the economy until at least January and until America implements paid family medical leave. \#outOfTouch" and "Bill Gates laughs when he hears about the economy falling because he wants you to die". Some Tweets expressed skepticism: "This is not hard. The economy won't get better even if you open up EVERYTHING because consumer consumption is based on CONFIDENCE. Economics 101 y'all. America ain't ready to reopen", "The need to reopen the economy is definitely evidence that capitalism will kill us all" and "What happens when the 20\% win and stores reopen and the other 80\% still refuse to show up?". Frustration was evident in some Tweets: "i never want to hear the words 'reopen' and 'economy' ever again", and some Tweets appealed to logic: "The cure can't be worse than the virus. It's time to reopen America", "If they haven't been preparing by now, it's their problem. Many others have spent their time getting ready to Reopen", "But we don't have proof that will happen. When is a good time to reopen? It will always be risky" and "More will be devistated if we don't reopen. Follow the protocol set out and get us back to work". Also, some advised caution: "I do believe there will be serious soul searching in a few weeks, as states reopen and coronavirus case numbers explode", while other Tweets emphasized individual rights: "It is really past time to reopen our country and to allow US citizens our constitutional rights" and "Reopen. Let owner make a living. No one is being forced To go there. Bring back choice".
As evident, many strong, complex and diverse emotions are expressed in these Tweets examples, and it is nearly impossible to manually estimate cumulative sentiment classes or scores for large Tweets datasets. However, with the development of standardized sentiment scoring and classification technologies, it has become efficient to perform sentiment analysis on large and unstructured data, and current research leverages R tools to perform sentiment analysis on reopening data.
\subsection{Sentiment analysis}\label{Subsect: sentir}
The scaling up of computing technologies over the past decade has made it possible for vast quantities of unstructured data to be analyzed for patterns, including the identification of human sentiment expressed in textual data. Sentiment analysis is one of the main insight-generating benefits of textual analytics, as it extracts the potentially intended sentiment meaning from the text being analyzed. Early stage past research used custom methods and researcher-defined protocols to identify both sentiment and personality traits such as dominance through the analysis of electronic chat data, and standardized methods to assign positive and negative sentiment scores \cite{samuel2014automating,he2015novel,ravi2015survey,samuel2018going}. Sentiment analysis assigns sentiment scores and classes by matching keywords and word sequences in the text being analyzed against prewritten lexicons and their corresponding scores or classes. For this research, we used R and the well known R packages Syuzhet and sentimentr to classify and score the reopening Tweets dataset \cite{R_Syuz, R_senti}. The R package Syuzhet was used to classify the Tweets into eight sentiment classes, as shown in Fig. \ref{Fig:Reopneing_sentiment}. Syuzhet also measures positive and negative sentiment as a simple sum of unit positive and negative values assigned to the text, and therefore a sufficiently complex Tweet may simultaneously be scored as having a positive score of 2 and a negative score of 1. Sentimentr measures words along with word sequence nuances. The final sentiment score from sentimentr is a summation of the sentiment values assigned to parts of the sentence (or textual field); it typically ranges from around -1 to 1 but can be less than -1 or more than 1, as shown in Fig. \ref{Fig:pos_neg}.
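The lexicon-matching idea behind Syuzhet-style scoring can be illustrated with a toy example. The lexicon below is a tiny invented sample; the real R packages use far richer lexicons, and sentimentr additionally weighs word-sequence nuances such as negation:

```python
# Tiny invented lexicon mapping words to unit polarity values.
LEXICON = {"beautiful": 1, "ready": 1, "kill": -1, "die": -1, "worse": -1}


def syuzhet_style_score(text, lexicon=LEXICON):
    """Sum unit positive/negative values for matched words.

    Returns (positive total, negative total, net score), mirroring how a
    single text can carry both a positive and a negative score at once.
    """
    words = text.lower().split()
    pos = sum(1 for w in words if lexicon.get(w, 0) > 0)
    neg = sum(1 for w in words if lexicon.get(w, 0) < 0)
    return pos, neg, pos - neg


print(syuzhet_style_score("what a beautiful morning ready to reopen"))  # (2, 0, 2)
```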
An analysis of the reopening Tweets displayed 48.27\% positive sentiment, 36.82\% negative sentiment and 14.92\% neutral sentiment. The sentiment analysis, combining the scores from sentimentr and the classification provided by Syuzhet highlighting trust and anticipation, reflected a largely positive sentiment towards reopening the economy.
\begin{comment}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\linewidth]{Figures/PosNeg.pdf}
\caption{Postive and Negative sentiment about reopening.}
\label{Fig:pos_neg}
\end{figure}
\end{comment}
\begin{figure}[htbp]
\centering
\subfloat[Sentiment Score.]
{\includegraphics[width=0.5\linewidth]{Figures/PosNegBarplot.pdf}
\label{Fig:PosNegBarplot}}
\subfloat[Sentiment Class.]
{\includegraphics[width=0.5\linewidth]{Figures/NegPosSyuz1.pdf}
\label{Fig:BarpPN}}
\caption{Positive and Negative sentiment about reopening.}
\label{Fig:pos_neg}
\end{figure}
\subsubsection{ Sentiment Scenario Analysis}
There is a high level of uncertainty regarding future events, and questions remain surrounding the effectiveness of available healthcare facilities in protecting the populace against a potential second wave of Coronavirus. There is a serious concern that an unfettered and undisciplined reopening of the US economy may lead to rapid spreading of the Coronavirus and a corresponding steep increase in COVID-19 cases and deaths. However, it is also clear that states cannot keep businesses and services closed indefinitely. This situation needs to be addressed from multiple perspectives, and there are numerous efforts underway to develop solutions for phased openings and preparation for the New Normal. The present study seeks to leverage the insights discovered through timely sentiment analytics of the reopening Tweets data, and apply the findings to the debate between "open now" (including planned and phased openings) and "open (indefinitely) later" (including advocacy of maintaining complete shutdown), by analyzing four potential New Normal scenarios:
\begin{enumerate}[label=\textbf{Scenario \alph*:}, leftmargin=*]
\item Positive public sentiment trend and reopen now
\item Positive public sentiment trend and reopen later
\item Negative public sentiment trend and reopen now
\item Negative public sentiment trend and reopen later
\end{enumerate}
These New Normal scenarios are highlighted and discussed on a \textit{ceteris-paribus} basis, that is we only consider sentiment variance and reopening timing, holding all else equal, such as the progression of COVID-19, healthcare and social distancing protocols, and all other necessary precautions, preparations and "ongoing intensive deep sanitization cycles, physical-distance protocols and associated personal and community paraphernalia" \cite{Samuel8Principles}.
\subsubsection{Positive sentiment scenarios a. \& b.}
Extant research has demonstrated the validity of using Tweet sentiment analysis to understand human behavior \cite{ibrahim2019decoding}. Sentiment analysis of the reopening Tweets data has indicated dominant sentiment trends for trust and anticipation, and a larger proportion of positive sentiment. Scenario \textbf{a} is a valuable New Normal setting from a leadership and policy perspective, as it provides the enthusiasm and public support that will be required for the massive efforts needed to wind the economy back up into action. A positive and supportive public attitude will also help the government, and public and private organizations, implement health protocols more effectively than if positive sentiment were missing. Scenario \textbf{b}, in contrast, would be a missed New Normal opportunity, where in spite of positive public sentiment trends there is a failure to reopen in a timely manner. There are risks associated with any decision or strategy that could be applied; however, the risk of losing the support of positive public sentiment is too significant to be ignored. There is a likelihood that the positive sentiment and forward-looking energy to reopen, restart businesses, rejoin work and push ahead may be dominated by fear, panic and despair once again due to prolonged socioeconomic stress and financial loss. It will remain important for societal leaders and responsible agencies to seize upon positive sentiment trends indicating support for reopening, and make the most of them, especially in the absence of deterministic healthcare and socioeconomic solutions.
\subsubsection{Negative sentiment scenarios c. \& d.}
If negative public sentiment were to dominate, it would have the potential to hinder both a quick reopening and any effective reopening at a later stage. Many Tweets expressed extreme negative sentiment, such as ``[Abusive word] people reopen the whole freaking country and let everybody die ..." and ``we are all going to die". Interestingly, sentiment analysis showed that, though the number of positive sentiment Tweets was higher, the negativity of the negative-scoring Tweets was more extreme than the positiveness of the positive-scoring Tweets. The most extreme negative Tweet had a sentiment score of approximately $-1.51$, while the most extreme positive sentiment score was approximately $1.36$, implying that though fewer Tweets were negative, the intensity of their linguistic expression was higher. Scenario \textbf{c} would lead to a volatile start to the New Normal, as it would entail reopening without adequate public sentiment support. Reopening now in the absence of a dominant positive public sentiment trend could generate a number of adverse effects, including the failure of businesses that attempt to reopen. Still, with some quick reopening-associated successes and positive information flows, reopening now under scenario \textbf{c} may yet lead to a limited measure of New Normal success as the economy restarts and businesses and individuals are empowered to create economic value. However, scenario \textbf{d}, \textit{ceteris-paribus}, has the potential to lead to the most adverse New Normal scenarios. Scenario \textbf{d} is a combination of dominant negative public sentiment trends, where sentiment categories such as fear and sadness dominate and negative sentiment scores peak, along with a delayed opening without time constraints.
Indefinite Lockdown has never been an option, and the strategy has been to flatten the curve through Lockdown and distancing measures, so as to create a buffer time zone for healthcare facilities to prepare and cater to the infected, and for governments to make plans and preparations for optimal post-reopening New Normal scenarios. Scenario \textbf{d} has the potential to lead to growth in negative public sentiment trends, creating long term mental healthcare challenges, coupled with rapidly deteriorating economic fundamentals.
In summary, current reopening data analytics indicate a positive public sentiment trend, which allows for the more favorable outcomes \textbf{a} and \textbf{b}, and these can lead to optimal New Normal scenarios that maximize the benefits of reopening while limiting related risks.
\section{Discussion: Limitations, risks and opportunities.} \label{Sect4:Discussion}
The present research has a few limitations, which must be kept in mind for any application of the public sentiment trends insights and New Normal scenarios presented in this research. These limitations also present opportunities for future studies and further extension of this research. Post-COVID-19 reopening and recovery are complex challenges with significant uncertainties and unknowns - this is a new systemic crisis for which the world has no established solutions or proven models to depend on. Needless to say, any reopening endeavor or the absence thereof, any action or inaction, are all fraught with significant risks. We identify and discuss some of these risks from a sentiment association perspective. The subsections below provide elaborations on the limitations of this research, risks and opportunities for future work.
\begin{table}[htbp]
\centering
\caption{Twitter data features: Locations.}\label{Tab:Locations}
\subfloat[Tagged.]{\begin{tabular}{ll}
\hline
\multicolumn{1}{l}{\textbf{Location}} & \multicolumn{1}{l}{\textbf{Rank}} \\ \hline
Los Angeles, CA & 1 \\
Brooklyn, NY & 2 \\
Chicago, IL & 3 \\
Florida, USA & 4 \\
Pennsylvania, USA & 5 \\
Manhattan, NY & 6 \\
North Carolina, USA & 7 \\
Houston, TX & 8 \\
San Francisco, CA & 9 \\
\hline
\end{tabular}}
\hspace{1.5cm}
\subfloat[Stated.]{\begin{tabular}{ll}
\hline
\multicolumn{1}{l}{\textbf{Location}} & \multicolumn{1}{l}{\textbf{Rank}} \\ \hline
Los Angeles, CA & 1 \\
United States & 2 \\
Brooklyn, NY & 3 \\
New York, NY & 4 \\
Las Vegas, NV & 5 \\
Orlando, FL & 6 \\
Chicago, IL & 7 \\
Dallas, TX & 8 \\
North Carolina, USA & 9 \\
\hline
\end{tabular}}
\end{table}
\subsection{Limitations}
There are two areas of limitation in this research: the quality of the data and the variance among sentiment analysis tools. Twitter data has been used extensively in research and in practice. However, the data is susceptible to bot activity and to errors introduced, voluntarily or inadvertently, by users who provide inaccurate information. For example, Table \ref{Tab:Locations} (a) \& (b) both show location variables, and apart from being poorly tagged, they do not provide a reliable indication of the actual location of the user. Extensive cleaning and data preparation are necessary, which is often time and resource consuming, especially with textual data. Furthermore, though large volumes of public Twitter data can be acquired with reasonable effort, the quality of the data can be affected by bot activity, repeat posts and spam posts. Though the reopening Tweets data used for the present research was cleaned and well prepared prior to analysis, following standard processes, the likelihood remains that the algorithmic processes did not successfully address all issues. Secondly, the tools available for sentiment analysis are subject to a measure of error and are limited by the scope of the underlying lexicons. This is also why using multiple lexicons can lead to somewhat different results depending on the context and the complexity of the underlying textual corpus \cite{S2019Viral}. This limitation of sentiment analysis tools can usually be mitigated by analyzing a larger number of Tweets. This research was intended to be exploratory and directional in nature and, therefore, multiple data sources were not used. Ideally, public sentiment should be gauged through multiple listening mechanisms, using data from multiple social media platforms and public communications, to provide a better representation of the population for sentiment analysis.
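To make the lexicon dependence concrete, the sketch below implements a minimal dictionary-based sentiment scorer. The word lists are illustrative stand-ins, not the actual lexicons used in this study; real tools additionally handle negation, valence shifters and word weighting, which is precisely why different lexicons can disagree.

```python
# Minimal lexicon-based sentiment scorer (illustrative sketch only;
# the word lists below are hypothetical, not an actual research lexicon).
POSITIVE = {"reopen", "hope", "trust", "safe", "recovery"}
NEGATIVE = {"die", "fear", "death", "sad", "angry"}

def sentiment_score(tweet: str) -> float:
    """Net sentiment: (#positive tokens - #negative tokens) / #tokens."""
    tokens = [t.strip(".,!?'\"").lower() for t in tweet.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(sentiment_score("we are all going to die"))   # negative score
print(sentiment_score("time to reopen with hope"))  # positive score
```

Swapping in a different word list changes the scores for the same Tweets, which is the lexicon-variance limitation described above.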
\subsection{Reopening Risks}
Fear became a prominent public sentiment as awareness of the seriousness and devastating effects of the Coronavirus pandemic increased \cite{samuel2020covid}. This sentiment was not unjustified, given the diverse risks associated with the pandemic. COVID-19 has high transmissibility, with a reproduction number (R0) of 2.0--6.47 (average R0 of 3.58), which indicates that the disease can be transmitted to 2.0--6.47 people by an average infected person \cite{liu2020reproductive,yu2020modelling}. This transmissibility is higher than that of recent infectious diseases such as SARS (Severe Acute Respiratory Syndrome) and Ebola, which have reproduction numbers of 2--5 \cite{liu2020reproductive}. Hence, COVID-19 is highly contagious, especially within enclosed spaces such as trains, buses, restaurants, crowded factory floors, indoor markets and similar settings. However, it is also well known that COVID-19 mainly impacts the vulnerable segments of the population (commonly, those with preexisting conditions and the elderly with weakened immune systems), and this awareness has led to growing concerns and protests for reopening businesses and workplaces around the world. Assessing the safety of workplaces, and estimating their likelihood of transmitting contagious diseases rapidly through a variety of activities (e.g., customer and patient dealings, close-contact interaction with colleagues), is a public health priority \cite{edwards2016influenza, webster2019systematic, baker2020burden}. For example, healthcare workers (90\% of whom are exposed more than once a month and 75\% more than once a week) bear a higher risk of infection, and thus may constitute a sub-segment for sentiment analysis, which can aid mental health evaluation. In addition, several other occupations (e.g., police, firefighters, couriers, social workers, daycare teachers, and construction workers) have a high number of exposed workers in the United States. 
Self-isolation can significantly reduce ICU bed requirements and flatten the disease outbreak curve. With an R0 of 2.5 and without self-isolation, 3.8 times the available number of ICU beds would be required in the US to treat critically affected people \cite{moghadas2020projecting}. In contrast, self-isolation by about 20\% of infected persons could reduce ICU bed requirements by 48.4\%. With an R0 of 2, self-isolation can reduce ICU bed requirements by 73.5\%. Knowledge of infectious disease transmission in workplaces, social distancing and stay-at-home practices are critical safeguards against the rapid spread of infections \cite{edwards2016influenza}. Thus, for reopening workplaces and sustaining the economy, it is crucial to adopt appropriate protective measures (i.e., PPE, mandatory sick leave for influenza symptoms) alongside adequate workplace preparations (i.e., emergency preparedness, risk mitigation plans, personal hygiene) to reduce the risk and spread of COVID-19 \cite{dryhurst2020risk, moghadas2020projecting}. Sentiment analysis can help track and manage public sentiment, as well as local or group sentiments, subject to the availability of suitable and timely data, and thus contribute to risk mitigation.
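The cited ICU projections can be combined in a back-of-envelope way. The sketch below simply applies the reported multipliers; the 100-bed capacity is a hypothetical number chosen for illustration, not a figure from the cited studies.

```python
# Back-of-envelope illustration of the cited ICU projections.
# Multipliers are the reported values; the 100-bed capacity is hypothetical.
baseline_multiplier = 3.8    # beds needed vs. capacity at R0 = 2.5, no self-isolation
reduction_at_20pct = 0.484   # reported reduction with ~20% self-isolation (R0 = 2.5)
reduction_at_r0_2 = 0.735    # reported reduction with self-isolation at R0 = 2

capacity = 100                                  # hypothetical ICU beds available
demand = capacity * baseline_multiplier         # beds needed without self-isolation
demand_iso = demand * (1 - reduction_at_20pct)  # beds needed with ~20% self-isolation
print(demand, round(demand_iso, 1))
```

Even partial self-isolation roughly halves the projected peak demand in this simple scaling, which is the policy point the cited projections make.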
To prevent rapid transmission of COVID-19, most of the affected countries around the world implemented varying forms of Lockdown policies (e.g., quarantine, travel bans, social distancing), with significant economic consequences \cite{moser2020pandemic, zhang2020financial, yu2020modelling}. During this Lockdown period, people were forced to stay at home and many lost their jobs, leading to significant emotional upheavals. Two recent surveys demonstrated that numerous small businesses have shut down due to COVID-19 \cite{bartik2020small, metlife2020small}. Collecting data from 5,800 small businesses, Bartik et al. \cite{bartik2020small} found that about 43\% of them were temporarily closed and that, on average, they had reduced their employee counts by 40\% as compared to the beginning of the year. According to the US Chamber of Commerce, about 24\% of all small businesses are already closed, and another 40\% of those that have not closed are quite likely to close by June of 2020. Despite these alarming numbers, about 46\% of those surveyed believe that within a period of six to twelve months, the US will return to a state of normalcy \cite{metlife2020small}. About 25\% of all workers in the United States, employed in the information technology, administration, financial, and engineering sectors, can work from home \cite{baker2020cannot}. In contrast, about 75\% of workers employed in healthcare, industrial manufacturing, retail, food services and similar sectors are unable to work from home due to the nature of their work, which mandates physical presence. Consequently, a large portion of the workforce (108.4 M) is at risk of adverse health outcomes. Reopening the economy will exacerbate this situation with a higher potential for COVID-19 transmission. 
Not reopening the economy has the potential to create irreversible socioeconomic damage, which could subsequently lead to greater loss of life and more pain, along with diminished long-term capabilities to fight the Coronavirus pandemic and other crises, should they persist or arise. Sentiment analysis, and public sentiment trend based scenario analysis, can inform and enlighten decision makers, helping them make better decisions and thus mitigate risks in crisis, disaster and pandemic circumstances.
\subsection{Opportunities}
Technology-supported sentiment analysis presents numerous opportunities for value creation, by serving as a support mechanism for high quality decision making and for increasing awareness leading to risk mitigation. For researchers and academicians, this study presents a new sub-stream of research which provides a way to use sentiment analysis in crisis management scenario analysis. Extant research has demonstrated that information facets and categories can influence human thinking and performance, and hence can impact feelings. This presents an opportunity for future research to perform sentiment analysis with multiple information facets at varying levels to examine potential behavioral variations \cite{samuel2017effects}. This study also provides pandemic-specific public sentiment insights to researchers specifically interested in pursuing Coronavirus pandemic and COVID-19 studies. Academicians can expand the current study using additional sentiment analysis tools and customized crisis-relevant sentiment analysis lexicons. There is also a ready opportunity for researchers to gain rapid insights by repeating this study for future time periods, and thus help develop a sequence of reopening and recovery relevant public sentiment analysis studies. Such an approach would also be very useful for practitioners, who have a strong need to recognize market sentiment for a wide range of business decisions.
Practitioners can make use of this study in at least two ways: the first is a direct application of the positive public sentiment exploratory findings of this study, after additional validation and due diligence, to inform and improve organizational decisions; and the second is to utilize the logic of methods presented in this research for situation specific and localized sentiment trends based scenario analysis related to organizational decisions. For example, the four New Normal scenarios schema can be customized and adapted to business situations, where modified sentiment estimation methods can be used to gauge specific consumer segment sentiment and used to inform products (or services) design, manufacturing (or process) and marketing decisions. The [positive sentiment : negative sentiment] X [action 1 (now) : action 2 (later)] matrix, from the `Sentiment Scenario Analysis' section provides a conceptual scenario analysis framework which has high potential for scaling and adaptation.
\section{Conclusions and Future Work} \label{Sect5:Conclusion}
\begin{center}
\textit{``We shall draw from the heart of suffering itself\\
the means of inspiration and survival."} – Churchill \cite{Samuel8Principles}
\end{center}
Sentiment analysis is a valuable research mechanism to support insight formation for the reopening and recovery process. Sentiment analysis tools can be effectively used to gauge public sentiment trends from social media data, especially given the high levels of trust in social media posts among many networks \cite{shareef2020group}. This study discovered positive public sentiment trends for the early part of May 2020. Specifically, the study discovered high levels of trust sentiment and anticipation sentiment, mixed with relatively lower levels of fear and sadness. Positive sentiment counts dominated negative sentiment counts, though negative Tweets used more extreme language. While sentiment analysis provides insights into public feelings, it is only one part of a complex set of contributions that will be required to effectively move into the New Normal. Sentiment analysis based on the Tweets data used in this study appears to indicate public sentiment support for reopening. The message that ``it is time to reopen now" was prominent in the reopening Tweets data. Furthermore, we believe that this research contributes to the sentiment component of the recent call for cyberpsychology research, and well developed sentiment analysis tools and models can help explain and classify human behavior under pandemic and similar crisis conditions \cite{guitton2013developing, guitton2020cyberpsychology}. Positive public support sentiment will remain critical for successful reopening and recovery, and therefore additional initiatives tracking sentiment and evaluating public feelings effectively will be extremely useful. Sentiments can be volatile and localized, and hence technological and information systems initiatives with real-time sentiment tracking and population segmentation mechanisms will provide the most valuable public sentiment trends insights. 
With additional research and validation\footnote{This is a pre-print version of an under-review Journal article and some portions may be reduced or absent, including the statistical/quantitative analysis in Section 3}, the research strategy utilized in this paper can be easily replicated, with necessary variations, and applied to future data from multiple social media sources, to generate critical and timely insights that can help form an optimal New Normal.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Albert Einstein's Weak Equivalence Principle (WEP) is one of the main cornerstones of general relativity
as well as of many other gravitational theories. One statement of the WEP is that any
freely falling, uncharged test body will follow a trajectory independent of its internal composition
and structure. It implies that any two different species of massless
(or negligible rest mass) neutral particles, or two particles of same species with different energies,
if emitted simultaneously from the same source and traveling through the same gravitational fields,
should reach us at the same time \cite{will06,will14}.
By measuring how closely in time the two different particles arrive, one can test the accuracy of the WEP
through the Shapiro (gravitational) time delay effect \cite{shapiro64}. In practice, all metric
theories of gravity incorporating the WEP predict that all test particles must follow identical trajectories and
undergo the same Shapiro time delay. In other words, as long as the WEP is valid, all metric
theories predict $\gamma_{1}=\gamma_{2}\equiv\gamma$, where $\gamma$ is the parametrized post-Newtonian (PPN) parameter
($\gamma$ denotes how much space curvature is provided by unit rest mass of the objects along or near the path of the particles \cite{will06,will14})
and the subscripts represent two different particles. In this case, the WEP validity
can be characterized by limits on the differences of $\gamma$ value for different test particles (see, e.g.,
Refs.~\cite{krauss88,longo88,sivaram99,gao15,wei15,Tingay16,wei16}).
Any possible violation of the WEP would have far-reaching consequences for mankind's view of nature,
so it is important to extend the tests of its validity by making use of the panoply of new types
of astronomical signals being brought to the fore in the multi-messenger era.
So far, tests of the WEP through the relative differential variations of the $\gamma$ values
have been made using the emissions from supernova 1987A \cite{krauss88,longo88}, gamma-ray bursts
(GRBs) \cite{sivaram99,gao15}, fast radio bursts (FRBs) \cite{wei15,Tingay16}, and TeV blazars
\cite{wei16}. Particularly, assuming that the observed time delay between different
frequency photons from FRBs are caused mainly by the gravitational potential of the Milky Way,
Ref.~\cite{wei15} set the most stringent limits to date on $\gamma$ differences, yielding $\sim 10^{-8}$.
Even more encouragingly, the most recent studies \cite{nusser16+zhang16} show that the constraints
on the WEP accuracy from FRBs can be further improved by a few orders of magnitude when taking into
account the gravitational potential fluctuations of the large scale structure, rather than the
Milky Way's gravity.
In addition, the discovery of a triple system \cite{Ransom14}, made of a
millisecond pulsar, PSR J0337+1715, and two white dwarfs, has recently provided a new and interesting
test of the equivalence principle. The very large difference in the gravitational
binding energies of the pulsar and the white dwarfs makes this system very promising
for equivalence principle tests.
Although the tests on the WEP have reached high precision, most of
the tests rely on the relative arrival time delays of (exclusively) photons
with different energies.
The first and only WEP test with different species of particles was the measurement of the time delay
between the photons and neutrinos from supernova 1987A \cite{krauss88,longo88}. It was shown
that the $\gamma$ values of photons and neutrinos are equal to an accuracy of approximately 0.34\%.
New multi-messenger signals exploiting different emission channels are essential for testing the WEP
to a higher accuracy.
Recently, the Laser Interferometer Gravitational-wave Observatory
(LIGO) team reported the discovery of the first gravitational wave (GW) source, GW 150914
\cite{abbott16a}, opening a brand new channel for studying the Universe, which could lead to
breakthroughs in both fundamental physics and astrophysics. In fact, the next generation of
gravitational detectors, including the advanced LIGO, advanced VIRGO and KAGRA, appear poised to
detect a plethora of increasingly sophisticated GW signals in the very near
future \cite{abbott09,acernese08,kuroda10,acernese15,aasi15}. Phenomenologically, one may
treat the GWs with different frequencies as different gravitons to test the WEP. In extending the
constraints on $\Delta \gamma$, tests of the WEP using GW measurements become more robust against
various assumptions, since, e.g. GWs do not suffer absorption or scattering along their path,
in contrast to photons. In the following, we illustrate the progress that can be expected in testing
the WEP with the reported/future GW observations.
\section{Description of the Method}
The Shapiro time delay effect \cite{shapiro64} causes the time interval for particles to pass
through a given distance to be longer in the presence of a gravitational potential $U(r)$ by
\begin{equation}
\Delta t_{\rm gra}=-\frac{1+\gamma}{c^3}\int_{r_e}^{r_o}~U(r)dr\;,
\end{equation}
where $\gamma$ is a PPN parameter, and $r_{o}$ and $r_{e}$ correspond to the locations of the observer
and of the particle-emission source, respectively.
Assuming that the observed time delays $(\Delta t_{\rm obs})$ between correlated particles
from the same astronomical source are mainly caused by the gravitational potential of the Milky Way,
and adopting the Keplerian potential for the Milky Way, we have \cite{longo88,misner73}
\begin{equation}
\begin{split}
\Delta t_{\rm obs}>\Delta t_{\rm gra}=\Delta \gamma \frac{GM_{\rm MW}}{c^{3}} \times\qquad\qquad\qquad\qquad\qquad\\
\ln \left\{ \frac{ \left[d+\left(d^{2}-b^{2}\right)^{1/2}\right] \left[r_{G}+s_{\rm n}\left(r_{G}^{2}-b^{2}\right)^{1/2}\right] }{b^{2}} \right\}\;,
\end{split}
\label{eq:gammadiff}
\end{equation}
where $\Delta \gamma$ is the difference between the $\gamma$ values for different test particles,
$M_{\rm MW}\simeq6\times10^{11}M_{\odot}$ is the Milky Way mass \cite{mcmillan11,footnote},
$d$ represents the distance from the source to the center of the Milky Way (if the source is of
extra-galactic or cosmological origin, $d$ is approximated as the distance from the source to
Earth), $r_{G}\simeq8$ kpc is the distance from the Sun to the center of the Milky Way, $b$ denotes
the impact parameter of the particle paths relative to the Milky Way center, and $s_{\rm n}=\pm1$
is the sign of the correction of the source direction.
If the source is located along the direction of the Galactic center, $s_{\rm n}=+1$, whereas $s_{\rm n}=-1$
corresponds to a source located along the direction pointing away from the Galactic center. Note that the impact
parameter $b$ is on the order of the distance of the Sun from the Galactic center, i.e.,
$b\leq r_{G}$. With Equation \ref{eq:gammadiff}, one can constrain the WEP by putting a strict limit
on the differences of $\gamma$ value \cite{krauss88,longo88,sivaram99,gao15,wei15,Tingay16,wei16}.
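As a numerical illustration, Equation~(\ref{eq:gammadiff}) can be inverted to obtain the upper limit on $\Delta\gamma$ implied by a given observed delay. The sketch below assumes the parameter values quoted in the text ($M_{\rm MW}\simeq6\times10^{11}M_{\odot}$, $r_G\simeq8$ kpc); the example inputs (a 0.4 s delay from a source at $d=200$ Mpc with $b=0.5\,r_G$, $s_{\rm n}=+1$) are illustrative choices, not measured values.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m
MPC = 3.086e22     # megaparsec, m

def delta_gamma_limit(dt_obs, d, b, s_n, m_mw=6e11 * M_SUN, r_g=8 * KPC):
    """Upper limit on |gamma_1 - gamma_2| from Eq. (2), attributing the
    whole observed delay dt_obs to the Milky Way's Keplerian potential."""
    log_term = math.log(
        (d + math.sqrt(d**2 - b**2))
        * (r_g + s_n * math.sqrt(r_g**2 - b**2))
        / b**2
    )
    return dt_obs * C**3 / (G * m_mw * log_term)

# Illustrative: a 0.4 s GW/EM delay from a source at 200 Mpc, b = 0.5 r_G
print(delta_gamma_limit(0.4, 200 * MPC, 4 * KPC, +1))  # ~1e-8
```

Substituting a one-day delay ($\Delta t_{\rm obs}\sim86400$ s) in place of 0.4 s raises the bound to roughly the $10^{-3}$ level, showing how directly the limit scales with the observed delay.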
We note that although the method adopted in this work can provide severe constraints on the accuracy of the WEP, which is one of the important postulates of GR, it cannot be directly used to distinguish
between specific gravity theories, such as GR and its alternatives. Many precise methods have been
devised to test the accuracy of GR through the measurement of the absolute value of $\gamma$ based
on the fact that GR predicts $\gamma=1$ (see Ref. \cite{will14} for a recent review). However, it is worth pointing out that $\gamma=1$ is not a
sufficient condition to identify general relativity, since it is not the only theory that predicts
$\gamma=1$ \cite{will14}. Thus, further investigations would be essential for
developing more accurate tests of the WEP and for distinguishing between GR and other alternative
gravity theories.
\begin{figure}[h]
\epsfig{figure=f1.eps,width=3.5truein}
\vskip-0.1in
\caption{Expected limits on the differences of the $\gamma$ values between
the GW signals and the photons for various types of EM counterparts.
The vertical lines correspond to different characteristic times.}
\end{figure}
\section{WEP test using GW signals}
The process of compact binary coalescence (CBC; either neutron star (NS) binary, black hole
(BH) binary or NS-BH binary) provides the primary targets for the second generation of GW detectors,
such as the advanced LIGO/VIRGO \cite{abbott09,acernese08,kuroda10,acernese15,aasi15}. The first
reported GW detection, GW 150914, is a BH-BH merger with two BH masses $36^{+5}_{-4} \rm{M_{\odot}}$
and $29^{+4}_{-4} \rm{M_{\odot}}$, respectively \cite{abbott16a}. Of significant interest for
CBC GW detections is the fact that some
relevant fundamental physics postulates, including the WEP, may be constrained using gravitational radiation alone
\cite{will14,will98}. This could be done exploiting the fact that the frequency of the
gravitational radiation sweeps from low frequencies at the initial
moment of observation (in-spiral phase) to a higher frequency at the
final moment (coalescence phase). Note that the GW frequency eventually
saturates to a constant value in the vicinity of the light-ring.
The amplitude, however, decreases monotonically after
reaching its peak at the light-ring.
Any WEP violation will cause a distortion of the observed phasing of
the waves, and would result in a shorter (or longer) than expected
overall time of passage of a given number of cycles.
It is worth pointing out that there are many effects that can change
the lifespan of a waveform: spin corrections, eccentricity, spin
precession, etc. However, it is difficult to disentangle these effects from a WEP
violation. Our upper limits on the WEP accuracy are based
on very conservative estimates of the observed time delay
(i.e., the whole time delay is assumed to be caused by the WEP violation).
In fact, the inclusion of contributions from the neglected effects
in the observed waveform could improve the limits on WEP to some degree.
In this case, since no EM counterparts are required, all CBC GW detections would be relevant.
For instance, the signal of GW 150914 increases in frequency and amplitude in about 8 cycles
(over 0.2 s) from 35 to 150 Hz, where the amplitude reaches a maximum \cite{abbott16a}. Considering
the localization information of GW 150914, we could tighten the limit on the WEP to
$\Delta \gamma \sim 10^{-9}$.
More recently, the Fermi Gamma-Ray Burst Monitor (GBM) team reported that GBM observations at the time of GW150914 revealed the presence of a
weak transient source above 50 keV, 0.4 s after the GW event was detected, with a false alarm probability of 0.0022
\cite{connaughton16}. If this is indeed the EM counterpart of GW150914 (see possible interpretations in \cite{zhangb16+loeb16}), with the aforementioned method, we could
further extend the WEP test with GWs and photons, setting a severe limit on WEP to an accuracy of $10^{-8}$, five
orders of magnitude tighter than the results set by the photons and neutrinos from supernova 1987A.
Besides BH-BH mergers, GW signals from binary NSs and NS-BH mergers are also expected to be detected in the near future \cite{abadie10}, for which a variety of detectable electromagnetic (EM) counterparts have been widely discussed
\cite{eichler89,li98,metzger12}, including
the following representative cases: the prompt short GRB emission, the afterglow emission of the
on-beam ultra-relativistic outflows, and the macronova/kilonova emission of the sub-relativistic
r-process material ejected during the merger. For NS-NS mergers, if the merger product is a massive millisecond pulsar
instead of a BH, the detectable EM signatures from the system
become much richer and brighter (see Ref. \cite{zhang13+gao13} for details). Joint detections of GW/EM signals,
once achieved, could be used to give important constraints on the WEP.
Consider the case of a joint detection of GW/EM signals from a NS-NS or NS-BH coalescence event in
the advanced LIGO/VIRGO era. Since the sky and binary orientation averaged sensitivity of the advanced
LIGO/VIRGO network for CBC is of the order of $\sim100$ Mpc \cite{abbott09,acernese08,kuroda10,acernese15,aasi15}, here we assume the
distance from the GW source to the Earth to be $d=200$ Mpc. It is worth pointing out that the constraints
on the WEP are not greatly affected by the source distance uncertainty (see Ref.~\cite{wei15} for
more explanations). To account for the source direction uncertainty, and based on the fact that the
impact parameter $b\leq r_{G}$, here we present four extreme cases by assuming $b=0.001r_{G}$ and
$s_{\rm n}=+1$, $b=0.001r_{G}$ and $s_{\rm n}=-1$, $b=0.999r_{G}$ and $s_{\rm n}=+1$, and $b=0.999r_{G}$
and $s_{\rm n}=-1$, respectively. The real results should lie within the range circumscribed by these
extreme cases.
Regarding the EM counterpart of the GW detection, suppose we are lucky to detect all the promising
emission types, e.g. the short GRB prompt emission, the on-beam GRB afterglow emission and the
macronova emission. Recently, Ref.~\cite{li16} discussed the time lags between the GW signal and
all these EM counterparts in some detail, and suggested that the time delay $\Delta t_{\rm obs}$ is
expected to be of the order of $\sim$ 0.01--1 s (short GRB), 0.01--1 day (on-beam afterglow), or
1--10 days (macronova), respectively. With these expected time delays and with the location information
in hand, we would be able to set bounds on the WEP from Equation~(\ref{eq:gammadiff}). The expected
constraints on the differences of the $\gamma$ values are shown in Figure~1. It has been suggested
that the macronova emission may be the most frequently-detectable EM signal of the coalescence events
\cite{li98,metzger12}. If the macronova emission is detected at $\Delta t_{\rm obs}\sim1$ day after the merger,
a strict limit on the WEP will be $\Delta \gamma <10^{-3}$. One can see from this plot that much more
severe constraints would be achieved ($\sim$ $10^{-3}$--$10^{-5}$ or $10^{-8}$--$10^{-10}$) if the
EM counterpart is an on-beam afterglow or a short GRB. Note that the compact binary coalescence and
the EM counterpart do not occur at the same time, since $\Delta t_{\rm obs}$ has a contribution from
the intrinsic emission time lag ($\Delta t_{\rm lag}$) between the photons and the GW signals. Here
we take $\Delta t_{\rm lag}=0$ to give a conservative estimate of the WEP. More severe constraints
could be achieved with a better understanding of the nature of $\Delta t_{\rm lag}$ allowing one
to remove its contribution from $\Delta t_{\rm obs}$. On the other hand, it should be underlined
that these upper limits are based on very conservative estimates of the gravitational potential of the
Milky Way. If the gravitational potential fluctuations from the intervening large scale structures are
taken into consideration, our constraints would be further improved by orders of magnitude
\cite{nusser16+zhang16}.
\section{Summary and discussion}
In conclusion, we show that new WEP tests can be carried out with potentially much higher
accuracy in the GW era.
For all kinds of CBC GW detections, regardless of whether EM counterparts are detected or not, we can always
use GWs with different frequencies to give stringent constraints on the accuracy of the WEP. Taking GW 150914 as
an example, the GW signal sweeps in less than one second from lower frequencies to the higher frequency
at which the signal amplitude reaches its maximum (e.g., 35 Hz to 150 Hz), resulting in a tightening of the
limit on the WEP to approximately $10^{-9}$, which is as good as the current
most stringent results from FRBs \cite{wei15,Tingay16}.
Once EM counterparts of the GW signals are firmly detected, an interesting WEP test could be performed
by using the measured time delay between the GWs and any associated photons. Also taking GW 150914
as an example, if the claimed short GRB, GW150914-GBM, is indeed the EM counterpart of GW150914, a severe
limit on WEP could be set to an accuracy of $10^{-8}$, five orders of magnitude tighter than the results set
by the photons and neutrinos from supernova 1987A \cite{krauss88,longo88}.
Finally, considering the capabilities of the advanced LIGO/VIRGO network and the source direction
uncertainty, we found that for the expected GW detection from NS-NS/BH mergers, if the prompt short GRB
emission and/or its afterglow emission is detected, a stringent limit on the WEP could be set at the
level of $\Delta \gamma < (10^{-8}$--$10^{-10})$ (prompt) or $\sim$ $10^{-3}$--$10^{-5}$ (afterglow).
Due to the low detection rates of GRB-accompanied GW signals, the first positively identified
electromagnetic counterpart of a GW signal is very likely to be a macronova. If the macronova emission
is detected at $\Delta t_{\rm obs}\sim1$ day after the merger, a strict limit on the WEP will be
$\Delta \gamma <10^{-3}$.
In sum, the main result of this paper is to propose a method to test WEP,
which can be applied when future robust GW/EM associations become available.
For GW 150914, we have applied our method to the available data
(the GW data and the putative EM signal following the GW signal) and derived
stringent limits on the WEP that were not achievable by previous analyses.
There are astrophysical uncertainties in applying our method: for example, the
distance of a purely GW-detected source such as GW 150914; the astrophysical time lags between
EM and GW emission mentioned in Ref. \cite{li16}; the detection of more than one EM emission component
of a short GRB; etc. Such astrophysical uncertainties, however, will certainly diminish in time,
with improving EM instruments and observations, the addition of further GW detectors at
different Earth locations, etc.
\vskip 0.1in
\noindent{\bf Acknowledgements:}
We are grateful to the anonymous referees for insightful comments.
We also thank Dr. Xi-Long Fan, who cannot be a co-author
due to restrictions as a member of the LIGO collaboration, for extensive
discussions and his contributions to this manuscript.
This work is partially supported by the National Basic Research Program (``973'' Program)
of China (Grants 2014CB845800 and 2013CB834900), the National Natural Science
Foundation of China (grants Nos. 11322328, 11433009, 11543005, 11573014, and 11303009),
the Youth Innovation Promotion Association (2011231), the Strategic Priority Research Program
``The Emergence of Cosmological Structures'' (Grant No. XDB09000000) of the Chinese Academy of Sciences, the Natural Science
Foundation of Jiangsu Province (Grant No. BK20161096), and NASA NNX 13AH50G, 14AF85G and 15AK85G.
\vskip 0.2in
\section{Introduction}
The work function is a fundamental surface parameter of a material that determines how much energy is required to extract an electron to a field-free region outside the surface; lower work functions facilitate electron emission at lower temperatures. Work functions play a key role in technologies that require precise control of contact barriers such as printed and organic electronics.\cite{Zhou2012b,Lindell2006,Dadlani2014} Materials with low work functions are crucial for electron emission devices (THz sources\cite{Snapp2012,Barik2013} and fluorescent light bulbs\cite{Watanabe2011}), electron sources for scientific instruments,\cite{Voss2014,Trenary2012} and high-brightness photocathodes.\cite{Antoniuk2020} In particular, for thermionic energy converters (TECs),\cite{Lee2014, Lee2012, Schwede2013} the discovery of thermally stable, ultra-low work function materials (less than 1.5 eV) would allow thermionic conversion of heat ($>1500\;^\circ$C) directly into electricity with efficiencies exceeding 30\%. Materials with high work functions play a key role in engineering the electron tunneling barrier in electronics (for example in dynamic RAM applications\cite{Kim2016a} and for contacts in modern 2D-based transistors\cite{Chuang2014}) as well as selective contacts in solar cells.\cite{Schulz2016}\\
The most commonly used materials for low work function applications that are chemically and thermally stable are compounds like lanthanum hexaboride\cite{Pelletier1979,Kanitkar1976} and thoriated tungsten\cite{PhysRev.49.78,PhysRev.53.570,Sillero2010,Bergner2011} with a work function around 2.5 eV. For thermionic converters, even lower work functions are required, which are achieved by sub-monolayer coatings of alkali metals (most commonly cesium) on metal surfaces. The resulting work functions are much lower than the work function of either metal or coating individually. This effect is due to the partial transfer of electron charge from the adsorbate to the substrate and the resulting formation of surface dipoles that lower the vacuum energy level near the surface.\cite{Chou2012} Coatings using both cesium and oxygen are well known to achieve $\sim 1$ eV work functions in photocathode applications, for instance on III-V semiconductors or silver.\cite{Uebbing1970,James1971} Diamondoids\cite{Narasimha2015} and phosphorous-doped diamond thin films have shown similarly low work functions.\cite{Koeck2009} More recently, a work function of 1.01 eV has been achieved by electrostatically gating cesium/oxygen covered graphene,\cite{Yuan2015} which resulted in enhanced TEC efficiency.\cite{Yuan2017} The lowest experimentally measured work function has been obtained by inducing a surface photovoltage on Cs/O$_2$ coated Gallium Arsenide.\cite{Schindler2019} The lowest theoretically predicted steady-state value to date is 0.7--0.8 eV for potassium adsorbed on monolayers of MoTe$_2$ or WTe$_2$.\cite{Kim2017}\\
In recent years, data-driven approaches based on high-throughput \textit{ab-initio} calculations have emerged as a new paradigm to facilitate the search through vast chemical spaces for new materials with tuned properties or novel behavior. Due to the continued increase in computing power and improvements in theoretical methods, the accuracy of predicted material properties has reached a reliability comparable to experiments while greatly surpassing them in terms of speed and cost. The rapid increase in available computational data structured in open-source material databases such as Materials Project (MP),\cite{Jain2013a} AFLOW,\cite{Curtarolo2012} and NOMAD\cite{Draxl2018} has opened an avenue towards material discovery using data-mining and statistically driven machine learning approaches. However, most large material databases do not report surface properties such as the work function, as each bulk material typically has dozens of distinct low-index crystalline surfaces and terminations. Each unique surface must be generated and calculated separately, increasing the complexity and computational effort required. In the MP database this has only been done for $\sim 100$ polymorphs of elemental crystals.\cite{Tran2016} Based on several thousand newly predicted 2D materials\cite{Cheon2017} there are two other databases\cite{Haastrup2018,Choudhary2017} that report \textit{ab-initio} work functions. It is straightforward to calculate the work function of 2D materials as they have only one relevant surface. The JARVIS-DFT and C2DB databases contain work functions for about $600$ and $3000$ surfaces, respectively.\\
Some statistical analyses in the literature have shown that the electronegativity is linearly correlated with the work function both for elemental crystals and binary compounds.\cite{Tran2016, Yamamoto1974} Additionally, for elemental crystals an inverse correlation with the atomic radius has been pointed out. The work function of elemental crystals ranges between 2 and 6 eV (for cesium and selenium, respectively). A statistical analysis of about 30 binary compounds shows that the correlation with the electronegativity of the less electronegative atom is the strongest (better than the arithmetic or geometric mean of the individual electronegativities). Density functional theory (DFT) is a well-established approach (using a slab configuration) to calculate the work function, similar to the more simplistic jellium model.\cite{Lang1971} A phenomenological model has also been developed that is able to estimate the work function fairly accurately for metals and alkaline-metal-coated surfaces.\cite{Brodie2014} This phenomenological equation is a function of the atomic radius and the number of atomic sites per unit cell area. However, it relies on a single parameter (loosely related to the number of electrons that an atom can donate to the surface) that is not clearly defined for more complex surfaces and takes on nonphysical values in the case of alkaline coatings. Very recently, Hashimoto et al.\cite{Hashimoto2020a} attempted to screen for low and high work function materials using a Bayesian optimization approach. However, they assume the work function to be approximated solely as a bulk property, neglecting any surface contributions during screening. For the highest and lowest ``bulk work function'' candidates the actual surface contributions were then included, which revealed that most of their top candidate materials exhibit average work functions between 3 and 6 eV.
Unsurprisingly, among their top candidate materials, they have found that the (110) surface of elemental Cesium has a low work function of 2.0 eV and that the (111) surface of KEuO$_2$ has a relatively high work function of 8.4 eV.
The approximated bulk work function of some of the screened work function candidates differs by as much as 7 eV from the actual work function when including the surface contributions. This clearly shows that, while for simple structures (such as elemental metals) the work function can theoretically be predicted from bulk properties alone,\cite{Halas2010} it is important to consider surface contributions to qualitatively and quantitatively predict the work function of a material. The surface termination, atom adsorption (most commonly oxygen and hydrogen), contamination, and reconstructions can affect the surface dipole and hence the effective work function.\\
In this paper, we use high-throughput density functional theory to calculate the work function of 29,270 surfaces created from 2,492 bulk crystal structures (up to ternary compounds). The created database gives insight into work function trends observed across a large chemical space. Based on the database we develop a machine learning model with a low mean absolute test-error of 0.19 eV which is more than 4 times lower than the baseline (i.e., predicting every surface to have the database-average work function) and about 3 times better than state-of-the-art benchmarking machine learning models (automatminer and Coulomb matrix). The database and machine learning model enable us to identify several promising material surfaces with extremely low ($<1.5$ eV) and extremely high ($>8$ eV) work functions.
\section{Methods}
\begin{figure*}[tp]
\includegraphics[width=\textwidth]{Figures/workflow.pdf}
\caption{Workflow of the creation of the work function database and surrogate machine learning model. The illustration includes the steps for material selection, high-throughput DFT calculations, surface slab creation, and supervised machine learning predictions. The dashed block details the procedure of determining the unique terminations of all surfaces up to a Miller index of 1.
}%
\label{fig:workflow}
\end{figure*}
\subsection{High-throughput work function calculations}
The workflow of the work function database's creation is illustrated in Figure \ref{fig:workflow}. A total of 2,492 crystal structures were queried from the Materials Project (on 7/9/2020) using the REST framework.\cite{Jain2013a,Ong2015} Materials up to ternary composition with 5 or fewer atoms in the unit cell were considered, provided they have an energy above hull of less than 0.2 eV/atom and are tagged as experimental structures (i.e., there exists at least one ICSD entry that reports the corresponding material as experimentally synthesized). Materials containing an element that is radioactive, a noble gas, or from the actinoid group were excluded. Materials with experimental tags indicating high-pressure or low-temperature conditions, as well as low-dimensional materials, were excluded as well. The frequency with which each chemical species appears in the database (bulk compounds) is plotted as a heat-map on the periodic table in Figure S1.\\
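The selection rules above can be sketched as a simple filter over already-downloaded records. This is an illustrative re-implementation, not the production query (which goes through pymatgen's MPRester); the dictionary field names and the exclusion set below are assumptions for the example.

```python
# Hypothetical sketch of the bulk-material selection criteria described in
# the text, applied to pre-fetched Materials Project-style records.
# Field names ("elements", "nsites", "e_above_hull") are illustrative.

EXCLUDED = {
    "He", "Ne", "Ar", "Kr", "Xe", "Rn",                  # noble gases
    "Tc", "Pm", "Po", "At", "Fr", "Ra",                  # radioactive
    "Ac", "Th", "Pa", "U", "Np", "Pu", "Am", "Cm",       # actinoids
    "Bk", "Cf", "Es", "Fm", "Md", "No", "Lr",
}

def passes_filters(entry):
    """Up to ternary, <= 5 atoms per unit cell, energy above hull
    < 0.2 eV/atom, and no excluded elements."""
    return (len(entry["elements"]) <= 3
            and entry["nsites"] <= 5
            and entry["e_above_hull"] < 0.2
            and not (set(entry["elements"]) & EXCLUDED))

candidates = [
    {"elements": ["Li", "O"], "nsites": 4, "e_above_hull": 0.0},
    {"elements": ["U", "O"], "nsites": 3, "e_above_hull": 0.0},      # actinoid
    {"elements": ["Na", "Cl", "O", "H"], "nsites": 6, "e_above_hull": 0.0},
]
selected = [c for c in candidates if passes_filters(c)]  # keeps only Li-O
```

In the actual workflow the same predicates are expressed as query criteria against the Materials Project REST API rather than applied client-side.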
From this set of materials, surfaces up to a Miller index of 1 were generated using Pymatgen's surface module.\cite{Ong2013,Tran2016,Tran2019} Each surface orientation generally has more than one unique surface termination. The Pymatgen surface module has the option to generate slabs with different terminations determined by possible shifts in the $c$-direction. However, this may result in surfaces that are actually not unique, because two terminations with distinct $c$-positions might be equivalent when taking into account a shift in the $a$ or $b$ directions (or a rotation in the $a$-$b$ plane). To ensure that we generate all possible \textit{unique} terminations we developed an algorithm based on the local environment of surface atoms, as summarized in the dashed block in Figure \ref{fig:workflow}. First, a list of nearest neighbors (considering only atoms above the reference atom, i.e. with a larger $c$-component, up to a cutoff radius of 7 $\AA{}$) with their distances and chemical elements is compiled for each atom in the slab. Atoms with identical nearest neighbor lists are grouped together into local environment groups (LEGs). In the second step, for each atomic layer (i.e., atoms with the same $c$-component considering a tolerance of 0.05 $\AA{}$) the LEGs of all atoms in the layer are collected into a list (one list per layer of atoms). Thirdly, we check whether the slab exhibits a mirror plane parallel to the surface. Finally, the number of unique lists of LEGs $n_\mathrm{env}$ determines the number of unique terminations $n_\mathrm{term}$ as follows: $n_\mathrm{term}= n_\mathrm{env}$ if the slab has the mirror symmetry determined in step 3, and $n_\mathrm{term}=2 n_\mathrm{env}$ otherwise. This is due to the fact that the local environments are determined in the positive $c$-direction (only above the reference atom).
Hence, a termination A on one side of the slab with termination B underneath might not be equivalent to the A termination on the other side if there is no mirror symmetry (e.g., the terminations in the following case may not be equivalent: \textbf{AB}ABA\textbf{BA}, where AB$\neq$BA). To minimize redundancies, we therefore check whether the slab exhibits mirror symmetry in the $c$-direction (in the previous example, the slab has 2 unique terminations in case of a mirror symmetry and 4 unique terminations otherwise).\\
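The counting rule above reduces to a few lines once each atomic layer has been assigned an environment fingerprint. A minimal sketch (not the production code; layer labels here stand in for the hashed LEG lists):

```python
# Sketch of the unique-termination count described in the text: each atomic
# layer carries a label for its list of local environment groups (LEGs),
# and the slab's mirror symmetry decides whether top and bottom
# terminations coincide.

def count_unique_terminations(layer_envs, has_mirror_symmetry):
    """layer_envs: one hashable LEG-list label per atomic layer, ordered
    bottom to top. Returns n_term as defined in the text."""
    n_env = len(set(layer_envs))  # number of unique LEG lists
    return n_env if has_mirror_symmetry else 2 * n_env

# The ABABA example from the text: 2 unique terminations with mirror
# symmetry, 4 without (since AB != BA seen from the other side).
with_mirror = count_unique_terminations(["A", "B", "A", "B", "A"], True)
without_mirror = count_unique_terminations(["A", "B", "A", "B", "A"], False)
# with_mirror == 2, without_mirror == 4
```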
According to $n_\mathrm{term}$ we subtract the appropriate number of atomic layers to generate slabs with all unique terminations. Further, we minimize the number of slabs required for the DFT calculations by having two distinct terminations on either side of the slab, whenever possible. The initial slab thickness is minimized while still ensuring that after all necessary subtractions the final slab is at least 10 $\AA{}$ thick. Following this procedure, we create 24,334 slabs of which 23,603 converged during self-consistent field calculations with a total computational time of around 105,000 core-hours. The converged slabs returned a total of 29,270 unique surfaces and their work functions (where 8,131 surfaces had been removed due to being identified as duplicates during featurization).
\subsection{Density Functional Theory}
The work function is calculated by gradient-corrected DFT using the PBE exchange-correlation functional.\cite{PBE} Self-consistent, periodic, total energy calculations are performed using Quantum Espresso (v.6.4.1).\cite{QE} Ultrasoft Vanderbilt pseudopotentials\cite{Garrity2014} are used to describe core electron interactions and the Kohn-Sham one-electron valence states are expanded in a plane wave basis set with a kinetic energy cutoff of 400 eV. The electron density is expanded up to ten times the plane wave energy cutoff. An extra 30 unoccupied bands are added to improve convergence. All slabs generated by the high-throughput procedure described above have a minimum thickness of 10 $\AA{}$ and 15 $\AA{}$ of vacuum between periodic slab repetitions in the $c$-direction to preclude interactions between periodic images. Brillouin zone sampling is performed with a grid spacing of less than $0.05 \;\mathrm{\AA{}}^{-1}$ and finite-temperature Gaussian smearing ($\sigma = 0.1$ eV). A dipole correction is applied in the $c$-direction. The work function is determined as the difference between the electrostatic energy in the vacuum region and the Fermi energy. The PBE exchange-correlation functional has previously been shown to give reliable work functions for elemental crystals in agreement with experimental values with errors below 0.3 eV, which is comparable to the experimental precision.\cite{DeWaele2016} The DFT calculation inputs for Quantum Espresso are automatically generated with the atomic simulation environment (ASE)\cite{Hjorth_Larsen_2017} Python package and submitted to a high performance computing queuing system (SLURM) using job arrays. To estimate the convergence accuracy of the DFT-calculated work functions we reran $\sim 1\%$ of the database (randomly selected) with stricter convergence parameters (energy cutoff of 700 eV and Brillouin zone sampling with a grid spacing of $\leq 0.02 \;\mathrm{\AA{}}^{-1}$).
The resulting mean absolute error and root-mean square error determined are 0.03 and 0.04 eV, respectively.
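The extraction step (vacuum level minus Fermi energy) can be illustrated on a planar-averaged potential profile. This is a toy sketch with synthetic data; in the actual workflow the potential and Fermi energy come from the Quantum Espresso output, and the vacuum window is chosen where the dipole-corrected potential has flattened.

```python
# Sketch of the work function extraction described above: the planar-averaged
# electrostatic potential along c flattens in the vacuum region, and the
# work function is that vacuum level minus the Fermi energy.
import numpy as np

def work_function(potential_c, fermi_energy, vacuum_window):
    """potential_c: 1D planar-averaged potential (eV) on a grid along c.
    vacuum_window: slice of grid points lying in the vacuum region."""
    v_vac = float(np.mean(potential_c[vacuum_window]))
    return v_vac - fermi_energy

# Toy profile: a potential well (the slab) surrounded by flat vacuum at 5.0 eV.
z = np.linspace(0.0, 25.0, 501)
pot = np.where((z > 7.5) & (z < 17.5), -8.0, 5.0)
wf = work_function(pot, fermi_energy=0.8, vacuum_window=slice(0, 100))
# wf == 4.2 (eV), i.e. 5.0 eV vacuum level minus 0.8 eV Fermi energy
```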
\subsection{Supervised Learning}
The dataset is randomly split into training and test sets (90/10 split) and the hyperparameters are optimized implementing 10-fold cross-validation on the training set. Multivariate linear regression, random forest, and neural network models are set up with the scikit-learn package in Python. The custom featurization procedure is laid out in the results and discussion section. As mentioned above, this featurization procedure resulted in 8,131 surfaces with duplicate features which were removed from the dataset before training. For benchmarking purposes we used the automatminer testing suite\cite{Dunn2020} and a conventional Coulomb matrix (topmost 10 surface atoms as input, $\ell_2$-sorted, with a random forest model).\cite{Himanen2020} For automatminer we use the ``express'' setting and for comparison we used the bulk unit cell and the topmost 5 atomic layers of the surface slabs as inputs.
\section{Results and Discussion}
\begin{figure}
\includegraphics[width=0.45\textwidth]{Figures/WF_distr.pdf}
\caption{Distribution of the DFT-calculated work function database plotted as a stacked histogram. The outline corresponds to the overall distribution; the stacked, colored bars signify the number of surfaces based on elemental, binary, and ternary compounds. The average of the distribution is 4.22 eV with a standard deviation of 1.25 eV.
}%
\label{fig:WF_distr}
\end{figure}
\begin{table}
\begin{tabular}{ccccc}
\hline
&\textbf{Average}& \textbf{St.Dev.}& \textbf{Minimum} & \textbf{Maximum}\\
\hline
Elemental & 4.27 & 1.36 & 2.0 & 8.4\\
Binary & 4.17 & 1.19 & 1.0 & 11.2\\
Ternary & 4.26 & 1.29 & 0.9 & 11.4\\
\hline
All & 4.22 & 1.25 & 0.9 & 11.4\\
\hline
\end{tabular}
\caption{Detailed work function distribution metrics for elemental, binary, and ternary compounds. All values in eV.}
\label{tab:distr}
\end{table}
\subsection{Analysis of Work Function Database}
First, we analyze the work function database created by high-throughput DFT (29,128 surfaces based on 2,492 bulk crystals) in terms of its distribution and trends in the studied chemical space. The work function distribution of the database is plotted in Figure \ref{fig:WF_distr} and shows a near-Gaussian distribution with an extended tail towards higher work functions. The average of the entire distribution is at 4.22 eV with a standard deviation of 1.25 eV, ranging from a minimum to a maximum work function of 0.9 to 11.4 eV, respectively. The stacked bar-chart signifies which proportion of the surfaces within each bin stems from an elemental, binary, or ternary compound. Their distribution metrics are given in Table \ref{tab:distr}.\\
The observation that the distribution in work functions is near-Gaussian could indicate that the chemical space we chose was diverse enough to evenly sample work functions across possible values. The extended tail at the high work function end appears to be an artifact coming from ionically unrelaxed surfaces where a small, electronegative atom (e.g. oxygen, hydrogen) is cleaved at a large, unphysical distance. A similar artifact might affect the low work function tail, but there it appears to be less pronounced. This artifact can be mitigated by ionically relaxing the surface slabs, and we expect this would result in an overall slightly narrower distribution. Interestingly, the work function distributions of binary and ternary compounds (and to a certain extent also the elemental crystals) have similar averages, standard deviations, and ranges.\\
Trends in the work function based on which chemical species are present at the surface are shown in Figure \ref{fig:WF_trends}. The fraction of surfaces with a low work function ($<3$ eV, i.e., roughly one standard deviation below average) is especially high for surfaces with alkali, alkaline, or lanthanide species present in the topmost atomic layer. Conversely, the fraction of surfaces with a high work function ($>6$ eV, i.e., roughly one standard deviation above average) is especially high for surfaces with halogens or oxygen present in the topmost atomic layer, as well as carbon, nitrogen, sulfur, and selenium (cf. Figure \ref{fig:WF_trends}a). The total number of surfaces (rather than fractions) is shown in Figure S2.\\
The average work function is plotted as a heat-map based on the chemical species present in the topmost two atomic layers. The trends observed in Figure \ref{fig:WF_trends}a are also seen in the average work function trend in Figure \ref{fig:WF_trends}b. In addition, one can observe trends based on combinations of chemical species in the topmost and second atomic layers. For example, the work function average is larger for surfaces where oxygen is present in the first layer and hydrogen in the second layer. In contrast, the work function average is lower for surfaces with halogens present in either the first or second layer and nitrogen in the respective other layer. Further trends are plotted in Figures S3 (barchart of the average work function as a function of the chemical species present at the topmost layer) and S4 (heat-maps showing the percentage and total number of low and high work function surfaces as a function of chemical species present in the top two layers).\\
The trends described generally agree with the chemical intuition that surfaces terminated with electropositive atoms from the alkali, alkaline, and lanthanide groups give a low work function, whereas electronegative atoms from the non-metal groups cause increased work functions. However, it is interesting to note that while $\sim 40\%$ of surfaces that have an alkali/alkaline metal present in the topmost atomic layer have work functions below 3 eV, still $\sim 60\%$ have work functions above 3 eV, contrary to chemical intuition.
\begin{figure*}[tp]
\includegraphics[width=\textwidth]{Figures/WF_trends.pdf}
\caption{Work function trends observed in the database. \textbf{a} Fraction of material surfaces that have a work function below 3 eV (purple) and above 6 eV (orange) is shown depending on which type of chemical species is present at the topmost surface. The dashed lines indicate the average fraction across the entire database. \textbf{b} Heat-map of the average work function plotted as a function of chemical species present at the topmost layer (vertical axis) and second atomic layer (horizontal axis). The color bar displays work functions below and above average as blue and red, respectively. Categories with a population of less than 10 surfaces have been left blank.
}%
\label{fig:WF_trends}
\end{figure*}
\subsection{Machine Learning Model for Work Function Predictions}
\begin{figure*}[tp]
\includegraphics[width=\textwidth]{Figures/results_ML.pdf}
\caption{Comparison of machine learning model performances. \textbf{a} Mean absolute errors (MAEs) of training and test sets are given for this paper's machine learning models: Linear regression, neural network, and random forest implementing 15 physically motivated features. The benchmarking models (automatminer with bulk unit cells and with surface slab of topmost 5 atomic layers as inputs, and a Coulomb matrix of the topmost 10 surface atoms, $\ell_2$-sorted, with a random forest model) are shown in comparison. The baseline model (always guessing the average work function) and the DFT accuracy are indicated by a dashed line and green-shaded area, respectively. \textbf{b} 5-fold cross-validated root mean square error (RMSE) as a function of number of input features is shown. Features were selected by recursive feature elimination for linear regression and random forest model. The top 15 most predictive features were selected for the final models.
}%
\label{fig:ML}
\end{figure*}
The large database created by high-throughput DFT calculations forms the basis for a surrogate machine learning model that enables the prediction of the work function at a fraction of the computational cost. As a first step, we assess common models from the materials science machine learning community as a benchmark. For that, we employ the automatminer testing suite,\cite{Dunn2020} and a conventional Coulomb matrix (topmost 10 surface atoms as input, $\ell_2$-sorted, trained with a random forest model).\cite{Himanen2020} For automatminer we use the ``express'' setting and compare using the bulk unit cell and the topmost 5 atomic layers of the surface slabs as inputs. As a baseline model we predict the work function to be the average work function regardless of the surface. The automatminer model performs only marginally better than the baseline model when bulk structures are used as an input. When the surface slabs are used as inputs the performance increases and is comparable to the performance of the Coulomb matrix. The mean absolute errors (MAEs) are shown for the training and test sets in Figure \ref{fig:ML}a. The baseline MAE is 0.90 eV and the DFT accuracy is indicated by the green-shaded area between 0.03 and 0.3 eV, corresponding to the convergence error (see Methods) and the error between PBE-calculated and experimental work functions,\cite{DeWaele2016} respectively.\\
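The baseline model is simple enough to state in code. The sketch below scores "always predict the average" by MAE on synthetic, Gaussian-like data with the database's mean and standard deviation; for a Gaussian, the MAE about the mean approaches $\sigma\sqrt{2/\pi} \approx 1.0$ eV, of the same order as the 0.90 eV baseline reported for the actual (not perfectly Gaussian) database.

```python
# Illustration of the baseline model: predict every surface to have the
# database-average work function, and score that constant prediction by MAE.
import numpy as np

def baseline_mae(y):
    """MAE of predicting mean(y) for every sample."""
    return float(np.mean(np.abs(y - np.mean(y))))

rng = np.random.default_rng(0)
# Synthetic stand-in for the database: mean 4.22 eV, std 1.25 eV (see text).
y = rng.normal(4.22, 1.25, size=10_000)
mae = baseline_mae(y)  # close to 1.25 * sqrt(2/pi) ~ 1.0 eV
```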
It is not surprising that the model performance is poor when the bulk structure is used as an input, as the database contains multiple surfaces of different work functions for any given bulk structure. While the performance of the benchmarking models improves when the surface slab is used as the input instead, the MAEs are still large. This is likely because the models cannot distinguish between the top and bottom of the input slab (which in general are not symmetric) and the database contains all unique terminations. In general, if one termination (located at the top surface) is labeled with the calculated work function, the same termination exists in another input structure at the bottom surface (whereas the calculated work function always refers to the top surface).\\
We developed a custom featurization of the surface slabs by considering physically motivated features of the topmost three surface layers (atoms grouped into the layers within a tolerance in $c$-direction of 0.3 $\AA{}$, see effect of tolerance value on model performance in Figure S5). The considered atomic features are electronegativity $\chi$, inverse atomic radius $1/r$, first ionization energy $E_\mathrm{ion}$, and Mendeleev number $n_\mathrm{mend}$. Given that each layer may contain more than one atom-type, we consider the minimum, maximum, and average of each of these atomic features. This gives a total of 36 elemental features for the topmost 3 layers. Additionally, we add structural features: The packing fraction for each layer (number of atoms per unit cell area) $A_\mathrm{pack}^{-1}$ and the distances between atomic layers 1 and 2, $d_{1-2}$, and between layers 1 and 3, $d_{1-3}$. Out of this total 41 features the most significant features are selected with recursive feature elimination (RFE) using a random forest model, as plotted in Figure \ref{fig:ML}b. The best 6 features largely account for the model performance: $\left<\chi_1\right>$, $\mathrm{min}\left(\chi_1\right)$, $\mathrm{min}\left(1/r_1\right)$, $\mathrm{min}\left(E_\mathrm{ion,1}\right)$, $d_{1-2}$, and $A_\mathrm{pack,1}^{-1}$. For the final model we use the best 15 features, which are the 6 features mentioned above and $\left<\chi_2\right>$, $\left<1/r_1\right>$, $\mathrm{max}\left(E_\mathrm{ion,1}\right)$, $\left<E_\mathrm{ion,2}\right>$, $\left<n_\mathrm{mend,1}\right>$, $\mathrm{min}\left(n_\mathrm{mend,1}\right)$, $d_{1-3}$, $A_\mathrm{pack,2}^{-1}$, and $A_\mathrm{pack,3}^{-1}$.\\
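The featurization above can be sketched compactly: per layer, min/max/mean over four atomic properties (12 values per layer, 36 for three layers), plus three inverse packing areas and two interlayer distances, giving 41 features in total. The atomic-property values below are placeholders for illustration only; the real pipeline draws them from tabulated data.

```python
# Hedged sketch of the per-layer surface featurization described in the text.
# ATOMIC values are placeholder numbers, not a real property table.
import numpy as np

ATOMIC = {  # element -> (electronegativity, 1/radius, ionization E, Mendeleev #)
    "Cs": (0.79, 1 / 2.44, 3.89, 8.0),
    "O":  (3.44, 1 / 0.66, 13.62, 101.0),
}

def layer_features(elements):
    """min, max, mean of each atomic property over the atoms in one layer."""
    props = np.array([ATOMIC[e] for e in elements])          # (n_atoms, 4)
    return np.concatenate([props.min(0), props.max(0), props.mean(0)])  # 12

def surface_features(layers, cell_area, z_positions):
    """layers: 3 element lists (top to bottom); z_positions: layer c-coords."""
    elemental = np.concatenate([layer_features(l) for l in layers])  # 36
    packing = [len(l) / cell_area for l in layers]                   # 3
    d12 = z_positions[0] - z_positions[1]
    d13 = z_positions[0] - z_positions[2]
    return np.concatenate([elemental, packing, [d12, d13]])          # 41

x = surface_features([["Cs"], ["O", "Cs"], ["O"]], cell_area=15.0,
                     z_positions=[12.0, 10.2, 8.9])
# len(x) == 41, matching the feature count quoted in the text
```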
It is worth noting that the majority of features we selected were physics-motivated or based on correlations observed in the literature. The work function has been shown to linearly correlate with electronegativity for elemental crystals and binary compounds\cite{Tran2016, Yamamoto1974} and to inversely correlate with the atomic radius. Another work has proposed a phenomenological equation for the work function that depends on the atomic radius and the number of atomic sites per unit cell area at the surface.\cite{Brodie2014} We chose the ionization energy as a feature based on the physical intuition that low ionization energies lead to easy electron extraction. Lastly, the Mendeleev number has been shown to be a descriptive feature for many material property predictions.\cite{Cheon2018a,Villars2004} Interestingly, the most predictive features (top 6) are features from the topmost layer (including the layer distance $d_{1-2}$). This is in agreement with the fact that clear trends are observed considering only the topmost surface layer (cf. Figure \ref{fig:WF_trends}a). Also, we note that we tried adding further elemental features without a clear physics-based motivation (e.g. polarizability) as well as further modes (e.g. geometric mean, range, variance) -- however, this did not improve the model performance.\\
Using this featurization approach (with 15 features) outperforms all benchmarking models (automatminer, in comparison, uses 200 features) even when a linear regression model is chosen, as seen in Figure \ref{fig:ML}a. When a non-linear learning model is used (neural network or random forest model) the MAEs are significantly reduced. Our best model using random forests has a test-MAE of 0.19 eV, comparable to the accuracy of work function calculations employing DFT. This test performance is about 3 times better than the best benchmarking model and more than 4 times better than the baseline. Figure \ref{fig:RF} shows the predicted work function for both the training and test sets in comparison to the DFT calculated values. The kernel-density estimate distributions for both training and test sets are plotted for predicted and actual work functions showing that the predicted distribution is qualitatively faithful to the actual one. Notably, for the neural network and random forest models there is still a gap between training and test MAEs despite thorough hyperparameter tuning. This gap may be closed by adding more data in the future.\\
The prediction of the work function using this model is roughly $10^5$ times faster than DFT while having a MAE comparable to the accuracy of DFT. The database and the machine learning models are available at \href{https://github.com/peterschindler/WorkFunctionDatabase}{github.com/peterschindler/WorkFunctionDatabase}, enabling other researchers (including experimentalists) to use this model for work function predictions or to improve on our model performance.\\
For future work, this model may be extended to include surface relaxations during the high-throughput DFT calculations which could enable the prediction of the work function of a relaxed surface solely based on features derived from the unrelaxed structure. Another improvement would be to consider the surface energy to determine which surface termination (and orientation) is the most stable one, similar to work by Palizhati et al.\cite{Palizhati2019} Combining these two models could render an effective model for predicting the experimentally most relevant surface work function for each bulk compound.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Figures/RF.pdf}
\caption{Predicted work functions vs. DFT calculated work functions. The kernel-density estimate distributions for both training and test sets are plotted for predicted and actual work function distributions at the top and right, respectively.
}%
\label{fig:RF}
\end{figure}
\section{Conclusions}
In summary, we demonstrate a workflow to create a work function database from high-throughput DFT calculations that enables us to establish a surrogate machine learning model for rapid work function predictions. Our model has a MAE comparable to the accuracy of DFT while being $\sim10^5$ times faster. Using this approach facilitates the probing of a vast chemical space for novel material surfaces with exceptionally low or high work functions.
\section{Author Information}
\subsection{Corresponding author} *E-mail:\\[email protected]
\section{Acknowledgement}
P.S. gratefully acknowledges financial support from the Austrian Science Fund (FWF) under contract J3980-N27. Further, he would like to extend gratitude to Prof. Jan Torgersen for providing access to computing resources.
\section*{Supplementary Figures}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/periodic_table_heatmap.pdf}
\caption{\textbf{Heat-map of elements present in the bulk compounds used to create the work function database.} Ni, Li, and O are the most common elements, whereas Tc, Re, and Eu are the least common.}%
\label{fig:periodictable}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/WF_barchart_1layer_totals_nelements_multiple0-3.pdf}
\caption{\textbf{Total number of surfaces with low and high work functions plotted as a function of chemical species present at the topmost layer.} Analogous to Figure 3a of the main text.}%
\label{fig:1layertotal}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/WF_barchart_1layer_averages_nelements_multiple0-3.pdf}
\caption{\textbf{Work function averages plotted as a function of chemical species present at the topmost layer.} Error bars indicate the standard deviation.}%
\label{fig:1layeravg}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/double_heatmaps.pdf}
\caption{\textbf{Heat-maps of surfaces with low and high work functions based on chemical species present at topmost and second atomic layers.} Heat-maps plotted \textbf{a} as fractions, and \textbf{b} as total numbers.}%
\label{fig:heatmaps}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/results_tol.pdf}
\caption{\textbf{RMSE plotted as a function of the tolerance used for grouping atoms into layers.} 5-fold cross-validation is used for the RMSE and is shown for linear regression and random forest. The number of feature duplicates across the database at different tolerance values is plotted in green; these surfaces are removed from the dataset before training. The final tolerance value used for the models in the main paper is 0.3~\AA{}.}%
\label{fig:tol}
\end{figure}
\bibliographystyle{naturemag}
\section{Introduction}\label{sec:introduction}
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{figures/004.teaser.png}
\caption{Red blood cell model of diameter 8~{\textmu}m containing approx. 250 million hemoglobin molecules, membrane proteins, and lipids (approx. 335 billion atoms), with the lipid bilayer and membrane-bound proteins constructed and rendered with our view-guided two-level Nanomatrix approach. The rendering exploits hardware ray tracing, maintaining highly interactive framerates.}
\label{fig:teaser}
\end{figure*}
\input{sections/01.Introduction}
\section{Related Work}
\label{sec:Related-Work}
\input{sections/02.RelatedWork}
\section{Technical Overview}
\label{sec:Technical-Overview}
\input{sections/03.TechnicalOverview}
\section{Pre-processing Phase}
\label{sec:Pre-processing-Phase}
\input{sections/04.PreprocessingPhase}
\section{Cell Cache Management}
\label{sec:Cell-Cache-Management}
\input{sections/05.CellCacheManagement}
\section{Atomistic Models Construction}
\label{sec:Construction}
\input{sections/06.Construction}
\section{Scalable RTX-based Molecular Rendering}
\label{sec:Rendering}
\input{sections/07.Rendering}
\section{Results}
\input{sections/08.Discussion}
\section{Conclusions}
\input{sections/09.Conclusion}
\section*{Acknowledgment}
\input{sections/Acknowledgments}
\bibliographystyle{unsrt}
\subsection{Scene partitioning}
The entire scene is partitioned into several cells, which are filled with structures on demand during real-time rendering. These cells are organized in a grid that covers the entire scene. To create the grid, the first task is to define the axis-aligned bounding box (AABB) that tightly encloses the object distribution in the scene. These objects represent biological structures, such as biological cells, viral particles, bacteria, or organelles, which come in different sizes and shapes. We use 3D meshes to define the boundaries that separate the interior of these structures from the outside environment. As the scene may consist of several copies of the same structure, the scene skeleton file contains the information needed to instantiate the given meshes in the scene. This file contains a list of mesh instances; for each instance, the {\it mesh-id, patch-id, position}, and {\it rotation} are provided. The meshes and the scene skeleton are together used to estimate the scene's {grid} AABB.
The scene space is uniformly partitioned into a set of non-overlapping 3D {cells} of identical extents. The resulting regular grid is defined by the number of cells along every axis, which represents the grid dimensions $grid.dim$, and by the cell size $cell.size_{xyz}$. In this 3D grid system, a cell is indexed by column $(i)$, row $(j)$, and layer $(k)$, which represent the cell location in the grid. Thanks to the regularity of the grid, we do not build any kind of explicit data structure for the scene cells. All needed positional information can be obtained from the quantities defined above. We define our very first cell, indexed with $(0,0,0)$, with its center in the origin of the so-called {\it grid space}. An offset $grid.min$ can be used for calculating the position of the minimal cell corner $cell.min_{xyz}$:
\begin{equation}
grid.min_{xyz} = \frac{-1\times cell.size_{xyz} \times grid.dim_{ijk}}{2}
\end{equation}
\begin{equation}
cell.min_{xyz} = (cell_{ijk} \times cell.size_{xyz} )+grid.min_{xyz}
\end{equation}
In addition, we can easily access the cell $cell_{ijk}$ that corresponds to a particular 3D position $P_{xyz}$, or calculate the cell center $cell_{xyz}$, by the following two equations:
\begin{equation}\label{eq:3DIndex}
cell_{ijk} = \lfloor \frac{P_{xyz}-grid.min_{xyz}}{cell.size_{xyz}}\rfloor
\end{equation}
\begin{equation}
cell_{xyz} = cell.min_{xyz} +\frac{cell.size_{xyz}}{2}
\end{equation}
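These grid quantities translate directly into code. The following sketch (Python, with hypothetical example values) mirrors the four equations above, using per-axis tuples for positions and sizes:

```python
import math

def grid_min(cell_size, grid_dim):
    """Eq. (1): minimal corner of the grid, per axis."""
    return tuple(-s * d / 2 for s, d in zip(cell_size, grid_dim))

def cell_min(cell_ijk, cell_size, gmin):
    """Eq. (2): minimal corner of cell (i, j, k)."""
    return tuple(c * s + m for c, s, m in zip(cell_ijk, cell_size, gmin))

def cell_index(p, cell_size, gmin):
    """Eq. (3): cell (i, j, k) containing 3D position P."""
    return tuple(math.floor((x - m) / s) for x, m, s in zip(p, gmin, cell_size))

def cell_center(cell_ijk, cell_size, gmin):
    """Eq. (4): center of cell (i, j, k)."""
    return tuple(m + s / 2
                 for m, s in zip(cell_min(cell_ijk, cell_size, gmin), cell_size))

# Hypothetical 4x4x4 grid of cells with 10-unit extents.
size, dim = (10.0, 10.0, 10.0), (4, 4, 4)
gmin = grid_min(size, dim)                       # (-20.0, -20.0, -20.0)
print(cell_index((0.0, 0.0, 0.0), size, gmin))   # (2, 2, 2)
```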
\subsection{Tiles Preparation}
\label{sec:Tiles-Preparation}
\input{sections/04.4.TilesPreparation}
\subsection{Active cells}
Only a portion of the scene's geometric content is available in memory at any given time during the real-time rendering stage. We achieve this using uniform space partitioning into cells. Next, we need to identify which of the scene's cells should be visualized and stored in memory. We denote these as the {\it active cells}.
In our viewpoint-guided approach, the camera position is used to identify which cells should be active. Therefore, we first define the {\it central active cell}, which is the cell enclosing the camera. It can be obtained using \autoref{eq:3DIndex}, where $P_{xyz}$ is the camera viewpoint. After the central cell is identified, the neighboring cells are obtained. Thanks to the regularity of the grid, the adjacent cells of the central cell are easy to locate. In our implementation, we activate only the closest neighbors of the central cell along each axis $i$, $j$, and $k$, so the size of our {\it activation window} is ($3 \times 3 \times 3$), which gives us $27$ active cells $C$. However, based on the computational resources and the chosen cell size, this can be set to a larger number. Increasing the size of the activation window increases the rendering overhead, because each cell is drawn in a separate draw call, and thus a larger number of images needs to be rendered for the final scene compositing.
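The activation-window selection can be sketched as follows (Python; `radius=1` reproduces the $3\times3\times3$ window of 27 cells, and the clamping to the grid bounds is our assumption for cameras near the scene border):

```python
def active_cells(center_ijk, grid_dim, radius=1):
    """All cells within `radius` steps of the central cell along each axis,
    clamped to the grid bounds; radius=1 yields the 3x3x3 activation window."""
    ci, cj, ck = center_ijk
    cells = []
    for i in range(ci - radius, ci + radius + 1):
        for j in range(cj - radius, cj + radius + 1):
            for k in range(ck - radius, ck + radius + 1):
                if all(0 <= v < d for v, d in zip((i, j, k), grid_dim)):
                    cells.append((i, j, k))
    return cells

print(len(active_cells((5, 5, 5), (16, 16, 16))))  # 27 in the grid interior
print(len(active_cells((0, 0, 0), (16, 16, 16))))  # 8 at a grid corner
```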
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/005.textures.png}
\caption{Illustration of the textures obtained from the geometry tiles. From the left: texture generated by Wang tiling with highlighted 16 base tiles; Diffuse texture; Normal texture, Ambient-Occlusion texture. }
\label{fig:Textures}
\end{figure}
The size of the activation window specifies the number of cells that will be populated and rendered. Subsequently, it specifies the number of {\it cell cache buffers} that need to be prepared. The {\it cell cache} is a GPU storage buffer that is readily available for an active cell to be filled with populated instances. This memory buffer is pre-allocated to fit a relatively large number of instances. In the pre-processing step, we allocate 27 storage buffers that represent the cell cache. As our scene is continuously regenerated, we choose to allocate the cell caches in advance and just fill and clear them in real time, avoiding the overhead that comes from frequent memory allocation and deallocation.
The cache manager controls the process of reusing deactivated cells by updating the pointers between the cell cache buffers and active cells and triggers an event that leads to the regeneration of the scene inside newly active cells. \autoref{fig:indexing-based-algorithm} shows an example that illustrates this algorithm on a 2D grid. In this 2D example, the scene's grid contains $16$ cells and the size of the activation window is $3\times3$.
In \autoref{fig:indexing-based-algorithm} (a), the camera is located in cell 4, which becomes the central cell, and cells $0$, $1$, $4$, $5$, $8$, and $9$ are inside the activation window. These cells are the active ones and should be populated. Every active cell should occupy a cell cache, to be filled later with molecular instances in the construction stage. Once a cell becomes active, an unoccupied cell cache is reserved for it and a pointer is created to link them. The cell is then added to the list of cells submitted to the construction stage. If the camera moves to cell $9$, as shown in \autoref{fig:indexing-based-algorithm} (b), cells $6$, $10$, $12$, $13$, and $14$ enter the activation window and need to be constructed, while cells $0$ and $1$ leave the activation window; therefore, their pointers to the cell cache are deleted, which makes these cache buffers available for other cells. Cells $4$, $5$, $8$, and $9$ were populated previously, and they still point to the same cell cache buffers. These pointer operations are important to avoid copying between buffers. In other words, if a cache has been assigned to a cell, it remains reserved for that cell as long as the cell is inside the activation window.
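The pointer bookkeeping of the cache manager can be sketched as follows (Python; the buffer contents themselves are elided, since only the pointer updates matter here). The usage example replays the 2D scenario above: camera in cell 4, then moving to cell 9.

```python
class CacheManager:
    """Sketch of pointer-based cell-cache reuse. Caches are pre-allocated;
    cells entering the activation window grab a free cache, cells leaving
    release theirs -- no buffer copies are performed."""

    def __init__(self, num_caches):
        self.free = list(range(num_caches))   # indices of unoccupied caches
        self.cell_to_cache = {}               # active cell id -> cache index

    def update(self, active):
        """Returns cells newly activated (to be sent to construction)."""
        for cell in [c for c in self.cell_to_cache if c not in active]:
            self.free.append(self.cell_to_cache.pop(cell))  # release cache
        new_cells = [c for c in active if c not in self.cell_to_cache]
        for cell in new_cells:
            self.cell_to_cache[cell] = self.free.pop()      # reserve cache
        return new_cells

# The 2D example from the text: 4x4 grid, 3x3 activation window.
mgr = CacheManager(num_caches=9)
mgr.update({0, 1, 4, 5, 8, 9})                       # camera in cell 4
newly = mgr.update({4, 5, 6, 8, 9, 10, 12, 13, 14})  # camera moves to cell 9
print(sorted(newly))  # [6, 10, 12, 13, 14]; cells 4, 5, 8, 9 keep their caches
```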
\subsection{Input Preparation}
The pre-processing phase describes how the geometry tile set $L$ is prepared. The tile set consists of rectangular patches containing molecular instances, where every molecular instance is assigned its tile texture coordinate within the range $[0,1]$ with respect to the rectangle.
For performance reasons, many threads populate the structure in parallel in the population phase. A single population thread is responsible for processing a single molecular instance per triangle, in the case of rectangular tiling, and per cell, in the case of box tiling. We need to find a conservative estimate of the number of threads to initialize before executing the population process. For that, we identify the tile containing the maximal number of molecular instances and store this maximal number in $tileL_{max}$ or $tileB_{max}$, depending on whether it is a rectangular or a box tile. Then, in the rectangular tiling case, we identify how many tiles are needed for covering the largest triangle in the scene. The total number of threads allocated for tiling each triangle is then the product of the number of instances in $tileL_{max}$ and the number of tiles necessary for tiling the largest triangle in the scene. For box tiling, we allocate the number of threads analogously, i.e., the number of instances in the box tile $tileB_{max}$ is multiplied by the number of tiles necessary for filling one entire cell.
Another required input is a triangular mesh, where every triangle is associated with texture coordinates as well. We expect that the mesh already contains a texture parameterization and that both {\it mesh $uv$-texture coordinates} are within the $[0,1]$ range. The algorithm uses the texture coordinates for projecting molecular elements from mesh texture coordinates to tile texture coordinates and vice versa. If the mesh does not have a texture parameterization, a simple cube-map or spherical texture parameterization can be applied. However, depending on the shape of the mesh, it might be non-trivial to create fully seamless texturing; a seam in the texture parameterization would result in a visible seam in the rendering. The scene can contain multiple different meshes. For simplicity, in the rest of the paper, we refer to a single mesh. However, the method works in the same way for every mesh that has a texture parameterization.
Our goal is to populate rectangular tiles over a mesh so it forms a continuous surface. These tiles can be represented either by geometric molecular instances or by the corresponding image texture. Therefore, we need to establish a mapping from the mesh texture coordinate system into the tile texture coordinate system and vice versa. After that, we compute a size of the grid formed by tiles from tile set $L$ that cover the largest triangle of the mesh. This grid is later used to populate tiles on all mesh triangles.
The largest triangle $t_{big}$ within the set of triangles $T$ is identified. As all the tiles in $L$ are of the same size, the ratio between the size of $t_{big}$ and one representative tile $tile$ is computed. The ratio represents the number of tiles $(rep_u, rep_v)$ needed to cover the entire area of triangle $t_{big}$ with a sequence of tiles in its plane. Moreover, a mapping $tile_{uvsize} = (uvmax(t_{big}) - uvmin(t_{big})) / (rep_u, rep_v)$, which represents the size of the tile in the texture coordinate space associated with the mesh, is computed.
For the entire mesh, a tiles recipe $TR$ is created using the Wang tiling approach. The resulting size of the tiles recipe, which is in fact a 2D array, is computed as $(1 / tile_{uvsize})$. Afterwards, the array is filled with the indices of tiles from $L$ by the Wang tiling generator. This structure is prepared for later sampling to determine the tile at an arbitrary texture coordinate that belongs to the mesh. By dividing an $uv$ mesh texture coordinate by $tile_{uvsize}$, we get a two-dimensional index into $TR$. The other way around, multiplying the dimensions of $TR$ by $tile_{uvsize}$ gives the size of the mesh texture. Up to this point, the described computations are performed only once. The description of the iterative algorithm that populates a grid of active cells $C$ follows.
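The mapping between mesh texture coordinates and the tiles recipe can be sketched as follows (Python, with hypothetical $uv$ extents and repetition counts):

```python
import math

def tile_uvsize(uvmin, uvmax, rep):
    """Size of one tile in mesh texture coordinates, per axis
    (the mapping above, computed for the largest triangle t_big)."""
    return tuple((mx - mn) / r for mn, mx, r in zip(uvmin, uvmax, rep))

def recipe_index(uv, uvsize):
    """2D index into the tiles recipe TR for a mesh texture coordinate."""
    return tuple(math.floor(c / s) for c, s in zip(uv, uvsize))

# Hypothetical numbers: t_big spans uv (0,0)..(0.2,0.1), covered by 4x2 tiles.
size = tile_uvsize((0.0, 0.0), (0.2, 0.1), (4, 2))  # (0.05, 0.05)
print(recipe_index((0.61, 0.34), size))             # (12, 6)
```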
\label{sec:TriangleIntersetion}
We need to populate the rectangular tiles on the mesh. As the mesh can be of arbitrary size, we constrain the population by the grid of active cells $C$. The active cells can be in three configurations: outside of a mesh, inside of a mesh, or intersecting a mesh.
First, we initialize $c.cache$ for every $c \in C$ to void values. No information about any molecular instance or intersected triangles of a mesh is stored. The $c.closestTriangle$ variable, representing the closest triangle of a mesh to $c$, is set to null. This initialization is later repeated for new cells that are added to the set of active cells $C$.
The next step is to compute a set of intersected triangles $I_{c}$ (where $I_{c} \subset T$) for every $c \in C$. This information is stored for the particular cell in $c.cache$. Furthermore, for those active cells $c$ that are entirely inside or entirely outside the mesh, we need an in-out test. To achieve this, we store a reference to the triangle closest to such a cell $c$: $c.closestTriangle=closest(\forall I_{c})$. The in-out test is described in \autoref{solublespop}.
\subsection{Membrane Population}
To populate the membrane, we cover the mesh of a biological structure with a rectangular grid based on its texture parameterization. Each element of this grid represents a single tile from the tiles recipe $TR$ associated with the mesh.
Our approach is to align each triangle of the mesh with a plane of the $TR$ grid. Based on the texture coordinates we obtain a sub-grid that encloses the triangle and then re-project the sub-grid with its respective tiles onto the triangle in the 3D space. This way we obtain the 3D position of the starting corner of the sub-grid and also its orientation in world coordinates.
Within this sub-grid, we populate all molecular instances in parallel. In our case, the sub-grid always has the same dimensions $(rep_u, rep_v)$, calculated from the biggest triangle of the mesh, and we use it for smaller triangles as well. The sub-grid completely covers the triangle area, but tiles can also lie outside the triangle $t$ or outside the cell $c$. Overall, we run $tileL_{max}\times rep_u\times rep_v$ threads for the sub-grid. Each thread is associated with a particular molecular instance $m$ that belongs to a certain $tile$ and is stored in a linear buffer. The few remaining threads that exceed the number of molecular instances for a particular tile are discontinued. The molecular instance $m$ is associated with a tile $l$ that is indexed by two indices in the 2D array of the sub-grid. To correctly define the 3D position of the molecular instance $m.pos$, we need to calculate it as a position within its tile $l$ and add the 3D position of the starting corner of $l$. The 3D position of the starting corner of $l$ can be calculated from the indices of $l$ within the sub-grid and the tile extents in world coordinates in both dimensions. The calculation of the position is analogous to indexing in multidimensional arrays. The only remaining step is to determine the 3D position of the starting corner of the sub-grid, $origin(tile_{ref})$ (as shown in \autoref{fig:Padding}). We obtain this 3D position from the known world-space 3D position corresponding to $t_{uvmin}$ and the known world-space offset $tile_{trans}$. Afterwards, we crop all the instances that lie outside of the triangle (using the barycentric coordinates of the instance) or outside of the cell $c$.
If the position $pos$ passes the criteria, the $atomicCounter$ is increased and a new molecular instance $m$ is recorded in $c.cache[atomicCounter] = m$. Its type is set according to $m.type = m_{ref}.type$ and the position is set to $m.pos = pos$. The rotation is stored in an analogous way. The only difference is that the rotation has to be adjusted by a rotation that maps the $y$-axis onto the normal vector of the triangle. The $y$-axis is used because all the tiles were generated with the default orientation facing the $y$-axis.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{figures/009.padding.png}
\caption{Projecting a mesh triangle to the tiles recipe. The position of $t_{uvmin}$ determines the reference tile $tile_{ref}$ from the tiles recipe $TR$. Based on the size of the triangle, the amount of tiles ($rep_u$, $rep_v$) needed to cover the triangle is estimated. The offset $tile_{trans}$ in the mesh texture coordinates refers to the vector from $t_{uvmin}$ to the origin of the reference tile $origin(tile_{ref})$.
}
\label{fig:Padding}
\end{figure}
\subsection{Solubles Population}
\label{solublespop}
The previous section discussed the membrane population. The next step is to populate the internal parts of the biological structures with molecular instances, not the external space on the other side of the boundary. Similar to the membrane population, this approach is based on tiles. However, instead of the rectangle-based tile set $L$, the box-based tile set $B$ (see \autoref{fig:Patches}) is used. In this case, the population does not rely on the texture coordinates of the mesh. Moreover, no Wang tiling approach is used for the $B$ tile set: these tiles are not visible from outside the structure, and when immersed inside, it is very unlikely that any seams would be noticed in such a crowded environment. Therefore, the seamless constraint is not applied in the box-tiling case. In principle, the same Wang tiling concept as previously explained can be extended to 3D and used for $B$ tiles. In our implementation, we generate each box tile $b \in B$ with the same size $size(b)$, limited to the cell size $c.size$.
The population is done for every cell $c$ independently. Firstly, as previously mentioned, the set of intersected triangles $I_c$ is computed for every $c \in C$. Moreover, a list of intersected meshes $I_{meshes}$, specifying to which meshes the triangles $I_c$ belong, is created. Afterwards, $c.closestTriangle$ (see \autoref{sec:TriangleIntersetion}) is determined.
Every mesh is associated with a box-tile $b$. The box tiles $b \in B$ are tiled inside the cells to fill the internal space of the mesh.
Analogously to the membrane population, we use the highest number of elements within a tile, $tileB_{max}$, together with the number of box tiles needed to fill the cell $c$, $(rep_x, rep_y, rep_z)$, calculated as $rep_{xyz} = ceil(size(c) / size(b))$. The number of threads per $c$ is calculated as the product $tileB_{max}\times rep_x \times rep_y \times rep_z$. Threads are associated with molecular instances $m$, which are stored in a linear buffer of the box tile $b$. The few remaining threads are again discontinued. For every instance in a box tile, $m=b[i]$, its relative 3D position inside the box is computed. To calculate the absolute position, we need the world-space position of the starting corner of the box tile $b$. This position is calculated from the starting corner of the cell $c$, the three indices $x,y,z$ that refer to the respective box tile $b$, and the world-space size of the box tile $size(b)$. Once the 3D position $pos$ of the molecular instance $m$ is calculated, it is tested whether it lies inside the cell $c$. Moreover, if the cell is intersected by triangles $I_c$, the algorithm decides in which half-space with respect to $I_c$ the position $pos$ lies. This orientation is determined based on the normal vector of the triangle mesh: whether the normal points toward $pos$ or away from it, with respect to the triangle center. If $pos$ lies outside the biological structure, it is rejected and the computation stops. Otherwise, a new molecular instance $m$ is created based on the information from $m_{ref}$ and its position is set to $m.pos=pos$. For the case when there is no intersection with any mesh in any of the cells $c \in C$, the closest triangle information $c.closestTriangle$ is used as an indicator, and the triangle normal of the closest triangle is again used to determine whether the entire cell cache is inside or outside of the biological structure defined by the mesh. In case it is marked as outside, no population is performed.
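The half-space decision reduces to the sign of a dot product between the triangle normal and the vector from the triangle center to the candidate position. A sketch (Python; the `inside` convention assumes outward-pointing normals):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def is_inside(pos, tri_center, tri_normal):
    """True if pos lies in the inner half-space of an outward-facing
    triangle, i.e. the normal points away from pos."""
    return dot(tri_normal, sub(pos, tri_center)) < 0.0

# Hypothetical boundary triangle centered at the origin, outward normal +z.
print(is_inside((0.0, 0.0, -1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # True
print(is_inside((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # False
```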
\subsection{Acceleration Structures}
The acceleration structure {\it (AS)} is a core component of every efficient ray tracing algorithm. To accelerate ray tracing on modern GPUs, this component is implemented in hardware. The NVIDIA GPU hardware implementation exposes only two levels of acceleration structure to the user. The {\it bottom-level AS (BLAS)} defines the geometric description of a molecular model (i.e., the position and radius of its atoms), while the {\it top-level AS (TLAS)} consists of instances that reference one of the BLASes~\cite{RayTracingGems}. Each instance is associated with a transformation matrix, as well as the molecular type of the instance, used to fetch the corresponding color value. This two-level hierarchy allows us to populate multiple instances of an object while storing its geometry only once in GPU memory.
In our rendering algorithm, we are using two representations of acceleration structures: the {\it cellular AS}, which defines the skeleton of the scene and contains all the mesh instances that define the shape, size, and position of the biological structures, and {\it atomistic AS} which contains the atomistic description of the active cells. In {cellular AS} we create a BLAS for every mesh, while in {atomistic AS} we create a BLAS for every molecular model and then we instantiate them within the scene (see \autoref{fig:SceneAS}).
Representing the {atomistic AS} as a single TLAS would require rebuilding it from scratch whenever the active cells change. The RTX acceleration structure allows updating a TLAS, which is cheaper than rebuilding it; however, an update can only modify instance information, e.g., the transformation matrix. If a new instance needs to be added to the scene, the TLAS has to be rebuilt. To avoid that, our {atomistic AS} contains multiple TLASes, one TLAS per active cell. The active cell's TLAS is generated based on the contents of its cache. Once a cell becomes active, its TLAS is built and does not change until that cell becomes inactive.
For each AS, hardware acceleration requires us to provide the type of ray intersection test needed by the traversal program. The selection should be based on the lowest geometric representation of the BLAS. In the {\it cellular AS}, the meshes are defined as triangles; therefore, we use the hardware triangle-ray intersection test built into the GPU. In the {\it atomistic AS}, the molecular models are defined as atoms/spheres; therefore, we use a custom-implemented sphere-ray intersection test.
\subsection{Parallel Rendering}
The scene is rendered as a combination of atomistic and cellular rendering. Details closer to the camera are rendered at the atomistic resolution, while objects further away from the camera are rendered as textures.
To prevent the sudden popping of atomistic structures, we implement a smooth transition from texture details into atomistic details and vice-versa using alpha blending.
In molecular detail rendering, the scene is rendered in two passes. In the first pass, the active cells are rendered separately into their respective frame buffer objects (FBOs). This pass takes advantage of having multiple TLASes in the {atomistic AS} to parallelize their rendering. We use the {\it sort-last} parallel rendering scheme~\cite{Molnar-1994-A-sorting-classification-of-parallel-rendering}, which renders the {atomistic AS} TLASes in parallel, resulting in a very high data rate as the renderers operate independently. However, instead of parallelizing the rendering across multiple GPUs, we use one GPU with a compute shader and NVIDIA's \qcrFont{GLSL\_EXT\_ray\_query} extension to parallelize the rendering tasks between threads. The ray query extension allows us to invoke ray tracing queries from the compute shader. This extension is an alternative to the ray tracing pipeline, but no separate dynamic shader or shader binding table is needed~\cite{RayQueries}.
In the first rendering pass, the atomistic AS TLASes are traced in parallel, one thread per pixel per active cell. In each thread, once the closest hit is found, its information (e.g., depth, instance\_id, atom\_id) is stored in the full-screen image buffer of the thread's active cell. Otherwise, the value ($-1$) is stored, meaning the ray did not hit any nanoscale structure (see \autoref{fig:rendering}).
In the second pass, the resulting FBOs are composited based on the depth values to form the final rendered image. If there is at least one hit among the 27 images, the {\it instance\_id} and {\it atom\_id} of the closest hit are used to get the molecular color. In addition, the shading is computed using the {\it Phong illumination model} as well as {\it ray-traced ambient occlusion}. If there is no hit (no nanoscale structure information is found), the cellular structure information is provided using an image-based approach.
\subsection{Image-based Tiling}
Image-based impostors are usually used to avoid rendering objects that are far away from the viewpoint by replacing the geometry of these objects with a painted texture~\cite{aliaga-1999-Automatic-image-placement-to-provide-guaranteed-frame-rate,Aliaga-1999-MMR,schaufler-1996-Three-Dimensional-Image-Cache}.
In the tile preparation phase (see \autoref{sec:Tiles-Preparation}), GW-tiles have been created and the corresponding texture map was synthesized. The key idea is to use both of these level-of-detail representations while rendering the scene. When the camera is close to a biological structure, the GW-tile is used. Once the camera zooms out, which causes the atomistic detail to disappear, the corresponding part of the texture map is rendered in the very same place. \autoref{sec:Tiles-Preparation} also presented the tiles recipe $TR$. The tiles recipe forms a virtual map of tiles that covers the whole $uv$-texture space associated with the mesh.
The texture map is sampled while rendering a cellular mesh as follows. From the texture coordinate $uv$ of a fragment, the respective $tileId$ is obtained from the tiles recipe. Moreover, the relative position $rel_{uv}$ inside this tile is computed. Based on the $tileId$, its starting position $tile_{origin}$ in the texture map is obtained. The resulting color is fetched at the position $tile_{origin} + rel_{uv}$.
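The texture-map lookup can be sketched as follows (Python; `tile_origins`, mapping a recipe index to a tile's starting position in the texture map, is a hypothetical stand-in for the synthesized atlas layout):

```python
def sample_texture_uv(uv, uvsize, tile_origins):
    """Map a fragment's mesh uv to a lookup position in the texture map:
    pick the tile via the tiles recipe, then offset by the relative
    position rel_uv inside that tile."""
    i = int(uv[0] // uvsize[0])          # recipe index, u axis
    j = int(uv[1] // uvsize[1])          # recipe index, v axis
    rel_u = uv[0] - i * uvsize[0]        # relative position inside tile
    rel_v = uv[1] - j * uvsize[1]
    origin = tile_origins[(i, j)]        # tile's start in the texture map
    return (origin[0] + rel_u, origin[1] + rel_v)

# Hypothetical 4x4 recipe of 0.25-wide tiles; one tile's atlas origin given.
origins = {(2, 1): (0.5, 0.0)}
print(sample_texture_uv((0.6, 0.3), (0.25, 0.25), origins))  # ~(0.6, 0.05)
```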
Besides the diffuse color, the texture map contains normal and ambient-occlusion buffers, which are used to add geometric detail to the shaded surface.
To avoid a sharp transition from the geometric to the image-based representation, we apply alpha blending to the instances located at the far border of the neighboring cells, which combines the atomistic geometry color with the image-based cellular color.
\section{Discussion}
We implement object-space AO (OSAO) to convey the shape and depth of molecules by tracing random AO-rays against the TLAS of the active cell to which the hit primitive belongs. The AO algorithm is implemented as described in the NVIDIA Vulkan API tutorials~\cite{NVIDIA-Ambient-Occlusion}. Due to our tiling approach, the AO is inaccurate for primitives located on the borders of active cells. To estimate the shading value correctly, the TLAS of the adjacent cell would need to be traced as well. This is one of the drawbacks of the sort-last scheme of parallel rendering~\cite{Dietrich-2007-Massive-Model-Rendering-Techniques}. To overcome this issue, a test has been added to the AO computation: if the hit atom is located on the cell border, the AO-ray traverses the AS of all active cells that intersect the hit atom.
Our method is scalable; however, its parameters should be adjusted based on the available computational resources. As the cell size increases, more geometric information is present, which enriches the scene with detail. However, it also increases the computational complexity and the memory footprint. Our method allocates a part of the dedicated GPU memory for the cells' cache buffers. Clearly, increasing the cache buffer size increases the allocated portion of the memory. On the other hand, increasing the size of the activation window increases the number of cell cache buffers. In addition, the rendering overhead grows with the size of the activation window, because it requires more rendering threads in the first rendering pass and more images to be composited in the second.
Our construction algorithm is meant for explanatory visualization of extremely large cellular mesoscale scenes that can be explored down to atomistic detail. The tiling strategy that we employ may be criticised for its repetitiveness and the resulting questions about the plausibility of the model. We want to emphasize that the explanatory visualization scenario allows for a certain algorithmic flexibility that might or might not be acceptable within scientific discovery workflows.
\section{Introduction}
\label{intro}
In this paper we investigate the geometry of polycrystals and its implications for microstructure morphology within the nonlinear elasticity model of martensitic phase transformations \cite{j32,j40}. The rough idea is that the microstructure is heavily influenced by conditions of compatibility at grain boundaries resulting from continuity of the deformation.
However, in order to express this precisely, it is first of all necessary to give a careful mathematical description of the assumed grain geometry, something that is not often done even in mathematical treatments (a rare exception being \cite{taylor99}). In particular, it is useful to be able to articulate the intuitively obvious fact that in the neighbourhood of most points of an interior grain boundary only two grains are present, because it is at such points that it is easiest to apply compatibility conditions.
A second issue is then to develop useful forms of the compatibility conditions at such points, expressed in terms of deformation gradients, which on the one hand do not make unjustified assumptions about the microstructure morphology, and on the other hand can be exploited to draw conclusions about that morphology.
The plan of the paper is as follows. In Section \ref{polycrystals} we give a precise description of grain geometry, defining interior and boundary grains, and the set of triple points. We then discuss the case of convex grains, showing that interior grains form convex polyhedra and that in 3D the set of triple points is a finite union of closed line segments. For possibly nonconvex grains we then show under weak conditions on the grain geometry that in 2D a polycrystal with $N$ grains can have at most $2(N-2)$ triple points, while in arbitrary dimensions the set of triple points is small.
In Section \ref{micro} we address some examples in which compatibility at grain boundaries leads to restrictions on possible microstructures. First we show that, for a cubic-to-tetragonal transformation, a macroscopically homogeneous zero-energy microstructure matching a pure dilatation on the boundary must involve more than four values of the deformation gradient. We discuss the reasons why nevertheless second-order laminates, involving to a good approximation just four gradients in a single grain, are observed in materials, such as the ceramic ${\rm BaTiO}_3$ and RuNb alloys, which undergo cubic-to-tetragonal transformations. Then we consider the situation of a bicrystal with special geometry formed of a material undergoing a phase transformation with just two energy wells (such as cubic-to-orthorhombic), and without further assumptions on the microstructure give conditions under which a zero-energy microstructure must be complex, i.e. cannot be a pure variant in either grain; this analysis uses a generalization of the Hadamard jump condition developed in \cite{u5}.
Finally in Section \ref{conclusion} we draw some conclusions and give some perspectives on possible future developments.
\section{Geometry of polycrystals}
\label{polycrystals}By a {\it domain} in $n$-dimensional Euclidean space $\R^n$, $n\geq 2$, we mean an open and connected subset of $\R^n$. (For the applications below $n=2$ or $3$.) If $E\subset\R^n$ then $\overline E$ denotes the closure of $E$, $\partial E$ the boundary of $E$, and ${\rm int}\,E$ the interior of $E$.
We consider a polycrystal which in a reference configuration occupies the bounded domain $\om\subset\R^n$. We suppose that the polycrystal is composed of a finite number $N\geq 1$ of disjoint grains $\om_j, 1\leq j\leq N$, where each $\om_j$ is a bounded domain, so that
\begin{equation}
\label{polyc}\om={\rm int}\,\bigcup_{j=1}^N\overline \om_j.
\end{equation}
In general (see Theorem \ref{convexgrains} below) we cannot assume that the boundaries of the $\om_j$ are smooth. We will make various different assumptions concerning this below, but we always assume the minimal requirement that each $\om_j$ is a {\it regular} open set, that is $\om_j={\rm int}\,\overline\om_j$. This avoids pathologies such as a grain consisting of an open ball with a single point at its centre removed. We can divide the grains into {\it interior grains} for which $\partial\om_j\subset\bigcup _{k\neq j}\partial\om_k$, and {\it boundary grains}, for which $\partial\om_j\setminus \bigcup _{k\neq j}\partial\om_k$ is nonempty. Note that an interior grain can have points of its boundary lying in $\partial\om$ (see Fig.~\ref{grainspicture}). We denote by $D=\bigcup_{j=1}^N\partial\om_j$ the union of the grain boundaries, and by $$T=\bigcup_{1\leq i_1<i_2<i_3\leq N}\partial\om_{i_1}\cap \partial\om_{i_2}\cap\partial\om_{i_3}$$
the set of {\it triple points}, i.e. points which belong to the boundaries of three or more grains.
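As a toy illustration of this definition (our own sketch, not from the paper), one can detect $T$ numerically for a polycrystal of four closed unit squares arranged in a $2\times 2$ grid: a sampled point is a triple point when it lies on the boundary of at least three grains.

```python
from itertools import product

# grains: four closed unit squares [i, i+1] x [j, j+1] in a 2x2 grid
grains = [(i, j) for i, j in product(range(2), repeat=2)]

def on_boundary(p, g):
    # p is on the boundary of square g if it is in the closure but not the interior
    (x, y), (i, j) = p, g
    inside = i <= x <= i + 1 and j <= y <= j + 1
    interior = i < x < i + 1 and j < y < j + 1
    return inside and not interior

# sample the half-integer lattice covering the polycrystal
points = {(x * 0.5, y * 0.5) for x in range(5) for y in range(5)}
triple = {p for p in points
          if sum(on_boundary(p, g) for g in grains) >= 3}
```

Only the centre $(1,1)$, which lies on all four boundaries, qualifies; this is consistent with the 2D bound $m\le 2(N-2)$ proved below.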
\begin{figure}[hbt]
\centerline{\includegraphics[width=3.83in,height=2.15in,keepaspectratio]{polycrystal}}
\caption{Schematic polycrystal grain structure in 2D, with boundary grains shaded. The boundaries of the two interior grains $A$ and $B$ have points in common with the boundary $\partial\om$ of the polycrystal. }\label{grainspicture}
\end{figure}
\subsection{Convex grains}
\label{convex}
We recall that a set $E\subset \R^n$ is said to be {\it convex} if the straight line segment joining any two points $x_1, x_2\in E$ lies in $E$, i.e. $\lambda x_1+(1-\lambda)x_2\in E$ for all $\lambda\in [0,1]$. An {\it open half-space} is a subset $H$ of $\R^n$ of the form $H=\{x\in\R^n: x\cdot e<k\}$ for some unit vector $e\in\R^n$ and constant $k$. A nonempty bounded open subset $P\subset \R^n$ is a {\it convex polyhedron} if $P$ is the intersection of a finite number of open half-spaces. The {\it dimension} of a convex set $E\subset\R^n$ is the dimension of the affine subspace of $\R^n$ spanned by $E$. The following statements, which are probably known, are elementary consequences of the hyperplane separation theorem (see, for example, \cite[Theorem 11.3]{rockafellar70}), which asserts that if $E$ and $F$ are disjoint convex subsets of $\R^n$ with $E$ open, then there exist a unit vector $e\in\R^n$ and a constant $k$ such that
$$x\cdot e<k\leq y\cdot e \mbox{ for all } x\in E, y\in F.$$
In particular, taking $F=\{z\}$ with $z\in\partial E$, any open convex set is regular.
\begin{thm}
\label{convexgrains}
Suppose that each grain $\om_j$ is convex. Then\\
(i) each $\om_j$ is the intersection of $\om$ with a finite number of open half-spaces, \\
(ii) each interior grain is a convex polyhedron,\\
(iii) the set $T$ of triple points is a finite union of closed convex sets of dimension less than or equal to $n-2$.
\end{thm}
\begin{proof} If $N=1$ there is nothing to prove, so we suppose that $N>1$.
By the hyperplane separation theorem, given a grain $\om_j$, for any $k\neq j$ there exists an open half-space $H_{j,k}$ such that $\om_j\subset H_{j,k}$ and $\overline\om_k\subset \R^n\setminus H_{j,k}$. Hence
$$\om_j\subset \om\cap\bigcap_{k\neq j}H_{j,k}.$$
Let $x\in\om\cap\bigcap_{k\neq j}H_{j,k}$. Then since $\om\cap\bigcap_{k\neq j}H_{j,k}$ is open and disjoint from $\overline \om_k$ for $k\neq j$, it follows that $x$ is an interior point of $\overline\om_j$. Since $\om_j$ is regular, $x\in\om_j$, and hence $\om_j = \om\cap\bigcap_{k\neq j} H_{j,k}$. This proves (i).
Let $\om_j$ be an interior grain and suppose for contradiction that $x\in \bigcap_{k\neq j}H_{j,k}$ with $x\not\in\om_j$. Given any $x_0\in\om_j$ there exists a convex combination $y=\lambda x_0+(1-\lambda)x$, $\lambda\in[0,1]$, with $y\in\partial\om_j$. Since $\bigcap_{k\neq j}H_{j,k}$ is convex, $y\in\bigcap_{k\neq j}H_{j,k}$, and thus $y\not\in\partial\om_k$ for $k\neq j$, contradicting that $\om_j$ is an interior grain. This proves (ii).
Given $1\leq i_1<i_2<i_3\leq N$ the set $K=\partial\om_{i_1}\cap\partial\om_{i_2}\cap\partial\om_{i_3}=\overline\om_{i_1}\cap\overline\om_{i_2}\cap\overline\om_{i_3}$ is closed and convex. Let $A$ denote the affine span of $K$. Then by \cite[Theorem 6.2]{rockafellar70} there exists a relative interior point $\bar x $ of $K$ in $A$, that is for some $\ep>0$ the closed ball $\overline {B(\bar x,\ep)}=\{x\in\R^n:|x-\bar x|\leq\ep\}$ is such that $\overline {B(\bar x,\ep)}\cap A\subset K$. In particular the dimension of $K$, which by definition is the dimension of $A$, is less than $n$. Suppose for contradiction that the dimension of $K$ is $n-1$, so that $A=\{x\in\R^n:x\cdot e=k\}$ is a hyperplane. Then there exists a point $x_1\in \om_{i_1}$ which lies strictly on one side of $A$, say $x_1\cdot e<k$. Hence the closed convex hull of $x_1$ and $\overline {B(\bar x,\ep)}\cap K$ lies in $\overline\om_{i_1}$, and its interior contains the open half-ball $\{x\in\R^n:x\cdot e<k, |x-\bar x|<\ep'\}$ for some small $\ep'>0$. Since $\om_{i_1}$ is regular this half-ball is a subset of $\om_{i_1}$. Repeating this argument for $i_2$ and $i_3$ we find a half-ball centred at $\bar x$ which is a subset of two of the disjoint grains $\om_{i_1}, \om_{i_2},\om_{i_3}$. This contradiction implies (iii).
\end{proof}
\noindent Part (iii) implies that if $n=2$ there are finitely many triple points (see Theorem \ref{triplepointsbound} below for a more general statement), while if $n=3$ then $T$ is the union of finitely many closed line segments.
\subsection{Triple points in 2D}
\label{2D}
A famous counterexample in topology, the Lakes of Wada (see, for example, \cite{hockingyoung2nd, gelbaumolmsted}), shows that there can be three (or more) simply-connected, regular, open subsets of the closed unit square $[0,1]^2$ in $\R^2$ having a common boundary. Thus there is no hope to prove that the set $T$ of triple points is finite for $n=2$ without imposing further restrictions on the geometry of the grains $\om_j$.
We will assume that each grain is a bounded domain in $\R^2$ which is the region inside a Jordan curve, that is a non self-intersecting continuous loop in the plane. Such curves can be highly irregular. Nevertheless we can give a precise bound on the number of triple points.
\begin{thm}[\cite{u5}]
\label{triplepointsbound}
Assume that each grain $\om_j, j=1,\ldots,N$, is the region inside a Jordan curve. Then there are a finite number $m$ of triple points, and $m\leq 2(N-2)$.
\end{thm}
The bound is optimal, and attained for the configuration shown in Fig.~\ref{bound}.
\begin{figure}[hbt]
\centerline{\includegraphics[width=3.07in,height=1.72in,keepaspectratio]{grains}}
\caption{$N$ grains (labelled $1$ to $N$) in 2D with $2(N-2)$ triple points.}\label{bound}
\end{figure}
The proof of Theorem \ref{triplepointsbound} involves a reduction to a problem of graph theory, as in the proof of the Four Colour Theorem for maps \cite{wilson14}, and use of Euler's formula relating the numbers of faces, vertices and edges of a polyhedron.
\subsection{Triple points in 3D}
\label{3D}
For dimensions $n=3$ and higher, we do not have as precise results as Theorem \ref{triplepointsbound}. However, we can prove under rather general conditions that the set $T$ of triple points is in some sense very small compared to the union of the grain boundaries $D$. We assume that the closure $\overline\om_j$ of each grain is a topological manifold with boundary, that is for each $x\in\overline\om_j$ there is a relatively open neighbourhood $U(x)$ and a homeomorphism $\varphi$ between $U(x)$ and a relatively open subset of the closed half-space
$$\R^n_+:=\{(x_1,\ldots,x_n):x\cdot e_n\geq 0\},
$$
where $e_n=(0,\ldots,0,1)$. The precise details of this definition are not so important for this paper, but we note that if $n=2$ and $\om_j$ is the region inside a Jordan curve then $\overline\om_j$ is a topological manifold with boundary, while for $n\geq 2$ any domain whose boundary can be locally represented in suitable Cartesian coordinates by the graph of a continuous function, and which lies on one side of its boundary, is also a topological manifold with boundary. Thus any geometry that is likely to be encountered in practice satisfies this condition.
\begin{thm}[\cite{u5}]
\label{triplepointsnd}
Suppose that the closure $\overline\om_j$ of each grain is a topological manifold with boundary. Then the set
$T$ of triple points is closed and nowhere dense in the union $D$ of grain boundaries, i.e. there is no point $x\in T$ and $\varepsilon>0$ such that $B(x,\ep)\cap D\subset T$.
\end{thm}
\noindent
If $n=3$, then under the hypotheses of Theorem \ref{triplepointsnd} the set $T$ can have infinite length (technically, its one-dimensional Hausdorff measure can be infinite). One can conjecture that this is impossible if the grains $\om_j$ have more regular, for example Lipschitz, boundaries.
\section{Microstructure of polycrystals}
\label{micro}
In this section we derive some results concerning martensitic microstructure in polycrystals using the framework of the nonlinear elasticity model for martensitic phase transformations (see \cite{j32,j40}), in which at a constant temperature the total elastic free energy is assumed to have the form
\begin{equation}
\label{total}
I(y)=\int_\om W(x,\nabla y(x))\,dx,
\end{equation}
where $y:\om\to\R^3$ is the deformation, and $\om\subset\R^3$ has the form \eqref{polyc}, where we make the very mild additional assumption that the boundary $\partial\om_j$ of each grain has zero 3D measure (volume). Denoting $M^{3\times 3}=\{\mbox{real }3\times 3\mbox{ matrices}\}$, $M^{3\times 3}_+=\{A\in M^{3\times 3}: \det A>0\}$ and $SO(3)=\{R\in M^{3\times 3}_+: R^TR=\1\}$, we suppose that the free-energy density $W$ is given by $W(x, A)=\psi(AR_j)$ for $x\in\om_j$, where $R_j\in SO(3)$ and $\psi$ is the free-energy density corresponding to a single crystal.
We assume that $\psi:M^{3\times 3}_+\to [0,\infty)$ is continuous, frame-indifferent, that is
\begin{equation}\label{frame-indifferent}
\psi(QA)=\psi(A)\mbox{ for all } A\in M^{3\times 3}_+, Q\in SO(3),
\end{equation}
and has a symmetry group $\mathcal S$, a subgroup of $SO(3)$, so that
\begin{equation}
\label{cubic}
\psi(AR)=\psi(A)\mbox{ for all } A\in M^{3\times 3}_+, R\in {\mathcal S}.
\end{equation}
For the case of cubic symmetry ${\mathcal S}= P^{24}$, the group of rotations of a cube into itself. We assume that we are working at a temperature at which the free energy of the martensite (which we take to be zero) is less than that of the austenite, so that $K=\{A\in M^{3\times 3}_+: \psi(A)=0\}$ is given by
\begin{equation}\label{energywells}
K=\bigcup_{i=1}^M SO(3) U_i,
\end{equation}
where the $U_i$ are positive definite symmetric matrices representing the different variants of martensite, so that the $U_i$ are the distinct matrices $RU_1R^T$ for $R\in P^{24}$.
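This description of the variants can be checked by brute force. The sketch below (our own illustration, with arbitrary distinct values for $\eta_1,\eta_2$) enumerates $P^{24}$ as the signed $3\times 3$ permutation matrices of determinant $+1$ and confirms that conjugating $U_1=\diag(\eta_2,\eta_1,\eta_1)$ produces exactly three distinct variants.

```python
from itertools import permutations, product

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(3)) for i in range(3))

# P^24: rotations of the cube = signed permutation matrices with det +1
P24 = []
for perm in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        M = [[0]*3 for _ in range(3)]
        for i in range(3):
            M[i][perm[i]] = signs[i]
        if det3(M) == 1:
            P24.append(tuple(map(tuple, M)))

eta1, eta2 = 2, 3  # illustrative values, eta1 != eta2
U1 = ((eta2, 0, 0), (0, eta1, 0), (0, 0, eta1))
variants = {matmul(matmul(R, U1), transpose(R)) for R in P24}
```

The 24 conjugations collapse onto the three diagonal matrices $U_1$, $U_2$, $U_3$, since the sign choices square away and only the permutation of the diagonal entries survives.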
Zero-energy microstructures are represented by {\it gradient Young measures} $(\nu_x)_{x\in\om}$ satisfying $\supp\nu_x\subset KR_j^T$ for $x\in \om_j$. For each $x\in\om$, $\nu_x$ is a probability measure on $M^{3\times 3}$ that describes the asymptotic distribution of the deformation gradients $\nabla y^{(j)}$ of a minimizing sequence $y^{(j)}$ for $I$ (i.e. such that $I(y^{(j)})\to 0$) in a vanishingly small ball centred at $x$. Here the {\it support} $\supp \nu_x$ of $\nu_x$ is defined to be the smallest closed subset $E\subset M^{3\times 3}$ whose complement $E^c$ has zero measure, i.e. $\nu_x(E^c)=0$; intuitively $\supp \nu_x$ can be thought of as the limiting set of gradients at $x$. Thus the condition that $\supp\nu_x\subset KR_j^T$ for $x\in \om_j$ is equivalent to $\int_\om\int_{M^{3\times 3}}W(x,A)\,d\nu_x(A)\,dx=\sum_{j=1}^N\int_{\om_j}\int_{M^{3\times 3}}\psi(AR_j)\,d\nu_x(A)\,dx=0$ and expresses that the microstructure has zero energy. The corresponding macroscopic deformation gradient is given by $\nabla y(x)=\bar\nu_x=\int_{M^{3\times 3}}A\,d\nu_x(A)$. We note the {\it minors relations}
\begin{eqnarray}\label{minors} \det \bar\nu_x=\langle \nu_x,\det\rangle=\int_{M^{3\times 3}}\det A\,d\nu_x(A),\\ \cof \bar\nu_x=\langle \nu_x,\cof\rangle=\int_{M^{3\times 3}}\cof A\, d\nu_x(A),\label{minors1}
\end{eqnarray}
where $\cof A$ denotes the matrix of cofactors of $A$. Note that \eqref{minors} implies that $\det\bar\nu_x=\det U_1$ for any zero-energy microstructure.
(See \cite{j56} for a description of gradient Young measures in the context of the nonlinear elasticity model for martensite.)
In the case of cubic symmetry, and the absence of boundary conditions on $\partial\om$, there always exist such zero-energy microstructures. Indeed by the self-accommodation result of Bhattacharya \cite{bhattacharya92} for cubic austenite there exists a homogeneous gradient Young measure $\nu$ with $\supp \nu\subset K$ and $\bar\nu=(\det U_1)^\frac{1}{3}\1$. We can then define for $x\in\om_j$ the measure $\nu_x(E)=\nu(R_j^TER_j)$ of a subset $E\subset M^{3\times 3}$ of matrices. Then, since $R_j^TM^{3\times 3}R_j=M^{3\times 3}$ we have that $\bar\nu_x=\int_{M^{3\times 3}}A\,d\nu_x(A)=R_j\int_{M^{3\times 3}}B\,d\nu(B)R^T_j=(\det U_1)^\frac{1}{3}\1$ for $x\in\om_j$. By a result of Kinderlehrer \& Pedregal \cite{kinderlehrerpedregal91} it follows that $(\nu_x)_{x\in\om}$ is a gradient Young measure, and since $\int_{M^{3\times 3}}\psi(AR_j)\,d\nu_x(A)=\int_{M^{3\times 3}}\psi(R_jB)\,d\nu(B)=0$ it follows that $(\nu_x)_{x\in\om}$ is a zero-energy microstructure.
\subsection{Higher-order laminates for cubic-to-tetragonal transformations}
\label{fourgradients}
In this subsection we consider a cubic-to-tetragonal transformation, for which $K$ is given by \eqref{energywells} with $M=3$ and
$U_1=\diag(\eta_2,\eta_1,\eta_1)$, $U_2=\diag(\eta_1,\eta_2,\eta_1)$, $U_3=\diag(\eta_1,\eta_1,\eta_2)$, where $\eta_1>0$, $\eta_2>0$ and $\eta_1\neq\eta_2$. Motivated by the observation above that a zero-energy microstructure with uniform macroscopic deformation gradient $(\det U_1)^\frac{1}{3}\1=\eta_1^\frac{2}{3}\eta_2^\frac{1}{3}\1$ exists for any polycrystal, we discuss whether this can be achieved with a microstructure that in each grain involves just $k$ gradients, where $k$ is small. Without loss of generality we can consider a single unrotated grain, so that the question reduces to whether there exists a homogeneous gradient Young measure $\nu$ having the form
\begin{equation}\label{finite}
\nu=\sum_{i=1}^k\lambda_i\delta_{A_i} \mbox{ with }\lambda_i\geq 0, \sum_{i=1}^k\lambda_i=1, \mbox{ and } A_i\in K,
\end{equation}
and with macroscopic deformation gradient $\bar\nu = \eta_1^\frac{2}{3}\eta_2^\frac{1}{3}\1$. In \eqref{finite} we have used the notation $\delta_A$ for the Dirac mass at $A\in M^{3\times 3}$, namely the measure defined by
$$\delta_A(E)=\left\{\begin{array}{ll}1& \mbox{if } A\in E,\\ 0&\mbox{if }A\not\in E.\end{array}\right.$$
The following result implies in particular that this is impossible unless $k>4$, so that \eqref{finite} cannot be satisfied for a double laminate, a result also obtained by Muehlemann \cite{muehlemann15}.
\begin{thm}
\label{4gradthm}
There is no homogeneous gradient Young measure $\nu$ with $\supp\nu\subset K=\cup_{i=1}^3SO(3)U_i$ and satisfying
$\bar\nu=\eta_1^\frac{2}{3}\eta_2^\frac{1}{3}\1$, such that $\supp \nu \cap(SO(3)U_j\cup SO(3)U_k)$ contains at most two matrices for some distinct pair $j,k\in\{1,2,3\}$.
\end{thm}
\begin{proof} Suppose first that $\supp \nu$ is contained in the union of two of the wells, say $\supp\nu\subset SO(3)U_1\cup SO(3)U_2$. Then by the characterization of the quasiconvex hull of $SO(3)U_1\cup SO(3)U_2$ in \cite{j40} we have that $\bar\nu^T\bar\nu e_3=\eta_1^\frac{4}{3}\eta_2^\frac{2}{3}e_3=\eta_1^2 e_3$. Hence $\eta_1=\eta_2$, a contradiction.
Without loss of generality we can therefore suppose that
\begin{equation}
\label{nuform}
\nu=\lambda_1 \mu +\lambda_2\delta_{R_2U_2}+\lambda_3\delta_{R_3U_3}
\end{equation}
where $\lambda_i\geq 0$, $\sum_{i=1}^3\lambda_i=1$, $R_2, R_3\in SO(3)$ and $\mu$ is a probability measure on $SO(3)U_1$. Define $\mu^*(E)=\mu(EU_1)$ for $E\subset M^{3\times 3}$. Then $\mu^*$ is a probability measure with $\supp \mu^*\subset SO(3)$. Let $H=\bar\mu^*$. Then $\bar\mu=\int_{SO(3)U_1}A\,d\mu(A)=\int_{SO(3)}RU_1\,d\mu^*(R)=HU_1$. Letting $k=\eta_2/\eta_1$ and calculating $\bar\nu$ from \eqref{nuform}, we deduce that
\begin{equation}
\label{mubar}
k^\frac{1}{3}\1=\lambda_1 H \diag (k,1,1)+\lambda_2R_2\diag(1,k,1)+\lambda_3R_3\diag(1,1,k).
\end{equation}
We now apply the minors relation \eqref{minors1} to $\nu$. Noting that
\begin{eqnarray*}\langle\mu,\cof\rangle&=&\int_{SO(3)U_1}\cof A\,d\mu(A)\\&=&\int_{SO(3)}\cof(RU_1)\,d\mu^*(R)\\&=&\int_{SO(3)}R\,\cof(U_1)\,d\mu^*(R)\\&=&H\,\cof U_1,
\end{eqnarray*}
we obtain
\begin{equation}
\label{cof}
k^{-\frac{1}{3}}\1=\lambda_1 H \diag (k^{-1},1,1)+\lambda_2R_2\diag(1,k^{-1},1)+\lambda_3R_3\diag(1,1,k^{-1}).
\end{equation}
Subtracting \eqref{cof} from \eqref{mubar} and dividing by $k-k^{-1}$ we deduce that
\begin{equation}\label{ck}
c(k)\1=\lambda_1He_1\otimes e_1+\lambda_2R_2e_2\otimes e_2+\lambda_3 R_3e_3\otimes e_3,
\end{equation}
where $c(k)=\frac{k^\frac{1}{3}-k^{-\frac{1}{3}}}{k-k^{-1}}=(1+k^\frac{2}{3}+k^{-\frac{2}{3}})^{-1}>0$, from which it follows that
$$\lambda_1He_1=c(k)e_1, \; \lambda_2R_2e_2=c(k)e_2, \;\lambda_3R_3e_3=c(k)e_3.$$
Hence $\lambda_2=\lambda_3=c(k)$. Acting \eqref{mubar} on $e_1$ we have that
$$k^\frac{1}{3}e_1=c(k)(ke_1+R_2e_1+R_3e_1). $$ Hence $k^\frac{1}{3}\leq c(k)(k+2)$, which rearranges to $k^\frac{1}{3}+k^{-\frac{1}{3}}\leq 2$, i.e. $(k^\frac{1}{6}-k^{-\frac{1}{6}})^2\leq 0$, so that $k=1$, a contradiction.
\end{proof}
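The two algebraic facts used in the final steps of the proof, the closed form of $c(k)$ and the failure of the inequality $k^{1/3}\le c(k)(k+2)$ for every $k\neq 1$, can be sanity-checked numerically. This is an illustration of ours, not part of the proof:

```python
def c(k):
    # c(k) = (k^(1/3) - k^(-1/3)) / (k - k^(-1))
    return (k**(1/3) - k**(-1/3)) / (k - 1/k)

for k in (0.5, 0.9, 1.3, 2.0, 5.0):
    # closed form used in the proof: c(k) = (1 + k^(2/3) + k^(-2/3))^(-1) > 0
    assert abs(c(k) - 1/(1 + k**(2/3) + k**(-2/3))) < 1e-12
    # k^(1/3) <= c(k)(k+2) forces k = 1, so for k != 1 it must fail:
    assert k**(1/3) > c(k) * (k + 2)
```

Equality $k^{1/3}=c(k)(k+2)$ holds only at $k=1$, where $c(1)=1/3$.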
\noindent The conclusion of Theorem \ref{4gradthm} contrasts with observations of polycrystalline materials undergoing cubic-to-tetragonal phase transformations, but for which some grains are completely filled by a single double laminate. Such cases arise for the ceramic ${\rm BaTiO}_3$
\cite{Arlt90}
and in various RuNb and RuTa shape-memory alloys
\cite{manzonietal09,
manzoni11,
vermautetal13,manzonietal14}.
Arlt \cite{Arlt90}
gives an interesting qualitative discussion of energetically preferred grain microstructure, drawing a distinction between the microstructures in interior and boundary grains. Following his reasoning, a likely explanation for why double, and not higher-order, laminates are observed in interior grains in these materials is that it is energetically better to form a double laminate with gradients away from the energy wells, than to form a higher-order laminate having gradients extremely close to the energy wells. According to this explanation, the extra interfacial energy (ignored in the nonlinear elasticity model) involved in forming a higher-order laminate would exceed the total bulk plus interfacial energy for the double laminate. Of course once the gradients are allowed to move away from the wells the conclusion of Theorem \ref{4gradthm} will not hold. Additional factors could include cooperative deformation of different grains (so that the assumption of a pure dilatation on the boundary is not a good approximation), some of which may have more complicated microstructures than a double laminate. It is interesting that third-order laminates are observed for RuNb alloys undergoing cubic-to-monoclinic transformations \cite{vermautetal13}.
\subsection{Bicrystals with two martensitic energy wells}
\label{bicrystals}
We now consider restrictions on possible zero-energy microstructures in polycrystals without making any assumptions other than those given by the grain geometry and texture. In particular, unlike in Section \ref{fourgradients}, we make no assumptions on the macroscopic deformation gradient of the microstructure. The restrictions result only from continuity of the deformation across grain boundaries. In order to give precise results, we restrict attention to a {\it bicrystal}, that is a polycrystal with just two grains $\om_1$ and $\om_2$. We assume that the grains have the cylindrical form
$\om_1=\omega_1\times (0,d),\; \om_2=\omega_2\times (0,d)$, where $d>0$, and $\omega_1, \omega_2\subset\R^2$ are bounded domains. We assume for simplicity that the boundaries $\partial\omega_1, \partial\omega_2$ are smooth and intersect nontrivially, so that $\partial\omega_1\cap\partial\omega_2$ contains points in the interior $\omega$ of $\overline\omega_1\cup\overline\omega_2$. The interface between the grains $\partial\om_1\cap\partial\om_2=(\partial\omega_1\cap\partial\omega_2)\times (0,d)$ thus contains points in a neighbourhood of which $\om_1$ and $\om_2$ are separated by a smooth surface having normal $n(\theta)=(\cos\theta,\sin\theta,0)$ in the $(x_1,x_2)$ plane.
We consider a martensitic transformation with two energy wells (for example, orthorhombic-to-monoclinic) for which $M=2$ and $K=SO(3)U_1\cup SO(3)U_2$, where $U_1=\diag (\eta_2,\eta_1,\eta_3)$, $U_2=\diag(\eta_1,\eta_2,\eta_3)$, where $\eta_1>0$, $\eta_2>0$, $\eta_1\neq\eta_2$ and $\eta_3>0$. We further suppose that $\om_1$ has cubic axes in the coordinate directions $e_1, e_2, e_3$, while in $\om_2$ the cubic axes are rotated through an angle $\alpha$ about $e_3$. Thus a zero-energy microstructure corresponds to a gradient Young measure $(\nu_x)_{x\in\om}$ such that
$$\supp \nu_x\subset K \mbox{ for }x\in\om_1,\;\;\supp\nu_x\subset KR(\alpha) \mbox{ for }x\in\om_2,$$
where $R(\alpha)=\left(\begin{array}{lll} \cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha& 0\\
0&0&1\end{array}\right)$. It can be shown that $KR(\alpha)=K$ if and only if $\alpha=\frac{r\pi}{2}$ for some integer $r$. Hence we assume that $\alpha\neq\frac{r\pi}{2}$.
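The claim that $KR(\alpha)=K$ only for $\alpha=r\pi/2$ can be probed numerically using the observation that a matrix $A$ with $\det A>0$ lies in $SO(3)U_i$ if and only if $A^TA=U_i^2$. The sketch below is our own illustration with arbitrary values for $\eta_1,\eta_2,\eta_3$:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def diag(*d):
    return [[d[i] if i == j else 0 for j in range(3)] for i in range(3)]

eta1, eta2, eta3 = 1.0, 1.2, 1.1   # illustrative values, eta1 != eta2
U1, U2 = diag(eta2, eta1, eta3), diag(eta1, eta2, eta3)

def in_K(A, tol=1e-9):
    # A in SO(3)U_i  iff  A^T A = U_i^2 (A here has positive determinant)
    C = matmul(transpose(A), A)
    return any(all(abs(C[i][j] - matmul(U, U)[i][j]) < tol
                   for i in range(3) for j in range(3)) for U in (U1, U2))

A45 = matmul(U1, rot_z(math.pi/4))   # lies in K R(pi/4) but not in K
A90 = matmul(U1, rot_z(math.pi/2))   # rotation by pi/2 maps U1 into SO(3)U2
```

For $\alpha=\pi/4$ the rotated well leaves $K$, while for $\alpha=\pi/2$ the two wells are exchanged, consistent with $KR(\alpha)=K$ exactly when $\alpha=r\pi/2$.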
We ask whether it is possible for there to be a zero-energy microstructure {\it which is a pure variant in one of the grains}, i.e. either for $i=1$ or $i=2$, $\nu_x=\delta_{Q(x)U_j}$ for $x\in\om_i$ and some $j$, where $Q(x)\in SO(3)$. Since $\om_i$ is connected, a standard result \cite{reshetnyak67} shows that $\nu_x=\delta_{Q(x)U_j}$ for $x\in\om_i$ implies that $Q(x)$ is smooth and hence \cite{shield73} is a constant rotation, so that $\nabla y(x)$ is constant in $\om_i$.
\begin{thm}[\cite{u5}]
\label{planar}
Suppose that the interface between the grains is planar, i.e. $\partial\om_1\cap\partial\om_2\subset \Pi(N)$ where $\Pi(N)=\{x\in\R^3:x\cdot N= a\}$ for some unit vector $N=(N_1,N_2,0)$ and constant $a$. Then there exists a zero-energy microstructure which is a pure variant in one of the grains.
\end{thm}
\noindent Thus, in order to eliminate the possibility of a pure variant in one of the grains we need a curved interface. To give an explicit result we consider the special case when $\alpha=\frac{\pi}{4}$. Let
\begin{eqnarray*}
&D_1=(\frac{\pi}{8},\frac{3\pi}{8})\cup (\frac{5\pi}{8},\frac{7\pi}{8})\cup(\frac{9\pi}{8},\frac{11\pi}{8})\cup(\frac{13\pi}{8},\frac{15\pi}{8}), \\ &D_2=(\frac{-\pi}{8},\frac{\pi}{8})\cup (\frac{3\pi}{8},\frac{5\pi}{8})\cup(\frac{7\pi}{8},\frac{9\pi}{8})\cup(\frac{11\pi}{8},\frac{13\pi}{8}).
\end{eqnarray*}
\begin{thm}[\cite{u5}]
\label{45}
Let $\alpha=\frac{\pi}{4}$ and $\frac{\eta_2}{\eta_1}\leq\sqrt{1+\sqrt{2}}$. If $\partial\om_1\cap\partial\om_2$ has points with normals $n(\theta)$ and $n(\theta')$ with $\theta\in D_1$ and $\theta'\in D_2$, then there is no zero-energy microstructure which is a pure variant in one of the grains.
\end{thm}
\noindent The main ingredients in the proofs of Theorems \ref{planar} and \ref{45} are (i) a reduction to two dimensions using the plane strain result \cite{j39}, (ii) the characterization in \cite{j40} of the quasiconvexification of $K$, (iii) a generalization of the Hadamard jump condition that implies that the difference between the polyconvex hulls of suitably defined limiting sets of gradients, on either side of a point on the grain boundary where the normal is $n$, contains a rank-one matrix $a\otimes n$, and (iv) long and detailed calculations.
\section{Conclusions and perspectives}
\label{conclusion}
In this paper we have provided a framework for discussing the effects of grain geometry on the microstructure of polycrystals as described by the nonlinear elasticity model of martensitic transformations. This consists of two threads, a description and analysis of the grain geometry itself, and the use of generalizations of the Hadamard jump condition and other techniques to delimit possible zero-energy microstructures compatible with a given grain geometry.
Both threads need considerable development. The quantitative description of polycrystals, as described for example in the book \cite{kurzydlowskiralph}, is a large subject which has many aspects (for example, sectioning and stochastic descriptions) for which a more rigorous treatment would be valuable.
The problem of determining possible zero-energy microstructures is essentially one of multi-dimensional calculus, namely that of determining deformations compatible with a given geometry having deformation gradients lying in, or Young measures supported in, the energy-wells corresponding to each grain. Nevertheless we are very far from understanding how to solve it in any generality, one obstacle being the well-known lack of a useful characterization of quasiconvexity (see, for example, \cite{p31}), which is known to be a key to understanding compatibility. The generalizations of the Hadamard jump conditions considered in \cite{u5} (see also \cite{iwaniecetal02}) are also insufficiently general and tractable. As well as for polycrystals, such generalized jump conditions are potentially relevant for the analysis of nonclassical austenite-martensite interfaces as proposed in \cite{j46,j48}, which have been observed in CuAlNi \cite{j63,j64} and in ultra-low hysteresis alloys \cite{song13}, and which have been suggested to be involved in steel \cite{koumatosmuehlemann15}.
Despite the usefulness of the nonlinear elasticity theory, we have seen in connection with Theorem \ref{4gradthm} that in some situations the effects of interfacial energy can make its predictions of microstructure morphology inconsistent with experiment. This highlights the importance of developing a better understanding of how polycrystalline microstructure depends on the small parameters describing grain size and interfacial energy.
\section*{Acknowledgements} The research of JMB was supported by the EC (TMR contract FMRX - CT EU98-0229 and
ERBSCI**CT000670), by
EPSRC
(GR/J03466, the Science and Innovation award to the Oxford Centre for Nonlinear
PDE EP/E035027/1, and EP/J014494/1), the ERC under the EU's Seventh Framework Programme
(FP7/2007-2013) / ERC grant agreement no 291053 and
by a Royal Society Wolfson Research Merit Award. We thank Philippe Vermaut for useful discussions concerning RuNb alloys.
\section{Introduction: SUSY and future colliders}
If Supersymmetry (SUSY) \mcite{susy,*Wess:1974tw,*Nilles:1983ge,*Haber:1984rc,*Barbieri:1982eh} is to explain the
current problems of the Standard Model of
particle physics, such as
the naturalness of the theory, the hierarchy problem, the nature of Dark Matter,
or the possible discrepancy between the observed and predicted value of the
muon anomalous magnetic moment (g-2), a light electroweak SUSY sector is preferred.
From LEP II, it is known that an electroweak sector with masses below $\sim$ 100 GeV
is excluded, except for some very special cases.
From LHC, we know that
a coloured sector with masses below $\sim$ 1 TeV is also excluded.
However, except for the third generation squarks, the coloured sector
does not contribute significantly to clarifying the aforementioned issues with the SM.
The model-space of the electroweak sector of SUSY can conveniently be sub-divided by the nature of
the Lightest SUSY Particle (the LSP) as the Bino-, Higgsino- or Wino-region, defined by whether $M_1$, $\mu$, or $M_2$ is
the smallest of the three, and thus which field is the largest contributor to the mass-eigenstate (not to be
confused with {\it pure} Wino, Bino or Higgsino models, where the respective contributions are close to 100 \%).
Alternatively, one can classify by the size of the mass-difference, $\Delta(M)$, between
the LSP and the next-to-lightest SUSY particle (the NLSP),
as high $\Delta(M)$ or low $\Delta(M)$. The first case coincides with the Bino-region, the second
contains the Higgsino- and Wino-regions, which differ in important experimental consequences.
In other words: In the Higgsino- and Wino-regions,
the electroweak
SUSY sector
is ``compressed'', i.e. the masses of some of the other electroweak bosinos tend to be close to the LSP mass.
In this situation, most decays of massive sparticles are via cascades,
and at the end of these cascades, the mass difference is small, in turn
meaning that the final decay into the invisible LSP releases little energy.
While such events show large missing energy, this is of no help at hadron colliders - contrary to
the case at lepton colliders - since the initial energy is unknown.
Therefore, to address such cases at hadron colliders, one must resort to missing transverse momentum, a much more delicate signal.
Consequently, for such topologies, current limits from LHC are for specific models, and the results from LEP II
\mcite{lepsusywg,aleph,*Heister:2001nk,*Heister:2003zk,*Heister:2002mn,Abdallah:2003xe,Achard:2003ge,Abbiendi:2003ji}
are those that yield the model-independent exclusions.
The same observations are also valid if the NLSP is a slepton in general,
and the $\stau$ in particular.
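The classification by the smallest of the three bosino parameters can be made concrete in a few lines. This is a hypothetical helper for illustration only, not part of any of the tools cited below:

```python
def lsp_region(M1, M2, mu):
    """Classify the LSP nature by the smallest of |M1|, |M2|, |mu| (GeV)."""
    regions = {"Bino": abs(M1), "Wino": abs(M2), "Higgsino": abs(mu)}
    return min(regions, key=regions.get)
```

A point with $M_1$ = 100 GeV and $M_2$, $\mu$ in the TeV range would thus fall in the Bino-region, irrespective of the exact admixtures in the mass eigenstates.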
The organisation of the note is as follows. We first discuss how to compare different options on an equal footing in section 2,
and present our scan-range, tools and general observations about the mass-spectra in section 3.
In section 4, we discuss the interpretation of the electroweak SUSY chapter of the physics Briefing-book \cite{Strategy:2019vxc}
to the
update of the European strategy for particle physics (the ESU) in view of
our observations. In section 5, for reference, we summarise the ILC projections, before concluding in section 6.
\begin{wrapfigure}{R}{8.5cm}
\begin{center}
\subfloat[][$\mu$ vs. $M_1$]{ \includegraphics [scale=0.4]{plots/pMSSM11_DM_abs_mchar1_2K_abs_mneu1_1250_chi2}}
\caption{ pMSSM11 fit by MasterCode to LHC13/LEP/g-2/DM(=100\%~LSP)/precision observables in the $\MXC{1}$ - $\MXN{1}$ plane. From \cite{Bagnaschi:2017tru}. \label{fig:mastercode}}
\end{center}
\end{wrapfigure}
\section{Comparing options}
For asserting the capabilities of future facilities to explore SUSY, it is important to make the
distinction between discovery potential and exclusion potential.
The former is the power to discover {\it some} model,
while the latter is the power to exclude {\it all} models
compatible with the shown parameters, i.e. marginalising over all non-shown
parameters.
The methodologies needed in the two cases are different. In the first case,
one would concentrate on specific models yielding signatures that are
observable as far into uncharted territory as possible,
while in the latter case one needs to determine which model is the
most difficult, and evaluate whether that worst-case would be observed, if it is realised in nature.
The latter was indeed the focus at LEP II. The limits from there have
been marginalised over all other parameters, and can
be considered definite.
A further consideration that must be made in weighing different future projects
against each other is the level of understanding.
This includes the level of maturity of the project,
ranging from existing results (e.g. from LEP or LHC), over
existing/in construction new detectors and machines (HL-LHC),
TDR-level new facilities, such as ILC or CLIC,
to conceptual extensions to existing facilities (HE-LHC, LHeC).
It further extends to new conceptual ideas, such as the different
options for FCC, and continues to emerging technologies, e.g. plasma acceleration or $\mu$-colliders.
One must also consider the level of detail of studies done, which range
from fully simulated, well defined detectors and accelerators (LHC, HL-LHC, ILC, and CLIC),
fully simulated evolving concepts, e.g. CEPC, over detailed fast simulation
(i.e. with more detail than purely parametric simulations), to
parametric simulations with parametric input from full simulation of the proposed detector,
or simply using parameters from an existing detector at a new facility.
Also pure four-vector smearing of generated objects and simple cross-section level estimates
can be used as initial estimates.
In the case of cross-section and luminosity scaling estimates, one should also
consider whether they were done at the level of the final published exclusion reach,
or had access to more basic information of the extrapolated experiment
(background event count, efficiency tables, etc.).
Finally, when it comes to interpreting the results of such studies it is also important to
consider whether they were done by detector experts or not.
This is of particular importance for systematics-limited experiments,
and cases where detailed knowledge of object-finding and reconstruction is
essential.
\section{Estimating SUSY reach}
\begin{figure}[t]
\begin{center}
\subfloat[][$\mu$ vs. $M_1$]{\includegraphics [scale=0.28]{plots/scan-range_p1}}
\subfloat[][$\mu$ vs. $M_2$]{\includegraphics [scale=0.28]{plots/scan-range_p2}}
\subfloat[][$M_1$ vs. $M_2$]{\includegraphics [scale=0.28]{plots/scan-range_p3}}
\caption{The scanned points in $\mu$, $M_1$ and $M_2$.\label{fig:cube}}
\end{center}
\end{figure}
Several groups \mcite{susyfits,*Bagnaschi:2017tru,*Bagnaschi:2018zwg,*Caron:2016hib,*Aad:2015baa} have
combined current experimental observations with SUSY theory to estimate where in the
parameter space the observations point, and to estimate which regions are actually excluded at present.
One example, from the MasterCode group \cite{Bagnaschi:2017tru}, is shown as Figure \ref{fig:mastercode}.
It, interestingly, indicates that current results point to the aforementioned ``compressed region''.
This type of study
aims at answering the question
of where SUSY is most likely to be found, and at comparing this with the
estimated capabilities of present or future facilities and techniques.
It should be noted that to arrive at definite conclusions,
the analyses typically include non-HEP observations.
Whether, and how, these can be put into a HEP context already contains
assumptions.
E.g.\ it is often assumed that SUSY is the sole source of WIMP dark matter,
and that direct or indirect searches for WIMPs can be translated into HEP
observations.
However, other well-motivated candidates for dark matter exist, the prime example
being the QCD axion \cite{Marsh:2015xka}, so it might well be that SUSY is realised in nature, but
is not the (full) explanation for Dark Matter.
In addition, the estimates sometimes include results that hint at physics beyond the standard model,
but that are not yet solidly established, g-2 of the muon being one example.
\begin{figure}[b]
\begin{center}
\subfloat[][Higgsino-like LSP ($\mu < M_1, M_2$)]{\includegraphics [scale=0.27]{plots/tomohikoalt_hino}}
\subfloat[][Wino-like LSP ($M_2 < M_1, \mu $)]{\includegraphics [scale=0.27]{plots/tomohikoalt_wino}}
\subfloat[][Bino-like LSP($M_1 < M_2, \mu $)]{\includegraphics [scale=0.27]{plots/tomohikoalt_bino}}
\caption{The LSP mass vs. the NLSP mass for the three cases. The colour coding is the following:
Points with the same colour have varying values of the
bosino parameters, as per Figure \ref{fig:cube}, while the colours are: All point have $\tan{\beta}$=10, except
light green ($\tan{\beta}$=3) and blue ($\tan{\beta}$=30). All have $M_A$= 5 TeV except black ($M_A$= 0.5 TeV)
and magenta ($M_A$= 10 TeV). All have positive sign of $\mu,M_1$, and $M_2$, except cyan
(-,+,+), olive-green (+,+,-), orange (+,-,+) and purple (+,-,-). Open symbols
are for $|M_2-2 M_1|/|M_1|>0.1$ (i.e. not close to the GUT-unification case).
\label{fig:broadbrush}}
\end{center}
\end{figure}
If one is interested in the {\it guaranteed} reach, rather than the {\it possible} reach,
one should not rely on assumptions that are not directly testable.
In essence, this means to concentrate on the {\it exclusion reach}.
In SUSY, the fundamental principle that sparticles and particles have the same
couplings and the same quantum-numbers (except for spin), sets a scene where such a program is possible.
It implies that cross-sections and decay modes are completely known within SUSY itself.
In particular, if R-parity conservation is assumed, it means that there is
always one completely known process, namely NLSP production, followed by
the decay of the NLSP to its SM partner and the LSP, if kinematically allowed, with 100 \% branching ratio.
In estimating the exclusion reach, rather than the discovery reach, it is essential to find
the {\it most challenging} situations. Such a scenario is easily found, and we will consider
\begin{itemize}
\item the MSSM with R-parity conservation, since LEP experience shows that the case
of R-parity violation is always less demanding at e$^+$e$^-$ machines, and likely to also be
so at hadron machines.
\item The NLSP is not a sfermion, for the same reason. The $\stau$ is an exception, in that
the LEP experience indicates that a $\stau$ NLSP might be even more challenging than a bosino one.
However, the issue is even more pronounced at a hadron collider \cite{Abdughani:2019wss}.
\end{itemize}
Under these conditions both the LSP and the NLSP are more or less pure Binos, Winos, or Higgsinos,
and $M_1 , M_2$ and $\mu$ are the MSSM parameters most influencing the experimental signatures.
We consider any values, and combinations of signs, of these parameters,
up to values that make the bosinos
kinematically out-of-reach
for any new facility, i.e. up to a few TeV.
We also vary other parameters ($\tan {\beta} $= 3 to 30, $M_A$= 0.5 to 10 TeV, $M_{sfermion}$= 5 to 10 TeV), to verify that
they have only
a minor impact on the signatures.
No other assumptions, such as relations between the parameters due to some specific
SUSY-breaking mechanism, are made.
No assumption on prior probabilities is implied, and therefore the density of points in the
various projections that will be shown is not of great importance.
The important observation to be made is whether {\it there are} any points outside excluded regions: this
implies that the model {\it cannot} be excluded.
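A minimal sketch of how such a parameter ``cube'' can be built up, assuming an illustrative grid (the magnitudes and secondary-parameter settings below are placeholders; the actual scan is denser and extends to a few TeV):

```python
from itertools import product

# Illustrative magnitudes in GeV for |mu|, |M1|, |M2| (assumed values,
# not the actual grid); all eight sign combinations are included.
magnitudes = [100, 250, 500, 1000, 2000]
signs = [+1, -1]

points = [
    {"mu": s_mu * m_mu, "M1": s_1 * m_1, "M2": s_2 * m_2,
     "tan_beta": tb, "M_A": ma}
    for m_mu, m_1, m_2 in product(magnitudes, repeat=3)
    for s_mu, s_1, s_2 in product(signs, repeat=3)
    # secondary parameters: tan(beta) in {3, 10, 30}, M_A in GeV
    for tb, ma in product([3, 10, 30], [500, 5000, 10000])
]
# 5^3 magnitude combinations x 8 sign choices x 9 secondary settings
```

Each point would then be passed to the spectrum and cross-section codes discussed below.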
\begin{figure}[t]
\begin{center}
\subfloat[][Full range, all cases]{\includegraphics [scale=0.38]{plots/dmc1_dmn2_405}}
\hskip 0.5cm
\subfloat[][Blow-up of the Higgsino LSP region]{\includegraphics [scale=0.38]{plots/hino_dmc1_dmn2_v2_405}}
\caption{The mass difference between the LSP and $\XPM{1}$ versus that between the LSP and $\XN{2}$. \label{fig:dmx1dmn2}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\subfloat[][Higgsino-LSP case]{\includegraphics [scale=0.38]{plots/hino_dmc1_mn1_w_dh_405_gut}}
\hskip 1.0cm
\subfloat[][Wino-LSP case]{\includegraphics [scale=0.38]{plots/wino_dmc1_mn1_405}}
\caption{$\Delta(\MXC{1})$ vs. $M_{LSP}$ in the small $\Delta(M)$ region. In (a), the
squares are points where $\Delta(\MXC{1})\approx\Delta(\MXN{2})/2$, and triangles are points where $\Delta(\MXC{1})\approx\Delta(\MXN{2})$.
Colours as explained in Figure \ref{fig:broadbrush}.\label{fig:dmx1mlsp}}
\end{center}
\end{figure}
Figure \ref{fig:cube} shows the points studied in $M_1 , M_2$ and $\mu$ as three two-dimensional
projections.
We proceed to find what happens with spectra, cross-sections, and decay branching-ratios when exploring this ``cube''.
In order to do so, {\tt SPheno 4.0.5} \mcite{shpeno,*Porod:2003um,*Porod:2011nf} was used to calculate spectra and decay branching ratios at each point,
and {\tt Whizard 2.8.0} \mcite{whizard,*Kilian:2007gr,*Moretti:2001zz} was used to find the production cross-sections,
and to generate parton-level events.
In addition {\tt FeynHiggs 2.16.0} \mcite{feynhiggs,*Bahl:2018qog,*Bahl:2017aev,*Bahl:2016brp,*Hahn:2013ria,*Frank:2006yh,*Degrassi:2002fi,*Heinemeyer:1998np,*Heinemeyer:1998yj}
was used to calculate the expected mass of the SM-like higgs boson, and as a double-check of the sparticle mass-spectrum.
\begin{figure}
\begin{center}
\subfloat[][]{\includegraphics [scale=0.9]{plots/brbk_susy_1}}
\subfloat[][ $\XPM{1} \rightarrow Z \XN{1}$]{\includegraphics [scale=0.9]{plots/atlas-hl-proj-ATL-PHYS-PUB-2018-048_p20_fig}}
\subfloat[][$\XPM{1} \rightarrow h \XN{1}$]{\includegraphics [scale=0.9]{plots/atlas-hl-proj-ATL-PHYS-PUB-2018-048_p24_fig}}
\caption{The reaches in the high $\Delta(M)$ (Bino-LSP) region, as
reported in \cite{Strategy:2019vxc} (top),
and the two projections to HL-LHC from ATLAS \cite{ATL-PHYS-PUB-2018-048} (bottom). (b) corresponds to
the solid red line in (a).\label{fig:bbbino}}
\end{center}
\end{figure}
Around 80 \% of the points had a calculated higgs-mass agreeing with the experimental value
at the 2$\sigma$ level of the theoretical uncertainty, with the exception
of the points with the highest of the three $\tan{\beta}$ values in our scan,
namely $\tan{\beta}$=30, where only 7, 9 and 23 \% of the points were in the range for
Wino-, Bino- and Higgsino-LSP, respectively.
None of the features shown in the following figures, however, change if demanding that the calculated higgs-mass
was in the two standard deviation range.
The main features are shown in Figure \ref{fig:broadbrush}.
One observes that, except for Bino-LSP, the LSP-NLSP splitting is small.
The colours indicate different settings of the secondary parameters; the observation is that they don't matter much.
In addition, the open circles indicate cases where GUT-scale unification
of $M_1$ and $M_2$ is not possible\footnote{
If $M_1$ and $M_2$ are unified at the GUT scale, the different RGE running of the two results
in the relation $M_2=(3g^2/5g^{\prime 2})M_1 \approx 2M_1$ at the weak scale. The maximally
stretched difference between the LSP and the NLSP
occurs when the LSP is pure Bino, and the NLSP a pure Wino. In this case $M_{LSP}=M_1$ and $M_{NLSP}=M_2=2M_1$.
A Higgsino admixture in these states can only make the difference {\it smaller}.}.
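The factor-of-two relation quoted in the footnote can be checked numerically; the value of the weak mixing angle below is an assumed input:

```python
# M2/M1 = (3 g^2)/(5 g'^2) = (3/5) cos^2(theta_W)/sin^2(theta_W),
# using tan(theta_W) = g'/g and the GUT-normalised hypercharge coupling.
sin2_thetaW = 0.2312          # assumed MS-bar value at the Z mass
m2_over_m1 = (3.0 / 5.0) * (1.0 - sin2_thetaW) / sin2_thetaW
# m2_over_m1 evaluates to roughly 2.0, as stated in the footnote
```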
In many models, the next-to-next lightest SUSY particle (the NNLSP) is close in mass to the
NLSP.
Therefore, another aspect of experimental importance is the mass-differences to the LSP
of both the lightest chargino and of the second lightest neutralino: either of these
could be either the NLSP or the NNLSP.
This aspect is shown in Figure \ref{fig:dmx1dmn2}, which displays $\Delta(M)$ for $\XPM{1}$ versus that of $\XN{2}$.
One notes three distinct regions:
\begin{itemize}
\item Bino LSP: Both mass differences quite similar, but can take any value;
\item Wino LSP: $\Delta(\MXC{1})$ will be small, while $\Delta(\MXN{2})$ can vary largely;
\item Higgsino LSP: Both mass differences often small.
\end{itemize}
Note, however, that in the Higgsino LSP case, few models are on the ``Higgsino line'',
i.e. the case where the chargino is {\it exactly} in the middle of
the mass-gap between the first and second neutralino.
Finally, Figure~\ref{fig:dmx1mlsp} shows $\Delta(M)$ for $\XPM{1}$ vs. $M_{LSP}$
for a Higgsino LSP or a Wino LSP.
Here, one can note that in both scenarios, quite a large spread is possible, and
that some Higgsino models actually have a chargino LSP.
The last feature is a point of disagreement between the results of {\tt SPheno} and {\tt FeynHiggs} -
the latter does not find models with a chargino LSP\footnote{Some differences are to be expected, as
{\tt SPheno} performs a pure $\overline{\mathrm{DR}}$ calculation, whereas {\tt FeynHiggs} performs an on-shell calculation. This issue needs further investigation.}.
\section{SUSY In the Briefing-book}
In the physics Briefing-book of the update of the European strategy for particle
physics \cite{Strategy:2019vxc}, the reach of searches for electroweak SUSY particles
at different proposed future accelerators is
presented in chapter 8.3.2, and illustrated by two figures. We will discuss these in this section.
\subsection{Bino LSP\label{sect:bbbino}}
\begin{wrapfigure}{R}{9cm}
\begin{center}
\subfloat[][$M_1 , \mu$>0, $M_2$<0]{\includegraphics [scale=0.25]{plots/bin_br_n2_case3_muless}}
\subfloat[][$M_1 , M_2<0$, $\mu$ >0]{\includegraphics [scale=0.25]{plots/bin_br_n2_case4_muless}}
\caption{Branching ratios of $\XPM{1} \rightarrow Z \XN{1}$ (blue)
and $\XPM{1} \rightarrow h \XN{1}$ (red) in the Bino LSP case,
with $|\mu| < |M_2|$, and different signs of $\mu, M_1$, and $M_2$.
The same grid in absolute values of $\mu, M_1$, and $M_2$ is used in
both (a) and (b).\label{fig:brnsigns}}
\end{center}
\end{wrapfigure}
Figure 8.9 in \cite{Strategy:2019vxc} (reproduced here as Figure \ref{fig:bbbino}) shows the
estimated reaches in the Bino LSP case, i.e. for the large $\Delta(M)$ case.
The signature for this scenario at pp-colliders is events with large
missing transverse momentum (MET).
Due to the large mass-difference, the missing momentum originates from the invisible SUSY particles themselves,
i.e. there is no need for a system recoiling against the SUSY particles.
To first order, this makes the analyses robust, since only the mass-difference
is needed to predict the signal topologies.
However, there are still a number of model-dependencies, discussed below.
The sources of the curves shown for the various pp options
are from the projection to HL-LHC by the ATLAS collaboration \cite{ATL-PHYS-PUB-2018-048}.
The result presented in that publication is the
solid red line in Figure \ref{fig:bbbino}a, and the actual plot from the paper is reproduced as Figure \ref{fig:bbbino}b.
This curve is extrapolated giving the HE-LHC curve (red-dotted).
Several things should be noted: The curve shown is the exclusion reach,
not the discovery reach (for the CLIC and ILC curves, the differences between exclusion and discovery reach
are less than the width of the lines in the figure).
The ATLAS result is only shown down to $\MXC{1}$= 500 GeV;
the region below this is just a guide-the-eye straight line.
The chosen decay-mode ($\XPM{1} \rightarrow Z \XN{1}$) is the most sensitive at low $\Delta(M)$.
The other mode ($\XPM{1} \rightarrow h \XN{1}$), shown in Figure \ref{fig:bbbino}c, is less
powerful in this region. On the other hand, the higgs mode is expected to probe higher $\MXC{1}$ at
the highest $\Delta(M)$. At CLIC or ILC, one does not need to make such a distinction.
The issue of the dominant decay mode is important, as illustrated in Figure \ref{fig:brnsigns}.
In these figures, we show the branching ratios in Bino-LSP models (i.e. models where $M_1$ is the smallest
of the bosino parameters), when only the relative signs
of $M_1,M_2$ and $\mu$ are modified.
The observation is that whether the $Z$ or the $h$ mode is dominant depends crucially on the
choice of the relative sign, and hence that the exclusion-region should be the {\it intersection} of
Figures \ref{fig:bbbino}b and c, not the {\it union}.
One can note that the exclusion region remains below a line with slope $\sim 1/2$
when luminosity and/or energy is increased. The reason for this is as follows:
Figure \ref{fig:fccxsect} shows how the cross-section varies with the sum of the
two bosino masses at FCChh-conditions. Here a simple setup - which is nevertheless adequate for
illustrating the scaling behaviour - was used to calculate
cross-section $\times$ branching-ratios using {\tt Whizard}. The process is
$pp \rightarrow$ uncoloured bosinos + gluon, with the {\tt Whizard}-default parton
density function (CTEQ6L1\cite{Pumplin:2002vw}). Two observations can be made. Firstly, there is a
close to exponential fall of the cross-section with mass. Secondly, the
cross-section at any given mass can vary by a factor $\sim$ 2, by varying the parameters
of the model.
\begin{figure}[t]
\begin{center}
\subfloat[][Higgsino LSP]{\includegraphics [scale=0.27]{plots/pp_hino_xsect}}
\subfloat[][Wino LSP]{\includegraphics [scale=0.27]{plots/pp_wino_xsect}}
\subfloat[][Bino LSP]{\includegraphics [scale=0.27]{plots/pp_bino_xsect}}
\caption{Cross sections for $pp \rightarrow$ two uncoloured bosinos + a gluon,
as a function of the sum of the masses of the two bosinos.
The five different final states are shown separately, as indicated in the figures.\label{fig:fccxsect}}
\end{center}
\end{figure}
The exponential fall-off with increasing mass comes about for the following reason:
If the mass of the interacting quark-pair would be fixed (equivalent to the
situation at a lepton collider, where the invariant mass of the initial state
is fixed (= 2$\times E_{beam}$)), the cross section versus the mass of
the produced bosino pair initially rises proportionally to $\beta$ - typical for fermion pair-production - followed
by a fall-off proportional to $\frac{1}{s}$, see Figure \ref{fig:xsectvsmqq}a.
Once this is folded with the distribution of $m_{qq}$ given by the rapidly falling
parton densities\footnote{Note that for the Drell-Yan production of the bosino pair,
at least one of the partons must come from the sea}, the actual distribution of $m_{qq}$
in events where bosino production took place - albeit broad - correlates with
the mass of the bosino pair (Figure \ref{fig:xsectvsmqq}b). This correlation is
close to linear, as can be seen in Figure \ref{fig:xsectvsmqq}c.
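The folding argument can be reproduced with a toy model. The exponential parton-luminosity slope and the $\beta/s$ partonic shape below are assumptions chosen only to exhibit the linear trend of the mean $m_{qq}$ with the bosino mass, not the actual slope:

```python
import math

def mean_mqq(M, lam=1.0, n=4000):
    """Toy mean of m_qq for pair production of two bosinos of mass M.

    Weight = exp(-m/lam)  (falling parton luminosity, assumed shape)
           * beta / m^2   (partonic pair-production cross-section shape).
    Units are arbitrary; lam sets the luminosity fall-off scale.
    """
    lo, hi = 2.0 * M * (1.0 + 1e-9), 2.0 * M + 20.0 * lam
    step = (hi - lo) / (n - 1)
    num = den = 0.0
    for i in range(n):
        m = lo + i * step
        beta = math.sqrt(max(0.0, 1.0 - (2.0 * M / m) ** 2))
        w = math.exp(-m / lam) * beta / m**2
        num += m * w
        den += w
    return num / den
```

In this toy the mean $m_{qq}$ grows essentially linearly with $M$: the rapidly falling luminosity pins the bulk of the events to a fixed distance above the production threshold $2M$.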
\begin{figure}[b]
\begin{center}
\subfloat[][]{\includegraphics [scale=0.26]{plots/qq_fixed_ecm_bino_xsect}}
\subfloat[][]{\includegraphics [scale=0.26]{plots/ecms_for_ino_masses}}
\subfloat[][]{\includegraphics [scale=0.26]{plots/mqq_vs_mino}}
\caption{Properties of $\XPM{1}$ production:
(a) Cross-section at fixed $m_{qq}$; (b) Distributions of $m_{qq}$ at different $\MXC{1}$ in pp;
(c) Average $m_{qq}$ vs. $\MXC{1}$ in pp. The error-bars
represent the r.m.s. of the distribution, not the r.m.s. of the mean.\label{fig:xsectvsmqq}}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\subfloat[][]{\includegraphics [scale=0.38]{plots/mqq_bck_vs_cut_for_mino}}
\hskip 0.3cm
\subfloat[][]{\includegraphics [scale=0.38]{plots/xsect_bck_vs_cut_for_mino}}
\caption{Properties of signal and background of $\XPM{1}$ production.
(a) Average $m_{qq}$ in $pp \rightarrow Zg \rightarrow \nu\bar{\nu}g$
versus $\MXC{1}$ when the cut on the $\nu\bar{\nu}$ transverse momentum is adjusted according to the
expected missing transverse momentum of a $\XPM{1}$. It is set to 0.85 $\MXC{1}$. (b)
Cross-section of $\XPM{1}$-pair production (red-solid),
together with that of $pp \rightarrow Zg \rightarrow \nu\bar{\nu}g$
with cuts on the $\nu\bar{\nu}$ increasing with
$\MXC{1}$ for three choices of $\Delta(M)$, keeping the cut at 75 \% of the
highest possible missing $p_T$ for the signal: nominal (blue-solid, cut = 0.85 $\MXC{1}$),
$\frac{3}{4}\Delta(M)_{nom.}$ (blue-dotted, cut = 0.7 $\MXC{1}$),
and $\frac{1}{2}\Delta(M)_{nom.}$ (blue-dashed, cut = 0.5 $\MXC{1}$).\label{fig:fccbckxsect}}
\end{center}
\end{figure}
Generally, the maximum missing momentum due to the invisible LSPs is
\begin{align}
\bcancel{p}_{max} = & 2 \gamma_{f\bar{f}} \beta_{f\bar{f}} \gamma_{\scriptscriptstyle NLSP} E_{\scriptscriptstyle LSP} + 2 \gamma_{f\bar{f}} \gamma_{\scriptscriptstyle NLSP} p_{\scriptscriptstyle LSP}
\end{align}
In the Bino case, the initial $f\bar{f}$-system need not be boosted, so $\gamma_{f\bar{f}} \beta_{f\bar{f}} \approx 0$, $ \gamma_{f\bar{f}} \approx 1$,
$\gamma_{\scriptscriptstyle NLSP} = E_{\scriptscriptstyle NLSP} / M_{\scriptscriptstyle NLSP} \approx M_{f\bar{f}}/ 2 M_{\scriptscriptstyle NLSP}$ and
\begin{align}
\bcancel{p}_{max} \approx & 2 \frac{M_{f\bar{f}}}{2 M_{\scriptscriptstyle NLSP}} p_{\scriptscriptstyle LSP} \approx 2 \frac{M_{f\bar{f}} }{2 M_{\scriptscriptstyle NLSP}} \frac {M^2_{\scriptscriptstyle NLSP}-M^2_{\scriptscriptstyle LSP}}{2 M_{\scriptscriptstyle NLSP}}
\end{align}
where the last step holds because,
at the interesting points, the SUSY particle mass is much above that of its SM partner (even if this is a $W$, $Z$ or $h$).
From Figure \ref{fig:xsectvsmqq}c, we know that $M_{f\bar{f}} \approx 3 M_{\scriptscriptstyle NLSP}$, and
\begin{align}
\bcancel{p}_{max} \approx & \frac{3}{2} M_{\scriptscriptstyle NLSP} \left ( 1 - (\frac {M_{\scriptscriptstyle LSP}}{M_{\scriptscriptstyle NLSP}})^2 \right )
\end{align}
Hence, at these LSP-NLSP mass-ratios, the missing $p_T$ due to the invisible LSP is proportional to
the bosino-mass.
This means that one can increase the missing $p_T$ cut while conserving a given signal efficiency as one
searches for higher bosino masses.
The missing $p_T$ from irreducible background (typically Drell-Yan + gluon, with
$Z\rightarrow \nu\bar{\nu}$) is obviously independent of the bosino masses, but does
depend on the required missing $p_T$, essentially the $p_T$ of the gluon.
Figure \ref{fig:fccbckxsect}a shows that if the $p_T$ cut applied is the one
that keeps the same signal efficiency at any given $\MXC{1}$,
the $M_{f\bar{f}}$ of the background events passing the cut also follows a linear trend
quite similar to that of the signal, seen in Figure \ref{fig:xsectvsmqq}c.
In the figure, the cut has been set to 0.85 $\MXC{1}$, which according to
Eq. 3 corresponds to 0.75 $\times$ the maximal missing $p_T$ from $\XPM{1}$ pair-production,
when $M_{NLSP}=2M_{LSP}$, as it is at the border of the currently excluded region.
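The choice of cut can be checked against Eq. 3 with a few lines (a sketch; Eq. 3 itself is only a rough, order-of-magnitude estimate):

```python
def p_max_bino(m_nlsp, m_lsp):
    """Eq. 3: maximal missing momentum in the unboosted (Bino-LSP) case,
    using the empirical correlation M_qq ~ 3 * M_NLSP."""
    return 1.5 * m_nlsp * (1.0 - (m_lsp / m_nlsp) ** 2)

# At the border of the excluded region, M_NLSP = 2 * M_LSP, so
# p_max = 1.5 * (1 - 1/4) * M_NLSP = 1.125 * M_NLSP, and a cut at
# 75% of this is ~0.84 * M_NLSP, i.e. the 0.85 * M cut used here.
```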
\begin{figure}[b]
\begin{center}
\subfloat[][Z mode]{\includegraphics [scale=0.9]{plots/brbk_susy_2}}
\subfloat[][Z mode]{\includegraphics [scale=0.4]{plots/ATL-PHYS-PUB-2018-031-fig5}}
\subfloat[][Z mode]{\includegraphics [scale=0.35]{plots/1812-07831-helhc}}
\caption{The reaches in the low $\Delta(M)$ (Higgsino- or Wino-LSP) region, as
reported in \cite{Strategy:2019vxc} (top),
and the two projections to HL-LHC from ATLAS \cite{ATL-PHYS-PUB-2018-031} and CMS\cite{Sirunyan:2018iwl,CMS:2018qsc}
(bottom). (b) corresponds to
the solid blue line in (a), (c) to the red line.\label{fig:bbwinohiggsino}}
\end{center}
\end{figure}
The cross-section for this process therefore also falls exponentially with the required
$p_T$, (see Figure \ref{fig:fccbckxsect}b) meaning that the
signal-to-background ratio will remain constant along lines through the origin in the $\MXC{1}$ vs. $M_{LSP}$ plane.
On the other hand, Eq. 3 also shows that the missing $p_T$ decreases with increasing
$M_{\scriptscriptstyle LSP}/M_{\scriptscriptstyle NLSP}$, meaning that to exclude lower $\Delta(M)$ at the same efficiency requires
a decrease in the cut, leading to a large increase in the background.
Comparing the solid and dashed blue lines in Figure \ref{fig:fccbckxsect}b shows that to halve
the excluded $\Delta(M)$ at $\MXC{1}$ = 2 TeV would require $\sim$ 10 times
more luminosity, assuming that no new background sources would start contributing (e.g. jet-energy resolution, jet-energy scale, non-direct
neutrinos, etc.), which clearly is an unrealistic best-case.
To conclude this discussion of the Bino LSP case, we note that although the signal is robust,
there are a number of issues that must be taken into account: The analyses are typically performed
using a set of processes involving production of different combinations of LSPs, NLSPs and NNLSPs.
The sensitivity is different for different channels, and Figure \ref{fig:fccxsect}c shows that the cross-sections,
and their ratios, can vary substantially between models.
Furthermore, we also noted that the dominating decay mode of the second neutralino is strongly dependent
on the relative signs of $\mu, M_1$ and $M_2$, and that the sensitivity of the analyses depends also on this.
To claim that an $M_{NLSP}$-$M_{LSP}$ combination is excluded, the analysis must be done assuming the least favourable
production-process and least favourable decay-mode.
Finally, we pointed out that extending the coverage to higher NLSP masses at constant LSP mass,
while retaining the same
signal efficiency, can be done by making the cut on MET {\it stronger}, and that the
signal-to-background ratio will remain constant when doing so.
In contrast,
to
extend the coverage to higher LSP masses at constant NLSP mass (i.e. to lower $\Delta(M)$) at constant
signal efficiency, one must
make the MET-cut {\it weaker}, thus lowering the signal-to-background ratio.
A lower MET-cut also implies that proportionally more background originates from fake MET due
to detector effects, or from non-prompt neutrinos.
The conclusion is that while progress with increased (parton) luminosity in the $M_{NLSP}$ direction
is substantial, the progress into the region of lower $\Delta(M)$ will be much less pronounced.
\subsection{Wino/Higgsino LSP}
Figure 8.10 in \cite{Strategy:2019vxc} (reproduced here as Figure \ref{fig:bbwinohiggsino}) shows the
estimated reaches in the Wino or Higgsino LSP case, i.e. for the small $\Delta(M)$ case.
The two curves come from the HL-LHC projections from ATLAS \cite{ATL-PHYS-PUB-2018-031} (solid blue) and CMS \cite{Sirunyan:2018iwl,CMS:2018qsc}
(solid red).
In the CMS case, energy and luminosity
scaling extrapolation to HE-LHC (dashed red) and FCChh (dashed magenta) are also done.
Both collaborations make assumptions on the spectrum, however different ones:
ATLAS assumes $\Delta(\MXC{1})= \Delta(\MXN{2})/2$, while CMS assumes $\Delta(\MXC{1})= \Delta(\MXN{2})$.
In Figure \ref{fig:dmx1mlsp}a, the points in our scan that fulfil the ATLAS condition are marked with squares
and those that fulfil the CMS one are marked with triangles.
\begin{figure}[b]
\begin{center}
\subfloat[][Higgsino-LSP, from \cite{Han:2018wus}]{\includegraphics [scale=1.0]{plots/han_et_all_dm_1805-00015_0014_fig2}}
\subfloat[][Wino-LSP, from \cite{Han:2018wus}]{\includegraphics [scale=1.0]{plots/han_et_all_dm_1805-00015_0014_fig1}}
\subfloat[][Higgsino-LSP, from \cite{Benedikt:2018csr} ]{\includegraphics [scale=1.05]{plots/fcchh_cdr_CERN-ACC-2018-0058-p184_fig2}}
\subfloat[][Wino-LSP, from \cite{Benedikt:2018csr}]{\includegraphics [scale=1.05]{plots/fcchh_cdr_CERN-ACC-2018-0058-p184_fig1}}
\caption{The sensitivities for the ``Disappearing tracks'' at future pp-colliders.
In (c) and (d), the grey bands correspond to the actual FCChh conceptual detector.\label{fig:delphesdis}}
\end{center}
\end{figure}
The reason for the sharp cut-off at low mass-differences, seen in \ref{fig:bbwinohiggsino}a,
is that these searches require leptonic
decays of the NLSP in order to be able to extract a signal from a huge, mainly hadronic, QCD background.
Lepton identification is therefore essential, and this requires that the particle reaches the
barrel calorimeters. The cut-off then appears because below this mass-difference, the decay products
are so soft that they are bent back by the detector B-field inside the radius of the calorimeters.
This cut-off would be at higher $\Delta(M)$ at FCChh, since the reference detector design \cite{Benedikt:2018csr} foresees
a considerably larger inner radius of the barrel calorimeter system ($\sim$ 2 m, while ATLAS and
CMS have an inner radius of 1.5 m and 1.3 m, respectively), and a stronger B-field
(4 T vs. 2 (3.8) T for ATLAS (CMS)).
From this, one also sees that the CMS properties are closer to the FCChh detector in this respect:
cut-off transverse momenta are 1.2, 0.7 and 0.4 GeV for FCChh, CMS and ATLAS, respectively.
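These cut-off momenta follow from elementary track bending: a particle with transverse momentum $p_T$ [GeV] in a field $B$ [T] has a bending radius $p_T/(0.3B)$ [m], and loops back inside the calorimeter radius $r$ when the bending radius is below $r/2$. A minimal sketch:

```python
def pt_cutoff_gev(b_tesla, r_calo_m):
    """Minimum transverse momentum (GeV) needed to reach a calorimeter
    at radius r_calo_m in a solenoid field b_tesla:
    p_T = 0.3 * B * (r/2)."""
    return 0.15 * b_tesla * r_calo_m

# FCChh reference detector: 4 T,   2 m   -> ~1.2 GeV
# CMS:                      3.8 T, 1.3 m -> ~0.7 GeV
# ATLAS:                    2 T,   1.5 m -> ~0.45 GeV (quoted above as 0.4)
```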
In addition, the analyses use the combination of NLSP-NLSP, NLSP-NNLSP and NNLSP-NNLSP production, and assume
certain relations between the mass difference to the LSP of the NLSP or NNLSP,
different for ATLAS and CMS, as mentioned above.
Our scan shows that
these relations correspond to quite particular cases.
Whether these assumptions are
essential or not is an open question, but we do conclude that the soft-lepton analysis will progress
to higher NLSP masses, but not to lower $\Delta(M)$, and remains model-dependent.
\begin{figure}[b]
\begin{center}
\subfloat[][Higgsino LSP, SPheno]{\includegraphics [scale=0.27]{plots/hino_dmc1_mn1_zoom_w_dh_405_gut}}
\hskip 1.0cm
\subfloat[][Higgsino LSP, SPheno]{\includegraphics [scale=0.27]{plots/hino_dmc1_mn1_zoom_signs_w_dh_405}}
\subfloat[][Higgsino LSP, FeynHiggs]{\includegraphics [scale=0.27]{plots/hino_dmc1_mn1_zoom_signs_fh}}
\hskip 1.0cm
\subfloat[][Wino LSP, SPheno]{\includegraphics [scale=0.27]{plots/wino_dmc1_mn1_zoom_signs_405}}
\caption{Zoom-in of Figure \ref{fig:dmx1mlsp}, showing $\Delta(\MXC{1})$ vs. $M_{LSP}$ in the small $\Delta(M)$ region.
In (a) only models with $\mu, M_1$, and $M_2$ all positive
are shown.
In (b) and (c) all models are shown, for both spectrum calculation codes we used.
The lines are from \cite{Fukuda:2017jmk} (a,b,c) and \cite{Ibe:2012sx}(d),
and are the mass-differences used in the calculation of the reaches shown in Figures \ref{fig:delphesdis}
and \ref{fig:delphesmonox}. In (a) and (b), the
squares are points where $\Delta(\MXC{1})=\Delta(\MXN{2})/2$, i.e. the ``deep Higgsino'' region;
the colours are explained in Figure \ref{fig:broadbrush}.\label{fig:bosinodmzoom}}
\end{center}
\end{figure}
The hatched band at the bottom of Figure \ref{fig:bbwinohiggsino}a shows the reach at very low $\Delta(M)$. The upper edge of
the band at $\Delta(M)$=1 GeV should not be taken literally; only the reach in $M_{LSP}$ is relevant.
Two methods are used to estimate the reach at very low $\Delta(M)$: ``Disappearing tracks'' and ``Mono-X''.
The ``Disappearing tracks'' signature consists of a topology where a reconstructed trajectory
terminates inside the tracking volume, indicating that a decay took place in which the decay product
had a momentum below the threshold of detectability. This signature is effective for cases with low mass-differences,
since this potentially implies both a long lifetime of the primary NLSP and little energy release in its
decay.
The ``Mono-X'' technique is effective if the decay products of the produced bosinos are so soft that
they are invisible, or if only LSP-pairs are produced. One then searches for a large unbalanced
$p_{T}$ from an initial state radiation, which could be a gluon, photon, $Z$, $W$ or $h$. In pp-collisions,
the gluon would be the prime candidate.
\begin{figure}[t]
\begin{center}
\subfloat[][Higgsino LSP]{\includegraphics [scale=0.27]{plots/hino_ctau_mn1_405}}
\subfloat[][Wino LSP]{\includegraphics [scale=0.27]{plots/wino_ctau_mn1_405}}
\subfloat[][]{\includegraphics [scale=0.27]{plots/ctau_vs_dm_hino}}
\caption{$c\tau(\XPM{1})$ vs. $M_{LSP}$ for Higgsino (a) and Wino (b) LSP, and $c\tau(\XPM{1})$ vs. $\MXC{1}$ for
Higgsino LSP (c).
Colours as explained in Figure \ref{fig:broadbrush}.\label{fig:ctau}}
\end{center}
\end{figure}
The ``Disappearing tracks'' method was used by FCChh (in the CDR\cite{Benedikt:2018csr}), as well as in
\cite{Han:2018wus}. The two results
are shown in Figure \ref{fig:delphesdis}. The upper row is from \cite{Han:2018wus}, while the lower row is
from \cite{Benedikt:2018csr}. In the latter, the grey curves should be considered: the pink ones show what could be
obtained if the innermost layer of the vertex detector were placed much closer to the beam than what is
assumed to be the closest conceivable radius, given the radiation levels expected \cite{Benedikt:2018csr}.
One can observe a large discrepancy between the results in the upper and lower row in the figure.
Both are based on Delphes parameterised fast simulation \cite{deFavereau:2013fsa}, but
the FCChh analysis is more realistic, in that it assumes
the detector described in the CDR, and a more FCChh-like number of pile-up events (even though it only assumes
a pile-up of 500, rather than the 955 stated elsewhere in the CDR).
The analysis of \cite{Han:2018wus} simply uses the current ATLAS setup of Delphes, presumably meaning a pile-up level
of LHC, which is some 20 times lower than that foreseen at FCChh.
It should be noted that the CDR detector (the grey bands) has its innermost layer closer to the beam than the current ATLAS detector,
and should therefore actually be more powerful than ATLAS, in stark contrast with what is seen.
One observes that for Higgsinos, the significance of a signal only barely reaches two sigma.
This is quite different from what is found in \cite{Han:2018wus}, and reflects what a more realistic simulation would yield.
The key element for the ``Disappearing tracks'' analysis is the magnitude of $\Delta(M)$.
Figures \ref{fig:bosinodmzoom}a,b are a zoom-in of Figure \ref{fig:dmx1mlsp}a, showing the
Higgsino LSP case. In the figure, the
absolute lower limit of $\Delta(M)$ mentioned in the Briefing-book, which was given in
\cite{Fukuda:2017jmk}, is also shown. Figure \ref{fig:bosinodmzoom}a shows models where
$\mu$, $M_1$ and $M_2$ are all
positive. In this case, the lower limit is respected, but reached only by a few models.
Figure \ref{fig:bosinodmzoom}b shows the situation after the full scan
where any combination of signs of $\mu$, $M_1$ and $M_2$ is allowed.
Clearly the limit is violated, and this is because the calculation in \cite{Fukuda:2017jmk}
only refers to {\it SM} effects on the mass-splitting, assuming that mixing effects between the
SUSY fields are negligible. This situation occurs in the ``deep Higgsino'' region where $M_1, M_2 \gg \mu$.
In Figure \ref{fig:dmx1dmn2}b, the models that are in this region are those that lie on the line
labelled ``Pure Higgsino line''.
One also notes that many models, in particular those where $\mu$ is negative, feature a
chargino LSP.
As already mentioned, {\tt SPheno} and {\tt FeynHiggs} give different results in this case, and
Figure \ref{fig:bosinodmzoom}c shows the spectrum under the same conditions as in Figure \ref{fig:bosinodmzoom}b,
but calculated with {\tt FeynHiggs} rather than {\tt SPheno}. One sees that
{\tt FeynHiggs} does not yield chargino LSPs, but does not seem to respect the limiting mass-difference.
This observation is interesting, but not essential for the question of guaranteed exclusion: The important feature
in this respect is that there {\it are} models with $\Delta(M)$ = 1 GeV and above at all $M_{LSP}$, i.e.
far above what is reachable with the ``Disappearing tracks'' method,
and that the two codes agree on this.
In Figure \ref{fig:bosinodmzoom}d, the corresponding zoom of Figure \ref{fig:dmx1mlsp}b is shown,
and illustrates the Wino LSP case.
The line in \ref{fig:bosinodmzoom}d is the lower limit given in \cite{Ibe:2012sx},
which as can be seen is respected, but is by no means attained by all models.
It is also worth mentioning that the Wino LSP scenario, by its very construction,
does not allow for GUT-scale $M_1$-$M_2$ unification.
\begin{figure}[b]
\begin{center}
\subfloat[][Higgsino LSP]{\includegraphics [scale=1.]{plots/han_et_all_dm_1805-00015_0013_fig2}}
\subfloat[][Wino LSP Z]{\includegraphics [scale=1.]{plots/han_et_all_dm_1805-00015_0013_fig1}}
\caption{The sensitivities for the ``Mono-X'' technique at future pp-colliders. From \cite{Han:2018wus}.
\label{fig:delphesmonox}}
\end{center}
\end{figure}
For the ``Disappearing track'' analysis, the decay-length needs to be macroscopic.
In \cite{Aaboud:2017mpt}, the ATLAS collaboration reported their results
on this type of search at 13 TeV.
They found that the search is effective for lifetimes of about 200 ps,
corresponding to a $c\tau$ of about 6 cm.
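The conversion from lifetime to decay length is simply $c\tau$; a one-line check (the function name is ours, for illustration only):

```python
# Decay length c*tau in cm for a given proper lifetime in seconds.
C_CM_PER_S = 2.998e10  # speed of light in cm/s

def decay_length_cm(tau_s):
    return C_CM_PER_S * tau_s

print(f"{decay_length_cm(200e-12):.1f} cm")  # 6.0 cm
```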
Figure \ref{fig:ctau} shows $c\tau$ for the different considered models.
One can see that in the Higgsino case, hardly any points in our scan would yield a decay-length
long enough - in fact most of the models have a $c\tau$ below 1 mm.
In the Wino LSP case, on the other hand, there are good chances that $c\tau$ would
be 1 cm or more. There are, however, even in this case models where $c\tau$ is below 1 mm,
so a non-observation of a signal cannot be used to infer that the Wino LSP hypothesis is
excluded.
In Figure \ref{fig:ctau}c, the dependence of $c\tau$ on $\Delta(M)$ for a Higgsino LSP
is shown. One can note that $c\tau$ exceeds 1 mm only for $\Delta(M)$ less than
600 MeV, so in fact the excluded region from disappearing tracks is off the vertical
scale in Briefing-book Figure \ref{fig:bbwinohiggsino}.
For the ``Mono-X'' signature, the source of the limits is \cite{Han:2018wus}, and the
key figures from that publication are shown in
Figure \ref{fig:delphesmonox}. It should be noted that these figures were included in the HE/HL-LHC
input to ESU \cite{esuhehl},
not the FCChh one \cite{esufcchh}, nor in the FCChh CDR \cite{Benedikt:2018csr}.
As mentioned above, the analysis is based on Delphes fast simulation using the ATLAS-card,
and we saw that when applied to the ``Disappearing tracks'' it gave results far better than
those of the more realistic analysis in the FCChh CDR.
Furthermore, by scrutinising the dependence of the significance of a signal versus the
mass and assumed systematic errors, one
can conclude that the results are systematics limited, with systematics assumed to be between 1 and 2 \%.
This can be contrasted to existing ``Mono-X'' analyses from both ATLAS \cite{Aaboud:2017phn} and CMS \mcite{cmsmomox,*Chatrchyan:2012me,*Sirunyan:2017jix}, which
both estimate systematic errors at the level of 10 \%, with a pile-up 20 times lower than that expected at FCChh.
It is also noteworthy that, to date, there are no results from ATLAS or CMS where their ``Mono-X'' searches have
been used to infer any conclusions about SUSY.
\section{Summary of the ILC projections\label{sec:ilcproj}}
\begin{figure}[b]
\begin{center}
\subfloat[][Higgsino LSP]{\includegraphics [scale=0.37]{plots/exclusion_complete_higgsino}}
\subfloat[][Wino LSP]{\includegraphics [scale=0.37]{plots/exclusion_complete_wino}}
\caption{Exclusion reaches for $\XPM{1}$ at LEP II~\cite{lepsusywg}, ILC-500 \cite{PardodeVera:2020zlr},
and LHC \cite{Aad:2019qnd,ATL-PHYS-PUB-2017-019,Aaboud:2017mpt,ATL-PHYS-PUB-2017-019}.
The LEP II, ILC, and ATLAS ``disappearing tracks'' analyses are valid without any assumption, while the
ATLAS ``soft leptons'' analysis assumes $M_1, M_2 \gg \mu$.\label{fig:ilchinowino}}
\end{center}
\end{figure}
In this section, we make a brief summary of the expected performance of the SUSY searches at the ILC.
\begin{figure}[t]
\begin{center}
\subfloat[][Full range]{\includegraphics [scale=0.37]{plots/stau_nlsp_reach}}
\subfloat[][Zoom to last 50 GeV]{\includegraphics [scale=0.37]{plots/stau_nlsp_reach_zoom}}
\caption{Reach at ILC-500 for the $\stau$ NLSP case. From \cite{Berggren:2013vna} \label{fig:ilcstau}}
\end{center}
\end{figure}
A more comprehensive account can be found in \cite{Bambade:2019fyw,Fujii:2017ekh,Berggren:2013bua}.
Figure \ref{fig:ilchinowino}, from \cite{PardodeVera:2020zlr}, shows the reach of a 500 GeV ILC
in the search for $\XPM{1}$ in the Higgsino- and Wino-LSP cases.
These projections were obtained by extrapolation of the LEP II results \cite{lepsusywg}, using
background-levels and signal-efficiencies as reported in \cite{lepsusywg}, assuming no other
ameliorations over LEP II than increased beam energy, beam-polarisation, and data set size.
This is clearly a very conservative assumption as it neglects the progress in detector
technology, reported in volume 4 of the ILC TDR \cite{Behnke:2013lya}.
In \cite{PardodeVera:2020zlr}, it is shown that if $\Delta(M) > 3$ GeV, exclusion and
discovery-reach are only a few hundred MeV apart, and if $\Delta(M)$ is between 3 GeV and $m_\pi$,
\begin{figure}[b]
\begin{center}
\subfloat[][$\Delta{M}$=1.6 GeV. From \cite{Berggren:2013vfa}]{\includegraphics [scale=0.25]{plots/MrecoilC_mh124}}
\subfloat[][$\Delta{M}$=4.4 GeV. From \cite{Baer:2019gvu}]{\includegraphics [scale=0.24]{plots/fit_ngmm1_MeR}}
\subfloat[][$\Delta{M}$=9.7 GeV. From \cite{Baer:2019gvu}]{\includegraphics [scale=0.24]{plots/fit_ilc2_EeR}}
\caption{Examples of Higgsino signals at ILC-500, and various models and mass-differences.
The assumed integrated luminosity
is 500 fb$^{-1}$ in all cases. In (a) $\XPM{1}$ is the NLSP, in (b) and (c) it is
the $\XN{2}$. The spike in (b) is the $J/\Psi$.
\label{fig:ilchinopoints}}
\end{center}
\end{figure}
they are at most 5 GeV from each other. The only caveat is in a very particular situation
where destructive interference between the s-channel and the sneutrino-induced t-channel
could reduce the production cross-section drastically for the Wino-LSP case \cite{Choi:1998ut}. This only
happens when the sneutrino mass is close to the beam-energy, and in most of that
parameter-space, the sneutrino, not the $\XPM{1}$, is the NLSP.
The experimental implications of such a low mass sneutrino
were not studied\footnote{In \cite{MoortgatPick:1999ck}, a theoretical study of this
situation was undertaken. The experimental issues will be a topic for future studies.
}.
Even in this case,
exclusion is guaranteed up to $\MXC{1}$=246 GeV, discovery to 243 GeV,
if $\Delta(M) > 3$ GeV.
There is a substantial loss of reach only in the region where $\Delta(M)$ is between 3 GeV and $m_\pi$, where
exclusion is guaranteed up to $\MXC{1}$=225 GeV, discovery to 205 GeV.
However, one should keep in mind that the main reason for the drop in efficiency at LEP II in this
$\Delta(M)$ region was trigger-efficiency, and the ILC detectors are to be run trigger-less.
In \cite{Berggren:2013vna}, a SUSY parameter scan using detailed fast simulation of the ILD at ILC was done,
to establish the reach in the experimentally worst case NLSP, namely the $\stone$ with the mixing angle
in the $\stau$ sector that minimises the production cross-section.
The results are shown in Figure \ref{fig:ilcstau}, showing that also in this case,
exclusion (discovery) is possible to 10 (20) GeV below the kinematic limit, already with a modest integrated
luminosity of 500 fb$^{-1}$, less than a third of what is expected at the favourable beam-polarisation settings
({\it viz.} right-handed electron, left-handed positron).
One can note that the limits are valid only if $\Delta(M) > 3$-4 GeV;
however, no dedicated low $\Delta(M)$ analysis was done in \cite{Berggren:2013vna} - it is the
subject of an ongoing study.
The same publication also studied the arguably most favourable NLSP candidate, the smuon, and found
that exclusion (discovery) would be assured at 2(4) GeV below the kinematic limit.
Several full simulation studies have been done at particular Higgsino-LSP model-points, typically with modest to very low
$\Delta(M)$ \cite{Berggren:2013vfa,Baer:2019gvu}.
A few examples are shown in Figure \ref{fig:ilchinopoints}, and illustrate how clean the signal is expected to be.
\section{Summary and Conclusions}
We have discussed the landscape of possible MSSM models that could
have a next-to-lightest SUSY particle (an NLSP) in reach of future
HEP facilities. We have concentrated on the case of an electroweak bosino
NLSP, as this in almost all cases is the most challenging one, in addition to
being quite likely.
In doing so, we scanned over a grid of values of $\mu$, $M_1$, and $M_2$,
with the only constraint that the NLSP should not be heavier than a few TeV.
We did {\it not} require that the models contained a viable dark matter
candidate, solved the naturalness problem, gave an explanation to
the g-2 anomaly, etc, nor that there were any particular relations
between the parameters. In this way, the study carries no prejudice
on any SUSY breaking scheme, nor the possible existence of other
beyond the standard model phenomena.
We confronted our findings on possible spectra, cross-sections and decay
branching-ratios to projections done for the various options for
future facilities, taking into consideration the detail and maturity
of both the projects and the individual analyses.
We concentrated on future pp-colliders, in particular FCChh,
only briefly touching upon the e$^+$e$^-$-colliders (mainly because
the conclusions about the latter are very simple: either discovery or
exclusion is guaranteed up to a few GeV below the kinematic limit,
under all circumstances).
For the high $\Delta(M)$ {\it Bino-region},
the signal at pp-colliders is unambiguous, in the sense that it consists of missing
transverse momentum (MET), originating from the invisible SUSY particles themselves,
without need for a system recoiling against the SUSY particles.
We note, however, that the relative contributions from different possible processes
can vary over a large range, as can the decay branching ratios. Since the sensitivity
of the analyses depends on this, to claim exclusion one must establish which of these
yields the lowest sensitivity. This is usually not done, but rather a single
representative model is assumed.
A further observation is that
there is a simple scaling of the reach to be expected (Sect. \ref{sect:bbbino}),
which is corroborated by the data at LHC at 7 and 13 TeV respectively, and the
thorough HL-LHC projections.
This leads to the expectation that the reach will be extended far at the highest mass-differences,
while only modest progress can be hoped for to lower mass-differences.
The models currently excluded
are only those where unification of $M_1$ and $M_2$ at the
GUT-scale does not occur; only little progress can be expected into the region where
GUT-scale unification is possible.
For the low $\Delta(M)$ region, the {\it Wino-} and {\it Higgsino-} regions,
the MET from SUSY itself is too small to constitute a signal-signature, and
more channel-specific searches are needed, in conjunction with the presence of a sizeable
system recoiling against the SUSY particles.
At mass-differences down to a few GeV, leptonic decays can be searched for.
Due to the need for lepton-identification, this method will not be able to reach
mass differences as low at FCChh as those attained at LHC.
At mass differences an order of magnitude lower,
the lifetime of the NLSP might become long enough that its decay in the
detector would be observable (the ``Disappearing track'' technique).
FCChh prospects for Higgsinos with this technique are not promising,
while they are for Winos. This is due to the expected lower mass-differences
in the latter case.
In existing analyses from ATLAS and CMS of both these techniques, as well as in the
projections, very specific model-points have been assumed, usually corresponding to
situations where the mass-splitting is only due to SM loop-effects, ignoring the
effects of mixing in the SUSY sector. Our parameter-scan shows that these
assumptions are quite aggressive, and completely different mass-spectra are commonplace.
In fact, with one of the spectrum calculators we used, some models actually acquire a $\XPM{1}$ LSP.
A second technique to probe $\Delta(M)$ below the cut-off of the soft-lepton technique is
the ``Mono-X'' one, where the decay of the NLSP is assumed to be undetectable, and the
signal would be the presence of a high $P_T$ mono-jet (or photon, Z, W or higgs), recoiling
against an invisible system.
The power of this technique has not been evaluated to a level that allows for any conclusions,
nor has it been used by ATLAS or CMS in a SUSY context.
In conclusion,
future pp-colliders do have a large {\it discovery reach}, where it is
permissible to assume that the model realised in nature is {\it favourable}.
However, the {\it exclusion reach}, where one must assume that the model
realised is the {\it least favourable}, is quite modest, and has not been evaluated
in great detail.
Notwithstanding the gaps in finding the least favourable model,
one can already note that the regions where the mass-differences are considerable, but
still small enough to allow models with GUT-scale $M_1$-$M_2$ unification,
will to a large extent remain uncovered.
Furthermore, the low mass-difference region leaves gaps both
above and below the range that the soft-lepton method can cover,
regions where both Higgsino- and Wino-LSP models thrive.
A window of opportunity exists at very small differences,
but only for very specific models.
None of these shortcomings are present for future TeV-scale e$^+$e$^-$ colliders.
At these facilities, SUSY will be excluded or discovered up to the kinematic
limit, under all circumstances, and remain {\it the} option to exhaustively test the
hypothesis of weak-scale SUSY.
\printbibliography[title=References]
\end{document}
\section{Introduction}
\label{Introduction}
In magnetic compounds lacking inversion symmetry, the handedness of the underlying crystal structure induces an asymmetric exchange coupling called the \textit{Dzyaloshinskii-Moriya} (DM) interaction~\cite{Dz64} which stabilizes long-period spatial modulations of the magnetization with a fixed rotation sense~\cite{Dz64, Bak80}. There has been a renewed interest in chiral helimagnetism in the last few years since the discovery of two-dimensional localized modulations called \textit{chiral skyrmions}~\cite{Bogdanov89,Bogdanov94, Romming13, Leonov16}.
In most nonlinear physical systems, similar two-dimensional localized states are radially unstable and collapse spontaneously into linear singularities. Chiral interactions provide a unique stabilization
mechanism, protecting two-dimensional localized states from this instability \cite{Bogdanov94,Leonov16}. That is why non-centrosymmetric magnets and other chiral condensed matter systems are of special interest in fundamental physics and mathematics as a particular class of materials where skyrmions can exist~\cite{Rossler06, Melcher15}. Chiral magnetic skyrmions are also considered promising objects for various spintronic applications, notably racetrack computer memories~\cite{Kiselev11,Fert13, Parkin15}.
The generic phase diagram for the magnetic states which can occur in non-centrosymmetric magnets is shown in Fig.~\ref{crankshaft}(d). Skyrmions occur as lattices or as isolated particles within a different magnetic phase. The chiral skyrmions that were theoretically introduced in Refs.~\onlinecite{Bogdanov89, Bogdanov94, Bogdanov95} and experimentally investigated in nanolayers of chiral ferromagnets \cite{Romming13,Leonov16} were embedded in the magnetically saturated phase in which all the atomic spins are parallel. Recently, fundamentally different solitonic states embedded in the cone phase of chiral ferromagnets have been investigated mathematically \cite{LeonovJPCM16}. Unlike the axisymmetric skyrmions, these three-dimensional chiral solitons are inhomogeneous along their axes and asymmetric in the basal planes as shown in Fig. 1(a-c)\cite{LeonovJPCM16}. Whereas skyrmions embedded in the saturated phase repel one another, these skyrmions are mutually attractive and so tend to produce clusters of skyrmions~\cite{LeonovJPCM16, LeonovAPL16}.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{fig1sky.pdf}
\caption{\label{crankshaft} Magnetic structures of chiral skyrmions with zero anisotropy. (a--c) show color plots of the out-of-plane magnetic moment, $m_z (x, y)$, for (a) an axisymmetric skyrmion in the saturated state ($h$ = 1.1) and (b) a non-axisymmetric skyrmion in the cone phase ($h$ = 0.6). (c) shows a stack of magnetization planes $m_z(x,y)$ with different values of $z$ for the non-axisymmetric skyrmion. In the generic phase diagram of non-centrosymmetric ferromagnets (d), axisymmetric skyrmions (a) exist within the saturated phase and transform continuously into non-axisymmetric skyrmions (b) during the transition into the cone phase at the critical line $H_{C}$.}
\end{figure}
Non-axisymmetric skyrmions have been investigated theoretically in bulk~\cite{LeonovJPCM16} and confined chiral helimagnets~\cite{LeonovAPL16} but no experimental observations of these states have been reported to date. Here we show the first observation of attractive skyrmions in the cone phase of the cubic helimagnet Cu$_2$OSeO$_3$ by presenting transmission electron micrographs of skyrmion clusters and provide a theoretical description of these results.
\section{Theory}
\label{Theory}
Chiral solitons and modulated phases are described mathematically by equations which minimize the energy functional for a chiral ferromagnet~\cite{Dz64,Leonov16}:
\begin{eqnarray}
w = A (\mathbf{grad}\,\mathbf{M})^2 +
K (\mathbf{M}\cdot \mathbf{n})^2
- \mu_0 \mathbf{M}\cdot\mathbf{H} + w_D, \,\,
\label{density}
\end{eqnarray}
where $\mathbf{M}$ is the magnetization $\mathbf{M}=M(\sin\theta\cos\psi;\sin\theta\sin\psi;\cos\theta)$ with $\theta$ being the polar angle and $\psi$ being the azimuthal angle between each magnetic moment and the applied magnetic field $\mathbf{H}$ which points in the $z$ direction. $A$ is the exchange stiffness constant, $K$ is the uniaxial anisotropy constant and $\mathbf{n}$ is the unity vector along uniaxial anisotropy axis. The DM energy functionals $w_D$ are composed of Lifshitz invariants $\mathcal{L}^{(k)}_{i,j} = M_i \partial M_j/\partial x_k - M_j \partial M_i/\partial x_k $~\cite{Dz64,Bogdanov89} where $x$ is the spatial coordinate.
Denoting the distance from the skyrmion axis as $\rho$, skyrmions in the cone phase approach the solutions for the cone phase~\cite{Butenko10,Wilson14} ($\theta_c, \psi_c$) at large distances from the axis \cite{Leonov16}:
\begin{eqnarray}
\theta_{\rho\rightarrow \infty} = \theta_c = \arccos \left(H/H_C \right), \,
\psi_{ \infty} = \psi_c = 2\pi z/L_D, \, \,
\label{cone}
\end{eqnarray}
where $H_C = H_D \left( 1- K/K_0 \right)$ is the magnetic field above which the saturated state forms and $H_D =D^2 M/(2A)$. The pitch of the cone phase is $L_D = 4\pi A/|D|$ and $K_0 = D^2 /(4A)$ where $D$ is the Dzyaloshinskii-Moriya (DM) coupling energy.
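To make these relations concrete, the sketch below evaluates $L_D$, $H_D$, $H_C$ and the cone angle $\theta_c$ for purely illustrative parameter values (the numbers are not fitted to any material, and all units are arbitrary):

```python
import math

# Illustrative (not material-specific) parameter values.
A = 1.0   # exchange stiffness
D = 0.5   # Dzyaloshinskii-Moriya coupling
M = 1.0   # magnetization
K = 0.0   # uniaxial anisotropy (zero, as in Fig. 1)

L_D = 4 * math.pi * A / abs(D)   # pitch of the cone phase
K_0 = D**2 / (4 * A)
H_D = D**2 * M / (2 * A)
H_C = H_D * (1 - K / K_0)        # field above which the saturated state forms

def cone_angle(H):
    """Polar angle of the cone phase, theta_c = arccos(H / H_C)."""
    return math.acos(H / H_C)

# The cone closes into the saturated state as H -> H_C ...
print(cone_angle(H_C))   # 0.0
# ... and opens into a flat helix at H = 0 (theta_c = pi/2).
print(cone_angle(0.0))
```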
For the important case of non-centrosymmetric cubic ferromagnets (with space group P2$_1$3), the DM energy functional is reduced to the isotropic form~\cite{Bak80}: $w_D = D(\mathcal{L}_{yx}^{(z)}+\mathcal{L}_{xz}^{(y)}+\mathcal{L}_{zy}^{(x)})=D\,\mathbf{M}\cdot \mathrm{rot}\mathbf{M}$. The calculated phase diagram for this case is representative of a whole
class of non-centrosymmetric ferromagnets and is shown in Fig.~\ref{crankshaft}(d)~\cite{Butenko10}. In this phase diagram, axisymmetric skyrmions exist as ensembles of weakly repulsive strings within the magnetically saturated phase and non-axisymmetric skyrmions inhabit the cone phase. During the phase transition from the saturated to the cone phase through the critical line $H_C$ (Fig. \ref{crankshaft}(d)), axisymmetric skyrmions continuously transform into non-axisymmetric skyrmions. The structure of these non-axisymmetric skyrmions is discussed in detail in Supplemental Information I~\cite{supp}, and unlike the axisymmetric skyrmions, the point at which $\theta=\pi$ no longer lies on the central axis of the skyrmion; instead its position oscillates on moving in $z$ from one layer to the next. Experimentally, we produced individual non-axisymmetric skyrmions and their clusters by starting in the skyrmion lattice phase and increasing the magnetic field so the lattice fragments during the first-order transition to the cone phase.
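As a side consistency check (not part of the original analysis), the stated identity $\mathcal{L}_{yx}^{(z)}+\mathcal{L}_{xz}^{(y)}+\mathcal{L}_{zy}^{(x)} = \mathbf{M}\cdot\mathrm{rot}\,\mathbf{M}$ can be verified numerically for an arbitrary smooth test field; all function names and the test field below are ours:

```python
# Numerical check that the sum of Lifshitz invariants
#   L^(z)_{yx} + L^(y)_{xz} + L^(x)_{zy}
# equals M . curl M, with L^(k)_{ij} = M_i dM_j/dx_k - M_j dM_i/dx_k.

EPS = 1e-6

def M(r):
    x, y, z = r
    # arbitrary smooth test field
    return [y * z + x, z * x - y**2, x * y + z**3]

def dM(r, i, k):
    """Central-difference estimate of dM_i/dx_k at point r."""
    rp, rm = list(r), list(r)
    rp[k] += EPS
    rm[k] -= EPS
    return (M(rp)[i] - M(rm)[i]) / (2 * EPS)

def lifshitz_sum(r):
    m = M(r)
    # L^(z)_{yx} + L^(y)_{xz} + L^(x)_{zy}
    return ((m[1] * dM(r, 0, 2) - m[0] * dM(r, 1, 2))
            + (m[0] * dM(r, 2, 1) - m[2] * dM(r, 0, 1))
            + (m[2] * dM(r, 1, 0) - m[1] * dM(r, 2, 0)))

def m_dot_curl(r):
    m = M(r)
    curl = [dM(r, 2, 1) - dM(r, 1, 2),
            dM(r, 0, 2) - dM(r, 2, 0),
            dM(r, 1, 0) - dM(r, 0, 1)]
    return sum(mi * ci for mi, ci in zip(m, curl))

r0 = (0.3, -0.7, 1.1)
print(abs(lifshitz_sum(r0) - m_dot_curl(r0)) < 1e-8)  # True
```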
\section{Experimental Methods}
\label{methods}
Copper oxy-selenite (Cu$_2$OSeO$_3$) is a cubic helimagnet with a characteristic helical period~\cite{Seki12} of 63~nm. {\it Ab-initio} calculations show that this helimagnetism is induced by the DM interaction~\cite{Yang12}. To acquire images of skyrmions, a single crystal was thinned to electron transparency (about 70~nm) by the conventional process of mechanical grinding and argon-ion polishing on the (110) face. Lorentz transmission electron microscopy (LTEM) was conducted using an FEI Tecnai F20 electron microscope and the sample was cooled {\it in-situ} using a liquid-helium cooled holder with a base temperature of 10~K. Skyrmions appear as black or white circles in the images produced using this technique. The images also show white linear features which are not magnetic but are parts of the specimen surface which had peeled off and rolled into tiny tubes. We have not encountered this with the preparation of other materials. (See Supplemental Information II for full experimental methods~\cite{supp}.)
\section{Experimental Results}
Fig.~\ref{clusters2} shows frames from one of several videos we acquired (see Supplemental Information III~\cite{supp}) showing two skyrmion clusters (outlined in red) embedded in a host phase that produced no contrast in the image. At lower fields, the skyrmion lattice filled the whole sample. The smaller cluster contained 7 skyrmions and the larger, 13. There does not appear to be a preferred number of skyrmions in a cluster and in other videos we observed a single skyrmion as well as clusters with 6, 18 and 21 skyrmions. The fact that the skyrmions form clusters demonstrates that the interaction between them must be attractive and later we show theoretically that skyrmions embedded in the cone phase can be attractive whereas skyrmions embedded in the saturated state are purely repulsive.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{fig2skyN.pdf}
\caption{\label{clusters2} (a) LTEM images showing two skyrmion clusters (outlined in red) embedded in the cone phase of Cu$_2$OSeO$_3$ at 11~K in an out-of-plane applied magnetic field of $\mu_0 H$ = 116~mT. (b)--(d) show changes in the shape of the left-hand cluster with (c) taken 0.5~s and (d) taken 34.4~s after (b)
}
\end{figure}
Figs.~\ref{clusters1}(a)--(c) were taken under the same conditions from a different region of the sample and show a cluster of 30 skyrmions moving across the host phase and merging with the skyrmion lattice phase over 21~s. The boundaries of the cluster and the edge of the skyrmion lattice phase are delineated by red lines and by comparing panels (b) and (c), it can be seen that the phase boundary advances after merging and that the skyrmions in the cluster have spread evenly across the boundary.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig3_new.pdf}
\caption{\label{clusters1} LTEM images of a Cu$_2$OSeO$_3$ nanolayer of thickness $L$ = 70~nm at 12~K in an out-of-plane applied magnetic field of $\mu_0 H$ = 116~mT. (a--c) show the skyrmion lattice phase on the left coexisting with the cone phase and the phase boundary delineated in red. A skyrmion cluster, outlined in red, merges with the skyrmion lattice phase over time with (b) taken 1.6~s after (a) and (c) 21.2~s after (a) during the first-order transition between the cone and skyrmion lattice phases. (d) A sketch of the phase diagram of cubic helimagnet nanolayers (adapted from Ref.~\onlinecite{LeonovPRL16}) includes areas with the helicoid, cone, and skyrmion lattices phases separated by first-order transition lines (solid lines) and the saturated state separated from the cone phase by a second-order transition line (dashed). (e) The calculated domain wall between the coexisting skyrmion lattice and cone phases at the first-order transition line.}
\end{figure}
The host phase appears featureless in the images so it is either the cone or saturated state since neither produce any contrast variation. Whilst the image alone does not allow us to identify the host phase, it is almost certainly the cone phase. It is clear from Figs.~\ref{clusters1}(a)--(c) that the experimental conditions are close to the phase boundary of the skyrmion lattice. Both the phase diagram for bulk Cu$_2$OSeO$_3$ derived by neutron diffraction~\cite{Adams12} and the theoretical phase diagram for thin cubic helimagnets in Fig.~\ref{clusters1}(d)~\cite{LeonovPRL16} indicate that if the material is initially in the skyrmion lattice phase, an increase in magnetic field will first create the cone phase and any coexistence should be between these two phases. Furthermore, coexisting phases like those we see here can only result from a first order transition whereas a transition from the skyrmion lattice to the saturated state should be a second order process which occurs via the gradual expansion of the period of the skyrmion lattice~\cite{Bogdanov94, LeonovPRL16}.
In all of the videos we recorded, the skyrmions were in constant motion as shown in Fig.~\ref{clusters2}(b)--(d). Similar motion has been reported by Mochizuki {\it et al.}~\cite{Mochizuki14} who attribute this to the heating of the specimen by the electron beam. We suggest instead that it may be caused by the specimen charging under the electron beam as coating the sample in a thin layer of carbon to improve its electrical conductivity slowed the movement of the skyrmions. Such charging can occur if the sample is insulating, like Cu$_2$OSeO$_3$, or not well grounded. Mochizuki {\it et al.} calculated that a steady flow of electrons through the sample from the electron beam was three orders of magnitude too low to move skyrmions via the spin-torque effect but it is possible that bursts of current caused by the specimen charging and discharging may be sufficient to cause this movement.
Areas with the cone and skyrmion lattice phases are separated by domain walls. A calculated contour plot of such a domain wall is presented in Fig.~\ref{clusters1}(e). The frontal parts of the skyrmion cores in the wall have a structure similar to that of non-axisymmetric skyrmions and play the role of nuclei for individual non-axisymmetric skyrmions. The attractive interaction between such skyrmions~\cite{LeonovJPCM16} explains the formation of the skyrmion clusters observed in our experiments.
\section{Discussion}
\label{Discussion}
A non-axisymmetric skyrmion can be thought of as the result of a continuous transformation of an axisymmetric skyrmion during the phase transition from the saturated to the cone phase (Fig.~1(d)). The equilibrium structure of a non-axisymmetric skyrmion is set by the compatibility of the axisymmetric skyrmion core with the transversely modulated cone phase (Fig.~\ref{crankshaft}(a--c))~\cite{LeonovJPCM16,LeonovAPL16}. The skyrmion core is separated from the host phase by a broad asymmetric ``shell''.
The numerical calculations presented in Fig. \ref{crank4} elucidate the main features of non-axisymmetric skyrmions and their bound states. The calculated radial skyrmion energy densities
\begin{eqnarray}
e(\rho) = (2 \pi L_D)^{-1}
\int_0^{L_D} dz \int_0^{2\pi} d \varphi w_s (\theta, \psi)
\label{energy}
\end{eqnarray}
are plotted as functions $\Delta e(\rho) = e(\rho) - e_{\mathrm{cone}}$ for different values of the applied field (Fig.~\ref{crank4}), where $w_s(\theta, \psi)$ is the energy density (Eqn.~\ref{density}) for an isolated non-axisymmetric skyrmion and $e_{\mathrm{cone}}$ is the energy density (Eqn.~\ref{energy}) calculated for the cone phase (Eqn.~\ref{cone}). It should be noted that, as the host cone phase and the embedded non-axisymmetric skyrmions are modulated along the film thickness $L$, the skyrmion energy density (Eqn.~\ref{energy} and Fig.~\ref{crank4}(e)) and the inter-skyrmion coupling depend on the confinement ratio $\nu = L/L_D$. These subtle effects could be a subject of future theoretical and experimental investigations.
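To make Eqn.~\ref{energy} concrete: since the prefactor $(2 \pi L_D)^{-1}$ normalizes the double integral, $e(\rho)$ is simply the mean of the local energy density over one period in $z$ and a full turn in $\varphi$. The sketch below (not part of the original analysis) evaluates this mean numerically with a uniform-grid rectangle rule; the placeholder integrand \texttt{w\_demo} merely stands in for the actual $w_s(\theta, \psi)$ of Eqn.~\ref{density}, which depends on the computed magnetization profile.

```python
import math

# Numerical sketch of Eqn. (energy):
#   e(rho) = (2*pi*L_D)^{-1} int_0^{L_D} dz int_0^{2pi} dphi w_s,
# i.e. the mean of w_s over the (z, phi) domain.  A uniform-grid
# rectangle rule is spectrally accurate for smooth periodic integrands.
def radial_energy_density(w_s, rho, L_D=1.0, nz=64, nphi=256):
    total = 0.0
    for i in range(nz):
        z = (i / nz) * L_D
        for j in range(nphi):
            phi = (j / nphi) * 2.0 * math.pi
            total += w_s(rho, z, phi)
    return total / (nz * nphi)

# Placeholder integrand (hypothetical): the real w_s comes from the
# equilibrium profiles theta(rho, phi, z) and psi(rho, phi, z).
w_demo = lambda rho, z, phi: math.cos(phi) ** 2
print(radial_energy_density(w_demo, rho=0.5))  # mean of cos^2 over a period -> 0.5
```

Subtracting the same average computed for the cone solution (Eqn.~\ref{cone}) would give the plotted $\Delta e(\rho)$.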
The characteristic lengths $R_1$, $R_2$, $R_3$ indicate several distinct regions in the radial energy density profiles $\Delta e(\rho)$ (Fig.~\ref{crank4}(a)--(c)). The functions $R_i(h)$ are plotted in Fig.~\ref{crank4}(d). For axisymmetric skyrmions (Fig.~\ref{crank4}(c)), the energy densities $\Delta e(\rho)$ consist of a positive energy ``bag'' located in the skyrmion center ($\rho < R_2$) surrounded by extended areas with negative energy density, where the DM coupling dominates \cite{Bogdanov94}. The negative asymptotics of the radial energy densities ($\Delta e(\rho) < 0$ for $\rho \gg 1$) predetermine the \textit{repulsive} inter-soliton potential for axisymmetric skyrmions \cite{Bogdanov94}. For non-axisymmetric skyrmions (Fig.~\ref{crank4}(a) and (b)), the energy densities $\Delta e(\rho)$ are
positive at large distances from the skyrmion center ($\rho > R_3$). These areas correspond to ``shells'' separating the skyrmion core from the cone phase. The positive energy density of the shell
leads to attractive interactions between non-axisymmetric skyrmions \cite{LeonovJPCM16}. In other words, the attractive interaction between skyrmions in the cone phase is explained by the excessive energy density of the asymmetric shell compared with the skyrmion core and the cone phase~\cite{LeonovJPCM16}. Importantly, the equilibrium distribution of the magnetization in the shell determines the material parameters of the cluster: the bound energy and the distance between the constituent skyrmions.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{fig4skyN1.pdf}
\caption{(color online). (a--c) Radial energy density profiles $\Delta e(\rho)$ (see Eqn. \ref{energy}) at zero anisotropy with different values of the applied field plotted in units $e_0 = 4\pi D$. (d) Characteristic radii $R_i$ are plotted as functions of the applied field. In the cone phase (a,b) the skyrmion core is enclosed by positive ``shells'' ($\rho > R_3$) providing the attractive inter-skyrmion potential. Negative asymptotics of the radial energy density for axisymmetric skyrmions ($h > 1$) impose the repulsive skyrmion-skyrmion interaction (c). (e) The contour plot $e(x, y)$ calculated for $h = 0.5$ shows the central part with a large positive energy ($\rho < R_2$), the negative energy belt ($R_2 < \rho < R_3$), and the extended shell ($\rho > R_3$).
\label{crank4}
}
\end{figure}
Another type of attractive chiral skyrmion has been investigated theoretically in the precursor region of cubic helimagnets~\cite{Wilhelm11,Wilhelm12}. It was shown that due to the
``softening'' of the magnetization near the ordering temperature, the skyrmion-skyrmion coupling acquires an oscillatory character and promotes the formation of skyrmion clusters in this region.
The solutions for two-dimensional ``baby'' skyrmions with an oscillatory inter-particle potential have also been derived within the canonical Skyrme model \cite{LeonovNcomm15}. In magnetism the Skyrme
model is applied to describe a group of magnetic compounds with competing exchange interactions. {\it Ab-initio} calculations of attractive two-dimensional localized states in these magnetic systems
have been carried out by Rozsa {\it et al.} \cite{Rozsa16}. It was also found that the solutions for attractive baby skyrmions exist in modified Faddeev-Skyrme models with a Lennard-Jones type potential
term describing a short-range repulsion and a long-range attraction~\cite{Salmi15}.
The oscillatory vortex-vortex interaction attributed to type-II superconductors with small values of the Ginzburg-Landau parameter~\cite{Hubert71} leads to a first-order transition from the superconducting state into the Abrikosov vortex phase, accompanied by multidomain states of the competing phases~\cite{Essmann71}. Vortex clusters stabilized by the attractive inter-vortex coupling have also been observed in MgB$_2$ and Sr$_2$RuO$_4$ \cite{Moshchalkov09, Gutierrez12, Curran11, Garaud14}. The attractive skyrmions in the cone phase of non-centrosymmetric ferromagnets represent an alternative to solitons with the oscillatory inter-particle potential investigated in Refs.~\onlinecite{Wilhelm11, Wilhelm12, LeonovNcomm15, Rozsa16, Hubert71, Essmann71}.
In conclusion, we report the first direct observations of clusters of attractive skyrmions embedded in the cone phase of a non-centrosymmetric ferromagnet. The clusters were generated by the magnetic-field-induced fragmentation of the skyrmion lattice during the first-order transition to the cone phase. This investigation used Cu$_2$OSeO$_3$ but the same method could be used to investigate skyrmion clusters in the cone phases of other non-centrosymmetric ferromagnets.
\acknowledgements
The authors are grateful to E. Babaev, K. Inoue, D. McGrouther, T. Monchesky and Y. Togawa for useful discussions. This work was funded by the Royal Society (United Kingdom) and the United Kingdom Engineering and Physical Sciences Research Council (EPSRC), grant number EP/N032128/1. M.C.H. and G.B. also acknowledge financial support from the EPSRC grant number EP/M028771/1. A.O.L. acknowledges the Japan Society for the Promotion of Science (JSPS) Core-to-Core Program, Advanced Research Networks and the JSPS Grant-in-Aid for Research Activity Start-up 17H06889. A.O.L. thanks Ulrike Nitzsche for technical assistance.
\section{Introduction} \label{sec:intro}
Unmanned Vehicles (UVs), both aerial and ground, have been finding their use in a plethora of civilian \cite{Johnson2017,Ou2014} and indoor applications (see \cite{Nonami2007,Chen2014,Tomic2012} and references therein). Knowledge of position and orientation is necessary to ensure decision-making and autonomy for these vehicles. The problem of localization deals with the estimation of the position and orientation of the UVs. Localization therefore requires sensing, and correspondingly, localization procedures are dependent on the available sensory measurements. Since sensory measurements are usually contaminated with noise, the problem of localization also requires filtering the noise in order to determine an accurate estimate of location and orientation. Vehicle positioning systems that rely on GPS measurements have garnered a lot of attention in the literature \cite{Reina2007}. However, most indoor environments and many parts of the urban canyon do not have access to GPS; even if available, the access is intermittent and not reliable. Hence, localization in a GPS-denied or GPS-restricted environment is an active area of research; it also has the additional advantages of robustness, efficiency, and flexibility \cite{Sharma2013}.
In this paper, we present a joint route optimization and localization algorithm in a GPS-denied environment to visit a set of targets while enabling localization as the UV traverses its route. The proposed approach uses known objects in the environment, referred to as landmarks (LMs), to aid in localization. The UV determines its relative position and orientation with respect to the LMs using exteroceptive sensors such as cameras, lasers, etc., and assesses its motion and attitude information, such as velocity and turn rate, using interoceptive sensors such as IMUs, encoders, etc. This enables the UV to estimate its states and localize itself in a GPS-denied environment. Conditions under which the UV would be able to estimate its states and localize itself using the LMs are first derived. These conditions are then embedded into an optimization framework where the sequence in which the UV should visit the targets, the route it takes, and the locations where the LMs should be placed to enable localization are optimized. In this way, we are assured that the UV can perform localization using the LMs as it traverses its route and visits the targets. The problem statement can be summarized as follows:
\noindent \textit{Given a UV stationed at a depot, a set of target locations, and a set of potential locations where LMs can be placed, find a route for the UV and a subset of potential LM locations where LMs are placed such that the following conditions are satisfied:
\begin{enumerate}
\item the route starts and ends at the depot and visits every target at least once,
\item the UV should be able to estimate its position and orientation from the LMs as it traverses its route, and
\item the sum of the total travel distance and the number of LMs used is a minimum.
\end{enumerate}
}
We remark that it is also easier to think about the above problem as a single vehicle routing problem with additional constraints for LM placement to aid in localization. Henceforth, we shall refer to this problem as the single vehicle routing problem using LMs to aid localization, abbreviated as SVRP-LL. The SVRP-LL, being a generalization of the traveling salesman problem (TSP), is NP-hard.
\section{Related work} \label{sec:lit_review}
The problem of localization of a vehicle, aerial and ground, in urban and indoor environments where GPS signals are not always reliable is well studied in the literature. In particular, authors in \cite{Wong2014} used techniques in computer vision to address the problem. Numerous variants of Simultaneous Localization and Mapping (SLAM) techniques have also been developed for high precision vehicle localization \cite{Levinson2007,Weiss2011}. Another frequently used method is infrastructure-aided localization for aerial and ground vehicles. In particular, infrastructure capable of providing range measurements to the vehicles are pre-installed in the environment and algorithms are developed to estimate the position and orientation of the vehicles \cite{Mao2007}.
The problem of localization is also of independent interest to infrastructure-aided localization for transportation applications. The idea of infrastructure-aided navigation and control for automated vehicles is not new and has been considered at least since California PATH's Automated Highway Systems (AHS) program. However, this idea is useful for other applications such as Advanced Traffic Management Systems (ATMS) and Advanced Traveler Information System (ATIS); one can design V2I \cite{Doan2009} (vehicle to infrastructure) communication protocols by which the type of vehicle is identified along with the time stamp for the communication thereby obviating the need for traffic measurement devices such as loop detectors which are error-prone. Authors in \cite{Ou2014} proposed a Vehicular Ad-Hoc Network (VANET) based localization approach in which each vehicle estimates its location on the basis of beacon messages broadcast periodically by pairs of Road Side Units (RSUs) deployed on either side of the road. Authors in \cite{Khattab2015} modified the previous approach by using two-way time of arrival information to localize vehicles based on communication with a single road side unit (RSU or LM) instead of having 2 RSUs/LMs.
The first work to consider the placement of LMs given paths for multiple vehicles is \cite{Rathinam2015}. The authors formulated the landmark placement problem as a multiple vehicle path covering problem and presented an approximation algorithm using geometric arguments to address the problem. This article is an extension of the work in \cite{Rathinam2015} on three fronts: (1) we formulate the joint vehicle routing and landmark placement problem as a mixed-integer linear program (MILP), (2) we present an algorithm to compute an optimal solution to the MILP, and finally, (3) we present extensive computational and simulation results showing the effectiveness of the proposed approach with a suitable estimation algorithm. The rest of the paper is organized as follows: in Sec. \ref{sec:vm}, the vehicle model and the algorithm to localize the vehicle when the positions of the LMs are known a priori are detailed; conditions under which the localization algorithm would provide accurate position and orientation estimates are also discussed. Sec. \ref{sec:defn}, \ref{sec:formulation}, and \ref{sec:bandc} present the optimization problem definition, formulation, and the branch-and-cut algorithm to solve the SVRP-LL to optimality, respectively. Finally, the controller architecture and the computational results are detailed in Sec. \ref{sec:arch} and \ref{sec:results}, respectively.
\section{Vehicle model \& Localization algorithm} \label{sec:vm}
For the localization problem, an unmanned ground vehicle or an unmanned aerial vehicle flying at a constant altitude and traveling with a constant velocity $v$ is considered; the kinematic constraints of the vehicle in state-space form are as follows:
\begin{flalign}
\dot{\bm x} = f(\bm x, \bm u) \triangleq
\begin{bmatrix}
v \cos(\psi)\\
v \sin(\psi)\\
\omega
\end{bmatrix}
\end{flalign}
where $v$ is the velocity of the vehicle, $\psi$ is the heading of the vehicle, and $\omega$ is the rate of change of heading with respect to time. The vector $\bm x$ is the vector of state variables given by $(x, y, \psi)^T$; here, $x$ and $y$ denote the position of the vehicle and $\psi$ denotes the heading. The control input vector, $\bm u$, for the vehicle consists of $v$ and $\omega$. The interoceptive sensors measure the velocity, $v$, and the angular velocity, $\omega$, of the vehicle. The exteroceptive sensors are used to obtain the relative bearing measurements of the vehicle with respect to the known LMs. The vehicle is assumed to have a complete $360^{\circ}$ field of view. Without loss of generality, it is assumed that the vehicle cannot move backwards, \emph{i.e.}, $v \geqslant 0$. Furthermore, the sensing range of the vehicle's exteroceptive sensor, denoted by $\rho_s$, is assumed to be constant.
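For concreteness, the kinematics above can be propagated with a simple forward-Euler step; the following sketch is purely illustrative (the step size and values are arbitrary, and this is not the integrator used in our simulations).

```python
import math

# Forward-Euler integration of the unicycle kinematics
#   x_dot = v cos(psi),  y_dot = v sin(psi),  psi_dot = omega.
def step(state, v, omega, dt):
    x, y, psi = state
    return (x + v * math.cos(psi) * dt,
            y + v * math.sin(psi) * dt,
            psi + omega * dt)

def simulate(state, v, omega, dt, n):
    for _ in range(n):
        state = step(state, v, omega, dt)
    return state

# Straight-line motion: omega = 0 and heading 0 moves the vehicle along +x.
print(simulate((0.0, 0.0, 0.0), v=1.0, omega=0.0, dt=0.1, n=10))
```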
To route the UVs, precise knowledge of the position and heading of the vehicles is necessary. In GPS-denied environments, relative position measurements using range or bearing angle sensors to known landmarks can be used to localize the vehicle. An Extended Kalman Filter (EKF) or its information form, the Extended Information Filter (EIF), can be used to estimate the vehicle's states $\bm x$. This article uses the EIF instead of the EKF to estimate the vehicle's state $\bm x$, as the EIF is more favorable from a computational perspective. The estimation algorithm will provide meaningful localization estimates (consistent and bounded) if and only if the sensors provide enough information for localization, or in other words, if the system is observable. It has been shown that the bound on the uncertainty is related to the eigenvalues of the observability gramian \cite{Song1992}. In our previous work \cite{Sharma2012}, we have shown that the UV needs bearing angle measurements from 2 different landmarks in order for the system to be observable using the Hermann-Krener criterion \cite{Hermann1977} for local observability. This technique of state estimation using the bearing information alone is referred to as \textit{bearing-only localization} \cite{Sharma2012, Sharma2013}.
The condition for enabling accurate estimation of the states of the vehicle can also be illustrated using a relative position measurement graph (RPMG). An RPMG is a graph used to represent the interaction and information flow between the vehicle and the LMs. The RPMG consists of two types of nodes: the vehicle positions at different time instants and the landmarks. An edge between a landmark $i$ and the vehicle at a particular time instant $t$ indicates that the vehicle obtains a bearing measurement from the landmark $i$. An example of an RPMG with a single vehicle and multiple LMs, with edges between the vehicle at times $t_1$ and $t_2$ and the landmarks, is illustrated in Fig. \ref{fig:RPMG}. The main result of \cite{Sharma2012} that will be used in the forthcoming sections is that for the estimation algorithm, using an EIF, to provide accurate localization estimates for the vehicle at any given time $t$, the RPMG should contain at least two edges from the node that represents the vehicle at time $t$ to two different LMs. We also remark that having paths to more than 2 LMs provides more information to the vehicle, thereby quickening the convergence rate of the estimation algorithm.
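As a minimal illustration of this observability condition (the names and geometry below are ours, purely for illustration), a vehicle position satisfies the condition if at least two LMs lie within the sensing range $\rho_s$:

```python
import math

# Sketch of the RPMG condition: the EIF can produce accurate estimates at
# time t only if the vehicle obtains bearing measurements from at least two
# distinct LMs, i.e. at least two LMs lie within sensing range rho_s.
def visible_landmarks(pos, landmarks, rho_s):
    px, py = pos
    return [lm for lm in landmarks
            if math.hypot(lm[0] - px, lm[1] - py) <= rho_s]

def is_observable(pos, landmarks, rho_s):
    return len(visible_landmarks(pos, landmarks, rho_s)) >= 2

lms = [(0.0, 0.0), (10.0, 0.0), (50.0, 50.0)]
print(is_observable((5.0, 0.0), lms, rho_s=8.0))    # True: two LMs in range
print(is_observable((45.0, 45.0), lms, rho_s=8.0))  # False: only one LM in range
```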
\begin{figure}
\centering
\includegraphics[width = 60mm,keepaspectratio]{Exp_pic/RPMG_singleVeh.jpg}
\caption{Relative Position Measurement Graph (RPMG) illustrating the conditions for enabling convergence of the estimation algorithm to the states of the system. The stars represent LMs and the red arrows represent the vehicle at times $t_1$ and $t_2$, respectively. The edge $\eta_{LM_i}$, $i=1,\dots,4$ indicates that the vehicle receives the bearing information from the landmark $LM_i$.}
\label{fig:RPMG}
\end{figure}
\section{Optimization problem definition} \label{sec:defn}
In the previous section, we elucidated the condition that enables localization of the vehicle from bearing measurements, namely that, at any given instant of time, the vehicle requires bearing information from at least two LMs for observability and for the estimation algorithm to converge to accurate state estimates. We shall now enforce this condition explicitly in the optimization problem to jointly optimize the route and the landmark placement. To that end, let $V$ denote the set of targets $\{v_1, \dots, v_n\}$ and let the vehicle be initially stationed at $v_1$. Let $K$ denote the set of potential locations where a LM can be placed. Associated with each vertex $v_i$ is a subset of locations, $K_{v_i}$; if a LM is placed in a location in $K_{v_i}$, then it can provide a bearing measurement for the vehicle when the vehicle is at the target $v_i$. We note that since the vehicle has a $360^{\circ}$ field of view and the sensing range of the exteroceptive sensor on the vehicle is $\rho_s$, a location $k$ is in the set $K_{v_i}$ if and only if the distance between the location and the target $v_i$ is less than $\rho_s$. The vehicle can perform localization along an edge $e \in \{(v_i,v_j): i<j\}$ if it has bearing measurements from at least 2 LMs as it traverses that edge. Hence, associated with each edge $e \in \{(v_i,v_j): i<j\}$ is a subset of potential LM locations, $K_e \subseteq K$; for a given edge $e \in \{(v_i,v_j): i<j\}$, $k \in K_e$ if and only if $k \in K_{v_i} \cap K_{v_j}$. For the vehicle to be able to localize itself as it traverses the edge $e$, LMs have to be placed at at least two locations in $K_e$. Now, the problem is defined on an undirected graph $G=(V, E)$ where $E$ is the set of edges $\{e=(v_i, v_j):i<j\}$. We say that a set of LMs ``cover'' an edge $e$ if they are installed in a subset of locations $K_1$ such that $|K_1 \cap K_e| \geq 2$.
The set of LMs that cover an edge can provide bearing measurements for the vehicle as it traverses that edge, thereby enabling vehicle localization. Each edge $(v_i, v_j) \in E$ is associated with a non-negative cost $c_{ij}$ required to travel from target $i$ to target $j$. Also associated with each location $k\in K$ is a non-negative cost $d_k$ that denotes the cost of installing a LM at location $k$; if we wish to minimize the number of LMs placed, then each $d_k$ takes the value $1$. The SVRP-LL consists of finding a path for the vehicle and placing/installing LMs in a subset of the locations $K$ such that (i) each target in the set $V$ is visited at least once by the vehicle, (ii) each edge traversed by the vehicle is covered by at least two installed LMs, and (iii) the sum of the cost of the path and the installation cost of the LMs is a minimum.
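The covering sets introduced above can be computed directly from the geometry; the following sketch (illustrative names, Euclidean distances assumed) constructs $K_{v_i}$, $K_e = K_{v_i} \cap K_{v_j}$, and checks the two-LM coverage condition for an edge:

```python
import math

# K_v: indices of candidate LM locations within sensing range rho_s of target v.
def K_v(v, locations, rho_s):
    return {k for k, loc in enumerate(locations)
            if math.hypot(loc[0] - v[0], loc[1] - v[1]) < rho_s}

# K_e = K_{v_i} intersect K_{v_j} for edge e = (v_i, v_j).
def K_e(vi, vj, locations, rho_s):
    return K_v(vi, locations, rho_s) & K_v(vj, locations, rho_s)

# An installed set K1 "covers" edge e iff |K1 intersect K_e| >= 2.
def covers(installed, vi, vj, locations, rho_s):
    return len(set(installed) & K_e(vi, vj, locations, rho_s)) >= 2

locs = [(0.0, 5.0), (5.0, 5.0), (10.0, 5.0)]  # candidate LM locations
vi, vj = (0.0, 0.0), (10.0, 0.0)
print(K_e(vi, vj, locs, rho_s=9.0))            # {1}: only the middle location
print(covers([0, 1, 2], vi, vj, locs, rho_s=12.0))
```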
\section{Mathematical formulation} \label{sec:formulation}
We now present a mathematical formulation for the SVRP-LL, inspired by the models for the standard routing and covering problems \cite{Toth2014} (see \cite{Sundar2016,Sundar2015,Manyam2016,Sundar2016a,Sundar2016b,Sundar2016c} for routing and path planning problems concerning UVs). We associate to each feasible solution $\mathcal F$ for the SVRP-LL, a vector $\bm x \in \mathbb{R}^{|E|}$ (a real vector indexed by the elements of the edge set $E$) where each component $x_e$ of $\bm x$ takes a value one if the vehicle traverses the edge and zero otherwise. Similarly, associated to $\mathcal F$, is also a vector $\bm y \in \mathbb{R}^{|K|}$; the value of the component $y_k$ associated with a potential sensor location $k \in K$ is equal to one if a sensor is placed in the location $k$ and zero otherwise.
For any $S \subseteq V$, we define $\delta(S) = \{(i,j) \in E: i\in S, j\notin S\}$. If $S = \{i\}$, we simply write $\delta(i)$ instead of $\delta(\{i\})$. Finally for any $\mathcal E \subset E$, we define $x(\mathcal E) = \sum_{e \in \mathcal E} x_e$. Using the notations introduced thus far, the SVRP-LL is formulated as a mixed-integer linear program as follows:
\begin{flalign}
& \min \sum_{e\in E} c_e x_e + \sum_{k\in K} d_k y_k &\label{eq:obj} &\\
& \text{subject to: } \notag &\\
& x(\delta(v_i)) = 2 \quad \forall v_i \in V, &\label{eq:degree} \\
& x(\delta(S)) \geqslant 2 \quad \forall S \subset V, \, S \neq \emptyset, &\label{eq:sec} \\
& \sum_{k \in K_e} y_k \geqslant 2 x_e \quad\forall e \in E, \text{ and } &\label{eq:sensor} \\
& x_e \in \{0,1\}, y_k \in \{0,1\} \quad \forall e\in E, k \in K. & \label{eq:integer}
\end{flalign}
In the above formulation, the constraints \eqref{eq:degree} are the degree constraints for the targets and they ensure the number of edges incident on any target is $2$. The constraints \eqref{eq:sec} are the sub-tour elimination constraints and they ensure that any feasible route for the vehicle has no sub-tours of any subset of the targets $V$. The constraints \eqref{eq:sensor} ensure that each edge $e$ that is traversed by the vehicle is covered by a subset of installed LMs to enable vehicle localization as it traverses the edge $e$. Finally, the constraints \eqref{eq:integer} are the binary restrictions on the $x_e$ and $y_k$ variables. In the next section, we present a branch-and-cut algorithm to solve the mathematical formulation to optimality.
\section{Branch-and-cut algorithm} \label{sec:bandc}
In this section, we briefly present the main ingredients of a branch-and-cut algorithm that is used to solve the formulation presented in the previous section to optimality. The formulation can be provided to off-the-shelf commercial branch-and-cut solvers like Gurobi or CPLEX to obtain an optimal solution to the SVRP-LL. But the formulation contains an exponential number of sub-tour elimination constraints in \eqref{eq:sec}, and complete enumeration of all the constraints to provide the formulation to the solver would be computationally intractable. To address this issue, we use the following approach: we relax these constraints from the formulation, and whenever the solver obtains a solution feasible to this relaxed problem (either an integer solution or a fractional solution with the integrality constraints dropped), we check if any of these constraints are violated by the feasible solution, integer or fractional. If so, we add the violated constraints and continue solving the original problem; we refer to this technique as dynamic cut generation or a cutting plane algorithm. This process of adding constraints to the problem sequentially has been observed to be computationally efficient for the TSP and some of its variants \cite{Sundar2016,Sundar2015a,Sundar2016a}. The algorithms used to identify violated constraints are referred to as separation algorithms. For the sake of completeness, a detailed pseudo-code of the branch-and-cut algorithm for the SVRP-LL is given below. To that end, let $\bar \tau$ denote the optimal solution to an instance of the SVRP-LL. \\
\noindent S\textlcsc{tep} 1 (Initialization). Set the iteration count $t \gets 1$ and the initial upper bound to the optimal solution $\bar \alpha \gets + \infty$. The initial linear sub-problem is then defined by formulation in Sec. \ref{sec:formulation} without the sub-tour elimination constraints in \eqref{eq:sec} and the binary restrictions on the variables relaxed. The initial sub-problem is solved and inserted in a list $\cal L$.
\noindent S\textlcsc{tep} 2 (Termination check and sub-problem selection). If the list $\cal L$ is empty, then stop. Otherwise, select a sub-problem from the list $\cal L$ with the smallest objective value.
\noindent S\textlcsc{tep} 3 (Sub-problem solution). $t \gets t + 1$. Let $\alpha$ denote the objective value of the sub-problem solution. If $\alpha \geqslant \bar \alpha$, then proceed to S\textlcsc{tep} 2. If the solution is feasible for the SVRP-LL, then set $\bar \alpha \gets \alpha$, update $\bar \tau$ and proceed to S\textlcsc{tep} 2. Otherwise, proceed to S\textlcsc{tep} 4.
\noindent S\textlcsc{tep} 4 (Constraint separation and generation). Using the separation algorithm for the sub-tour elimination constraints, identify the violated constraints \eqref{eq:sec}. Add the violated constraints to the initial linear sub-problem and proceed to S\textlcsc{tep} 3. If no constraints are generated, then proceed to S\textlcsc{tep} 5.
\noindent S\textlcsc{tep} 5 (Branching).
Create two sub-problems by branching on a fractional $x_e$ or $y_k$ variable. Then insert both sub-problems in the list $\cal L$ and go to S\textlcsc{tep} 2. \\
Now, we detail the separation algorithm used in S\textlcsc{tep} 4 to identify violated sub-tour elimination constraints. To that end, let $G^* = (V^*, E^*)$ denote the support graph associated with a given fractional solution $(\bm x^*, \bm y^*)$, i.e., $V^* = V$ and $E^* := \{e \in E: x_e^* > 0\}$. Here, $\bm x^*$ and $\bm y^*$ are the vectors of decision variable values in the SVRP-LL. Next, we examine the connected components in $G^*$. Each connected component $C$ that does not contain all the targets in $V$ generates a violated sub-tour elimination constraint for $S = C$. If the number of connected components is one, then the most violated constraint of the form $x(\delta(S)) \geqslant 2$ can be obtained by computing a global minimum cut on the capacitated undirected graph $G^*$; let the cut be denoted by $(S, V^* \setminus S)$. $S$ defines a violated sub-tour elimination constraint if the value of the cut is strictly less than 2.
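The connected-component part of this separation step can be sketched as follows (a pure-Python union-find; the single-component minimum-cut case is omitted, and the names are illustrative):

```python
# Build the support graph from edges with x_e^* > 0 and return its connected
# components; any component that does not contain all n targets yields a
# violated sub-tour elimination constraint x(delta(S)) >= 2 with S = component.
def connected_components(n, edges):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def violated_subtour_sets(n, x_star):
    support = [e for e, val in x_star.items() if val > 1e-6]
    return [c for c in connected_components(n, support) if len(c) < n]

# Integer solution with two disjoint sub-tours over targets 0..3:
x = {(0, 1): 1.0, (1, 0): 1.0, (2, 3): 1.0, (3, 2): 1.0}
print(violated_subtour_sets(4, x))  # two violated sets: {0, 1} and {2, 3}
```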
\section{System architecture} \label{sec:arch}
The branch-and-cut algorithm in the previous section provides the sequence in which the vehicle should visit the targets and the locations where the LMs should be placed. The constraints ensure that the vehicle can perform localization using the LMs placed at the specified locations.
\begin{figure}
\centering
\includegraphics[scale=0.7]{ControlArch}
\caption{Block diagram showing the system architecture for estimating the states.}
\label{fig:arch}
\end{figure}
Fig. \ref{fig:arch} shows the block diagram of the system architecture that is used for estimating the states using the bearing measurements from the LMs. The dashed lines denote that the computation is performed offline, which in this case is the solution of the optimization problem using the branch-and-cut algorithm. The sequence in which the UV visits the targets provides the way points for the paths. The bearing sensors are the exteroceptive sensors on the vehicle, which obtain the bearing information from the LMs placed at the locations chosen by the optimization problem. These bearing measurements are in turn provided to the EIF, which estimates the states of the system, $\bm{\hat x}$. This estimate is provided as input to a proportional controller that computes the corresponding $\omega$, the control input to the vehicle. The effect of choosing different values for the gain of the proportional controller is detailed in the forthcoming section on computational and simulation results.
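The proportional controller block can be sketched as a proportional law on the bearing error between the estimated heading and the line of sight to the current way point (the gain value below is illustrative, not the one used in our experiments):

```python
import math

# Wrap an angle to (-pi, pi] so the bearing error takes the short way around.
def wrap(a):
    return (a + math.pi) % (2.0 * math.pi) - math.pi

# Proportional way-point controller: omega = Kp * (bearing to way point - psi),
# computed from the EIF state estimate (x, y, psi).
def heading_control(est_state, waypoint, Kp=1.5):
    x, y, psi = est_state
    bearing = math.atan2(waypoint[1] - y, waypoint[0] - x)
    return Kp * wrap(bearing - psi)

# Way point straight ahead -> no correction; way point to the left -> turn left.
print(heading_control((0.0, 0.0, 0.0), (10.0, 0.0)))      # 0.0
print(heading_control((0.0, 0.0, 0.0), (0.0, 10.0)) > 0)  # True
```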
\section{Computational and simulation results} \label{sec:results}
This section presents extensive computational and simulation results for all the algorithms developed thus far.
All the computational experiments were performed on a MacBook Pro with a 2.9 GHz Intel Core i5 processor and 16 GB RAM using CPLEX 12.7 as a mixed-integer linear programming solver.
\subsection{Instance generation} \label{subsec:instance}
The number of targets, $|V|$, for all the test instances was chosen from the set $\{15, 20, 25, 30\}$. For each value of $|V|$, $20$ random instances were generated. The targets were randomly placed in a $100 \times 100$ grid. As for the potential LM locations, $5 \cdot |V|$ locations on the $100 \times 100$ grid were chosen at random. In total, we had $80$ instances on which all the experiments were performed. The sensing range for the bearing sensors, $\rho_s$, was fixed at $35$ units.
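The instance-generation procedure above can be sketched as follows (the seed and helper names are ours, for illustration only):

```python
import random

# |V| targets uniformly on a 100 x 100 grid, 5*|V| candidate LM locations,
# and a fixed sensing range of 35 units, as described in the text.
def generate_instance(n_targets, seed=0, grid=100.0, lm_factor=5, rho_s=35.0):
    rng = random.Random(seed)
    targets = [(rng.uniform(0, grid), rng.uniform(0, grid))
               for _ in range(n_targets)]
    lm_locations = [(rng.uniform(0, grid), rng.uniform(0, grid))
                    for _ in range(lm_factor * n_targets)]
    return {"targets": targets, "lm_locations": lm_locations, "rho_s": rho_s}

inst = generate_instance(15)
print(len(inst["targets"]), len(inst["lm_locations"]))  # 15 75
```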
\subsection{Branch-and-cut algorithm performance} \label{subsec:bandc}
The branch-and-cut algorithm with the dynamic cut-generation routine presented in Sec. \ref{sec:bandc} was implemented in C++ using the callback functionality of CPLEX. The internal cut-generation routines of CPLEX were switched off and CPLEX was used only to manage the enumeration tree in the branch-and-cut algorithm. All computation times are reported in seconds. The performance of the algorithm was tested on randomly generated test instances. The branch-and-cut algorithm is very effective in computing optimal solutions for instances with up to $30$ targets. The computation time for all the instances was less than a second and hence, for all practical purposes, the computation can be performed online if $|V| \leq 30$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.9]
\begin{axis}[
x tick label style={
/pgf/number format/1000 sep=},
ylabel=Average values,
xlabel=Number of targets,
enlargelimits=0.05,
legend style={at={(0.38,0.95)}, draw=none,
anchor=north},
ybar interval=0.5,
transpose legend,
legend cell align=left
]
\addplot
coordinates {(15,10.27) (20,12.21) (25,14.90) (30,20.70) (35, 0)};
\addplot
coordinates {(15,8.89) (20,9.05) (25,9.05) (30,8.50) (35, 0)};
\legend{\# constraints \eqref{eq:sec} added, \# landmarks placed }
\end{axis}
\end{tikzpicture}
\caption{Average number of sub-tour elimination constraints in \eqref{eq:sec} and the average number of landmarks placed by the optimal solution to the SVRP-LL instances.}
\label{fig:avg}
\end{figure}
Fig. \ref{fig:avg} shows the average number of sub-tour elimination constraints in \eqref{eq:sec} added by the dynamic cut generation procedure detailed in Sec. \ref{sec:bandc} and the average number of LMs required in the optimal solution; this indicates the effectiveness of the dynamic cut generation approach. The average number of LMs remains fairly steady as the number of targets is increased, indicating that it is primarily a function of the area of the grid.
\subsection{Simulation results} \label{subsec:simulation}
For the simulation, \textit{i.e.}, the online estimation algorithm using the results of the branch-and-cut algorithm, we consider $3$ cases where the route for the UV and the positions of the landmarks are known a priori (computed using the branch-and-cut algorithm).
In all cases, the vehicle is required to travel an optimal path such that it covers all way points and has a connection to at least two LMs at all times; this constraint is enforced by the formulation presented in Sec. \ref{sec:formulation}. For every run, the controller gain was chosen such that a minimum distance requirement to each way point was maintained. This implies that for the vehicle to switch from the current way point to the next one, it must come close enough to the current way point to meet the minimum distance requirement for the switching to take place. For this simulation, the vehicle was assumed to have a very high turn rate, so that on reaching a way point it can immediately point towards the next way point. The estimated states of the vehicle were used in the way point controller instead of the true states to show that the vehicle can indeed travel optimally in a GPS-restricted environment, provided that the condition of a path to at least two LMs is always maintained. The simulations were run for $3000$ iterations. For the purpose of simulation, the units for distance and time are chosen as meters (\emph{m}) and seconds (\emph{s}), respectively, without loss of generality. An instance of the simulation is provided below in Fig. \ref{fig:C1_R35L15_OTG}, showing the landmarks, way points, true and estimated positions of the vehicle, the associated uncertainty ($3\sigma$) bound ellipse, and the vehicle's path to the LM(s) within sensing range.
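The way point switching logic described above can be sketched as follows (illustrative only; the actual controller uses a gain-based law on the EKF-estimated states, while this sketch simply moves the vehicle straight towards the current way point and switches once the minimum distance requirement is met):

```python
import math

def waypoint_step(pos, waypoints, wp_idx, speed, dt, min_dist):
    """One step of the way point switching logic: head straight at the
    current way point; once within min_dist of it, switch to the next
    one (names and the straight-line motion model are illustrative)."""
    wx, wy = waypoints[wp_idx]
    dx, dy = wx - pos[0], wy - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= min_dist:                      # switching condition met
        wp_idx = (wp_idx + 1) % len(waypoints)
        wx, wy = waypoints[wp_idx]
        dx, dy = wx - pos[0], wy - pos[1]
        dist = math.hypot(dx, dy)
    step = min(speed * dt, dist)              # do not overshoot in one step
    pos = (pos[0] + step * dx / dist, pos[1] + step * dy / dist)
    return pos, wp_idx
```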
\begin{figure}
\centering
\includegraphics[scale=0.45] {Exp_pic/Case1/OTG_WP15.pdf}
\caption{Plot showing the vehicle motion at an arbitrary point in time during the simulation with sensing range as $35$ units for $|V| = 15$.}
\label{fig:C1_R35L15_OTG}
\end{figure}
In all cases, there were $8$ LMs and the starting position for the vehicle was chosen to be $(0,35)$, without loss of generality. The plots for each of the $3$ cases and the different conditions under which these instances were run are provided below. Two distinct categories of plots are provided for each simulation: the first set shows the true and estimated trajectories of the vehicle, and the second set shows the error plots along with the $3\sigma$ bounds for each case.
\subsubsection{Case 1}
In this instance, $15$ way points (WPs) were provided through which the vehicle needed to route. In the first scenario, the controller gain is set to $2.0$ and the minimum distance to WPs is $1.0$ unit. On reaching a distance of $1$ unit or closer to a WP, the vehicle can turn sharply and head towards the next WP due to the high controller gain. In the second scenario, the gain is reduced to $1.0$ and the minimum distance to WPs is set to $2.0$ units. A third and final scenario is provided for this particular case, in which the sensing range was dropped to $20$ units from the required value of $35$ units, so that the condition of a path to at least 2 LMs is not always maintained. The controller gain was kept at $2.0$ and the minimum distance to WPs at $1.0$ unit. The vehicle still routes through the WPs provided, but it does so with larger deviation and higher uncertainty. It can be seen from Fig. \ref{fig:C1_R20L15_M1T} that there exists a large deviation from the desired path during transit from WP-15 to WP-1 and from WP-12 to WP-13, as compared to Fig. \ref{fig:C1_R35L15_M1T} and Fig. \ref{fig:C1_R35L15_M2T}. This is because only one or no LM was visible at some points on this path for the reduced sensing range. The errors and $3\sigma$ bounds are also higher in this case (see Fig. \ref{fig:C1_R35L15_M1E}, Fig. \ref{fig:C1_R20L15_M1E}, and Fig. \ref{fig:C1_R35L15_M2E}). This validates the necessity of meeting the path-to-at-least-2-LMs condition.
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio]{Exp_pic/Case1/R35_L15_M1_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as $35$ units and minimum distance to WPs as $1.0$ unit (first scenario) for $|V| = 15$.}
\label{fig:C1_R35L15_M1T}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio]{Exp_pic/Case1/R35_L15_M2_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as $35$ units and minimum distance to WPs as $2.0$ units (second scenario) for $|V| = 15$.}
\label{fig:C1_R35L15_M2T}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio]{Exp_pic/Case1/R20_L15_M1_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as $20$ units and minimum distance to WPs as $1.0$ unit (third scenario) for $|V| = 15$.}
\label{fig:C1_R20L15_M1T}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio]{Exp_pic/Case1/R35_L15_M1_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the first scenario with $|V| = 15$.}
\label{fig:C1_R35L15_M1E}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio] {Exp_pic/Case1/R35_L15_M2_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the second scenario with $|V| = 15$.}
\label{fig:C1_R35L15_M2E}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio] {Exp_pic/Case1/R20_L15_M1_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the third scenario with $|V| = 15$.}
\label{fig:C1_R20L15_M1E}
\end{figure}
It is evident from the graphs that the error stays within the $3\sigma$ bound at all times. The relative orientation of the vehicle with respect to the LMs dictates the uncertainty ellipse at any given point.
\subsubsection{Case 2}
In this instance, $20$ WPs were provided through which the vehicle needed to route. Here, the simulation was performed for 2 scenarios: one with controller gain $2.0$ and minimum distance to WPs of $1.0$ unit, and the other with controller gain $0.4$ and minimum distance to WPs of $5.0$ units. While the first scenario is closer to ideal behavior, it requires a tighter turn rate and higher vehicle agility. In general, the vehicle gets closer to the given WPs in the first scenario than in the second. As a result, the second scenario produced a smoother trajectory, due to the reduced gain, and a higher deviation from the WPs in general, due to the relaxed minimum distance requirement. It was observed that in the second scenario, reducing the controller gain resulted in a larger required turning radius. Therefore, the vehicle at times overshot the $100 \times 100$ sq. unit area, especially when the WPs were placed very close to the edge of the square or rectangular area.
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio] {Exp_pic/Case2/R35_L20_M1_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as $35$ units and minimum distance to WPs as $1.0$ unit (first scenario) for $|V| = 20$.}
\label{fig:C2_R35L20_M1T}
\end{figure}
Comparing Fig. \ref{fig:C2_R35L20_M1T} and Fig. \ref{fig:C2_R35L20_M5T}, it can be seen that when the minimum distance requirement is relaxed from 1 unit to 5 units, switching from the current to the next WP occurs at a much earlier time. As a result, it becomes difficult to distinguish the navigation conditions, especially for closely spaced WPs, as observed for WP-7, WP-9 and WP-18 in Fig. \ref{fig:C2_R35L20_M5T}.
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio] {Exp_pic/Case2/R35_L20_M5_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as $35$ units and minimum distance to WPs as $5.0$ units (second scenario) for $|V| = 20$.}
\label{fig:C2_R35L20_M5T}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio] {Exp_pic/Case2/R35_L20_M1_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the first scenario with $|V| = 20$.}
\label{fig:C2_R35L20_M1E}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio] {Exp_pic/Case2/R35_L20_M5_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the second scenario with $|V| = 20$.}
\label{fig:C2_R35L20_M5E}
\end{figure}
Real-life scenarios would impose predefined turn-radius and turn-rate constraints once a vehicle dynamics model is included. For such cases, way point navigation algorithms such as Dubins paths would need to be implemented. Comparing the error plots in Fig. \ref{fig:C2_R35L20_M1E} and Fig. \ref{fig:C2_R35L20_M5E}, it is observed that the $3\sigma$ bounds are small and comparable for both scenarios. This is because the error does not depend on the minimum-distance-to-WPs condition; rather, it depends on the path-to-LMs criterion.
\subsubsection{Case 3}
In this instance, $25$ WPs were provided through which the vehicle needed to route. The simulation was performed for 2 scenarios. The first scenario had the controller gain set to $2.0$ and the minimum distance to WPs as $1.0$ unit. The second scenario had the controller gain tuned to $0.7$ and the minimum distance to WPs as $3.0$ units.
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio]{Exp_pic/Case3/R35_L25_M1_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as 35 units and minimum distance to WPs as 1.0 unit (first scenario) for $|V| = 25$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 80mm,keepaspectratio]{Exp_pic/Case3/R35_L25_M3_K0P7_Traj2.jpg}
\caption{Plot showing trajectory of the vehicle with sensing range as 35 units and minimum distance to WPs as 3.0 units (second scenario) for $|V| = 25$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio]{Exp_pic/Case3/R35_L25_M1_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the first scenario with $|V| = 25$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 83mm,keepaspectratio]{Exp_pic/Case3/R35_L25_M3_K0P7_Err2.jpg}
\caption{Plot showing the error in X direction, Y direction and heading $(\psi)$ along with their respective $3\sigma$ bounds for the second scenario with $|V| = 25$.}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
In this paper, a systematic method to address the problem of joint routing and localization for UVs in a GPS-denied or GPS-restricted environment is presented. The optimization problem computes routes and identifies the minimal set of locations where landmarks have to be placed to enable vehicle localization. This solution is combined with estimation algorithms to estimate the states of the vehicle. An efficient algorithm to compute an optimal solution to the optimization problem is presented. The proposed system architecture is tested extensively via simulation experiments. Future work can focus on multiple-vehicle versions of the problem and on more realistic models for the sensors on the vehicles.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Distributed systems consist of several individual components.
Each component has incomplete information about the other components.
Asynchronous distributed systems have no fixed rate at which components progress; rather, each component progresses at its own rate between synchronizations with other components.
Implementing correct algorithms for asynchronous distributed systems is difficult because they have to both work with the incomplete information of the components and for every possible scheduling between the components.
\emph{Petri nets}~\cite{DBLP:books/sp/Reisig85a,DBLP:journals/tcs/NielsenPW81} are a natural model for asynchronous distributed systems.
Tokens represent components and transitions with more than one token correspond to synchronizations between the components.
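The token game that underlies this modelling view can be sketched with a minimal Petri net implementation (an illustrative sketch, not the data structures used by the tools):

```python
class PetriNet:
    """Minimal Petri net sketch: places hold tokens, and a transition
    fires only when all its input places are marked, consuming one token
    from each input and producing one on each output.  A transition with
    more than one input place models a synchronization between
    components."""
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```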
\emph{Petri nets with transits}~\cite{DBLP:conf/atva/FinkbeinerGHO19} extend Petri nets with a transit relation to model the data flow in asynchronous distributed systems.
\emph{Flow-LTL}~\cite{DBLP:conf/atva/FinkbeinerGHO19} is a specification language for Petri nets with transits and allows us to specify linear properties on both the global and the local view of the system.
In particular, it is possible to globally select desired runs of the system with LTL (e.g., only fair and maximal runs) and check the local data flow of only those runs again with LTL.
A model checker for Petri nets with transits against Flow-LTL is implemented in the tool \textsc{AdamMC}~\cite{DBLP:conf/cav/FinkbeinerGHO20}.
\emph{Petri games}~\cite{DBLP:journals/iandc/FinkbeinerO17} define the synthesis of asynchronous distributed systems based on Petri nets and causal memory.
With causal memory, players exchange their entire causal past only upon synchronization.
Without synchronization, players have no information about each other.
For safety winning conditions, the synthesis algorithm for Petri games with a bounded number of controllable components and one uncontrollable component is implemented in \textsc{AdamSYNT}~\cite{DBLP:conf/cav/FinkbeinerGO15}\footnote{\textsc{AdamSYNT}{} was previously called \textsc{Adam}. From now on, \textsc{AdamMC}{} and \textsc{AdamSYNT}{} are combined in the tool \textsc{Adam}{} (\href{https://github.com/adamtool/adam\#readme}{https://github.com/adamtool/adam}).}.
Both tools are command-line tools lacking visual support to model Petri nets with transits or Petri games and the possibility to simulate or interactively explore implementations, counterexamples, and parts of the created state space.
In this paper, we present a web interface\footnote{The web interface is open source (\href{https://github.com/adamtool/webinterface\#readme}{https://github.com/adamtool/webinterface})
and a corresponding artifact to set it all up locally in a virtual machine is available~\cite{GHY20}.} for model checking asynchronous distributed systems with data flows and for the synthesis of asynchronous distributed systems with causal memory from safety specifications.
The web interface offers an input for Petri nets with transits and Petri games where the user interactively creates places, transitions, and their connections with a few inputs.
As a back-end, the algorithms of \textsc{AdamMC}{} are used to model check Petri nets with transits against a given Flow-LTL formula as specification.
Internally, the problem is reduced to the model checking problem of Petri nets against LTL.
Both, the input Petri net with transits and the constructed Petri net can be visualized and simulated in the web interface.
For a positive result, the web interface lets the user follow the control flow of the combined system and the data flow of the components.
For a negative result, the web interface simulates the counterexample with a visual separation of the global and each local behavior.
The algorithms of \textsc{AdamSYNT}{} solve the given Petri game with safety specification.
Internally, the problem is reduced to solving a finite two-player game with complete information.
For a positive result, a winning strategy for the Petri game and the two-player game can be visualized and
the former can be simulated.
For a negative result, the web interface lets the user interactively construct strategies of the two-player game and highlights why they violate the specification.
These new intuitive construction methods, interactive features, and visualizations are of great impact when developing asynchronous distributed systems.
\section{Web Interface for Petri Nets with Transits}
\label{sec:PNwT}
The web interface can model check Petri nets with transits against Flow-LTL.
We use an example from software-defined networks to showcase the workflow.
\subsubsection{Workflow for Petri Nets with Transits}
\label{sec:PNwTworkflow}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{screenshotPNWT150.png}
\caption{Screenshot from the web interface for the model checking workflow.}
\label{fig:pnwt}
\end{figure}
One application domain for Petri nets with transits are \emph{software-defined networks (SDNs)}~\cite{DBLP:journals/ccr/McKeownABPPRST08,DBLP:journals/cacm/CasadoFG14}.
The nodes of the network are \emph{switches} which forward \emph{packets} along the edges of the network according to the \emph{routing configuration}.
Packets enter the network at \emph{ingress switches} and leave it at \emph{egress switches}.
SDNs separate the packet forwarding process, called the \emph{data plane}, from the routing process, called the \emph{control plane}.
\emph{Concurrent updates} to the routing configuration are difficult to get right~\cite{DBLP:conf/networking/ForsterMW16}.
The separation of data and control plane and updates to the routing configuration can be encoded into Petri nets with transits~\cite{DBLP:conf/atva/FinkbeinerGHO19}.
Using this encoding, we demonstrate the workflow of the web interface for model checking an asynchronous distributed system with data flows.
The packets of the SDN are modeled by the data flow in the Petri net with transits.
The data flow relation as an extension from Petri nets to Petri nets with transits is depicted as colored and labeled arcs.
In \refFig{pnwt}, the web interface presents the resulting Petri net with transits \(\ensuremath\mathcal{N}\).
First, we use the tools on the left to create for each switch a place \(si\) with \(i\in\{0,\ldots,5\}\) and add a token (cf.\ outer parts of \(\ensuremath\mathcal{N}\)).
Then, we create transitions for the connections between the switches and for the origin of packets in the SDN (cf.\ transition \(\mathit{ingress}\) in the top-left corner) and link them with flows in both directions.
Additionally, we create local transits between the switches corresponding to the forwarding of packets.
They are displayed in light blue and red and are identified by the letters.
This constitutes the \emph{data plane}.
Next, we define the \emph{control plane}, i.e., which forwarding is activated.
Each transition to forward packets is connected to a place \(ai\) with \(i\in\{0,\ldots,5\}\)
which has a token when the forwarding is configured initially (cf.\ places \(a3\), \(a4\), and \(a5\)) and no token otherwise (cf.\ places \(a0\), \(a1\), and \(a2\)).
For the concurrent update, we create places \(ui\) with \(i\in\{0,\ldots,7\}\) and transitions \(ti\) with \(i\in\{6,\ldots,11\}\) with corresponding flows (cf.\ inner parts of \(\ensuremath\mathcal{N}\)).
Transitions for the forwarding are set as weak fair, i.e., whenever a transition is infinitely long enabled in a run, it also has to fire infinitely often, indicated by the purple color of the outer transitions.
Transitions for the update do not require fairness assumptions.
A satisfied Flow-LTL formula is $A\,F\,s5 $ specifying that all packets eventually reach switch $s5$.
An unsatisfied formula is \((G\,u0\Rightarrow A\,F\,s2)\) requiring for runs, where the update is never executed, that all packets are taking the lower-left route.
The fairness assumptions and a maximality assumption, i.e., whenever some transition can fire in a run some transition fires, are automatically added to the formula.
In the screenshot, a counterexample for the unsatisfied formula is displayed on the right.
The first packet takes the upper-right route via transitions $t3$, $t4$, and $t5$ and the update never starts.
\subsubsection{Features for Petri Nets with Transits.}
\label{sec:PNwTgeneral}
\textsc{AdamMC}~\cite{DBLP:conf/cav/FinkbeinerGHO20} is a command-line model checking tool for Petri nets with transits and Flow-LTL~\cite{DBLP:conf/atva/FinkbeinerGHO19}.
The model checking problem of Petri nets with transits against Flow-LTL is solved by a reduction to Petri nets and LTL.
The web interface allows displaying and arranging the nodes of the Petri net from the reduction and the input Petri net with transits.
Automatic layout techniques
are applied to avoid the overlapping of nodes.
A physics control, which modifies the repulsion, link, and gravity strength of nodes, can be used to minimize the overlapping of edges.
Heuristics generate coordinates for the constructed Petri net by using the coordinates of the input Petri net with transits to obtain a similar layout of corresponding parts.
For a positive result, the web interface allows visualizing the data flow trees for given firing sequences of the nets.
For a negative result, the counterexample can be simulated both in the Petri net with transits and in the Petri net from the reduction.
The witness of the counterexample for each flow subformula
and the run violating the global behavior
can be displayed by the web interface.
This functionality is helpful when developing an encoding of a problem into Petri nets with transits,
to ensure that a counterexample is not an error in the encoding.
The constructed Petri net can be exported into a standard format for Petri net model checking (PNML)
and the constructed LTL formula can be displayed.
\section{Web Interface for Petri Games}
The web interface can synthesize local controllers from safety specifications.
The workflow is showcased for a distributed alarm system given as a Petri game.
\subsubsection{Workflow for Petri Games}
\label{sec:PGworkflow}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{screenshotPG150.png}
\caption{Screenshot from the web interface for the synthesis workflow.}
\label{fig:pg}
\end{figure}
We demonstrate the workflow of the web interface for the synthesis of asynchronous distributed systems with causal memory from safety specifications.
Petri games separate the places of an underlying Petri net into \emph{system places} and \emph{environment places}.
Tokens on system places are \emph{system players} and tokens on environment places are \emph{environment players}.
Each player has \emph{causal memory}: only upon synchronization with other players, they exchange their entire causal past.
For safety specifications, the system players have to avoid that a bad place is reached for all behaviors of the environment players.
We want to obtain two local controllers of a distributed alarm system that should indicate the location of a burglary at both controllers.
In \refFig{pg}, the web interface presents the resulting Petri game on the left and the winning strategy for the alarm system on the right.
The burglar is modeled by an environment player and each component of the distributed alarm system by a system player.
Environment players are on white places and system players on gray ones.
We create five environment places $e0$, $e1$, $e2$, $\mathit{eL}$, and $\mathit{eR}$.
The place $e0$ has a token, $e1$ and $e2$ serve for the decision to burgle a location, and $\mathit{eL}$ and $\mathit{eR}$ for actually burgling the location.
Each component \(x\in\{p,q\}\) of the alarm system has one system place $x0$ with a token, two system places $x1$ and $x2$ to detect a burglary and inform the other component, and two system places $\mathit{xL}$ and $\mathit{xR}$ to sound an alarm with the position of a burglary.
We create rows of transitions for the environment player deciding where to burgle (first row), for the components detecting a burglary (second row), for the communication between the components (third row), and for sounding the alarm at each location (fourth row).
At last, we use transitions $\mathit{fa}i$ with $i\in\{0,\ldots,3\}$ and $\mathit{fr}j$ with $j\in\{0,\ldots,7\}$ connected to the bad place $\mathit{bad}$ to define that the implementation of the distributed alarm system should avoid false alarms and false reports.
A \emph{false alarm} occurs if the burglar did not burgle any location but an alarm occurred, i.e., in every pair of places $\{e0\}\times\{\mathit{pL}, \mathit{pR}, \mathit{qL}, \mathit{qR}\}$.
A \emph{false report} occurs if a burglary happened at a location but a component of the alarm system indicates a burglary at the other location, i.e., in every pair of places $\{e1, \mathit{eL}\} \times \{\mathit{pR}, \mathit{qR}\}$ and $\{e2, \mathit{eR}\} \times \{\mathit{pL}, \mathit{qL}\}$.
We add transitions and flows to $\mathit{bad}$ for these cases.
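The sets of bad pairs described above can be enumerated directly; their sizes match the transition indices in the text ($\mathit{fa}0,\ldots,\mathit{fa}3$ for false alarms and $\mathit{fr}0,\ldots,\mathit{fr}7$ for false reports). A sketch:

```python
from itertools import product

# Pairs of simultaneously marked places that signal a violation,
# as described in the text.
false_alarm = list(product(["e0"], ["pL", "pR", "qL", "qR"]))
false_report = (list(product(["e1", "eL"], ["pR", "qR"]))
                + list(product(["e2", "eR"], ["pL", "qL"])))
```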
The web interface finds a winning strategy (depicted on the right in \refFig{pg}) for the Petri game described above.
Each component locally monitors its location ($t2$, $t3$) and simultaneously waits for information about a burglary at the other location ($t4$, $t5$).
When a burglary is detected at the location of the component then it first informs the other component ($t4$, $t5$) and then outputs an alarm for the current location ($t7$, $t8$).
When a component is informed about a burglary at the other location, it outputs an alarm for the other location ($t6$, $t9$).
\subsubsection{Features for Petri Games}
\label{sec:PGgeneral}
\textsc{AdamSYNT}~\cite{DBLP:conf/cav/FinkbeinerGO15} is a command-line tool for Petri games~\cite{DBLP:journals/iandc/FinkbeinerO17}.
The synthesis problem for Petri games with a bounded number of system players, one environment player, and a safety objective is reduced to the synthesis problem for two-player games.
A winning strategy in the two-player game is translated into a winning strategy for the Petri game.
Both can be visualized in the web interface.
Here, the web interface provides the same features for visualizing, manipulating, and automatically
laying out the elements as for model checking.
It uses the order of nodes of the Petri game to heuristically provide a positioning of the strategy
and allows simulating runs of the strategy.
The winning strategy of the two-player game provides an additional view of the implementation, useful to check that it is not bogus due to a forgotten case in the Petri game or specification.
For an unrealizable synthesis problem, the web interface allows analyzing
the underlying two-player game via a stepwise creation of strategies.
This guides the user towards changes to make the problem realizable.
\section{Implementation Details}
\label{sec:impldetails}
The server is implemented using the Sparkjava micro-framework~\cite{sparkjava} for incoming HTTP and WebSocket connections.
The client is a single-page application written in Javascript using Vue.js~\cite{vue}, D3~\cite{d3}, and the Vuetify component library~\cite{vuetify}.
We constructed libraries out of the tools \textsc{AdamMC}{} and \textsc{AdamSYNT}{} and implemented one interface handling both libraries.
Common features like the physics control of nodes share the same implementation.
All components of the libraries and the web interface~\cite{webinterface} are open source and available on GitHub~\cite{adamtool}.
\section{Conclusion}
\label{sec:conclusion}
We presented a web interface for two tools:
\textsc{AdamMC}{}, a model checker for data flows in asynchronous distributed systems represented by Petri nets with transits, and
\textsc{AdamSYNT}{}, a synthesis tool for local controllers from safety specifications in asynchronous distributed systems with causal memory represented by Petri games.
The web interface makes the modeling and debugging of Petri nets with transits and Petri games user-friendly as it presents visual representations of the input, all intermediate steps, and the output of the tools.
The interactive features are a great assistance for correctly modeling distributed systems.
We plan to extend the web interface and tool support to model checking Petri nets with transits against Flow-CTL$^*$~\cite{DBLP:conf/atva/FinkbeinerGHO20}, to other classes of Petri games with a decidable synthesis problem~\cite{DBLP:conf/fsttcs/FinkbeinerG17,DBLP:conf/concur/BeutnerFH19}, to the bounded synthesis approach for Petri games~\cite{DBLP:conf/birthday/Finkbeiner15,DBLP:journals/corr/abs-1711-10637,DBLP:journals/corr/Tentrup16,DBLP:conf/atva/Hecking-Harbusch19}, and to high-level Petri games~\cite{DBLP:journals/acta/GiesekingOW20}.
As our web interface is open source and easy to extend, we also plan to connect it to other tools for Petri nets like APT~\cite{apt}, LoLA~\cite{DBLP:conf/apn/Wolf18a}, or TAPAAL~\cite{DBLP:conf/tacas/DavidJJJMS12}.
\bibliographystyle{splncs04}
\section{Introduction}
\rev{Metapopulation models of epidemics consider the entire population partitioned into communities (also called households, clusters or subgraphs). Such models assume that each community shares a common environment or is defined by a specific relationship (see, e.g., ~\cite{Hanski, Masuda2010, Allen2007}).}
\so{Several authors also account for the effect of migration between communities \cite{Colizza2008,Poletto2013}. Conversely, the model we are interested in suits better the diffusion of computer
viruses or stable social communities, which do not change during the infection period; hence we do not consider migration.}
In this work, we study the diffusion of epidemics over an undirected graph $G=(V,E)$ with edge set $E$ and node set $V$. The order of $G$, denoted $N$, is the cardinality of $V$, whereas the size of $G$ is the cardinality of $E$, denoted $L$. Connectivity of the graph $G$ is conveniently encoded in the $N \times N$ adjacency matrix $A$.
We are interested in the case of networks that can be naturally partitioned into $n$ communities: they are represented by a node set partition $\pi=\left\{V_1,...,V_n\right\}$, i.e., a sequence of mutually disjoint nonempty subsets of $V$, called cells, whose union is $V$.
The epidemic model adopted in the rest of the paper is a continuous-time Markovian individual-based \rev{susceptible--infected--susceptible} (SIS) model. In the SIS model a node can be repeatedly infected, recover and yet be infected again. The viral state of a node $i$, at time $t$, is thus described
by a Bernoulli random variable $X_i(t)$, where we set $X_i(t) = 0$ if $i$ is healthy and $X_i(t) = 1$ if $i$ is infected.
\so{Every node at time $t$ is either infected with probability $p_i(t) = \mathbb P(X_i(t) = 1)$ or healthy (but susceptible) with probability $ 1 - p_i(t)=\mathbb P(X_i(t) = 0) $}. Each healthy node becomes infected by each of its infected neighbours following a Poisson process with rate $\beta$. Also, an infected node $i$ recovers following a Poisson process with rate $\delta$. We further assume that infection and curing processes are independent~\cite{VanMieghem2009}. The ratio $\tau=\beta/\delta$ is called the \textit{effective spreading rate}.
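The Markovian SIS dynamics just described can be simulated exactly with a standard Gillespie scheme. A sketch for the homogeneous single-rate case (the two-rate community model would replace $\beta$ by an edge-dependent rate):

```python
import random

def sis_gillespie(adj, beta, delta, infected, t_max, seed=0):
    """Exact (Gillespie) simulation sketch of the Markovian SIS process:
    each infected neighbour infects a susceptible node at rate beta and
    each infected node recovers at rate delta, with all Poisson clocks
    independent."""
    rng = random.Random(seed)
    x = list(infected)                        # x[i] = 1 iff node i infected
    t = 0.0
    n = len(adj)
    while t < t_max:
        # per-node event rate: recovery if infected, infection otherwise
        rates = [delta if x[i] else
                 beta * sum(x[j] for j in range(n) if adj[i][j])
                 for i in range(n)]
        total = sum(rates)
        if total == 0:                        # absorbing all-healthy state
            break
        t += rng.expovariate(total)           # time to next event
        r = rng.uniform(0, total)             # pick which node flips
        i = 0
        while i < n - 1 and r > rates[i]:
            r -= rates[i]
            i += 1
        x[i] = 1 - x[i]                       # recovery or infection
    return x
```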
Recently, non-Markovian types of epidemic spread were also introduced in the literature, by choosing interaction times other than exponential for infection and/or curing (see \fdp{for instance} \cite{NonM, genInf}). However, this seems to complicate the analysis considerably and \fdp{it is beyond the scope of this work}.
Compared to the homogeneous case where the infection rate is the same for all pairs of nodes, in our model we consider two infection rates: the {\em intra-community} infection rate $\beta$ for infecting individuals in the same community, and the {\em inter-community} infection rate $\varepsilon\beta$, i.e., the rate at which individuals
belonging to different communities get infected. \so{We assume $0<\varepsilon < 1$, the customary physical interpretation being that infections across communities occur at a much smaller rate. Clearly, the model can be extended to the case $\varepsilon \geq 1$.}
\rev{ Further models, where the epidemic process within communities is faster than the rate at which it spreads across communities, have been studied in the literature \cite{Bonaccorsi, Ball1997, Ball2008, Ross2010}.}
As described in~\cite{VanMieghem2009,VanMieghem2012b}, the SIS process developing on a graph with $N$ nodes
is modeled as a continuous-time Markov process with $2^N$ states. The dynamics of the nodal infection
probability is obtained by the corresponding Kolmogorov differential equations, but the resulting dynamical system consists of $2^N$ linear differential equations, which is not a viable approach for large networks. \so{Hence, often, an approximation of the SIS process is needed}. In this work we consider the first-order mean-field approximation NIMFA, proposed by Van Mieghem et al. in~\cite{VanMieghem2009, VanMieghem2012a, VanMieghem2011}.
NIMFA replaces the original $2^N$ linear differential equations by $N$ non-linear differential equations representing the time-change of the infection probability of each node. As typical in first-order approximations of SIS dynamics, the only approximation required by NIMFA is that the infectious states of two nodes in the
network are uncorrelated, i.e., $\E{X_i(t)X_j(t)}= \E{X_i(t)}\E{X_j(t)}$.
\subsection{Long-term prediction and epidemic threshold}\label{Epidemic Threshold}
For a network with finite order $N$, the exact SIS Markov process will always converge towards its \so{unique} absorbing state, \so{that is, the zero-state where all nodes are healthy}.
\so{The other states
form a transient class, from which one can reach the zero-state with positive probability.} \fdp{ Because transitions from the zero-state have zero probability\footnote{Some models, as, e.g., the $\varepsilon$-SIS model \cite{VanMieghem2012b}, include the possibility of a nodal self-infection, thus making the whole process irreducible.} the stochastic model predicts that the virus will disappear from the network \cite{Pollett90}.
}
\so{However, the waiting time to absorption is a random variable whose distribution depends on the initial state of the system and on the parameters of the model \cite{ NasselCLosed, Nassell2002}. In fact, there is a critical value $\tau_c$ of the effective spreading rate $\tau= \beta/\delta$ such that, if $\tau > \tau_c$, \rev{the time to absorption grows exponentially in $N$, while for $\tau < \tau_c$ the infection vanishes exponentially fast in time}.
The critical value $\tau_c$ is often called the \textsl{epidemic threshold} \cite{VanMieghem2009, Bailey1975, Daley1999, Pastor2001}. }
\fdp{Thus above the threshold, a typical realization of the epidemic process \rev{may experience} a very long waiting time before absorption to the zero-state. During such waiting time, the so-called {\em quasi-stationary distribution} can be used in order to approximate the probability distribution of occupancy of the system's states}.
\so{The quasi-stationary distribution is obtained by conditioning on the fact that there is no extinction \cite{NasselCLosed,Nassell2002}. The quasi-stationary distribution can be regarded as the limiting conditional distribution, useful in representing the long-term behavior of the process {\em ``that in some sense terminates, but appears to be stationary over any reasonable time scale''}\cite{Pollett}.
\rev{In fact, numerical simulations of SIS processes also reveal that, already for reasonably small networks $(N \geq 100)$ and when $\tau > \tau_c$, the overall-healthy state is only reached after an unrealistically long time. Hence, the indication of the model is that, in the case of real networks, one should expect that the extinction of epidemics is hardly ever attained \cite{VanMieghem2013, Draief2010}.}
For this reason the literature is mainly concerned with establishing the value of the epidemic threshold, being a key parameter behind immunization strategies related to the network protection against viral infection.
For an SIS process on graphs, $\tau_c$ depends on the spectral radius $\lambda_1(A)$ of the adjacency matrix $A$ \cite{Wang2003,VanMieghem2009}. NIMFA determines the epidemic threshold for the effective spreading rate as $\tau^{(1)}_c =\frac{1}{\lambda_{1}(A)}$, where the superscript (1) refers to the first-order mean-field approximation \cite{VanMieghem2009,VanMieghem2014}. \rev{Further, in Thm.~\ref{thresh}, we shall study the asymptotic behavior of the solutions of the NIMFA system, both above and below the threshold.}
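As a concrete illustration, the threshold $\tau_c^{(1)}=1/\lambda_1(A)$ can be evaluated numerically from the adjacency matrix. The following minimal sketch (Python/NumPy) uses an illustrative graph, two triangle communities joined by a matching, and a hypothetical value of $\varepsilon$; it is not part of the model derivation.

```python
import numpy as np

# Illustrative graph: two communities of 3 nodes, each a triangle (K3),
# joined by a matching whose edges carry the inter-community weight eps.
eps = 0.5                                  # hypothetical inter-community factor
C = np.ones((3, 3)) - np.eye(3)            # K3 adjacency (internal structure)
A = np.block([[C, eps * np.eye(3)],
              [eps * np.eye(3), C]])

lam1 = np.linalg.eigvalsh(A).max()         # spectral radius (A is symmetric)
tau_c1 = 1.0 / lam1                        # NIMFA threshold tau_c^(1) = 1/lambda_1(A)
print(lam1, tau_c1)                        # lambda_1 = 2 + eps = 2.5, so tau_c1 = 0.4
```

Here the block structure makes the spectrum explicit: the eigenvalues of $A$ are those of $C\pm\varepsilon I$, so the spectral radius is $2+\varepsilon$.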
\so{We observe that, with respect to the exact Markovian SIS model, the states of nodes were recently proved to be non-negatively correlated~\cite{Cator_positive_correlations}. It is hence possible to prove that, due to the assumption of independence, NIMFA yields an upper bound for the probability of infection of each node, as well as a lower bound for the epidemic threshold, i.e., $\tau_c = \alpha \tau_c^{(1)}$ with $\alpha \geq 1$. }
\fdp{From the application standpoint, a key issue is to determine for which networks of given order NIMFA performs worst, meaning that $\alpha =\frac{\tau_c}{\tau_c^{(1)}}$ is largest. To this respect, further efforts have been made to satisfactorily quantify the accuracy of the model \cite{Accuracy}.}
\so{Finally, when $\tau > \tau^{(1)}_c$, a limiting occupancy probability appears as the second constant solution\footnote{We remember that all bounded trajectories of an autonomous first-order differential equation tend to an equilibrium, i.e., to a constant solution of the equation.} of the \so{NIMFA} non-linear system which exists, apart from the zero-vector solution.
\rev{Such} non-zero steady-state reflects well the observed viral behavior \cite{VanMieghem2012a}: it can be seen as the \rev{analogue} of the quasi-stationary distribution of the exact stochastic SIS model.}
\begin{figure*}[t]
\centering
\includegraphics[width=0.60\textwidth]{Example.eps}
\caption{A sample graph with equitable partition $\pi=\{\{v_1\},\{v_2,v_3\},\{v_4,v_5,v_6\},\{v_7,v_8,v_9,v_{10},v_{11},v_{12},v_{13}\}\}$.} \label{fig:fig11}
\end{figure*}
\subsection{Outline and main results}
As already observed in \cite{Bonaccorsi}, the presence of communities generates a strong mixing effect at local level (e.g., the rate of infection inside a community tends to be homogeneous) as opposed to the much lower speed of mixing (i.e., much larger inhomogeneity) within the whole population. In \cite{Bonaccorsi} a complete graph represents the internal structure of each community. Such an assumption appears natural for small community orders, for example because the members of a small community usually know each other, as they may be friends, relatives, members of a common club, employees of the same department, etc. Moreover, given two connected communities, all of their nodes are mutually linked.
In this work, instead,
\rev{we allow for the case of sparser community structures.} More precisely we consider an \textsl{equitable partition} of the graph. First of all this means that all nodes belonging to the same community have the same internal degree: formally the subgraph $G_i$ of $G(V,E)$ induced by $V_i$ is regular for all $i$'s (recall that $\pi=\left\{V_1,...,V_n\right\}$ is a partition of the node set $V$, which is assumed to be given a priori).
Furthermore, for any two subgraphs $G_i,G_j$, each node
in $G_i$ \rev{is connected with the same number of nodes in $G_j$}.
The macroscopic structure of such a network
can be described by the \emph{quotient graph} $G/\pi$, an oriented graph (possibly) featuring loops and multiple edges. The nodes of the quotient graph are the cells $V_1,\ldots,V_n$ in $G$. \so{In the last part of the work we extend our study to the case of \textit{almost equitable partitions}}, which do not require any \so{specific} structural condition inside \rev{each $G_i$}.
Such network structure can be observed, e.g., in the architecture of some computer networks where clusters of clients connect to single routers, whereas the routers' network has a connectivity structure with nodes' degree constrained by the number of ports. Also, graphs representing multi-layer networks may be characterized using equitable and almost equitable partitions \cite{moreno2014}.
\so{In Sec. \ref{epid} we \rev{describe} the NIMFA differential equations and provide \rev{an} analysis of the global dynamics that allows us to identify the epidemic threshold $\tau_c^{(1)}$. In Sec.~\ref{sec:equi}, after defining equitable partitions, we introduce the so-called quotient matrix $Q$ that is related to $G/\pi$. \rev{Since} matrix $Q$ has the same spectral radius as the adjacency matrix $A$, \rev{a novel expression is found} for the bound on the epidemic threshold $\tau_c^{(1)}$ as a function of network metrics.} Thus, a relation between the epidemic threshold and the spectral properties of the corresponding quotient matrix is obtained.
\so{In Sec.~\ref{sec:InfDyn} we show under which conditions the matrix $Q$ can be used in order to express the whole epidemic dynamics by a system of $n$ equations instead of $N$, where $n < N$. We prove the existence of a positively invariant set for the original system of $N$ differential equations that contains the equilibrium points. Moreover we show that, above the threshold, when a second non-zero equilibrium point appears, we can use the reduced system for its computation. }
In Sec.~\ref{AlmEq} we finally extend our investigations to the case of almost equitable partitions. We consider the special case of almost equitable partitions obtained by perturbing an equitable one, i.e., by adding/deleting
a certain set of edges from an equitable partition. Thus, we relax the assumption that the internal structure of each community is regular. Even in this case we obtain a lower bound for the epidemic threshold.
\section{The epidemic model}\label{epid}
\rev{The NIMFA model describes the process of diffusion of epidemics on a graph by expressing the time-change
of the probability $p_i$ that node $i$ is infected.}
Thus, node $i$ obeys the following differential equation~\cite{VanMieghem2009}
\begin{equation}\label{A}
\frac{ d p_i(t)}{dt} = (1-p_i(t))\beta\left(\sum_{j=1}^N a_{ij}p_j(t)\right) -\delta p_i(t), \; i=1,\ldots,N .
\end{equation}
In \eqref{A} the time-derivative of the infection probability of node $i$ consists of two competing processes:
\begin{enumerate}
\item while healthy with probability $1-p_i(t)$, all infected neighbors, whose average number is
$\sum_{j=1}^N a_{ij}p_j(t)$, infect node $i$ at rate $\beta$.
\item while node $i$ is infected with probability $p_i(t)$, it is cured at rate $\delta$.
\end{enumerate}
\rev{The following matrix representation of \eqref{A} holds}
\begin{equation}\label{mat}
\frac{dP(t)}{dt}= \beta AP(t)-\mathop{\rm diag}(p_i(t))(\beta AP(t)+ \delta u),
\end{equation}
where $P(t)=(\,p_1(t) \, p_2(t) \dots p_N(t)\,)^T$, $\operatorname{diag}(p_i(t))$ is the diagonal
matrix with elements $p_1(t), p_2(t), \dots ,p_N(t)$ and $u$ is the all-one vector. From \eqref{mat},
considering $P(t)= \operatorname{diag}(p_i(t))u$, we can write
\begin{eqnarray}\label{mat2}
\frac{dP(t)}{dt}&&= \beta A P(t)-\delta \operatorname{diag}(p_i(t))u - \operatorname{diag}(p_i(t))\beta AP(t)\nonumber \\
&&=(\beta A-\delta I)P(t) - \beta \operatorname{diag}(p_i(t)) A P(t).
\end{eqnarray}
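The right-hand side of \eqref{mat2} is straightforward to evaluate numerically. Below is a minimal sketch (Python/NumPy) with an illustrative triangle graph and hypothetical rates; the forward-Euler scheme is only for illustration, not an integrator used in the paper.

```python
import numpy as np

def nimfa_rhs(P, A, beta, delta):
    """Vector field of (mat2): (beta*A - delta*I) P - beta * diag(p_i) A P."""
    AP = A @ P
    return beta * AP - delta * P - beta * P * AP

# Forward Euler on a triangle: lambda_1(A) = 2, so tau = 1 is above threshold.
A = np.ones((3, 3)) - np.eye(3)
beta, delta, dt = 1.0, 1.0, 0.01
P = np.full(3, 0.9)                  # initial infection probabilities
for _ in range(10000):               # integrate up to t = 100
    P = P + dt * nimfa_rhs(P, A, beta, delta)
print(P)                             # approaches the steady state p_i = 1/2
```

For this symmetric example the non-zero equilibrium solves $\beta(1-p)\,2p=\delta p$, i.e., $p=1/2$, which the trajectory approaches.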
\so{Clearly we study the system for $(p_1, \dots, p_N) \in I_N= [0,1]^N$. It can be shown that the system \eqref{mat2} is positively invariant in $I_N$, i.e. if $P(0) \in I_N$ then $P(t) \in I_N$ for all $t > 0$ \cite[Lemma ~3.1]{Stab}.}
\so{\rev{The} analysis of the global dynamics of \eqref{mat2} leads to identify the epidemic threshold $\tau^{(1)}_c$ in terms of the effective spreading rate $\tau=\beta/\delta$ where, as mentioned in Sec. \ref{Epidemic Threshold}},
\begin{equation}\label{tauc}
\tau^{(1)}_c = \frac{1}{\lambda_{1}(A)},
\end{equation}
with $\lambda_1(A)$ the spectral radius of $A$. This critical value separates the absorbing phase from the endemic phase. We shall prove this, in Thm.~\ref{thresh}, by studying the stability of the equilibrium points of \eqref{mat2}, which are solutions of the equation
\begin{equation}\label{eqpoints}
P= \frac{\beta}{\delta} (I-\operatorname{diag}(p_i))A P.
\end{equation}
To this aim we shall adapt the results in \cite{Stab} to our individual-based SIS model. Let us denote by $f$ the right-hand side of \eqref{mat2}, i.e., \eqref{mat2} can be rewritten as a vector-valued differential equation
\begin{equation}\label{f}
\frac{dP}{dt}=f(P),
\end{equation}
where $\displaystyle f:[0,1]^N\rightarrow \mathbb{R}^N$ is a $C^{\infty}$ function.
Let $P_0=0$ be the vector of all zero components; one can easily check that $P_0$ is an equilibrium point of the system (\ref{f}), i.e., $f(P_0)=0$. Also, the following holds.
\begin{theorem}\label{thresh}
If $\tau \leq 1/\lambda_1(A)$ then $P_0$ is a globally asymptotically stable equilibrium of (\ref{mat2}).\\
If $\tau > 1/\lambda_1(A)$, $P_0$ is unstable and there exists another equilibrium point $P_{\infty}$
that is globally asymptotically stable in $I_N - \left\{0\right\}$.
\end{theorem}
\begin{proof}
We can rewrite the system \eqref{f} in the following form (see \cite[p. 108]{DiffEqandDynamicalSystem})
\begin{equation}\label{Df}
\dot{P}= D_f P + F(P),
\end{equation}
where $D_f$ is the Jacobian matrix of $f$ at $P_0$ and $F(P)$ is a column vector whose $i$-th component is $-\beta \sum_{j=1}^N a_{ij} p_i p_j$.
From (\ref{mat2}) we have
\begin{equation*}
\left(D_f\right)_{ij}= \begin{cases}
\beta a_{ij} & i \neq j\\
- \delta & i=j
\end{cases}
\end{equation*}
that is, $D_f=\beta A- \delta I$.
\rev{Since the adjacency matrix $A$ is real and symmetric, its eigenvalues are real. Hence, the eigenvalues of $D_f$ are real as well and of the form }
\begin{equation*}
\lambda_i(D_f)=\beta\lambda_i(A)-\delta.
\end{equation*}
In particular, let $\lambda_1(D_f)= \max_{i} \lambda_i(D_f)$, since the spectral radius of $A$ is positive we have
\begin{equation*}
\lambda_1(D_f)=\beta \lambda_1(A)-\delta.
\end{equation*}
Now we can apply \cite[Thm.~3.1]{Stab} to the system \eqref{Df} and assert that when $\lambda_1(D_f) \leq 0$, i.e., $\tau \leq 1/\lambda_1(A)$, $P_0$ is a globally asymptotically stable equilibrium of \eqref{mat2}.
Conversely, if $\lambda_1(D_f)>0$, i.e. $\tau > 1/\lambda_1(A)$, there exists another equilibrium point $P_{\infty}$. $P_0$ and $P_{\infty}$ are the only equilibrium points in $I_N$ and $P_{\infty}$ is globally asymptotically stable in $I_N - \left\{0\right\}$.
Finally, since $\tau > 1/\lambda_1(A)$, we have $\lambda_1(D_f)>0$. \rev{From Lyapunov's Linearization (or First) Method, it follows that} $P_0$ is an unstable equilibrium point in $I_N$.
\end{proof}
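The two regimes of Thm.~\ref{thresh} can also be checked numerically. The sketch below (Python/NumPy) integrates \eqref{mat2} by forward Euler on an illustrative triangle graph, once below and once above the threshold $\tau_c^{(1)}=1/2$; the graph and the rates are assumptions chosen only for illustration.

```python
import numpy as np

def steady_state(A, beta, delta, steps=40000, dt=0.005):
    """Forward-Euler integration of dP/dt = beta*(1-P)*(A P) - delta*P."""
    P = np.full(A.shape[0], 0.9)
    for _ in range(steps):
        P = P + dt * (beta * (1 - P) * (A @ P) - delta * P)
    return P

A = np.ones((3, 3)) - np.eye(3)      # triangle: lambda_1(A) = 2, tau_c^(1) = 1/2
P_below = steady_state(A, beta=0.25, delta=1.0)   # tau = 1/4: extinction
P_above = steady_state(A, beta=1.0,  delta=1.0)   # tau = 1: endemic state
print(P_below.max(), P_above)        # ~0 and p_i = 1/2
```

Below the threshold the trajectory decays to the all-healthy state $P_0$, while above it the trajectory settles on the non-zero equilibrium $P_\infty$.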
\section{\so{Equitable Partitions}}\label{sec:equi}
\so{In this section we describe the SIS individual-based model for graphs with equitable partitions.}
The original definition of equitable partition is due to Schwenk \cite{Schwenk}.
\begin{definition}\label{def:eqpart}
Let $G=(V,E)$ be a graph. The partition $\pi=\left\{V_1,...,V_n\right\}$ of the node set $V$ is called \emph{equitable} if
\so{ for all $i,j \in \left\{1, \dots ,n \right\}$}, there is an integer $d_{ij}$ such that
\begin{equation*}
d_{ij}=\mbox{\rm deg}(v,V_j):=\# \left\{e \in E : e=\left\{v,w\right\}, w \in V_j \right\},
\end{equation*}
independently of $v \in V_i$.
\end{definition}
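Definition \ref{def:eqpart} can be verified mechanically: a partition is equitable precisely when $\mathrm{deg}(v,V_j)$ does not depend on the representative $v \in V_i$. A minimal sketch (Python/NumPy; the 3-prism graph, two triangles joined by a matching, is an illustrative example, not taken from the paper):

```python
import numpy as np

def is_equitable(A, cells):
    """True iff deg(v, V_j) is the same for every v in V_i, for all cells i, j."""
    return all(
        len({int(A[v, Vj].sum()) for v in Vi}) == 1
        for Vi in cells for Vj in cells
    )

# 3-prism: two triangles {0,1,2}, {3,4,5} joined by the matching 0-3, 1-4, 2-5
A = np.array([[0,1,1,1,0,0],
              [1,0,1,0,1,0],
              [1,1,0,0,0,1],
              [1,0,0,0,1,1],
              [0,1,0,1,0,1],
              [0,0,1,1,1,0]])
print(is_equitable(A, [[0,1,2],[3,4,5]]))   # True:  d_11 = 2, d_12 = 1
print(is_equitable(A, [[0,1],[2,3,4,5]]))   # False: degrees differ inside a cell
```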
We shall identify the set of all nodes in $V_i$ \rev{with} the $i$-th {\em community} of the whole population.
In particular, each $V_i$ induces a subgraph of $G$ that is necessarily regular.
\begin{remark}\label{rem1}
We use the notation $\mathrm{ lcm}$ and $\gcd$ to denote the least common multiple and greatest common divisor, respectively.
We can observe that the partition of a graph is equitable if and only if
\begin{equation}
d_{ij} = \alpha \frac{\mathrm{ lcm}(k_i,k_j)}{k_i}\nonumber
\end{equation}
where $\alpha$ is an integer satisfying $1 \leq \alpha \leq \mathop{\gcd}(k_i,k_j)$ and $k_i$ is the number of nodes in $V_i$, for all $ i=1,...,n$.
\end{remark}
An equitable partition generates the \emph{quotient graph} $G/\pi$, which is a
\emph{multigraph} with cells as vertices and $d_{ij}$ edges between $V_i$ and $V_j$.
For the sake of explanation, in the following we will identify $G/\pi$ \rev{with the}
(simple)
graph having the same cells vertex set, and where an edge exists between
$V_i$ and $V_j$ if at least one exists in the original multigraph. We shall denote by $B$ the adjacency matrix of the graph $G/\pi$.
\begin{remark}\label{rem2}
In
\cite{Bonaccorsi}
\rev{the special case has been considered} in which each community has a clique structure, i.e., $d_{ii}=k_i-1$ for all $i=1,...,n$; moreover, all nodes belonging to two linked
communities $i$ and $j$ are connected, i.e., $d_{ij}=k_j$. By means of the theory of equitable partitions,
we generalize the cited work and allow any kind of regular graph to represent the
internal structure of each community. Moreover, unlike before, if two communities $i$ and $j$
are connected, each node in community $i$ is connected with \rev{$d_{ij}\leq k_j$ nodes in community $j$.}
\end{remark}
\subsection{Example} \label{subset:example}
\rev{Let us assume that the adjacency matrix $B$ of the quotient graph is given and that,}
for any \so{$i,j \in \left\{1, \dots, n\right\}$}, $b_{ij} \not=0$ implies $d_{ij} = k_j$, i.e., each node in $V_i$ is connected with every node inside $V_j$.
We can explicitly write the adjacency matrix $A$ in a block form. Let
$C_{V_{i}}=(c_{ij})_{k_i \times k_i}$ be the adjacency matrix of the
subgraph induced by $V_i$
and let $J_{k_i \times k_j}$ be the all-ones $k_i \times k_j$ matrix; then
\begin{equation}\label{e:**}
A=
\begin{bmatrix}
C_{V_1}&\varepsilon J_{k_1\times k_2}b_{12}&\cdots& \varepsilon J_{k_1 \times k_n}b_{1n} \\
\varepsilon J_{k_2\times k_1}b_{21}&C_{V_2}&\cdots& \varepsilon J_{k_2 \times k_n}b_{2n}\\
\vdots&\vdots&\ddots&\vdots\\
\varepsilon J_{k_n\times k_1}b_{n1}&\varepsilon J_{k_n\times k_2}b_{n2}&\cdots&C_{V_n}
\end{bmatrix}
\end{equation}
\rev{We observe that \eqref{e:**} represents a block-weighted version of the adjacency matrix $A$.
The derivation of NIMFA for the case of two different infection rates, considered in this paper, results
in the replacement of the unweighted adjacency matrix in the NIMFA system \eqref{mat2} with its weighted
version (see \cite{scoglio} for a deeper explanation)}.
\subsection{The quotient matrix}\label{sec:Q}
\so{We search for a smaller matrix $Q$ that contains the relevant information for the evolution of the system.}
Such a matrix is the \emph{quotient matrix} of the equitable partition.
\rev{ In Prop. \ref{cor} we will see that} $Q$ and $A$ have the same spectral radius. As a consequence, we can compute the spectral radius of $Q$ in order to estimate the epidemic threshold, instead of computing \rev{the spectral
radius of matrix $A$.}
The quotient matrix $Q$ can be defined for any equitable partition: in view of the internal structure of a graph with an equitable partition, it is natural to consider the cell-wise average value of a function on the node set, that is to say the projection of the node space into the subspace of cell-wise constant functions.
\begin{definition}\label{def:proj}
Let $G=(V,E)$ be a graph, let $\pi = \{V_i,\ i = 1, \dots, n\}$ be any partition of the node set $V$, and consider the $n \times N$ matrix $S=(s_{iv})$,
where
\begin{equation*}
s_{iv}=\begin{cases}
\frac{1}{\sqrt{|V_i|}} & \text{ $v \in V_i$}\\
0 & \text{otherwise}.
\end{cases}
\end{equation*}
The \emph{quotient matrix} of $G$ (with respect to the given partition) is
$$Q:=SAS^T.$$
\end{definition}
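A minimal numerical sketch of Definition \ref{def:proj} (Python/NumPy), on an illustrative 3-prism graph (two triangles joined by a matching), checking $SS^T=I$ and the fact that the spectral radii of $Q$ and $A$ coincide:

```python
import numpy as np

def quotient_matrix(A, cells):
    """Build S with s_{iv} = 1/sqrt(|V_i|) for v in V_i, and return Q = S A S^T."""
    n, N = len(cells), A.shape[0]
    S = np.zeros((n, N))
    for i, Vi in enumerate(cells):
        S[i, Vi] = 1.0 / np.sqrt(len(Vi))
    return S @ A @ S.T, S

# 3-prism: two triangle cells joined by a matching (an equitable partition)
A = np.array([[0,1,1,1,0,0],
              [1,0,1,0,1,0],
              [1,1,0,0,0,1],
              [1,0,0,0,1,1],
              [0,1,0,1,0,1],
              [0,0,1,1,1,0]], dtype=float)
Q, S = quotient_matrix(A, [[0,1,2],[3,4,5]])
print(Q)                                        # [[2. 1.] [1. 2.]]
print(np.allclose(S @ S.T, np.eye(2)))          # S S^T = I
print(np.isclose(np.linalg.eigvalsh(Q).max(),
                 np.linalg.eigvalsh(A).max()))  # same spectral radius
```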
Observe that by definition $SS^T=I$.
In the case of the example in Sec. \ref{subset:example} the form of $Q$ is rather simple:
\begin{equation*}\label{eq:qii}
q_{ii}=
\sum_{h=1}^{k_i}
\left(\frac{1}{\sqrt{k_i}}\right)^2\sum_{k=1}^{k_i}(C_{V_i})_{kh}=\frac{1}{k_i}\sum_{h,k=1}^{k_i}(C_{V_i})_{kh}
\end{equation*}
and
\begin{equation*}\label{eq:qij}
q_{ij}=
\frac{1}{\sqrt{k_i k_j}}\sum_{z \in V_i,\, l \in V_j} a_{zl}= \sqrt{k_ik_j}\,\varepsilon b_{ij}.
\end{equation*}
Hence we obtain that
\begin{equation*}\label{Qspec}
Q=\mathop{\rm diag}(d_{ii})+ (\sqrt{k_ik_j}\varepsilon b_{ij})_{i,j=1,...n},
\end{equation*}
where $d_{ii}=\frac{1}{k_i}\sum_{h,k=1}^{k_i}(C_{V_i})_{kh}$ is the internal degree of the subgraph induced by $V_i$.
In the case of general equitable partitions,
the expression for $Q$ writes
\begin{equation*}\label{eq:q}
Q=\mathop{\rm diag}(d_{ii})+ (\sqrt{d_{ij}d_{ji}}\varepsilon b_{ij})_{i,j=1,...n}.
\end{equation*}
There exists a close relationship between the spectral properties of $Q$ and those of $A$. Since the order of $Q$ is smaller than that of $A$, a result in
\cite{godsil} basically shows that $\sigma(Q)\subseteq\sigma(A)$, \rev{where by $\sigma(A)$ we denote, hereafter, the spectrum of a square matrix $A$. Furthermore, the following holds.}
\begin{proposition}\label{cor}
Let $G=(V,E)$ be a graph and let $\pi = \{V_i,\ i = 1, \dots, n\}$ be an equitable partition of the node set $V$.
The adjacency matrix $A$ and the quotient matrix $Q$ have the same spectral radius, i.e.
\[
\lambda_1(Q)=\lambda_1(A).
\]
\end{proposition}
\begin{proof} \rev{See \cite[art. 62]{Graph}}.
%
\end{proof}
\subsection{Complexity reduction}
\fdp{ Prop.~\ref{cor} shows that, once the network structure is encoded in the quotient matrix $Q$, the epidemic threshold $\tau_c^{(1)}$ is expressed by the spectral radius of $Q$.}
\fdp{Now, since the order of $Q$ is smaller than the order of $A$, this can provide a computational advantage. The complexity reduction can be evaluated easily, e.g., in the case of the power iteration method \cite{MatAn}. The power iteration method is a numerical technique for approximating a dominant eigenpair of a diagonalizable matrix $L$,
using the following iteration
\begin{equation*}
y^{h}=L \, y^{h-1}, \quad h=1,2,\ldots
\end{equation*}
for a given initial vector $y^{0}$. As the iteration step $h$ increases, $y^{h}$ approaches a vector which
is proportional to a dominant eigenvector of $L$. If we order the eigenvalues of $L$ so that $|\lambda_1(L)|\geq |\lambda_2(L)| \geq \ldots \geq|\lambda_n(L)|$, the rate of convergence of the method is ruled by $ |\lambda_2(L)|/|\lambda_1(L)|$.}
\fdp{In our case, by the Perron--Frobenius Theorem the dominant eigenvalue $\lambda_1(A)$ is positive and, by Prop. \ref{cor}, $\lambda_1(A)=\lambda_1(Q)$. Furthermore $\sigma(Q) \subseteq \sigma(A)$, hence $\max_{i \geq 2} |\lambda_i (A)| \geq \max_{i \geq 2} |\lambda_i (Q)|$: this means that
the convergence of power iteration for matrix $Q$ is never slower than for matrix $A$. Finally, it is immediate that at each step the computational complexity is $O(n^2)$ for $Q$, whereas for $A$ it is $O(N^2)$.
}
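The comparison can be sketched as follows (Python/NumPy; the matrices are illustrative, with $A$ a 3-prism and $Q$ its quotient matrix):

```python
import numpy as np

def power_iteration(L, steps=100):
    """Approximate the dominant eigenvalue of L via y^h = L y^{h-1}."""
    y = np.random.default_rng(0).random(L.shape[0])   # positive start vector
    for _ in range(steps):
        y = L @ y
        y = y / np.linalg.norm(y)
    return y @ L @ y                 # Rayleigh quotient of the final iterate

A = np.array([[0,1,1,1,0,0],
              [1,0,1,0,1,0],
              [1,1,0,0,0,1],
              [1,0,0,0,1,1],
              [0,1,0,1,0,1],
              [0,0,1,1,1,0]], dtype=float)   # 3-prism, lambda_1 = 3
Q = np.array([[2., 1.], [1., 2.]])           # its quotient matrix
print(power_iteration(A), power_iteration(Q))   # both converge to 3
```

For these matrices the ratios $|\lambda_2|/|\lambda_1|$ are $2/3$ for $A$ and $1/3$ for $Q$, so the iteration on $Q$ is both cheaper per step and faster to converge.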
\subsection{A lower bound for $\tau^{(1)}_c$}
We can write $Q=D + \widehat{B}$, where $D=\mathop{\rm diag}(d_{ii})$ and $\widehat{B}=(\sqrt{d_{ij}d_{ji}}\varepsilon b_{ij})_{i,j=1,...n}$. %
\so{By Weyl's theorem \cite{MatAn} we have}
\begin{equation}\label{lowbound}
\lambda_1(Q) \leq \lambda_1(D)+\lambda_1(\widehat{B})=\max_{1\le i\le n} d_{ii} + \lambda_1(\widehat{B}).
\end{equation}
\rev{From (\ref{tauc}) and by Proposition \ref{cor} }
\begin{equation*}
\tau^{(1)}_c = 1/\lambda_1(A)=1/\lambda_1(Q),
\end{equation*}
\rev{thus a lower bound for the epidemic threshold can be derived from (\ref{lowbound})}
\begin{equation}\label{low2}
\tau^{(1)}_c \geq \tau^\star
= \min_{i} \frac{1}{d_{ii} + \lambda_1(\widehat{B})}.
\end{equation}
In applications, when designing or controlling a network, this value can be adopted to determine a safety region
$\{\tau \le \tau^\star\}$ for the effective spreading rate that guarantees the extinction of epidemics.
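As an illustration, the bound \eqref{low2} can be evaluated directly from the cell degrees. The sketch below (Python/NumPy) uses assumed values $d_{11}=d_{22}=2$, $d_{12}=d_{21}=1$ and $\varepsilon=1$ (a 3-prism-like network), for which the bound is attained:

```python
import numpy as np

# Q = D + B_hat with D = diag(d_ii) and B_hat = (sqrt(d_ij d_ji) eps b_ij)
eps = 1.0                                    # assumed inter-community factor
d = np.array([2.0, 2.0])                     # internal degrees d_ii
B_hat = eps * np.array([[0.0, 1.0],          # sqrt(d_12 d_21) = 1
                        [1.0, 0.0]])
tau_star = 1.0 / (d.max() + np.linalg.eigvalsh(B_hat).max())
print(tau_star)                              # 1/3, equal to tau_c^(1) here
```

Since all $d_{ii}$ are equal in this example, Weyl's inequality holds with equality and $\tau^\star=\tau_c^{(1)}=1/3$.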
\fdp{Fig.~\ref{fig:low} reports on the comparison of the lower bound and the actual threshold \rev{value}: \rev{it refers to} the case of a sample equitable partition composed of interconnected rings for increasing values of the community order.}
\rev{We observe that obtaining a lower bound for $\tau_c^{(1)}$ is meaningful because $\tau_c^{(1)}$ is itself a lower bound for the epidemic threshold $\tau_c$ of the exact stochastic model, i.e. $\tau_c = \alpha \tau_c^{(1)}$ with $\alpha \geq 1$, as anticipated in Sec.~\ref{Epidemic Threshold}}. In fact, smaller values of the effective spreading rate $\tau$, namely $\delta>\beta/\tau_c^{(1)}$, correspond, in the exact stochastic model, to a region where the decay towards the healthy state decreases exponentially fast in time. %
\rev{By forcing the effective spreading rate below $\tau^\star$, one ensures that the epidemic will go extinct in a reasonable time frame (we recall that, above the threshold, the overall-healthy state is only reached after an unrealistically long time).}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{Lower.eps}
\caption{\so{Lower bound \eqref{low2} versus epidemic threshold: comparison for different values of $k$ in a $40$-communities network. The internal structure of each community is a ring and $d_{ij}=2$ for all $i,j=1, \ldots, n$.}}\label{fig:low}
\end{figure}
\so{Equality can be attained in \eqref{low2}: consider for instance the graph described by the adjacency matrix $A$ in \eqref{e:**}. Furthermore, we may require that all $V_i$'s have the same number of nodes $k_i=k$ and the same internal degree $d_{ii}=d$, $i=1,\ldots,n$. In this case $Q= d\,{\rm I}_n + \widehat{B}$, where $\widehat{B}:=(k \varepsilon b_{ij})_{i,j=1,...n}$, and
\begin{equation*}
\lambda_1(Q)= d + k\varepsilon \lambda_1(B),
\end{equation*}
which is the exact value of $\lambda_1(A)$ and consequently of $\tau^{(1)}_c$.}
\section{Infection Dynamics for Equitable Partitions}\label{sec:InfDyn}
\so{In this section we show under which conditions matrix $Q$ can be used in order to express the epidemic dynamics
introduced in \eqref{mat2}. This allows us to describe the time-change of the infection probabilities by a system of $n$
differential equations instead of $N$. }
\so{
\begin{theorem}\label{reduction}
Let $G=(V,E)$ be a graph and $\pi = \{V_j,\ j = 1, \dots, n\}$ an equitable partition of the node set $V$. Let $G_j$ be the subgraph of $G=(V,E)$ induced by cell $V_j$.
If $p_h(0)=p_w(0)$ for all $h, w \in G_j$ and \rev{for all $j=1, \dots, n$}, then $p_h(t)=p_w(t)$ for all $t > 0$. In this case we can reduce the number of equations representing the time-change of infection probabilities using the quotient matrix $Q$.
\end{theorem}
}
\begin{proof}
Let $\overline{p}_j(t)= \frac{1}{k_j}\sum_{h \in G_j} p_h(t)$ be
the average value of the infection probabilities at time $t$ of nodes in $G_j$.
Then starting from \eqref{mat2}, we can write a new system of differential equations
\begin{eqnarray}\label{mean}
&&\frac{d \left(p_h(t) - \overline{p}_j(t)\right)}{dt}= - \delta (p_h(t)- \overline{p}_j(t))+ \beta (1-p_h(t)) \sum_{z=1}^N a_{h z} p_z (t)\nonumber\\
&&\hskip32mm - \frac{1}{k_j} \beta \sum_{l \in G_j} (1-p_l(t))\sum_{z=1}^N a_{lz}p_z(t), \qquad \forall h \in G_j, \quad j=1, \ldots, n.
\end{eqnarray}
\rev{From \eqref{mean} we have
\begin{align*}
\frac{d \left(p_h(t) - \overline{p}_j(t)\right)}{dt} &= - \delta (p_h(t)-\overline{p}_j(t)) + \beta \left(\sum_{m=1}^n \sum_{z \in G_m} a_{hz} p_z(t)- \frac{1}{k_j} \sum_{l \in G_j}\sum_{m=1}^n \sum_{z \in G_m} a_{lz} p_z(t)\right)\\
& - \beta \left(p_h(t) \sum_{m=1}^n \sum_{z \in G_m} a_{hz} p_z(t) - \frac{1}{k_j} \sum_{l \in G_j} p_l(t) \sum_{m=1}^n \sum_{z \in G_m} a_{lz} p_z(t) \right),
\end{align*}
that can be written as
\begin{align*}
\frac{d \left(p_h(t) - \overline{p}_j(t)\right)}{dt} &= - \delta (p_h(t)-\overline{p}_j(t)) + \beta \left(\frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left(a_{hz}-a_{lz}\right)p_z(t)\right)\\
& -\beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left( a_{hz} p_h(t)- a_{lz} p_l(t)\right) (p_z(t) -{\overline{p}}_m(t))\\
& - \beta\frac{1}{k_j} \left ( \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left(a_{hz}p_h(t)- a_{lz} p_l(t) \right)\right) {\overline{p}}_m(t).
\end{align*}
Whence, since $\sum_{z \in G_m} a_{hz} = d_{jm}$, for $h \in G_j$ and for all $m=1, \dots, n$, we have
\begin{align}\label{mean11}
\frac{d \left(p_h(t) - \overline{p}_j(t)\right)}{dt} &= - \left[ \sum_{m=1}^n \beta d_{jm} \overline{p}_m(t)+ \delta \right](p_h(t)- \overline{p}_j(t)) \\
& + \beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left(a_{hz}-a_{lz}\right)p_z(t)\nonumber \\
& - \beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left( a_{hz} p_h(t)- a_{lz} p_l(t)\right) (p_z(t) -{\overline{p}}_m(t)). \nonumber
\end{align}
Now, we note that
$$
- \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left( a_{hz} p_h(t)- a_{lz} p_l(t)\right) (p_z(t) -{\overline{p}}_m(t))
$$
can be written as
$$
- \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left((p_h(t)-{\overline{p}}_j(t))a_{hz} - (p_l(t) - {\overline{p}}_j(t))a_{lz}\right)\left(p_z(t) - {\overline{p}}_m(t)\right)
$$
$$
-\frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} {\overline{p}}_j(t) (a_{hz}-a_{lz})\left(p_z(t)-{\overline{p}}_m(t)\right),
$$
whence we can rewrite \eqref{mean11} as
\begin{align*}
\frac{d \left(p_h(t) - \overline{p}_j(t)\right)}{dt} &= - \left[ \sum_{m=1}^n \beta d_{jm} \overline{p}_m(t)+ \delta \right](p_h(t)- \overline{p}_j(t)) \\
& + \beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left(a_{hz}-a_{lz}\right) \left(p_z(t)- {\overline{p}}_m(t) + {\overline{p}}_m(t) \right)\\
& - \beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left((p_h(t)-{\overline{p}}_j(t))a_{hz} - (p_l(t) - {\overline{p}}_j(t))a_{lz}\right)\left(p_z(t) - {\overline{p}}_m(t)\right)\\
& - \beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} {\overline{p}}_j(t) (a_{hz}-a_{lz})\left(p_z(t)-{\overline{p}}_m(t)\right).
\end{align*}
Finally, since $\frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z \in G_m} \left(a_{hz}-a_{lz}\right){\overline{p}}_m(t) = 0$,
we can consider the following system
\begin{eqnarray}\label{mean2}
&&\frac{d \left(p_h(t) - \overline{p}_j(t)\right)}{dt}= -\left[ \sum_{m=1}^n \beta d_{jm} \overline{p}_m(t)+ \delta \right](p_h(t)- \overline{p}_j(t)) \nonumber\\
&&+ \beta \frac{1}{k_j} \sum_{l \in G_j}\sum_{m=1}^n \sum_{z\in G_m} (a_{hz}-a_{lz})(p_z(t) -\overline{p}_m(t))(1-\overline{p}_j(t))
\nonumber\\
&&-\beta \frac{1}{k_j} \sum_{l \in G_j} \sum_{m=1}^n \sum_{z\in G_m} ((p_h(t)- \overline{p}_j(t))a_{h z}- (p_l(t)-\overline{p}_j(t))a_{lz})(p_z(t)-\overline{p}_m(t)),\nonumber\\
&&\hskip80mm \forall h \in G_j, \quad j=1, \ldots, n\nonumber
\end{eqnarray}}
Now let us denote by $g(t)$ the solution of \eqref{mean}, where $g: \mathbb{R} \rightarrow \mathbb{R}^N$ and consider the case where
\begin{equation}\label{SameIniz}
p_h(0)- \overline{p}_j(0)=0, \quad \forall h \in G_j, \quad j=1, \ldots, n,
\end{equation}
i.e., $p_h(0)=p_w(0)$ for all $h, w \in G_j$. Then, from \eqref{mean2}, we can easily see that the identically zero function $g \equiv 0$
is the unique solution of \eqref{mean} with initial conditions \eqref{SameIniz}.
Indeed $g\equiv 0$ means that for all $t \geq 0$,
$p_h(t)=p_w(t)$ for all $h,w \in G_j$, $j=1, \ldots, n$. \rev{Moreover, the vector $P(t)$ such that $p_h(t)=p_w(t)$ for all $h,w \in G_j$, $j=1, \ldots, n$, is a solution of \eqref{mat2} and it is unique in $[0,1]^N$ with respect to the initial conditions \eqref{SameIniz} \cite[Ch.~2, Sec.~2.2]{DiffEqandDynamicalSystem}. Thus we can conclude that $g \equiv 0$ is the unique solution of \eqref{mean} in $[-1,1]^N$.}
Basically, we have shown that the following subset of $I_N$
\begin{eqnarray*}
M= \left\{ P \in [0,1]^N \,|\, p_1= \ldots = p_{k_1}= \overline{p}_1,\ p_{k_1+1}= \ldots = p_{k_1 + k_2}= \overline{p}_2, \right . \nonumber\\
\left . \ldots,\ p_{k_1+\ldots+k_{n-1}+1}= \ldots = p_{N} =\overline{p}_n\right\}
\end{eqnarray*}
is a positively invariant set for the system \eqref{mat2}.
This allows us to reduce the system \eqref{mat2} of $N$ differential equations and describe the time evolution of the infection probabilities by a system of $n$ equations involving the matrix $Q$.
Indeed, considering $P(0) \in M$ and $\overline{P}=({\overline{p}}_1, \dots, {\overline{p}}_n)$, we can write
\begin{eqnarray}\label{eq:red_sys_1}
\frac{d\overline{p}_j(t)}{dt}& =& \beta (1-\overline{p}_j(t))\sum_{m=1}^n \varepsilon b_{jm} d_{jm} \overline{p}_m(t) \\
& +& \beta d_j(1-\overline{p}_j(t))\overline{p}_j(t)- \delta \overline{p}_j(t), \qquad j=1,\ldots,n\nonumber
\end{eqnarray}
Hence, based on Thm. 2.1 in \cite{godsil}, we observe that
\begin{equation*}
q_{ij}=(k_j/k_i)^{1/2}d_{ji}.
\end{equation*}
In our case, this relation yields
\begin{equation*}
d_{jm}=\left(\frac{k_j}{k_m}\right)^{- 1/2}\frac{ q_{mj}}{\varepsilon}=\left(\frac{k_j}{k_m}\right)^{- 1/2}\frac{q_{jm}}{\varepsilon},
\end{equation*}
where the last equality holds because $Q$ is symmetric. We can rewrite (\ref{eq:red_sys_1}) as
\begin{eqnarray}\label{eq:red_sys_2}
\frac{d\overline{p}_j(t)}{dt}& =& \beta(1-\overline{p}_j(t))\sum_{m=1, m \neq j}^n \left(\frac{k_j}{k_m}\right)^{- 1/2} q_{jm}\, \overline{p}_m(t) \nonumber \\
& +& \beta q_{jj}(1-\overline{p}_j(t))\overline{p}_j(t)- \delta \overline{p}_j(t), \qquad j=1,\ldots,n,
\end{eqnarray}
where $q_{jj}=d_{jj}= \lambda_1(C_{V_j})$.
The matrix representation of \eqref{eq:red_sys_2}
is the following
\begin{equation}\label{eq:red_sys_3}
\frac{d\overline{P}(t)}{dt}= \beta \left({\rm I}_n- \operatorname{diag} (\overline{p}_j(t)) \right) \widetilde Q \overline{P}(t) - \delta \overline{P}(t),
\end{equation}
where $\widetilde Q= \operatorname{diag}\left(\frac{1}{\sqrt{k_j}}\right) Q\operatorname{diag} \left(\sqrt{k_j}\right)$. It is immediate to observe that $\sigma(Q)= \sigma(\widetilde{Q})$.%
\end{proof}
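The reduced system \eqref{eq:red_sys_3} lends itself to direct numerical integration. The following Python sketch uses a plain forward-Euler scheme on a hypothetical two-community matrix $\widetilde Q$ (the matrix, parameter values, and function names are illustrative assumptions, not data from this paper); it is not the authors' implementation.

```python
import numpy as np

def nimfa_reduced_rhs(p_bar, Q_tilde, beta, delta):
    # Right-hand side of d p_bar/dt = beta*(I - diag(p_bar)) Q_tilde p_bar - delta*p_bar
    Qp = Q_tilde @ p_bar
    return beta * (Qp - p_bar * Qp) - delta * p_bar

def integrate(p0, Q_tilde, beta, delta, dt=1e-3, steps=20000):
    # Plain forward-Euler integration, clipped to [0,1]^n for safety
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = np.clip(p + dt * nimfa_reduced_rhs(p, Q_tilde, beta, delta), 0.0, 1.0)
    return p

# Hypothetical 2-community quotient-type matrix; tau = beta/delta = 2 > 1/lambda_1(Q_tilde),
# so the trajectory should settle on a non-zero steady state.
Q_tilde = np.array([[1.0, 0.3], [0.3, 2.0]])
p_inf = integrate([0.5, 0.5], Q_tilde, beta=2.0, delta=1.0)
```

Above the threshold the Euler fixed point coincides with the equilibrium of \eqref{eq:red_sys_3}, so the residual of the right-hand side at `p_inf` should be essentially zero.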
\begin{corollary}\label{corSteady}
When $\tau > \tau^{(1)}_c$ the non-zero steady-state $P_\infty$ of the system \eqref{mat2} belongs to $M -\left\{0\right\}$.
\end{corollary}
\so{
\begin{proof}
In Theorem \ref{thresh} we have shown that when $\tau >\tau^{(1)}_c$, the system \eqref{mat2} has a globally asymptotically stable equilibrium $P_\infty$ in $I_N - \left\{0\right\}$; hence for any initial state $P(0) \in I_N - \left\{0\right\}$
\begin{equation*}
\lim_{t \rightarrow \infty} ||P(t) - P_{\infty}||= 0.
\end{equation*}
We have proved in Thm. \ref{reduction} that if $P(0) \in M$ then $P(t) \in M$ for all $t> 0$; thus we can conclude that $P_\infty$ must be in $M- \left\{0\right\}$ when $\tau >\tau^{(1)}_c$.
\end{proof}}
\so{Basically, Corollary~\ref{corSteady} says that one can compute the $n \times 1$ vector $\overline{P}_{\infty}$ of the reduced system \eqref{eq:red_sys_3} in order to obtain the $N \times 1$ vector $P_{\infty}$ of \eqref{mat2}: indeed $p_{z \infty}=\overline{p}_{j \infty}$ for all $z \in G_j$ and $j=1, \ldots, n$. This provides a computational advantage, since one solves a system of $n$ equations instead of $N$.}
Moreover, since $P_{\infty}$ is a globally asymptotically stable equilibrium in $I_N-\left\{0\right\}$, the trajectories starting outside $M$ will approach those starting in $M-\left\{0\right\}$. Clearly, the same holds for trajectories starting in $I_N$ and in $M$ when $\tau \leq \tau^{(1)}_c$. The numerical experiments in Fig.~\ref{fig:averaged} illustrate this fact.
\so{The statements proved above can be easily \rev{verified}, with a direct computation, in the simple case of graphs considered in \cite{Bonaccorsi} (see Remark \ref{rem2}). Indeed for all $h,w \in G_j$, $j=1, \ldots, n$, we have
\begin{eqnarray}\label{compexp}
\frac{d(p_h(t)-p_w(t))}{dt}=&& -\, \delta \, (p_h(t)-p_w(t)) + \beta \sum_{z \notin G_j} \left[(1-p_h(t)) a_{hz} - (1-p_w(t))a_{wz}\right] p_z(t)\nonumber\\
&&+ \, \beta \ \sum_{z \in G_j, z \neq h,w} \left[(1-p_h(t)) a_{hz} - (1-p_w(t))a_{wz}\right] p_z(t) \nonumber\\
&&+\, \beta \sum_{z= h,w} \left[(1-p_h(t)) a_{hz} - (1-p_w(t))a_{wz}\right] p_z(t)
\end{eqnarray}
Since in this special case $a_{hz}=a_{wz}$, for all $z \in V$ s.t. $z \neq h,w$, we can rewrite \eqref{compexp} as
\begin{equation*}
\frac{d(p_h(t)-p_w(t))}{dt}= - \left[\delta + \beta\left( \sum_{z=1, z \neq h,w} ^N a_{hz} p_z(t) + 1 \right) \right] \left(p_h(t)-p_w(t)\right),
\end{equation*}
whence
\begin{equation*}\label{sol}
p_h(t)-p_w(t)= \left(p_h(0)-p_w(0)\right) e^{-\int_0^{t} \left[\delta + \beta\left( \sum_{z=1, z \neq h,w} ^N a_{hz} p_z(s) + 1\right)\right] ds }.
\end{equation*}
Thus, if $p_h(0)=p_w(0)$, then by uniqueness of the solution $p_h(t)=p_w(t)$ for all $t>0$, as we have proved in Thm. \ref{reduction}; if instead the initial conditions differ, the distance between $p_h(t)$ and $p_w(t)$ decreases exponentially.}
\begin{remark}
\fdp{The framework of quotient graphs extends the NIMFA model to graphs with a prescribed community network structure. It reduces to the original NIMFA model when $k_j=1$ for all $j=1,\ldots,n$.}
\end{remark}
\subsection{\so{Steady-state}}
We focus now on the \so{computation of the} steady-state \rev{$P_\infty = \big(p_{i\infty} \big)_{i=1,\dots,N}$} of system \eqref{mat2}.
\so{To this aim, by Corollary \ref{corSteady}, we can compute the steady-state \rev{$\overline{P}_\infty = \big({\overline{p}}_{j\infty} \big)_{j=1,\dots,n}$} of the reduced system \eqref{eq:red_sys_3} and obtain}
\begin{equation*}
\beta(1-{\overline{p}}_{j \infty})\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2}
q_{jm} {\overline{p}}_{m\infty}- \delta {\overline{p}}_{j \infty}=0, \qquad j=1, \dots, n
\end{equation*}
\so{whence
\begin{eqnarray}\label{meta}
{\overline{p}}_{j \infty} &=& \frac{\beta \sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2}
q_{jm} {\overline{p}}_{m \infty}}{\beta \sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2}
q_{jm} {\overline{p}}_{m \infty}+ \delta}\nonumber = 1-\frac{1}{1+ \tau \sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2}
q_{jm} {\overline{p}}_{m \infty}}\nonumber \\
\hskip -2mm &=& 1-\frac{1}{1+\tau g_j\left(\overline{P}\right)}
\end{eqnarray}}
\so{where
{\small\begin{equation*}
g_j\left(\overline{P}\right):=\left(d_{jj}+ \varepsilon\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2} \sqrt{d_{jm}d_{mj}}\right)
-\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2}\!\!q_{jm}(1-{\overline{p}}_{m\infty}).
\end{equation*}}}
\so{From \eqref{meta} it follows that the steady-state infection probability of any community $j$ is bounded by
\begin{equation}\label{boundmeta}
0 \leq {\overline{p}}_{j \infty} \leq 1-\frac{1}{1+\tau(d_{jj}+ \varepsilon\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2} \sqrt{d_{jm}d_{mj}})},
\end{equation}
where the inequality holds true because ${\overline{p}}_{j \infty} \in [0,1]$ for all $j=1, \dots, n$.}
\so{By introducing $1-{\overline{p}}_{m \infty}=\frac{1}{1+\tau \sum_{z=1}^n \left(\frac{k_m}{k_z}\right)^{- 1/2} q_{mz} {\overline{p}}_{z \infty}}$ in \eqref{meta}, we can express ${\overline{p}}_{j \infty}$ as a continued fraction iterating the formula
\begin{equation*}
x_{j,s+1}=f_j(x_{1,s},\ldots,x_{n,s})
=1- \frac{1}{1+ \tau g_j(x_{1,s},\ldots,x_{n,s})}.
\end{equation*}}
\so{As shown in~\cite{VanMieghem2009}, after a few iterations of the formula above one can obtain a good approximation of ${\overline{p}}_{j \infty}$, with some loss of accuracy in the calculation around $\tau=\tau_c$. Ultimately, such a numerical estimate can be used to improve the bound in (\ref{boundmeta}).
}
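A minimal Python sketch of this fixed-point scheme, assuming a hypothetical symmetric quotient matrix $Q$ and community sizes $k$ (all values below are illustrative, not taken from the paper), could read:

```python
import numpy as np

def steady_state_iteration(Q, k, tau, iters=200):
    """Iterate x_j <- 1 - 1/(1 + tau * sum_m (k_j/k_m)^{-1/2} q_{jm} x_m),
    starting from the all-ones vector, as in the scheme described in the text."""
    k = np.asarray(k, dtype=float)
    # W[j, m] = (k_j/k_m)^{-1/2} * q_{jm} = sqrt(k_m/k_j) * q_{jm}
    W = np.asarray(Q, dtype=float) * np.sqrt(np.outer(1.0 / k, k))
    x = np.ones(len(k))
    for _ in range(iters):
        x = 1.0 - 1.0 / (1.0 + tau * (W @ x))
    return x

# Hypothetical 2-community example, above threshold (tau * lambda_1 > 1)
x_inf = steady_state_iteration(np.array([[1.0, 0.5], [0.5, 2.0]]),
                               np.array([2.0, 4.0]), tau=2.0)
```

Starting from the all-ones vector, the iterates decrease monotonically toward the largest fixed point, so a couple of hundred iterations suffice away from the threshold.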
\so{If we consider a regular graph where communities have the same number of nodes, then
\begin{equation*}\label{eq:approx}
{\overline{p}}_{j \infty}= 1-\frac{1}{\tau \left(d_{jj}+ \varepsilon\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2} \sqrt{d_{jm}d_{mj}}\right)}
\end{equation*}
is the exact solution of \eqref{meta}.}
Now let
$ r_j= d_{jj}+ \varepsilon\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2} \sqrt{d_{jm}d_{mj}}$ and $r(1)=\min_j r_j$;
relying on the estimate ${\overline{p}}_{j \infty} \approx 1-1/(\tau r_j)$ we can express the steady-state average fraction of infected nodes $y_{\infty}(\tau)=(1/N)\sum_{j=1}^n k_j {\overline{p}}_{j \infty}(\tau)$ by
\begin{equation}\label{frac}
y_{\infty}(\tau) \approx 1 - \frac{1}{\tau N} \sum_{j=1}^n k_j \frac{1}{d_{jj}+ \varepsilon\sum_{m=1}^n \left(\frac{k_j}{k_m}\right)^{- 1/2}\sqrt{d_{jm}d_{mj}}}.
\end{equation}
\rev{According to the analysis reported in \cite{VanMieghem2009}, the approximation \eqref{frac} becomes more precise as the difference $r(2)-r(1)$ becomes smaller, where $r(2)$ is the second smallest of the $r_j$'s.
In Sec.~\ref{exp} we report on some related numerical experiments.}
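For illustration, the approximation \eqref{frac} can be evaluated directly from the matrix $D=(d_{jm})$ and the community sizes $k_j$; the Python sketch below mirrors the displayed formula term by term (the matrix and sizes used in the example are hypothetical):

```python
import numpy as np

def y_inf_approx(tau, k, D, eps):
    """Approximate steady-state fraction of infected nodes, following Eq. (frac):
    y_inf ~ 1 - (1/(tau*N)) * sum_j k_j / r_j, with
    r_j = d_jj + eps * sum_m (k_j/k_m)^{-1/2} * sqrt(d_jm * d_mj)."""
    k = np.asarray(k, dtype=float)
    D = np.asarray(D, dtype=float)
    N = k.sum()
    scale = np.sqrt(np.outer(1.0 / k, k))            # (k_j/k_m)^{-1/2}
    r = np.diag(D) + eps * (scale * np.sqrt(D * D.T)).sum(axis=1)
    return 1.0 - (k / r).sum() / (tau * N)

# Hypothetical 2-community example; note k_1*d_12 = k_2*d_21 for consistency
y = y_inf_approx(tau=2.0, k=[2, 4], D=[[1.0, 2.0], [1.0, 3.0]], eps=0.5)
```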
\subsection{Examples}\label{ex}
In Fig.~\ref{fig:fig11} we provide an example of a graph which has an equitable partition with respect to $V_1=\{v_1\}$, $V_2=\{v_2,v_3\}$, $V_3=\{v_4,v_5,v_6,v_7\}$, $V_4=\{v_8,v_9,v_{10},v_{11},v_{12},v_{13}\}$.
The corresponding quotient matrix reads
\[
Q=
\begin{bmatrix}
0 & \sqrt{2}\,\varepsilon & 2\varepsilon & 0 \\
\sqrt{2}\,\varepsilon & 1 & \sqrt{2}\,\varepsilon & \sqrt{3}\,\varepsilon \\
2\varepsilon & \sqrt{2}\,\varepsilon & 2 & 0\\
0 & \sqrt{3}\,\varepsilon & 0 & 3 \\
\end{bmatrix}
\]
\so{From~\eqref{eq:red_sys_3} we have that the steady-state can be computed by
\begin{equation*}\label{matM}
\overline{P}_{\infty}= \frac{\beta}{\delta}({\rm I}_n - \mathop{\rm diag} (\overline{p}_{j\infty}))\mathop{\rm diag}(1/s_j)\, Q \,\mathop{\rm diag}(s_j)\, \overline{P}_{\infty} ,
\end{equation*}
where \rev{$s_j$ is the $j$-th entry} of the vector $s=(1, \sqrt{2}, 2, \sqrt{6})$.}
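As a sanity check, the spectral radius of the example quotient matrix above, and hence the threshold $\tau_c^{(1)}=1/\lambda_1(Q)$, can be computed numerically; the sketch below uses $\varepsilon=0.3$, the value adopted in the experiments of Sec.~\ref{exp}, where $\tau_c^{(1)}=0.3178$ is quoted:

```python
import numpy as np

eps = 0.3  # coupling value used in the numerical experiments
s2, s3 = np.sqrt(2.0), np.sqrt(3.0)
# Quotient matrix of the example graph (symmetric by construction)
Q = np.array([
    [0.0,       eps * s2, 2.0 * eps, 0.0     ],
    [eps * s2,  1.0,      eps * s2,  eps * s3],
    [2.0 * eps, eps * s2, 2.0,       0.0     ],
    [0.0,       eps * s3, 0.0,       3.0     ],
])
lam1 = np.linalg.eigvalsh(Q).max()   # spectral radius, equal to lambda_1(A)
tau_c = 1.0 / lam1                   # epidemic threshold tau_c^(1)
```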
\subsection{Numerical experiments}\label{exp}
In Figures~\ref{fig:fig2} and~\ref{fig:fig3} \so{we provide a comparison between the solution of
the reduced ODE system \eqref{eq:red_sys_3} for the graph in Fig.~\ref{fig:fig11} and the
average of $50\cdot 10^4$ sample paths resulting from a discrete event simulation} \rev{of the exact SIS process}. \rev{The discrete event simulation is based on the generation} of independent Poisson processes for both the infection of healthy nodes and the recovery of infected ones. We observe that, as expected, NIMFA provides an upper bound on the dynamics of the infection probabilities. Also, in Fig.~\ref{fig:fig2} we observe that the dynamics of the communities that are initially healthy is characterized by a unique maximum of the infection probability, which decreases afterwards. The communities initially infected, conversely, show a monotonic decrease of the infection probability.
\begin{figure}[th!]
\centering
\includegraphics[width=0.750\textwidth]{SovrappSotto1inf.eps}
\caption{Dynamics of infection probabilities for each community of the network in Fig.~\ref{fig:fig11}: simulation versus numerical solutions of (\ref{eq:red_sys_3}); $\tau = \beta/\delta < \tau_c^{(1)}=0.3178$, with $\beta=0.29$ and $\delta=1$, \rev{$\varepsilon=0.3$}. At time $0$ the only infected node is node $1$.}
\label{fig:fig2}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[width=0.75\textwidth]{SovrappSopra1inf}
\caption{Dynamics of infection probabilities for each community of the network in Fig.~\ref{fig:fig11}: simulation versus numerical solutions of (\ref{eq:red_sys_3}); $\tau = \beta/\delta > \tau_c^{(1)}=0.3178$, with $\beta=1.5$ and $\delta=0.3$, \rev{$\varepsilon=0.3$}; initial conditions as in Fig.~\ref{fig:fig2}.} \label{fig:fig3}
\end{figure}
\rev{Fig.~\ref{fig:completo2} depicts the same comparison in the case of} a network with eighty nodes partitioned into four communities; each community is a complete graph, and all nodes belonging to two linked communities are connected (see Remark~\ref{rem2}). The agreement between NIMFA and simulations improves compared to Fig.~\ref{fig:fig3}. This is expected, because the accuracy of NIMFA is known to increase with the network order $N$, under the assumption that the nodes' degrees also increase with the number of nodes. Conversely, it is less accurate in, e.g., lattice graphs or regular graphs with fixed degree not depending on $N$~\cite{VanMieghem2009,Accuracy}.
\begin{figure}[t]
\centering
\includegraphics[width=0.80\textwidth]{SopraCOMPLETO}
\caption{\so{Infection probabilities for each community in a network with $N=80$, $d_{ii}=k_i-1=19$ and $d_{ij}=20$, for all $i,j=1,\ldots,4$: simulation versus numerical solutions of (\ref{eq:red_sys_3}); $\tau = \beta/\delta > \tau_c^{(1)}=0.0348$, with $\beta=5$ and $\delta=2$, \rev{$\varepsilon=0.3$}; at time 0 all nodes of the first community are infected.}}\label{fig:completo2}
\end{figure}
\rev{Fig.~\ref{fig:averaged} depicts the solutions of system~\eqref{mat2} for each node belonging to $V_3$ in the graph of Fig.~\ref{fig:fig11}; here the nodes in $V_3$ have different initial infection probabilities $p_i(0)$. These solutions are compared with the one computed using the reduced system~\eqref{eq:red_sys_3}, in the case when the initial conditions for those nodes are the same, precisely equal to the mean value of the $p_i(0)$'s. As expected, trajectories starting outside the invariant set $M$ described in Thm.~\ref{reduction} tend to approach the one starting in $M$ as time elapses.}
\begin{figure}[t]
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{AQdynSottoSoglia.eps}\put(-168,120){a)}
\end{minipage}
\hskip4mm
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{AQdynSopraSoglia.eps}\put(-162,120){b)}
\end{minipage}
\caption{\so{Comparison between the dynamics of the original system~\eqref{mat2} for each of the nodes belonging to $V_3$ in Fig.~\ref{fig:fig11}, for different initial conditions, and the dynamics of the reduced system \eqref{eq:red_sys_3}. In the latter case the initial conditions for each node are the mean value of the $p_i(0)$'s. a) Case below the threshold: $\beta=0.29$, $\delta=1$, \rev{$\varepsilon=0.3$}; b) case above the threshold: $\beta=1.5$, $\delta=0.3$}, \rev{$\varepsilon=0.3$}.}
\label{fig:averaged}
\end{figure}
\rev{Finally, we report on numerical experiments about the steady-state average fraction of infected nodes. More precisely, Fig.~\ref{fig:frac1} compares the value obtained by solving the original system \eqref{eqpoints} and the value obtained from approximation \eqref{frac}, as a function of $\tau$.}
\so{In Fig.~\ref{fig:SIS}, instead, we report the comparison between the steady-state average fraction of infected nodes, as a function of $\tau$, computed via NIMFA and via simulations. We consider a graph of regular degree $d=10$ and $N=500$, whose communities are cliques, \rev{each with the same number of elements $k$. We repeat the same calculation for different values of $k$ in the communities}. As can be observed, our model and the exact SIS model are in good agreement, and the root mean square error between them decreases as $k$ increases.}
\begin{figure}[t]
\centering
\includegraphics[width=0.60\textwidth]{FracNodes.eps}
\put(-220,160){a)}\put(-220,70){b)}
\caption{\so{Steady-state average fraction of infected nodes, for different values of $\tau$: comparison between the approximation \eqref{frac} and the exact computation \eqref{eqpoints}; a) the graph is the one considered in Fig.~\ref{fig:fig11} and b) the one considered in Fig.~\ref{fig:completo2}.}}\label{fig:frac1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[height=50mm,width=0.60\textwidth]{Nfisso500_NimfaSIS_3b.eps}
\put(-120,80){\vector(-1,1){45}\hskip14mm $k=1,2,5,10$}
\caption{\so{Steady-state average fraction of infected nodes for different values of $k$ and $\tau$,
for a graph of regular degree $d=10$ and $N=500$. The internal structure of each community is a clique. Both NIMFA and simulation results are shown. The inset represents the root mean square error between the simulated and the approximated fraction of infected nodes.}}\label{fig:SIS}
\end{figure}
\section{Almost equitable partitions}\label{AlmEq}
\rev{In this section we consider graphs where the partition of the vertex set is \textsl{almost equitable}. Thus, we can relax the initial assumption on the regularity of the internal community structure implied by the definition of equitable partition.}
\begin{definition}\label{de:aep}
The partition $\pi=\left\{V_1,...,V_n\right\}$ is called \emph{almost equitable} if \so{for all $i,j \in \left\{1, \dots ,n \right\}$} with $i \neq j$
there is an integer $d_{ij}$ such that
\begin{equation*}
d_{ij}=\deg(v,V_j):=\# \left\{e \in E : e=\left\{v,w\right\}, w \in V_j \right\}
\end{equation*}
for every $v \in V_i$.
\end{definition}
\rev{The difference between equitable and almost equitable partitions is that, in the former case, the subgraph $G_i$ of $G$ induced by $V_i$ is regular, whereas the latter definition does not impose any structural condition on $G_i$.}
Ideally, we can think of a network $\tilde{G}$ whose node set has an almost equitable partition as a network $G$ with an equitable partition where links inside one or more communities have been added or removed.
\rev{The objective is to obtain lower bounds on the threshold $\tau_c^{(1)}$, useful in determining a safety region for the extinction of epidemics. We start by assuming that links are only added.}
To this aim, let us consider two graphs $G=(V,E)$ and $\tilde{G}=(V,\tilde{E})$ with the same partition $\{V_1, \dots, V_n\}$, but different edge sets $E\varsubsetneq \tilde{E}$, and assume $G$ to have an equitable partition but $\tilde{G}$ to have merely an almost equitable partition. Then, if $\tilde{A}$ and $A$ are the adjacency matrices of $\tilde{G}$ and $G$, respectively, it holds that
\[
\tilde{A} = A + R,
\]
where $R = \mathop{\rm diag}(R_1, \dots, R_n)$; the dimension of $R_i$ is $k_i \times k_i$ for $i=1,\ldots,n$, where, as before, $k_i$ is the order of $G_i$ and $n$ is the number of communities.
Weyl's theorem applied to $\tilde{A}=A+R$ yields
\begin{equation}\label{IneqA}
\lambda_1(\tilde{A}) \leq \lambda_1(A) + \lambda_1(R).
\end{equation}
\rev{In the following we shall provide a more explicit formulation of the right hand side of \eqref{IneqA} involving the number of added edges.}
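Weyl's inequality holds for arbitrary symmetric matrices, so \eqref{IneqA} can be probed numerically on random instances; the sketch below uses dense random 0/1 matrices as stand-ins for $A$ and the block-diagonal perturbation $R$ of the text (sizes and densities are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sym01(n, p):
    # Random symmetric 0/1 matrix with zero diagonal (an adjacency matrix)
    U = (rng.random((n, n)) < p).astype(float)
    A = np.triu(U, 1)
    return A + A.T

A = random_sym01(30, 0.2)   # stands in for the equitable-partition graph
R = random_sym01(30, 0.1)   # stands in for the intra-community perturbation
lam = lambda M: np.linalg.eigvalsh(M).max()
# Weyl: lambda_1(A + R) <= lambda_1(A) + lambda_1(R)
```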
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{almostUpperDensity.eps}
\caption{Comparison of the bound and the spectral radius for a $40$-community network. Each community has $k=25$ nodes, whose
internal structure is initially a ring; the perturbed graph is obtained by adding in each community the same increasing number of links. The spectral radius of the adjacency matrix $\tilde{A}$ \rev{($A_{ae}$ in the legend, where the subscript ``ae'' stands for ``almost equitable'')} is compared to the upper bound as a function of the links added in each community.} \label{fig:fig4}
\end{figure}
\begin{proposition}\label{propprop}
Let $G=(V,E)$ and $\tilde{G}=(V,\tilde{E})$ be two graphs
and consider a partition $\{V_1, \dots, V_n\}$ of the set of vertices $V$; we shall denote by $G_i=(V_i,E_i)$ and $\tilde{G_i}=(V_i,\tilde{E_i})$ the subgraphs of $G$ and $\tilde{G}$ induced by the cell $V_i$, respectively, for $i=1,\ldots,n$. Assume this partition to be equitable for $G$ and almost equitable for $\tilde{G}$. Let $E\subset \tilde{E}$ with
\[
\tilde{E}\setminus E=\bigcup_{i=1}^n (\tilde{E_i}\setminus E_i)
\]
(i.e., the edge sets can only differ within cells) and denote by $R$ the adjacency matrix corresponding to a graph with $\tilde{E}\setminus E$ as edge set. Finally, let us denote by $G_i^C$ the graph with edge set $\tilde{E_i}\setminus E_i$ and whose node set is simply the set of endpoints of its edges (i.e., no further isolated nodes).
\begin{enumerate}
\item If $\Delta(G_i^C)$ denotes the maximal degree in $G_i^C$, $i=1, \dots, n$,
then
\begin{equation*}
\lambda_1(R)\leq \max_{1\le i\le n} \min \left\{\sqrt{\frac{2e_i(k_i-1)}{k_i}}, \Delta(G_i^C)\right\}\ ,
\end{equation*}
where $e_i$ is the number of edges added to $G_i$, i.e., $e_i = (|\tilde E_i| - |E_i|)$, and $k_i$ is the number of nodes in $V_i$.
\item If additionally $G^C_i$ is connected for each $i=1, \dots, n$, then
\begin{equation*}
\lambda_1(R)\leq \max_{1\le i\le n} \min \left\{\sqrt{2e_i-k'_i+1}, \Delta(G_i^C)\right\}\ ,
\end{equation*}
\end{enumerate}
where $k'_i$ is the number of nodes of $G_i^C$.
\end{proposition}
\begin{proof}
(1) By assumption, $R$ is a diagonal block matrix whose blocks $R_i$ are the adjacency matrices of the induced subgraphs $G_i^C$. Thus, $\lambda_1(R)$ is the maximum of all spectral radii $\lambda_1(R_i)$. On the other hand, one has by~\cite[(3.45)]{Graph} that \begin{equation*}
\lambda_1(R_i) \leq \min \left\{\sqrt{\frac{2e_i(k_i-1)}{k_i}}, \Delta(G_i^C)\right\},
\end{equation*}
and the claim follows.\\
(2) By Gershgorin's theorem, the spectral radius of an adjacency matrix \rev{of a graph without loops} is never larger than the graph's maximal degree, i.e., $\lambda_1(R_i)\le \Delta(G_i^C)$.
By assumption, there exists a permutation of the vertices in $V_i$ such that the matrix $R_i$ has the form
\[
R_i=
\begin{bmatrix}
R'_{i} & \textbf{0} \\
\textbf{0} & \textbf{0} \\
\end{bmatrix}
\]
where $R'_i$ is the adjacency matrix of a connected graph with $k_i'$ nodes
(i.e., the block $R'_i$ has dimension $k'_i \times k'_i$).
Now, we deduce from~\cite[art.~50]{Graph} that
\begin{equation*}
\lambda_1(R'_i) \leq \sqrt{2e_i -k'_i+1},
\end{equation*}
and since $\lambda_1(R_i)=\lambda_1(R'_i)$, the statement follows.
\end{proof}
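Part (1) of Proposition~\ref{propprop} is easy to probe numerically on a single random block $R_i$; the Python sketch below generates a random graph (parameters are arbitrary) and checks the stated bound:

```python
import numpy as np

rng = np.random.default_rng(1)

def check_block_bound(k, p):
    # Random adjacency matrix of a graph on k nodes (a stand-in for one block R_i)
    U = (rng.random((k, k)) < p).astype(float)
    R = np.triu(U, 1)
    R = R + R.T
    e = R.sum() / 2                      # number of edges e_i
    Delta = R.sum(axis=1).max()          # maximal degree Delta(G_i^C)
    lam1 = np.linalg.eigvalsh(R).max()   # spectral radius lambda_1(R_i)
    bound = min(np.sqrt(2 * e * (k - 1) / k), Delta)
    return lam1, bound

lam1, bound = check_block_bound(25, 0.15)
```

Both ingredients of the bound are classical (the square-root estimate from~\cite[(3.45)]{Graph} and the maximal-degree estimate from Gershgorin), so the check should hold for any instance.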
By using estimate \eqref{lowbound} and Proposition~\ref{propprop} in the first and the second term on the right hand side of~\eqref{IneqA}, respectively, we deduce
\begin{equation}\label{upperAlmost}
\lambda_1(\tilde{A}) \leq
\max_{1\le i\le n} \lambda_1(C_{V_i})
+ \lambda_1(\widehat{B}) + \max_{1\le i\le n} \min \left\{\sqrt{\frac{2e_i(k_i-1)}{k_i}}, \Delta(G_i^C)\right\}.
\end{equation}
The inequality in (\ref{upperAlmost}) gives us a lower bound for the epidemic threshold in the case of a graph whose partition of the node set is almost equitable. Indeed,
\begin{equation}\label{boundAE}
\tau_c^{(1)}=\frac{1}{\lambda_1(\tilde{A})} \ge \tau^\star =
\frac{1}{\max\limits_{1\le i\le n} \lambda_1(C_{V_i}) + \lambda_1(\widehat{B})+ \max\limits_{1\le i\le n} \min \left\{\sqrt{\frac{2e_i(k_i-1)}{k_i}}, \Delta(G_i^C)\right\}}.
\end{equation}
Now let us consider the case where edges are removed inside the communities of a network whose node set has an equitable partition. Since the spectral radius of an adjacency matrix is monotonically non-increasing under the deletion of edges, we have
$$\lambda_1(\tilde{A}) \leq \lambda_1(A)$$
whence
\begin{equation*}\label{boundAEdel}
\frac{1}{\lambda_1(\tilde{A})} \geq \frac{1}{\lambda_1(A)} \geq \min_{i} \frac{1}{d_{ii} + \lambda_1(\widehat{B})}.
\end{equation*}
The bounds developed so far support the design of community networks with a safety region for the effective spreading rate that guarantees the extinction of
epidemics.
E.g., given some graphs $G_i$, $i=1,\ldots,n$, it is possible to connect them in such a way as to form a graph $\tilde{G}=(V,\tilde{E})$ with an almost equitable partition. Now, any subgraph obtained from $\tilde{G}$ by removing edges inside the communities will have a smaller spectral radius than $\tilde{G}$, and consequently a larger epidemic threshold. Thus the lower bound in \eqref{boundAE} still holds.
\section{Conclusion}
\rev{In this work we have discussed the relation between the epidemic threshold of a given graph with equitable partitioning of its node set, and the spectral properties of the corresponding quotient matrix.} Because the quotient matrix $Q$ has the same spectral radius as $A$, \rev{this may lead to a significant computational advantage in the calculation} of $\lambda_1(A)$ and, consequently, of $\tau_c^{(1)}$, since the order of $Q$ is smaller than that of $A$.
\rev{A novel expression has been derived for the lower bound on $\tau_c^{(1)}$ as function of network metrics, e.g., the maximum among the internal degrees of the nodes over all communities. In practice this value can be adopted to determine a safety region for the extinction of epidemics, i.e., by forcing the effective spreading rate below the lower bound; it can be also useful in order to design new network architectures
robust to long-term, massive infections.}
\rev{In the analysis, we have shown} that it is possible to reduce the number of equations representing the time evolution of infection probabilities using the quotient matrix $Q$, when all nodes belonging to the same community have the same initial conditions. After proving the existence of a positively invariant set for the original system of $N$ differential equations, we have shown that the non-zero steady-state infection probabilities \rev{belong to this invariant set, and can be computed through the reduced system of $n$ equations}.
Finally, we have also considered the case when the partition is almost equitable. \rev{An input graph whose partition is equitable can be perturbed, by adding or removing edges inside communities, in order to obtain a graph with an almost equitable partition. A lower bound for the epidemic threshold
has been derived, and the effect of perturbations of the communities' structure has been explored.}
\subsection*{Acknowledgments}
The authors would like to thank Piet Van Mieghem for providing interesting and useful comments to an
early draft of this work. \rev{The helpful comments of two anonymous reviewers are also
gratefully acknowledged.}
\section{Introduction}
\label{sec:intro}
The (binary) deletion channel accepts bits as inputs,
and deletes each transmitted bit independently with probability $d$.
Computing or providing systematic approximations to its capacity is
one of the outstanding problems in information theory
\cite{MitzenmacherReview}. An important
motivation comes from the need to understand synchronization errors and
optimal ways to cope with them.
In this paper we suggest a new approach. We demonstrate that capacity
can be computed in a series expansion for small deletion probability,
by computing the first two orders of such an expansion. Our main result
is the following.
\begin{thm}\label{thm:main_theorem}
Let $C(d)$ be the capacity of the deletion channel with deletion
probability $d$. Then, for small $d$ and
any ${\epsilon}>0$,
\begin{align}
C(d)=1+ d\log d - A_1\, d+ O(d^{3/2-\epsilon}) \, ,\label{eq:MainFormula}
\end{align}
where $A_1\equiv \log(2e)-\sum_{l=1}^\infty 2^{-l-1}l\log l$.
Further, the iid Bernoulli$(1/2)$ process achieves capacity up to
corrections of order $O(d^{3/2-\epsilon})$.
\end{thm}
Logarithms here (and in the rest of the paper) are understood to be
in base $2$.
The constant $A_1$ can be easily evaluated to yield
$A_1\approx 1.154163765$.
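The constant $A_1$ and the resulting expansion are straightforward to evaluate numerically; the Python sketch below truncates the series at a few hundred terms, which already reproduces the quoted digits (the truncation length is an implementation choice, not from the paper):

```python
import math

def A1(terms=200):
    """Evaluate A_1 = log2(2e) - sum_{l>=1} 2^{-l-1} * l * log2(l);
    the tail beyond ~200 terms is far below double precision."""
    s = sum(2.0 ** (-l - 1) * l * math.log2(l) for l in range(1, terms + 1))
    return math.log2(2 * math.e) - s

def capacity_expansion(d):
    # First-order small-d expansion: C(d) ~ 1 + d*log2(d) - A_1*d
    return 1.0 + d * math.log2(d) - A1() * d
```

For instance, at $d=0.1$ the expansion gives roughly $0.5524$ bits, the value compared against the bounds of \cite{Drinea07} in Fig.~\ref{fig_bd}.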
While one might be skeptical about the concrete meaning
of asymptotic expansions of the type (\ref{eq:MainFormula}),
they often prove surprisingly accurate. For instance, at $10\%$ deletion
probability,
Eq.~(\ref{eq:MainFormula}) is off the best lower bound
proved in \cite{Drinea07} by about $0.010$ bits.
More importantly, they provide useful design insight. For instance, the above
result shows that Bernoulli$(1/2)$ is an excellent starting point for
the optimal input distribution. The next terms in the expansion indicate
how to systematically modify the input distribution for $d>0$
\cite{InProgress}.
\begin{figure}
\centering
\includegraphics[width=3.in]{bd}
\caption{Comparison of the asymptotic formula (\ref{eq:MainFormula})
(continuous line) with upper bounds from \cite{Fertonani09}
(stars $\ast$) and lower bounds from \cite{Drinea07} (squares, $\Box$). The $O(d^{3/2-{\epsilon}})$ term in (\ref{eq:MainFormula}) was simply dropped.}\label{fig_bd}
\vspace{-0.3cm}
\end{figure}
We think the strategy adopted here might be useful in other
information theory problems. The underlying philosophy is
that whenever capacity is known for a specific value of
the channel parameter, and the corresponding
optimal input distribution is unique and well characterized,
it should be possible to compute an asymptotic expansion around that value.
Here the special channel is the perfect channel,
i.e. the deletion channel with deletion probability $d=0$. The
corresponding input distribution is the iid Bernoulli$(1/2)$ process.
\subsection{Related work}
Dobrushin \cite{Dobrushin} proved a coding theorem for the deletion channel,
and other channels with synchronization errors. He showed that the maximum
rate of reliable communication is given by the maximal mutual information per
bit, and proved that this can be achieved through a random coding scheme.
This characterization has so far found limited use in proving concrete
estimates.
An important exception is provided by the work of Kirsch and Drinea
\cite{KirschDrinea}, who use Dobrushin's coding theorem to prove
lower bounds on the capacity of channels with deletions and
duplications.
We will also use Dobrushin's theorem in a crucial way, although
most of our effort will be devoted to proving upper bounds on the capacity.
Several capacity bounds have been developed over the last few years,
following alternative approaches, and are surveyed in
\cite{MitzenmacherReview}.
In particular, it has been proved that $C(d)=\Theta(1-d)$
as $d\to 1$. However determining the asymptotic behavior in
this limit (i.e. finding a constant $B_1$ such that
$C(d) = B_1(1-d)+o(1-d)$) is an open problem.
When applied to the small $d$ regime, none of the known upper bounds
actually captures the correct behavior (\ref{eq:MainFormula}).
As we show in the present paper, this behavior can be
controlled exactly.
When this paper was nearing submission, a preprint
by Kalai, Mitzenmacher and Sudan \cite{KMS} was posted online,
proving a statement analogous to Theorem \ref{thm:main_theorem}.
The result of \cite{KMS} is however not the same as in Theorem
\ref{thm:main_theorem}: only the $d\log d$ term of the series
is proved in \cite{KMS}. Further,
the two proofs are based on very different approaches.
\section{Preliminaries}
For the reader's convenience, we restate here some known results
that we will use extensively, along with some definitions
and auxiliary lemmas.
Consider a sequence of channels $\{W_n\}_{n\ge 1}$, where $W_n$ allows exactly
$n$ inputs bits, and deletes each bit independently with probability $d$.
The output of $W_n$ for input $X^n$ is a binary vector denoted by $Y(X^n)$.
The length of $Y(X^n)$ is a binomial random variable.
We want to find the maximum rate at which we can send information
over this sequence of channels
with vanishingly small error probability.
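For concreteness, one use of the channel $W_n$ can be simulated by dropping each input bit independently; the Python sketch below is a direct transcription of the definition (the input length, deletion probability, and seed are arbitrary):

```python
import random

def deletion_channel(x, d, rng):
    """Pass the bit string x through a deletion channel: each bit is
    deleted independently with probability d; survivors keep their order."""
    return [b for b in x if rng.random() >= d]

rng = random.Random(0)
n, d = 1000, 0.1
x = [rng.randrange(2) for _ in range(n)]   # iid Bernoulli(1/2) input
y = deletion_channel(x, d, rng)            # output length ~ Binomial(n, 1-d)
```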
The following characterization follows from \cite{Dobrushin}.
\begin{thm}\label{lemma:cap_limit}
Let
\begin{align}
C_n= \frac{1}{n}\max_{p_{X^n}}\, I(X^n; Y(X^n)) \, .
\end{align}
Then, the following limit exists
\begin{align}
C=\lim_{n\rightarrow \infty}C_n = \inf_{n\ge 1} C_n\, ,
\label{eq:cap_defined}
\end{align}
and is equal to the capacity of the deletion channel.
\end{thm}
\begin{IEEEproof}
This is just a reformulation of Theorem 1 in \cite{Dobrushin},
to which we add the remark $C = \inf_{n\ge 1} C_n$, which is of independent
interest.
In order to prove this fact, consider the channel $W_{m+n}$,
and let $X^{m+n}= (X_1^m,X_{m+1}^{m+n})$ be its input.
The channel $W_{m+n}$ can be realized as follows.
First the input is passed through a channel
$\widetilde{W}_{m+n}$ that introduces deletions independently in the two strings
$X_{1}^m$ and $X_{m+1}^{m+n}$ and outputs
$\widetilde{Y}(X_1^{m+n})\equiv (Y(X_{1}^m),|,Y(X_{m+1}^{m+n}))$
where $|$ is a marker. Then the marker is removed.
This construction proves that $W_{m+n}$ is physically degraded
with respect to $\widetilde{W}_{m+n}$, whence
\begin{eqnarray*}
(m+n)C_{m+n}&\le &\max_{p_{X^{m+n}}} I(X^{m+n};\widetilde{Y}(X_{1}^{m+n}))\\
&\le & mC_m+nC_n\, .
\end{eqnarray*}
Here the last inequality follows from the fact that $\widetilde{W}_{m+n}$ is the product
of two independent channels, and hence the mutual information is maximized
by a product input distribution.
Therefore the sequence $\{nC_n\}_{n\ge 1}$ is sub-additive, and
the claim follows
from Fekete's lemma.
\end{IEEEproof}
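Fekete's lemma, invoked in the last step, can be illustrated with a toy subadditive sequence unrelated to the channel: if $a_{m+n}\le a_m+a_n$ then $\lim_n a_n/n$ exists and equals $\inf_n a_n/n$.

```python
import math

# Toy subadditive sequence a_n = n + sqrt(n):
# sqrt(m+n) <= sqrt(m) + sqrt(n), hence a_{m+n} <= a_m + a_n.
def a(n):
    return n + math.sqrt(n)

# Check subadditivity on a small range.
for m in range(1, 60):
    for n in range(1, 60):
        assert a(m + n) <= a(m) + a(n) + 1e-12

# By Fekete's lemma, a_n / n converges to inf_n a_n / n (here, 1).
ratios = [a(n) / n for n in range(1, 2001)]
print(ratios[0], ratios[-1])  # 2.0 and roughly 1.022
```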
A last useful remark is that, in computing capacity, we can
assume $(X_1,\dots,X_n)$ to be $n$ consecutive coordinates of a stationary
ergodic process.
\begin{lemma}\label{lemma:stationary_suffices}
Let $\mathbb{X}= \{X_i\}_{i\in{\mathbb Z}}$ be a stationary and ergodic
process, with $X_i$ taking values in $\{0,1\}$. Then the limit
$I(\mathbb{X})=\lim_{n \rightarrow \infty} \frac{1}{n}I(X^n; Y(X^n))$ exists and
\begin{align}
C=\max_{\mathbb{X}\; \textup{stat. erg.}} I(\mathbb{X})\,.
\end{align}
\end{lemma}
\begin{IEEEproof}
Take any stationary $\mathbb{X}$, and let $I_n= I(X^n; Y(X^n))$. Notice that
$Y(X_1^n)-X_1^n-X_{n+1}^{n+m}-Y(X_{n+1}^{n+m})$ form a Markov chain.
Define $\widetilde{Y}(X^{n+m})$ as in the proof of Theorem \ref{lemma:cap_limit}.
As before we have $I_{m+n} \leq I(X^{m+n};\widetilde{Y}(X^{m+n})) \leq I(X_1^m;Y(X_1^m)) + I(X_{m+1}^{m+n};Y(X_{m+1}^{m+n})) = I_m+I_n$
(the last identity follows by
stationarity of $\mathbb{X}$). Thus $I_{m+n}\leq I_n+I_m$ and the limit
$\lim_{n\to\infty}I_n/n$
exists by Fekete's lemma, and is equal to $\inf_{n\geq 1}I_n/n$.
Clearly, $I_{n} \leq nC_n$ for all $n$.
Fix any ${\varepsilon}>0$. We will construct a process $\mathbb{X}$ such that
\begin{align}
I_{N}/N\geq C - {\varepsilon} \qquad \forall \; N>N_0({\varepsilon})\, ,
\label{eq:liminf_closeto_limsup}
\end{align}
thus proving our claim.
Fix $n$ such that
$C_n \geq C - {\varepsilon}/2$. Construct $\mathbb{X}$ with
iid blocks of length $n$ with common distribution $p^*(n)$ that
achieves the supremum in the definition of $C_n$.
In order to make this process stationary,
we make the first complete block to the
right of position $0$ start at position
$s$, chosen uniformly at random in $\{1,2,\dots,n\}$.
We call the position $s$ the offset.
The resulting process is clearly stationary and ergodic.
Now consider $N=kn+r$ for some $k \in {\mathbb N}$
and $r \in \{0, 1, \ldots, n-1\}$. The vector $X_1^N$ contains
at least $k-1$ complete blocks of size $n$, call them $X(1), X(2),
\ldots, X(k-1)$ with $X(i) \sim p^*(n)$. The block $X(1)$
starts at position $s$. There will be a further $r+n-s+1$ bits
at the end, so that $X_1^N=(X_1^{s-1}, X(1), X(2), \ldots, X(k-1),
X_{s+(k-1)n}^N)$.
Abusing notation, we write $Y(i)$ for $Y(X(i))$.
Given the output $Y$, we define
$\widetilde{Y}= ( Y(X_1^{s-1})| Y(1) | Y(2) | \ldots |Y(k-1)|Y(X_{s+(k-1)n}^N))$,
by introducing $k$ synchronization symbols $|$.
There are at most $(n+1)^k$ possibilities for $\widetilde{Y}$ given $Y$
(corresponding to potential placements of synchronization symbols).
Therefore we have
\begin{align*}
H(Y) &= H(\widetilde{Y}) - H(\widetilde{Y}|Y)\\
&\geq H(\widetilde{Y}) - \log((n+1)^k)\\
&\geq (k-1)H(Y(1)) - k\log(n+1)\, ,
\end{align*}
where we used the fact that the $(X(i),Y(i))$'s are iid.
Further
\begin{align*}
H(Y|X^N) \leq H(\widetilde{Y}|X^N) \leq (k-1)H(Y(1)|X(1)) + 2n\, ,
\end{align*}
where the last term accounts for bits outside the blocks.
We conclude that
\begin{align*}
I(X^N;Y(X^N)) &= H(Y) - H(Y|X^N)\\
&\geq (k-1)nC_n - k \log(n+1) - 2n\\
&\geq N(C_n - {\varepsilon}/2) \, ,
\end{align*}
provided $\log(n+1) /n < {\varepsilon} / 10 $ and $N > N_0\equiv 10n/{\varepsilon}$.
Since $C_n\ge C-{\varepsilon}/2$, this in turn implies
Eq.~(\ref{eq:liminf_closeto_limsup}).
\end{IEEEproof}
\section{Proof of the main theorem: Outline}
\label{sec:outline}
In this section we provide the proof of Theorem \ref{thm:main_theorem}.
We defer the proof of several technical lemmas to the next section.
The first step consists in proving achievability by
estimating $I(\mathbb{X})$ for the iid Bernoulli$(1/2)$ process.
\begin{lemma}\label{lemma:iidhalf}
Let $\mathbb{X}^*$ be the iid {\rm Bernoulli}$(1/2)$ process.
For any ${\epsilon} >0$, we have
\begin{align}
I(\mathbb{X}^*) = 1+ d\log d - A_1\, d + O(d^{2-{\epsilon}})\, .
\end{align}
\end{lemma}
Lemma \ref{lemma:stationary_suffices} allows us to restrict our
attention to stationary ergodic processes in proving the converse.
In light of Lemma \ref{lemma:iidhalf}, we can further restrict
consideration to processes $\mathbb{X}$ satisfying $I(\mathbb{X}) > 1+2d\log d$
and hence $H(\mathbb{X}) > 1+2d\log d$ (here and below, for a process
$\mathbb{X}$, we denote by $H(\mathbb{X})$ its \emph{entropy rate}).
Given a (possibly infinite) binary sequence,
a \emph{run} of $0$'s (of $1$'s) is a maximal subsequence of consecutive $0$'s
($1$'s), i.e., a subsequence of $0$'s bordered by $1$'s
(respectively, of $1$'s bordered by $0$'s).
Denote by $\mathcal{S}$ the set of all stationary ergodic processes and
by $\mathcal{S}_L$ the set of stationary ergodic processes such
that, with probability one, no run has length larger than $L$.
The next lemma shows that we do not lose much by restricting
ourselves to $\mathcal{S}_{L^*}$ for large enough $L^*$.
\begin{lemma}\label{lemma:small_loss_by_restricting_runs}
For any ${\epsilon}>0$ there exists $d_0=d_0({\epsilon})>0$
such that the following happens for all $d < d_0$.
For any $\mathbb{X} \in \mathcal{S}$ such that $H(\mathbb{X}) > 1 + 2d \log d$ and for any
$L^* > \log (1/d)$, there exists
$\mathbb{X}_{L^*} \in \mathcal{S}_{L^*}$ such that
\begin{align}
I(\mathbb{X}) \leq I(\mathbb{X}_{L^*}) + d^{1/2-\epsilon}(L^*)^{-1}\log L^*\, .
\end{align}
\end{lemma}
We are left with the problem of bounding $I(\mathbb{X})$ from above for all
$\mathbb{X} \in \mathcal{S}_{L^*}$. The next lemma establishes such a bound.
\begin{lemma}\label{lemma:converse_for_restricted_runs}
For any ${\epsilon}>0$ there exists $d_0=d_0({\epsilon})>0$ such that the following
happens.
For any $L^* \in \mathbb{N}$ and any $\mathbb{X} \in \mathcal{S}_{L^*}$
if $d < d_0(\epsilon)$, then
\begin{align}
I(\mathbb{X}) \leq 1+ d\log d - A_1d + d^{2-\epsilon}(1+d^{1/2}L^*)\, .
\end{align}
\end{lemma}
\begin{IEEEproof}[Proof of Theorem \ref{thm:main_theorem}]
Lemma \ref{lemma:iidhalf} shows achievability.
The converse follows from Lemmas \ref{lemma:small_loss_by_restricting_runs}
and \ref{lemma:converse_for_restricted_runs} with
$L^*= \lfloor 1/d \rfloor$.
\end{IEEEproof}
\section{Proofs of the Lemmas}
In Section \ref{subsec:run_charac} we characterize any stationary
ergodic $\mathbb{X}$ in terms of its `bit perspective' and `block perspective'
run-length distributions, and show that these distributions must be
close to the distributions obtained for the iid Bernoulli$(1/2)$ process.
In Section \ref{subsec:modified_deletion} we construct a modified
deletion process that allows accurate estimation of $H(Y|X^n)$ in
the small $d$ limit. Finally, in Section \ref{subsec:lemma_proofs}
we present proofs of the Lemmas quoted in Section \ref{sec:outline}
using the tools developed.
We will often write $X_{a}^b$ for the random vector
$(X_a,X_{a+1},\dots, X_b)$ where the $X_i$'s are distributed according
to the process $\mathbb{X}$.
\subsection{Characterization in terms of runs}
\label{subsec:run_charac}
Consider a stationary ergodic process $\mathbb{X}$.
Without loss of generality we can assume that almost surely all runs
have finite length (by ergodicity and stationarity this only excludes the
constant $0$ and constant $1$ processes).
Let $L_0$ be the length of the
run containing position $0$ in $\mathbb{X}$.
Let $L_1$ be the length of
first run to occur to the right of position $0$ in $\mathbb{X}$ and,
in general, let $L_i$ be the length of the $i$-th run to the right of
position $0$. Let $p_{L,\mathbb{X}}$
denote the limit of the empirical distribution of $L_1, L_2, \ldots,L_K$,
as $K\to\infty$. By ergodicity
$p_{L,\mathbb{X}}$ is a well defined probability
distribution on ${\mathbb N}$.
We call $p_{L,\mathbb{X}}$ the \emph{block-perspective} run length distribution
for obvious reasons, and use $L$ to denote a random variable
drawn according to $p_{L,\mathbb{X}}$.
It is not hard to see that,
for any $l\ge 1$,
\begin{align}
{\mathbb P}(L_0= l)=\frac{lp_{L,\mathbb{X}}(l)}{\mathbb{E}[L]} \, .
\end{align}
In other words $L_0$ is distributed according to the size biased
version of $p_{L,\mathbb{X}}$.
We call this the \emph{bit perspective} run length distribution,
and shall often drop the subscript $\mathbb{X}$ when clear from the context.
Notice that since $L_0$ is well defined and almost surely finite,
we have $\mathbb{E}[L] < \infty$. It follows that the empirical distribution of
run lengths in
$X_1^n$ also converges to $p_{L,\mathbb{X}}$ almost surely, since
the first and last run do not matter in the limit.
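The size-biasing relation between $p_{L,\mathbb{X}}$ and the law of $L_0$ can be checked by simulation. The sketch below (helper names are ours) samples iid geometric runs with $p_L(l)=2^{-l}$, so the empirical law of the run covering a uniformly random bit should approach $l\,2^{-(l+1)}$.

```python
import bisect
import itertools
import random
from collections import Counter

rng = random.Random(42)

def geometric_half(rng):
    """Sample L with P(L = l) = 2^{-l}, l >= 1."""
    l = 1
    while rng.random() < 0.5:
        l += 1
    return l

# Lengths of many iid runs with block-perspective law p_L(l) = 2^{-l}.
runs = [geometric_half(rng) for _ in range(200_000)]
prefix = list(itertools.accumulate(runs))
total = prefix[-1]

# The run containing a uniformly random bit is a size-biased sample.
counts = Counter()
n_samples = 200_000
for _ in range(n_samples):
    pos = rng.randrange(total)
    i = bisect.bisect_right(prefix, pos)  # run covering position pos
    counts[runs[i]] += 1

# Bit-perspective law: P(L_0 = l) = l / 2^{l+1}, e.g. P(L_0 = 1) = 1/4.
print(counts[1] / n_samples, counts[2] / n_samples)
```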
If $L_0^+,L_1,\dots, L_{K_n}$ are the run lengths in the block
$X_0^n$, it is clear that $H(X_0^n) \le 1+H(L_1,\dots,L_{K_n},K_n)$
(where one bit is needed to remove the $0,1$ ambiguity).
By ergodicity $K_n/n\to 1/\mathbb{E}[L]$ almost surely as $n\to\infty$.
This also implies $H(K_n)/n\to 0$.
Further,
$\limsup_{n\rightarrow \infty} H(L_1,\dots,L_{K_n})/n \le \lim_{n\rightarrow \infty}
H(L) K_n/n = H(L)/\mathbb{E}[L] $.
If $H(\mathbb{X})$ is the entropy rate of the process $\mathbb{X}$,
by taking the $n\to\infty$ limit, it is easy to deduce that
\begin{align}
H(\mathbb{X}) \leq \frac{ H(L) }{\mathbb{E}[L]} \, ,
\label{eq:run_hx_upper_bd}
\end{align}
with equality if and only if $\mathbb{X}$ consists of iid runs with common
distribution $p_L$.
For convenience of notation, define $\mu(\mathbb{X}) \equiv \mathbb{E}[L]$.
We know that given $\mathbb{E}[L]=\mu$, the probability distribution
with largest possible entropy $H(L)$ is geometric with mean $\mu$, i.e.
$p_L(l) = (1-1/\mu)^{l-1}1/\mu $ for all $ l \geq 1$, leading to
\begin{align}
\frac{H(L)}{\mathbb{E}[L]} \leq -\big(1 - \frac{1}{\mu}\big) \log
\big(1 - \frac{1}{\mu}\big) -
\frac{1}{\mu}\log \frac{1}{\mu} \equiv h(1/\mu) \, .
\label{eq:BoundMu}
\end{align}
Here we introduced the notation $h(p) = -p \log p - (1-p) \log(1-p)$
for the binary entropy function.
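The bound $H(L)/\mathbb{E}[L]\le h(1/\mu)$ can be spot-checked numerically. In the sketch below (our own helper names, ad hoc distributions) the geometric law with mean $2$ attains the bound with equality, while another mean-$2$ law falls strictly below.

```python
import math

def h(p):
    """Binary entropy (base 2)."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def ratio(p):
    """Return (H(L)/E[L], E[L]) for a law p on {1, 2, ...}
    given as a list with p[0] = P(L = 1)."""
    H = -sum(q * math.log2(q) for q in p if q > 0)
    mu = sum((l + 1) * q for l, q in enumerate(p))
    return H / mu, mu

# Geometric with mean 2 attains the bound: H(L)/E[L] = h(1/2) = 1.
geo = [2.0 ** -(l + 1) for l in range(60)]
r, mu = ratio(geo)
print(r, h(1 / mu))  # both close to 1

# Another law with mean 2 (support {1, 3}) falls strictly below.
other = [0.5, 0.0, 0.5]
r2, mu2 = ratio(other)
print(r2, h(1 / mu2))
```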
In light of Lemma \ref{lemma:iidhalf} we can
restrict ourselves to $H(\mathbb{X}) > 1+2\, d\log d$.
Using this, we are able to obtain sharp bounds on $p_L$ and
$\mu(\mathbb{X})$.
\begin{lemma}
There exists $d_0>0$
such that, for any $\mathbb{X}\in\mathcal{S}$ with $H(\mathbb{X}) > 1 + 2d \log d$,
\begin{align}
|\mu(\mathbb{X}) - 2 | \leq \sqrt{100 \; d \log (1/d)} \, .
\end{align}
for all $d < d_0$.
\label{lemma:mean_closeto2}
\end{lemma}
\begin{IEEEproof}
By Eqs. (\ref{eq:run_hx_upper_bd}) and (\ref{eq:BoundMu}),
we have $h(1/\mu)\ge 1+2 d\log d$. By Pinsker's inequality
$h(p)\le 1-(1-2p)^2/(2\ln 2)$, and therefore
$|1-(2/\mu)|^2\le (4\ln 2)d\log(1/ d)$.
The claim follows from simple calculus.
\end{IEEEproof}
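The inequality $h(p)\le 1-(1-2p)^2/(2\ln 2)$ used above can be spot-checked on a grid (a standalone numerical sketch):

```python
import math

def h(p):
    """Binary entropy (base 2)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# h(p) <= 1 - (1-2p)^2 / (2 ln 2) on (0, 1), with equality at p = 1/2.
for k in range(1, 1000):
    p = k / 1000
    assert h(p) <= 1 - (1 - 2 * p) ** 2 / (2 * math.log(2)) + 1e-12

print(h(0.5))  # 1.0, where the bound is tight
```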
\begin{lemma}\label{lemma:L_TV}
There exists $K' < \infty$ and $d_0 >0$ such that, for any
$\mathbb{X}\in\mathcal{S}$ with $H(\mathbb{X}) > 1 + 2d \log d$, and any $d<d_0$,
\begin{align}
\sum_{l=1}^\infty \left|p_L(l) - \frac{1}{2^l} \right| \leq
K'\sqrt{ d \log (1/d)}\, .
\end{align}
\end{lemma}
\begin{IEEEproof}
Let $p_L^*(l) = 1/2^l, \ l \geq 1$ and recall that
$\mu(\mathbb{X}) =\mathbb{E}[L]=\sum_{l\ge 1}
p_L(l)l$. An explicit calculation yields
\begin{align}
H(p_L)= \mu(\mathbb{X})- D(p_L || p_L^*) \, .
\label{eq:Lentropy}
\end{align}
Now, by Pinsker's inequality,
\begin{align}
D(p_L || p_L^*) \geq \frac{2}{\ln 2}||p_L-p_L^*||_{\rm TV}^2 \, .
\label{eq:pinsker}
\end{align}
Combining Lemma \ref{lemma:mean_closeto2},
and Eqs.~(\ref{eq:run_hx_upper_bd}), (\ref{eq:Lentropy})
and (\ref{eq:pinsker}), we
get the desired result.
\end{IEEEproof}
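The identity (\ref{eq:Lentropy}) is exact, as a quick numerical check confirms for an arbitrary finitely supported run-length law (the values below are ad hoc):

```python
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def kl(p, q):
    return sum(pl * math.log2(pl / q[l]) for l, pl in p.items() if pl > 0)

# Arbitrary run-length distribution on {1, ..., 5}.
p = {1: 0.4, 2: 0.3, 3: 0.15, 4: 0.1, 5: 0.05}
pstar = {l: 2.0 ** -l for l in p}        # reference law p*(l) = 2^{-l}

mu = sum(l * pl for l, pl in p.items())  # E[L]
lhs = entropy(p)
rhs = mu - kl(p, pstar)                  # H(p_L) = mu - D(p_L || p*)
print(lhs, rhs)  # equal up to rounding
```

The identity holds because $-\log_2 p^*_L(l)=l$, so $D(p_L\|p^*_L)=\mathbb{E}[L]-H(p_L)$ term by term.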
\begin{lemma}\label{lemma:L0_TV}
There exists $K'' < \infty$ and $d_0 >0$ such that, for any
$\mathbb{X}\in\mathcal{S}$ with $H(\mathbb{X}) > 1 + 2d \log d$, and any $d<d_0$,
\begin{align}
\sum_{l=1}^\infty\left|{\mathbb P}(L_0=l) - \frac{l}{2^{l+1}} \right| \leq K''\sqrt{ d (\log (1/d))^3}\, .
\end{align}
\end{lemma}
\begin{IEEEproof}
Let $l_0 = \lfloor -\log(K'\sqrt{ d \log (1/d)}) \rfloor$. It follows from Lemma \ref{lemma:L_TV}
that
\begin{align}
\sum_{l=1}^{l_0} \left|p_L(l) - \frac{1}{2^l} \right| \leq
K'\sqrt{ d \log (1/d)}\, ,
\end{align}
which in turn implies
\begin{align}
\sum_{l=0}^{l_0} l p_L(l) \geq \sum_{l=0}^{l_0-1} \frac{l}{2^l}\, .
\label{eq:bound_firstl0_lpl}
\end{align}
Summing the geometric series, we find that there exists a constant
$K_1<\infty$ such that
\begin{align}
\sum_{l=l_0}^{\infty} \frac{l}{2^l} = (l_0+1) 2^{1-l_0} \le
K_1 \sqrt{d (\log (1/ d))^3}\, .
\label{eq:bound_afterl0_lby2tol}
\end{align}
Using the identity $\sum_{l=0}^{\infty} l\, 2^{-l}=2$, together
with Eqs.~(\ref{eq:bound_firstl0_lpl}) and (\ref{eq:bound_afterl0_lby2tol}),
we get
\begin{align}
\sum_{l=0}^{l_0} l p_L(l) \geq 2-K_1 \sqrt{d (\log (1/d))^3}\, .
\end{align}
Combining this result with
Lemma \ref{lemma:mean_closeto2}, we conclude (eventually enlarging the constant
$K_1$)
\begin{align}
\sum_{l=l_0+1}^{\infty} l p_L(l) \leq 2K_1 \sqrt{d (\log (1/d))^3}\, .
\end{align}
Using this result together with
Eq.~(\ref{eq:bound_afterl0_lby2tol}), we get
\begin{align}
\sum_{l=l_0+1}^{\infty}| l p_L(l) -\frac{l}{2^l}| \leq 4K_1 \sqrt{d (\log (1/d))^3}\, .
\label{eq:nonnormalized_L0_TV_afterl0}
\end{align}
From a direct application of Lemma \ref{lemma:L_TV} it follows that
there exists a constant $K_2<\infty$, such that
\begin{align}
\sum_{l=1}^{l_0}\Big| l p_L(l) -\frac{l}{2^l}
\Big| \leq K_2 \sqrt{d (\log (1/d))^3}\, .
\label{eq:nonnormalized_L0_TV_beforel0}
\end{align}
Therefore, summing Eqs.~(\ref{eq:nonnormalized_L0_TV_beforel0}) and
(\ref{eq:nonnormalized_L0_TV_afterl0}) we get
\begin{align}
\sum_{l=1}^{\infty}\Big| \frac{l p_L(l)}{2} -\frac{l}{2^{l+1}}\Big| \leq
2(K_1+K_2) \sqrt{d (\log (1/d))^3}\, .
\label{eq:nonnormalized}
\end{align}
We know that ${\mathbb P}(L_0=l) = lp_L(l) / \mu(\mathbb{X})$. The proof is completed by
using Eq.~(\ref{eq:nonnormalized}) and bounding $\mu(\mathbb{X})$ with
Lemma \ref{lemma:mean_closeto2}.
\end{IEEEproof}
\subsection{A modified deletion process}
\label{subsec:modified_deletion}
We define an auxiliary sequence of channels
$\widehat{W}_n$, whose output, denoted by $\widehat{Y}(X^n)$,
is obtained by modifying the deletion channel output in the following way.
If an `extended run' (i.e. a run $\mathcal{R}$ along with one additional bit at each
end of $\mathcal{R}$) undergoes more than one deletion under the deletion
channel, then $\mathcal{R}$ will experience no deletion in channel $\widehat{W}_n$, i.e. the corresponding bits are
\textit{present} in $\widehat{Y}(X^n)$. Note that deletions of the additional bits at the two ends are not affected.
Formally, we construct this sequence of channels as follows
when the input is a stationary process $\mathbb{X}$.
Let $\mathbb{D}$ be an iid Bernoulli$(d)$ process, independent
of $\mathbb{X}$, with $D_1^n$ being the $n$-bit vector that contains a $1$ if
and only if the corresponding bit in $X^n$ is deleted by the channel $W_n$.
We define $\widehat{\mathbb{D}}(\mathbb{D}, \mathbb{X})$ to be the process containing
a subset of the $1$s in $\mathbb{D}$. The process $\widehat{\mathbb{D}}$ is obtained
by deterministically flipping some of the $1$s in $\mathbb{D}$ as
described above, simultaneously for all runs. The output of the channel ${\widehat{W}}_n$ is simply defined by
deleting from $X^n$ those bits whose positions correspond to $1$s in
$\widehat{\mathbb{D}}$.
Notice that $(\mathbb{X},\mathbb{D},\widehat{\mathbb{D}})$ are jointly stationary.
The sequence of channels $W_n$ is defined by $\mathbb{D}$, and
the coupled sequence of channels $\widehat{W}_n$ is defined
by $\widehat{\mathbb{D}}$. We emphasize that $\widehat{\mathbb{D}}$ is a function of
$(\mathbb{X},\mathbb{D})$.
Let $\mathbb{Z} \equiv \mathbb{D} \oplus \widehat{\mathbb{D}}$ (where $\oplus$ is componentwise
sum modulo $2$). The process $\mathbb{Z}$ is stationary
with ${\mathbb P}(Z_0=1)\equiv z=\mathbb{E}[d- d(1-d)^{L_0+1}] \leq 2\,d^2\, \mathbb{E}[L_0]$.
Note that $z=O(d^2)$ for $\mathbb{E}[L_0]=O(1)$.
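The bound $z\le 2d^2\,\mathbb{E}[L_0]$ follows from $1-(1-d)^{l+1}\le (l+1)d\le 2ld$ for $l\ge 1$. A quick numerical check, using the Bernoulli$(1/2)$ bit-perspective law $\mathbb{P}(L_0=l)=l\,2^{-(l+1)}$ as a stand-in:

```python
d = 0.01

# Bit-perspective run-length law of the iid Bernoulli(1/2) process:
# P(L_0 = l) = l / 2^{l+1}, with E[L_0] = 3.
pL0 = {l: l / 2.0 ** (l + 1) for l in range(1, 80)}
EL0 = sum(l * p for l, p in pL0.items())

# z = E[d - d (1-d)^{L_0 + 1}]  vs.  the bound 2 d^2 E[L_0].
z = sum(p * (d - d * (1 - d) ** (l + 1)) for l, p in pL0.items())
print(z, 2 * d ** 2 * EL0)  # z is below the bound
```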
The following lemma shows the utility of the modified deletion process.
\begin{lemma}
Consider any $\mathbb{X} \in \mathcal{S}$ such that $\mathbb{E}[L_0 \log L_0] < \infty$. Then
\begin{align}
\lim_{n\to\infty}\frac{1}{n}H(\widehat{D}^n|X^n, \widehat{Y}^n)=
d\, \mathbb{E}[\log L_0] - \delta\, ,
\end{align}
where
$0\leq \delta = \delta (d, \mathbb{X})\leq 2d^2\mathbb{E}[L_0\log L_0]$.
\label{lemma:hatD_givenxy}
\end{lemma}
\begin{IEEEproof}
Fix a channel input $x^n$ and any
possible output ${\widehat{y}} =\widehat{y}(x^n)$
(i.e. an output that occurs with positive probability under $\widehat{W}_n$).
The proof consists in estimating (the logarithm of)
the number of realizations of $\widehat{D}^n$ that
might lead to the input/output pair $(x^n,{\widehat{y}})$, and then taking the expectation
over $(x^n,{\widehat{y}})$.
Proceeding from left to right, and using the constraint on $\widehat{\mathbb{D}}$,
we can map unambiguously each run in $\widehat{y}$ to
one or more runs in $x^n$, that gave rise to it through the deletion process.
Consider a run of length $\ell$ in ${\widehat{y}}$.
If there is a unique `parent' run, it must have length
$\ell$ or $\ell+1$.
If the length of the parent run is $\ell$, then no deletion
occurred in this run, and hence the contribution to
$H(\widehat{D}^n|x^n, \widehat{y})$ of such runs vanishes.
If the length of the parent run is $\ell+1$,
one bit was deleted by $\widehat{W}_n$ and each of the
$\ell+1$ possibilities is equally likely, leading to a
contribution $\log(\ell+1)$ to $H(\widehat{D}^n|x^n, \widehat{y})$.
Finally, if there are multiple parent runs of lengths $l_1, l_2, \ldots, l_k$,
they must be separated by single bits taking the opposite
value in $x^n$, all of which were deleted.
It also must be the case that $\sum_{i=1}^kl_i = \ell$, i.e., there is no
ambiguity in $\widehat{D}^n$.
This also implies $l_1<\ell$.
Notice that the three cases described correspond to three different lengths
for the run in ${\widehat{y}}$. This allows us to sequentially
associate runs in $\widehat{y}$ with runs in $x^n$, as claimed.
By the above argument,
$H(\widehat{D}^n|x^n, \widehat{y}^n)
= \sum_{r\in {\cal D}} \log(\ell_r)$ where ${\cal D}$ is the set
of runs on which deletions did occur, and $\ell_r$ are their
lengths. Using the definition of $\widehat{\mathbb{D}}$, the sum can be expressed
as $\sum_{i=1}^n\widehat{D}_i\log(\ell_{(i)})$, with $\ell_{(i)}$
the length of the run containing the $i$-th bit.
Using the definition of $\widehat{\mathbb{D}}$, we get
${\mathbb P}(\widehat{D}_i=1)= d(1-d)^{\ell_{(i)}+1} \in (d-(\ell_{(i)}+1)d^2, d)$
(except for the last and first block in $x^n$, that can be disregarded).
Taking expectation and letting $n\to\infty$ we get
the claim.
\end{IEEEproof}
\begin{coro}
Under the assumptions of the last Lemma, and
denoting by $h(p)$ the binary entropy function, we have
\begin{align*}
\lim_{n \rightarrow \infty} \frac{1}{n}H(Y(X^n)|X^n) = h(d) - d\,
\mathbb{E}[\log L_0] + \delta \, ,
\end{align*}
where $-2h(z)\leq \delta=
\delta (d, \mathbb{X}) \leq 2d^2\mathbb{E}[L_0\log L_0]+2h(z)$ and $z= d- \mathbb{E}[d(1-d)^{L_0+1}]$.
\label{coro:hygivenx}
\end{coro}
\begin{IEEEproof}
By definition, $D^n$ is independent of $X^n$.
We have, for $Y= Y(X^n)$,
\begin{align*}
H(Y|X^n)&= H(D^n|X^n) - H(D^n|X^n, Y)\\
&= nh(d) - H(\widehat{D}^n|X^n, \widehat{Y}) + n \delta_1 \, ,
\end{align*}
with $|\delta_1(d, \mathbb{X}) |\leq 2 H(Z^n)/n\le 2 h(z)$.
In the second equality we used the fact that the pairs $((X^n, Y, D^n), (X^n, \widehat{Y}, \widehat{D}^n))$ and
$((X^n, Y), (X^n, \widehat{Y}))$ are both of the form $(A, B)$ with $A$ a function of
$(B, Z^n)$ and $B$ a function of $(A, Z^n)$, which implies $|H(A) - H(B)| \leq H(Z^n)$.
\end{IEEEproof}
\subsection{Proofs of Lemmas \ref{lemma:iidhalf}, \ref{lemma:small_loss_by_restricting_runs} and \ref{lemma:converse_for_restricted_runs}}
\label{subsec:lemma_proofs}
\begin{IEEEproof}[Proof of Lemma \ref{lemma:iidhalf}]
Clearly, $\mathbb{X}^*$
has run length distribution $p_L(l) = 2^{-l}$, $l \geq 1$. Moreover,
$Y(X^{*,n})$ is also an iid Bernoulli$(1/2)$ string of length
$\sim \textup{Binomial}(n,1-d)$.
Hence, $H(Y)= n (1-d) +O(\log n)$. We now use the estimate of $H(Y|X^{*,n})$
from Corollary \ref{coro:hygivenx}. We have $z= O(d^2)$ and $\mathbb{E}[L_0\log L_0] < \infty$, leading to
\begin{align*}
H(Y|X^{*,n}) = n( h(d) - d\, \mathbb{E} [ \log L_0] + O(d^{2-{\epsilon}}) )+o(n)\, .
\end{align*}
Computing $H(Y) - H(Y|X^{*,n})$, we get the claim.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Lemma \ref{lemma:small_loss_by_restricting_runs}]
We construct $\mathbb{X}_{L^*}$ by flipping a bit each time it is the $(L^*+1)$-th
consecutive bit with the same value (either $0$ or $1$).
The density of such bits in $\mathbb{X}$ is upper bounded by
$\alpha={\mathbb P}(L_0>L^*)/L^*$. The expected fraction of bits in the
channel output $Y_{L^*}=Y(X^n_{L^*})$ that have been flipped relative to
$Y=Y(X^n)$ (output of the same channel realization
with different input) is also
at most $\alpha$. Let $F=F(\mathbb{X}, \mathbb{D})$ be the binary vector having the same
length as $Y$, with a $1$ wherever the corresponding bit in $Y_{L^*}$
is flipped relative to $Y$, and $0$s elsewhere. The expected fraction of $1$'s
in $F$ is at most $\alpha$. Therefore
\begin{align}
H(F) \leq n (1-d) h(\alpha) + \log (n+1)\, .
\label{eq:hflips_bound}
\end{align}
Notice that $Y$ is a deterministic function of $(Y_{L^*}, F)$ and $Y_{L^*}$ is a deterministic function of $(Y, F)$, whence
\begin{align}
|H(Y) - H(Y_{L^*})| \leq H(F)\, .
\label{eq:hy_bound_flips}
\end{align}
Further, $\mathbb{X}-\mathbb{X}_{L^*}-X_{L^*}^n-Y_{L^*}$ form a Markov chain,
and $\mathbb{X}_{L^*}$, $X_{L^*}^n$ are deterministic functions of $\mathbb{X}$.
Hence, $H(Y_{L^*} | X_{L^*}^n) = H(Y_{L^*} | \mathbb{X})$. Similarly,
$H(Y | X^n) = H(Y| \mathbb{X})$. Therefore (the second step is analogous to
Eq.~(\ref{eq:hy_bound_flips}))
\begin{align}
|H(Y_{L^*} | X_{L^*}^n) & - H(Y | X^n)| =\label{eq:hygivenx_bound_flips}\\
&=\;
|H(Y_{L^*} | \mathbb{X}) - H(Y | \mathbb{X})|
\leq \; H(F)\, . \nonumber
\end{align}
It follows from Lemma \ref{lemma:L0_TV} and $L^* > \log(1/d)$ that
$\alpha \leq 2K'' \sqrt{d (\log (1/d))^3}/L^*$ for sufficiently small $d$.
Hence, $h(\alpha) \leq d^{1/2-\epsilon} \log L^* /(2 L^*)$ for $d < d_0( \epsilon)$,
for some $d_0( \epsilon) > 0$.
The result follows by combining Eqs.~(\ref{eq:hflips_bound}),
(\ref{eq:hy_bound_flips}) and (\ref{eq:hygivenx_bound_flips}) to bound
$|I(\mathbb{X}) - I(\mathbb{X}_{L^*})|$.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Lemma \ref{lemma:converse_for_restricted_runs}]
If $H(\mathbb{X}) \leq 1+ 2d \log d$, we are done. Otherwise we proceed as follows.
We know that $Y(X^n)$ contains Binomial$(n,1-d)$ bits, leading immediately to
\begin{align}
H(Y) \leq n(1-d) + \log(n+1) \, .
\end{align}
We use the lower bound on $H(Y|X^n)$ from Corollary \ref{coro:hygivenx}.
We have $z \leq 2 d^2 \mathbb{E} [ L_0]$. It follows from Lemma \ref{lemma:L0_TV}
that $\mathbb{E} [L_0] \leq K_1 (1 + \sqrt{d (\log (1/d))^3} L^*)$, leading to $h(z) \leq 0.5\,d^{2-\epsilon}(1+0.5\,d^{1/2}L^*)$ for all $d < d_0$, where $d_0=
d_0(\epsilon)>0$. Thus, we have the bound
\begin{align*}
\lim_{n \rightarrow \infty}\frac{1}{n} H(Y|X^n) \geq h(d) - d\mathbb{E} [ \log L_0] - d^{2-\epsilon}(1+0.5\,d^{1/2}L^*)\, .
\end{align*}
Using Lemma \ref{lemma:L0_TV}, we have $|\mathbb{E} [ \log L_0] - \sum_{l=1}^\infty 2^{-l-1}l\log l| = o(d^{(1/2)-\epsilon}\log L^*)$.
The result follows.
\end{IEEEproof}
\vskip8pt
{\bf Acknowledgments.}
Y. Kanoria is supported by
a 3Com Corporation Stanford Graduate Fellowship. Y. Kanoria and A. Montanari were
supported by NSF, grants
CCF-0743978 and CCF-0915145, and a Terman fellowship.
\bibliographystyle{IEEEtran}
\section{Introduction}
Although in actual experiments with classical systems it might not always be possible to measure the system without
disturbing it, at least theoretically one can consider the ideal limit of a non-invasive measurement.
This idea has led to the theory of stochastic processes, a major mathematical toolbox used across many scientific
disciplines~\cite{VanKampenBook2007, GardinerBook2009}. Since the limit of an ideal non-disturbing measurement
does not exist for quantum systems, a widely accepted consensus of what a quantum stochastic process
actually \emph{is} has not yet emerged. However, recent progress (see Ref.~\cite{MilzEtAlArXiv2017} and references
therein) strongly suggests that a quantum stochastic process is conceptually similar to classical causal
modeling~\cite{PearlBook2009} and here we will follow this approach. Understanding under which circumstances a
projectively measured quantum system can be effectively described in a classical way is therefore of fundamental
interest as it sheds light on the gap between quantum and classical stochastic processes. In addition, it enables
us to distinguish quantum from classical features which is a relevant task for future technologies (e.g., in quantum
information or quantum thermodynamics) and for the field of quantum biology. Finally, it also has practical relevance
as classical stochastic processes are easier to simulate.
The relation between classical and quantum stochastic processes was first addressed by Smirne and
co-workers~\cite{SmirneEtAlQST2018}, who showed that the
answer to the question whether a quantum system effectively behaves classically is closely related to the question
whether coherences play a role in its evolution. More specifically, for a quantum dynamical semigroup obeying
the regression theorem (i.e., a time-homogeneous quantum Markov process), it was shown that the statistics obtained
from rank-1 projective measurements of a given system observable are compatible with a classical stochastic process
if and only if the dynamics is ``non-coherence-generating-and-detecting (NCGD)''~\cite{SmirneEtAlQST2018}.
The purpose of the present paper is to extend the results of Smirne \emph{et al.}~in various directions.
We will provide an operationally motivated definition of \emph{incoherent dynamics}, which is supposed to capture
the absence of any detectable coherence in the dynamics. It is applicable to any open systems dynamics and it is
different from the NCGD notion. Our definition allows us to prove the following:
first, for non-degenerate observables described by rank-1 projectors, any process which can be effectively described
classically is incoherent (i.e., cannot generate any detectable coherence), whereas the converse is only true for
invertible Markovian, but not necessarily time-homogeneous dynamics. Second, for degenerate observables, we lose the
property that classicality implies incoherent dynamics because detectable coherence can be hidden in degenerate
subspaces.
The rest of the paper is structured as follows. In Sec.~\ref{sec mathematical preliminaries} we set the stage and
introduce some basic definitions. Our main results are reported in Sec.~\ref{sec non degenerate} for non-degenerate
observables and in Sec.~\ref{sec degenerate} for degenerate observables. We conclude in Sec.~\ref{sec conclusions}.
A thorough comparison with the framework of Ref.~\cite{SmirneEtAlQST2018} is given in Appendix~\ref{sec app comparison}
showing that our results reduce to the ones of Smirne \emph{et al.}~in the respective limit. Various counterexamples,
which demonstrate that our main theorems in Sec.~\ref{sec results} are tight, are postponed to
Appendix~\ref{sec app counterexamples}.
\section{Mathematical preliminaries}
\label{sec mathematical preliminaries}
We start by reviewing basic notions of a classical stochastic process. We label the classical distinguishable
states of the system of interest by $r$ and we assume that the system gets measured at an arbitrary set of discrete
times $\{t_1,\dots,t_n\}$. We denote the result at time $t_i$ by $r_i$. Furthermore, for reasons which will become
clearer later on, we explicitly denote the initial preparation of the experiment by $\C A_0$. At
this stage the reader can think of this as merely a verbal description of how to initialize the experiment
(e.g., `wait long enough such that the system is equilibrated and start measuring afterwards'), later on it will
mathematically turn out to be a completely positive and trace-preserving map. We then denote the
joint probability distribution to get the sequence of measurement results $\bb r_n = r_1,\dots,r_n$ at times
$t_1,\dots,t_n$ given the initial preparation $\C A_0$ by
\begin{equation}\label{eq joint probability}
p(r_n,t_n;\dots;r_1,t_1|\C A_0) \equiv p(\bb r_n|\C A_0).
\end{equation}
The following definition is standard:
\begin{mydef}
The probabilities $p(\bb r_n|\C A_0)$ are said to be classical with respect to a given preparation procedure
$\C A_0$ if they fulfill the consistency condition
\begin{equation}\label{eq def classicality}
\sum_{r_k} p(r_\ell,\dots,r_k,\dots,r_j|\C A_0) = p(r_\ell,\dots,\cancel{r_k},\dots,r_j|\C A_0)
\end{equation}
for all $\ell\ge k\ge j\ge1$. Here, the probability on the right hand side is constructed by measuring the states
$r_i$ of the system only at the set of times $\{t_\ell,\dots,t_j\}\setminus\{t_k\}$.
\end{mydef}
We remark that, if the consistency requirement~(\ref{eq def classicality}) is fulfilled, then -- by the
Kolmogorov-Daniell extension theorem -- we know that there exists an underlying continuous-in-time
stochastic process, which contains all joint probabilities~(\ref{eq joint probability}) as marginals. The importance
of this theorem lies in the fact that it allows us to bridge experimental reality (where any measurement statistics
is always finite) with its theoretical description (which often uses continuous-time dynamics in form of, e.g.,
stochastic differential equations).
Although condition~(\ref{eq def classicality}) is in general not fulfilled for quantum dynamics, the joint probability
distribution~(\ref{eq joint probability}) is nevertheless a well-defined object in quantum mechanics. For this purpose
we assume that the experimentalist measures at time $t_k$ an arbitrary system observable $R_k = \sum_{r_k} r_k P_{r_k}$
with projectors $P_{r_k} = P_{r_k}^2$ and eigenvalues $r_k\in\mathbb{R}$. If all projectors are rank-1, i.e.,
$P_{r_k} = |r_k\rl r_k|$, we talk about
a non-degenerate system observable, otherwise we call it degenerate. Furthermore, following the conventional picture of
open quantum systems~\cite{BreuerPetruccioneBook2002}, we allow the system $S$ to be coupled to an arbitrary environment
$E$. The initial system-environment state at time $t_0<t_1$ is denoted by $\rho_{SE}(t_0)$. Then, by using superoperator
notation, we can express Eq.~(\ref{eq joint probability}) as
\begin{equation}\label{eq probability}
\begin{split}
& p(\bb r_n|\C A_0) \\
&= \mbox{tr}_{SE}\left\{\C P_{r_n}\C U_{n,n-1}\dots \C P_{r_2}\C U_{2,1}\C P_{r_1}\C U_{1,0}\C A_0\rho_{SE}(t_0)\right\} \\
&\equiv \mbox{tr}_S\left\{\mf T_{n+1}[\C P_{r_n},\dots,\C P_{r_2},\C P_{r_1},\C A_0]\right\}.
\end{split}
\end{equation}
Here, the preparation procedure $\C A_0$ is an arbitrary completely positive (CP) and trace-preserving map acting on the
system only (we suppress identity operations in the tensor product notation). Notice that the preparation procedure
could itself be the identity operation (i.e., `do nothing') denoted by $\C A_0 = \C I_0$. Furthermore, $\C U_{k,k-1}$
denotes the unitary time-evolution propagating the system-environment state from time $t_{k-1}$ to $t_k$ (we make no
assumption about the underlying Hamiltonian here). We also introduced the projection superoperator
$\C P_{r_k} \rho \equiv P_{r_k}\rho P_{r_k}$, which acts only on the system and corresponds to result $r_k$ at time
$t_k$. Finally, in the last line of Eq.~(\ref{eq probability}) we have introduced the $(n+1)$-step `process tensor'
$\mf T_{n+1}$~\cite{PollockEtAlPRA2018} (also called `quantum comb'~\cite{ChiribellaDArianoPerinottiPRL2008,
ChiribellaDArianoPerinottiPRA2009} or `process matrix'~\cite{CostaShrapnelNJP2016, OreshkovGiarmatziNJP2016}). It is
a formal but operationally well-defined object: it yields the (subnormalized) state of the system
$\tilde\rho_S(\C P_{r_n},\dots,\C P_{r_2},\C P_{r_1},\C A_0) = \mf T_{n+1}[\C P_{r_n},\dots,\C P_{r_2},\C P_{r_1},\C A_0]$
conditioned on a certain sequence of interventions $\C P_{r_n},\dots,\C P_{r_2},\C P_{r_1},\C A_0$. Its norm,
as given by the trace over $S$, equals the probability to obtain the measurement results $\bb r_n$. Recently,
it was shown that the process tensor allows for a rigorous definition of quantum stochastic processes (or quantum causal
models) fulfilling a generalized version of the Kolmogorov-Daniell extension theorem~\cite{MilzEtAlArXiv2017}.
We also add that complete knowledge of the process tensor $\mf T_n$ implies complete knowledge of the process
tensor $\mf T_\ell$ for $\ell\le n$, i.e., $\mf T_n$ contains $\mf T_\ell$.
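To make Eq.~(\ref{eq probability}) concrete, the following sketch evaluates the joint probability for a single qubit with a trivial environment. The Haar-random unitaries, the choice of $\sigma_z$ as the measured observable and the pure-state preparation are illustrative assumptions, not part of the setup above; as a sanity check, the probabilities over all outcome sequences sum to one.

```python
import numpy as np

# Sketch of Eq. (eq probability) for a single qubit with a trivial
# environment. The random unitaries, the sigma_z observable and the
# pure-state preparation are illustrative assumptions.
rng = np.random.default_rng(1)

P = [np.array([[1, 0], [0, 0]], complex),   # rank-1 projectors P_{r}
     np.array([[0, 0], [0, 1]], complex)]

def haar_unitary(rng, d=2):
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def joint_probability(outcomes, unitaries, rho0):
    """p(r_n,...,r_1 | A_0) = tr{ P_{r_n} U ... P_{r_1} U A_0 rho ... }."""
    rho = rho0
    for r, U in zip(outcomes, unitaries):
        rho = U @ rho @ U.conj().T          # propagator U_{k,k-1}
        rho = P[r] @ rho @ P[r]             # projection superoperator P_{r_k}
    return np.trace(rho).real

rho0 = np.array([[1, 0], [0, 0]], complex)  # preparation A_0 -> |0><0|
Us = [haar_unitary(rng) for _ in range(3)]
total = sum(joint_probability((a, b, c), Us, rho0)
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 10))  # probabilities over all sequences sum to 1
```

The sum over all outcome sequences equals one because each step combines a trace-preserving unitary with the trace-preserving dephasing channel $\sum_r P_r(\cdot)P_r$.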
We now have the main tools at hand to precisely state the question we are posing in this paper: Which conditions
does a quantum stochastic process need to fulfill in order to guarantee that the resulting measurement statistics can
(or cannot) be explained by a classical stochastic process? That is, when is Eq.~(\ref{eq def classicality}) fulfilled
or, in terms of the process tensor, when is
\begin{equation}
\begin{split}\label{eq classicality process tensor}
& \mbox{tr}_S\{\mf T_{\ell+1}[\C P_{r_\ell},\dots,\Delta_k,\dots,\C P_{r_j},\dots,\C A_0]\} \\
&\stackrel{?}{=} \mbox{tr}_S\{\mf T_{\ell+1}[\C P_{r_\ell},\dots,\C I_k,\dots,\C P_{r_j},\dots,\C A_0]\}.
\end{split}
\end{equation}
Here, we have introduced the \emph{dephasing operation} at time $t_k$, $\Delta_k \equiv \sum_{r_k} \C P_{r_k}$,
which plays an essential role in the following. Furthermore, the dots in Eq.~(\ref{eq classicality process tensor})
denote either projective measurements (if the system gets measured at that time) or identity operations (if the
system does not get measured at that time).
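The role of the dephasing operation in Eq.~(\ref{eq classicality process tensor}) can be illustrated with a minimal sketch in the $\sigma_z$ eigenbasis of a qubit (our choice of basis and state, purely for illustration): $\Delta$ erases coherences while leaving the populations untouched.

```python
import numpy as np

# The dephasing operation Delta = sum_r P_r (.) P_r, written here in the
# sigma_z eigenbasis of a qubit: it removes the off-diagonal elements
# (coherences) but preserves the diagonal (populations).
P = [np.array([[1, 0], [0, 0]], complex),
     np.array([[0, 0], [0, 1]], complex)]

def dephase(rho):
    return sum(Pr @ rho @ Pr for Pr in P)

rho = np.array([[0.6, 0.3j], [-0.3j, 0.4]])  # state with coherences
out = dephase(rho)
print(np.allclose(np.diag(out), np.diag(rho)))   # populations preserved
print(np.allclose(out, np.diag(np.diag(rho))))   # coherences removed
```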
To answer the question, we will need a suitable notion of an `incoherent' quantum stochastic process, defined as
follows:
\begin{mydef}
For a given set of observables $\{R_k\}$, $k\in\{1,\dots,\ell\}$, we call the dynamics of an open quantum system
$\ell$-incoherent with respect to the preparation $\C A_0$ if all process tensors
\begin{equation}\label{eq incoherent}
\mf T_{\ell+1}\left[\Delta_\ell,\left\{\begin{matrix} \Delta_{\ell-1} \\ \C I_{\ell-1} \\ \end{matrix}\right\},\dots,
\left\{\begin{matrix} \Delta_1 \\ \C I_1 \\ \end{matrix}\right\},\C A_0\right]
\end{equation}
are equal. Here, the curly bracket notation means that at each time step we can freely choose to perform either
a dephasing operation ($\Delta$) or nothing ($\C I$). If the dynamics are $\ell$-incoherent for all
$\ell\in\{1,\dots,n\}$, we simply call the dynamics incoherent with respect to the preparation procedure $\C A_0$.
\end{mydef}
This definition is supposed to capture the situation where the experimentalist has no ability to detect the presence
of coherence during the course of the evolution. For this purpose we imagine that the experimentalist can manipulate
the system in two ways: first, she can prepare the initial system state via $\C A_0$ (which could be
merely the identity operation) and, second, she can projectively measure the system observables $R_k$ at times
$t_k\in\{t_1,\dots,t_n\}$. The question is then: if the final state got dephased with respect to the observable $R_\ell$
(e.g., by performing a final measurement of $R_\ell$), is the experimentalist able to infer whether the system was
subjected to additional dephasing operations at earlier times, i.e., can possible coherences at earlier times become
manifest in different populations at the final time $t_\ell$? If that is not the case, the dynamics are called
$\ell$-incoherent. We remark that a process that is $\ell$-incoherent is not necessarily $k$-incoherent for $k\neq\ell$.
It is therefore important to specify at which (sub)set of times the process is incoherent. In the following we will be
only interested in processes which are $\ell$-incoherent for all $\ell\in\{1,\dots,n\}$, henceforth dubbed simply
`incoherent' (with respect to the preparation $\C A_0$). We repeat that our definition of incoherence is different
from the NCGD notion introduced in Ref.~\cite{SmirneEtAlQST2018}, see Appendix~\ref{sec app comparison}.
Furthermore, a similar idea restricted to two times was introduced in Ref.~\cite{GessnerBreuerPRL2011} in order
to detect nonclassical system-environment correlations in the dynamics of open quantum systems.
\section{Results}
\label{sec results}
\subsection{Non-degenerate observables}
\label{sec non degenerate}
Our first main result is the following:
\begin{thm}\label{thm classical implies incoherent}
If the measurement statistics are classical with respect to $\C A_0$, then the dynamics are incoherent with respect
to $\C A_0$.
\end{thm}
Before we prove it, we remark that this theorem holds for any quantum stochastic process (especially without imposing
Markovianity). Furthermore, a classical process for the times $\{t_n,\dots,t_1\}$ is also classical for all subsets of
times and hence, the theorem implies incoherence, i.e., $\ell$-incoherence for all $\ell\in\{1,\dots,n\}$.
In the following proof we will only display the case $\ell=n$, as the rest follows immediately.
\begin{proof}
We start by noting that
\begin{equation}\label{eq step 1 theorem 1}
\mf T_{n+1}[\C P_{r_n},\dots,\C P_{r_1},\C A_0] = p(r_n,\dots,r_1|\C A_0) |r_n\rl r_n|,
\end{equation}
which is a general identity as we have not made any assumption about the joint probability $p(r_n,\dots,r_1|\C A_0)$.
Obviously, if we choose to perform nothing at an arbitrary time $t_\ell<t_n$, we have
\begin{equation}
\begin{split}
& \mf T_{n+1}[\C P_{r_n},\dots,\C I_\ell,\dots,\C P_{r_1},\C A_0] \\
&= p(r_n,\dots,\cancel{r_\ell},\dots,r_1|\C A_0)|r_n\rl r_n|.
\end{split}
\end{equation}
But by assumption of classicality, this is equal to
\begin{equation}\label{eq help 1}
\begin{split}
& \mf T_{n+1}[\C P_{r_n},\dots,\C I_\ell,\dots,\C P_{r_1},\C A_0] \\
&= \sum_{r_\ell} p(r_n,\dots,r_\ell,\dots,r_1|\C A_0)|r_n\rl r_n| \\
&= \sum_{r_\ell} \mf T_{n+1}[\C P_{r_n},\dots,\C P_{r_\ell},\dots,\C P_{r_1},\C A_0] \\
&= \mf T_{n+1}[\C P_{r_n},\dots,\Delta_\ell,\dots,\C P_{r_1},\C A_0].
\end{split}
\end{equation}
Hence, by summing Eq.~(\ref{eq help 1}) over the remaining $r_k$ with $k\neq\ell$, we confirm
\begin{equation}
\begin{split}
& \mf T_{n+1}[\Delta_n,\dots,\C I_\ell,\dots,\Delta_1,\C A_0] \\
&= \mf T_{n+1}[\Delta_n,\dots,\Delta_\ell,\dots,\Delta_1,\C A_0]
\end{split}
\end{equation}
for arbitrary $t_\ell < t_n$ and where the dots denote dephasing operations at the remaining times. We can now pick
another arbitrary time $t_k\neq t_\ell$ and repeat essentially the same steps as above to arrive at the conclusion
\begin{equation}
\begin{split}
& \mf T_{n+1}[\Delta_n,\dots,\C I_\ell,\dots,\C I_k,\dots,\Delta_1,\C A_0] \\
&= \mf T_{n+1}[\Delta_n,\dots,\Delta_\ell,\dots,\Delta_k,\dots,\Delta_1,\C A_0]
\end{split}
\end{equation}
for any two times $t_k\neq t_\ell$. By repeating this argument further, we finally confirm that the dynamics are
incoherent.
\end{proof}
The converse of Theorem~\ref{thm classical implies incoherent} holds only in a stricter sense. For this purpose we
need the notion of Markovianity as defined in Ref.~\cite{PollockEtAlPRL2018}. In there, it was shown that the
definition of a quantum Markov process implies the notion of \emph{operational CP divisibility}. This means that
for an arbitrary set of independent interventions (CP maps)
$\C A_{r_n},\dots,\C A_{r_0}$ the process tensor `factorizes' as
\begin{equation}\label{eq Markov process tensor}
\mf T_{n+1}[\C A_{r_n},\dots,\C A_{r_0}] = \C A_{r_n}\Lambda_{n,n-1}\dots\Lambda_{1,0}\C A_{r_0}\rho_S(t_0).
\end{equation}
Here, the set $\{\Lambda_{\ell,k}\}$ is a family of CP and trace-preserving maps fulfilling the composition law
$\Lambda_{\ell,j} = \Lambda_{\ell,k}\Lambda_{k,j}$ for any $\ell>k>j$. We remark that a CP divisible process (which is
commonly referred to as being `Markovian') is in general not operationally CP divisible (also see the recent
discussion in Ref.~\cite{MilzEtAlArXiv2019}). In a nutshell, an operationally CP divisible process always fulfills the
quantum regression theorem, but a CP divisible process does not (a counterexample is in fact shown in
Appendix~\ref{sec app comparison}).
Furthermore, to establish the converse of Theorem~\ref{thm classical implies incoherent} we also need the following
definition:
\begin{mydef}
A Markov process $\{\Lambda_{\ell,k}\}$ is said to be invertible if the inverse of $\Lambda_{k,0}$ exists
for all $k$, i.e., the CP and trace-preserving maps $\Lambda_{\ell,k}$ can be written as
$\Lambda_{\ell,0}\Lambda_{k,0}^{-1}$.
\end{mydef}
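A minimal numerical illustration of this notion of invertibility, using a partially depolarizing qubit channel as a stand-in for $\Lambda_{k,0}$ (our choice, not taken from the text): for $p<1$ its superoperator matrix is invertible as a linear map (the inverse need not be CP), whereas the fully depolarizing channel is not.

```python
import numpy as np

# Invertibility in the sense of the definition: the superoperator matrix of
# Lambda_{k,0} must be invertible as a linear map (its inverse need not be
# completely positive). Example channel chosen for illustration.
def depolarizing_superop(p):
    """Matrix of rho -> (1-p) rho + p (I/2) tr(rho), acting on vec(rho)."""
    vec_I = np.eye(2).reshape(-1)
    return (1 - p) * np.eye(4) + p * np.outer(vec_I / 2, vec_I)

L = depolarizing_superop(0.3)
L_inv = np.linalg.inv(L)                  # exists for p < 1
print(np.allclose(L_inv @ L, np.eye(4)))  # -> True

full = depolarizing_superop(1.0)          # fully depolarizing channel
print(np.linalg.matrix_rank(full))        # -> 1: not invertible
```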
We are now ready to prove the next main theorem:
\begin{thm}\label{thm incoherent classical}
If the dynamics are Markovian, invertible and incoherent for all preparations $\C A_0$, then the statistics are
classical for any preparation.
\end{thm}
\begin{proof}
By using Eq.~(\ref{eq Markov process tensor}) and the property of incoherence, we can conclude that for any two
times $t_{\ell+1}, t_\ell \in\{t_1,\dots,t_n\}$ (with $t_{\ell+1} > t_\ell$)
\begin{equation}
\Delta_{\ell+1} \Lambda_{\ell+1,\ell}\Delta_\ell \Lambda_{\ell,0}\C A_0\rho_S(t_0)
= \Delta_{\ell+1} \Lambda_{\ell+1,\ell} \Lambda_{\ell,0}\C A_0\rho_S(t_0).
\end{equation}
Since the dynamics are invertible and incoherent for all preparations $\C A_0$, this implies the superoperator identity
$\Delta_{\ell+1} \Lambda_{\ell+1,\ell}\Delta_\ell = \Delta_{\ell+1} \Lambda_{\ell+1,\ell}$. By multiplying this
equation with $\C P_{r_{\ell+1}}$, we arrive at
\begin{equation}\label{eq superoperator identity}
\sum_{r_\ell} \C P_{r_{\ell+1}}\Lambda_{\ell+1,\ell}\C P_{r_\ell} = \C P_{r_{\ell+1}}\Lambda_{\ell+1,\ell}.
\end{equation}
From this general identity we immediately obtain that
\begin{equation}
\begin{split}
& \sum_{r_\ell}p(\bb r_n) \\
&= \mbox{tr}\left\{\C P_{r_n}\Lambda_{n,n-1}\dots\sum_{r_\ell}\C P_{r_{\ell+1}}\Lambda_{\ell+1,\ell}\C P_{r_\ell}
\dots\C P_{r_1}\Lambda_{1,0}\C A_0\rho\right\} \\
&= \mbox{tr}\{\C P_{r_n}\Lambda_{n,n-1}\dots\C P_{r_{\ell+1}}\Lambda_{\ell+1,\ell}
\dots\C P_{r_1}\Lambda_{1,0}\C A_0\rho\} \\
&= p(r_n,\dots,\cancel{r_\ell},\dots,r_1).
\end{split}
\end{equation}
This concludes the proof as the above argument also holds for all possible subsets of times.
\end{proof}
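The superoperator identity~(\ref{eq superoperator identity}) can be checked numerically for a simple channel whose output populations do not depend on input coherences; the amplitude-damping qubit channel used below is our illustrative choice, not an object from the proof.

```python
import numpy as np

# Numerical check of sum_r P' Lambda P_r = P' Lambda for the
# amplitude-damping qubit channel (illustrative example).
g = 0.25
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
K1 = np.array([[0, np.sqrt(g)], [0, 0]])
P = [np.array([[1, 0], [0, 0]], complex),
     np.array([[0, 0], [0, 1]], complex)]

def channel(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.6, 0.3j], [-0.3j, 0.4]])   # test state with coherences
checks = [np.allclose(sum(Pp @ channel(Pr @ rho @ Pr) @ Pp for Pr in P),
                      Pp @ channel(rho) @ Pp) for Pp in P]
print(checks)  # -> [True, True]
```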
We add that the counterexamples in Appendix~\ref{sec app counterexamples} demonstrate that
Theorem~\ref{thm incoherent classical} is also tight in the sense that a process, which is incoherent only for a subset
of preparations or which is not invertible, does not imply classical statistics. One remaining open
question concerns the assumption of Markovianity. At the moment it is not clear whether relaxing this condition is
meaningful, as it requires defining the notion of invertibility for a non-Markovian process, which is ambiguous.
Furthermore, the superoperator identity~(\ref{eq superoperator identity}) implies that, if we write $\Lambda_{\ell,k}$
as a matrix in an ordered basis where populations precede coherences with respect to the measured observable $R_k$
(input) and $R_\ell$ (output), it has the form
\begin{equation}
\Lambda_{\ell,k} = \begin{pmatrix}
A_{\ell,k} & 0 \\
C_{\ell,k} & D_{\ell,k} \\
\end{pmatrix},
\end{equation}
where $A_{\ell,k}$ is a stochastic matrix and $C_{\ell,k}$ and $D_{\ell,k}$ are matrices that are
constrained only by the requirement of complete positivity.
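This block structure can be verified numerically. The sketch below writes the superoperator matrix of an amplitude-damping qubit channel (our example, which satisfies the incoherence condition in the $\sigma_z$ basis) in the ordered basis $\{|0\rangle\langle0|, |1\rangle\langle1|, |0\rangle\langle1|, |1\rangle\langle0|\}$, with populations preceding coherences, and confirms that the upper-right block vanishes and that $A$ is (column-)stochastic.

```python
import numpy as np

# Superoperator matrix of the amplitude-damping qubit channel (example)
# in the ordered basis {|0><0|, |1><1|, |0><1|, |1><0|}: populations
# first, then coherences, as in the text.
g = 0.25
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
     np.array([[0, np.sqrt(g)], [0, 0]])]
basis = [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]]),
         np.array([[0, 1], [0, 0]]), np.array([[0, 0], [1, 0]])]

L = np.zeros((4, 4), complex)
for j, B in enumerate(basis):
    out = sum(Ki @ B @ Ki.conj().T for Ki in K)
    for i, E in enumerate(basis):
        L[i, j] = np.trace(E.conj().T @ out)   # Hilbert-Schmidt coefficients

A = L[:2, :2].real
print(np.allclose(L[:2, 2:], 0))     # upper-right block vanishes
print(np.allclose(A.sum(axis=0), 1)) # columns of A sum to 1 (stochastic)
```

For this particular channel the coherence block $D$ is simply $\sqrt{1-g}$ times the identity and $C$ vanishes; a generic incoherent channel would have a nontrivial $C$ block.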
\subsection{Degenerate observables}
\label{sec degenerate}
If the measured observable contains degeneracies, the picture above somewhat reverses. First,
Theorem~\ref{thm classical implies incoherent} ceases to hold even in the Markovian and invertible regime because
the assumption of a non-degenerate observable already entered in the first step of its proof, see
Eq.~(\ref{eq step 1 theorem 1}). Physically speaking, the reason is that it now becomes possible to hide coherences in
degenerate subspaces and this can have a detectable effect on the output state~(\ref{eq incoherent}). This is
demonstrated with the help of an example in Appendix~\ref{sec app counterexamples}. In contrast,
Theorem~\ref{thm incoherent classical} still holds true for degenerate observables. In fact, in the proof of
Theorem~\ref{thm incoherent classical} we never used that the measured observable is non-degenerate.
\section{Conclusions}
\label{sec conclusions}
We have investigated whether the outcomes of a projectively measured quantum system can be described classically,
depending on the ability of the open quantum system to show detectable effects of coherence. Whether
the quantum stochastic process is (invertible) Markovian and whether the measured observables are degenerate (or not)
has a crucial influence on the results. Together with the counterexamples in Appendix~\ref{sec app counterexamples}
we believe that we have provided a fairly complete picture about the interplay between classicality, coherence
and Markovianity. It remains open, however, whether our definition of `incoherent dynamics' is the most
meaningful one. One clear advantage of our proposal is that it is operationally and theoretically well-defined for
arbitrary quantum processes. Therefore, it could help to extend existing resource theories, which crucially
rely on the existence of dynamical maps~\cite{StreltsovAdessoPlenioRMP2017}, to arbitrary multi-time processes.
We further point out that our investigation is closely related to the study of Leggett-Garg inequalities and possible
violations thereof~\cite{LeggettGargPRL1985, EmaryLambertNoriRPP2014}. In fact, the classicality
assumption~(\ref{eq def classicality}) plays a crucial role in deriving any Leggett-Garg inequality.
Therefore, we can conclude that all incoherent quantum systems, which evolve in an invertible Markovian way, will
never violate a Leggett-Garg inequality if the measured observable is non-degenerate. Interestingly, incoherent
quantum systems could potentially violate Leggett-Garg inequalities if the measured observable is degenerate.
Another interesting open question is whether the property of incoherence implies a particular structure
for the generator of a quantum master equation, which is still the most widely used tool in open
quantum system theory. This question is further pursued by one of us~\cite{Diaz}.
\emph{Note added.} While this manuscript was under review, we became aware of the work of
Milz \emph{et al.}~\cite{MilzEgloffEtAlArXiv2019} where an identical question is analysed from a related perspective.
\subsection*{Acknowledgments}
PS is financially supported by the DFG (project STR 1505/2-1) and MGD by `la Caixa' Foundation, grant LCF/BQ/DE16/11570017. We also acknowledge funding from the Spanish MINECO FIS2016-80681-P (AEI-FEDER, UE).
\section{Introduction}
Interstellar ices are formed in regions where UV photons are attenuated (\citealt{whittet2001}). In these regions, dust grains grow thick icy mantles (tens of monolayers) from accretion and deposition of atomic and molecular species onto their surfaces. This has been confirmed by observations of starless cores (\citealt{tafalla2006}) which show that the abundances of most gas-phase species suffer a significant drop toward the core center. The missing gas species constitute the icy mantles that cover dust grains and that are largely composed of water (\citealt{williams1992}; \citealt{whittet1998}; \citealt{gibb2004}; \citealt{pontoppidan2004}; \citealt{boogert2008}). Other abundant species in the ices are CO$_2$, CO, CH$_3$OH and NH$_3$ (\citealt{gibb2004}). As a star forms and heats up the surrounding environment, ices in the vicinity undergo thermal processes and evaporate from the dust grains, hence delivering their constituents into the gas-phase. This star formation stage, called the hot core (massive stars) or hot corino (low mass stars) phase, exhibits a very complex chemistry rich in oxygen- and nitrogen-bearing molecules (\citealt{caselli1993}; \citealt{cazaux2003}), but also shows species such as formaldehyde (H$_{2}$CO) and methanol (CH$_{3}$OH). These species are precursors of more complex organic species which can be formed under thermal (\citealt{theule2013}) and Vacuum UV (VUV) processing (\citealt{oberg2009}) or upon continuing atom additions (\citealt{fedoseev2014}). Another property of these ices is that they can help dust grains to coagulate and then form bigger bodies, and therefore support the processes that will ultimately lead to planet formation (\citealt{ormel2007}).
The majority of the studies investigating interstellar ices, both observationally and in the laboratory, have been focusing on ice composition, rather than on ice structure (morphology). The ice composition reflects the chemical history, whereas the ice structure provides information on a number of physical processes: accretion, desorption, segregation and local radiation fields. The frost of water on interstellar grains in dense molecular clouds is thought to consist mainly of an amorphous water ice with trapped impurities (e.g., \citealt{tielens1987}). This is consistent with several theoretical studies where amorphous and porous ices are grown by a hit and stick mechanism (\citealt{cuppen2007}). In the laboratory, background deposition of gas-phase constituents on a cold substrate results in a multi-phase composite ice sample, owing to the pores that are inevitably formed during growth (\citealt{brown1996}; \citealt{dohnalek2003}; \citealt{mate2012}; \citealt{bossa2014}). However, water ices formed experimentally from atomic hydrogen and oxygen present a compact structure (\citealt{oba2009}). Recent theoretical studies showed that the porosity of ices formed by surface reactions depends on the density of the gas (\citealt{garrod2013}). There is presently no observational evidence that interstellar ices present a porous structure because the OH dangling bonds of water on the surface of the pores, at 2.7 $\mu$m, have not been observed (\citealt{keane2001}; \citealt{gibb2004}). Moreover, many laboratory studies monitoring the 2.7 $\mu$m feature have shown that pores initially present in amorphous solid water (ASW) ices disappear with UV photon and ion irradiation (\citealt{palumbo2006}; \citealt{raut2007}; \citealt{palumbo2010}). The level of porosity is also found to decrease during an ASW ice growth (\citealt{mispelaer2013}).
The authors explain the observed porosity evolution during an isothermal deposition by an ice relaxation process and/or by the increasing amount of water molecules forming the ice. Therefore, it seems that the lack of porosity in interstellar ices may be attributed to a change of ice morphology under external influences such as ion impact, VUV irradiation, H-atom bombardment, or during long time scale accretion.
In a number of recent experimental studies, however, it was proven that a lack of dangling OH bonds does not necessarily exclude a remaining level of porosity in the ice (\citealt{raut2007}; \citealt{isokoski2014}) and special care needs to be taken to prove that water ice in space is fully compact. Porous ice -- even when largely compacted -- offers more area for surface dominated reactions, also within the bulk of the ice. Depending on the structure of the pores -- many small versus a few larger pores, connected or not -- this will affect the catalytic potential of an icy dust grain. Recent studies (\citealt{bossa2012}, \citealt{isokoski2014}) have focused on measuring the compaction of an ice upon heating. This allows one to conclude on the level of porosity in an ice, but not on the actual pore size.
Experiments performed by \cite{bossa2012} and \cite{isokoski2014} have shown that the thickness of a background deposited porous ASW sample decreases by about 12 $\pm$1$\%$ (upper limit) upon heating from 20 to 120 K. The thickness decrease of less-porous ASW is smaller, and negligible for crystalline solid water. While this compaction was first attributed to a loss of porosity, \cite{isokoski2014} found that pores do survive upon warming up and that the porosity of a porous ice initially deposited at 20~K and then annealed to 120~K is still around 20$\%$. The porosity of water ice is related to the temperature at which ices are deposited (but also to VUV and ion irradiation). The goal of the present study is to simulate the temperature evolution of porosity in an interstellar ice using a theoretical approach.
ASW exhibits tetrahedral H-bonding structure. This structure was first modeled by \cite{polk1973} with amorphous Si or Ge using a concept based on a tetrahedrally coordinated random-network structure. Each atom has a first coordination number of four, and only a small variation in the nearest neighbour distance is allowed. Non-crystallinity is due to variations in the tetrahedral bond angle and rotations along bonds. These models were extended in order to simulate the scattering of X-rays and neutrons on amorphous water ice (\citealt{boutron1975}).
The mobility of water molecules in amorphous water ices has been theoretically treated by \cite{zhdanov2000} using a ballistic deposition (the water molecules hit and can form at least one bond) followed by a defined number of jumps to near neighbour sites, the direction of the jumps being chosen randomly. These simulations, using a Monte Carlo approach, consider diffusion, crystallization and desorption of water in porous amorphous films in order to quantify the desorption processes. In the liquid phase, \cite{laage2006} have used molecular dynamics calculations and have shown that water molecules break and reform hydrogen bonds, and that this process is accompanied by a reorientation mechanism which involves large-amplitude angular jumps, rather than a sequence of small diffusive steps. The diffusion in the solid-phase through breaking and reforming bonds depends on the hydrogen bond strengths of the molecules in the ices. In a recent study, \cite{hus2012} have shown, using density functional theory, that hydrogen bond strengths increase or decrease linearly with the number of water molecules in their surroundings. A simple empirical linear relationship was discovered between maximum hydrogen bond strength and the number of water molecules in the local environment. The hydrogen bond strengths in the study by \cite{hus2012} range from 4.96 kcal/mol (2490~K) to 9.41 kcal/mol (4735~K), depending on the number of donors and acceptors in the local environment.
In this study, we use Monte Carlo simulations to follow the migration of water molecules within the ices, and the thermal evolution of pores with temperature. Our simulations show that when ices become warmer, the water molecules rearrange and the empty spaces between the agglomerate of strongly bound water molecules expand. This implies that pores grow and merge as the temperature of the ices increases. While the merging/growth of pores in ice was previously suspected (\citealt{raut2007}; \citealt{isokoski2014}), this work highlights for the first time this behaviour in a quantitative manner.
\section{Modeling amorphous solid water}
We use a step-by-step Monte Carlo simulation to follow the formation of amorphous water ices as well as the temperature evolution of the pores within the ices. Water molecules originating from the gas-phase arrive at a random time and location and follow a random walk within the ice. The arrival time depends on the rate at which gas species collide with the grain. The time for accretion and diffusion of the water molecules is defined in the next sections.
\subsection{Structure of the ices: assumptions}
Low-density amorphous ice consists mainly of four-coordinated, tetrahedrally ordered water molecules (\citealt{brovchenko2006} and references therein). In order to model amorphous water ice, we define our grid model to allow the water molecules to be organised as tetrahedrons. We therefore pre-define which sites of the grid can be occupied and which sites should stay vacant. We concentrate on the molecular centers, determined by the oxygen positions, and consider a four-coordinated structure since it is amorphous. An example of our grid is shown in figure~\ref{grid}, where each knot represents an oxygen atom, and the different colors illustrate odd and even grid cell heights. The bonds between the oxygens represent the hydrogen bonds. This figure shows how water molecules are organised in a 4$\times$4$\times$4 grid, which means that only positions at the knots of the grid can be occupied. We consider the distance between two oxygens to be 2.75 \AA\ (\citealt{boutron1975}), which defines our grid cells with a size of 1.58 \AA\ (corresponding to the diagonal of the surface of a grid cell, $d=2.75\times \sin(\frac{109}{2})=2.24$ \AA, divided by $\sqrt{2}$). The surface and volume of the cell are described as tetrahedrons (grey lines in figure~\ref{grid}, left panel). Each tetrahedron of our grid has an edge length of $a=2\times d\approx4.5$ \AA, a surface of 4$\times$8.7 \AA$^2$ (each face has a surface of $\frac{\sqrt{3}}{4}\times a^2$) and a volume of 10.75 \AA$^3$ ($\frac{1}{6\sqrt{2}}\times a^3$). In our simulations, we consider three different grid sizes, with a base of 40$\times$40, 60$\times$60 and 100$\times$100 grid cells. We set the height of the grid at 150, so that we can study ices of different thicknesses.
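The quoted grid geometry can be checked numerically from the O-O distance of 2.75~\AA\ and the tetrahedral angle of $\sim$109$^\circ$; the small difference in the tetrahedron volume comes from rounding the edge length to 4.5~\AA.

```python
import math

# Numerical check of the grid geometry: O-O distance 2.75 A and the
# tetrahedral angle of ~109 degrees.
d_OO = 2.75
d = d_OO * math.sin(math.radians(109 / 2))  # surface diagonal of a grid cell
cell = d / math.sqrt(2)                     # grid-cell size
a = 2 * d                                   # tetrahedron edge length
face = math.sqrt(3) / 4 * a ** 2            # area of one tetrahedron face
volume = a ** 3 / (6 * math.sqrt(2))        # tetrahedron volume

print(round(d, 2), round(cell, 2), round(face, 1))  # -> 2.24 1.58 8.7
print(round(volume, 2))  # ~10.58 (10.75 when a is rounded up to 4.5 A)
```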
\subsection{Accretion}
Water molecules from the gas-phase arrive on the grid, and can be bound to water molecules already present in the grid through hydrogen bonding. The accretion rate (in s$^{-1}$) depends on the density of the species, their velocity and the cross section of the dust surface, and can be written as:
\begin{equation}
R_{acc} = n_{\rm{H_2O}} v_{\rm{H_2O}} \sigma S,
\end{equation}
where $v_{\rm{H_2O}} \sim 0.43 \sqrt{\frac{T_{gas}}{100}}$~km~s$^{-1}$ is the thermal velocity and $S$ the sticking coefficient, which we consider to be unity for the low temperatures taken as a starting point in this study. We also assume that every water molecule arriving on the substrate (grid not covered) or on another water molecule has a probability of 100$\%$ to stick. $\sigma$ is the cross section of the surface and directly scales with the size of the grid we use for the simulations as 1.58$\times\ n_{sites}^2$ \AA$^2$. $n_{\rm{H_2O}}$ is the density of water molecules in the gas in cm$^{-3}$; we use 10$^{10}$ cm$^{-3}$ in our computations to mimic experimental conditions. In the experiments, ice samples are grown by background deposition in the high-vacuum chamber with a rate of 0.4 nm s$^{-1}$. Note that for the deposition temperature considered in this study, the water molecules stick to each other and do not diffuse and re-organise; this implies that the density of water molecules in the gas does not affect the results of our simulations. Water molecules arrive at a random coordinate on the grid, but can occupy only pre-determined positions, as described in the previous subsection. As the water molecules arrive at a location of the grid, hydrogen bonds can be made with already bound water molecules. In this sense, we do not take into account chemical processes and we assume that the ice matrix is constructed upon accretion, rather than through oxygen allotrope hydrogenation (\citealt{dulieu2010}; \citealt{romanzin2011}; \citealt{oba2009}; \citealt{ioppolo2008}).
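A back-of-the-envelope evaluation of the accretion rate for the quoted conditions follows; the gas temperature of 100~K and a per-site area equal to the square of the 1.58~\AA\ cell size are our assumptions for this estimate.

```python
import math

# Accretion rate R_acc = n_H2O * v_H2O * sigma * S for laboratory-like
# conditions. T_gas = 100 K and a per-site area of (1.58 A)^2 are
# assumptions made for this estimate.
n_h2o = 1e10                                  # gas density [cm^-3]
v = 0.43e5 * math.sqrt(100 / 100)             # thermal velocity [cm s^-1]
n_sites = 60                                  # base of the 60x60 grid
sigma = (1.58e-8) ** 2 * n_sites ** 2         # cross section [cm^2]
S = 1.0                                       # sticking coefficient

R_acc = n_h2o * v * sigma * S
print("%.0f" % R_acc)  # a few hundred arrivals per second on the grid
```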
\subsection{Mobility of water molecules within the ice}
The binding energy of a water molecule increases with its number of neighbours and is estimated at 0.22 eV per hydrogen bond (\citealt{brill1967}; \citealt{isaacs1999}; \citealt{dartois2013}). In fact, the hydrogen bond strength depends on the number of water molecules involved as donors and acceptors in the neighborhood (\citealt{hus2012}). In this study we assume that the binding energies increase linearly with the number of neighbours, as also previously assumed by \cite{cuppen2007}, so that water molecules surrounded by one neighbour have a binding energy of 0.22 eV and water molecules surrounded by four neighbours have a binding energy of 0.88 eV. The water molecules present in the grid can move from one site to another as shown in figure~\ref{grid}, right panel. The water molecules located in odd vertical coordinates (yellow knots) can change their coordinates as (+1, -1, -1); (+1, +1, +1); (-1, +1, -1) and (-1, -1, +1), represented with blue arrows, while the molecules located at even vertical coordinates (blue knots) can move in the opposite directions (red arrows). According to their positions, the molecules can move in four different directions into unoccupied sites. The diffusion rate, in s$^{-1}$, for a water molecule to move from one site to another can be written as:
\begin{equation}
\alpha(nn) = \nu \exp\left[-0.4\times\frac{nn\ E_b}{T}\right],
\end{equation}
where $\nu$ is the vibrational frequency of a water molecule in its site (that we consider as 10$^{12}$ s$^{-1}$), $E_b$ is the energy of a single hydrogen bond, and $nn$ the number of neighbours. Here we consider the activation energy for diffusion to be 40$\%$ of the binding energy. This value is comparable to the value of 30$\%$ derived from experiments of water-on-water diffusion (\citealt{collings2003}) and the 50$\%$ used in \cite{cuppen2007}. This number is however very uncertain and can reach 90$\%$ (\citealt{barzel2007}).
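Evaluating the diffusion rate for $E_b = 0.22$~eV (converted to kelvin) illustrates the mobility hierarchy exploited later in the simulations: singly bound molecules become mobile well before doubly and triply bound ones.

```python
import math

# Diffusion rate alpha(nn) = nu * exp(-0.4 * nn * E_b / T), with the
# single H-bond energy E_b = 0.22 eV expressed in kelvin.
nu = 1e12                      # attempt frequency [s^-1]
E_b = 0.22 * 11604.5           # 0.22 eV in kelvin, ~2553 K

def alpha(nn, T):
    return nu * math.exp(-0.4 * nn * E_b / T)

for T in (10, 60, 100):
    print(T, ["%.1e" % alpha(nn, T) for nn in (1, 2, 3)])
# At 10 K nothing moves; at 60 K singly bound molecules hop ~1e4 times
# per second; doubly bound molecules only become mobile near 100 K.
```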
\subsection{Evaporation}
The species present at the top layers of the ice can return to the gas-phase through thermal evaporation. The evaporation rate depends on the binding energy of the water molecules and is therefore directly dependent on the number of neighbours $nn$. The evaporation rate (in s$^{-1}$) of a species with $nn$ neighbours can therefore be written:
\begin{equation}
R_{evap} = \nu \exp\left(-\frac{nn\ E_b}{T}\right).
\end{equation}
We do allow evaporation of water molecules \emph{in} the ice, and consider that these molecules can be re-adsorbed at another location of the ice or be released into the gas-phase. The molecules desorbing from the ice can travel through the pores to meet another adsorbed water molecule (on the other side of the pore) and re-create an H-bond. Note that water molecules at the top layer of the ice have fewer neighbours and therefore have a higher probability to thermally evaporate than water molecules in the bulk (equation 3 with $nn$ smaller for molecules on top layers).
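The same Arrhenius form fixes the temperature at which a molecule with $nn$ neighbours desorbs on a $\sim$1~s timescale, namely $T \approx nn\,E_b/\ln\nu$, which illustrates why weakly bound surface molecules leave first.

```python
import math

# Evaporation rate R_evap = nu * exp(-nn * E_b / T); setting R_evap ~ 1 s^-1
# gives the desorption temperature scale T_des = nn * E_b / ln(nu).
nu = 1e12
E_b = 0.22 * 11604.5           # 0.22 eV in kelvin

def evap(nn, T):
    return nu * math.exp(-nn * E_b / T)

for nn in (1, 2, 3, 4):
    T_des = nn * E_b / math.log(nu)
    print(nn, round(T_des))    # roughly 92 K per H-bond
```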
\section{Simulations}
In this section we describe several simulations to study the evolution of water ices with increasing temperature. Our calculations consider different grid sizes with different thicknesses. We follow the position of each molecule as well as the number of H-bonds with its neighbours. Therefore, molecules can have one to four neighbours and their total binding energies increase accordingly. Since we are interested in the porosity of the ices, we quantify the size of the pores as well as the total cross section. To do so, we tag every empty grid space with a number that corresponds to the number of grid cells around this cell which are empty. In our simulations, these tags can range from 0 (an empty spot with at least one water molecule as neighbour) to 6 (an empty spot with no water molecule present at a distance of six grid cells around, which means that the hole is the center of a sphere with a six grid cell radius). This provides a direct measurement of the total pore volume, the total surface area, but also the individual pore volume.
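The tagging scheme can be sketched as follows; the toy occupancy grid and the use of the Chebyshev metric for the `radius' are simplifying assumptions made here, not details taken from the simulations.

```python
import numpy as np

# Sketch of the pore-tagging scheme: each empty cell gets the largest
# integer tag t such that no molecule lies within distance t+1 (Chebyshev
# metric, a simplification). Toy occupancy grid for illustration.
occ = np.zeros((8, 8, 8), bool)
occ[0, :, :] = True               # a filled bottom layer of molecules
occ[7, 4, 4] = True               # one isolated molecule near the top

def tag_pores(grid):
    filled = np.argwhere(grid)
    tags = np.full(grid.shape, -1)          # filled cells keep tag -1
    for idx in np.argwhere(~grid):
        d = np.abs(filled - idx).max(axis=1).min()
        tags[tuple(idx)] = d - 1            # 0 = empty cell with a neighbour
    return tags

t = tag_pores(occ)
print(int(t.max()))   # radius (in cells) of the largest empty sphere, here 3
```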
\subsection{Ice and pores thermal evolution}
We have performed several simulations on different grids with sizes of 40$\times$40, 60$\times$60 and 100$\times$100. The first motivation of this work is to reproduce the thickness decrease of porous ASW upon thermal annealing. We therefore consider temperatures during deposition (10~K) and heating rates (2~K/min) identical to the values used in the laboratory studies. The water ices are deposited at 10~K and are subsequently warmed up at a rate of 2~K per minute until a temperature of 120~K is reached.
Since we assume that the binding energy of each water molecule is proportional to the number of H-bonded neighbours, water molecules with only one neighbour are the most mobile and can break and re-form bonds to arrive in more stable configurations (more neighbours). As the temperature increases, molecules with two and then three neighbours also become mobile. This re-arrangement of molecules is monitored as a function of temperature. In figure~\ref{move}, we show the number of vertical grid cells that water molecules visit during warming of the ice. Molecules scout a few grid cells (two to five) at temperatures up to 60~K, whereas they can travel across many more (6--20) at 100~K. Therefore, as the temperature increases, the water molecules travel much larger distances, which allows them to rearrange into more stable configurations. In figure~\ref{grid1} we show the 3D arrangement of water molecules in the grid of size 60$\times$60. The left panel presents the ice at 10~K. The water molecules are mostly bound to one or two neighbours, and these molecules fill up the grid homogeneously. As the temperature increases, the molecules re-arrange in order to reach more stable configurations. Most of the molecules have two to three neighbours at 60~K (middle panel). At 100~K, as shown in the right panel, the water molecules in the ice are very strongly bound, the majority having three to four neighbours. The ice is not as homogeneous as at low temperatures, and large empty spaces appear which represent large pores. The pores and their evolution as a function of temperature are presented in figure~\ref{grid2}, which is a negative of figure~\ref{grid1}. It shows the distribution of pores as the temperature increases, and illustrates their growth from 10~K (left panel) to 60~K (middle panel) and 100~K (right panel).
In these figures, we represent the pores with tags from 2 (no water molecules within a radius of two grid cells) to 8 (no water molecules within a radius of eight grid cells). The colour shows the tag associated with each empty cell. At low temperatures, the ice is filled with small holes interspersed with water molecules. As the temperature increases, the water molecules re-organise into more stable configurations, which leads to the creation of larger empty spaces, the pores. These pores become larger and can reach radii of a few \AA\ at 60~K. At higher temperatures still, the pores grow and connect to each other, reaching radii of $\sim$15~\AA. Note that these radii correspond to empty spheres present in the ices (which is how we defined pore sizes), while pores can extend over larger empty volumes, as represented in figure~\ref{grid2}. Therefore, our values represent lower limits for the volume of the pores. However, the sizes we computed for individual pores are in good agreement with the diameters $\le$20~\AA\ found in the literature (\citealt{raut2007}).
In the left panel of figure~\ref{nnpores}, this finding is summarised quantitatively: we report the evolution of the number of neighbours of the water molecules as a function of temperature. As mentioned above, at low temperatures most water molecules have two neighbours. As the temperature increases, all molecules with two neighbours or fewer diffuse in the ice until they have at least three bonds. At temperatures higher than 90~K, most molecules have four neighbours and are very strongly bound to each other. The corresponding evolution of the pores in the ices is reported in the right panel of figure~\ref{nnpores}. While the ice is dominated by small holes (pores with radii of 1 grid cell or less, i.e. $\le$4~\AA) at low temperatures, the re-organisation of the ice implies that these holes become connected and form bigger entities. The size of the pores (radii $\ge$2) grows with temperature, and the ice becomes dominated by larger empty spaces.
\subsection{Pore volume and surface area}
The tags on the empty cells can be directly related to the volume of the pores. The cross section of the pores is estimated by adding the faces of the tetrahedrons located between a pore and a water molecule. To do so, we select the empty cells which have water neighbours (tag = 0) and check how many water molecules surround each such empty cell. Each time a water molecule is next to an empty space in our grid, one tetrahedron face is added. Therefore, an empty space surrounded by four water molecules adds four tetrahedron faces to the total surface. To calculate the total surface formed by the pores, we add all the faces at the interface between empty and filled cells. The total volume of the pores is calculated by adding the volumes of the empty tetrahedrons. The total surface and the porosity (volume of the pores divided by total volume) are reported in figure~\ref{vol} for different grid sizes. While the relative volume of the pores does not change drastically during warm-up, the total surface decreases by a factor of 3.5 between 10 and 120~K, reflecting the increase in average pore size. This is consistent with a recent study by \cite{mitterdorfer2014}, who show, using small-angle neutron scattering of ASW ice deposited at 77~K, that the specific surface area decreases between 90 and 100~K. The pores present in our simulations reach these sizes at $\sim$100~K. The volumes and surfaces of the pores calculated with different grid sizes show similar behaviour as the temperature of the ice is increased. In this study, we consider the barrier for diffusion to be 40$\%$ of the binding energy. This barrier is uncertain, as mentioned before, and changing it would shift the onset of the decrease in surface area: to lower temperatures for a lower diffusion barrier, or to higher temperatures for a higher one.
The change in this barrier would therefore change the temperature at which the re-arrangement of water molecules occurs, and pores coalesce.
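A minimal cubic-grid sketch of this face-counting procedure (the model itself adds tetrahedron faces; here each empty/filled cell interface contributes one unit of area, and each empty cell one unit of volume):

```python
import numpy as np

# the six face-neighbour offsets on a cubic grid
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def pore_surface_and_volume(occ):
    """Count empty/filled interface faces (a proxy for the pore surface
    area) and empty cells (a proxy for the pore volume) in a boolean
    occupancy grid.  Cubic-grid sketch of the tetrahedron-face counting
    described in the text."""
    nx, ny, nz = occ.shape
    faces = 0
    for x, y, z in zip(*np.where(~occ)):
        for dx, dy, dz in FACES:
            i, j, k = x + dx, y + dy, z + dz
            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz and occ[i, j, k]:
                faces += 1
    return faces, int((~occ).sum())
```

A single molecule in the middle of an otherwise empty 3$^3$ grid, for example, contributes 6 interface faces and leaves 26 empty cells.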
\subsection{Comparison with experimental results}
In previous experimental studies (\citealt{isokoski2014}; \citealt{bossa2014}; \citealt{bossa2012}), several thousand monolayers of water were deposited on a 2.5 $\times$ 2.5 cm slab. A clear finding is that the ice gets thinner at higher temperatures; this cannot be reproduced by the present simulations, where the thermal decrease of thickness seems to stop for ices thicker than several tens of monolayers. This is likely an artifact caused by the small grid sizes that necessarily have to be used in the simulation. Whereas the large experimental surface allows the molecules to diffuse along many different routes to reach a more stable configuration, this is not the case in the simulated ices, simply because the simulation boxes are too limited to allow a re-allocation of the ice on a large scale. Already for the deposition of 60~ML, as shown in figure~\ref{nnpores}, the compaction is not well reproduced in the simulation; the ice clearly does not get thinner. Instead, the water molecules, by re-arranging and becoming more strongly bound, create pillar-like structures blocking the diffusion of species within the box. On a macroscopic scale, therefore, the present work should not be regarded as an attempt to reproduce the experimental results; for this, larger boxes are needed, which are currently out of reach in terms of computer time. However, at the level of individual molecules, our simulation boxes allow us to follow the evolution of porosity in ASW ices. The results show that pores do remain within the ices upon warming, which is consistent with the findings of \cite{isokoski2014}. The density of the ices in our simulations varies from 0.57 to 0.63 g cm$^{-3}$ during deposition, while a value of 0.61 g cm$^{-3}$ has been found experimentally. The main conclusion is that the thermal evolution of porous ices involves pore growth rather than pore eradication, a process not considered so far and fully consistent with the experimental findings.
The porosity of the ice can be obtained by comparing the average density of the ice $\rho$ to the density of compact ice ($\rho_c$=0.94 g cm$^{-3}$, \citealt{dohnalek2003}) as $\phi$=1-$\frac{\rho}{\rho_c}$. The porosity of the ices derived experimentally is $\sim$32$\%$. From our simulations, we derive a porosity of $\sim$30--40$\%$, as shown in figure~\ref{vol}. Our simulations are therefore in good agreement with the density and porosity derived experimentally. \cite{raut2007} showed that for ice deposited at 30--40~K, the pores are very small (micropores) and narrow (less than three molecular diameters wide). As the temperature increases, the decline of microporosity has been attributed to pore clustering (until $\sim$90~K) and, at higher temperatures (between 90 and 130~K), to pore collapse (\citealt{wu2010}). Our simulations show that microporosity decreases due to pore clustering; however, we could not simulate the pore collapse found at high temperatures. In the present study, we do not model the effect of the deposition angle on the porosity of the ice, as investigated by \cite{Kimmel2001} and \cite{dohnalek2003}. Initial density and porosity are indeed strongly dependent on the deposition angle, as deposition angles above 40 degrees create a filamentary structure within the ice. We do not observe a real pore-connection network after deposition, mainly because there is no preferential orientation of the impinging water molecules. However, our model reproduces background deposition well, since we obtain an initial density and porosity close to the experimental results that started from low-temperature background deposition.
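Plugging the simulated deposition densities into $\phi=1-\rho/\rho_c$ directly reproduces the quoted range:

```python
RHO_COMPACT = 0.94  # density of compact ice, g cm^-3 (dohnalek2003)

def porosity(rho, rho_compact=RHO_COMPACT):
    """Porosity phi = 1 - rho / rho_c."""
    return 1.0 - rho / rho_compact

# densities found during deposition in the simulations (0.57-0.63 g cm^-3)
for rho in (0.57, 0.61, 0.63):
    print(f"rho = {rho:.2f} g/cm^3 -> porosity = {100 * porosity(rho):.0f}%")
# prints 39%, 35% and 33%: within the ~30-40% range quoted above
```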
\section{Discussion and Astronomical implications}
In this study we confirm that the porosity of interstellar ices is not fully reflected by detections of OH dangling bonds. The surface area of the pores in ASW ices decreases with increasing temperature, which implies a decrease of the OH dangling bonds, while the total volume of the pores does not change significantly. We therefore conclude that a measurement of the OH dangling bonds does not give an unambiguous measure of the porosity of interstellar ices, and we find that pores remain in the ice upon warming. This has already been discussed by \cite{isokoski2014} and \cite{raut2007}, who showed experimentally that some porosity remains at high temperatures. Depending on the growth temperature, the residual porosity after annealing at 120~K is still around 20$\%$.
In dense molecular clouds, temperatures are around 10--14~K. In this case, the re-structuring of the ices would take much longer than 10$^7$ years, since it takes around t$\sim$$(\nu \exp{\frac{-0.4\times Ea}{T_{dust}}})^{-1}$$\sim$10$^{11}$ years for a water molecule with one neighbour to break and re-form bonds at 14~K. Therefore, the evolution of interstellar ices through diffusion should not be considerable in molecular clouds. Once a star is born and heats up its environment, however, the dust grains become warmer and water molecules in the ice diffuse faster. We modelled the evolution of the porosity in star-forming environments by using a heating schema similar to that of \cite{garrod2006}. In this schema, the temperature of gas and dust, initially at 10~K, reaches 200~K after 5$\times$10$^4$, 2$\times$10$^5$ and 10$^6$ years, to mimic the thermal evolution of environments surrounding high-mass, intermediate-mass and low-mass star formation, respectively. The temperature follows time t as T=10+kt$^2$. In this study, we used the thermal evolution of a massive star, reaching 200~K in 5$\times$10$^4$ years. In figure~\ref{ism}, we show the rough evolution of the volume and surface of the pores in an environment heated up by a newly formed star. As the temperature reaches 30~K, which corresponds to $\sim$8$\times$10$^3$ years, the porosity of the ice begins to change. The re-structuring continues until temperatures of 50~K are reached, corresponding to a time scale of 10$^4$ years. On these time scales, with increasing temperature, pores connect to each other to form bigger entities. The chemical species trapped in these pores can meet only once a certain temperature is reached. In this sense, ices could lock up many chemical species over long time scales and let them meet when conditions for reactions between these species are more favourable.
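A minimal sketch of this quadratic heating schema; the normalisation below simply forces the temperature to reach 200~K at the chosen heating time, and the paper's exact parametrisation may differ:

```python
import math

def dust_temperature(t, t_heat, T0=10.0, Tmax=200.0):
    """Quadratic heating schema T = T0 + k t^2 (garrod2006-style), with k
    fixed so that T reaches Tmax at t = t_heat (e.g. 5e4 yr for a massive
    protostar).  Times in years, temperatures in K."""
    k = (Tmax - T0) / t_heat**2
    return T0 + k * t**2

def time_to_reach(T, t_heat, T0=10.0, Tmax=200.0):
    """Invert the schema: time (yr) at which temperature T is reached."""
    k = (Tmax - T0) / t_heat**2
    return math.sqrt((T - T0) / k)
```

For the massive-star case (t_heat = 5$\times$10$^4$ yr), the functions are exact inverses of each other, so the time at which any intermediate temperature is reached follows directly from the single parameter k.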
In this study, we used a heating schema from \cite{garrod2006} to mimic the temperature of the dust around a massive protostar. However, recent studies suggest that the heating time scale could be much longer than the one considered here (\citealt{esplugues2014}). In this case, the decrease of surface area would occur at even lower temperatures than shown in figure~\ref{ism}. Also, one should note that the total surface area decreases by a factor of 2 between 20 and 50~K in the ISM, while this decrease occurs between 28 and 90~K under laboratory conditions. These differences are due to the different time scales between the ISM and laboratory settings.
While we show that in star-forming environments the pores present in interstellar ices can connect and form bigger pores, several previous studies have shown that pores may disappear during the molecular cloud phase because of radiation (\citealt{palumbo2006} for cosmic rays). The corresponding time scale for ice compaction by a distribution of cosmic rays was extrapolated and shown to be shorter (a few million years) than the estimated lifetime of dense clouds, above a few 10$^7$ years.
We show here that the evolution of interstellar ices as temperature increases is associated with the growth of pores instead of their eradication. Therefore if pores do survive the molecular cloud phase, they will evolve during warming up, grow and connect to form larger entities. This process could be of relevance for the formation of complex species as illustrated in figure~\ref{sketch}. This provides a new pathway for reactions taking place within the bulk of interstellar ices, and may boost the solid state formation of new molecules in space.
\begin{acknowledgements}
S. Cazaux is supported by the Netherlands Organization for Scientific Research (NWO). Part of this work was supported by NOVA, the Netherlands Research School for Astronomy, a Vici grant from the Netherlands Organisation for Scientific Research (NWO), and the European Community 7th Framework Programme (FP7/2007-2013) under grant agreement n.238258. Support for J.-B. Bossa from the Marie Curie Intra-European Fellowship (FP7-PEOPLE- 2011-IEF-299258) is gratefully acknowledged. S.Cazaux would like to thank Dr. Thomas la Cour Jansen for very fruitful discussions. We would like to thank the referee for providing constructive comments which considerably improved our manuscript.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
String theory has been an invaluable tool to study the properties of quantum field theories in various dimensions. In particular, it predicts the existence of the so-called $(2,0)$ conformal field theory in six dimensions; this CFT has attracted great interest in recent years (\cite{Gaiotto:2009we,Gaiotto:2009hg,Witten:2009at}). Most importantly, it has given new insights into lower-dimensional supersymmetric gauge theories, for example in four dimensions (see \cite{Chacaltana:2012zy,Balasubramanian:2014dia}). The $(2,0)$ CFT is labeled by an $ADE$ Lie algebra $\mathfrak{g}$, and arises in string theory after taking a double limit. First, one sends the string coupling to zero in type IIB string theory on an $ADE$ surface $X$ to decouple the bulk modes; this gives a six-dimensional string theory, called the $(2,0)$ little string theory. Second, one takes the string mass $m_s$ to infinity, while keeping the moduli of the $(2,0)$ string fixed.
After taking these limits, there is no scale left in the theory, and we are left with the $(2,0)$ CFT. The lack of a good mathematical definition of the CFT, however, is a limitation in the predictions we can make about gauge theories. Instead, it proves fruitful to keep $m_s$ finite and study the $(2,0)$ little string. In \cite{Aganagic:2015cta}, the $(2,0)$ little string was studied on a Riemann surface $\mathcal{C}$ with defects. Specifically, the $(2,0)$ little string is placed on $\mathcal{C}$, and codimension-two defects are introduced as D5 branes at points on $\mathcal{C}$ and wrapping non-compact 2-cycles of the $ADE$ surface $X$.
In a different context, Gukov and Witten analyze the surface defects of 4d $\mathcal{N}=4$ SYM from a gauge theory perspective, by studying the singular behavior of the gauge and Higgs fields in SYM near the defect \cite{Gukov:2006jk}. In this paper, we explain the origin of these defects using the $(2,0)$ little string on $\mathcal{C}$, compactified on an additional torus $T^2$. At energies below the Kaluza--Klein scale of compactification and the string scale, this becomes the 4d $\mathcal{N}=4$ SYM theory. The defects come from D5 branes wrapping the $T^2$, or equivalently, from D3 branes at points on $T^2$. In particular, the S-duality of 4d SYM theory with defects is realized in little string theory as T-duality on the torus; the case without defects was studied already some time ago in \cite{Vafa:1997mh}. In the CFT limit, the resolution \cite{Springer:1969,*Steinberg:1976,*Fu:2003} of the (singular) Coulomb branch of the theory on the D3 branes is in fact described by the cotangent bundle $T^*(G/\mathcal{P})$, where $\mathcal{P}$ is a parabolic subgroup of the gauge group $G$. This space already appeared in \cite{Gukov:2006jk} as an alternate way to describe surface defects. It comes about as a moduli space of solutions to Hitchin's equations, which are precisely the equations obeyed by the brane defects in the low energy limit. We will see here that the space $T^*(G/\mathcal{P})$ arises from the geometry; indeed, a given set of D3 branes wrapping 2-cycles of $X$ will determine a unique parabolic subalgebra of $\mathfrak{g}$ at low energies.
Everywhere except at the origin of the moduli space, which is singular, the $(2,0)$ little string theory with the D5 brane defects is in effect described by the theory on the branes themselves. Specifically, it has a description at low energy as a 4d $\mathcal{N}=2$ superconformal quiver gauge theory of Dynkin shape.
In particular, the gauge theory description of the so-called $T_N$ theory (when viewed as a 5d $\mathcal{N}=1$ theory), or full puncture, was derived for any simply-laced Lie algebra $\mathfrak{g}=A, D, E$ in \cite{Aganagic:2015cta} (see also \cite{Bergman:2014kza,Hayashi:2014hfa} for the $A_n$ case). In this paper, we give the full classification of punctures of the $(2,0)$ little string theory on $\mathcal{C}$. Each ``class'' of defects will be given by a collection of certain weights of $\mathfrak{g}$, from which one can read off a superconformal quiver gauge theory. Taking the string mass $m_s$ to infinity, we generically lose the low energy quiver gauge theory description of the defects. Indeed, if $\tau$ is the D-brane gauge coupling, it must go to zero, as the combination $\tau m_s^2$ is a modulus of the $(2,0)$ theory, kept fixed in the limit. Nonetheless, we obtain the full list of punctures given in the literature in terms of nilpotent orbits \cite{Chacaltana:2012zy} (see also \cite{Tachikawa:2009rb} for an M-theory approach in the specific case of $D_n$).
Finally, the AGT correspondence \cite{Alday:2009aq} relates 4d $\mathcal{N}=2$ theories compactified on a Riemann surface to 2d Toda conformal field theory on the surface. In the little string setup, the precise statement is that the partition function of the $(2,0)$ little string on $\cal{C}$ with brane defects is in fact equal to a $q$-deformation of the Toda CFT conformal block on $\cal{C}$. The vertex operators are determined by positions and types of defects. So in particular, the codimension-two defects of the 6d $(2,0)$ CFT are expected to be classified from the perspective of the 2d Toda theory. This can be done by studying the Seiberg-Witten curve of the 4d $\mathcal{N}=2$ quiver gauge theory on the D5 branes, or equivalently, after $T^2$ compactification, the curve of the 2d $\mathcal{N}=(4,4)$ theory on the D3 branes. At the root of the Higgs branch, and in the $m_s$ to infinity limit, the curve develops poles at the puncture locations. The residues at each pole obey relations which describe the level 1 null states of the Toda CFT; this was previously studied in the $A_n$ case in \cite{Kanno:2009ga}. We argue that this characterization of defects as null states of the CFT naturally gives the same parabolic subalgebra classification obtained in this note, for $\mathfrak{g}=A, D, E$.\\
The paper is organized as follows. In section 2, we review the description of surface defects of $\mathcal{N}=4$ SYM given in \cite{Gukov:2006jk}, and give its $(2,0)$ little string theory origin. We further derive the action of S-duality from T-duality on $T^2$. In section 3, we explain how to extract a parabolic subalgebra and characterize the sigma model $T^*(G/\mathcal{P})$ from the D3 brane defect data of the little string. In section 4, we make contact with the nilpotent orbit classification of defects given in the literature \cite{Chacaltana:2012zy}. In section 5, we explain how the parabolic subalgebras determined in section 3 can also be recovered from null states of the $\mathfrak{g}$-type Toda CFT, and how this is related to the nilpotent orbit classification. In section 6, we explain the differences between the defects of the little string proper, and its $(2,0)$ CFT limit. In order to give the exhaustive list of defects of the little string, we will need to extend our definition of defects to characterize punctures on $\mathcal{C}$ that do not specify a definite parabolic subalgebra. In section 7, we provide a plethora of detailed examples for $\mathfrak{g}=A,D,E,$ and illustrate all the statements made in the rest of the paper.
\newpage
\section{SYM Surface Defects From Little String Theory}
\label{sec:review}
In this section, we begin by recalling the description of two-dimensional surface defects in 4d $\mathcal{N}=4$ SYM given by Gukov and Witten in \cite{Gukov:2006jk}. We then review the analysis of little string theory on a Riemann surface \cite{Aganagic:2015cta}, use it to describe these surface defects and derive their S-duality transformation.
\subsection{Gukov--Witten Surface Defects of $\mathcal{N}=4$ SYM}
Surface defects of $\mathcal{N}=4$ SYM are $\tfrac{1}{2}$-BPS operators; to describe them, one starts with a four-dimensional manifold $M$, which is locally $M=D\times D'$, where $D$ is two-dimensional and $D'$ is the fiber of the normal bundle to $D$. Surface defects are then codimension-two objects living on $D$ and located at a point on $D'$; they are introduced by specifying the singular behavior of the gauge field near the defect. A surface operator naturally breaks the gauge group $G$ to a subgroup $\mathbb{L}\subset G$, called a Levi subgroup.\\
The story so far is in fact valid for $\mathcal{N}=2$ SUSY, but $\mathcal{N}=4$ SUSY has additional parameters $\vec{\beta}$ and $\vec{\gamma}$, which describe the singular behavior of the Higgs field $\phi$ near the surface operator; choosing $D'=\mathbb{C}$ with coordinate $z=r e^{i\theta}=x_2+i x_3$, we have:
\begin{align}\label{GWfields}
A &=\vec\alpha d\theta+\ldots,\\
\phi &= \frac{1}{2}\left(\vec{\beta}+i\vec{\gamma}\right)\frac{dz}{z}+\ldots\label{eq:betagamma},
\end{align}
which solve the Hitchin equations \cite{Hitchin:1986vp}:
\begin{align}\label{Hitch}
F &=[\phi,\overline{\phi}],\\
\overline{D}_z\phi &=0=D_z\overline{\phi}.
\end{align}
As written above, we have chosen a complex structure in which the moduli depend holomorphically on $\beta+i\gamma$, while the K\"ahler structure depends on $\alpha$. Quantum mechanically, one must also consider the Theta angle, denoted by $\eta$; by supersymmetry, it complexifies the K\"ahler parameter $\alpha$.\\
S-duality is the statement that this theory is equivalent to $\mathcal{N}=4$ gauge theory with a dual gauge group and coupling constant
\[
g'_{4d}=1/g_{4d}.
\]
The action of S-duality on the surface defect parameters is a rescaling of the Higgs field residue
\begin{equation}
\label{eq:bgsduality}
(\beta,\gamma)\rightarrow\left(\frac{4\pi}{g^2_{4d}}\right)(\beta,\gamma),
\end{equation}
and an exchange of the gauge field and Theta angle parameters \cite{Gukov:2006jk}
\begin{equation}
\label{eq:aesduality}
(\alpha,\eta)\rightarrow(\eta,-\alpha).
\end{equation}
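As a quick consistency check (illustrative, not from the source), the exchange $(\alpha,\eta)\rightarrow(\eta,-\alpha)$ squares to $(\alpha,\eta)\rightarrow(-\alpha,-\eta)$ and generates a $\mathbb{Z}_4$ action on the pair:

```python
def s_duality(alpha, eta):
    """Gukov-Witten S-duality action on the (alpha, eta) parameters."""
    return eta, -alpha

# S^2 acts as (alpha, eta) -> (-alpha, -eta), and S^4 is the identity.
pair = (0.3, 0.7)
once = s_duality(*pair)
twice = s_duality(*once)
assert once == (0.7, -0.3)
assert twice == (-0.3, -0.7)
assert s_duality(*s_duality(*twice)) == pair
```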
The analysis of \cite{Gukov:2006jk} gives a second description of the surface operators of $\mathcal{N}=4$ SYM, which will be of great relevance to us; one couples the 4d theory to a 2d non-linear sigma model on $D$. In the $\mathcal{N}=4$ case, the 2d theory is a sigma model to $T^*(G/\mathcal{P})$, where $\mathcal{P}\subset G$ is a parabolic subgroup of the gauge group. The quotient describes a partial flag manifold when the Lie algebra $\mathfrak{g}$ is $A_n$. In the case of a general Lie algebra, the quotient is a generalized flag variety. This target space is in fact the moduli space of solutions to the Hitchin equations \eqref{Hitch}.\\
Then, to describe a surface operator, one can either specify the parameters $(\beta,\gamma,\alpha)$ for the singular Higgs and gauge fields, or spell out the sigma model $T^*(G/\mathcal{P})$. It turns out that both of these descriptions have an origin in string theory, and we will now show this explicitly; our starting point will be the $(2,0)$ little string theory.
\subsection{Little String on a Riemann surface and D5 Branes}
We first review some basic facts about the little string (\cite{Seiberg:1997zk,Witten:1995zh,Losev:1997hx}; see \cite{Aharony:1999ks} for a review), and discuss the role of D5 branes in the theory.
\vskip 0.5cm
{\it --(2,0) $ADE$ Little String Theory--}
\vskip 0.5cm
The $ADE$ little string theory with $(2,0)$ supersymmetry is a six-dimensional string theory, and therefore has 16 supercharges. It is obtained by sending the string coupling $g_s$ to zero in type IIB string theory on an $ADE$ surface $X$; this has the effect of decoupling the bulk modes of the full type IIB string theory. $X$ is a hyperk\"ahler manifold, obtained by resolving a ${\mathbb C}^2/\Gamma$ singularity, where $\Gamma$ is a discrete subgroup of $SU(2)$ related to $\mathfrak{g}$ by the McKay correspondence \cite{Reid:1997zy}. The little string is not a local QFT, as the strings have a tension $m_s^2$. The $(2,0)$ little string reduces to the $(2,0)$ 6d conformal field theory at energies well below the string scale $m_s$. The moduli space of the little string is $\left(\mathbb{R}^4\times S^1\right)^{\mbox{rk}(\mathfrak{g})}/W$, with $W$ the Weyl group of $\mathfrak{g}$. The scalars parametrizing this moduli space come from the periods of the NS B-field $m_s^2/g_s\int_{S_a}B_{NS}$, the RR B-field $m_s^2\int_{S_a}B_{RR}$, and a triplet of self-dual two-forms obtained from deformations of the metric on $X$, $m_s^4/g_s\int_{S_a}\omega_{I,J,K}$. Here, the $S_a$ are two-cycles generating the homology group $H_2(X,\mathbb{Z})$. The $(S^1)^{\mbox{rk}(\mathfrak{g})}$ factors have radius $m_s^2$ and are parametrized by the periods of $B_{RR}$. These periods are kept fixed as $g_s$ is sent to zero. We set for all $a$
\begin{equation}\label{FI}
\int_{S_a} \omega_{J,K} =0,\; \int_{S_a} B_{NS}=0,
\end{equation}
and let
\begin{equation}\label{taua}
\tau_a = \int_{S_a} \, ( m_s^2 \,\omega_I/g_s + i \, B_{RR} )
\end{equation}
be arbitrary complex numbers with ${\rm Re}(\tau_a)>0$.\\
We start by compactifying the $(2,0)$ little string theory on a fixed Riemann surface $\cal{C}$, which is chosen to have a flat metric. This guarantees $X\times \cal{C}$ to be a solution of type IIB string theory. We want to introduce codimension-two defects in the little string, at points on $\cal{C}$ and filling the four remaining directions $\mathbb{C}^2$. These correspond to D5 branes in IIB string theory, wrapping non-compact 2-cycles in $X$ and $\mathbb{C}^2$ \cite{Aganagic:2015cta}. Their tension remains finite in the little string limit, so they are the correct objects to study (D3 branes also keep finite tension, but they do not describe the codimension-two defects we are after; other objects of type IIB either decouple or get infinite tension when $g_s\rightarrow 0$).\\
In \cite{Aganagic:2015cta}, it is argued that the dynamics of the $(2,0)$ little string theory on ${\cal C}\times {\mathbb C}^2$, with an arbitrary collection of D5 brane defects at points on $\cal C$, is captured by the theory on the branes themselves.
Because the Riemann surface $\cal{C}$, which is transverse to the D5 branes, has a flat metric, the theory on the D5 branes is four dimensional at low energies. In fact, it has 4d $\mathcal{N}=2$ super Poincare invariance, since the D5 branes break half the supersymmetry. We will focus specifically on the class of D5 branes that retain some conformal invariance in the resulting low energy 4d theory. This corresponds to a very specific choice of non-compact 2 cycles of $X$ wrapped by the D5 branes, which we review here.
\vskip 0.5cm
{\it --D5 Branes and $ADE$ quiver gauge theories--}
\vskip 0.5cm
For definiteness, we will choose the Riemann surface $\mathcal{C}$ to be the complex plane in what follows (one could equally choose to work on the cylinder as in \cite{Aganagic:2015cta}, or on the torus.)
The four-dimensional theory on the D5 branes is a quiver gauge theory, of shape the Dynkin diagram of $\mathfrak{g}$ \cite{Douglas:1996sw}.
The 4d gauge couplings are the $\tau_a$ defined in equation \eqref{taua}, which are the moduli of the $(2,0)$ theory in 6d. The masses of fundamental hypermultiplets are the positions of the D5 branes on $\cal C$ wrapping non-compact two-cycles of $X$. Finally, the Coulomb moduli are the positions of the D5 branes on $\cal C$ wrapping compact two-cycles of $X$.\\
In order to specify a defect D5 brane charge, we pick a class $[S^*]$ corresponding to non-compact two-cycles in the relative homology $H_2(X, \partial X; {\mathbb Z}) = \Lambda_*$, which we identify with the (co-)weight lattice of $\mathfrak{g}$:
\begin{equation}\label{ncomp}
[S^*] = -\sum_{a=1}^n \, m_a \, w_a \;\; \in \Lambda_*
\end{equation}
with non-negative integers $m_a$ and fundamental weights $w_a$. A necessary condition for conformal invariance in 4d is that the net D5 brane flux vanishes at infinity. This constrains the form of the coefficients $m_a$. To satisfy the condition, we add D5 branes that wrap a compact homology class $[S]$ in $H_2(X, {\mathbb Z})=\Lambda$, which we identify with the root lattice of $\mathfrak{g}$:
\begin{equation}\label{comp}
[S] = \sum_{a=1}^n \,d_a\,\alpha_a\;\; \in \Lambda
\end{equation}
with non-negative integers $d_a$ and the simple roots $\alpha_a$,
such that
\begin{equation}\label{conf}
[S+S^*] =0.
\end{equation}
The vanishing of $S+S^*$ in homology is equivalent to the vanishing of $\# (S_a \cap (S+S^*))$ for all $a$. We can therefore rewrite \eqref{conf} as
\begin{equation}\label{conformal}
\sum_{b=1}^n C_{ab} \;d_b = m_a
\end{equation}
where $C_{ab}$ is the Cartan matrix of $\mathfrak{g}$.
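For instance, for $\mathfrak{g}=A_3$ and a hypothetical choice of compact brane multiplicities $d_a$ (chosen here purely for illustration), the non-compact charges $m_a$ follow directly from the Cartan matrix:

```python
import numpy as np

# Cartan matrix of A_3
C = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  2]])

# illustrative compact brane multiplicities d_a
d = np.array([2, 3, 2])

# the implied non-compact charges m_a = sum_b C_ab d_b
m = C @ d
print(m)  # [1 2 1]
```

Non-negative $m_a$ of this kind correspond to a configuration with one D5 brane flavor on the outer nodes and two on the middle node.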
On the Higgs branch of the low energy gauge theory, the gauge group $\prod_{a=1}^n U(d_a)$ is broken to its $U(1)$ centers, one for each node. There, the D5 branes wrapping the compact cycles $S$ and the non-compact cycles $S^*$ recombine to form D5 branes wrapping a collection of non-compact cycles $S_i^*$, whose homology classes are elements $\omega_i$ of the weight lattice $\Lambda_* = H_2(X, \partial X; {\mathbb Z})$:
\begin{equation}\label{weightsfr}
\omega_i = [S_i^*] \qquad \in\Lambda_*.
\end{equation}
It is these weights $\omega_i$ that will classify the defects of the little string.
Each of the $\omega_i$'s comes from one of the non-compact D5 branes on $S^*$.
For the branes to bind, the positions on $\mathcal{C}$ of the compact branes must coincide with the positions of one of the non-compact D5 branes. Recall that the positions of non-compact D5 branes are mass parameters of the quiver gauge theory, while the positions of compact D5 branes on ${\cal C}$ are Coulomb moduli; when a Coulomb modulus coincides with one of the masses, the corresponding fundamental hypermultiplet becomes massless and can acquire an expectation value, which describes the root of the Higgs branch. One may worry that binding the D5 branes breaks supersymmetry, but supersymmetry is in fact preserved once one turns on the FI terms, which are the periods $ \int_{S_a} \omega_{J,K},\; \int_{S_a} B_{NS}$.\\
Each $\omega_i$ can then be written as a negative fundamental weight $-w_a$ plus a sum of positive simple roots $\alpha_a$, coming from the bound compact branes. Not every such combination corresponds to truly bound branes: a sufficient condition is that $\omega_i $ lies in the Weyl orbit of $-w_a = [S_a^*]$ (we will relax this condition in section \ref{sec:types} and end up with a new class of defects of the little string). Furthermore, the collection of weights
\begin{equation}\label{WS}
{\cal W}_{\cal S} = \{ \omega_i\}
\end{equation}
we get must account for all the D5 brane charges in $[S^*]$ and in $[S]$. One simple consequence is that the number of $\omega_i$'s is the total rank of the 4d flavor group, $\sum_{a=1}^n m_a$. The fact that the net D5 charge is zero, $[S+S^*]=0$, implies that
\[
\sum_{\omega_i\in{\cal W}_{\cal S}}\omega_i=0,
\]
which is equivalent to \eqref{conformal}.\\
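The equivalence is immediate: expanding the total D5 brane charge in the fundamental weight basis, using $\alpha_b=\sum_{a}C_{ab}\,w_a$ (with $C$ symmetric for $ADE$), we get
\begin{equation*}
\sum_{\omega_i\in{\cal W}_{\cal S}}\omega_i=[S^*]+[S]=\sum_{a=1}^n\Big(-m_a+\sum_{b=1}^n C_{ab}\,d_b\Big)\,w_a,
\end{equation*}
which vanishes if and only if \eqref{conformal} holds.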
The canonical example of a defect is the one analyzed in \cite{Aganagic:2015cta}, which makes use of the fact that the weight lattice of a Lie algebra of rank $n$ is $n$-dimensional. We can then construct a set ${\cal W}_{\cal S}$ by picking any $n+1$ weight vectors which lie in the Weyl group orbits of the fundamental weights $-w_a$, such that they sum up to zero and $n$ of them span $\Lambda_*$. This leads to a full puncture defect of the $(2,0)$ little string on ${\cal C}$. The example below features $\mathfrak{g}=A_3$.
\begin{example}
Let us look at the set of all the weights in the antifundamental representation of $A_3$; these weights add up to 0, and each lies in the Weyl orbit of (minus) the fundamental weight $[-1,0,0]$, written in Dynkin labels, so this set defines a valid set ${\cal W}_{\cal S}$.
Writing $w_i$ for the $i$-th fundamental weight, we note that:
\begin{alignat*}{2}
\omega_1&=[-1,\phantom{-}0,\phantom{-}0]& &=-w_1,\\
\omega_2&=[\phantom{-}1,-1,\phantom{-}0]& &=-w_1+\alpha_1,\\
\omega_3&=[\phantom{-}0,\phantom{-}1,-1]& &=-w_1+\alpha_1+\alpha_2,\\
\omega_4&=[\phantom{-}0,\phantom{-}0,\phantom{-}1]& &=-w_1+\alpha_1+\alpha_2+\alpha_3.
\end{alignat*}
Written in this fashion, the set ${\cal W}_{\cal S}$ defines a 4d superconformal quiver gauge theory, shown in Figure \ref{fig:a3full}. This is called the full puncture.
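Concretely, the quiver data is read off by counting: the fundamental weight $-w_1$ appears four times in the decomposition above, while the simple roots $\alpha_1$, $\alpha_2$, $\alpha_3$ appear $3$, $2$ and $1$ times respectively, giving
\begin{equation*}
(m_1,m_2,m_3)=(4,0,0), \qquad (d_1,d_2,d_3)=(3,2,1),
\end{equation*}
in agreement with \eqref{ncomp}, \eqref{comp} and \eqref{conformal}.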
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$4$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The quiver theory describing a full puncture for $\mathfrak{g}=A_3$}
\label{fig:a3full}
\end{figure}
\end{example}
The full classification of defects for simply-laced $\mathfrak{g}$ is obtained by constructing the set ${\cal W}_{\cal S}$ to have size $n+1$ \textit{or less}. As we will explain in later sections, this is where the rich structure of the parabolic subalgebras of $\mathfrak{g}$ will emerge, and it will be our main object of study.\\
When the string scale $m_s$ is taken to infinity, the $(2,0)$ little string reduces to the $(2,0)$ CFT of type $\mathfrak{g}$ compactified on $\cal C$; we lose the Lagrangian description in general, and the Coulomb branch dimension, previously equal to $\sum_{a=1}^n d_a$, generically decreases. This loss of Coulomb moduli is expected, as the theory loses degrees of freedom in the limit. This distinction in counting Coulomb moduli between the little string and CFT cases will be important to keep in mind throughout the rest of our discussion.
\subsection{Little String Theory origin of SYM surface defects and S-duality}
\vskip 0.5cm
{\it --Hitchin System and Higgs Field Data--}
\vskip 0.5cm
In the little string theory, the brane defects we are studying are solutions to Bogomolny equations on $\mathcal{C}$ times an extra circle $S^1(R_1)$ \cite{Aganagic:2015cta}:
\begin{equation}\label{bogomolny}
D\phi=*F.
\end{equation}
Little string theory enjoys T-duality: in particular, the $(2,0)$ $ADE$ little string of type IIB compactified on $S^1(R_1)$ is dual to the $(1,1)$ $ADE$ little string of type IIA compactified on $S^1(\hat{R}_1)$ of radius $\hat{R}_1=1/m_s^2 R_1$.
The defects are then D4 branes after T-dualizing, and are points on ${\cal C} \times S^1(\hat{R}_1)$. These are monopoles, magnetically charged under the gauge field coming from the (1,1) little string; $F$ in \eqref{bogomolny} is the curvature of this gauge field. The $n$ scalars are $\phi_a=\int_{S^2_a}m_s^3\omega_I/g'_s$, where $g'_s$ is the IIA string coupling, related to the IIB one by $1/g'_s={R_1}m_s/g_s$.\\
If we want to recover the original description of the defects as D5 branes, we can take the dual circle size $\hat{R}_1$ to be very small; the upshot is that the Bogomolny equations simplify and we recover the Hitchin equations \eqref{Hitch} we considered previously:
\begin{align}
F &=[\phi,\overline{\phi}],\\
\overline{D}_z\phi &=0=D_z\overline{\phi}.
\end{align}
A subtlety here is that the field $\phi$ got complexified in passing from D4 branes back to D5 branes. The imaginary part of $\phi$ is the holonomy of the (1,1) gauge field around $S^1(\hat{R}_1)$; this comes from the fact that the D4 branes are magnetically charged under the RR 3-form: $R_1 \int_{S^2_a\times S^1(R_1)}m_s^2 \; C^{(3)}_{RR}$. In type IIB language, after T-duality, the D5 branes are charged under the RR 2-form instead: $1/\hat{R}_1 \int_{S^2_a} B_{RR}$.
All in all, the Higgs field is then written in IIB variables as
\begin{equation}\label{higgsIIB}
\phi_a = (\alpha_a, \phi) = 1/{\hat{R}}_1 \int_{S^2_a} (m_s^2 \omega_I/g_s + i B_{RR})=\tau_a/{\hat{R}}_1.
\end{equation}
The Seiberg-Witten curve of the quiver gauge theory on the D5 branes arises as the spectral curve of the Higgs field $\phi$, taken in some representation $\mathfrak{R}$ of $\mathfrak{g}$:
\begin{equation}
\label{eq:swcurve5d}
\det{}_\mathfrak{R}(e^{\hat{R}_1\phi}-e^{\hat{R}_1 p})=0.
\end{equation}
In the absence of monopoles, $\phi$ is constant: the vacuum expectation value of the Higgs field is $\hat{R}_1 \phi = \tau$.\\
By construction, then, the Coulomb branch of the $ADE$ quiver theory on the D5 branes is the moduli space of monopoles on ${\cal C} \times S^1({\hat R_1})$. As we described in the previous section, we ultimately want to go on the Higgs branch of the theory, where we get a description of quiver theories as a fixed set of weights ${\cal W}_{\cal S}$ in $\mathfrak{g}$; there, all the non-abelian monopoles reduce to Dirac monopoles. The effect on $\phi$ of adding a Dirac monopole of charge $\omega_i^{\vee}$, at a point $x_i=\hat{R}_1 \hat\beta_i$ on ${\cal C}$, is to shift:
\begin{equation}\label{addone}
e^{\hat{R}_1\phi} \rightarrow e^{\hat{R}_1\phi} \cdot (1-z\, e^{-\hat{R}_1\hat\beta_i})^{-\omega_i^{\vee}}.
\end{equation}
Here, $z$ is the complex coordinate on $\mathcal{C}=\mathbb{C}$.
Thus, the Higgs field solving the Hitchin equations at the point where the Higgs and the Coulomb branches meet is
\begin{equation}\label{Higgs}
e^{\hat{R}_1\phi(x)} = e^{\tau} \prod_{\omega_i \in {\cal W}_{{\cal S}} }\;(1-z\, e^{-\hat{R}_1\hat\beta_i})^{-\omega^\vee_{i}}.
\end{equation}
To take the string mass $m_s$ to infinity, we relabel $e^{\hat R_1 \hat \beta_i}=z_{\mathcal P}\, e^{\hat{R}_1\beta_{i,\mathcal P}}$. We can then safely take the limit $\hat{R}_1\to 0$; the imaginary part of $\phi$ decompactifies, and equation \eqref{eq:swcurve5d} becomes the spectral curve of the Hitchin integrable system \cite{Gaiotto:2009hg}:
\begin{equation}
\label{eq:swcurve4d}
\det{}_\mathfrak{R}(\phi-p)=0.
\end{equation}
In this limit, the Higgs field near a puncture of $\mathcal{C}$ has a pole of order one, and takes the form
\begin{equation}\label{Higgs2}
\phi(z) = {\beta_0\over z} + \sum_{{\cal P}} \sum_{\omega_i \in {\cal W}_{{\cal P}} }\;{ \beta_{i,{\cal P}} \, \omega_i^{\vee}\over z_{\cal P} - z},
\end{equation}
with $\beta_0=\tau/\hat{R}_1$, and where the sum runs over the punctures ${\cal P}$ of $\mathcal{C}$.
Therefore, in the $(2,0)$ CFT, we have poles on ${\cal C}$ at $z= z_{\cal P}$, with residues
\[
\beta_{\cal P} = \sum_{\omega_i \in {\cal W}_{{\cal P}}} \beta_{i,{\cal P}} \, \omega_i^{\vee}.
\]
These residues are what we called $\beta+i\gamma$ in the $\mathcal{N}=4$ SYM setup of eq. \eqref{eq:betagamma}.
\vskip 0.5cm
{\it -- 4d S-duality is T-duality of the Little String--}
\vskip 0.5cm
To provide evidence that the surface defects of $\mathcal{N}=4$ SYM really are branes at points on $\mathcal{C}$ in the $(2,0)$ little string, we now derive four-dimensional S-duality from T-duality of the string theory, compactified on an additional torus $T^2$. Here, $T^2$ is the product of two $S^1$'s, one from each of the two complex planes $\mathbb{C}^2$. We label those circles as $S^1(R_1)$ and $S^1(R_2)$, of radius $R_1$ and $R_2$ respectively.\\
\begin{figure}[htbp]
\tcbset{enhanced,size=small,nobeforeafter,tcbox raise=-2.3mm,colback=white,colframe=white}
\centering
\begin{tikzpicture}
\node at (0,0) {\tcbox[borderline={0.2mm}{0mm}{violet!70!black}]{$(1,1)$ string on $S^1(\hat R_1)\times S^1(R_2)\times \mathbb{R}^2\times\mathcal{C}$ with $(\text{D}4,\text{D}4)$ branes}};
\draw (0,-0.5)[<->, thick] -- (0,-1.1);
\node at (1.2,-0.8) {$T_1$-duality};
\node at (0,-1.55) {\tcbox[borderline={0.2mm}{0mm}{green!30!black}]{$(2,0)$ string on $S^1(R_1)\times S^1(R_2)\times \mathbb{R}^2\times\mathcal{C}$ with $(\text{D}3,\text{D}5)$ branes}};
\draw (0,-2.05)[<->, thick] -- (0,-2.65);
\node at (1.2,-2.35) {$T_2$-duality};
\node at (0,-3.1) {\tcbox[borderline={0.2mm}{0mm}{violet!70!black}]{$(1,1)$ string on $S^1(R_1)\times S^1(\hat R_2)\times \mathbb{R}^2\times\mathcal{C}$ with $(\text{D}4,\text{D}4)$ branes}};
\end{tikzpicture}
\caption{One starts with the $(1,1)$ little string theory on $T^2\times\mathbb{R}^2\times\mathcal{C}$. After doing two T-dualities in the torus directions, we get the $(1,1)$ little string theory on the T-dual torus; in the low energy limit, the pair of $(1,1)$ theories gives an S-dual pair of $\mathcal{N}=4$ SYM theories. D3 branes at a point on $T^2$ map to D4 branes in either $(1,1)$ theory, while D5 branes wrapping $T^2$ map to another set of D4 branes.}
\label{fig:sdual}
\end{figure}
First, without any D5 branes, S-duality was derived in \cite{Vafa:1997mh}, and the line of reasoning went as follows:
suppose we first compactify on, say, $S^1(R_1)$; this is what we just did in the previous section to make contact with D4 branes as magnetic monopoles.
Then we are equivalently studying the (1,1) little string on $S^1(\hat{R}_1)$. Compactifying further on $S^1(R_2)$, this theory is the same as the (1,1) little string on $S^1(R_1)\times S^1(\hat{R}_2)$, by $T^2$-duality.
4d SYM S-duality then naturally follows from the $T^2$-duality of this pair of (1,1) theories. Indeed, at low energies, both $(1,1)$ little string theories become the maximally supersymmetric 6d SYM, with gauge group dictated by $\mathfrak{g}$ and gauge coupling $1/g_{6d}^2=m_s^2$. We wish to take the string scale $m_s$ to infinity; in the case of the $(1,1)$ string on $S^1(\hat R_1)$, since $m_s^2 \hat{R}_1=1/R_1$, the radius $\hat{R}_1$ goes to 0 in that limit. The theory then becomes 5d $\mathcal{N}=2$ SYM, with inverse gauge coupling $1/g_{5d}^2=1/R_1$. After the further compactification on $S^1(R_2)$, we obtain at low energies 4d $\mathcal{N}=4$ SYM, with inverse gauge coupling $1/g_{4d}^2=R_2/g_{5d}^2=R_2/R_1$.
Now, the same reasoning applied to the $T^2$-dual theory $S^1(R_1)\times S^1(\hat{R}_2)$ gives 4d $\mathcal{N}=4$ SYM in the $m_s$ to infinity limit, with inverse gauge coupling $1/g_{4d}'^2=R_1/R_2$.
Note that $1/g'_{4d}=g_{4d}$. This is just the action of S-duality on the gauge coupling of $\mathcal{N}=4$ SYM.
Writing $R_2/R_1\equiv \mbox{Im}(\tau')$, with $\tau'$ the modular parameter of the $T^2$, we see that S-duality is a consequence of $T^2$-duality for the pair of $(1,1)$ little string theories. An illustration of the dualities is shown in Figure \ref{fig:sdual}.\\
Now, we extend this argument and introduce the D5 brane defects;
since the D5 branes were initially wrapping $T^2\times\mathbb{C}$, we can equivalently consider the defects to be D3 branes at a point on $T^2$. We now argue that the S-duality action on the half BPS surface defects of SYM has its origin in the same $T^2$-duality of $(1,1)$ theories we presented in the previous paragraph.
First, recall that after $S^1(R_1)$ compactification, the D5 branes are charged magnetically, with period:
\[
\phi_a = 1/\hat{R}_1 \int_{S_a} (m_s^2 \omega_I/g_s + i B_{RR}).
\]
In type IIB variables, we call this period $\beta+i\gamma$. By T-dualizing along $S^1(R_1)$ we obtain D4 branes wrapping $S^1(R_2)$ in the $(1,1)$ little string. Now suppose we T-dualize the D5 branes along $S^1(R_2)$ instead; then we have D4 branes wrapping $S^1(R_1)$, in the $T^2$-dual $(1,1)$ little string. The D4 brane tensions in both $(1,1)$ theories are proportional to each other, with factor $R_2/R_1$. But then $(\beta,\gamma) \rightarrow R_2/R_1 \, (\beta,\gamma)$ after $T^2$-duality. The D4 branes are then heavy, magnetic objects in one $(1,1)$ theory, while they are light, electric objects in the other. In the $m_s\to\infty$ limit, $(\beta,\gamma)$ are the parameters of the Higgs field in 4d SYM. This is precisely the action of S-duality for the Higgs field data: $(\beta,\gamma)\rightarrow\mbox{Im}(\tau')(\beta,\gamma)$ \eqref{eq:bgsduality}.\\
Second, after $T^2$ compactification, the D3 branes, which are points on $T^2$, are charged under the RR 4-form: $\int_{S_a\times \widetilde{S^1}\times S^1(R_1)} C^{(4)}_{RR}$, where $\widetilde{S^1}$ is a circle around the point defect on $\cal{C}$. As before, $S^1(R_1)$ is one of the 1-cycles of $T^2$, and $S_a$ is a compact 2-cycle in the ALE space $X$. We call this period $\alpha$. The D3 branes are also charged under $\int_{S_a\times \widetilde{S^1}\times S^1(R_2)} C^{(4)}_{RR}$, where $S^1(R_2)$ is the other 1-cycle of $T^2$; we call this period $\eta$.
Suppose we T-dualize in the $S^1(R_1)$ direction. Then $\alpha$ becomes the period of the RR 3-form on $S_a\times \widetilde{S^1}$; this period is in fact an electric coupling for the holonomy of the $(1,1)$ gauge field around $\widetilde{S^1}$. Also, $\eta$ becomes the period of the RR 5-form on $S_a\times \widetilde{S^1}\times S^1(R_2)\times S^1(\hat{R}_1)$; this period is in fact a magnetic coupling for the holonomy of the $(1,1)$ gauge field around $\widetilde{S^1}$. T-dualizing on $S^1(R_2)$ instead, we reach the $T^2$-dual $(1,1)$ theory. We see that $\alpha$ gets mapped to $\eta$, while $\eta$ gets mapped to $-\alpha$ (the minus sign arises because the 5-form is antisymmetric). So in the end, under $T^2$-duality, the periods change as $(\alpha,\eta)\rightarrow(\eta,-\alpha)$.
Note that because the 1-cycles generating the $T^2$ appear explicitly in the definition of these periods, $T^2$-duality does not amount to a simple rescaling of $(\alpha,\eta)$, as was the case for $(\beta,\gamma)$. In the low energy limit, we recover the S-duality action \eqref{eq:aesduality} on the gauge field and theta angle parameters $\alpha$ and $\eta$ of 4d SYM in the presence of a defect.
\vskip 0.5cm
{\it -- $T^*(G/\mathcal{P})$ sigma model and Coulomb branch of the Defect Theory--}
\vskip 0.5cm
We made contact with the surface defects of Gukov and Witten after compactifying the $(2,0)$ little string on $T^2$ and T-dualizing the D5 branes to D3 branes. In this process, as long as $m_s$ is kept finite, the 4d $ADE$ quiver gauge theory that describes the D5 branes at low energies simply becomes a two-dimensional quiver theory for the D3 branes, with the same gauge groups and fundamental matter. In the rest of this paper, we will denote this low energy $ADE$ quiver theory on the D3 branes, together with the set of weights $\cal{W}_{\cal{S}}$ that specifies it, by $T^{2d}$. In the CFT limit, we will denote the theory by $T^{2d}_{m_s\rightarrow \infty}$. As we mentioned already, unlike $T^{2d}$, the theory $T^{2d}_{m_s\rightarrow \infty}$ generically has no Lagrangian description.\\
Now, Gukov and Witten showed that surface operators of $\mathcal{N}=4$ SYM can also be described by a 2d sigma model on $T^*(G/\mathcal{P})$, which is a moduli space of solutions to the Hitchin equations \eqref{Hitch}. After taking the CFT limit of the little string theory, we saw that this moduli space is also the Coulomb branch of the $(2,0)$ CFT on the Riemann surface $\mathcal{C}$ times a circle $S^1(R_1)$ (the radius $R_1$ here being very big).
As an algebraic variety, this Coulomb branch is singular, while $T^*(G/\mathcal{P})$ is smooth. The statement is then that the (resolution of the) Coulomb branch of the 2d $ADE$ quiver gauge theories on the D3 branes we presented, in the appropriate $m_s$ to infinity limit, is expected to be the sigma model target $T^*(G/\mathcal{P})$. In other words, the Coulomb branch of $T^{2d}_{m_s\rightarrow \infty}$ can be identified with $T^*(G/\mathcal{P})$.\\
A natural question arises: how do parabolic subgroups $\mathcal{P}$ in $T^*(G/\mathcal{P})$ arise from the point of view of the defects of the $(2,0)$ little string?
We will now see that to every $ADE$ theory $T^{2d}$ on the D3 branes, we can associate a unique parabolic subalgebra, either from the geometry (specifically, the non-compact 2-cycles of $X$) or, equivalently, from the representation theory of $\mathfrak{g}$ (the Higgs field we introduced is valued in the Lie algebra $\mathfrak{g}$, so we speak of parabolic subalgebras rather than parabolic subgroups). In particular, after taking the CFT limit, we will be able to read the subalgebra off from the data of the weight system ${\cal W}_{\cal S}$ that defines the theory $T^{2d}_{m_s\rightarrow \infty}$.\\
As a side note, it is known (\cite{Chacaltana:2012zy, Gaiotto:2008ak,Hanany:2011db,Cremonesi:2014uva}) that $T^*(G/\mathcal{P})$ is the resolution of the Higgs branch of different theories from the ones we have been considering. In the little string setup, as we reviewed, the moduli space of monopoles naturally arises as a Coulomb branch instead of a Higgs branch. A natural guess is that those two descriptions could be related by mirror symmetry, and this is indeed the case in all the cases we could explicitly check (all defects in the $A_n$ case, and some low rank defects in the $D_n$ case; see also \cite{Hanany:2016gbz}). We will not investigate this point further here, but it would be important to get a clear understanding of the mirror map.
\section{From Brane Defects to Parabolic Subalgebra Classification}
\label{sec:parastrings}
We now explain how the $ADE$ quiver theories $T^{2d}$ determine the parabolic subalgebras of $\mathfrak{g}$.
\subsection{Mathematics Preliminaries}
\label{ssec:levi}
Because they will be so crucial to our story, we review here the mathematics of parabolic and Levi subalgebras of a Lie algebra $\mathfrak{g}$.\\
A \emph{Borel subalgebra} of $\mathfrak{g}$ is a maximal solvable subalgebra of $\mathfrak{g}$, and always has the form $\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{m}$, where $\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{g}$ and $\mathfrak{m}=\sum_{\alpha\in\Phi^+}\mathfrak{g}_\alpha$ for some choice of positive roots $\Phi^+$. A \emph{parabolic subalgebra} $\mathfrak{p}$ is defined to be a subalgebra of $\mathfrak{g}$ that contains a Borel subalgebra $\mathfrak{b}$, so $\mathfrak{b}\subseteq\mathfrak{p}\subseteq\mathfrak{g}$.\\
There are many different choices of Borel subalgebras of $\mathfrak{g}$, but we will choose one for each $\mathfrak{g}$ and keep it fixed. Since the Borel subalgebra contains all the positive root spaces, any parabolic $\mathfrak{p}$ is obtained by adjoining to $\mathfrak{b}$ the root spaces associated to a closed system of negative roots.\\
Let us extend our notations to differentiate between distinct parabolic subalgebras:
We denote the set of positive simple roots by $\Delta$.
Take an arbitrary subset $\Theta\subset\Delta$. We define $\mathfrak{p}_\Theta$ to be the subalgebra of $\mathfrak{g}$ generated by $\mathfrak{b}$ together with all of the root spaces $\mathfrak{g}_{-\alpha}$ with $\alpha\in\Theta.$ Then $\mathfrak{p}_\Theta$ is a parabolic subalgebra of $\mathfrak{g}$ containing $\mathfrak{b}$, and every parabolic subalgebra of $\mathfrak{g}$ containing $\mathfrak{b}$ is of the form $\mathfrak{p}_\Theta$ for some $\Theta\subset\Delta$. In fact, every parabolic subalgebra of $\mathfrak{g}$ is conjugate to one of the form $\mathfrak{p}_\Theta$ for some $\Theta\subset\Delta$. We state the important result:\\
Let $\langle\Theta\rangle$ denote the subroot system generated by $\Theta$ and write $\langle\Theta\rangle^+= \langle\Theta\rangle\cap\Phi^+.$
There is a \emph{direct sum decomposition} $\mathfrak{p}_\Theta=\mathfrak{l}_\Theta\oplus\mathfrak{n}_\Theta$, where $\mathfrak{l}_\Theta=\mathfrak{h}\oplus\sum_{\alpha\in\langle\Theta\rangle}\mathfrak{g}_\alpha$ is a reductive subalgebra (a reductive Lie algebra is a direct sum of a semi-simple and an abelian Lie algebra), called a Levi subalgebra, and $\mathfrak{n}_\Theta=\sum_{\alpha\in\Phi^+ \backslash \langle\Theta\rangle^+}\mathfrak{g}_\alpha$ is called the nilradical of $\mathfrak{p}_\Theta$. Here, $\alpha\in\Phi^+ \backslash \langle\Theta\rangle^+$ means that $\alpha$ is a positive root not in $\langle\Theta\rangle^+$.
Note that $\mathfrak{n}_{\Theta}\cong \sum_{\alpha\in\Phi^- \backslash \langle\Theta\rangle^-}\mathfrak{g}_\alpha\cong\mathfrak{g}/\mathfrak{p}_{\Theta}$.
Furthermore, all Levi subalgebras of a given parabolic subalgebra are conjugate to each other \cite{Malcev:1942}. We illustrate the above statements in the examples below:
\begin{example}
Consider $\mathfrak{g}=A_2$ in the fundamental, three-dimensional representation. Then the elements in the Cartan subalgebra have the form
\begin{equation}
\mathfrak{h}=\begin{pmatrix}
*&0&0\\
0&*&0\\
0&0&*
\end{pmatrix}.
\end{equation}
We associate to a root $\alpha_{ij}=h_i-h_j$ the space $\mathbb{C}E_{ij}$, where $E_{ij}$ is the matrix that has a $+1$ in the $i$-th row and $j$-th column, and zeroes everywhere else. Thus, we see that
\begin{equation}
\mathfrak{b}=\begin{pmatrix}
*&*&*\\
0&*&*\\
0&0&*
\end{pmatrix},
\end{equation}
and the parabolic subalgebras are
\begin{align}
\mathfrak{p}_\varnothing=\mathfrak{b}&=\begin{pmatrix}
*&*&*\\
0&*&*\\
0&0&*
\end{pmatrix},\\
\mathfrak{p}_{\{\alpha_1\}}&=\begin{pmatrix}
*&*&*\\
*&*&*\\
0&0&*
\end{pmatrix},\\
\mathfrak{p}_{\{\alpha_2\}}&=\begin{pmatrix}
*&*&*\\
0&*&*\\
0&*&*
\end{pmatrix},\\
\mathfrak{p}_{\{\alpha_1,\alpha_2\}}=\mathfrak{g}&=\begin{pmatrix}
*&*&*\\
*&*&*\\
*&*&*
\end{pmatrix}.
\end{align}
\end{example}
Let us look at the Levi decompositions of the above:
\begin{example}
\label{ex:levisl3}
For $\mathfrak{g}=A_2$, we get the following decompositions:
\begin{align}
\mathfrak{p}_\varnothing&=\begin{pmatrix}
*&*&*\\
0&*&*\\
0&0&*
\end{pmatrix}=\begin{pmatrix}
*&0&0\\
0&*&0\\
0&0&*
\end{pmatrix}\oplus
\begin{pmatrix}
0&*&*\\
0&0&*\\
0&0&0
\end{pmatrix}=\mathfrak{l}_\varnothing\oplus\mathfrak{n}_\varnothing,\\
\mathfrak{p}_{\{\alpha_1\}}&=\begin{pmatrix}
*&*&*\\
*&*&*\\
0&0&*
\end{pmatrix}=\begin{pmatrix}
*&*&0\\
*&*&0\\
0&0&*
\end{pmatrix}\oplus
\begin{pmatrix}
0&0&*\\
0&0&*\\
0&0&0
\end{pmatrix}=\mathfrak{l}_{\{\alpha_1\}}\oplus\mathfrak{n}_{\{\alpha_1\}},\\
\mathfrak{p}_{\{\alpha_2\}}&=\begin{pmatrix}
*&*&*\\
0&*&*\\
0&*&*
\end{pmatrix}=\begin{pmatrix}
*&0&0\\
0&*&*\\
0&*&*
\end{pmatrix}\oplus
\begin{pmatrix}
0&*&*\\
0&0&0\\
0&0&0
\end{pmatrix}=\mathfrak{l}_{\{\alpha_2\}}\oplus\mathfrak{n}_{\{\alpha_2\}},\\
\mathfrak{p}_{\{\alpha_1,\alpha_2\}}&=\begin{pmatrix}
*&*&*\\
*&*&*\\
*&*&*
\end{pmatrix}=\begin{pmatrix}
*&*&*\\
*&*&*\\
*&*&*
\end{pmatrix}\oplus
\begin{pmatrix}
0&0&0\\
0&0&0\\
0&0&0
\end{pmatrix}=\mathfrak{l}_{\{\alpha_1,\alpha_2\}}\oplus\mathfrak{n}_{\{\alpha_1,\alpha_2\}}.
\end{align}
\end{example}
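A useful check: since each root space is one-dimensional, the Levi decomposition gives
\begin{equation*}
\dim\mathfrak{n}_\Theta=|\Phi^+|-|\langle\Theta\rangle^+|.
\end{equation*}
For the four $A_2$ cases above this yields $3,2,2,0$, matching the number of starred entries in each nilradical.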
\begin{example} In the table below, we show the root spaces that the Borel subalgebra of $A_3$ is made of:
\tcbset{enhanced,size=fbox,nobeforeafter,tcbox raise=-2.3mm,colback=white,colframe=white}
\begin{table}[htp]
\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{c|c|c|c}
$\Theta$&$\mathfrak{p}_\Theta$&$\mathfrak{l}_\Theta$&$\mathfrak{n}_\Theta$\\
\hline
$\varnothing$&$\begin{pmatrix}
*&\tcbox[colframe=lime]{*}&\tcbox[borderline={0.2mm}{0mm}{green!95!black,dashed}]{*}&\tcbox[borderline={0.2mm}{0mm}{blue,dotted}]{*}\\
0&*&\tcbox[colframe=brown]{*}&\tcbox[borderline={0.2mm}{0mm}{gray!80!black,dashed}]{*}\\
0&0&*&\tcbox[colframe=purple!80!black]{*}\\
0&0&0&*
\end{pmatrix}$
&
$\begin{pmatrix}
*&0&0&0\\
0&*&0&0\\
0&0&*&0\\
0&0&0&*
\end{pmatrix}$&$\begin{pmatrix}
0&*&*&*\\
0&0&*&*\\
0&0&0&*\\
0&0&0&0
\end{pmatrix}$
\end{tabular}\\
\end{center}
\hspace*{5em}\tcbox[colframe=lime]{\phantom{*}}: $\alpha_1$\hspace{3em} \tcbox[borderline={0.2mm}{0mm}{green!95!black,dashed}]{\phantom{*}}: $(\alpha_1+\alpha_2)$\hspace{3em}\tcbox[borderline={0.2mm}{0mm}{blue,dotted}]{\phantom{*}}:$(\alpha_1+\alpha_2+\alpha_3)$\\
\hspace*{5em}\tcbox[colframe=brown]{\phantom{*}}: $\alpha_2$\hspace{3em} \tcbox[borderline={0.2mm}{0mm}{gray!80!black,dashed}]{\phantom{*}}: $(\alpha_2+\alpha_3)$\\
\hspace*{5em}\tcbox[colframe=purple!80!black]{\phantom{*}}: $\alpha_3$
\caption{This table illustrates the Levi decomposition of $\mathfrak{p}_\Theta$ for $\Theta$ the empty set and $\mathfrak{g}=A_3$. $\mathfrak{p}_\Theta$ consists of all the matrices in $A_3$ with zeroes in the indicated places, the other entries being arbitrary. The color code shows which positive root corresponds to which nonzero entry.}
\label{tab:a3ex}
\end{table}
\end{example}
\subsection{Parabolic Subalgebras from Weight Data}
\label{ssec:3d}
We reviewed in section 2 how we could specify a defect of the little string from a set of weights
\begin{equation}
{\cal W}_{\cal S} = \{ \omega_i\},
\end{equation}
each in the Weyl orbit of (minus) some fundamental weight, and adding up to 0. We claim that to each set $\mathcal{W}_\mathcal{S}$ we can associate a parabolic subalgebra $\mathfrak{p}$. This map is not injective, as many different sets of weights will typically determine the same $\mathfrak{p}$.\\
As reviewed in the last section, all parabolic subalgebras of $\mathfrak{g}$ are determined by a subset $\Theta$ of the simple positive roots $\Delta$ of $\mathfrak{g}$. Thus, our strategy will be to extract such a set $\Theta$ from the weights in $\mathcal{W}_\mathcal{S}$.
We do so by first computing the inner product $\langle \alpha_i, \omega_j\rangle$, for all weights $\omega_j$ in ${\cal W}_{\cal S}$, and for all positive simple roots $\alpha_i$ of $\mathfrak{g}$. Then all the $\alpha_i$ which satisfy
\begin{equation}
\langle \alpha_i, \omega_j\rangle =0
\end{equation}
for all weights $\omega_j$ in $\mathcal{W}_{\mathcal{S}}$ will make up the set $\Theta$. There is one caveat to the above procedure: the set $\Theta$ defined as such is not invariant under the global action of the Weyl group on $\mathcal{W}_{\mathcal{S}}$. Thus, we modify the above prescription and define $\Theta$ as the maximal such set in the Weyl group orbit of $\mathcal{W}_{\mathcal{S}}$.\footnote{Note that the Weyl group acts on all the weights in $\mathcal{W}_{\mathcal{S}}$ simultaneously.}
Moreover, the positive roots $e_\gamma$ for which
\begin{equation}
\langle e_{\gamma}, \omega_i\rangle <0,\text{\footnotemark}
\end{equation}
\footnotetext{Or equivalently, $\langle e_{\gamma}, \omega_i\rangle >0$.}
for at least one $\omega_i\in\mathcal{W}_{\mathcal S}$, form a nilradical $\mathfrak{n}$; this nilradical specifies the Coulomb branch of $T^{2d}_{m_s\rightarrow \infty}$.
This $\mathfrak{n}$ can always be obtained from the Levi decomposition $\mathfrak{p}_{\Theta}=\mathfrak{l}_{\Theta}\oplus\mathfrak{n}_{\Theta}$ of the parabolic subalgebra $\mathfrak{p}_{\Theta}$.\\
We emphasize that the Coulomb branch of $T^{2d}$ is generically bigger than the Coulomb branch of $T^{2d}_{m_s\rightarrow \infty}$. In the little string case, the Coulomb branch of $T^{2d}$ has dimension $\sum_{a=1}^{n}d_a$, where $d_a$ are the ranks of the gauge groups (here, we include the $U(1)$ centers of the $U(d_a)$ gauge groups). In the CFT limit, the space $X\times\mathcal{C}$ can be reinterpreted as a Calabi--Yau manifold. Thus, one can use the techniques of complex geometry to count the Coulomb moduli of $T^{2d}_{m_s\rightarrow \infty}$ as the complex structure deformations of this Calabi--Yau \cite{Cachazo:2001gh}. For instance, for $\mathcal{C}$ a sphere with 3 full punctures, meaning the residues of the Higgs fields $\phi(z)$ are generic, the dimension of the Coulomb branch of $T^{2d}_{m_s\rightarrow \infty}$ is the number $|\Phi^{+}|$ of positive roots of $\mathfrak{g}$. Note that for $A_n$, the full puncture of $T^{2d}$ has Coulomb branch dimension $\sum_{a=1}^{n}d_a=|\Phi^+|$, so in that specific case the CFT counting is the same as the little string counting. This is generally not so for $\mathfrak{g}=D_n$ and $E_n$.\\
The dimension of the Coulomb branch of $T^{2d}_{m_s\rightarrow \infty}$ can be conveniently recovered from the representation theory of $\mathfrak{g}$. Indeed, by just keeping track of which positive roots satisfy $\langle e_{\gamma}, \omega_i\rangle <0$ for some $\omega_i\in\mathcal{W}_\mathcal{S}$, and not recording the actual value of the inner product, the positive roots count the Coulomb moduli of the defect theory in the CFT limit. This point is crucial in the $D_n$ and $E_n$ cases, where higher positive root multiplicity has to be ignored to identify a nilradical of $\mathfrak{g}$.\\
\begin{example}[$A_3$ example]
From Table \ref{tab:a3ex} above, we will read off a nilradical from a set of weights ${\cal W}_{\cal S}$ for an $A_3$ theory. We choose ${\cal W}_{\cal S}$ to be the set of all four weights in the antifundamental representation (note that they add up to 0, as they should); they make up the full puncture of $A_3$. Next, we note the following:\\
$[-1,\phantom{-}0,\phantom{-}0]$ has a negative inner product with $h_1-h_2, h_1-h_3, h_1-h_4$.
$[\phantom{-}1,-1,\phantom{-}0]$ has a negative inner product with $h_2-h_3, h_2-h_4.$
$[\phantom{-}0,\phantom{-}1,-1]$ has a negative inner product with $h_3-h_4$.
$[\phantom{-}0,\phantom{-}0,\phantom{-}1]$ has no negative inner product with any of the positive roots.\\
We see that all positive roots of $\mathfrak{g}$ are accounted for, so the nilradical $\mathfrak{n}_{\Theta}$ is constructed using all the positive roots, and thus, $\Theta=\varnothing.$ From the Levi decomposition, we therefore identify the parabolic subalgebra as $\mathfrak{p}_\varnothing$. This is consistent with the fact that no simple root $\alpha_i$ has a vanishing inner product $\langle \alpha_i, \omega_j\rangle$ with all the weights $\omega_j$ in $\mathcal{W}_\mathcal{S}$. The discussion is summarized in Figure \ref{fig:weightreading3} below.
\end{example}
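The bookkeeping of the example above can be reproduced in a few lines. The sketch works in the orthogonal $h$-basis of $\mathfrak{sl}_4$, where the antifundamental weights are $\omega_i=-h_i$ and the positive roots are $h_i-h_j$ for $i<j$:

```python
# Verify the A_3 example: the antifundamental weight omega_i = -h_i pairs
# negatively with exactly the positive roots h_i - h_j (j > i), and together
# the four weights account for all |Phi^+| = 6 positive roots, so Theta = {}.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def basis_vec(i, dim=4):
    return [1 if k == i else 0 for k in range(dim)]

# positive roots h_i - h_j, i < j
pos_roots = {(i, j): [a - b for a, b in zip(basis_vec(i), basis_vec(j))]
             for i in range(4) for j in range(i + 1, 4)}
# antifundamental weights omega_i = -h_i
weights = [[-x for x in basis_vec(i)] for i in range(4)]

covered = set()
counts = []
for w in weights:
    neg = {lab for lab, r in pos_roots.items() if dot(r, w) < 0}
    counts.append(len(neg))
    covered |= neg

print(counts)        # [3, 2, 1, 0], matching the lists in the text
print(len(covered))  # 6: all positive roots are accounted for
```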
\begin{figure}[htpb]
\begin{center}
\tcbset{enhanced,size=fbox,nobeforeafter,tcbox raise=-2.3mm,colback=white,colframe=white}
\begin{tikzpicture}[baseline]
\node at (-0.5,0) {\tcbox[colframe=violet!70!black]{$\Theta=\varnothing$}};
\draw[->, -stealth, line width=0.4em, postaction={draw,-stealth,white,line width=0.2em,
shorten <=0.10em,shorten >=0.26em}](2,0) -- (1,0);
\node[align=justify] at (4,0) {$\omega_1:[-1,\phantom{-}0,\phantom{-}0]$\\$\omega_2:[\phantom{-}1,-1,\phantom{-}0]$\\$\omega_3:[\phantom{-}0,\phantom{-}1,-1]$\\$\omega_4:[\phantom{-}0,\phantom{-}0,\phantom{-}1]$};
\draw[->, -stealth, line width=0.4em](6,0) -- (7,0);
\node at (9,0) {\begin{tikzpicture}
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$4$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}};
\node[text width=9em,align=center] at (-0.5,-2) {Simple root subset of $T^{2d}_{m_s\to\infty}$};
\node[text width=9em] at (5,-2) {Weights};
\node[text width=9em] at (10,-2) {2d Gauge Theory};
\end{tikzpicture}
\end{center}
\caption{From the set of weights ${\cal W}_{\cal S}$, we read off the parabolic subalgebra $\mathfrak{p}_\varnothing$ of $A_3$ (in this case, the choice of weights is unique up to a global $\mathbb{Z}_2$ action on the set). Reinterpreting each weight as a sum of ``minus a fundamental weight and simple roots,'' we obtain the 2d quiver gauge theory shown on the right. The white arrow indicates that we take the CFT limit.}
\label{fig:weightreading3}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\tcbset{enhanced,size=fbox,nobeforeafter,tcbox raise=-2.3mm,colback=white,colframe=white}
\begin{tikzpicture}[baseline]
\node at (-0.5,0) {\tcbox[colframe=violet!70!black]{$\Theta=\{\alpha_3,\alpha_4\}$}};
\draw[->, -stealth, line width=0.4em, postaction={draw,-stealth,white,line width=0.2em,
shorten <=0.10em,shorten >=0.26em}](1.8,1.4) -- (1,0.5);
\node[align=justify] at (4,2) {$\omega_1:[-1,\phantom{-}0,\phantom{-}0,\phantom{-}0]$\\$\omega_2:[\phantom{-}1,-1,\phantom{-}0,\phantom{-}0]$\\$\omega_3:[\phantom{-}0,\phantom{-}1,\phantom{-}0,\phantom{-}0]$};
\draw[->, -stealth, line width=0.4em](6.5,2) -- (7.5,2);
\node at (9.5,2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (2*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (2*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}};
\draw[->, -stealth, line width=0.4em, postaction={draw,-stealth,white,line width=0.2em,
shorten <=0.10em,shorten >=0.26em}](1.8,-1.4) -- (1,-0.5);
\node[align=justify] at (4,-2) {$\omega_1:[\phantom{-}1,\phantom{-}0,\phantom{-}0,\phantom{-}0]$\\$\omega_2:[\phantom{-}1,\phantom{-}0,\phantom{-}0,\phantom{-}0]$\\$\omega_3:[-2,\phantom{-}1,\phantom{-}0,\phantom{-}0]$\\$\omega_4:[\phantom{-}0,-1,\phantom{-}0,\phantom{-}0]$};
\draw[->, -stealth, line width=0.4em](6.5,-2) -- (7.5,-2);
\node at (9.5,-2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$6$};
\node[circle, draw](k3) at (2*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (2*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}};
\node[text width=9em, align=center] at (-0.5,-4) {Simple root subset of $T^{2d}_{m_s\to\infty}$};
\node[text width=9em] at (5,-4) {Weights};
\node[text width=9em] at (10,-4) {2d Gauge Theory};
\end{tikzpicture}
\end{center}
\caption{From the two sets of weights ${\cal W}_{\cal S}$, we read off the parabolic subalgebra $\mathfrak{p}_{\{\alpha_3,\alpha_4\}}$ of $D_4$. Reinterpreting each weight as a sum of ``minus a fundamental weight and simple roots,'' we obtain the two different 2d quiver gauge theories shown on the right. The white arrows indicate that we take the CFT limit.}
\label{fig:weightreading4}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[baseline]
\tcbset{enhanced,size=fbox,nobeforeafter,tcbox raise=-2.3mm,colback=white,colframe=white}
\node at (-1,2) {\tcbox[colframe=violet!70!black]{$\Theta=\varnothing$}};
\draw[->, -stealth, line width=0.4em, postaction={draw,-stealth,white,line width=0.2em,
shorten <=0.10em,shorten >=0.26em}](1.6,2) -- (0.6,2);
\node[align=justify] at (4,2) {$\omega_1:[\phantom{-}0,\phantom{-}1,\phantom{-}0,\phantom{-}0]$\\$\omega_2:[\phantom{-}1,-2,\phantom{-}1,\phantom{-}1]$\\$\omega_3:[-1,\phantom{-}1,\phantom{-}0,-1]$\\$\omega_4:[\phantom{-}0,\phantom{-}0,-1,\phantom{-}0]$};
\draw[->, -stealth, line width=0.4em](6.5,2) -- (7.5,1);
\node at (-1,-2) {\tcbox[colframe=violet!70!black]{$\Theta=\{\alpha_1,\alpha_4\}$}};
\draw[->, -stealth, line width=0.4em, postaction={draw,-stealth,white,line width=0.2em,
shorten <=0.10em,shorten >=0.26em}](1.6,-2) -- (0.6,-2);
\node[align=justify] at (4,-2) {$\omega_1:[\phantom{-}0,\phantom{-}1,\phantom{-}0,\phantom{-}0]$\\$\omega_2:[\phantom{-}0,-1,\phantom{-}2,\phantom{-}0]$\\$\omega_3:[\phantom{-}0,\phantom{-}0,-1,\phantom{-}0]$\\$\omega_4:[\phantom{-}0,\phantom{-}0,-1,\phantom{-}0]$};
\draw[->, -stealth, line width=0.4em](6.5,-2) -- (7.5,-1);
\node at (9.5,0) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$6$};
\node[circle, draw](k3) at (2*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (2*0.9cm,-0.6*0.9cm) {$4$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (3*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node[text width=9em,align=center] at (-1,-4) {Simple root subset of $T^{2d}_{m_s\to\infty}$};
\node[text width=9em] at (5,-4) {Weights};
\node[text width=9em] at (10,-4) {2d Gauge Theory};
\end{tikzpicture}
\end{center}
\caption{Two sets of weights ${\cal W}_{\cal S}$ which spell out the same quiver, but denote two different defects; we see that it is really the weights, and not the quivers, that define a defect. This is clear in the CFT limit, where two distinct parabolic subalgebras are distinguished.}
\label{fig:samequiverd4}
\end{figure}
\begin{example}[$D_4$ example]
As a nontrivial example, let us first study the set at the top of Figure \ref{fig:weightreading4} for $\mathfrak{g}=D_4$: ${\cal W}_{\cal S}=\{[-1,0,0,0],[1,-1,0,0],[0,1,0,0]\}$. Except for the two simple roots $\alpha_3$ and $\alpha_4$, all the other positive roots $e_{\gamma}$ satisfy $\langle e_{\gamma}, \omega_i\rangle <0$ for at least one $\omega_i\in{\cal W}_{\cal S}$. Indeed, it is easy to check that $\langle\alpha_3, \omega_i\rangle=0=\langle\alpha_4, \omega_i\rangle$ for all the $\omega_i\in{\cal W}_{\cal S}$;
the set of positive roots we obtain defines the nilradical $\mathfrak{n}_{\{\alpha_3,\alpha_4\}}$. We then conclude from the Levi decomposition that ${\cal W}_{\cal S}$ characterizes the parabolic subalgebra $\mathfrak{p}_{\{\alpha_3,\alpha_4\}}$.
Now, in this example, we could have very well studied a different set:
\[
{\cal W}_{\cal S}=\{[1,0,0,0],[1,0,0,0],[-2,1,0,0],[0,-1,0,0]\},
\]
shown at the bottom of Figure \ref{fig:weightreading4}. It is an easy exercise to show that one identifies the same nilradical $\mathfrak{n}_{\{\alpha_3,\alpha_4\}}$ as previously, so the same parabolic subalgebra $\mathfrak{p}_{\{\alpha_3,\alpha_4\}}$.
This illustrates that theories $T^{2d}$ that have different quiver descriptions can end up determining the same parabolic subalgebra after taking $m_s$ to infinity.
In particular, the two 2d theories of Figure \ref{fig:weightreading4} have different Coulomb branch dimensions. In the CFT limit, we lose the quiver description of the theories, and the complex Coulomb branch dimension of both theories reduces to 10, which is the dimension of $\mathfrak{n}_{\{\alpha_3,\alpha_4\}}$.
\end{example}
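The dimension count at the end of the example can be verified directly. A minimal sketch, using the standard orthogonal basis for $D_4$ (positive roots $e_i\pm e_j$ with $i<j$, and $\alpha_3=e_3-e_4$, $\alpha_4=e_3+e_4$):

```python
# Verify dim n_{Theta} = 10 for g = D_4 and Theta = {alpha_3, alpha_4},
# matching the Coulomb branch dimension quoted in the text. The Levi part
# keeps the positive roots lying in span{alpha_3, alpha_4}, i.e. the roots
# supported on the last two orthogonal coordinates; the rest span n_Theta.

from itertools import combinations

def basis_vec(i, dim=4):
    return [1 if k == i else 0 for k in range(dim)]

pos_roots = []
for i, j in combinations(range(4), 2):
    ei, ej = basis_vec(i), basis_vec(j)
    pos_roots.append([a - b for a, b in zip(ei, ej)])  # e_i - e_j
    pos_roots.append([a + b for a, b in zip(ei, ej)])  # e_i + e_j

assert len(pos_roots) == 12  # |Phi^+| of D_4

# a positive root lies in span{alpha_3, alpha_4} iff its first two entries vanish
nilradical = [r for r in pos_roots if not (r[0] == 0 and r[1] == 0)]
print(len(nilradical))  # 10 = dim n_{alpha_3, alpha_4}
```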
We want to emphasize that throughout this discussion, it really is the set of weights ${\cal W}_{\cal S}$, not the resulting quiver, that characterizes a defect, since two different defects in the CFT limit can have the same quiver origin in the little string; see Figure \ref{fig:samequiverd4} for an illustration.\\
Note that the case of $\mathfrak{g}=A_n$ is special, in that one can start from a parabolic subalgebra of $A_n$ and obtain a 2d quiver theory from it, without any explicit reference to a set of weights; see Figure \ref{fig:nilradical} for an illustration.
All the resulting quivers obey equation \ref{conformal}, as a consequence of the Levi decomposition. Indeed, the nilradical encodes the Coulomb moduli of the quiver, read off in a diagonal fashion from a matrix representative, while the masses are read off from the Levi subalgebra, since the latter specifies a partition.\\
\begin{figure}[htpb]
\begin{tabular}{lc}
$\Theta$&$\mathfrak{l}_\Theta$\\[0.3cm]
\noalign{\vskip-0.2cm}\toprule\noalign{\vskip0.2cm}
$\varnothing$&$\begin{pmatrix}
*&0&0&0\\
0&*&0&0\\
0&0&*&0\\
0&0&0&*
\end{pmatrix}$\tikzmark{empt}\\[1.0cm]
$\{\alpha_1\}$&$\begin{pmatrix}
*&*&0&0\\
*&*&0&0\\
0&0&*&0\\
0&0&0&*
\end{pmatrix}$\tikzmark{a1}\\[1.0cm]
$\{\alpha_2\}$&$\begin{pmatrix}
*&0&0&0\\
0&*&*&0\\
0&*&*&0\\
0&0&0&*
\end{pmatrix}$\tikzmark{a2}\\[1.0cm]
$\{\alpha_3\}$&$\begin{pmatrix}
*&0&0&0\\
0&*&0&0\\
0&0&*&*\\
0&0&*&*
\end{pmatrix}$\tikzmark{a3}\\[1.0cm]
$\{\alpha_1,\alpha_2\}$&$\begin{pmatrix}
*&*&*&0\\
*&*&*&0\\
*&*&*&0\\
0&0&0&*
\end{pmatrix}$\tikzmark{a12}\\[1.0cm]
$\{\alpha_2,\alpha_3\}$&$\begin{pmatrix}
*&0&0&0\\
0&*&*&*\\
0&*&*&*\\
0&*&*&*
\end{pmatrix}$\tikzmark{a23}\\[1.0cm]
$\{\alpha_1,\alpha_3\}$&$\begin{pmatrix}
*&*&0&0\\
*&*&0&0\\
0&0&*&*\\
0&0&*&*
\end{pmatrix}$\tikzmark{a13}\\[1.0cm]
$\{\alpha_1,\alpha_2,\alpha_3\}$&$\begin{pmatrix}
*&*&*&*\\
*&*&*&*\\
*&*&*&*\\
*&*&*&*
\end{pmatrix}$\tikzmark{a123}
\end{tabular}
\begin{tikzpicture}[overlay, remember picture]
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(empt)+(0,1.1)$) -- ($(empt)+(0,-1)$);
\draw[dotted] ($(empt)+(1.1,0)$) node {[1,1,1,1]};
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(a1)+(0,1.1)$) -- ($(a3)+(0,-1)$);
\draw[dotted] ($(a2)+(1,0)$) node {[2,1,1]};
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(a12)+(0,1.1)$) -- ($(a23)+(0,-1)$);
\draw[dotted] ($0.5*(a12)+0.5*(a23)+(0.9,0)$) node {[3,1]};
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(a13)+(0,1.1)$) -- ($(a13)+(0,-1)$);
\draw[dotted] ($(a13)+(0.9,0)$) node {[2,2]};
\end{tikzpicture}
\hspace*{2cm}
\begin{tabular}{c}
$\mathfrak{n}_\Theta$\\[0.3cm]
\noalign{\vskip-0.2cm}\toprule\noalign{\vskip0.2cm}
$\begin{pmatrix}
0&*&*&*\\
0&0&*&*\\
0&0&0&*\\
0&0&0&0
\end{pmatrix}$\tikzmark{emptnil}\\[1.0cm]
$\begin{pmatrix}
0&0&*&*\\
0&0&*&*\\
0&0&0&*\\
0&0&0&0
\end{pmatrix}$\tikzmark{a1nil}\\[1.0cm]
$\begin{pmatrix}
0&*&*&*\\
0&0&0&*\\
0&0&0&*\\
0&0&0&0
\end{pmatrix}$\tikzmark{a2nil}\\[1.0cm]
$\begin{pmatrix}
0&*&*&*\\
0&0&*&*\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}$\tikzmark{a3nil}\\[1.0cm]
$\begin{pmatrix}
0&0&0&*\\
0&0&0&*\\
0&0&0&*\\
0&0&0&0
\end{pmatrix}$\tikzmark{a12nil}\\[1.0cm]
$\begin{pmatrix}
0&*&*&*\\
0&0&0&0\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}$\tikzmark{a23nil}\\[1.0cm]
$\begin{pmatrix}
0&0&*&*\\
0&0&*&*\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}$\tikzmark{a13nil}\\[1.0cm]
$\begin{pmatrix}
0&0&0&0\\
0&0&0&0\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}$\tikzmark{a123nil}
\end{tabular}
\begin{tikzpicture}[overlay, remember picture,font=\small]
\draw[blue!50!green] ($(emptnil)+(-1.7,1)$) -- ($(emptnil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 3};
\draw[blue!50!green] ($(emptnil)+(-1.15,1)$) -- ($(emptnil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(emptnil)+(-0.6,1)$) -- ($(emptnil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a1nil)+(-1.7,1)$) -- ($(a1nil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a1nil)+(-1.15,1)$) -- ($(a1nil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a1nil)+(-0.6,1)$) -- ($(a1nil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a2nil)+(-1.7,1)$) -- ($(a2nil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a2nil)+(-1.15,1)$) -- ($(a2nil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a2nil)+(-0.6,1)$) -- ($(a2nil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a3nil)+(-1.7,1)$) -- ($(a3nil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a3nil)+(-1.15,1)$) -- ($(a3nil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a3nil)+(-0.6,1)$) -- ($(a3nil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a12nil)+(-1.7,1)$) -- ($(a12nil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a12nil)+(-1.15,1)$) -- ($(a12nil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a12nil)+(-0.6,1)$) -- ($(a12nil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a23nil)+(-1.7,1)$) -- ($(a23nil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a23nil)+(-1.15,1)$) -- ($(a23nil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a23nil)+(-0.6,1)$) -- ($(a23nil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a13nil)+(-1.7,1)$) -- ($(a13nil)+(-0.3,-0.3)$) node {\hspace*{0.6cm} 1};
\draw[blue!50!green] ($(a13nil)+(-1.15,1)$) -- ($(a13nil)+(-0.3,0.2)$) node {\hspace*{0.6cm} 2};
\draw[blue!50!green] ($(a13nil)+(-0.6,1)$) -- ($(a13nil)+(-0.3,0.7)$) node {\hspace*{0.6cm} 1};
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(emptnil)+(0.5,1.1)$) -- ($(emptnil)+(0.5,-1)$);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(a1nil)+(0.5,1.1)$) -- ($(a3nil)+(0.5,-1)$);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(a12nil)+(0.5,1.1)$) -- ($(a23nil)+(0.5,-1)$);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray]
($(a13nil)+(0.5,1.1)$) -- ($(a13nil)+(0.5,-1)$);
\node at ($(emptnil)+(2.5,0.3)$) {\begin{tikzpicture}
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N1) at (0*0.9cm,0.9cm) {$4$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}};
\node at ($(a2nil)+(2.5,0.3)$) {\begin{tikzpicture}
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}};
\node at ($0.5*(a23nil)+0.5*(a12nil)+(2.5,0.3)$) {\begin{tikzpicture}
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$1$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N3) at (2*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}};
\node at ($(a13nil)+(2.5,0.3)$) {\begin{tikzpicture}
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N2) at (1*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}};
\end{tikzpicture}
\caption{How to read off $A_n$ quiver theories directly from the Levi decomposition of a parabolic subalgebra; here we show $n=3$. The matter content is written as a partition, specified by the Levi subalgebra. The nilradical, read off in diagonal fashion in the upper triangular matrix, gives the Coulomb content. Note the resulting quivers automatically obey the condition \ref{conformal}. This way of reading off a quiver gauge theory directly from a parabolic subalgebra is a peculiarity of the $\mathfrak{g}=A_n$ case.}
\label{fig:nilradical}
\end{figure}
Lastly, there are extra ``special'' punctures which cannot be obtained from the sets of weights ${\cal W}_{\cal S}$ as defined so far. They are special in the sense that they do not determine a parabolic subalgebra of $\mathfrak{g}$. We defer the analysis of these extra theories to section \ref{sec:types}.
\subsection{Parabolic subalgebras from Higgs field data}
\label{sec:higgstopara}
The characterization of defects so far has relied on identifying a subset of simple roots $\Theta$ of the algebra $\mathfrak{g}$. There is yet another way the above classification can be recovered, which relies on identifying a Levi subalgebra of $\mathfrak{g}$ instead. This Levi subalgebra appears in the Levi decomposition of $\mathfrak{p}_\Theta$ as $\mathfrak{p}_{\Theta}=\mathfrak{l}_{\Theta}\oplus\mathfrak{n}_{\Theta}$. Either way, we obtain the same parabolic subalgebra $\mathfrak{p}_{\Theta}$. Let us derive this explicitly.
Recall that the Seiberg--Witten curve of the quiver gauge theory on the D5 branes is the spectral curve of the Higgs field $\phi$, taken in some representation $\mathfrak{R}$ of $\mathfrak{g}$ (\cite{Nekrasov:2012xe,Nekrasov:2013xda,Nanopoulos:2009uw}). We described the $m_s$ to infinity limit after which the Seiberg--Witten curve of the theory becomes the spectral curve of the Hitchin integrable system
\begin{equation*}
\det{}_\mathfrak{R}(\phi-p)=0.
\end{equation*}
After $T^2$ compactification, the same equation is solved by D3 branes instead, so we can say that the above spectral curve is the Seiberg--Witten curve of the two-dimensional theory $T^{2d}_{m_s\rightarrow\infty}$.
At the root of the Higgs branch, where the Coulomb and Higgs branches meet, this expression simplifies: the Higgs field near a puncture of $\mathcal{C}$ has a pole of order one. After shifting this pole to $z=0$, we get
\begin{equation}\label{Scurve}
0=\det\left(p\cdot \mathds{1}-\frac{\sum_{\omega_i\in {\cal W}_{\cal S}}\hat\beta_i \omega_i}{z} +\text{reg.}\right),
\end{equation}
where ${\cal W}_{\cal S}$ is the set of weights introduced in section \ref{sec:review}.
The $\hat\beta_i$ are mass parameters of the gauge theory, which correspond to insertion points of the D3 branes on $\mathcal{C}$.\\
Thus, the residue at the pole diagonalizes, and the diagonal entries can be interpreted as hypermultiplet masses. So at the root of the Higgs branch, the Higgs field is described by an honest semi-simple element of $\mathfrak{g}$. From this semi-simple element, we can once again recover a parabolic subalgebra $\mathfrak{p}$.
Indeed, given a semi-simple (diagonalizable) element $S$ (in our cases, we'll always have $S\in \mathfrak{h}$), its centralizer
\begin{equation}
\mathfrak{g}^S:=\{X\in \mathfrak{g}\,\big|\,[X,S]=0\}
\end{equation}
is reductive and is in fact a Levi subalgebra $\mathfrak{l}_S$ of some parabolic subalgebra $\mathfrak{p}_S$.\\
Since the Higgs field at a puncture of $\mathcal{C}$ has a pole with semi-simple residue, we can use this construction to associate a Levi subalgebra $\mathfrak{l}$ to a defect. The smallest parabolic subalgebra containing $\mathfrak{l}$ is then the parabolic subalgebra defining the theory. Thus, we achieved our goal of building a parabolic subalgebra, starting from a given Higgs field of a quiver theory $T^{2d}$.
\begin{example}
For $\mathfrak{g}=A_2$, assume that the Higgs field has a pole with semi-simple residue $\phi=\frac{S}{z}$ near $z=0$. In the fundamental representation of $\mathfrak{sl}_3$, a possible choice for $S$ is
\begin{equation}
S=\begin{pmatrix}
\beta&0&0\\
0&\beta&0\\
0&0&-2\beta
\end{pmatrix}.
\end{equation}
The Levi subalgebra of $\mathfrak{sl}_3$ associated to this semi-simple element is the centralizer of $S$, which has the form
\begin{equation}
\mathfrak{g}^S=\begin{pmatrix}
*&*&0\\
*&*&0\\
0&0&*
\end{pmatrix}=\mathfrak{l}_{\{\alpha_1\}}.
\end{equation}
The parabolic subalgebra associated to this $S$ is then $\mathfrak{p}_{\{\alpha_1\}}$ from example \ref{ex:levisl3}.
\end{example}
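The centralizer computation in this example reduces to elementary bookkeeping: since $[E_{ij},S]=(s_j-s_i)E_{ij}$ for $S=\mathrm{diag}(s_1,s_2,s_3)$, a matrix unit commutes with $S$ exactly when the corresponding eigenvalues coincide. A sketch (with $\beta$ set to 1):

```python
# Centralizer of S = diag(beta, beta, -2*beta) in sl_3, taking beta = 1:
# E_ij commutes with S iff s_i = s_j, so the centralizer is block diagonal
# of shape [2, 1], i.e. the Levi subalgebra l_{alpha_1} of the text.

s = [1, 1, -2]  # eigenvalues of S
commuting = [(i, j) for i in range(3) for j in range(3) if s[i] == s[j]]
print(sorted(commuting))  # [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]

# the only off-diagonal units in the centralizer sit in the 2x2 block
off_diag = [(i, j) for i, j in commuting if i != j]
assert off_diag == [(0, 1), (1, 0)]

# dimension as a subalgebra of sl_3: all commuting units minus the trace
print(len(commuting) - 1)  # 4 = 2^2 + 1^2 - 1 = dim l_{alpha_1}
```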
\section{Surface defects and Nilpotent Orbits}
\label{sec:nil}
We now explain how the classification of surface defects presented here is connected to the classification of codimension-two defects via nilpotent orbits.
\subsection{A short review}
The characterization of a puncture as studied in the 6d $(2,0)$ CFT literature \cite{Chacaltana:2012zy} is given in terms of a \emph{nilpotent orbit} of the algebra: An element $X\in \mathfrak{g}$ is nilpotent if the matrix representative (in some faithful representation) is a nilpotent matrix. If $X$ is nilpotent, then the whole orbit $\mathcal{O}_X$ of $X$ under the adjoint action of $G$ is nilpotent -- we call this a nilpotent orbit. For readers interested in details and applications, the textbook \cite{Collingwood:1993} serves as an excellent introduction.
For a simple Lie algebra, the number of such nilpotent orbits is finite, and studying their properties leads to many connections to different branches of representation theory.
For $\mathfrak{g}=A_n$, these orbits are labeled by Young diagrams with $n+1$ boxes; for $\mathfrak{g}=D_n$, they are classified by Young diagrams with $2n$ boxes which satisfy some conditions (see \cite{Collingwood:1993} for details.)
An important fact is that for any nilpotent orbit $\mathcal{O}$, the closure $\xbar{\mathcal{O}}$ is always a union of nilpotent orbits. Furthermore, there is a maximal orbit $\mathcal{O}_{\text{max}}$ whose closure contains all other nilpotent orbits of $\mathfrak{g}$. This allows us to define an ordering on these orbits:
Given two nilpotent orbits $\mathcal{O}_1,\mathcal{O}_2\subset\mathfrak{g}$, we define the relation
\begin{equation}
\mathcal{O}_1\preceq\mathcal{O}_2:\Leftrightarrow \mathcal{O}_1\subseteq\xbar{\mathcal{O}_2},
\end{equation}
where $\xbar{\mathcal{O}}$ is the closure in the Zariski topology. This turns the set of all nilpotent orbits into a partially ordered set.
For $A_n$ and $D_n$, this order corresponds to the dominance order of the Young diagrams used to label the orbits.
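The dominance order is a simple partial-sum comparison; the sketch below checks that the $A_3$ partitions form the totally ordered chain discussed next:

```python
# Dominance order on partitions: mu <= lam iff every partial sum of mu is
# bounded by the corresponding partial sum of lam. For A_3 this reproduces
# the chain of nilpotent orbit closures in the Hasse diagram of the text.

def dominates(lam, mu):
    """True iff the orbit labelled by mu lies in the closure of lam's orbit."""
    lam = lam + [0] * (len(mu) - len(lam))
    mu = mu + [0] * (len(lam) - len(mu))
    s_lam = s_mu = 0
    for a, b in zip(lam, mu):
        s_lam += a
        s_mu += b
        if s_mu > s_lam:
            return False
    return True

chain = [[1, 1, 1, 1], [2, 1, 1], [2, 2], [3, 1], [4]]
for small, big in zip(chain, chain[1:]):
    assert dominates(big, small)
assert dominates([3, 1], [2, 2]) and not dominates([2, 2], [3, 1])
print("A_3 dominance chain verified")
```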
\begin{example}[$A_3$]
For an $A_n$ nilpotent orbit labeled by a partition $[d_1,\ldots,d_k]$, a matrix representative is given by $k$ Jordan blocks of size $d_i\times d_i$. Taking the example of $n=3$, there are five different nilpotent orbits. Their Hasse diagram can be found below in Figure \ref{fig:hassea3}. For instance, the sub-dominant diagram $[3,1]$ labels the orbit of
\begin{equation}
X_{[3,1]}=\begin{pmatrix}
0&1&0&0\\
0&0&1&0\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}.
\end{equation}
\begin{figure}[htpb]
\centering
\ytableausetup{smalltableaux}
\begin{tikzpicture}[align=center,font=\small]
\begin{scope}[auto, every node/.style={minimum size=1cm,inner sep=1}]
\node(k1) at (0,0) {\ydiagram{1,1,1,1}};
\node(k2) at (0,-1*0.9cm) {\ydiagram{2,1,1}};
\node(k3) at (0,-2*0.9cm) {\ydiagram{2,2}};
\node(k4) at (0,-3*0.9cm) {\ydiagram{3,1}};
\node(k5) at (0,-4*0.9cm) {\ydiagram{4}};
\draw (k1) -- (k2);
\draw (k2) -- (k3);
\draw (k3) -- (k4);
\draw (k4) -- (k5);
\end{scope}
\end{tikzpicture}
\caption{This Hasse diagram represents the inclusion relations between the closures of the nilpotent orbits of $A_3$.}
\label{fig:hassea3}
\end{figure}
\end{example}
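The Jordan-block description of these representatives is easy to make concrete. A minimal sketch, building $X_{[d_1,\ldots,d_k]}$ and checking that its nilpotency index equals the largest part:

```python
# Build the nilpotent representative of a partition [d_1, ..., d_k] as a
# direct sum of Jordan blocks of sizes d_i, and check that X_{[3,1]} has
# nilpotency index 3: X^2 is nonzero while X^3 vanishes.

def jordan_rep(partition):
    n = sum(partition)
    X = [[0] * n for _ in range(n)]
    start = 0
    for d in partition:
        for r in range(start, start + d - 1):
            X[r][r + 1] = 1  # superdiagonal ones within each block
        start += d
    return X

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_zero(A):
    return all(x == 0 for row in A for x in row)

X = jordan_rep([3, 1])
X2 = matmul(X, X)
X3 = matmul(X2, X)
assert not is_zero(X2) and is_zero(X3)
print("X_[3,1]: X^2 != 0, X^3 = 0")
```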
In \cite{Chacaltana:2012zy}, boundary conditions of the 6d $(2,0)$ CFT are determined by solutions to Nahm's equations. These equations admit singular solutions near a puncture which are labeled by embeddings $\rho:\mathfrak{sl}_2\to\mathfrak{g}$. Since $\sigma_+\in \mathfrak{sl}_2$ is nilpotent, its image $\rho(\sigma_+)$ is as well, and defines a nilpotent orbit. By the Jacobson--Morozov theorem, this gives a one-to-one correspondence between such embeddings (up to conjugation) and nilpotent orbits.
Thus, by dimensional reduction, $\frac12$-BPS surface defects of 4d $\mathcal{N}=4$ super Yang--Mills are typically labeled by nilpotent orbits.
\subsection{Nilpotent orbits from Levi subalgebras}
\label{sec:nillevi}
Since we now have two different constructions of surface defects, we should explain how we can relate them (a related discussion can be found in \cite{Chacaltana:2012zy}):
Given a parabolic subalgebra $\mathfrak{p}=\mathfrak{l}\oplus\mathfrak{n}$, the nilpotent orbit $\mathcal{O}_\mathfrak{p}$ associated to it is the maximal orbit that has a representative $X\in \mathcal{O}_\mathfrak{p}$ for which $X\in \mathfrak{n}$.
This induced orbit agrees with what is referred to as the Richardson orbit of $\mathfrak{p}$.
If $\mathfrak{g}$ is $A_n$ or $D_n$, this map can be most easily described using the semi-simple pole of the Higgs field. We represent the pole in the first fundamental representation, and assign a Young diagram (with $n+1$ or $2n$ boxes, respectively) to it by counting the multiplicities of the eigenvalues. For $A_n$, these Young diagrams are given by the sizes of the blocks making up the Levi subalgebra $\mathfrak{l}$ (see Figure \ref{fig:nilradical}).
To this Young diagram, we can apply the so-called Spaltenstein map \cite{Spaltenstein:1982}, which gives another Young diagram of the same size \cite{Collingwood:1993}. For $A_n$, this map is just the transposition.
This Young diagram labels the nilpotent orbit describing a defect according to \cite{Chacaltana:2012zy}; adding this nilpotent element to the Higgs field describes a Coulomb deformation of the theory $T^{2d}_{m_s\rightarrow\infty}$, meaning we are moving away from the root of the Higgs branch.
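For $A_n$, the transposition just described is a one-line computation on partitions; the sketch below checks a few cases, e.g. that the Levi partition $[2,1,1]$ of $\mathfrak{p}_{\{\alpha_1\}}$ maps to the orbit $[3,1]$, and that the Borel's Levi $[1,1,1,1]$ maps to the regular orbit $[4]$:

```python
# The Spaltenstein map for A_n is transposition of the Young diagram:
# the column lengths of the Levi partition give the orbit label.

def transpose(partition):
    """Transpose a partition given as a weakly decreasing list of parts."""
    return [sum(1 for p in partition if p > i) for i in range(partition[0])]

assert transpose([2, 1, 1]) == [3, 1]
assert transpose([1, 1, 1, 1]) == [4]
assert transpose([2, 2]) == [2, 2]             # self-dual partition
assert transpose(transpose([3, 1])) == [3, 1]  # transposition is an involution
print("A_n Spaltenstein (transposition) checks passed")
```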
Young diagrams are not available for exceptional Lie algebras, but the correspondence can nevertheless be described using the so-called Bala--Carter labels \cite{Bala:1976msaa,*Bala:1976msab, Haouzi:2016}.
Thus, we get a map which associates one of the theories in \cite{Chacaltana:2012zy} to the 2d theory $T^{2d}_{m_s\rightarrow\infty}$. This was checked explicitly by comparing to the data in \cite{Chacaltana:2011ze,Chacaltana:2010ks,Chacaltana:2014jba}. Furthermore, we will revisit this correspondence when considering the Seiberg--Witten curves of our theories in section \ref{sec:nullstaterels}.
\begin{example}
Let us show how to get the nilpotent orbits of $A_3$ in Figure \ref{fig:hassea3} from parabolic subalgebras. To assign the right nilpotent orbit to them, we take the transpose of the partition describing the Levi subalgebra. The resulting Young diagram labels a nilpotent orbit, which describes a Coulomb deformation of the theory. Since this partition is the same one that is assigned to the pole of the Higgs field (in the first fundamental representation), we can also directly get the nilpotent orbit from the Higgs field data.
The correspondence we get can be read off from Table \ref{tab:spalta3} below.
\begin{table}[htp]
\renewcommand{\arraystretch}{1.2}
\centering
\begin{tabular}{cc}
$\Theta$&$\mathcal{O}$\\
\hline
$\varnothing$&[4]\\
$\{\alpha_i\}\,\scriptstyle i=1,2,3$&[3,1]\\
$\{\alpha_1,\alpha_3\}$&[2,2]\\
$\{\alpha_1,\alpha_2\}, \{\alpha_2,\alpha_3\}$&[2,1,1]\\
$\{\alpha_1,\alpha_2,\alpha_3\}$&[1,1,1,1]
\end{tabular}
\caption{In this table, we read off which parabolic subalgebras of $A_3$ (labelled by a subset $\Theta$ of positive simple roots) induce which nilpotent orbits $\mathcal{O}$ (labelled by Young diagrams).}
\label{tab:spalta3}
\end{table}
\end{example}
\clearpage
\section{Surface Defect classification and $\mathcal{W}(\mathfrak{g})$-algebras}
\label{sec:toda}
In \cite{Aganagic:2015cta}, the partition function of the $(2,0)$ $\mathfrak{g}=ADE$ little string on $\cal{C}$ with certain D5 brane defects is shown to be equal to a $q$-deformation of the $\mathfrak{g}$-Toda CFT conformal block on $\cal{C}$, with vertex operators determined by positions and types of defects.
In this section, we analyze the previous classification of defects of the little string and its relation to parabolic subalgebras from the point of view of the dual $\mathfrak{g}$-type Toda CFT.
Strictly speaking, the theory dual to the little string is a $q$-deformation of $\mathfrak{g}$-type Toda, which has a deformed $\mathcal{W}(\mathfrak{g})$-algebra symmetry, and is therefore not a CFT \cite{Frenkel:1998}; for an analysis in this deformed setting, see \cite{kimura:2015rgi}. For our purposes, it will be enough to turn off that deformation and work with the usual Toda CFT and its $\mathcal{W}(\mathfrak{g})$-algebra symmetry; this is the counterpart to the $m_s$ to infinity limit in the $(2,0)$ little string description, which gives the $(2,0)$ 6d CFT.
\subsection{Levi subalgebras from level-1 null states of Toda CFT}
\label{sec:nullstates}
In free field formalism, the $ADE$ Toda field theory can be written in terms of $n={\rm rk}(\mathfrak{g})$ free bosons in two dimensions with a background charge contribution and the Toda potential that couples them:
\begin{equation}\label{Todaaction}
S_{Toda} = \int dz d{\bar z} \;\sqrt g \; g^{z{\bar z}}[\left(\partial_z \vec\varphi\cdot \partial_{\bar z} \vec\varphi\right)+ \left(\vec{\rho}\cdot \vec\varphi\right)\,Q R + \sum_{a=1}^n e^{\vec{\alpha}_a\cdot\vec\varphi/b} ].
\end{equation}
The field $\vec\varphi$ is a vector in the $n$-dimensional (co-)weight space, the inner product is the Killing form on the Cartan subalgebra of $\mathfrak{g}$, $\vec\rho$ is the Weyl vector, and $Q=b+ 1/b$. The $\vec \alpha_a$ label the simple positive roots.\\
The Toda CFT has an extended conformal symmetry, a ${\mathcal{W}}({{\mathfrak{g}}})$-algebra symmetry.
The elements of the Cartan subalgebra $\mathfrak{h}\subset \mathfrak{g}$ define the highest weight states $|\vec\beta\rangle$ of the $\mathcal{W}(\mathfrak{g})$-algebra. It turns out that null states of this algebra play a crucial role in classifying the defects we have identified from the gauge theory perspective. Indeed, as shown in \cite{Kanno:2009ga} for $\mathfrak{g}=A_n$, punctures can be classified via level 1 null states of the Toda CFT. This is also true for $D_n$ and $E_n$; in this section, we will review how to construct these null states, and we will see that they distinguish the same parabolic subalgebras $\mathfrak{p}_\Theta$ of $\mathfrak{g}$ we encountered before. As we will explain, the set of simple roots $\Theta$ plays a very clear role in the $\mathcal{W}(\mathfrak{g})$-algebra null state condition.\\
We can use the vertex operators to construct highest weight states $|\vec\beta\rangle$ of the $\mathcal{W}(\mathfrak{g})$-algebra by acting on the vacuum, $|\vec\beta\rangle=\lim_{z\to0} e^{\vec{\beta}\cdot \vec{\varphi}(z)}|0\rangle$. These give rise to a Verma module over $|\vec\beta\rangle$ by acting with $\mathcal{W}(\mathfrak{g})$-algebra generators. For some of the $|\vec\beta\rangle$, these representations are degenerate, because they contain a null state; we say that $|\chi\rangle$, in the Verma module over $|\vec\beta\rangle$, is a \emph{level $k$ null state} of the $\mathcal{W}(\mathfrak{g})$-algebra if for all spins $s$:
\begin{align}
W^{(s)}_{n}|\chi\rangle&=0,\quad \forall n>0,\\
W^{(2)}_{0}|\chi\rangle&=(E_{\beta}+k)|\chi\rangle,
\end{align}
where $W^{(2)}_{0}|\vec\beta\rangle=E_{\beta}|\vec\beta\rangle$.\\
The Verma module over $|\vec\beta\rangle$ contains such a null state at level $k$ if the Ka\v c determinant at level $k$ vanishes. For any semi-simple $\mathfrak{g}$, this determinant at level $k$ is a non-zero factor times
\begin{equation}
\label{eq:kac}
\prod_{\substack{\vec\alpha\in\Phi\\m,n\leq k}} \left((\vec{\beta}+\alpha_+\vec{\rho}+\alpha_- \vec{\rho}^{\,\vee})\cdot \vec{\alpha}-(\tfrac12\vec\alpha^2\,m\alpha_++n\alpha_-)\right)^{p_N(k-mn)},
\end{equation}
where $p_N(l)$ counts the partitions of $l$ with $N$ colours and $\Phi$ is the set of all roots of $\mathfrak{g}$ \cite{Bouwknegt:1992wg}. For us, $(\alpha_+,\alpha_-)=(b,1/b)$.
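The combinatorial ingredient $p_N(l)$ is easy to generate from its generating function $\prod_{k\geq 1}(1-q^k)^{-N}$; as a sanity check, the following Python sketch (the function name is ours) computes these coloured partition counts:

```python
def coloured_partitions(N, lmax):
    """Coefficients p_N(0..lmax) of prod_{k>=1} (1 - q^k)^(-N)."""
    coeffs = [1] + [0] * lmax
    for k in range(1, lmax + 1):
        for _ in range(N):  # one factor 1/(1 - q^k) per colour
            for l in range(k, lmax + 1):
                coeffs[l] += coeffs[l - k]
    return coeffs

# N = 1 reproduces the ordinary partition numbers
print(coloured_partitions(1, 6))  # [1, 1, 2, 3, 5, 7, 11]
```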
Note that this determinant is invariant only under the shifted action of the Weyl group,
\begin{equation}
\vec{\beta}\mapsto w(\vec{\beta}+\alpha_+\vec{\rho}+\alpha_- \vec{\rho}^{\,\vee})-(\alpha_+\vec{\rho}+\alpha_- \vec{\rho}^{\,\vee}),
\end{equation}
where $w$ is the ordinary Weyl action.
If $\mathfrak{g}$ is simply laced, and $\vec\alpha=\vec\alpha_i$ is a simple root, the condition that this determinant vanishes can be phrased as
\begin{equation}
\vec\beta\cdot\vec\alpha_i=(1-m)\alpha_++(1-n)\alpha_-.
\end{equation}
We see that any $\vec{\beta}$ with $\vec\beta\cdot \vec\alpha_i=0$ for a \emph{simple root} $\vec\alpha_i$ gives rise to a level 1 null state, and if $Q:=(\alpha_++\alpha_-)\to 0$, a null state at level 1 occurs if $\vec\beta\cdot \vec\alpha=0$ for any $\vec\alpha\in\Phi$. Furthermore, in this limit, the shift in the Weyl group action disappears. It is enough to work in this ``semi-classical'' limit for our purposes, so we will set $Q$ to 0 in what follows.\\
We can explicitly construct these null states: Consider the \emph{screening charge operators}
\begin{equation}
Q_i^{\pm}=\oint \frac{dz}{2\pi i} \exp(i\alpha_\pm \vec\alpha_i\cdot \vec\phi)
\end{equation}
and observe that
\begin{equation}
[W^{(k)}_n,Q^\pm_i]=0.
\end{equation}
The level 1 null state is then
\begin{equation}
Q_i^+|\vec{\beta}-\alpha_+\vec{\alpha}_i\rangle.
\end{equation}
Explicit forms of these null states for $\mathfrak{g}=A_n$ or $D_n$ are shown in the examples of section \ref{sec:examples}. The relation to the parabolic subalgebras introduced in section \ref{sec:parastrings} is immediate: we simply associate a generic highest weight state $|\vec\beta\rangle$, whose Verma module contains the corresponding level 1 null states, satisfying
\[
\vec\beta\cdot\vec\alpha_i=0 \quad \forall \vec\alpha_i\in\Theta
\]
with the parabolic subalgebra $\mathfrak{p}_\Theta$.
We also note that this $\vec\beta$ defines a semi-simple element in $\mathfrak{g}$; this is just the residue of the Higgs field at the puncture, as explained in section \ref{sec:higgstopara}.\\
We show next that these null states induce relations in the Seiberg--Witten curve of the theory $T^{2d}_{m_s\rightarrow\infty}$. Indeed, the Seiberg--Witten curve of $T^{2d}_{m_s\rightarrow\infty}$ \eqref{Scurve} can be obtained from a free field realization of the $\mathcal{W}(\mathfrak{g})$-algebra. We will simply read off the null states as relations between the curve coefficients. Generically, these relations only involve semi-simple elements of the algebra $\mathfrak{g}$. In section \ref{sec:nullstaterels}, we will see that these relations are still preserved when one additionally introduces certain nilpotent deformations.\\
When working in the $q$-deformed setting, the formula for the Ka\v c determinant is an exponentiated version of \eqref{eq:kac} \cite{Bouwknegt:1998da}. This implies that the null states can be defined analogously for the $q$-deformed $\mathcal{W}(\mathfrak{g})$-algebra. \\
\subsection{Seiberg--Witten curves from $\mathcal{W}(\mathfrak{g})$-algebras}
\label{sec:curves}
As we reviewed previously, the Seiberg--Witten curve of $T^{2d}_{m_s\rightarrow\infty}$ is the spectral curve equation
\begin{equation}
\label{eq:swcurve}
\det{}_\mathfrak{R}(\phi-p)=0.
\end{equation}
In our case, $\phi$ has a simple pole such that the residue is a semi-simple element of $\mathfrak{g}$, which we can write as
\begin{equation}
\vec\beta=\sum\limits_{\omega_i\in \mathcal{W}_S}\hat\beta^i \omega_i.
\end{equation}
To find the curve near the pole, which we assume to be at $z=0$, we can just choose some convenient representation $\mathfrak{R}$, where the residue of $\phi$ is diagonal, and given by $\text{diag}(\beta_1,\beta_2,\ldots)=:M$. Then $\phi=\frac{M}{z}+A$, with $A$ a generic element in $\mathfrak{g}$.
We now expand eq.\ \eqref{eq:swcurve} and write the curve as
\begin{equation}
0=\det\left(-p\cdot\mathds{1}+\frac{M}{z}+A\right)=(-p)^{\dim(\mathfrak{R})}+\sum_{s}p^{\dim(\mathfrak{R})-s}\varphi^{(s)},
\end{equation}
where $\varphi^{(s)}$ is a meromorphic differential, i.e.\ $\varphi^{(s)}=\sum\limits_{k=0}^s \frac{\varphi^{(s)}_{k}}{z^k}$, where the $\varphi^{(s)}_k$ are regular functions of $\beta^i$ and $a_{ij}$ (the entries of $A$).
Since $M$ is diagonal, this determinant just picks up the diagonal terms $a_{ii}$ of $A$, which we identify with the gauge couplings of the quiver theory.\\
Now, we can also construct the Seiberg--Witten curve of $T^{2d}_{m_s\rightarrow\infty}$ from the $\mathcal{W}(\mathfrak{g})$-algebra \cite{Kanno:2009ga,Keller:2011ek}: For this, we need to perform a Drinfeld--Sokolov reduction to obtain explicit $\mathcal{W}(\mathfrak{g})$-algebra generators in the free field realization\footnote{We thank Kris Thielemans for sending us his \texttt{OPEDefs.m} package \cite{Thielemans:1991uw}, which allowed us to do these calculations.}.
Setting $Q=0$ gives us a direct connection to the two dimensional quiver defined by the semi-simple element $\vec\beta\in\mathfrak{g}$ (cf.\ section \ref{sec:higgstopara}): We can identify the poles of the Seiberg--Witten differentials with expectation values of the $\mathcal{W}(\mathfrak{g})$-algebra generators in the state $|\vec\beta\rangle$:
\begin{equation}
\varphi^{(s)}=\langle\vec\beta|W^{(s)}|\vec\beta\rangle .
\end{equation}
We checked this relation explicitly for $A_n$ and $D_n$ theories.
\begin{example}
\label{ex:swcurvedet}
Let us look at the curve describing the full puncture for $\mathfrak{g}=A_2$:
Take the fundamental three-dimensional representation of $\mathfrak{sl}_3$ and write
\begin{equation}
M=\begin{pmatrix}
\beta_1&0&0\\
0&\beta_2&0\\
0&0&-\beta_1-\beta_2
\end{pmatrix},\quad
A=\begin{pmatrix}
a_{11}&a_{12}&a_{13}\\
a_{21}&a_{22}&a_{23}\\
a_{31}&a_{32}&-a_{11}-a_{22}\\
\end{pmatrix}.
\end{equation}
Then the curve can be expanded, and we read off the differentials. For example, $\varphi^{(2)}$, the coefficient multiplying $p$, has the form
\begin{equation}
\varphi^{(2)}=\frac{\varphi^{(2)}_2}{z^2}+\frac{\varphi^{(2)}_1}{z}+\varphi^{(2)}_0,
\end{equation}
where \begin{align}
\varphi^{(2)}_2&=\frac12\left(\beta_1^2+\beta_2^2+(-\beta_1-\beta_2)^2\right)=:\frac12(\vec\beta)^2,\\
\varphi^{(2)}_1&=a_{11}(2\beta_1+\beta_2)+a_{22}(\beta_1+2\beta_2).
\end{align}
Furthermore,
\begin{equation}
\begin{split}
\label{eq:a2l1det}
\varphi^{(3)}_3&=-\beta_1^2\beta_2-\beta_2^2\beta_1,\\
\varphi^{(3)}_2&=a_{11}(-2\beta_1\beta_2-\beta_2^2)+a_{22}(-2\beta_1\beta_2-\beta_1^2).
\end{split}
\end{equation}
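The Laurent coefficients quoted above can be verified symbolically by expanding the determinant \eqref{eq:swcurve} directly. A minimal sympy sketch (our own verification script, using the same matrices $M$ and $A$ as in this example):

```python
import sympy as sp

p, z, b1, b2 = sp.symbols('p z beta1 beta2')
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i + 1}{j + 1}'))
A[2, 2] = -A[0, 0] - A[1, 1]          # tracelessness of sl(3)
M = sp.diag(b1, b2, -b1 - b2)

# expand det(-p*1 + M/z + A) as a polynomial in p
curve = sp.expand((-p * sp.eye(3) + M / z + A).det())
phi2 = curve.coeff(p, 1)              # coefficient of p^{3-2}: varphi^(2)
phi3 = curve.coeff(p, 0)              # coefficient of p^{3-3}: varphi^(3)

def laurent(expr, k):
    """Coefficient of z**(-k) in a Laurent polynomial with poles of order <= 3."""
    return sp.expand(expr * z**3).diff(z, 3 - k).subs(z, 0) / sp.factorial(3 - k)

a11, a22 = A[0, 0], A[1, 1]
assert sp.simplify(laurent(phi2, 2) - (b1**2 + b2**2 + b1 * b2)) == 0
assert sp.simplify(laurent(phi2, 1) - (a11 * (2 * b1 + b2) + a22 * (b1 + 2 * b2))) == 0
assert sp.simplify(laurent(phi3, 3) + b1**2 * b2 + b2**2 * b1) == 0
assert sp.simplify(laurent(phi3, 2)
                   - (a11 * (-2 * b1 * b2 - b2**2) + a22 * (-2 * b1 * b2 - b1**2))) == 0
```

The four assertions reproduce $\varphi^{(2)}_2$, $\varphi^{(2)}_1$, $\varphi^{(3)}_3$ and $\varphi^{(3)}_2$ as stated in the text.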
Now from the CFT side, for $\mathfrak{g}=A_2$, define $X^j=i\partial\phi^j$. In the fundamental representation, $X^1+X^2+X^3=0$. Then the generators are just the energy momentum tensor
\begin{equation*}
T(z)=W^{(2)}(z)=\frac13 (\normOrd{X^1X^1} +\normOrd{X^2X^2}+\normOrd{X^3X^3} -\normOrd{X^1X^2} - \normOrd{X^1X^3} -\normOrd{X^2X^3})
\end{equation*}
and the spin 3 operator
\begin{equation*}
\begin{split}
W^{(3)}(z)=\normOrd{&\left(\frac23 X^1 - \frac13 X^2 - \frac13 X^3\right)\cdot\left(-\frac13 X^1 + \frac23 X^2 - \frac13 X^3\right)\cdot\\
&\cdot\left(-\frac13 X^1 - \frac13 X^2 + \frac23 X^3\right)}\, .
\end{split}
\end{equation*}
For the full puncture, we find at once that $\langle \vec\beta|L_0|\vec\beta\rangle$ is equal to $\varphi^{(2)}_2$ from above, while $\langle \vec\beta|W^{(3)}_{0}|\vec\beta\rangle$ is equal to $\varphi^{(3)}_3$, as expected.
For the level 1 modes, one finds
\begin{align}
\langle\vec\beta|W^{(2)}_{-1}|\vec\beta\rangle&=(2\beta_1+\beta_2)\langle\vec\beta|j^1_{-1}|\vec\beta\rangle+(\beta_1+2\beta_2)\langle\vec\beta|j^2_{-1}|\vec\beta\rangle,\\
\langle\vec\beta|W^{(3)}_{-1}|\vec\beta\rangle&=(-2\beta_1\beta_2-\beta_2^2)\langle\vec\beta|j^1_{-1}|\vec\beta\rangle+(-\beta_1^2-2\beta_1\beta_2)\langle\vec\beta|j^2_{-1}|\vec\beta\rangle,
\end{align}
where $j_k^i$ denotes the $k$-th mode of $X^i$.
Observe that this has the form \eqref{eq:a2l1det} if we identify $\langle\vec\beta|j^i_{-1}|\vec\beta\rangle$ with the $i$-th gauge coupling constant.
\end{example}
For more complicated defects, the $\mathcal{W}(\mathfrak{g})$-algebra generators will have terms that are derivatives of $X$ --- these are set to zero in the semiclassical $Q\rightarrow 0$ limit we are considering; after doing so, the reasoning is as above.
\subsection{Null state relations}
\label{sec:nullstaterels}
Punctures that are not fully generic are determined by semi-simple elements $\vec\beta\in\mathfrak{g}$ whose Verma modules contain null states at level one. Since the eigenvalues of the level one $\mathcal{W}(\mathfrak{g})$-algebra generators appear as coefficients in the curve, the existence of these null states induces some relations between these coefficients.
For $\mathfrak{g}=A_n$ and $D_n$ in the fundamental representation, the pattern is easy to see. The condition $\vec\beta\cdot\vec\alpha=0$ for some positive root $\vec\alpha$ will cause some of the entries of $M=\text{diag}(\beta_1,\beta_2,\ldots)$ to be equal to each other; if the entry $\beta_i$ occurs $k$ times, we get null states by letting the operator
\begin{equation}
\sum_{s} \beta_i^s W_{-1}^{(\dim(\mathfrak{R})-s)},
\end{equation}
and its $k-1$ derivatives with respect to $\beta_i$, act on $|\vec\beta\rangle$. Thus, each theory induces some characteristic null state relations which are realized in the Seiberg--Witten curve.
We now use this observation to connect these curves to nilpotent orbits: note that all the curves considered so far were written as
\begin{equation}
\label{eq:swsemi}
\det\left(-p\cdot\mathds{1}+\frac{M}{z}+A\right)=0
\end{equation}
for some diagonal $M$ and a generic $A$ in $\mathfrak{g}$. In the nilpotent orbit literature, the curves considered in \cite{Chacaltana:2010ks,Chacaltana:2011ze,Chacaltana:2014jba} have the form
\begin{equation}
\det\left(-p\cdot\mathds{1}+\frac{X}{z}+A\right)=0,
\end{equation}
where, again, $A$ is a generic element in $\mathfrak{g}$, and $X$ is a representative of a nilpotent orbit $\mathcal{O}_X$.
We can now simply combine these two poles and form a curve of the form
\begin{equation}
\label{eq:swall}
\det\left(-p\cdot\mathds{1}+e \; \frac{X}{z}+\frac{M}{z}+A\right)=0,
\end{equation}
where $M$ is semi-simple, $X\in\mathcal{O}_X$ is nilpotent and $e$ is a parameter.
We will test the correspondence between theories defined by nilpotent orbits and theories defined by semi-simple elements from this vantage point.
Recall from section \ref{sec:nil} that the semi-simple element $M\in\mathfrak{g}$ induces a nilpotent orbit $\mathcal{O}$. We observe the following facts:
\begin{itemize}
\item Whenever an orbit $\mathcal{O}^\prime\preceq\mathcal{O}$, it is \emph{always} possible to find an $X\in\mathcal{O}^\prime$ such that all the null state relations of the curve \eqref{eq:swsemi} are still satisfied by the curve \eqref{eq:swall}.
\item Whenever an orbit $\mathcal{O}^\prime\npreceq\mathcal{O}$, it is \emph{never} possible to find an $X\in\mathcal{O}^\prime$ such that all the null state relations of the curve \eqref{eq:swsemi} are still satisfied by the curve \eqref{eq:swall}.
\end{itemize}
This gives a prescription for allowed deformations; from the perspective of the theory $T^{2d}_{m_s\rightarrow\infty}$, this corresponds to leaving the root of the Higgs branch by turning on certain Coulomb moduli.
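For $\mathfrak{g}=A_{n-1}$, where nilpotent orbits are labelled by partitions of $n$ and the closure order $\preceq$ is the dominance order on partitions, the ordering entering the two statements above can be tested directly. A small sketch (the helper is our own, not from the text):

```python
def dominates(lam, mu):
    """Dominance order on partitions of the same integer: lam >= mu iff
    every partial sum of lam is at least the corresponding one of mu.
    For A-type, this is the closure order on nilpotent orbits."""
    if sum(lam) != sum(mu):
        return False
    k = max(len(lam), len(mu))
    l = list(lam) + [0] * (k - len(lam))
    m = list(mu) + [0] * (k - len(mu))
    return all(sum(l[:i + 1]) >= sum(m[:i + 1]) for i in range(k))

# [3,1] lies above [2,2], while [3,1,1,1] and [2,2,2] are incomparable
assert dominates([3, 1], [2, 2]) and not dominates([2, 2], [3, 1])
assert not dominates([3, 1, 1, 1], [2, 2, 2])
assert not dominates([2, 2, 2], [3, 1, 1, 1])
```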
\begin{example} For $\mathfrak{g}=A_2$, the only interesting state is $\vec\beta=(\beta_1,\beta_1,-2\beta_1)$; we can get the level one coefficients of the curve by setting $\beta_1=\beta_2$ in example \ref{ex:swcurvedet}:
\begin{equation}
\begin{split}
\varphi^{(2)}_1=\langle W^{(2)}_{-1}\rangle&=3\beta_1(a_{11}+a_{22}),\\
\varphi^{(3)}_2=\langle W^{(3)}_{-1}\rangle&=-3\beta_1^2(a_{11}+a_{22}),
\end{split}
\end{equation}
so we see that
\begin{equation}
\label{eq:nullstatea2}
\langle W^{(3)}_{-1}\rangle+\beta_1\langle W^{(2)}_{-1}\rangle=0.
\end{equation}
If we now add the nilpotent element $X=\begin{pmatrix}
0&0&1\\
0&0&0\\
0&0&0
\end{pmatrix},$ then
\begin{equation}
\begin{split}
\varphi^{(2)}_1&=3\beta_1(a_{11}+a_{22})+e \; a_{31},\\
\varphi^{(3)}_2&=-3\beta_1^2(a_{11}+a_{22})-e \; \beta_1 a_{31},
\end{split}
\end{equation}
and the null state relation \eqref{eq:nullstatea2} is still satisfied.
\end{example}
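The deformation argument in this example can likewise be checked by computer algebra. In the following sympy sketch (our own, with the same conventions as example \ref{ex:swcurvedet}) we add the nilpotent element $e\,X$ to the curve and confirm that the null state relation \eqref{eq:nullstatea2} survives:

```python
import sympy as sp

p, z, b1, e = sp.symbols('p z beta1 e')
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i + 1}{j + 1}'))
A[2, 2] = -A[0, 0] - A[1, 1]                       # tracelessness of sl(3)
M = sp.diag(b1, b1, -2 * b1)                       # semi-simple part, beta2 = beta1
X = sp.Matrix([[0, 0, 1], [0, 0, 0], [0, 0, 0]])   # nilpotent deformation

curve = sp.expand((-p * sp.eye(3) + e * X / z + M / z + A).det())

def laurent(expr, k):
    """Coefficient of z**(-k) in a Laurent polynomial with poles of order <= 3."""
    return sp.expand(expr * z**3).diff(z, 3 - k).subs(z, 0) / sp.factorial(3 - k)

W2 = laurent(curve.coeff(p, 1), 1)   # varphi^(2)_1 = <W^(2)_{-1}>
W3 = laurent(curve.coeff(p, 0), 2)   # varphi^(3)_2 = <W^(3)_{-1}>

a11, a22, a31 = A[0, 0], A[1, 1], A[2, 0]
assert sp.simplify(W2 - (3 * b1 * (a11 + a22) + e * a31)) == 0
assert sp.simplify(W3 - (-3 * b1**2 * (a11 + a22) - e * b1 * a31)) == 0
assert sp.simplify(W3 + b1 * W2) == 0   # the null state relation still holds
```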
\section{Defects of (2,0) Little String versus Defects of (2,0) CFT}
Up until now, we have been using the little string theory as a tool to derive codimension-two defects of the $(2,0)$ CFT, and in particular to exhibit the parabolic subalgebras that arise in that limit. In this section, we keep $m_s$ finite, and comment on the classification of defects of the $(2,0)$ little string proper. In particular, as we emphasized in section 3, when we work with the little string and not its conformal field theory limit, parabolic subalgebras are in general not visible (exceptions are $\mathfrak{g}=A_n$, as we illustrated in Figure \ref{fig:nilradical}, and a few low rank cases for $\mathfrak{g}=D_n$ and $\mathfrak{g}=E_n$).\\
We also address a question that was not answered so far: certain nilpotent orbits of $\mathfrak{g}$ are not induced from any parabolic subalgebra; the simplest example is the minimal nilpotent orbit of $D_4$. These describe nontrivial defects of the $(2,0)$ CFT, so one should ask whether they arise at all from the little string, since so far all the quiver theories we constructed distinguished a parabolic subalgebra. We will see that these exotic defects do indeed originate from the little string. To properly analyze them, we must first understand how flowing on the Higgs branch of a defect is realized in representation theory.
\subsection{$T^{2d}$ and Higgs flow as Weight Addition}
\label{ssec:weightadd}
In this section, we describe an effective and purely group-theoretical way to flow on the Higgs branch of different 2d quiver theories $T^{2d}$, for any simple $\mathfrak{g}$. We show that in the $A_n$ case, this agrees with standard brane engineering and Hanany-Witten transitions \cite{Hanany:1996ie}. As an application, this procedure will be used to analyze the punctures that fall outside of the parabolic subalgebra classification we have spelled out so far.
Our setup will be the usual one in this paper: we consider the quiver gauge theory $T^{2d}$ that describes the low energy limit of D3 branes wrapping 2-cycles of the ALE space $X$ times $\mathbb{C}$. The D3 branes are points on the Riemann surface $\mathcal{C}$ and on the torus $T^2$.\\
We claim that moving on the Higgs branch of $T^{2d}$ translates to a weight addition procedure in the algebra: this makes use of the fact that a weight belonging to a fundamental representation can always be written as a sum of new weights. Each of these should be in the orbit of some fundamental weight (the two orbits do not have to be the same here), subject to the rule that no subset of them adds up to the zero weight.
After moving on the Higgs branch of $T^{2d}$, we obtain a new 2d theory $T^{2d'}$ with a new set of weights, but the same curve. When the gauge theory can be engineered using branes, going from $T^{2d}$ to $T^{2d'}$ is called a Hanany--Witten transition \cite{Hanany:1996ie}. There, as we will see, a D5 brane passing an NS5 brane creates or removes D3 branes stretching between the two. When a brane construction is not available, the weight description we give is still valid, for an arbitrary simply laced Lie algebra.\\
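As an illustration of the weight addition rule for $\mathfrak{g}=A_4$ (the case of Figure \ref{fig:quivtrans}), one can enumerate all splittings of a given weight into two fundamental-representation weights; the helper names below are our own, and we use the standard description of $A_n$ fundamental weights via subsets of the orthogonal basis:

```python
from itertools import combinations

def an_weights(n, k):
    """Dynkin labels of all weights of the k-th fundamental rep of A_n,
    realized as the k-element subsets of {1, ..., n+1}."""
    out = []
    for S in combinations(range(1, n + 2), k):
        s = set(S)
        out.append(tuple((i in s) - (i + 1 in s) for i in range(1, n + 1)))
    return out

def splittings(n, target):
    """Unordered pairs of distinct fundamental-rep weights summing to target."""
    allw = {w for k in range(1, n + 1) for w in an_weights(n, k)}
    pairs = set()
    for w1 in allw:
        w2 = tuple(t - a for t, a in zip(target, w1))
        if w2 in allw and w1 != w2:      # S-rule: no doubled weight
            pairs.add(frozenset((w1, w2)))
    return pairs

# the four decompositions of -w_2 = [0,-1,0,0] shown in Figure quivtrans
res = splittings(4, (0, -1, 0, 0))
assert frozenset({(1, -1, 0, 0), (-1, 0, 0, 0)}) in res
assert frozenset({(0, -1, 1, -1), (0, 0, -1, 1)}) in res
assert frozenset({(0, -1, 1, 0), (0, 0, -1, 0)}) in res
assert frozenset({(0, -1, 0, 1), (0, 0, 0, -1)}) in res
```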
Note that this weight addition formalism also gives a generalization of the S-configuration \cite{Hanany:1996ie}:
No weight in a fundamental representation can ever be written as the sum of two identical weights.
In the $A_n$ case, where we have a brane picture, this statement translates immediately to the S-rule, which is then automatically satisfied. This argument is however applicable to $D_n$ and $E_n$ theories as well, so this gives an $ADE$-type S-rule.
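For A-type, this generalized S-rule can be checked exhaustively: the fundamental representations of $A_n$ are minuscule, so all Dynkin labels lie in $\{-1,0,1\}$ and no weight vanishes, which forbids any weight from being twice another. A small sketch (our own helper), here for $A_4$:

```python
from itertools import combinations

def an_weights(n, k):
    """Dynkin labels of the weights of the k-th fundamental rep of A_n."""
    return [tuple((i in set(S)) - (i + 1 in set(S)) for i in range(1, n + 1))
            for S in combinations(range(1, n + 2), k)]

ws = {w for k in range(1, 5) for w in an_weights(4, k)}
# every label is -1, 0 or 1, and no fundamental-rep weight is zero ...
assert all(set(w) <= {-1, 0, 1} and any(w) for w in ws)
# ... so no weight equals twice another weight: the S-rule holds
assert not {tuple(2 * c for c in w) for w in ws} & ws
```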
\subsection{Brane Engineering and Weights}
\begin{figure}[h!]
\begin{center}
\resizebox{0.8\textwidth}{!}{%
\begin{tikzpicture}[font=\small]
\node at (0,1) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw[-,red,thick] (0.4,0.6)--(1.6,0.6);
\draw (1.6,0.6) node[cross=3pt,red,thick]{};
\node[align=center] at (0.6,-1.7) {$[-1,1,0]$\\\footnotesize{\color{red}$-w_3+\alpha_2+\alpha_3$}};
\end{tikzpicture}
};
\node at (3,1) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw (0.6,0.6) node[cross=3pt,red,thick]{};
\node[align=center] at (0.8,-1.7) {$[0,-1,0,0]$\\\footnotesize{\color{red}$-w_2$}};
\end{tikzpicture}
};
\node at (6,1) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw[-,red,thick] (0.4,0.6)--(0.6,0.6);
\draw (0.6,0.6) node[cross=3pt,red,thick]{};
\node[align=center] at (0.4,-1.7) {$[-1,0]$\\\footnotesize{\color{red}$-w_1$}};
\end{tikzpicture}
};
\node at (0,-3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw[-,red,thick] (0.4,0.63)--(1.2,0.63);
\draw[-,red,thick] (0.8,0.57)--(1.2,0.57);
\draw (1.2,0.6) node[cross=3pt,red,thick]{};
\node[align=center] at (0.4,-1.7) {$[-1,0]$\\\footnotesize{\color{red}$-w_1$}};
\end{tikzpicture}
};
\node at (3,-3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,red,thick] (0.8,0.6)--(1.4,0.6);
\draw (1.4,0.6) node[cross=3pt,red,thick]{};
\node[align=center] at (0.8,-1.7) {$[0,-1,1,-1]$\\\footnotesize{\color{red}$-w_3+\alpha_3$}};
\end{tikzpicture}
};
\node at (6,-3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,red,thick] (0.6,0.6)--(1.6,0.6);
\draw (0.6,0.6) node[cross=3pt,red,thick]{};
\node[align=center] at (0.8,-1.7) {$[0,-1,0,1]$\\\footnotesize{\color{red}$-w_3+\alpha_3+\alpha_4$}};
\end{tikzpicture}
};
\end{tikzpicture}%
}
\end{center}
\caption{How to read off weights from a system of D3, D5, and NS5 branes.}
\label{fig:branereading}
\end{figure}
For $A_n$ theories and $D_n$ theories obtainable by an orbifolding procedure, the above discussion can be realized by brane engineering of the theory.
We can conveniently represent the weights of the algebra, and in particular their Dynkin labels, using a configuration of D3 branes stretching between NS5 and D5 branes. To see how this works, let us focus on the $i$-th Dynkin label of a weight:
\begin{itemize}
\item
A D3 brane coming from the left ending on the $i$-th NS5 contributes $-1$ to the weight's $i$-th label.
\item
A D3 brane coming from the right ending on the $i$-th NS5 contributes $+1$ to the weight's $i$-th label.
\item
A D3 brane coming from the left ending on the $(i+1)$-th NS5 contributes $+1$ to the weight's $i$-th label.
\item
A D3 brane coming from the right ending on the $(i+1)$-th NS5 contributes $-1$ to the weight's $i$-th label.
\item
Finally, a D5 brane present between the $i$-th and $(i+1)$-th NS5's contributes $-1$ to the weight's $i$-th label.
\end{itemize}
All in all, a D3 brane stretching between a D5 brane and an NS5 brane (while possibly going through some other NS5 branes) produces a weight, whose Dynkin labels are a combination of $1$'s, $-1$'s, and $0$'s. The map is not injective: for a given weight, there can be many brane configurations.
The Dynkin labels therefore record the total charge of the D3 brane configuration, and the statement that the weights sum to 0 becomes the statement that the total D3 brane flux vanishes. Note that the configuration of branes spells out a quiver gauge theory at low energies, which is the expected theory $T^{2d}$ we would write down based on the weight data $\mathcal{W}_S$. See Figure \ref{fig:quivtrans} for some examples.\\
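The reading rules above can be packaged into a small routine; the data format below (a D3 brane recorded by the NS5 it ends on plus the side it comes from, a D5 brane by the chamber it sits in) is our own bookkeeping, not the paper's, and the tests reproduce two panels of Figure \ref{fig:branereading}:

```python
def weight_from_branes(n, d3s=(), d5s=()):
    """Dynkin labels of an A_n weight read off a brane configuration.

    The n+1 NS5 branes are numbered 1..n+1.  Each D3 brane is a pair
    (k, side): it ends on NS5 number k, coming from side 'L' or 'R'.
    Each D5 brane is given by the chamber it sits in: j means between
    NS5 number j and j+1 (j outside 1..n: outside all NS5s).
    """
    labels = [0] * (n + 2)              # padded, 1-indexed
    for k, side in d3s:
        sgn = -1 if side == 'L' else 1
        labels[k] += sgn                # ending on the k-th NS5 ...
        labels[k - 1] -= sgn            # ... acts oppositely on label k-1
    for j in d5s:
        if 1 <= j <= n:
            labels[j] -= 1              # D5 between NS5 j and j+1
    return labels[1:n + 1]

# a lone D5 between the 2nd and 3rd NS5s of A_4 gives -w_2
assert weight_from_branes(4, d5s=[2]) == [0, -1, 0, 0]
# A_2: D3 ending on NS5 2 from the right, D5 in chamber 2, gives -w_1
assert weight_from_branes(2, d3s=[(2, 'R')], d5s=[2]) == [-1, 0]
```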
\begin{figure}[h!]
\begin{subfigure}{1.0\textwidth}
\centering
\begin{tikzpicture}[font=\small]
\node at (0,3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,red,thick] (-0.4,0.57)--(0,0.57);
\draw[-,red,thick] (-0.4,0.63)--(0.4,0.63);
\draw[-,violet,thick] (0,-0.4)--(0.4,-0.4);
\draw[-,violet,thick] (0.4,-0.35)--(0.8,-0.35);
\draw[-,violet,thick] (0.4,-0.45)--(0.8,-0.45);
\draw[-,violet,thick] (0.8,-0.4)--(1.2,-0.4);
\draw[-,violet,thick] (0.8,-0.5)--(1.2,-0.5);
\draw[-,violet,thick] (1.2,-0.35)--(1.6,-0.35);
\draw (-0.4,0.6) node[cross=3pt,red,thick]{};
\draw (1.0,-0.1) node[cross=3pt,violet,thick]{};
\node[align=left] at (0.6,-1.7) {$[1,-1,0,0]+[-1,0,0,0]$\\\footnotesize{\color{red}$-w_1+\alpha_1\qquad\; -w_1$}};
\end{tikzpicture}
};
\node at (8,3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,red,thick] (0.8,0.7)--(1.4,0.7);
\draw[-,red,thick] (1.2,0.5)--(1.8,0.5);
\draw[-,violet,thick] (0,-0.4)--(0.4,-0.4);
\draw[-,violet,thick] (0.4,-0.35)--(0.8,-0.35);
\draw[-,violet,thick] (0.4,-0.45)--(0.8,-0.45);
\draw[-,violet,thick] (0.8,-0.4)--(1.2,-0.4);
\draw[-,violet,thick] (0.8,-0.5)--(1.2,-0.5);
\draw[-,violet,thick] (1.2,-0.35)--(1.6,-0.35);
\draw (1.4,0.7) node[cross=3pt,red,thick]{};
\draw (1.8,0.5) node[cross=3pt,red,thick]{};
\draw (1.0,-0.1) node[cross=3pt,violet,thick]{};
\node[align=left] at (0.6,-1.7) {$[0,-1,1,-1]+[0,0,-1,1]$\\\footnotesize{\color{red}$-w_3+\alpha_3\qquad\quad -w_4+\alpha_4$}};
\end{tikzpicture}
};
\node at (4,0.3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,violet,thick] (0,-0.4)--(0.4,-0.4);
\draw[-,violet,thick] (0.4,-0.35)--(0.8,-0.35);
\draw[-,violet,thick] (0.4,-0.45)--(0.8,-0.45);
\draw[-,violet,thick] (0.8,-0.4)--(1.2,-0.4);
\draw[-,violet,thick] (0.8,-0.5)--(1.2,-0.5);
\draw[-,violet,thick] (1.2,-0.35)--(1.6,-0.35);
\draw (0.6,0.6) node[cross=3pt,red,thick]{};
\draw (1.0,-0.1) node[cross=3pt,violet,thick]{};
\node[align=left] at (0.6,-1.7) {$[0,-1,0,0]$\\\footnotesize{\color{red}$-w_2$}};
\end{tikzpicture}
};
\node at (0,-3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,red,thick] (1.2,0.7)--(1.4,0.7);
\draw[-,red,thick] (0.8,0.5)--(1.8,0.5);
\draw[-,violet,thick] (0,-0.4)--(0.4,-0.4);
\draw[-,violet,thick] (0.4,-0.35)--(0.8,-0.35);
\draw[-,violet,thick] (0.4,-0.45)--(0.8,-0.45);
\draw[-,violet,thick] (0.8,-0.4)--(1.2,-0.4);
\draw[-,violet,thick] (0.8,-0.5)--(1.2,-0.5);
\draw[-,violet,thick] (1.2,-0.35)--(1.6,-0.35);
\draw (1.4,0.7) node[cross=3pt,red,thick]{};
\draw (1.8,0.5) node[cross=3pt,red,thick]{};
\draw (1.0,-0.1) node[cross=3pt,violet,thick]{};
\node[align=left] at (0.6,-1.7) {$[0,-1,1,0]+[0,0,-1,0]$\\\footnotesize{\color{red}$-w_4+\alpha_3+\alpha_4\quad -w_3$}};
\end{tikzpicture}
};
\node at (8,-3) {\begin{tikzpicture}[baseline]
\draw [-] (0,1) -- (0,-1);
\draw [-] (0.4,1) -- (0.4,-1);
\draw [-] (0.8,1) -- (0.8,-1);
\draw [-] (1.2,1) -- (1.2,-1);
\draw [-] (1.6,1) -- (1.6,-1);
\draw[-,red,thick] (0.6,0.7)--(1.6,0.7);
\draw[-,red,thick] (1.6,0.5)--(1.8,0.5);
\draw[-,violet,thick] (0,-0.4)--(0.4,-0.4);
\draw[-,violet,thick] (0.4,-0.35)--(0.8,-0.35);
\draw[-,violet,thick] (0.4,-0.45)--(0.8,-0.45);
\draw[-,violet,thick] (0.8,-0.4)--(1.2,-0.4);
\draw[-,violet,thick] (0.8,-0.5)--(1.2,-0.5);
\draw[-,violet,thick] (1.2,-0.35)--(1.6,-0.35);
\draw (0.6,0.7) node[cross=3pt,red,thick]{};
\draw (1.8,0.5) node[cross=3pt,red,thick]{};
\draw (1.0,-0.1) node[cross=3pt,violet,thick]{};
\node[align=left] at (0.6,-1.7) {$[0,-1,0,1]+[0,0,0,-1]$\\\footnotesize{\color{red}$-w_3+\alpha_3+\alpha_4\quad -w_4$}};
\end{tikzpicture}
};
\draw[->, -stealth, line width=0.4em](3.25,2.5) -- (2.25,3.25);
\draw[->, -stealth, line width=0.4em](5,2.5) -- (6,3.25);
\draw[->, -stealth, line width=0.4em](3.25,-1.5) -- (2.25,-2.25);
\draw[->, -stealth, line width=0.4em](5,-1.5) -- (6,-2.25);
\end{tikzpicture}
\label{fig:FourTransitions}
\end{subfigure}
\\
\begin{subfigure}{1.0\textwidth}
\centering
\begin{tikzpicture}[font=\small]
\node at (-1,2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$2$};
\node[circle, draw](k4) at (3*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N3) at (2*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}
};
\node at (8,2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$3$};
\node[circle, draw](k4) at (3*0.9cm,0) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N3) at (2*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N4) at (3*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
};
\node at (3.75,0) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$2$};
\node[circle, draw](k4) at (3*0.9cm,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N3) at (2*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k2) -- (N2);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}
};
\node at (-1,-2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$3$};
\node[circle, draw](k4) at (3*0.9cm,0) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N3) at (2*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N4) at (3*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
};
\node at (8,-2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$3$};
\node[circle, draw](k4) at (3*0.9cm,0) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N3) at (2*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N4) at (3*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
};
\draw[->, -stealth, line width=0.4em](2.5,0.75) -- (1.5,1.25);
\draw[->, -stealth, line width=0.4em](4.9,0.75) -- (5.9,1.25);
\draw[->, -stealth, line width=0.4em](2.5,-1.25) -- (1.5,-1.75);
\draw[->, -stealth, line width=0.4em](4.9,-1.25) -- (5.9,-1.75);
\end{tikzpicture}
\label{fig:FourQuivers}
\end{subfigure}
\caption{Flowing on the Higgs branch of $T^{2d}$: starting from the theory in the middle, these are all the theories one can obtain by replacing the weight on node 2 by a sum of two weights. The top picture shows the detailed brane picture for each of the quivers. These all have a low-energy 2d quiver gauge theory description (the ones shown below). At the root of the Higgs branch, the partition functions of all 5 theories are equal.}
\label{fig:quivtrans}
\end{figure}
\subsection{Polarized and Unpolarized Punctures of the Little String}
\label{sec:types}
\begin{figure}[htb]
\begin{subfigure}{1.0\textwidth}
\centering
\begin{tikzpicture}[font=\footnotesize]
\draw[dashed, line width=1.5pt, green!70!black] (0,3.5) -- (0,-3.5);
\draw (-3.6,3) -- (-3.6,-3);
\draw (-2.8,3) -- (-2.8,-3);
\draw (-2,3) -- (-2,-3);
\draw (-1.2,3) -- (-1.2,-3);
\draw (-0.4,3) -- (-0.4,-3);
\draw (0.4,3) -- (0.4,-3);
\draw (1.2,3) -- (1.2,-3);
\draw (2,3) -- (2,-3);
\draw (2.8,3) -- (2.8,-3);
\draw (3.6,3) -- (3.6,-3);
\draw[white, line width=10pt] (-3.7,0) -- (3.7,0);
\node(c1) at (1.6,-1.5) {};
\node at ($(c1)+(0,0.5)$) {\color{red!70!yellow}$\omega_1$};
\draw[-,line width=5pt, red!70!yellow] ($(c1)+(-0.2,0.2)$) -- ($(c1)+(0.2,-0.2)$);
\draw[-,line width=5pt,red!70!yellow] ($(c1)+(-0.2,-0.2)$) -- ($(c1)+(0.2,0.2)$);
\node(c3) at (-1.6,-1.5) {};
\node at ($(c3)+(0,0.5)$) {\color{red!70!yellow}$\omega_1$};
\draw[-,line width=5pt, red!70!yellow] ($(c3)+(-0.2,0.2)$) -- ($(c3)+(0.2,-0.2)$);
\draw[-,line width=5pt, red!70!yellow] ($(c3)+(-0.2,-0.2)$) -- ($(c3)+(0.2,0.2)$);
\node(c2) at (3.2,2) {};
\node at ($(c2)+(0,0.5)$) {\color{red}$\omega_1$};
\draw[-,line width=5pt, red] ($(c2)+(-0.2,0.2)$) -- ($(c2)+(0.2,-0.2)$);
\draw[-,line width=5pt, red] ($(c2)+(-0.2,-0.2)$) -- ($(c2)+(0.2,0.2)$);
\node(c4) at (-3.2,2) {};
\node at ($(c4)+(0,0.5)$) {\color{red}$\omega_1$};
\draw[-,line width=5pt, red] ($(c4)+(-0.2,0.2)$) -- ($(c4)+(0.2,-0.2)$);
\draw[-,line width=5pt, red] ($(c4)+(-0.2,-0.2)$) -- ($(c4)+(0.2,0.2)$);
\draw[red!70!yellow] (-2.8,-1.4) -- (2,-1.4);
\draw[red!70!yellow] (-2,-1.6) -- (2.8,-1.6);
\node[align=left,text width=6.8cm] at (7.5,2) {$\omega_1: [-1,0,0,0,0]=\color{red}-w_1$};
\node[align=left,text width=6.8cm] at (7.5,-1.5) {$\omega_1: [-1,0,0,0,0]=\color{red!70!yellow}-w_3+\alpha_2+2\alpha_3+\alpha_4+\alpha_5$};
\end{tikzpicture}
\caption{The brane realization of the weight $[-1,0,0,0,0]$ of $D_5$. We started with the $A_9$ theory and performed a $\mathbb{Z}_2$ orbifold to obtain the picture. This weight can be written in two ways: by placing the D5 brane between the first two NS5 branes (top), the weight is written in an ``appropriate'' way. By placing the D5 brane between the ``wrong'' set of NS5 branes (bottom), the resulting quiver is unpolarized and does not distinguish a parabolic subalgebra.}
\label{fig:cases1and2}
\end{subfigure}
\\
\begin{subfigure}{1.0\textwidth}
\centering
\begin{tikzpicture}[font=\footnotesize]
\draw[dashed, line width=1.5pt, green!70!black] (0,3.5) -- (0,-0.5);
\draw (-2.8,3) -- (-2.8,0);
\draw (-2,3) -- (-2,0);
\draw (-1.2,3) -- (-1.2,0);
\draw (-0.4,3) -- (-0.4,0);
\draw (0.4,3) -- (0.4,0);
\draw (1.2,3) -- (1.2,0);
\draw (2,3) -- (2,0);
\draw (2.8,3) -- (2.8,0);
\node(c2) at (1.6,1.5) {};
\node at ($(c2)+(0,0.5)$) {\color{red}$\omega_1$};
\draw[-,line width=5pt, red!70!yellow] ($(c2)+(-0.2,0.2)$) -- ($(c2)+(0.2,-0.2)$);
\draw[-,line width=5pt, red!70!yellow] ($(c2)+(-0.2,-0.2)$) -- ($(c2)+(0.2,0.2)$);
\node(c4) at (-1.6,1.5) {};
\node at ($(c4)+(0,0.5)$) {\color{red}$\omega_1$};
\draw[-,line width=5pt, red!70!yellow] ($(c4)+(-0.2,0.2)$) -- ($(c4)+(0.2,-0.2)$);
\draw[-,line width=5pt, red!70!yellow] ($(c4)+(-0.2,-0.2)$) -- ($(c4)+(0.2,0.2)$);
\draw[red!70!yellow] (-2.8,1.6) -- (2,1.6);
\draw[red!70!yellow] (-2,1.4) -- (2.8,1.4);
\node[align=left,text width=6.8cm] at (7,1.5) {$\omega_1: [0,0,0,0]=\color{red!70!yellow}-w_2+\alpha_1+2\alpha_2+\alpha_3+\alpha_4$};
\end{tikzpicture}
\caption{Simplest unpolarized defect: the null weight $[0,0,0,0]$ of $D_4$, realized here with branes. We started with the $A_7$ theory and performed a $\mathbb{Z}_2$ orbifold to obtain the picture. This theory does not distinguish a parabolic subalgebra.}
\label{fig:case3}
\end{subfigure}
\end{figure}
We finally come to the description of defects in the little string that happen to fall outside the parabolic subalgebra classification we have spelled out so far.\\
Suppose we pick a weight in the $i$-th fundamental representation. Unless it is the null weight, it is in the orbit of one and only one fundamental weight, say the $j$-th one. In our entire discussion so far, and in all the examples of \cite{Aganagic:2015cta}, we had $i=j$. In terms of the gauge theory, if all weights are chosen so that $i=j$, then $T^{2d}_{m_s\rightarrow\infty}$ distinguishes a parabolic subalgebra, as explained in section \ref{ssec:3d}.
We call such a 2d theory \emph{polarized}.\footnote{The terminology here comes from the fact that the parabolic subgroup $\mathcal{P}$ in $T^*(G/\mathcal{P})$ is often called a polarization of some nilpotent orbit $\mathcal{O}$, through the resolution map $T^*(G/\mathcal{P})\rightarrow \overline{\mathcal{O}}$, with $\overline{\mathcal{O}}$ the closure of $\mathcal{O}$.}\\
However, in the $D_n$ and $E_n$ cases, it can also happen that $i\neq j$, or that the weight we pick is the null weight. See Figures \ref{fig:cases1and2} and \ref{fig:case3}. In terms of the gauge theory, if \emph{at least one} of the weights in ${\cal W}_{\cal S}$ falls under this category, the theory $T^{2d}_{m_s\rightarrow\infty}$ does \emph{not} distinguish a parabolic subalgebra.
We call such a 2d theory \emph{unpolarized}.\\
We saw in section \ref{ssec:weightadd} that if we start with a polarized theory $T^{2d}$, then after flowing on the Higgs branch, we still end up with a polarized theory $T^{2d'}$.
What happens to unpolarized theories? If we start with such a theory $T^{2d}$, then after moving on the Higgs branch, it is in fact always possible to end up with a theory $T^{2d'}$ that is polarized. This resulting polarized theory $T^{2d'}$ is of course highly specialized, since some masses have to be set equal to each other as a result of the Higgs flow.
This is the viewpoint we take to analyze all unpolarized theories: we will flow on the Higgs branch until they transition to polarized theories. In practice, this means that every ``problematic'' weight in an unpolarized theory can be written as a sum of weights to give a polarized theory. Note that for $A_n$, every quiver theory $T^{2d}$ is polarized, while this is not the case for $D_n$ and $E_n$. An illustration of how one can start with an unpolarized theory and arrive at a polarized theory is shown in Figure \ref{fig:D4null} below.\\
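The weight arithmetic behind this splitting can be verified directly in Dynkin labels. The following is a hypothetical check (not part of the paper's toolchain), assuming the $D_4$ Cartan matrix with node 2 as the trivalent node; it confirms that the null weight of $D_4$ is the sum of the two polarized weights $\omega'$ and $\omega''$:

```python
# Hypothetical check: Dynkin-label arithmetic for the D4 weight splitting,
# assuming the D4 Cartan matrix with node 2 as the trivalent node.
CARTAN_D4 = [
    [ 2, -1,  0,  0],
    [-1,  2, -1, -1],
    [ 0, -1,  2,  0],
    [ 0, -1,  0,  2],
]

def dynkin_labels(minus_w, root_coeffs):
    """Labels of -w + sum_j c_j alpha_j; each simple root alpha_j
    contributes row j of the Cartan matrix in the fundamental-weight basis."""
    labels = list(minus_w)
    for j, c in enumerate(root_coeffs):
        for i in range(4):
            labels[i] += c * CARTAN_D4[j][i]
    return labels

omega_p  = dynkin_labels([-1, 0, 0, 0], [0, 0, 0, 0])  # omega'  = -w_1
omega_pp = dynkin_labels([-1, 0, 0, 0], [2, 2, 1, 1])  # omega'' = -w_1 + 2a1+2a2+a3+a4
omega_1  = dynkin_labels([0, -1, 0, 0], [1, 2, 1, 1])  # null:   -w_2 + a1+2a2+a3+a4

assert omega_p  == [-1, 0, 0, 0]
assert omega_pp == [ 1, 0, 0, 0]
assert omega_1  == [ 0, 0, 0, 0]
# The unpolarized null weight is the sum of the two polarized weights:
assert [a + b for a, b in zip(omega_p, omega_pp)] == omega_1
```

The same helper reproduces any of the weight expansions quoted in the figure captions of this section.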
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[font=\footnotesize]
\draw[dashed, line width=1.5pt, green!70!black] (0,3.5) -- (0,-3.5);
\draw (-2.8,3) -- (-2.8,-3);
\draw (-2,3) -- (-2,-3);
\draw (-1.2,3) -- (-1.2,-3);
\draw (-0.4,3) -- (-0.4,-3);
\draw (0.4,3) -- (0.4,-3);
\draw (1.2,3) -- (1.2,-3);
\draw (2,3) -- (2,-3);
\draw (2.8,3) -- (2.8,-3);
\node(c1) at (3.6,-2.5) {};
\node at ($(c1)+(0.5,0.3)$) {\color{blue!80!black}$\omega_1^\prime$};
\node at ($(c1)+(0.5,-0.3)$) {\color{blue!80!black}$\omega_1^{\prime\prime}$};
\draw[-,line width=5pt, blue!80!black] ($(c1)+(-0.2,0.2)$) -- ($(c1)+(0.2,-0.2)$);
\draw[-,line width=5pt,blue!80!black] ($(c1)+(-0.2,-0.2)$) -- ($(c1)+(0.2,0.2)$);
\node(c3) at (-3.6,-2.5) {};
\node at ($(c3)+(-0.5,0.3)$) {\color{blue!80!black}$\omega_1^{\prime}$};
\node at ($(c3)+(-0.5,-0.3)$) {\color{blue!80!black}$\omega_1^{\prime\prime}$};
\draw[-,line width=5pt, blue!80!black] ($(c3)+(-0.2,0.2)$) -- ($(c3)+(0.2,-0.2)$);
\draw[-,line width=5pt, blue!80!black] ($(c3)+(-0.2,-0.2)$) -- ($(c3)+(0.2,0.2)$);
\node(c2) at (1.6,2.3) {};
\node at ($(c2)+(0,0.5)$) {\color{red!70!yellow}$\omega_1$};
\draw[-,line width=5pt, red!70!yellow] ($(c2)+(-0.2,0.2)$) -- ($(c2)+(0.2,-0.2)$);
\draw[-,line width=5pt, red!70!yellow] ($(c2)+(-0.2,-0.2)$) -- ($(c2)+(0.2,0.2)$);
\node(c4) at (-1.6,2.3) {};
\node at ($(c4)+(0,0.5)$) {\color{red!70!yellow}$\omega_1$};
\draw[-,line width=5pt, red!70!yellow] ($(c4)+(-0.2,0.2)$) -- ($(c4)+(0.2,-0.2)$);
\draw[-,line width=5pt, red!70!yellow] ($(c4)+(-0.2,-0.2)$) -- ($(c4)+(0.2,0.2)$);
\draw[red!70!yellow, thick] (-2.8,-2.4) -- (2,-2.4);
\draw[red!70!yellow, thick] (-2.8,2.4) -- (2,2.4);
\draw[red!70!yellow, thick] (-2,-2.6) -- (2.8,-2.6);
\draw[red!70!yellow, thick] (-2,2.2) -- (2.8,2.2);
\draw[blue!80!black, thick] (-3.6,-2.6) -- (-2,-2.6);
\draw[blue!80!black, thick] (-3.6,-2.4) -- (-2.8,-2.4);
\draw[blue!80!black, thick] (3.6,-2.6) -- (2.8,-2.6);
\draw[blue!80!black, thick] (3.6,-2.4) -- (2,-2.4);
\node at (5,2) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}};
\node at (5,-1.4) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}};
\draw[->, -stealth, line width=0.5em](5.0,1.2) -- (5.0,-1.0);
\draw[->, -stealth, line width=0.5em](-4.7,1.2) -- (-4.7,-1.0);
\node at(-4.7,2) {\large\textbf{Unpolarized}};
\node at(-4.7,-1.5) {\large\textbf{Polarized}};
\node[align=left] at (-4,-3.8) {\normalsize$\omega_1^{\prime\phantom{\prime}}: [-1,0,0,0]=\color{blue!80!black}-w_1$\\[0.1em]\normalsize$\omega_1^{\prime\prime}: [\phantom{-}1,0,0,0]={\color{blue!80!black}-w_1}{\color{red!70!yellow}+}{\color{blue!80!black}2}\color{red!70!yellow}\alpha_1+2\alpha_2+\alpha_3+\alpha_4$};
\node[align=left] at (-4,3.5) {\normalsize $\omega_1: [0,0,0,0]=\color{red!70!yellow}-w_2+\alpha_1+2\alpha_2+\alpha_3+\alpha_4$};
\draw[white, line width=10pt] (-2.9,0) -- (2.9,0);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray] (-7.79,3.13) -- (-7.79,3.78);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray] (-7.99,-4.42) -- (-7.99,-3.31);
\end{tikzpicture}
\end{center}
\caption{The brane picture for the zero weight of $D_4$ (top of the figure), which makes up an unpolarized theory at low energies. It is obtained after $\mathbb{Z}_2$ orbifolding of $A_7$. The D5 branes sit on top of the D3 branes, and all the D3 branes are stacked together.
After flowing on the Higgs branch, we end up with a polarized theory, with the two masses equal to each other.}
\label{fig:D4null}
\end{figure}
One should ask what happens to unpolarized defects in the context of Toda CFT. As we have seen in section \ref{sec:toda}, polarized defects are described by momenta that obey null state relations of the corresponding ${\cal W}(\mathfrak{g})$-algebra. This is consistent with what can be found in the class $S$ literature \cite{Chacaltana:2012zy}; for instance, for the minimal puncture of $D_4$ from Figure \ref{fig:D4null}, the defect in the CFT limit is predicted to have no flavor symmetry. In particular, it is unclear what vertex operator one would write in $D_4$-Toda; indeed, in the little string formalism, the defect is the null weight, which suggests a trivial conformal block with no vertex operator insertion! To investigate this issue more carefully, it is useful to keep $m_s$ finite and work in the little string proper; there, a computation in the spirit of \cite{Aganagic:2013tta,Aganagic:2014oia,Aganagic:2015cta} shows that the partition function of $T^{2d}$ is in fact \emph{not} a $q$-conformal block of $D_4$ Toda, due to subtleties of certain non-cancelling fugacities. In other words, the claim that the partition function of $T^{2d}$ is a $q$-conformal block of $\mathfrak{g}$-type Toda fails precisely when $T^{2d}$ is an unpolarized defect, and only for those cases.
\subsection{All Codimension-Two Defects of the (2,0) Little String}
From the considerations above, we get a complete list of the D5 brane defects of the $(2,0)$ little string that are points on $\mathcal{C}\times T^2$, and which preserve conformality (in a 4d sense, before $T^2$ compactification). These are the polarized and unpolarized punctures we presented. Each of them is characterized by a set of weights in $\mathfrak{g}$, which produce a quiver gauge theory at low energies, satisfying equation \eqref{conformal}. Enumerating the $(2,0)$ little string defects, for a given $\mathfrak{g}$, is then a \emph{finite} counting problem.
For $D_n$ and $E_n$, we find that the number of resulting theories $T^{2d}$ one obtains from specifying a set of weights, although finite, far exceeds the number of the CFT defects as enumerated in \cite{Chacaltana:2012zy}. What is happening is that in the CFT limit, many distinct defect theories $T^{2d}$ typically coalesce to one and the same defect theory $T^{2d}_{m_s\rightarrow \infty}$. The discussion in Figure \ref{fig:samequiverd4} illustrates this phenomenon. See also Figure \ref{fig:fullpuncturesd4} for the example of all theories $T^{2d}$ describing a generic full puncture of the $D_4$ little string.\\
\begin{figure}[hptb]
\begin{center}
\begin{tikzpicture}
\node at (-5,0) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$5$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node at (0,0) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$7$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$4$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$4$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node at (5,0) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$6$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$4$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$4$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node at (-6,-3) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$5$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$4$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$3$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node at (-2,-3) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$5$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$4$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node at (2,-3) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$6$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$4$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}};
\node at (6,-3) {\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$6$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$4$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}};
\end{tikzpicture}
\end{center}
\caption{All $D_4$ 2d quiver theories one obtains from a set ${\cal W}_{\cal S}$ of 5 weights, and which all denote full punctures. In the CFT limit, all these theories produce the same full puncture, denoted by the parabolic subalgebra $\mathfrak{p}_{\varnothing}$. In particular, the Coulomb branch of $T^{2d}_{m_s\rightarrow \infty}$ for all these theories has dimension twelve.}
\label{fig:fullpuncturesd4}
\end{figure}
An important point is that even though we focused on the case of a sphere with two full punctures and an additional arbitrary puncture, the formalism we developed is automatically suited to study a sphere with an arbitrary number of defects.
Simply
choose a set of weights ${\cal W}_{\cal S}$, as done before. If there are $k$ subsets of weights which add up to zero in ${\cal W}_{\cal S}$, then the little string is in fact compactified on a sphere with $k+2$ punctures. This just follows from linearity of equation \eqref{conformal}. In particular, for the case of the sphere with 3 punctures we have been analyzing, there is then no proper subset of weights in ${\cal W}_{\cal S}$ that adds up to zero. An immediate consequence is that not all quiver theories characterize a sphere with two full punctures and a third arbitrary one: some quivers represent composite arbitrary defects (and two full punctures). See Figure \ref{fig:fourpoint}.\\
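The counting rule above is easy to apply mechanically: enumerate the nonempty proper subsets of ${\cal W}_{\cal S}$ that sum to zero. The following hypothetical sketch does this for the $A_3$ set appearing on the left of Figure \ref{fig:fourpoint}, with weights given in Dynkin labels:

```python
# Hypothetical sketch: count the nonempty proper subsets of W_S that sum
# to zero; k such subsets means a sphere with k + 2 punctures.
from itertools import combinations

# The A_3 set from the left of Figure fig:fourpoint, in Dynkin labels:
weights = [(-1, 0, 0), (1, 0, 0), (1, -1, 0), (-1, 1, 0)]

def zero_sum_proper_subsets(ws):
    rank = len(ws[0])
    hits = []
    for r in range(1, len(ws)):            # proper: exclude the full set
        for idx in combinations(range(len(ws)), r):
            if all(sum(ws[i][c] for i in idx) == 0 for c in range(rank)):
                hits.append(idx)
    return hits

subsets = zero_sum_proper_subsets(weights)
k = len(subsets)
# The two simple punctures come from {omega_1, omega_2} and {omega_3, omega_4}:
assert subsets == [(0, 1), (2, 3)]
assert k + 2 == 4                          # a four-punctured sphere
```

For the three-punctured spheres analyzed in the rest of this section, the same function returns an empty list.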
\begin{figure}[htpb]
\begin{center}
\begin{tikzpicture}
\node at (0,0) {\begin{tikzpicture}[baseline=0pt]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}};
\node at (6,0.5) {\begin{tikzpicture}[baseline=0pt]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$6$};
\node[circle, draw](k3) at (2*0.9cm,0*0.9cm) {$9$};
\node[circle, draw](k4) at (3*0.9cm,0*0.9cm) {$6$};
\node[circle, draw](k5) at (4*0.9cm,0*0.9cm) {$3$};
\node[circle, draw](k6) at (2*0.9cm,1*0.9cm) {$6$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N6) at (2*0.9cm,2*0.9cm) {$3$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k3) to (k6);
\draw (k6) -- (N6);
\end{scope}
\end{tikzpicture}};
\node[align=left, text width=10em] at (0.5,-2.3) {$\omega_1: [-1,\phantom{-}0,\phantom{-}0]$\\$\omega_2: [\phantom{-}1,\phantom{-}0,\phantom{-}0]$\\\vspace*{1em}$\omega_3:[\phantom{-}1,-1,\phantom{-}0]$\\$\omega_4: [-1,\phantom{-}1,\phantom{-}0]$};
\node[align=left, text width=14em] at (6.2,-2.3) {$\omega_1: [\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0]$\\\vspace*{1em} $\omega_2: [\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0,-1]$\\$\omega_3: [\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}0,\phantom{-}1]$};
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray] (-1.65,-2.15) -- (-1.65,-1.05);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray] (-1.65,-3.58) -- (-1.65,-2.48);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray] (3.25,-3.31) -- (3.25,-2.21);
\draw [decoration={brace,amplitude=0.5em},decorate,ultra thick,gray] (3.25,-1.9) -- (3.25,-1.30);
\end{tikzpicture}
\end{center}
\caption{Left: a four-punctured sphere of $A_3$, with two maximal (full) punctures and two minimal (simple) punctures, both denoted by the parabolic subalgebra $\mathfrak{p}_{\{\alpha_2, \alpha_3\}}$. The two simple punctures indicate that there are two subsets of weights in ${\cal W}_{\cal S}$ that add up to zero. In this specific example, the fact that the weights $[1,-1,0]$ and $[-1,1,0]$ denote a simple puncture can easily be seen by applying a Weyl reflection about the first simple root of $A_3$. Right: a four-punctured sphere of $E_6$, with two maximal punctures and two other punctures; the first of these is the minimal puncture, denoted by the zero weight in the 6-th fundamental representation, and is unpolarized. The second puncture is polarized, and distinguishes the parabolic subalgebra $\mathfrak{p}_{\{\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5\}}$.}
\label{fig:fourpoint}
\end{figure}
As a final remark, let us mention that the techniques we used in this note to study codimension-two defects of the little string can also be applied to analyze codimension-four defects; these defects do not originate as D5 branes in the $(2,0)$ little string, but as D3 branes instead, before considering any $T^2$ compactification.
\clearpage
\section{Examples}
\label{sec:examples}
\subsection{$A_n$ Examples}
\begin{figure}[h!]
\hspace{3em}
\begin{tikzpicture}[align=center,font=\small, trim left]
\begin{scope}[auto, every node/.style={minimum size=1.25cm}]
\def 0.9cm {1.75cm}
\def 1cm {1cm}
\def 0 {0}
\def 180 {180}
\def 200 {200}
\def 340 {340}
\node[circle, draw](k1) at (0,0) {$n$};
\node[circle, draw](k2) at (1*0.9cm,0) {$n-1$};
\node[circle, draw](k3) at (2*0.9cm,0) {$n-2$};
\node[circle, draw](k4) at (3.5*0.9cm,0) {$2$};
\node[circle, draw](k5) at (4.5*0.9cm,0) {$1$};
\node[align=left] at (6.25*0.9cm,0) {\small $\Theta=\varnothing$};
\node[draw, inner sep=0.1cm,minimum size=1cm](N1) at (0*0.9cm,0.9cm) {$n+1$};
\draw[-] (k1) to[out=0,in=180] (k2);
\draw[-] (k2) to[out=0,in=180] (k3);
\draw[dashed] (k3) -- (k4);
\draw (k4) -- (k5);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\vspace{1em}
\hspace{3em}
\begin{tikzpicture}[align=center,font=\small, trim left]
\begin{scope}[auto, every node/.style={minimum size=1.25cm}]
\def 0.9cm {1.75cm}
\def 1cm {1cm}
\def 0 {0}
\def 180 {180}
\def 200 {200}
\def 340 {340}
\node[circle, draw](k1) at (0,0) {$n-1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$n-1$};
\node[circle, draw](k3) at (2*0.9cm,0) {$n-2$};
\node[circle, draw](k4) at (3.5*0.9cm,0) {$2$};
\node[circle, draw](k5) at (4.5*0.9cm,0) {$1$};
\node[align=left] at (6.25*0.9cm,0) {$\Theta=\{\alpha_i\}$,\\ \footnotesize $i=1,\ldots,n$.};
\node[draw, inner sep=0.1cm,minimum size=1cm](N1) at (0*0.9cm,0.9cm) {$n-1$};
\node[draw, inner sep=0.1cm,minimum size=1cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to[out=0,in=180] (k2);
\draw[-] (k2) to[out=0,in=180] (k3);
\draw[dashed] (k3) -- (k4);
\draw (k4) -- (k5);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}
\vspace{1em}
\hspace{3em}
\begin{tikzpicture}[align=center,font=\small, trim left]
\begin{scope}[auto, every node/.style={minimum size=1.25cm,inner sep=1}]
\def 0.9cm {1.75cm}
\def 1cm {1cm}
\def 0 {0}
\def 180 {180}
\def 200 {200}
\def 340 {340}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$1$};
\node[circle, draw](k3) at (2*0.9cm,0) {$1$};
\node[circle, draw](k4) at (3.5*0.9cm,0) {$1$};
\node[circle, draw](k5) at (4.5*0.9cm,0) {$1$};
\node[align=left] at (6.25*0.9cm,0) {\small $\Theta=\Delta\setminus\{\alpha_1\}$ or\\ \small $\Theta=\Delta\setminus\{\alpha_n\}$.};
\node[draw, inner sep=0.1cm,minimum size=1cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1cm](N5) at (4.5*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to[out=0,in=180] (k2);
\draw[-] (k2) to[out=0,in=180] (k3);
\draw[dashed] (k3) -- (k4);
\draw (k4) -- (k5);
\draw (k1) -- (N1);
\draw (k5) -- (N5);
\end{scope}
\end{tikzpicture}
\caption{The top quiver is the full puncture, denoted by the partition $[1^{n+1}]$. The middle quiver is the next to maximal puncture, with partition $[2,1^{n-1}]$. The bottom quiver is the simple puncture. It is denoted by the partition $[n,1]$, and has two associated parabolic subalgebras: $\mathfrak{p}_{\Delta\backslash \{\alpha_1\}}$ and $\mathfrak{p}_{\Delta\backslash \{\alpha_n\}}$.}
\label{fig:AnExamples}
\end{figure}
We can explicitly write the parabolic subalgebras in some representation; for $A_n$, it is customary to do so in the fundamental representation. Therefore, in what follows, the matrices are valued in $\mathfrak{sl}(n+1)$; a star $*_i$ denotes a nonzero complex number, with the label ``$i$'' standing for the positive root $e_i$; likewise, a star $*_{-i}$ denotes a nonzero complex number, with the label ``$-i$'' standing for the negative root $-e_i$. Unless specified otherwise, a partition refers to a semi-simple element denoting the Higgs field structure of the theory. These partitions are related to the nilpotent element partitions from section \ref{sec:nil} by transposition in the $A_n$ case, and more generally by the Spaltenstein map (cf.\ \cite{Collingwood:1993}). \\
\subsubsection{Maximal (``full'') Puncture}
We start with the set ${\cal W}_{\cal S}$ of all weights in the $n$-th fundamental representation (antifundamental). Writing $w_i$ for the highest weight of the $i$-th fundamental representation, the weights can be written as:
\begin{align*}
&\omega_1=-w_1\\
&\omega_2=-w_1+\alpha_1\\
&\omega_3=-w_1+\alpha_1+\alpha_2\\
&\vdots\;\;\;=\;\;\;\vdots\\
&\omega_{n+1}=-w_1+\alpha_1+\alpha_2+\ldots+\alpha_n,
\end{align*}
from which we read the top 2d quiver in Figure \ref{fig:AnExamples}. This is called the full puncture. We compute the inner product of the weights with the positive roots:\\
$\omega_1\equiv[-1,0,0,\ldots,0]$ has a negative inner product with:\\
$\alpha_1\; , \;\alpha_1 + \alpha_2\; ,\; \ldots, \; \alpha_1+\alpha_2+\ldots+\alpha_n$
\\
$\omega_2\equiv[1,-1,0,\ldots,0]$ has a negative inner product with:\\
$\alpha_2\; , \;\alpha_2 + \alpha_3\; ,\; \ldots, \; \alpha_2+\alpha_3+\ldots+\alpha_n$
\\
$\omega_3\equiv[0,1,-1,0,\ldots,0]$ has a negative inner product with:\\
$\alpha_3\; , \;\alpha_3 + \alpha_4\; ,\; \ldots, \; \alpha_3+\alpha_4+\ldots+\alpha_n$
\\
$\vdots\qquad\qquad\qquad\qquad\qquad\qquad\vdots$\\
$\omega_{n+1}\equiv[0,\ldots,0,0,1]$ has no negative inner product with any of the positive roots.\\
Since all of the positive roots of $\mathfrak{g}$ have a negative inner product with some weight, they define the nilradical $\mathfrak{n}_{\varnothing}$.
The parabolic subalgebra is $\mathfrak{p}_{\varnothing}$. It is denoted by the partition $[1^{n+1}]$, which is immediately readable from the Levi subalgebra with symmetry $S(U(1)^{n+1})$.
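The sign pattern listed above can be verified mechanically. The following hypothetical check takes $n=4$ and assumes the pairing is normalized so that $\langle w_i,\alpha_j\rangle=\delta_{ij}$ (valid for the simply-laced $A_n$), so that a weight pairs with the positive root $\alpha_j+\ldots+\alpha_k$ by summing its Dynkin labels over the interval $[j,k]$:

```python
# Hypothetical check for n = 4: the full-puncture weights of A_n and their
# negative pairings with positive roots, assuming <w_i, alpha_j> = delta_ij.
n = 4

def omega(m):
    """Dynkin labels of the m-th antifundamental weight, m = 1, ..., n+1."""
    labels = [0] * n
    if m > 1:
        labels[m - 2] = 1      # +w_{m-1} contribution
    if m <= n:
        labels[m - 1] = -1     # -w_m contribution
    return labels

# Positive roots of A_n are intervals alpha_j + ... + alpha_k with j <= k.
pos_roots = [(j, k) for j in range(1, n + 1) for k in range(j, n + 1)]

def pairing(w, jk):
    j, k = jk
    return sum(w[m - 1] for m in range(j, k + 1))

# omega_1 is negative exactly on alpha_1, alpha_1 + alpha_2, ...:
assert [jk for jk in pos_roots if pairing(omega(1), jk) < 0] == \
       [(1, 1), (1, 2), (1, 3), (1, 4)]
# omega_{n+1} is negative on no positive root:
assert not any(pairing(omega(n + 1), jk) < 0 for jk in pos_roots)
# Together the weights cover every positive root, giving n_varnothing:
covered = {jk for m in range(1, n + 2) for jk in pos_roots
           if pairing(omega(m), jk) < 0}
assert covered == set(pos_roots)
```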
\clearpage
The Levi decomposition gives:
\begin{align*}
\mathfrak{p}_\varnothing&=\begin{pmatrix}
*&*_{1}&*_{1+2}&\cdots&*_{1+\ldots+(n-1)}&*_{1+\ldots+n}\\[5pt]
0&* &*_{2}&\cdots&\cdots&*_{2+\ldots+n}\\
\vdots & \ddots &\ddots&\ddots&\vdots&\vdots\\
\vdots & \, &\ddots&\ddots&*_{(n-1)}&*_{(n-1)+n}\\
\vdots &\,&\,&\ddots& * & *_{n}\\[5pt]
0 &\cdots&\cdots&\cdots& 0 &*
\end{pmatrix},
\end{align*}
with $\mathfrak{p}_\varnothing=\mathfrak{l}_\varnothing\oplus\mathfrak{n}_\varnothing$, where
\begin{align*}
\mathfrak{l}_\varnothing=\begin{pmatrix}
*&0&\cdots&\cdots&\cdots&0\\
0 &*&\ddots&\,&\,&\vdots\\
\vdots&\ddots&\ddots&\ddots&\,&\vdots\\
\vdots &\,& \ddots &\ddots&\ddots&\vdots\\
\vdots & \,&\, &\ddots&*&0\\[5pt]
0 &\cdots&\cdots&\cdots& 0 &*
\end{pmatrix}\end{align*}
and
\begin{align*}
\mathfrak{n}_\varnothing=
\begin{pmatrix}
0&*_{1}&*_{1+2}&\cdots&*_{1+\ldots+(n-1)}&*_{1+\ldots+n}\\[5pt]
0&0 &*_{2}&\cdots&\cdots&*_{2+\ldots+n}\\
\vdots & \ddots &\ddots&\ddots&\vdots&\vdots\\
\vdots & \, &\ddots&\ddots&*_{(n-1)}&*_{(n-1)+n}\\
\vdots &\,&\,&\ddots& 0 & *_{n}\\[5pt]
0 &\cdots&\cdots&\cdots& 0 &0
\end{pmatrix}.
\end{align*}
We see explicitly that the nonzero inner products $\langle e_{\gamma},\omega_i\rangle $ make up the $i$-th row of the nilradical $\mathfrak{n}_{\varnothing}$.\\
In this example, there is in fact one other set ${\cal W}_{\cal S}$ that singles out the nilradical $\mathfrak{n}_{\varnothing}$; it is the set of all weights in the first fundamental representation of $A_n$. The resulting 2d quiver is again the top one in Figure \ref{fig:AnExamples}, but with reversed orientation.\\
Now we analyze this defect from the Toda CFT perspective: starting from our set ${\cal W}_{\cal S}$ and recalling that $\beta=\sum_{i=1}^{|{\cal W}_{\cal S}|}\hat{\beta}_i w_i$, the set ${\cal W}_{\cal S}$ defines the Toda momentum vector $\beta$. We can write this momentum $\beta$ explicitly as the semi-simple element $\mathrm{diag}(\beta_1,\beta_2,\ldots,\beta_{n+1})$, where all the entries add up to $0$. One checks at once that the commutant of this element is the Levi subalgebra $\mathfrak{l}_\varnothing$ written above.\\
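The statement about the commutant can be seen entry by entry: for $X$ in $\mathfrak{gl}(n+1)$ and diagonal $\beta$, one has $[X,\beta]_{ab}=X_{ab}(\beta_b-\beta_a)$, so the entry $X_{ab}$ survives only when $\beta_a=\beta_b$. A hypothetical sketch, with made-up diagonal entries for $n=3$:

```python
# Hypothetical sketch: which entries of X commute with a diagonal beta.
# [X, beta]_{ab} = X_{ab} (beta_b - beta_a) vanishes iff beta_a = beta_b.
def commutant_support(beta):
    n = len(beta)
    return [(a, b) for a in range(n) for b in range(n) if beta[a] == beta[b]]

# Generic case (all entries distinct, traceless): only the diagonal
# survives, i.e. the Levi l_varnothing with symmetry S(U(1)^4).
assert commutant_support([3, 1, -1, -3]) == [(a, a) for a in range(4)]

# Two equal entries: a 2x2 block opens up, giving a larger Levi of the
# kind that appears for less generic punctures.
support = commutant_support([3, 3, -1, -5])
assert (0, 1) in support and (1, 0) in support
```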
The cotangent bundle of the flag manifold, $T^*(G/\mathcal{P})$, associated to this defect also appears as the resolution of the Higgs branch of the same quiver,
\begin{center}
\begin{tikzpicture}[baseline=0pt,font=\small]
\begin{scope}[auto, every node/.style={minimum size=1.4cm}]
\def 0.9cm {1.8cm}
\node[circle, draw](k1) at (0,0) {$n$};
\node[circle, draw](k2) at (1*0.9cm,0) {$n-1$};
\node[circle, draw](k3) at (3*0.9cm,0*0.9cm) {$2$};
\node[circle, draw](k4) at (4*0.9cm,0*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.3cm](N1) at (-1*0.9cm,0*0.9cm) {$n+1$};
\draw[-] (k1) to (k2);
\draw[dashed] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\end{center}
which is an instance of mirror symmetry, since the complete flag variety is self-mirror.
Furthermore, it is easy to see from the method of section \ref{sec:nillevi} that the nilpotent orbit associated to this theory is the maximal nilpotent orbit of $A_n$, denoted by the partition $[n+1]$.
\subsubsection{Next to Maximal Puncture}
We start by constructing the set ${\cal W}_{\cal S}$:
Consider all the $n+1$ weights of the $n$-th fundamental representation. For each $1\leq i\leq n$, the set contains two unique weights $\omega_i$ and $\omega_{i+1}$ such that $\alpha_i=\omega_{i+1}-\omega_i$, with $\alpha_i$ the $i$-th simple root. Remove $\omega_i$ and $\omega_{i+1}$ from the set, and replace them with the single weight $\omega'\equiv \omega_i+\omega_{i+1}$. The weight $\omega'$ is always a weight in the $(n-1)$-th fundamental representation of $A_n$. Therefore, the set we consider is made of $n-1$ weights in the $n$-th fundamental representation, and the weight $\omega'$ in the $(n-1)$-th fundamental representation. It is easy to check that the sum of these weights is 0, so these $n$ weights define a valid set ${\cal W}_{\cal S}$. The weights once again define a 2d quiver gauge theory $T^{2d}$; it is shown in the middle of Figure \ref{fig:AnExamples}. All of the positive roots except the $i$-th simple root $\alpha_i$ have a negative inner product with at least one weight $\omega\in{\cal W}_{\cal S}$, so these positive roots define the nilradical $\mathfrak{n}_{\{\alpha_i\}}$.\\
For a given simple root $\alpha_i$, the parabolic subalgebra is then $\mathfrak{p}_{\{\alpha_i\}}$. It is denoted by the partition $[2,1^{n-1}]$, which is immediately readable from the Levi subalgebra with symmetry $S(U(2)\times U(1)^{n-1})$.
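Under the same pairing convention used for the full puncture (sum the Dynkin labels of a weight over the interval of a positive root), one can confirm the construction above for a small case. The following hypothetical sketch takes $A_3$ with $i=1$:

```python
# Hypothetical check for A_3, i = 1: the next-to-maximal set
# W_S = {omega' = omega_1 + omega_2, omega_3, omega_4}, in Dynkin labels.
n, i = 3, 1
W_S = [(0, -1, 0),   # omega' = [-1,0,0] + [1,-1,0]
       (0, 1, -1),   # omega_3
       (0, 0, 1)]    # omega_4

# The n weights sum to zero, so W_S is a valid set:
assert tuple(map(sum, zip(*W_S))) == (0, 0, 0)

# Positive roots of A_n are intervals alpha_j + ... + alpha_k with j <= k.
pos_roots = [(j, k) for j in range(1, n + 1) for k in range(j, n + 1)]

def pairing(w, jk):
    j, k = jk
    return sum(w[m - 1] for m in range(j, k + 1))

# Every positive root except alpha_i pairs negatively with some weight,
# so together those roots span the nilradical n_{alpha_i}:
nilradical = [jk for jk in pos_roots
              if any(pairing(w, jk) < 0 for w in W_S)]
assert nilradical == [jk for jk in pos_roots if jk != (i, i)]
```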
\clearpage
The Levi decomposition gives:
\begin{align*}
\mathfrak{p}_{\{\alpha_i\}}&=\begin{pmatrix}
*&*_{1}&*_{1+2}&\cdots&\cdots&\cdots&\cdots&\cdots&*_{1+\ldots+(n-1)}&*_{1+\ldots+n}\\[5pt]
0&* &*_{2}&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&*_{2+\ldots+n}\\
\vdots &\ddots &\ddots&\ddots&\,&\,& \,&\,&\vdots&\vdots\\
\vdots &\, &\ddots&\ddots&\ddots & \,&\,&\,&\vdots&\vdots\\
\vdots & \,&\, &0&*&*_{i}&\,&\,&\vdots&\vdots\\
\vdots & \,\,& &\,&*_{-i}&*&\ddots&\,&\vdots&\vdots\\
\vdots &\,&\,&\,&\,& 0 & \ddots&\ddots &\vdots&\vdots\\
\vdots &\,&\,&\,&\,&\,&\ddots&\ddots&*_{(n-1)}&*_{(n-1)+n}\\
\vdots &\,&\,&\,&\,&\,&\,&\ddots&*&*_n\\[5pt]
0 &\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots& 0 &*
\end{pmatrix},
\end{align*}
with $\mathfrak{p}_{\{\alpha_i\}}=\mathfrak{l}_{\{\alpha_i\}}\oplus\mathfrak{n}_{\{\alpha_i\}}$, where
\begin{align*}
\mathfrak{l}_{\{\alpha_i\}}=\begin{pmatrix}
*&0&\cdots&\cdots&\cdots&\cdots&\cdots&0\\
0 &*&\ddots&\,&\,&\,&\,&\vdots\\
\vdots&\ddots&\ddots&0&\,&\,&\,&\vdots\\
\vdots &\,& 0 &*&*_i&\,&\,&\vdots\\
\vdots & \,&\, &*_{-i}&*&0&\,&\vdots\\
\vdots & \,&\, &\,&0&\ddots&\ddots&\vdots\\
\vdots &\,&\,&\,&\,&\ddots&*&0\\[5pt]
0 &\cdots&\cdots&\cdots&\cdots&\cdots& 0 & *
\end{pmatrix}
\end{align*}
and
\begin{align*}
\mathfrak{n}_{\{\alpha_i\}}=\begin{pmatrix}
0&*_{1}&*_{1+2}&\cdots&\cdots&\cdots&\cdots&\cdots&*_{1+\ldots+(n-1)}&*_{1+\ldots+n}\\[5pt]
0&0 &*_{2}&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&*_{2+\ldots+n}\\
\vdots &\ddots &\ddots&\ddots&\,&\,& \,&\,&\vdots&\vdots\\
\vdots &\, &\ddots&\ddots&*_{i-1} & \,&\,&\,&\vdots&\vdots\\
\vdots & \,&\, &\ddots&0&0&\,&\,&\vdots&\vdots\\
\vdots & \,\,& &\,&0&0&*_{i+1}&\,&\vdots&\vdots\\
\vdots &\,&\,&\,&\,& \ddots & \ddots&\ddots &\vdots&\vdots\\
\vdots &\,&\,&\,&\,&\,&\ddots&\ddots&*_{(n-1)}&*_{(n-1)+n}\\
\vdots &\,&\,&\,&\,&\,&\,&\ddots&0&*_n\\[5pt]
0 &\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots& 0 &0
\end{pmatrix}.
\end{align*}
There is in fact another set ${\cal W}_{\cal S}$ that spells out the nilradical $\mathfrak{n}_{\{\alpha_i\}}$ for fixed $\alpha_i$; just as for the full puncture, the corresponding 2d quiver would be the middle one in Figure \ref{fig:AnExamples}, but again with reversed orientation.\\
Now we rederive this result from the Toda CFT perspective: consider once again the set ${\cal W}_{\cal S}$. We define the momentum vector $\beta$ from $\beta=\sum_{i=1}^{|{\cal W}_{\cal S}|}\hat\beta_i \omega_i$. It is easy to check that
\[
\langle\beta,\alpha_i\rangle=0
\]
for the simple root $\alpha_i$, since the $i$-th Dynkin label of $\beta$ is its only vanishing one.
This defines a null state at level 1 in the CFT. One can easily check that there is only one other set ${\cal W}_{\cal S}$ such that $\langle\beta,\alpha_i\rangle=0$; this alternate choice gives the reflection of our 2d quiver. Also note that the commutant of the semi-simple element $\beta$ is the Levi subalgebra $\mathfrak{l}_{\{\alpha_i\}}$ written above in the fundamental representation.\\
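The vanishing of $\langle\beta,\alpha_i\rangle$ for arbitrary coefficients $\hat\beta_i$ can be checked mechanically; the sketch below (our illustration, with the $A_3$ set merged across $\alpha_2$ hardcoded in the Dynkin basis) uses the fact that $\langle\beta,\alpha_j\rangle$ is, up to normalization, the $j$-th Dynkin label of $\beta$.

```python
# A_3, merged across alpha_2: every weight in W_S has vanishing
# 2nd Dynkin label, so <beta, alpha_2> = 0 for ANY coefficients.
import random

W_S = [(0, 0, 1), (1, 0, -1), (-1, 0, 0)]

for _ in range(100):
    b = [random.uniform(-5, 5) for _ in W_S]
    beta = [sum(bk * w[j] for bk, w in zip(b, W_S)) for j in range(3)]
    assert beta[1] == 0.0          # the level-1 null state condition
```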
We make the following important observations:
\begin{itemize}
\item This puncture is in fact described by many sets ${\cal W}_{\cal S}$. To obtain them, one simply considers all possible Weyl group actions that preserve the root sign: $w(\alpha_i)$ must be a positive root. Then all possible momenta are given by $\beta'=w(\beta)$. Note that the condition $\langle\beta,\alpha_i\rangle=0$ is Weyl invariant: $\langle\beta,\alpha_i\rangle=\langle\beta',w(\alpha_i)\rangle$. Therefore, from the CFT perspective, the momentum of this different theory satisfies instead:
\[
\langle\beta',w(\alpha_i)\rangle=0.
\]
Because $w(\alpha_i)$ is a positive \textit{non}-simple root, this is strictly speaking a null state condition of $A_n$-Toda at level higher than 1. As explained in section \ref{sec:nullstates}, this higher-level distinction is not relevant in the semi-classical limit $\hbar \rightarrow 0$ (or $Q \rightarrow 0$ in Toda), which is enough for our purposes. The explicit null state for all the theories obtained from the sets ${\cal W}_{\cal S}$ can then be written at level 1; it is
\begin{equation}
\left(W^{(n+1)}_{-1}+\beta_i W^{(n)}_{-1}+\beta_i^2 W^{(n-1)}_{-1}+\cdots +\beta_i^{n-1} W^{(2)}_{-1} \right)|\vec{\beta}\rangle.
\end{equation}
Here, $W^{(j)}_{-1}$ is the mode $-1$ of the spin $j$ generator, and $\beta_i$ is the $i$-th entry of $\beta$, written in the fundamental representation, where $i$ labels the singled-out simple root $\alpha_i$. The eigenvalues of the $W^{(j)}_{0}$ modes are then functions of all the entries of $\beta$.
\item All of the many different sets ${\cal W}_{\cal S}$ mentioned above give rise to the same 2d quiver gauge theory, in the middle of Figure \ref{fig:AnExamples}.
\item The definition of the weight $\omega'\equiv \omega_i+\omega_{i+1}$ above is an illustration of the weight addition rule from section \ref{ssec:weightadd}. This corresponds to moving on the Higgs branch, and transitioning from the top quiver to the middle quiver in Figure \ref{fig:AnExamples}. In gauge theory terms, when the hypermultiplet masses for $\omega_i$ and $\omega_{i+1}$ of the full puncture are set equal, one can transition from the top 2d theory to the middle 2d theory, which has a single hypermultiplet mass for $\omega'$ instead.
\item The nilpotent orbit associated to this puncture is the unique subregular nilpotent orbit of $A_n$, with partition $[n,1]$.
\end{itemize}
The cotangent bundle of the flag manifold, $T^*(G/\mathcal{P})$, also appears as the resolution of the Higgs branch of the quiver
\begin{center}
\begin{tikzpicture}[baseline=0pt,font=\small]
\begin{scope}[auto, every node/.style={minimum size=1.4cm}]
\def 0.9cm {1.8cm}
\node[circle, draw](k1) at (0,0) {$n-1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$n-2$};
\node[circle, draw](k3) at (3*0.9cm,0*0.9cm) {$2$};
\node[circle, draw](k4) at (4*0.9cm,0*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.3cm](N1) at (-1*0.9cm,0*0.9cm) {$n+1$};
\draw[-] (k1) to (k2);
\draw[dashed] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\end{center}
which is again mirror to ours.
\subsubsection{Minimal (``simple'') Puncture}
We start by constructing the set ${\cal W}_{\cal S}$.
Writing $w_i$ for the highest weight of the $i$-th fundamental representation, we define ${\cal W}_{\cal S}$ as:
\begin{align*}
&\omega_1=-w_n,\\
&\omega_2=-w_1+\alpha_1+\alpha_2+\ldots+\alpha_n.
\end{align*}
Written as above, the weights spell out the 2d quiver at the bottom of Figure \ref{fig:AnExamples}. This is called the simple puncture. We compute the inner product of the weights with the positive roots:
$\omega_1\equiv[0,0,\ldots,0,-1]$ has a negative inner product with\\
$\alpha_n, \;\alpha_n + \alpha_{n-1},\; \ldots, \; \alpha_n+\alpha_{n-1}+\ldots+\alpha_1$;\\
$\omega_2\equiv[0,0,\ldots,0,\phantom{-}1]$ has no negative inner product with any of the positive roots.\\
So the only positive roots of $\mathfrak{g}$ that have a negative inner product with some weight $\omega_i\in{\cal W}_{\cal S}$ are $\alpha_n, \;\alpha_n + \alpha_{n-1},\; \ldots, \; \alpha_n+\alpha_{n-1}+\ldots+\alpha_1$, and they define the nilradical $\mathfrak{n}_{\Delta\backslash\{\alpha_n\}}$.
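This inner-product computation is easy to automate: for $A_n$, the positive roots are exactly the consecutive sums $\alpha_j+\ldots+\alpha_k$, and the pairing of a weight with such a root is the sum of its Dynkin labels from $j$ to $k$. A quick check of ours for $A_4$:

```python
# Simple puncture of A_4: which positive roots pair negatively with W_S?
n = 4
omega1 = [0] * (n - 1) + [-1]   # [0,...,0,-1]
omega2 = [0] * (n - 1) + [1]    # [0,...,0,+1]

# Positive roots of A_n as intervals (j, k) <-> alpha_j + ... + alpha_k.
pos_roots = [(j, k) for j in range(1, n + 1) for k in range(j, n + 1)]

def pairing(om, jk):
    j, k = jk
    return sum(om[m - 1] for m in range(j, k + 1))

neg1 = {jk for jk in pos_roots if pairing(omega1, jk) < 0}
# Exactly the n roots containing alpha_n, i.e. alpha_j + ... + alpha_n:
assert neg1 == {(j, n) for j in range(1, n + 1)}
# omega2 pairs non-negatively with every positive root:
assert all(pairing(omega2, jk) >= 0 for jk in pos_roots)
```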
The parabolic subalgebra is then $\mathfrak{p}_{\Delta\backslash\{\alpha_n\}}$. It is denoted by the partition $[n,1]$, which is immediately readable from the Levi subalgebra with symmetry $S(U(n)\times U(1))$. The Levi decomposition gives:
\begin{align*}
\mathfrak{p}_{\Delta\backslash \{\alpha_n\}}&=\begin{pmatrix}
*&*_{1}&*_{1+2}&\cdots&\cdots&*_{1+\ldots+(n-1)}&*_{1+\ldots+n}\\[5pt]
*_{-1}&* &*_{2}&\cdots&\cdots&*_{2+\ldots+(n-1)}&*_{2+\ldots+n}\\[5pt]
*_{-(1+2)} &*_{-2}&* &\cdots&\cdots&*_{3+\ldots+(n-1)}&*_{3+\ldots+n}\\
\vdots & \vdots&\vdots&\ddots&\cdots&\vdots&\vdots\\
\vdots & \vdots&\vdots&\cdots&\ddots&*_{(n-1)}&*_{(n-1)+n}\\[5pt]
*_{-(1+\ldots+(n-1))} &*_{-(2+\ldots+(n-1))}&*_{-(3+\ldots+(n-1))} & \cdots&*_{-(n-1)}& * & *_{n}\\[5pt]
0 &0 &0&\cdots&0& 0 &*
\end{pmatrix},
\end{align*}
with $\mathfrak{p}_{\Delta\backslash \{\alpha_n\}}=\mathfrak{l}_{\Delta\backslash \{\alpha_n\}}\oplus\mathfrak{n}_{\Delta\backslash \{\alpha_n\}}$, where
\begin{align*}
\mathfrak{l}_{\Delta\backslash \{\alpha_n\}}=\begin{pmatrix}
*&*_{1}&*_{1+2}&\cdots&\cdots&*_{1+\ldots+(n-1)}&0\\[5pt]
*_{-1}&* &*_{2}&\cdots&\cdots&*_{2+\ldots+(n-1)}&0\\[5pt]
*_{-(1+2)} &*_{-2}&* &\cdots&\cdots&*_{3+\ldots+(n-1)}&0\\
\vdots & \vdots&\vdots&\ddots&\cdots&\vdots&\vdots\\
\vdots & \vdots&\vdots&\cdots&\ddots&*_{(n-1)}&0\\[5pt]
*_{-(1+\ldots+(n-1))} &*_{-(2+\ldots+(n-1))}&*_{-(3+\ldots+(n-1))} & \cdots&*_{-(n-1)}& * & 0\\[5pt]
0 &0 &0&\cdots&0& 0 &*
\end{pmatrix}
\end{align*}
and
\begin{align*}
\mathfrak{n}_{\Delta\backslash \{\alpha_n\}}=\begin{pmatrix}
0&\cdots&\cdots&\cdots&\cdots&0&*_{1+\ldots+n}\\[5pt]
\vdots&\ddots&\,&\,&\,&\vdots&*_{2+\ldots+n}\\[5pt]
\vdots&\,&\ddots&\,&\,&\vdots&*_{3+\ldots+n}\\
\vdots & \,&\,&\ddots&\,&\vdots&\vdots\\
\vdots & \,&\,&\,&\ddots&\vdots&*_{(n-1)+n}\\[5pt]
0 &\cdots&\cdots & \cdots&\cdots& 0& *_{n}\\[5pt]
0 &\cdots &\cdots&\cdots&\cdots& 0 &0
\end{pmatrix}.
\end{align*}
We see explicitly that the non-zero inner products $\langle e_{\gamma},\omega_i\rangle $ give the last column of the nilradical $\mathfrak{n}_{\Delta\backslash\{\alpha_n\}}$.\\
Now we rederive this result from the CFT perspective: consider once again the set ${\cal W}_{\cal S}$. We define the momentum vector $\beta$ from $\beta=\sum_{i=1}^{|{\cal W}_{\cal S}|}\hat\beta_i \omega_i$. It is easy to check that
\[
\langle\beta,\alpha_i\rangle=0,\qquad\qquad i=1,2,\ldots,n-1
\]
since $\beta$ has a 0 as its $i$-th Dynkin label for $i=1,2,\ldots,n-1$.
This defines many level 1 null states in the CFT. One can easily check that no other set ${\cal W}_{\cal S}$ satisfies the above vanishing inner product conditions. Also note that the commutant of the semi-simple element $\beta$ is the Levi subalgebra $\mathfrak{l}_{\Delta\backslash\{\alpha_n\}}$ written above in the fundamental representation.\\
We make the following important observations:
\begin{itemize}
\item This puncture is in fact described by many sets ${\cal W}_{\cal S}$. To obtain them, one simply considers all possible Weyl group actions that preserve the root sign: $w(\alpha_i)$ must be a positive root; the details are in the previous example. The upshot is once again that the explicit null states for all these 2d theories can be written at level 1; they are:
\begin{equation}
\left(W^{(n+1)}_{-1}+\beta W^{(n)}_{-1}+\beta^2 W^{(n-1)}_{-1}+\ldots +\beta^{n-1} W^{(2)}_{-1} \right)|\vec{\beta}\rangle,
\end{equation}
and the $n-1$ derivatives of this equation with respect to $\beta$:
\begin{align*}
& \left(W^{(n)}_{-1}+2\beta W^{(n-1)}_{-1}+\ldots +(n-1)\beta^{n-2} W^{(2)}_{-1} \right)|\vec{\beta}\rangle\\
& \left(2 W^{(n-1)}_{-1}+\ldots +(n-1)(n-2)\beta^{n-3} W^{(2)}_{-1} \right)|\vec{\beta}\rangle\\
& \qquad\vdots\\
& \;\;W^{(2)}_{-1}|\vec{\beta}\rangle
\end{align*}
Here, $W^{(j)}_{-1}$ is the mode $-1$ of the spin $j$ generator, and $\vec{\beta}=\text{diag}(\beta,\beta,\ldots,\beta,-n\beta)$, written in the fundamental representation. The eigenvalues of the $W^{(j)}_{0}$ modes are again functions of $\beta$.
\item All the many different sets ${\cal W}_{\cal S}$ mentioned above give rise to the same 2d quiver gauge theory, in the bottom of Figure \ref{fig:AnExamples}, and they all characterize the parabolic subalgebra $\mathfrak{p}_{\Delta\backslash\{\alpha_n\}}$, even if not directly readable from the positive root inner products with the Weyl reflected weights.
\item Once again, we can use the weight addition procedure to move on the Higgs branch, and transition from the top quiver to the bottom quiver in Figure \ref{fig:AnExamples}. In gauge theory terms, when the hypermultiplet masses for $\omega_1$, $\omega_2$, $\ldots$, $\omega_{n}$ of the full puncture are set equal, one can transition from the top 2d theory to the bottom 2d theory, which has a single hypermultiplet mass for the single weight $\omega_1+\omega_2+\ldots+\omega_{n}$ instead. Explicitly,
\[
[-1,0,0,\ldots,0]+[1,-1,0,\ldots,0]+\ldots+[0,\ldots,0,1,-1]=[0,0,\ldots,0,-1].
\]
\item The nilpotent orbit for this theory is the minimal non-trivial orbit of $A_n$, with partition $[2,1^{n-1}]$.
\end{itemize}
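The weight addition in the last display is a simple telescoping sum of Dynkin-label vectors; the following one-liner of ours verifies it for $A_4$:

```python
# Higgs-branch transition for A_4: the n weights of the full puncture
# add up to the single weight of the simple puncture (telescoping sum).
n = 4
weights = [[-1, 0, 0, 0], [1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]]
total = [sum(col) for col in zip(*weights)]
assert total == [0, 0, 0, -1]   # the simple-puncture weight [0,...,0,-1]
```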
The cotangent bundle of the flag manifold, $T^*(G/\mathcal{P})$, associated to this defect also appears as the resolution of the Higgs branch of the quiver
\begin{center}
\begin{tikzpicture}[baseline=0pt,font=\small]
\begin{scope}[auto, every node/.style={minimum size=1.4cm}]
\def 0.9cm {1.8cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.3cm](N1) at (-1*0.9cm,0*0.9cm) {$n+1$};
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\end{center}
which is the Grassmannian $G(1,n+1)$.
Note this is again precisely mirror to our quiver theory $T^{2d}$.\\
\newpage
\subsection{$D_n$ Examples: Polarized Theories}
\subsubsection{Examples for Arbitrary $n$}
\begin{figure}[h!]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=1.5cm}]
\def 0.9cm {1.9cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$5$};
\node[circle, draw](k3) at (2*0.9cm,0) {$7$};
\node[circle, draw](k4) at (4*0.9cm,0) {$2n-3$};
\node[circle, draw](k5) at (4.9*0.9cm,0.8*0.9cm) {$n-1$};
\node[circle, draw](k6) at (4.9*0.9cm,-0.8*0.9cm) {$n-1$};
\node[draw, inner sep=0.1cm,minimum size=1.2cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.2cm](N4) at (4*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.2cm](N5) at (5.9*0.9cm,0.8*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.2cm](N6) at (5.9*0.9cm,-0.8*0.9cm) {$1$};
\node[align=left] at (6.15*0.9cm,0) {\small $\Theta=\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[dashed] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k1) -- (N1);
\draw (k4) -- (N4);
\draw (k5) -- (N5);
\draw (k6) -- (N6);
\end{scope}
\end{tikzpicture}
\vspace{2em}
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=1.5cm}]
\def 0.9cm {1.9cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$2$};
\node[circle, draw](k4) at (4*0.9cm,0) {$2$};
\node[circle, draw](k5) at (4.9*0.9cm,0.8*0.9cm) {$1$};
\node[circle, draw](k6) at (4.9*0.9cm,-0.8*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=1.2cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[align=left] at (5.75*0.9cm,0) {\small $\Theta=\Delta\setminus\{\alpha_1\}$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[dashed] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\caption{The top quiver is a nontrivial puncture characterized by the parabolic subalgebra $\mathfrak{p}_{\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}}$. It is denoted by the partition $[(n-2)^2,1^4]$ in the fundamental representation. The bottom quiver is the simple puncture of $D_n$, characterized by the parabolic subalgebra $\mathfrak{p}_{\Delta\backslash\{\alpha_1\}}$. It is denoted by the partition $[2n-2,1^2]$.}
\label{fig:DnExamples}
\end{figure}
Here, we give two nontrivial $D_n$ examples. We proceed as in the $A_n$ case and start by constructing a valid set of weights ${\cal W}_{\cal S}$:
\begin{align*}
\omega_1&\equiv[\phantom{-}1,0,\ldots,0,\phantom{-}0,\phantom{-}0],\\
\omega_2&\equiv[\phantom{-}0,0,\ldots,0,-1,\phantom{-}0],\\
\omega_3&\equiv[\phantom{-}0,0,\ldots,0,\phantom{-}0,-1],\\
\omega_4&\equiv[-1,0,\ldots,0,\phantom{-}1,\phantom{-}1].
\end{align*}
These weights obviously add up to 0, so they define a valid set ${\cal W}_{\cal S}$. Now note that:
\begin{align*}
\omega_1&=-w_1+2\alpha_1+2\alpha_2+\ldots+2\alpha_{n-2}+\alpha_{n-1}+\alpha_{n}\\
\omega_2&=-w_{n-2}+\alpha_1+3\alpha_2+5\alpha_3+\ldots+(2n-5)\alpha_{n-2}+(n-2)\alpha_{n-1}+(n-2)\alpha_{n}\\
\omega_3&=-w_{n-1}\\
\omega_4&=-w_{n}
\end{align*}
This defines the 2d quiver gauge theory $T^{2d}$ shown on top of Figure \ref{fig:DnExamples}. Computing $\langle e_{\gamma},\omega_i\rangle$ for all positive roots $e_{\gamma}$, we identify the nilradical $\mathfrak{n}_{\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}}$. Therefore, we associate to ${\cal W}_{\cal S}$ the parabolic subalgebra $\mathfrak{p}_{\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}}$ from the Levi decomposition.\\
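A partial check of this statement can be scripted; the sketch below (our illustration, for $D_5$, with node labels $1,\ldots,n-2$ on the chain and $n-1,n$ the two spinor tails) verifies that the weights sum to zero and that, restricted to simple roots, only $\alpha_2,\ldots,\alpha_{n-2}$ pair non-negatively with every weight, matching $\Theta=\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}$.

```python
# D_5 check of the set W_S, all weights written in the Dynkin basis.
n = 5
W_S = [
    [1, 0, 0, 0, 0],    # omega_1
    [0, 0, 0, -1, 0],   # omega_2
    [0, 0, 0, 0, -1],   # omega_3
    [-1, 0, 0, 1, 1],   # omega_4
]

# The weights sum to zero, so W_S is a valid set:
assert [sum(col) for col in zip(*W_S)] == [0] * n

# <omega, alpha_j> is the j-th Dynkin label of omega; the simple roots
# pairing non-negatively with ALL weights are alpha_2, ..., alpha_{n-2}:
theta = [j + 1 for j in range(n) if all(w[j] >= 0 for w in W_S)]
assert theta == [2, 3]           # = {alpha_2, ..., alpha_{n-2}} for n = 5
```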
Now we rederive this result from the CFT perspective: consider once again the set ${\cal W}_{\cal S}$. We define the momentum vector $\beta$ from $\beta=\sum_{i=1}^{|{\cal W}_{\cal S}|}\hat\beta_i \omega_i$. It is easy to check that
\[
\langle\beta,\alpha_i\rangle=0,\qquad\qquad i=2,3,\ldots,n-2
\]
since $\beta$ has a 0 as its $i$-th Dynkin label for $i=2,3,\ldots,n-2$.
This defines many level 1 null states in the CFT. One can easily check that no other set ${\cal W}_{\cal S}$ satisfies $\langle\beta,\alpha_i\rangle=0$. Also note that the commutant of the semi-simple element $\beta$ is the Levi subalgebra $\mathfrak{l}_{\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}}$.\\
We make the following important observations:
\begin{itemize}
\item This puncture exhibits a new phenomenon: there are in fact many 2d quivers associated to the parabolic subalgebra $\mathfrak{p}_{\{\alpha_2,\alpha_3,\ldots,\alpha_{n-2}\}}$. We just exhibited one possible 2d quiver among many valid ones.
\item Just as in the $A_n$ case, there are many different sets ${\cal W}_{\cal S}$ for each 2d quiver, which do not directly allow us to read off the parabolic subalgebra. The upshot is once again that the explicit null states for all these sets ${\cal W}_{\cal S}$ can be written at level 1; they are given by:
\begin{equation}
\label{eq:dnnull}
\left(({\tilde W}^{(n)})^2_{-1}+\beta^2 W^{(2n-2)}_{-1}+\beta^4 W^{(2n-4)}_{-1}+\cdots +\beta^{2n-2} W^{(2)}_{-1} \right)|\vec{\beta}\rangle
\end{equation}
and derivatives of this equation with respect to $\beta$.
Here, $W^{(j)}_{-1}$ is the mode $-1$ of the spin $j$ generator. In the split representation of $\mathfrak{so}(2n)$, a generic semi-simple element is $\vec{\beta}=\text{diag}(\beta_1,\beta_2,\ldots,\beta_n,-\beta_1,-\beta_2,\ldots,-\beta_n)$. The puncture we study sets $n-2$ entries $\beta_i$ equal to each other; call them $\beta$ (and so $n-2$ entries $-\beta_i$ become $-\beta$). It is this parameter $\beta$ that appears in the null state \eqref{eq:dnnull}.
\item We can also identify the nilpotent orbit corresponding to this theory: for even $n$, it is given by the partition $[5,3,2^{n-4}]$, and for odd $n$, the orbit has the partition $[5,3,2^{n-5},1,1]$ (this agrees with the results of \cite{Chacaltana:2011ze}).
\end{itemize}
We now turn to the second example. We start with the set of weights:
\begin{align*}
\omega_1&\equiv[\phantom{-}1,0,0,\ldots,0]=-w_1+2\alpha_1+2\alpha_2+\ldots+2\alpha_{n-2}+\alpha_{n-1}+\alpha_{n}\\
\omega_2&\equiv[-1,0,0,\ldots,0]=-w_1
\end{align*}
These weights obviously add up to 0, so they define a valid set ${\cal W}_{\cal S}$. Written as above, they spell out a 2d quiver theory $T^{2d}$ shown at the bottom of Figure \ref{fig:DnExamples}. Computing $\langle e_{\gamma},\omega_i\rangle$ for all positive roots $e_{\gamma}$, we identify the nilradical $\mathfrak{n}_{\Delta\backslash \{\alpha_1\}}$. So we associate to ${\cal W}_{\cal S}$ the parabolic subalgebra $\mathfrak{p}_{\Delta\backslash \{\alpha_1\}}$ from the Levi decomposition. Unlike the previous example, the 2d quiver theory associated to this puncture is unique. All other possible sets ${\cal W}_{\cal S}$ are then obtained by Weyl reflection.
The nilpotent orbit corresponding to this theory is the minimal non-trivial orbit in $D_n$, with partition $[3,1^{2n-3}]$.\\
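In the same spirit as the previous example, a short check of ours for $D_5$ confirms that this two-weight set is valid and that only $\alpha_1$ pairs negatively with some weight, reproducing $\Theta=\Delta\backslash\{\alpha_1\}$:

```python
# D_5: the simple-puncture-like set W_S in the Dynkin basis.
n = 5
w1 = [1, 0, 0, 0, 0]
w2 = [-1, 0, 0, 0, 0]

assert [a + b for a, b in zip(w1, w2)] == [0] * n   # weights sum to zero

# Only alpha_1 pairs negatively with some weight, so the Levi keeps
# Theta = Delta \ {alpha_1}:
theta = [j + 1 for j in range(n) if w1[j] >= 0 and w2[j] >= 0]
assert theta == [2, 3, 4, 5]
```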
The corresponding space $T^*(G/\mathcal{P})$ also appears as the resolution of the Higgs branch of the quiver
\begin{center}
\begin{tikzpicture}[baseline=0pt,font=\small]
\begin{scope}[auto, every node/.style={minimum size=1.5cm}]
\def 0.9cm {2.0cm}
\node[circle, draw](k1) at (0,0) {$USp(2)$};
\node[draw, inner sep=0.1cm,minimum size=1.5cm](N1) at (-1*0.9cm,0*0.9cm) {$SO(2n)$};
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
\end{center}
Note that this quiver theory is again mirror to ours.\\
\subsubsection{Complete $D_4$ Classification}
In Figure \ref{fig:D4allpunctures} we give the full classification of surface defects for $D_4$: the left column shows a representative quiver $T^{2d}$ from \cite{Aganagic:2015cta} that describes each puncture. The middle column shows the subset of simple roots $\Theta$ which defines the parabolic subalgebra associated to $T^{2d}_{m_s\rightarrow\infty}$. The right column features all the nilpotent orbits, in the notation of \cite{Chacaltana:2011ze}, as Hitchin Young diagrams. Note that lines 2 to 5 on the left denote one and the same nilpotent orbit, but different parabolic subalgebras. More subtly, lines 2 and 3 on the right feature the same Young diagram, but that is merely an unfortunate feature of the notation: they really denote distinct nilpotent orbits and parabolic subalgebras; the Levi decompositions indeed yield two distinct nilradicals. An asterisk differentiates those two punctures. In order to specify which of the three parabolic subalgebras the left 2d quiver of line 6 is associated to, one would need to specify explicitly the set ${\cal W}_{\cal S}$ that defines it; we omit writing ${\cal W}_{\cal S}$ for brevity.
\\
The nilpotent orbit classification of punctures has a disadvantage: two distinct punctures can be associated to one and the same Hitchin Young diagram (see for instance lines 2 and 3 on the right in Figure \ref{fig:D4allpunctures}), so extra data is needed to differentiate them. In the little string classification of CFT defects, on the other hand, every polarized puncture is associated to a distinct parabolic subalgebra. Unpolarized punctures, however, have to be added separately. For $D_4$, there is exactly one such unpolarized puncture: the one featuring the null weight $[0,0,0,0]$; we show the explicit quiver theory $T^{2d}$ in section \ref{sec:unpol}. It is interesting to note that special and non-special punctures in the classification of \cite{Chacaltana:2012zy} are treated on an equal footing in the little string formalism.\\
\begin{figure}
\resizebox{\textwidth}{!}{%
\begin{tabular}[t]{lcllcl}
2d Quiver Theory& $\Theta$ & Nilpotent orbit&2d Quiver Theory& $\Theta$ & Nilpotent orbit\\
\cmidrule[\heavyrulewidth](r{1.5em}){1-3}\cmidrule[\heavyrulewidth](l){4-6}\addlinespace[1em]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$5$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\Theta=\varnothing$&\ydiagram{7,1}\hspace*{2em}&\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_1,\alpha_4\}$&{\color{blue}\ydiagram{4,4}}\\[1.3cm]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_1\}$&\ydiagram{5,3}&
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$3$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\underset{(i,j)=(1,2),(2,3),(2,4)}{\Theta=\{\alpha_i,\alpha_j\}}$&\ydiagram{3,3,1,1}$^\bigstar$\\[1.3cm]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_3\}$&\ydiagram{5,3}&\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_1,\alpha_3,\alpha_4\}$&\ydiagram{3,3,1,1}\\[1.3cm]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_4\}$&\ydiagram{5,3}&\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_2,\alpha_3,\alpha_4\}$&\ydiagram{3,1,1,1,1,1}\\[1.3cm]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$5$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\draw (k3) -- (N3);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_2\}$&\ydiagram{5,3}&\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_1,\alpha_2,\alpha_3\}$&{\color{red}\ydiagram{2,2,2,2}}\\[1.3cm]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k1) -- (N1);
\draw (k2) -- (N2);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_3,\alpha_4\}$&\ydiagram{5,1,1,1}&\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N4) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k4) -- (N4);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_1,\alpha_2,\alpha_4\}$&{\color{blue}\ydiagram{2,2,2,2}}\\[1.3cm]
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=0.75cm}]
\def 0.9cm {1cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.67cm](N3) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\draw (k3) -- (N3);
\end{scope}
\end{tikzpicture}
&$\Theta=\{\alpha_1,\alpha_3\}$&{\color{red}\ydiagram{4,4}}\\[1.3cm]
\end{tabular}%
}
\caption{Surface defects of $D_4$. 2d quiver theories from the Little String are shown in the left column. Parabolic subalgebras that arise in the CFT limit $T^{2d}_{m_s\rightarrow\infty}$ are shown in the middle column. Nilpotent orbits from the defect classification of \cite{Chacaltana:2011ze} are shown in the right column. We omitted writing down an explicit set of weights ${\cal W}_{\cal S}$ for each defect for brevity. The minimal nilpotent orbit is analyzed separately in section \ref{sec:unpol}.}
\label{fig:D4allpunctures}
\end{figure}
\clearpage
\subsection{$E_n$ Examples: Polarized Theories}
Here, we give the quivers of $E_n$ with the smallest number of Coulomb moduli that describe a polarized puncture.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=1cm}]
\def 0.9cm {1.3cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$3$};
\node[circle, draw](k3) at (2*0.9cm,0) {$4$};
\node[circle, draw](k4) at (3*0.9cm,0) {$3$};
\node[circle, draw](k5) at (4*0.9cm,0) {$2$};
\node[circle, draw](k6) at (2*0.9cm,-1*0.9cm) {$2$};
\node[align=left,text width=3cm] at (8.35*0.9cm,0) {\small $\Theta=\Delta\setminus\{\alpha_1\}$ or $\Theta=\Delta\setminus\{\alpha_5\}$};
\node[draw, inner sep=0.1cm,minimum size=0.9cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.9cm](N5) at (4*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k3) to (k6);
\draw (k1) -- (N1);
\draw (k5) -- (N5);
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=1cm}]
\def 0.9cm {1.3cm}
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (2*0.9cm,0) {$6$};
\node[circle, draw](k4) at (3*0.9cm,0) {$5$};
\node[circle, draw](k5) at (4*0.9cm,0) {$4$};
\node[circle, draw](k6) at (5*0.9cm,0) {$3$};
\node[circle, draw](k7) at (2*0.9cm,-1*0.9cm) {$3$};
\node[align=left,text width=3cm] at (8.35*0.9cm,0) {\small $\Theta=\Delta\setminus\{\alpha_6\}$};
\node[draw, inner sep=0.1cm,minimum size=0.9cm](N6) at (5*0.9cm,0.9cm) {$2$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k5) to (k6);
\draw[-] (k3) to (k7);
\draw (k6) -- (N6);
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}[baseline]
\begin{scope}[auto, every node/.style={minimum size=1cm}]
\node[circle, draw](k1) at (0,0) {$4$};
\node[circle, draw](k2) at (1*0.9cm,0) {$8$};
\node[circle, draw](k3) at (2*0.9cm,0) {$12$};
\node[circle, draw](k4) at (3*0.9cm,0) {$10$};
\node[circle, draw](k5) at (4*0.9cm,0) {$8$};
\node[circle, draw](k6) at (5*0.9cm,0) {$6$};
\node[circle, draw](k7) at (6*0.9cm,0) {$4$};
\node[circle, draw](k8) at (2*0.9cm,-1*0.9cm) {$6$};
\node[draw, inner sep=0.1cm,minimum size=0.9cm](N7) at (6*0.9cm,0.9cm) {$2$};
\node[align=left,text width=3cm] at (8.35*0.9cm,0) {\small $\Theta=\Delta\setminus\{\alpha_7\}$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k5) to (k6);
\draw[-] (k6) to (k7);
\draw[-] (k3) to (k8);
\draw (k7) -- (N7);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The top, middle, and bottom quivers are $E_6$, $E_7$, and $E_8$ 2d theories respectively. The associated parabolic subalgebras are $\mathfrak{p}_{\Delta\backslash \{\alpha_1\}}$, $\mathfrak{p}_{\Delta\backslash \{\alpha_6\}}$, and $\mathfrak{p}_{\Delta\backslash \{\alpha_7\}}$ respectively. These punctures all have Bala-Carter label $2 A_1$ in the classification of \cite{Chacaltana:2012zy}.}
\label{fig:EnExamples}
\end{figure}
For $E_6$, we start with the set ${\cal W}_{\cal S}$:
\begin{align*}
\omega_1&\equiv[\phantom{-}1,0,0,0,0,0]=-w_5+2\alpha_1+3\alpha_2+4\alpha_3+2\alpha_4+\alpha_5+2\alpha_6\\
\omega_2&\equiv[-1,0,0,0,0,0]=-w_1
\end{align*}
This defines a 2d theory (shown at the top of Figure \ref{fig:EnExamples}). One checks at once from the positive roots that ${\cal W}_{\cal S}$ characterizes the nilradical $\mathfrak{n}_{\Delta\backslash \{\alpha_1\}}$, so the associated parabolic subalgebra is $\mathfrak{p}_{\Delta\backslash \{\alpha_1\}}$.
In fact, no other set ${\cal W}_{\cal S}$ is associated to this parabolic subalgebra. The level 1 null state condition in the $E_6$-Toda CFT is:
\[
\langle\beta,\alpha_i\rangle=0,\qquad\qquad i=2,\ldots,6
\]
The set ${\cal W}_{\cal S}$:
\begin{align*}
\omega_1&\equiv[0,0,0,0,\phantom{-}1,0]=-w_1+2\alpha_1+3\alpha_2+4\alpha_3+2\alpha_4+\alpha_5+2\alpha_6,\\
\omega_2&\equiv[0,0,0,0,-1,0]=-w_5,
\end{align*}
produces the same 2d quiver as above, but the associated parabolic subalgebra is instead $\mathfrak{p}_{\Delta\backslash \{\alpha_5\}}$, and the level 1 null state condition is:
\[
\langle\beta,\alpha_i\rangle=0,\qquad\qquad i=1,2,3,4,6.
\]
All the other possible sets ${\cal W}_{\cal S}$ associated to $\mathfrak{p}_{\Delta\backslash \{\alpha_1\}}$ are obtained by Weyl reflection on the two weights (and the same is true about $\mathfrak{p}_{\Delta\backslash \{\alpha_5\}}$).\\
For $E_7$, we start with the set ${\cal W}_{\cal S}$:
\begin{align*}
\omega_1&\equiv[0,0,0,0,0,\phantom{-}1,0]=-w_6+2\alpha_1+4\alpha_2+6\alpha_3+5\alpha_4+4\alpha_5+3\alpha_6+3\alpha_7\\
\omega_2&\equiv[0,0,0,0,0,-1,0]=-w_6
\end{align*}
This defines a 2d theory (shown in the middle of Figure \ref{fig:EnExamples}). One checks at once from the positive roots that ${\cal W}_{\cal S}$ characterizes the nilradical $\mathfrak{n}_{\Delta\backslash \{\alpha_6\}}$, so the associated parabolic subalgebra is $\mathfrak{p}_{\Delta\backslash \{\alpha_6\}}$.
In fact, no other set ${\cal W}_{\cal S}$ is associated to this parabolic subalgebra. The level 1 null state condition in the $E_7$-Toda CFT is:
\[
\langle\beta,\alpha_i\rangle=0,\qquad\qquad i=1,2,3,4,5,7.
\]
All the other possible sets ${\cal W}_{\cal S}$ associated to $\mathfrak{p}_{\Delta\backslash \{\alpha_6\}}$ are obtained by Weyl reflection on the two weights.\\
For $E_8$, we start with the set ${\cal W}_{\cal S}$:
\begin{align*}
\omega_1&\equiv[0,0,0,0,0,0,\phantom{-}1,0]=-w_7+4\alpha_1+8\alpha_2+12\alpha_3+10\alpha_4+8\alpha_5+6\alpha_6+4\alpha_7+6\alpha_8\\
\omega_2&\equiv[0,0,0,0,0,0,-1,0]=-w_7
\end{align*}
This defines a 2d theory (shown at the bottom of Figure \ref{fig:EnExamples}). One checks at once from the positive roots that ${\cal W}_{\cal S}$ characterizes the nilradical $\mathfrak{n}_{\Delta\backslash \{\alpha_7\}}$, so the associated parabolic subalgebra is $\mathfrak{p}_{\Delta\backslash \{\alpha_7\}}$.
In fact, no other set ${\cal W}_{\cal S}$ is associated to this parabolic subalgebra. The level 1 null state condition in the $E_8$-Toda CFT is:
\[
\langle\beta,\alpha_i\rangle=0,\qquad\qquad i=1,2,3,4,5,6,8.
\]
All the other possible sets ${\cal W}_{\cal S}$ that are associated to $\mathfrak{p}_{\Delta\backslash \{\alpha_7\}}$ are obtained by Weyl reflection on the two weights.
\subsection{Unpolarized Theories}
\label{sec:unpol}
Here we give some examples of unpolarized theories for $D_n$ and $E_n$ only, since there is no such theory for $A_n$.
\begin{figure}[htpb]
\begin{center}
\ytableausetup{smalltableaux}
\begin{tabular}{ccc}
$D_4$ & $D_5$ & $D_6$\\
\toprule
\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (1.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k4) at (1.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k2) to (k4);
\draw (k2) -- (N2);
\node at (2.8*0.9cm,0) {\resizebox{0.5cm}{!}{\ydiagram{2,2,1,1,1,1}}};
\end{scope}
\end{tikzpicture}&
\begin{tikzpicture}[baseline]
\node at (0,1.3) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$2$};
\node[circle, draw](k4) at (2.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k5) at (2.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k3) to (k5);
\draw (k2) -- (N2);
\node at (3.8*0.9cm,0) {\resizebox{0.5cm}{!}{\ydiagram{2,2,1,1,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\node at (0,-1.3) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$3$};
\node[circle, draw](k3) at (2*0.9cm,0) {$4$};
\node[circle, draw](k4) at (2.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k5) at (2.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N3) at (2*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k3) to (k5);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\node at (3.8*0.9cm,0) {\resizebox{0.7cm}{!}{\ydiagram{3,3,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\end{tikzpicture}
&\begin{tikzpicture}[baseline]
\node at (0,4.7) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$2$};
\node[circle, draw](k4) at (3*0.9cm,0) {$2$};
\node[circle, draw](k5) at (3.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k6) at (3.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N2) at (1*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k2) -- (N2);
\node at (4.8*0.9cm,0) {\resizebox{0.45cm}{!}{\ydiagram{2,2,1,1,1,1,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\node at (0,2.3) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0) {$3$};
\node[circle, draw](k4) at (3*0.9cm,0) {$4$};
\node[circle, draw](k5) at (3.8*0.9cm,0.6*0.9cm) {$1$};
\node[circle, draw](k6) at (3.8*0.9cm,-0.6*0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N4) at (3*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k4) -- (N4);
\node at (4.8*0.9cm,0) {\resizebox{0.45cm}{!}{\ydiagram{2,2,2,2,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\node at (0,0) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$3$};
\node[circle, draw](k3) at (2*0.9cm,0) {$4$};
\node[circle, draw](k4) at (3*0.9cm,0) {$4$};
\node[circle, draw](k5) at (3.8*0.9cm,0.6*0.9cm) {$2$};
\node[circle, draw](k6) at (3.8*0.9cm,-0.6*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N1) at (0*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N3) at (2*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k1) -- (N1);
\draw (k3) -- (N3);
\node at (4.8*0.9cm,0) {\resizebox{0.65cm}{!}{\ydiagram{3,3,1,1,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\node at (0,-2.3) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (2*0.9cm,0) {$5$};
\node[circle, draw](k4) at (3*0.9cm,0) {$6$};
\node[circle, draw](k5) at (3.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k6) at (3.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N2) at (1*0.9cm,0.9cm) {$1$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N4) at (3*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k2) -- (N2);
\draw (k4) -- (N4);
\node at (4.85*0.9cm,0) {\resizebox{0.8cm}{!}{\ydiagram{4,4,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\node at (0,-4.6) {\begin{tikzpicture}[baseline,font=\small]
\begin{scope}[auto, every node/.style={minimum size=0.6cm}]
\node[circle, draw](k1) at (0,0) {$3$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (2*0.9cm,0) {$5$};
\node[circle, draw](k4) at (3*0.9cm,0) {$6$};
\node[circle, draw](k5) at (3.8*0.9cm,0.6*0.9cm) {$3$};
\node[circle, draw](k6) at (3.8*0.9cm,-0.6*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N1) at (0*0.9cm,0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.57cm](N4) at (3*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k4) to (k6);
\draw (k1) -- (N1);
\draw (k4) -- (N4);
\node at (4.9*0.9cm,0) {\resizebox{0.9cm}{!}{\ydiagram{5,3,1,1,1,1}}};
\end{scope}
\end{tikzpicture}};
\end{tikzpicture}
\end{tabular}
\end{center}
\caption{Exhaustive list of unpolarized quiver gauge theories $T^{2d}$ for $D_4$, $D_5$, and $D_6$. The nilpotent orbit in the classification of \cite{Chacaltana:2011ze} is also written for reference.}
\label{fig:Dntype2}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{cp{5cm}}
\begin{tikzpicture}[baseline=0pt,font=\footnotesize]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$1$};
\node[circle, draw](k2) at (1*0.9cm,0) {$2$};
\node[circle, draw](k3) at (2*0.9cm,0*0.9cm) {$3$};
\node[circle, draw](k4) at (3*0.9cm,0*0.9cm) {$2$};
\node[circle, draw](k5) at (4*0.9cm,0*0.9cm) {$1$};
\node[circle, draw](k6) at (2*0.9cm,1*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N6) at (2*0.9cm,2*0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k3) to (k6);
\draw (k6) -- (N6);
\end{scope}
\end{tikzpicture}&$\mathfrak{g}: E_6$\newline B.-C.-label: $A_1$\\
\begin{tikzpicture}[baseline=0pt,font=\footnotesize]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$3$};
\node[circle, draw](k3) at (2*0.9cm,0*0.9cm) {$4$};
\node[circle, draw](k4) at (3*0.9cm,0*0.9cm) {$3$};
\node[circle, draw](k5) at (4*0.9cm,0*0.9cm) {$2$};
\node[circle, draw](k6) at (5*0.9cm,0*0.9cm) {$1$};
\node[circle, draw](k7) at (2*0.9cm,1*0.9cm) {$2$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N1) at (0*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k5) to (k6);
\draw[-] (k3) to (k7);
\draw (k1) -- (N1);
\end{scope}
\end{tikzpicture}&$\mathfrak{g}: E_7$\newline B.-C.-label: $A_1$\\
\begin{tikzpicture}[baseline=0pt,font=\footnotesize]
\begin{scope}[auto, every node/.style={minimum size=0.5cm}]
\node[circle, draw](k1) at (0,0) {$2$};
\node[circle, draw](k2) at (1*0.9cm,0) {$4$};
\node[circle, draw](k3) at (2*0.9cm,0*0.9cm) {$6$};
\node[circle, draw](k4) at (3*0.9cm,0*0.9cm) {$5$};
\node[circle, draw](k5) at (4*0.9cm,0*0.9cm) {$4$};
\node[circle, draw](k6) at (5*0.9cm,0*0.9cm) {$3$};
\node[circle, draw](k7) at (6*0.9cm,0*0.9cm) {$2$};
\node[circle, draw](k8) at (2*0.9cm,1*0.9cm) {$3$};
\node[draw, inner sep=0.1cm,minimum size=0.6cm](N7) at (6*0.9cm,0.9cm) {$1$};
\draw[-] (k1) to (k2);
\draw[-] (k2) to (k3);
\draw[-] (k3) to (k4);
\draw[-] (k4) to (k5);
\draw[-] (k5) to (k6);
\draw[-] (k6) to (k7);
\draw[-] (k3) to (k8);
\draw (k7) -- (N7);
\end{scope}
\end{tikzpicture}&$\mathfrak{g}: E_8$\newline B.-C.-label: $A_1$
\end{tabular}
\end{center}
\caption{Examples of unpolarized quiver gauge theories for $E_n$. The ones shown here have the smallest Coulomb branch dimension. The Bala-Carter label $A_1$ in the defect classification of \cite{Chacaltana:2012zy} is also written for reference.}
\label{fig:Entype2}
\end{figure}
The simplest case of an unpolarized quiver gauge theory arises when only a single fundamental hypermultiplet is present, so there is only one mass. The corresponding weight is then the null weight, which is obviously not in the orbit of any fundamental weight. For instance, such a scenario occurs for the unique unpolarized theory of $D_4$, where the weight $[0,0,0,0]$ is indeed in the second fundamental representation; see Figure \ref{fig:Dntype2} for examples in the $D_n$ case, and Figure \ref{fig:Entype2} for examples in the $E_n$ case.\\
As explained in section \ref{sec:types}, unpolarized theories can also have more than one weight: for example, looking at $D_5$, it is possible to choose weights in the third fundamental representation that actually belong to the orbit of the first fundamental weight instead. One can then construct the bottom $D_5$ quiver of Figure \ref{fig:Dntype2}. An example of two weights that make up such a quiver is $[1,0,0,0,0]$, chosen in the first fundamental representation, and $[-1,0,0,0,0]$, chosen in the third fundamental representation. If one wishes, it is always possible to flow on the Higgs branch and make these defects polarized; see section \ref{sec:types}.
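The statement that $[-1,0,0,0,0]$ lies in the Weyl orbit of the first fundamental weight can be checked mechanically. The sketch below is illustrative and assumes the standard $D_5$ Cartan matrix, with nodes $1$--$2$--$3$ in a chain and nodes $4$ and $5$ attached to node $3$; it generates the orbit by repeated simple reflections $s_i(\lambda)=\lambda-\lambda_i A_i$ in the Dynkin-label basis:

```python
import numpy as np

# Cartan matrix of D5: chain 1-2-3, with nodes 4 and 5 attached to node 3
A = np.array([[ 2, -1,  0,  0,  0],
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2, -1, -1],
              [ 0,  0, -1,  2,  0],
              [ 0,  0, -1,  0,  2]])

def weyl_orbit(weight):
    """Orbit of a weight (given in Dynkin labels) under the Weyl group,
    generated by simple reflections s_i(l) = l - l[i]*A[i] via breadth-first search."""
    seen = {tuple(weight)}
    frontier = [np.array(weight)]
    while frontier:
        lam = frontier.pop()
        for i in range(len(weight)):
            mu = lam - lam[i] * A[i]
            key = tuple(int(v) for v in mu)
            if key not in seen:
                seen.add(key)
                frontier.append(mu)
    return seen

orbit = weyl_orbit([1, 0, 0, 0, 0])   # orbit of the first fundamental weight
```

Here the orbit consists of the $10$ nonzero weights of the vector representation and indeed contains $[-1,0,0,0,0]$.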
\section*{Acknowledgements}
We first want to thank Mina Aganagic for her guidance and insights throughout this project. We also thank Aswin Balasubramanian, Oscar Chacaltana, Sergey Cherkis, Jacques Distler, Amihay Hanany, Peter Koroteev, Noppadol Mekareeya, Hiraku Nakajima, Shamil Shakirov and Alex Takeda for their time to discuss various points and their willingness to answer our questions. The research of N. H. and C. S. is supported in part by the Berkeley Center for Theoretical
Physics, by the National Science Foundation (award PHY-1521446) and by the
US Department of Energy under Contract DE-AC02-05CH11231.
\newpage
\mciteSetMidEndSepPunct{}{}{}
\section{Introduction}
Brownian motion is probably the simplest manifestation of transport in a random environment. In this case the particle path is constantly modified by collisions with the molecules that compose the surrounding medium. The trajectory appears as if the direction of motion changes randomly as a function of time, and a simple random walk (RW) is quite useful for describing the motion.
The continuum representation of a RW is regular diffusion~\cite{Weiss}. When the motion of the particle occurs in a complex medium, the simple RW might be insufficient for a proper description of the transport. In many materials the basic linear time dependence of the mean squared displacement (MSD), $\langle x^2(t) \rangle$, is missing and instead $\langle x^2(t)\rangle\sim t^{\alpha}$ with $0<\alpha<1$. Such behavior is termed anomalous subdiffusion, and materials where it appears include living cells~\cite{Metzler2011,LiveCell,Tabei,Bariers}, blinking quantum dots~\cite{QuantumD}, plasma membranes~\cite{Krapf2011}, filamentous networks~\cite{BurovPnas} and many more~\cite{Sokolov2005}.
The modeling of transport in these systems is quite complicated when compared to the original RW. In the works of Scher and Montroll~\cite{ScherMontroll} the continuous time random walk (CTRW) approach for transport in amorphous materials was developed. The idea behind the CTRW is the existence of regions of local arrest, i.e. traps, where the traced particle waits for some random time before it continues its motion inside the medium.
When the expected random waiting times diverge the behavior is non-ergodic~\cite{Bel2005,YongHe} and the CTRW produces the mentioned subdiffusive scaling of the MSD.
While the CTRW became extremely popular and widely applied~\cite{Bouchaud,Klafter,Kutner2017}, this approach treats the disorder in the medium as annealed and uncorrelated.
Quenched disorder is more physically appealing in many situations, but it implies the existence of strong correlations that in turn introduce significant difficulties in calculating basic properties of the transport~\cite{Kehr}. When the local dwell times of the CTRW are fixed, the model is known as the quenched trap model (QTM).
The QTM was found to be an important model that describes glassy behavior such as aging, weak ergodicity breaking and non-self-averaging~\cite{BouchaudAg,Monthus1996,Rinn2000,Rinn2001,Bertin,Burov2007}.
Beyond the applications of the QTM, the difficulty of untangling the behavior dictated by quenched disorder has established the QTM, and methods for its solution, as a fundamental problem of anomalous transport~\cite{Bouchaud}.
The presence of the mentioned correlations, imposed by the quenched disorder, makes the treatment of the QTM a highly non-trivial task.
Over the years many theoretical methods were devised to advance the general understanding of the QTM. The method of semi-equilibration~\cite{Derrida} allowed one to determine the average velocity and diffusion constant in the one-dimensional ($d=1$) case for non-anomalous transport.
Description of the QTM in terms of master equations and their representation in Fourier space produced the scaling behavior of the QTM propagator at the origin~\cite{Bernasconi, Alexander}.
The renormalization group approach~\cite{Machta} and scaling arguments~\cite{Bouchaud1987} established the existence of a critical dimension, $d=2$, for the QTM and the scaling behavior of the MSD.
Based on these works, a qualitative understanding emerged that in sufficiently high dimensions ($d>2$) the behavior of the QTM can be mapped onto the mean-field representation, i.e. the CTRW.
Further, the behavior of the QTM was studied for various lattices under the simplification of a directed walk, i.e. without returns to previously visited traps~\cite{Aslangul}.
The decimation of disorder allowed Monthus to calculate (among other quantities) the behavior of the positional probability density function (PDF) in the $d=1$ case in the limit of very low temperatures~\cite{Monthus,MonthusSec}. A rigorous probabilistic approach to the QTM led to mathematically exact scaling theorems~\cite{BenArous1,BenArous2} and further generalizations of the QTM to models such as the randomly trapped random walk~\cite{BenArous3,Cerny01}. The effect of fractal structures on the QTM~\cite{Akimoto2015} and the behavior of the QTM under the influence of a bias~\cite{Akimoto2019} are subjects of current research.
The previously obtained results suggest that for any dimension $d>2$ the behavior of the QTM converges to that of the CTRW. A simple hand-waving argument that supports this qualitative result is that in sufficiently high dimensions the traced particle rarely returns to the same lattice point, thus reducing the effect of the strong correlations imposed by the quenched disorder. P{\'o}lya's theorem~\cite{Weiss} states that the probability to return to the origin (or any previously occupied position) is less than $1$ in any dimension above $d=2$.
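As a numerical aside (not part of the original analysis), this transience can be illustrated by estimating the fraction of simple-random-walk trajectories that revisit their starting point within a fixed number of steps; the function name and parameter values below are illustrative:

```python
import numpy as np

def return_fraction(d, n_walks=400, n_steps=400, seed=0):
    """Fraction of d-dimensional simple random walks that revisit the
    origin within n_steps (a finite-step proxy for the return probability)."""
    rng = np.random.default_rng(seed)
    returned = 0
    for _ in range(n_walks):
        pos = np.zeros(d, dtype=int)
        for _ in range(n_steps):
            pos[rng.integers(d)] += rng.choice((-1, 1))
            if not pos.any():          # back at the origin
                returned += 1
                break
    return returned / n_walks

f1, f3 = return_fraction(1), return_fraction(3)   # recurrent vs transient
```

For $d=1$ the estimate approaches $1$ as the number of steps grows, while for $d=3$ it stays well below $1$, in line with P{\'o}lya's theorem.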
A natural question is then: what is the quantitative form of the mapping between the QTM and the CTRW? Can one extend this mapping to cases where the dimensionality is low but the hand-waving argument raised above still holds, i.e. the biased case?
In this manuscript we provide an explicit form of the mapping between the QTM and the CTRW for any transient case in any dimension. By using the randomly stopped time approach, originally developed for the $d=1$ case~\cite{Burov1,Burov2}, we obtain a subordination of the spatial process to the temporal $\alpha$-stable process. Unlike the CTRW, where the subordinated spatial process advances as a function of the number of jumps~\cite{Bouchaud,Fogedby,Barkai}, for the QTM the local time of the spatial process is quite different. A brief summary of part of our results was published in Ref.~\cite{Burov2017}.
This paper is organized as follows. In Sec.~\ref{section_def} the QTM is defined together with the local time, the measurement time and the subordination approach. In Sec.~\ref{salphaSec} the local time $S_\alpha$ is explored; the mean value of the local time is computed in Sec.~\ref{meansalpha} and the second moment in Sec.~\ref{secondsalpha}. In Sec.~\ref{deltafunction} we summarize the results of the first and second moment calculations and show that the local time converges to the number of jumps that the process has performed. In Sec.~\ref{doublesubordination} the previously established convergence of the local time is exploited in order to establish an explicit mapping between the CTRW and the QTM, by means of double subordination. The formulas are applied to the one-dimensional case of the biased QTM. In Sec.~\ref{nonlinresp} we obtain analytic expressions for the moments of the transient case of the QTM and show how the quenched disorder gives rise to a non-linear response to an externally applied field. The summary is provided in Sec.~\ref{summary}. Several appendices supply specific technical calculations and are referred to in the manuscript.
\section{The Quenched Trap Model and Subordination}
\label{section_def}
The QTM is defined as a random jump process of a particle on top of a lattice of dimension $d$. For every lattice point ${\bf x}$ a quenched random variable $\tau_{\bf x}$ is defined.
This quenched variable $\tau_{\bf x}$ defines the time that the particle spends at ${\bf x}$ before jumping to some other site ${\bf x}'$, i.e. $\tau_{\bf x}$ is the local dwell time. The probability to jump from ${\bf x}$ to ${\bf x}'$ is provided by $p({\bf x}',{\bf x})$. In the following we assume translational invariance of the lattice, so that $p({\bf x}',{\bf x})$ takes the form $p({\bf x}'-{\bf x})$. The quenched dwell times $\{\tau_{\bf x}\}$ are real, positive and independently distributed random variables with
\begin{equation}
\psi(\tau_{\bf x})\sim\tau^{-(1+\alpha)}A\big/|\Gamma(-\alpha)|\qquad \left(\tau_{\bf x}\to\infty\right)
\label{psitaudef}
\end{equation}
as the PDF ($A>0$). The value of the exponent $\alpha$ is restricted to $0<\alpha<1$. For such values of $\alpha$ the average dwell time diverges, $\int_0^\infty\tau\psi(\tau)\,d\tau\to\infty$, and the model gives rise to anomalous subdiffusion and aging~\cite{BouchaudAg}.
The physical picture behind this definition of the QTM is a thermally activated particle that jumps between various energetic traps. When a particle is in a trap, the average escape time $\tau$ is provided by the Arrhenius law $\tau\propto \exp\left(E_{\bf x}/T\right)$, where $E_{\bf x}>0$ is the depth of the trap at ${\bf x}$ and $T$ is the temperature. When the distribution of the $E_{\bf x}$ is $f(E)=\frac{1}{T_g}\exp\left(-E/T_g\right)$, the average escape time is distributed according to Eq.~(\ref{psitaudef}), with $\alpha=T/T_g$. For low temperatures $T<T_g$, glassy behavior, i.e. aging and non-self-averaging, is observed~\cite{Bertin}. The QTM is thus a version of transport on top of a random energy landscape with an exponential distribution of trap depths.
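The connection between exponentially distributed trap depths and the power-law tail of the dwell times is easy to verify numerically. The following sketch (with the illustrative values $T=0.5$, $T_g=1$, so $\alpha=1/2$) estimates the tail exponent of the sampled escape times from their survival function:

```python
import numpy as np

rng = np.random.default_rng(1)
T, T_g = 0.5, 1.0                              # temperature and glass temperature
E = rng.exponential(scale=T_g, size=200_000)   # trap depths, f(E) = exp(-E/T_g)/T_g
tau = np.exp(E / T)                            # Arrhenius escape times

# The survival function P(tau > s) decays as s**(-alpha) with alpha = T/T_g
s = np.array([10.0, 100.0, 1000.0])
surv = np.array([(tau > x).mean() for x in s])
alpha_est = -np.polyfit(np.log(s), np.log(surv), 1)[0]   # close to T/T_g = 0.5
```

Indeed $P(\tau>s)=P(E>T\ln s)=s^{-T/T_g}$, so the fitted slope recovers $\alpha=T/T_g$.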
We wish to separate the QTM into two processes. The first is a spatial process on top of the lattice, defined by the jump probabilities $p({\bf x}'-{\bf x})$ together with some local time. The other is a temporal process, defined by the dwell times, that transforms the local time into the measurement time $t$. How exactly the measurement time and the local time are defined and related to each other is crucial for the solution of the QTM.
\subsection{Measurement Time and Local Time}
\label{loctime}
During the measurement time $t$, the particle has visited several lattice points and stayed exactly $\tau_{\bf x}$ during each visit to site ${\bf x}$.
The measurement time $t$ is then simply given by
\begin{equation}
t=\sum_{\bf x} n_{\bf x}\tau_{\bf x}
\label{measurtime}
\end{equation}
where $n_{\bf x}$ is the number of times the particle visited site ${\bf x}$ and the summation is over all the lattice points. While the $\tau_{\bf x}$ are independent, identically distributed (I.I.D) random variables, the $n_{\bf x}$ are correlated. Indeed, the number of times the particle visited site ${\bf x}$ shouldn't be very different from the number of times the particle visited adjacent sites. The local time for the spatial process is defined as
\begin{equation}
S_\alpha=\sum_{\bf x} \left(n_{\bf x}\right)^\alpha.
\label{localtime}
\end{equation}
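Accumulating this local time along a trajectory is straightforward; below is a minimal sketch for the one-dimensional simple random walk (the helper name is illustrative, and the starting site is counted as visited once):

```python
import numpy as np
from collections import Counter

def local_time(alpha, n_steps, seed=0):
    """Run a 1d simple random walk and return S_alpha = sum_x n_x**alpha,
    where n_x counts the visits to site x (starting site included)."""
    rng = np.random.default_rng(seed)
    steps = rng.choice((-1, 1), size=n_steps)
    path = np.concatenate(([0], np.cumsum(steps)))   # sites occupied, origin first
    n_x = Counter(path.tolist())
    return sum(n ** alpha for n in n_x.values())

S = local_time(alpha=0.5, n_steps=1000)
```

For $\alpha=1$ the local time reduces to the total number of site occupations $N+1$, while for $\alpha<1$ revisits are discounted, so $S_\alpha\le N+1$.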
The variable
\begin{equation}
\eta=t/(S_\alpha)^{1/\alpha}
\label{etadef}
\end{equation}
is of high interest, especially in the $t\to\infty$ and $S_\alpha\to\infty$ limit. Let us consider the $\{n_{\bf x}\}$ as fixed (the outcome of a given experiment); then $\eta$ depends on the realization of the disorder, i.e. $\{\tau_{\bf x}\}$. The PDF of $\eta$ is found by examining the disorder-averaged $\exp(-u\eta)$, i.e. $\langle \exp\left(-u\eta\right)\rangle$, which is given by
\begin{equation}
\langle e^{-u\eta} \rangle =\displaystyle \langle \exp\left( -u \sum_{\bf x}\frac{n_{\bf x}\tau_{\bf x}}{(S_\alpha)^{1/\alpha}}\right) \rangle.
\label{etalaplace}
\end{equation}
Since the $\{\tau_{\bf x}\}$ are I.I.D, Eq.~(\ref{etalaplace}) takes the form
\begin{equation}
\langle e^{-u\eta} \rangle =\displaystyle \prod_{\bf x} {\hat{\psi}}\left[\frac{n_{\bf x}u}{(S_\alpha)^{1/\alpha}}\right],
\label{etalaplace02}
\end{equation}
where the product is over all the lattice sites and ${\hat{\psi}}(u)=\int_0^\infty\exp(-\tau_{\bf x} u)\psi(\tau_{\bf x})\,d\tau_{\bf x}$.
Due to Eq.~(\ref{psitaudef}) the small $u\to 0$ limit of ${\hat{\psi}}(u)$ is ${\hat{\psi}}(u)\sim 1-Au^\alpha$ and Eq.~(\ref{etalaplace02}) takes the form
\begin{equation}
\langle e^{-u\eta} \rangle =\displaystyle \prod_{\bf x} \left( 1-\frac{n_{\bf x}^\alpha}{S_\alpha}Au^\alpha\right).
\label{etalpalace03}
\end{equation}
When all the multiplications are performed on the r.h.s. of Eq.~(\ref{etalpalace03}) the leading term is $1$. The next term is $-\sum_{\bf x}n_{\bf x}^\alpha Au^\alpha/S_\alpha$, which is simply $-A u^\alpha$. The following term is $\frac{1}{2}\sum_{\bf x}\sum_{{\bf x}'}n_{\bf x}^\alpha n_{{\bf x}'}^\alpha A^2 u^{2\alpha}/S_\alpha^2$, which takes the form $\frac{1}{2}A^2u^{2\alpha}$. By computing the next terms with higher orders of $u$ we obtain that the r.h.s. is of the form $\sum_{j=0}^\infty(-Au^\alpha)^j/\Gamma[j+1]$, which is simply the Taylor expansion of $\exp\left(-Au^\alpha\right)$. When taking into account the higher orders of $u$ in the expansion of ${\hat{\psi}}(u)=1-Au^\alpha+Bu^\beta+...$ (where $\beta>\alpha$), we show in Appendix~\ref{sbetaproof} that in the limit $S_\alpha\to\infty$ all these terms converge to $0$ and do not contribute to the r.h.s. of Eq.~(\ref{etalpalace03}). Finally, we can state that in the large $S_\alpha$ limit
\begin{equation}
\langle e^{-u\eta} \rangle = e^{-A u^\alpha}
\label{etalaplacefnl}
\end{equation}
which means that the PDF of $\eta$ is the one-sided L{\'e}vy stable distribution $l_{\alpha,A,1}$~\cite{Klafter,Barkai}. We have thus obtained the distribution of $\eta$, and hence the distribution of the measurement time $t$ for a given local time $S_\alpha$, since $t=S_\alpha^{1/\alpha}\eta$. Because $S_\alpha$ is positive and strictly growing as we let the particle jump from one lattice point to another, we can invert the relation in Eq.~(\ref{etadef}), $S_\alpha=(t/\eta)^\alpha$, use the known distribution of $\eta$, and obtain the PDF of $S_\alpha$ for a given measurement time $t$
\begin{equation}
{\cal P}_t\left(S_\alpha\right)\sim
\frac{t}{\alpha}S_\alpha^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t}{S_\alpha^{1/\alpha}}\right)
\label{salphadist}
\end{equation}
in the large $t$ limit. The measurement time $t$ is the quantity that is set in any experiment or calculation. Eq.~(\ref{salphadist}) describes the probability to obtain various values of $S_\alpha$ when averaging over the disorder and letting the process evolve up to time $t$. We use this disorder-averaged relation between the local time $S_\alpha$ and $t$ in the next subsection while constructing the representation of the QTM propagator in terms of the two processes.
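The convergence in Eq.~(\ref{etalaplacefnl}) can be probed numerically. In the sketch below (illustrative, with $\alpha=1/2$), $N$ sites are each visited once, so $S_\alpha=N$; the dwell times are drawn from a Pareto law $P(\tau>s)=s^{-\alpha}$, for which the amplitude in Eq.~(\ref{psitaudef}) is $A=\Gamma(1-\alpha)$, and the empirical Laplace transform of $\eta$ is compared with $\exp(-Au^\alpha)$:

```python
import numpy as np
from math import gamma, exp

rng = np.random.default_rng(2)
alpha, N, n_real = 0.5, 2_000, 2_000
A = gamma(1 - alpha)                  # amplitude matching the Pareto tail below

# n_x = 1 on N sites => S_alpha = N and eta = sum(tau_x) / N**(1/alpha)
tau = rng.pareto(alpha, size=(n_real, N)) + 1.0   # P(tau > s) = s**(-alpha), s >= 1
eta = tau.sum(axis=1) / N ** (1.0 / alpha)

u = 0.3
lhs = np.exp(-u * eta).mean()         # empirical <exp(-u eta)>
rhs = exp(-A * u ** alpha)            # predicted exp(-A u**alpha)
```

For large $N$ the two sides agree to within Monte Carlo error, consistent with the stable-law limit.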
\subsection{Subordination}
\label{Subordintaion}
The probability $p({\bf x}'-{\bf x})$ describes the transition probability between two lattice points. It completely determines the spatial process on top of any translationally-invariant lattice, as long as we don't take the disorder due to traps into account.
For example, it determines the probability to find the particle at position ${\bf x}$ after $N$ jumps. In this case $N$ is the local time of the spatial process: the process is terminated when the number of performed jumps reaches a specific threshold, and the position is recorded.
Any strictly growing function of the jumps can be considered as a local time, specifically $S_\alpha$.
When the process starts, $S_\alpha$ equals zero, and its value is updated each time the particle performs a jump. As $S_\alpha$ crosses a given value the process is terminated. The quantity $P_{S_\alpha}({\bf x})$ is the probability to find the particle at position ${\bf x}$ (starting from the origin) after a local time $S_\alpha$ has passed.
Due to the dependence of $S_\alpha$ on the local visitation numbers $n_{\bf x}$ (Eq.~(\ref{localtime})), the local time is a function of both the number of jumps and the trajectory taken by the particle.
The PDF $P({\bf x},t)$ to find the particle at position ${\bf x}$ after measurement time $t$ is obtained by conditioning on all the possible values of $S_\alpha$ that can occur during the process.
One needs to sum over all the possible $P_{S_\alpha}({\bf x})$ multiplied by the appropriate probability to observe such an $S_\alpha$ at time $t$, for a given disorder. After averaging over the disorder, the PDF takes the form
\begin{equation}
\langle P({\bf x},t) \rangle=\sum_{S_\alpha}P_{S_\alpha}({\bf x})
{\cal P}_t(S_\alpha)
\label{subordination01}
\end{equation}
and due to Eq.~(\ref{salphadist}) in the $t\to\infty$ limit we obtain
\begin{equation}
\langle P({\bf x},t) \rangle\sim\int_0^\infty P_{S_\alpha}({\bf x})
\frac{t}{\alpha}S_\alpha^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t}{S_\alpha^{1/\alpha}}\right)\,dS_\alpha.
\label{subordination02}
\end{equation}
where we replaced the summation by an integral~\cite{Bouchaud}. Eq.~(\ref{subordination02}) represents the propagator of the QTM as a subordination of two processes: the spatial process, which has no disorder but is terminated at a random local time $S_\alpha$, and the temporal process, which involves the disorder and provides the mapping between local time and measurement time. While the function $l_{\alpha,A,1}(\dots)$ is known, the missing part is the probability $P_{S_\alpha}({\bf x})$, which is obtained below for the case of a transient spatial process.
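The structure of Eq.~(\ref{subordination01}) is a discrete mixture: the spatial law at fixed local time, weighted by the law of the local time at measurement time $t$. A minimal sketch of this mixing (the toy distributions below are purely illustrative, not derived from the QTM):

```python
def subordinate(P_S_of_x, P_t_of_S):
    """<P(x,t)> = sum_S P_S(x) * P_t(S), the mixture of Eq. (subordination01).
    P_S_of_x: dict S -> {x: prob}; P_t_of_S: dict S -> prob."""
    out = {}
    for S, w in P_t_of_S.items():
        for x, p in P_S_of_x[S].items():
            out[x] = out.get(x, 0.0) + w * p
    return out

# toy example: two possible local times, each with its own spatial law
P_S = {1: {-1: 0.3, 1: 0.7}, 2: {-2: 0.09, 0: 0.42, 2: 0.49}}
P_t = {1: 0.5, 2: 0.5}
mix = subordinate(P_S, P_t)
```

Because each $P_{S_\alpha}({\bf x})$ and the weights ${\cal P}_t(S_\alpha)$ are separately normalized, the mixture is automatically a probability distribution.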
\section{Local time $S_\alpha$}
\label{salphaSec}
The propagator $P_{S_\alpha}({\bf x})$ lacks the disorder that is present in the QTM and describes a simple jump process on a lattice, but it is nevertheless highly non-trivial.
The main complication is that the stopping time $S_\alpha$ depends on the path taken by the particle.
If the local time is simply the number of jumps $N$, the probability to find the particle at ${\bf x}$ after $N$ jumps is completely defined by the corresponding probabilities after $(N-1)$ jumps. This is not the case for $P_{S_\alpha}({\bf x})$.
An arrival at ${\bf x}$ does not increase $S_\alpha$ by $1$, as happens with the number of jumps; rather, the increase of $S_\alpha$ depends on the total number of times that ${\bf x}$ was previously visited.
In the case of a $1$-dimensional simple random walk (RW) the shape of $P_{S_\alpha}({\bf x})$ was computed previously~\cite{Burov1} in the limit $\alpha\to 0$. In this example $P_{S_\alpha}({\bf x})$ has a very distinctive V shape (with a minimum at the origin) and is quite different from the regular Gaussian propagator of the random walk.
Before obtaining $P_{S_\alpha}({\bf x})$, a study of the properties of $S_\alpha$ is in order, specifically of the first two moments of $S_\alpha$, i.e. ${\overline{S_\alpha}}$ and ${\overline{S_\alpha^2}}$. The averaging ${\overline{\dots}}$ is with respect to many trajectories of the RW on a lattice without traps. The results of Sec.~\ref{meansalpha} and Sec.~\ref{secondsalpha} are summarized in Sec.~\ref{deltafunction}.
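For concreteness, the local time of Eq.~(\ref{localtime}) can be evaluated directly from a trajectory. The following sketch uses the convention, assumed here, that $n_{\bf x}$ counts arrivals at a site, so that $S_\alpha=N$ when no site is revisited:

```python
from collections import Counter

def local_time(steps, alpha):
    """S_alpha = sum_x n_x^alpha (Eq. (localtime)), where n_x counts
    the arrivals at site x along the given sequence of increments."""
    n = Counter()
    x = 0
    for dx in steps:
        x += dx
        n[x] += 1
    return sum(k ** alpha for k in n.values())
```

A walk that never revisits a site gives $S_\alpha=N$ (the maximal value), while $N$ back-and-forth jumps between two sites give $2(N/2)^\alpha$, the minimal value discussed in Sec.~\ref{deltafunction}.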
\subsection{${\overline{S_\alpha}}(N)$}
\label{meansalpha}
The mean value of $S_\alpha$ is obtained from Eq.~(\ref{localtime}), ${\overline{S_\alpha}}=\sum_{\bf x}\overline{n_{\bf x}^\alpha}$. Defining $\beta_N({\bf x};k)$ to be the probability for the RW to visit lattice site ${\bf x}$ exactly $k$ times after $N$ steps, we write the average local time after $N$ steps as
\begin{equation}
{\overline{S_\alpha}}(N)=\sum_{\bf x}\sum_{k=0}^\infty k^\alpha \beta_N({\bf x};k).
\label{salphamean01}
\end{equation}
The probability $\beta_N({\bf x};k)$ is the probability to arrive at ${\bf x}$ at least $k$ times minus the probability to arrive at least $k+1$ times during $N$ jumps. Since the $k$th arrival must occur during these $N$ jumps, $\beta_N({\bf x};k)$ is expressed as
\begin{equation}
\begin{array}{ll}
\beta_N({\bf x};k)=\sum_{m=1}^N f_m({\bf x};k) - \sum_{m=1}^N f_m({\bf x};k+1)
& \qquad {\bf x}\neq {\bf 0}
\\
\beta_N({\bf 0};k)=\sum_{m=1}^N f_m({\bf 0};k-1) - \sum_{m=1}^N f_m({\bf 0};k)
&
\end{array}
\label{betaxk01}
\end{equation}
where $f_N({\bf x};k)$ is the probability to reach site ${\bf x}$ for the $k$th time after $N$ steps. By defining $f_N({\bf 0})$ to be the probability of first return to the origin (${\bf x}={\bf 0}$) after $N$ steps, we write the recursive form for $f_N({\bf x};k)$
\begin{equation}
f_N({\bf x};k+1)=\sum_{m=0}^N f_m({\bf x};k)f_{N-m}({\bf 0}).
\label{firstpassagesdef}
\end{equation}
The generating function ${\hat f}_z({\bf x};k)=\sum_{N=0}^\infty z^N f_N({\bf x};k)$ is then
\begin{equation}
{\hat f}_z({\bf x};k)=\left[{\hat f}_z({\bf 0})\right]^{k-1}{\hat f}_z({\bf x})
\label{frecursive}
\end{equation}
where ${\hat f}_z({\bf 0})$ is the generating function of the probability of first return to ${\bf 0}$ and ${\hat f}_z({\bf x})$ is the generating function of the probability of first arrival to ${\bf x}$. Eq.~(\ref{betaxk01}) and Eq.~(\ref{frecursive}) provide the generating function of $\beta_N({\bf x};k)$
\begin{equation}
\begin{array}{ll}
{\hat \beta}_z({\bf x};k)=\frac{1}{1-z}\left[1-{\hat f}_z({\bf 0})\right]\left[{\hat f}_z({\bf 0})\right]^{k-1}{\hat f}_z({\bf x})
& \qquad {\bf x}\neq {\bf 0}
\\
{\hat \beta}_z({\bf 0};k)=\frac{1}{1-z}\left[1-{\hat f}_z({\bf 0})\right]\left[{\hat f}_z({\bf 0})\right]^{k-1}
&
\end{array}
\label{betaxk02}
\end{equation}
Eq.~(\ref{betaxk02}) allows us to compute the generating function of ${\overline{S_\alpha}}(N)$, while the summation $\sum_{\bf x}{\hat f}_z({\bf x})$ can be obtained by means of $c_N({\bf x})$, the probability to find the particle at position ${\bf x}$ after $N$ steps (starting at ${\bf 0}$). Since $c_N({\bf x})$ is related to $f_N({\bf x})$ by
\begin{equation}
c_N({\bf x}) = \delta_{N,0}\delta_{{\bf x},{\bf 0}} +
\sum_{m=1}^N f_m({\bf x}) c_{N-m}({\bf 0})
\label{cnxdefinition}
\end{equation}
the generating functions ${\hat f}_z({\bf x})$ and ${\hat c}_z({\bf x})$ are connected by
\begin{equation}
\begin{array}{l}
{\hat f}_z({\bf x\neq 0}) = {\hat c}_z({\bf x\neq 0})\big/{\hat c}_z({\bf 0})
\\
{\hat f}_z({\bf 0}) =1- 1\big/{\hat c}_z({\bf 0}) .
\end{array}
\label{candfgenerating}
\end{equation}
Together with the fact that $\sum_{\bf x} c_N({\bf x}) =1$ and consequently $\sum_{\bf x} {\hat c}_z({\bf x})=1/(1-z)$, Eqs.~(\ref{salphamean01},\ref{betaxk02},\ref{candfgenerating}) result in
\begin{equation}
{\overline {{\hat{S}_\alpha}}}(z) = \left[\frac{1-{\hat f}_z({\bf 0})}{1-z} \right]^2 \sum_{k=0}^\infty k^\alpha {\hat f}_z({\bf 0})^{k-1}.
\label{salphameanz01}
\end{equation}
For the case when the spatial process is transient, and the probability of eventually returning to the origin, $Q_0=\sum_{N=0}^\infty f_N({\bf 0})$, is less than $1$, the asymptotic behavior ($N\to\infty$) is readily obtained from Eq.~(\ref{salphameanz01}). For $z\to 1$, ${\hat f}_z({\bf 0})\to Q_0<1$. The fact that $\sum_{N=0}^\infty N z^N = z/(1-z)^2$ and the Tauberian theorem~\cite{Weiss} imply that
\begin{equation}
{\overline {S_\alpha}}(N)\sim \Lambda N \qquad (N\to\infty)
\label{salphaNlarge}
\end{equation}
where
\begin{equation}
\Lambda = \frac{\left[1-Q_0\right]^2}{Q_0} Li_{-\alpha}(Q_0)
\label{lambdaconst}
\end{equation}
and $Li_a(b)=\sum_{k=1}^\infty b^k/k^a$ is the polylogarithm function.
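A direct numerical evaluation of Eq.~(\ref{lambdaconst}) is straightforward, since the series defining $Li_{-\alpha}$ converges geometrically for $Q_0<1$. The sketch below (function names are illustrative) also checks two limiting cases: for $\alpha=1$ the local time is exactly $S_1=N$, so $\Lambda=1$, and for $Q_0\to 0$ (no returns) $\Lambda\to 1$ for any $\alpha$:

```python
def li_neg(alpha, b, k_max=2000):
    """Polylogarithm Li_{-alpha}(b) = sum_{k>=1} b^k k^alpha, for |b| < 1."""
    return sum(b ** k * k ** alpha for k in range(1, k_max + 1))

def slope_Lambda(alpha, Q0):
    """Lambda of Eq. (lambdaconst): the mean local time grows as Lambda*N."""
    return (1 - Q0) ** 2 / Q0 * li_neg(alpha, Q0)
```

For the walk of Fig.~\ref{salphaconverge} ($Q_0=0.6$, $\alpha=1/2$) this gives $\Lambda\approx 0.6$.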
The form of the average $S_\alpha$ expressed in Eq.~(\ref{salphaNlarge}) will be essential in the following for the asymptotic representation of $\langle P({\bf x},t) \rangle$ in the transient case by means of $P_{S_\alpha}({\bf x})$. The average behavior of $S_\alpha$ suggests that the local time $S_\alpha$ is not very different from the regular local time, i.e. the number of jumps $N$, at least in the transient case $Q_0<1$. The behavior of the second moment of $S_\alpha$ should indicate whether one can indeed replace the local time $S_\alpha$ by a linear function of $N$.
\subsection{${\overline{S_\alpha^2}}(N)$}
\label{secondsalpha}
The goal of this section is to provide the conditions for a plausible substitution of $S_\alpha$ by its average value ${\overline{S_\alpha}}$. The second moment of $S_\alpha$ is computed in a similar fashion to the first moment in Sec.~\ref{meansalpha}, and the expression for the first moment of $S_\alpha$ (Eq.~(\ref{salphamean01})) is generalized to
\begin{equation}
{\overline{S_\alpha^2}}(N) = \sum_{{\bf x}}\sum_{{\bf x}'}\sum_{k_1}\sum_{k_2}k_1^\alpha k_2
^\alpha \beta_N\left({\bf x};k_1,{\bf x}';k_2\right)
\label{secmomdef}
\end{equation}
where $\beta_N\left({\bf x};k_1,{\bf x}';k_2\right)$ is the probability that in $N$ steps the RW will visit site ${\bf x}$ exactly $k_1$ times and the site ${\bf x}'$ exactly $k_2$ times. This probability is calculated in terms of $f_N\left({\bf x},k_1;{\bf x}',k_2\right)$, the probability to arrive at ${\bf x}$ after $N$ steps for the $k_1$th time while visiting ${\bf x}'$ exactly $k_2$ times. $\beta_N\left({\bf x};k_1,{\bf x}';k_2\right)$ is the probability that the $k_1$th arrival was performed but not the $(k_1+1)$th, i.e.
\begin{equation}
\begin{array}{ll}
\beta_N\left({\bf x};k_1,{\bf x}';k_2\right)= &
\sum_{l=0}^N \left\{\left[
f_l\left({\bf x},k_1;{\bf x}',k_2\right)-f_l\left({\bf x},k_1+1;{\bf x}',k_2\right)
\right]\right.
\\
&
\left. +\left[
f_l\left({\bf x}',k_2;{\bf x},k_1\right)-f_l\left({\bf x}',k_2+1;{\bf x},k_1\right)
\right]\right\}\qquad (k_1+k_2\geq 2).
\end{array}
\label{twopointbeta01}
\end{equation}
The range $k_1>0$ and $k_2>0$ is sufficient since $\beta_N({\bf x},k_1;{\bf x'},k_2)$ is multiplied by $k_1^\alpha k_2^\alpha$ in Eq.~(\ref{secmomdef}). We define the probability to start at ${\bf x}$ and after $N$ steps to reach ${\bf x}'$, without visiting ${\bf x}$ or ${\bf x}'$ on the way, as $M_N({\bf x},{\bf x}')$, and the probability to start at ${\bf x}$ and return to the same site after $N$ steps, without visiting ${\bf x}$ or ${\bf x}'$ on the way, as $T_N({\bf x},{\bf x}')$.
The probability $f_N({\bf x},k_1;{\bf x}';k_2)$ is recursively expressed in terms of $M_N({\bf x},{\bf x}')$ and $T_N({\bf x},{\bf x}')$
\begin{equation}
\begin{array}{ll}
f_N({\bf x},k_1+1;{\bf x}';k_2)= &
\sum_{l=0}^N
f_l({\bf x},k_1;{\bf x}';k_2)T_{N-l}({\bf x},{\bf x}')+f_l({\bf x}',k_2;{\bf x};k_1)M_{N-l}({\bf x}',{\bf x})
\\
\end{array}
\label{frstpsgmt01}
\end{equation}
where $f_N({\bf x},0;{\bf x'},k_2)=0$. Eq.~(\ref{frstpsgmt01}) leads to the following expression in $z$ space
\begin{equation}
{\hat f}_z({\bf x},k_1+1;{\bf x}',k_2)=
{\hat f}_z({\bf x},k_1;{\bf x}',k_2){\hat T}_z({\bf x},{\bf x}')+
{\hat f}_z({\bf x}',k_2;{\bf x},k_1){\hat M}_z({\bf x}',{\bf x}).
\label{frstpsgmt02}
\end{equation}
Applying the additional transformation $k_1\to\xi_1$ and $k_2\to\xi_2$, by performing the double summation $\sum_{k_1=1}^\infty\sum_{k_2=1}^\infty\xi_1^{k_1}\xi_2^{k_2}$ on both sides of Eq.~(\ref{frstpsgmt02}), delivers
\begin{equation}
\left[
1-\xi_1{\hat T}_z({\bf x},{\bf x'})
\right]
{\hat {\tilde f}}_z({\bf x},\xi_1;{\bf x}',\xi_2)
-
\xi_1{\hat M}_z({\bf x}',{\bf x})
{\hat {\tilde f}}_z({\bf x}',\xi_2;{\bf x},\xi_1)
=\xi_1{\hat f'}_z({\bf x},1;{\bf x'},\xi_2)
\label{frstpsgxi01}
\end{equation}
where ${\hat {\tilde f}}_z({\bf x},\xi_1;{\bf x}',\xi_2)=\sum_{k_1=1}^\infty\sum_{k_2=1}^\infty\xi_1^{k_1}\xi_2^{k_2}{\hat f}_z({\bf x},k_1;{\bf x}',k_2)$ and \\ ${\hat f'}_z({\bf x},1;{\bf x'},\xi_2)=\sum_{k_2=1}^\infty\xi_2^{k_2}{\hat f}_z({\bf x},1;{\bf x}',k_2)$.
In a similar fashion we obtain
\begin{equation}
\left[
1-\xi_2{\hat T}_z({\bf x'},{\bf x})
\right]
{\hat {\tilde f}}_z({\bf x'},\xi_2;{\bf x},\xi_1)
-
\xi_2{\hat M}_z({\bf x},{\bf x'})
{\hat {\tilde f}}_z({\bf x},\xi_1;{\bf x'},\xi_2)
=\xi_2{\hat f'}_z({\bf x'},1;{\bf x},\xi_1).
\label{frstpsgxi02}
\end{equation}
Eqs.~(\ref{frstpsgxi01},\ref{frstpsgxi02}) are linear equations in terms of ${\hat{\tilde f}}_z({\bf x},\xi_1;{\bf x'},\xi_2)$ and ${\hat{\tilde f}}_z({\bf x'},\xi_2;{\bf x},\xi_1)$ that attain the solution
\begin{equation}
{\hat{\tilde f}}_z({\bf x},\xi_1;{\bf x'},\xi_2) =
\frac{\xi_1\left[1-\xi_2 {\hat T}_z({\bf x'},{\bf x})\right]{\hat f'}_z({\bf x},1;{\bf x'},\xi_2)+\xi_1\xi_2{\hat M}_z({\bf x'},{\bf x}){\hat f'}_z({\bf x'},1;{\bf x},\xi_1)}
{[1-\xi_1{\hat T}_z({\bf x},{\bf x'})]
[1-\xi_2{\hat T}_z({\bf x'},{\bf x})]
-\xi_1\xi_2{\hat M}_z({\bf x},{\bf x'}){\hat M}_z({\bf x'},{\bf x})}.
\label{genformf01}
\end{equation}
Since $f_N({\bf x'},k+1;{\bf x},0)=\sum_{l=0}^N f_l({\bf x'},k;{\bf x},0) T_{N-l}({\bf x'},{\bf x})$, the transform ${\hat{\tilde {f'}}}_z({\bf x'},\xi_2;{\bf x},0)=\sum_{k=1}^\infty \xi_2^k {\hat f}_z({\bf x'},k;{\bf x},0)$ is
\begin{equation}
{\hat{\tilde {f'}}}_z({\bf x'},\xi_2;{\bf x},0) =
\frac{\xi_2 {\hat f}_z({\bf x'},1;{\bf x},0)}{1-\xi_2 {\hat T}_z({\bf x'},{\bf x})}.
\label{specformf02}
\end{equation}
By using the expression $f_N({\bf x},1;{\bf x'},k_2)=\sum_{l=0}^N f_l({\bf x'},k_2;{\bf x},0) M_{N-l}({\bf x'},{\bf x})$ and Eq.~(\ref{specformf02}) we obtain
\begin{equation}
{\hat{\tilde{f'}}}_z({\bf x},1;{\bf x'},\xi_2) =
\frac{\xi_2 {\hat f}_z({\bf x'},1;{\bf x},0)
{\hat M}_z({\bf x'},{\bf x})}
{1-\xi_2 {\hat T}_z({\bf x'},{\bf x})},
\label{specformf03}
\end{equation}
and then by substitution of Eqs.~(\ref{genformf01},\ref{specformf03}) in Eq.~(\ref{twopointbeta01}), and using Eq.~(\ref{frstpsgmt02}), we obtain for ${\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)=\sum_{N=0}^\infty\sum_{k_1=1}^\infty\sum_{k_2=1}^\infty z^N {\xi_1}^{k_1}{\xi_2}^{k_2} \beta_N({\bf x};k_1,{\bf x'};k_2)$
\begin{equation}
\begin{array}{ll}
{\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)=
&
\frac{1}{1-z}\left\{
\left(
1-{\hat T}_z({\bf x},{\bf x'})-{\hat M}_z({\bf x'},{\bf x})
\right)
\frac{\xi_1\xi_2 {\hat f}_z({\bf x'},1;{\bf x},0)
{\hat M}_z({\bf x'},{\bf x})+{\xi_1}^2\xi_2{\hat M}_z({\bf x'},{\bf x})\frac{ {\hat f}_z({\bf x},1;{\bf x'},0)
{\hat M}_z({\bf x},{\bf x'})}
{1-\xi_1 {\hat T}_z({\bf x},{\bf x'})}}
{[1-\xi_1{\hat T}_z({\bf x},{\bf x'})]
[1-\xi_2{\hat T}_z({\bf x'},{\bf x})]
-\xi_1\xi_2{\hat M}_z({\bf x},{\bf x'}){\hat M}_z({\bf x'},{\bf x})}\right.
\\
&
\left.+
\left(
1-{\hat T}_z({\bf x'},{\bf x})-{\hat M}_z({\bf x},{\bf x'})
\right)
\frac{\xi_2\xi_1 {\hat f}_z({\bf x},1;{\bf x'},0)
{\hat M}_z({\bf x},{\bf x'})+{\xi_2}^2\xi_1{\hat M}_z({\bf x},{\bf x'})\frac{ {\hat f}_z({\bf x'},1;{\bf x},0)
{\hat M}_z({\bf x'},{\bf x})}
{1-\xi_2 {\hat T}_z({\bf x'},{\bf x})}}
[1-\xi_2{\hat T}_z({\bf x'},{\bf x})]
[1-\xi_1{\hat T}_z({\bf x},{\bf x'})]
-\xi_2\xi_1{\hat M}_z({\bf x'},{\bf x}){\hat M}_z({\bf x},{\bf x'})}
\right\}
.
\end{array}
\label{twopointbeta02}
\end{equation}
The generating functions of the two-point probabilities $T_N({\bf x},{\bf x'})$, $M_N({\bf x},{\bf x'})$ and $f_N({\bf x},1;{\bf x'},0)$ that define the behavior of ${\hat{\tilde{\beta}}}_z({\bf x},\xi_1;{\bf x'},\xi_2)$ are expressed in terms of the generating function of the probability of first arrival $f_N({\bf x})$, which is provided by Eq.~(\ref{candfgenerating}). In Appendix~\ref{twopintgen} we show that
\begin{equation}
\begin{array}{lll}
{\hat f}_z({\bf x},1;{\bf x'},0) & =
&
\frac{{\hat f}_z({\bf x})-{\hat f}_z({\bf x'}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}
\\
{\hat M}_z({\bf x},{\bf x'}) & =
&
\frac{{\hat f}_z({\bf x'-x})-{\hat f}_z({\bf 0}){\hat f}_z({\bf x'-x})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}
\\
{\hat T}_z({\bf x},{\bf x'}) & =
&
\frac{{\hat f}_z({\bf 0})-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}.
\end{array}
\label{twopointonepoint}
\end{equation}
Since the generating function of $\beta_N({\bf x},k_1;{\bf x'},k_2)$ is represented in terms of ${\hat f}_z({\bf x})$, ${\hat f}_z({\bf x'})$, ${\hat f}_z({\bf x-x'})$ and ${\hat f}_z({\bf x'-x})$, the summation over ${\bf x}$ and ${\bf x'}$ can be achieved in the $t\to\infty$ limit. Due to Eq.~(\ref{candfgenerating}) and the already mentioned fact that $\sum_{\bf x}{\hat c}_z({\bf x})=1/(1-z)$, the summation over all possible ${\bf x}$ and ${\bf x'}$ on the right hand side of Eq.~(\ref{twopointbeta02}) can be expanded in a power series in $1/(1-z)$. The Tauberian theorem~\cite{Weiss} states that the leading order in $t$ space is provided by the leading order of $1/(1-z)$ in the $z\to 1$ limit in $z$ space. It is clear that $\sum_{\bf x}\sum_{\bf x'}{\hat c}_z({\bf x}){\hat c}_z({\bf x'})=1/(1-z)^2$, but in Eq.~(\ref{twopointbeta02}) all the products of generating functions of single point probabilities are of mixed origin, e.g. ${\hat c}_z({\bf x}){\hat c}_z({\bf x-x'})$, ${\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x})$ and all other possibilities. Moreover, substitution of Eq.~(\ref{twopointonepoint}) and Eq.~(\ref{candfgenerating}) in Eq.~(\ref{twopointbeta02}) shows that most of the products will include more than two terms, e.g. ${\hat c}_z({\bf x}){\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x})$. In Appendix~\ref{fouriecxz} we show that
\begin{equation}
\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}){\hat c}_z({\bf x'-x})= \frac{1}{(1-z)^2}
\label{doublesumz01}
\end{equation}
for any case of a transient RW (the roles of ${\bf x}$ and ${\bf x'}$ can be interchanged). Any other terms of the form $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x})$ or $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}){\hat c}_z({\bf x}){\hat c}_z({\bf x'-x})$ (or generally products of any number of terms greater than $2$) grow slower than $1/(1-z)^2$ when $z\to 1$ (see Appendix~\ref{fouriecxz}). This means that when expanding the denominator in Eq.~(\ref{twopointbeta02}) and utilizing Eq.~(\ref{twopointonepoint}), all the terms in the expansion, except the zero order, i.e. $1/[1-\xi_1{\hat T}_z({\bf x},{\bf x'})][1-\xi_2{\hat T}_z({\bf x'},{\bf x})]$, will grow slower than $1/(1-z)^2$ after summation over ${\bf x}$ and ${\bf x'}$.
Then in the $z\to 1$ limit we use
\begin{equation}
\begin{array}{llll}
{\hat f}_z({\bf x},1;{\bf x'},0) & =
&
{\hat f}_z({\bf x})
&
\\
{\hat M}_z({\bf x},{\bf x'}) & =
&
{\hat f}_z({\bf x'-x})-{\hat f}_z({\bf 0}){\hat f}_z({\bf x'-x})
& \qquad (z\to 1)
\\
{\hat T}_z({\bf x},{\bf x'}) & =
&
{\hat f}_z({\bf 0})
&.
\end{array}
\label{zto1lim01}
\end{equation}
and the only relevant terms in the summation over ${\bf x}$ and ${\bf x'}$ are
\begin{equation}
\begin{array}{ll}
\sum_{\bf x}\sum_{\bf x'}
{\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)\underset{z\to 1}{\longrightarrow}
\sum_{\bf x}\sum_{\bf x'}
\frac{(1-{\hat f}_z({\bf 0}))^2}{1-z}
\frac{\xi_1 \xi_2}{[1-\xi_1{\hat f}_z({\bf 0})]
[1-\xi_2{\hat f}_z({\bf 0})]
}
&
\left\{
{\hat f}_z({\bf x'})
{\hat f}_z({\bf x-x'})
\right.
\\
&
\left.+
{\hat f}_z({\bf x})
{\hat f}_z({\bf x'-x})
\right\}.
\end{array}
\label{sumbetazto1}
\end{equation}
Substituting the expression in Eq.~(\ref{doublesumz01}) and Eq.~(\ref{candfgenerating}) into Eq.~(\ref{sumbetazto1}) leads to
\begin{equation}
\sum_{\bf x}\sum_{\bf x'}
{\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)\underset{z\to 1}{\longrightarrow}
\frac{2(1-{\hat f}_z({\bf 0}))^4}{(1-z)^3}
\frac{\xi_1 \xi_2}{[1-\xi_1{\hat f}_z({\bf 0})]
[1-\xi_2{\hat f}_z({\bf 0})]
},
\label{sumbetazto1_01}
\end{equation}
and since $\sum_{k=1}^\infty \xi^k \left({\hat f}_z({\bf 0})\right)^{k-1}=\xi\big/[1-\xi{\hat f}_z({\bf 0})]$
\begin{equation}
\sum_{\bf x}\sum_{\bf x'}
{\hat{{\beta}}}_z({\bf x};k_1,{\bf x'};k_2)\underset{z\to 1}{\longrightarrow}
\frac{2(1-{\hat f}_z({\bf 0}))^4}{(1-z)^3}
{\hat f}_z({\bf 0})^{k_1-1}
{\hat f}_z({\bf 0})^{k_2-1}.
\label{sumbetazto1_1}
\end{equation}
Eventually from Eq.~(\ref{sumbetazto1_1}) and Eq.~(\ref{secmomdef}) we obtain
\begin{equation}
{\overline{\hat{S_\alpha^2}}}(z)\underset{z\to 1}{\longrightarrow} \frac{(1-{\hat f}_z({\bf 0}))^4}{{\hat f}_z({\bf 0})^2}
\left\{Li_{-\alpha}\left({\hat f}_z({\bf 0})\right)\right\}^2
\frac{2}{(1-z)^3}
\label{secmomzform01}
\end{equation}
Then, according to the identity $\sum_{N=0}^\infty z^N N(N-1)=2z^2/(1-z)^3$ and the Tauberian theorem, the asymptotic behavior of ${\overline{S_\alpha^2}}(N)$ is
\begin{equation}
{\overline{{S_\alpha^2}}}(N)\sim \frac{(1-Q_0)^4}{Q_0^2}
\left\{Li_{-\alpha}(Q_0)\right\}^2
N^2 \qquad N\to\infty.
\label{secmomzform02}
\end{equation}
This relation shows that for any transient RW, in the large $N$ limit the second moment of $S_{\alpha}(N)$ converges to the square of the mean of $S_{\alpha}(N)$, i.e.
\begin{equation}
\frac{{\overline{S_{\alpha}(N)^2}}}
{{\overline{S_{\alpha}(N)}}^2}
\underset{N\to\infty}{\longrightarrow}1.
\label{convergensmom}
\end{equation}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./sconverg_a.pdf}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./sconverg_b.pdf}
\end{subfigure}
\caption{ Convergence of $S_\alpha(N)$ to $\Lambda N$. Both panels describe the behavior of $S_\alpha$ for a one dimensional RW with probability $0.7$ to make a step $+1$ and probability $0.3$ to make a step $-1$. The thick lines in both panels are simulation results, while the dashed lines are the theoretical prediction of Eq.~(\ref{deltaconverge}), with $\Lambda$ provided by Eq.~(\ref{lambdaconst}). For both panels $Q_0=0.6$. Panel {\bf (a)} presents the case $\alpha=0.5$ and panel {\bf (b)} the case $\alpha=0.25$.
}
\label{salphaconverge}
\end{figure}
\subsection{Convergence to a $\delta$-function}
\label{deltafunction}
We have shown that the distribution of $S_\alpha$ is such that in the $N\to\infty$ limit the square of the first moment converges to the second moment. The minimal value of $S_\alpha/N$ is $2(N/2)^\alpha /N$, which is achieved if the RW performs $N/2$ back and forth jumps between two sites. The maximal value of $S_\alpha /N$ is $1$, which is achieved if the RW never visits any site twice.
Since those two limits are achieved only for very specific trajectories of the RW, the probability of the minimal and maximal values of $S_\alpha$ converges to $0$ in the $N\to\infty$ limit.
For the random variable
\begin{equation}
s=S_\alpha/N,
\label{sdivNdef}
\end{equation}
the PDF $\lambda(s)$ is defined for $0\leq s \leq 1$, and $\lambda(s)\to 0$ when $s\to 0$ or $s\to 1$.
Moreover, the proven equivalence of ${\overline{S_\alpha^2}}$ and ${\overline{S_\alpha}}^2$ in the $N\to\infty$ limit means that
\begin{equation}
\left(\int_0^1s\lambda(s)\,ds\right)^2=\int_0^1\left(s\right)^2\lambda(s)\,ds\qquad N\to\infty.
\label{jenseneq01}
\end{equation}
Since $\lambda(s)$ is a PDF and $\left(\dots\right)^2$ is a strictly convex function, Jensen's inequality~\cite{Jensen} states that $\left(\int_0^1s\lambda(s)\,ds\right)^2\leq\int_0^1 s^2\lambda(s)\,ds$, and the equality is achieved only when $s$ is constant, i.e. $\lambda(s)$ is a $\delta$-function. Then from Eq.~(\ref{salphaNlarge}) we obtain that
\begin{equation}
\lambda(s)\underset{N\to\infty}{\longrightarrow} \delta\left(s-\Lambda\right),
\label{deltaconverge}
\end{equation}
where the constant $\Lambda$ is provided in Eq.~(\ref{lambdaconst}).
This result means that in the large $N$ limit the local time $S_\alpha$ and the number of jumps $N$ are equivalent up to the transformation $S_\alpha\to \Lambda N$. This result is presented in Fig.~\ref{salphaconverge}, where the random variable $S_\alpha(N)/N$ (obtained from numerical simulation) converges to a non-zero constant for large $N$. In the next section we utilize this result to establish the form of $P_{S_\alpha}({\bf x})$ and a simplified representation of the positional probability density function, i.e. $P({\bf x},t)$.
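The convergence of $s=S_\alpha/N$ to the constant $\Lambda$ can be reproduced with a short Monte Carlo run. The sketch below is a rough illustration of Fig.~\ref{salphaconverge}; the sample sizes are assumptions chosen for speed, and $n_{\bf x}$ counts arrivals. It simulates the biased walk with $q=0.7$ and $\alpha=1/2$, for which $Q_0=0.6$ and $\Lambda\approx 0.6$:

```python
import random

def sample_s(q, alpha, N, rng):
    """One realization of s = S_alpha/N for a 1d RW with step +1 w.p. q."""
    counts = {}
    x = 0
    for _ in range(N):
        x += 1 if rng.random() < q else -1
        counts[x] = counts.get(x, 0) + 1
    return sum(k ** alpha for k in counts.values()) / N

rng = random.Random(1)
samples = [sample_s(0.7, 0.5, 5000, rng) for _ in range(100)]
mean_s = sum(samples) / len(samples)
second = sum(s * s for s in samples) / len(samples)
ratio = second / mean_s ** 2  # Eq. (convergensmom): approaches 1
```

The empirical mean of $s$ sits near $\Lambda$, and the ratio of the second moment to the squared mean is already very close to $1$ at $N=5000$, in line with Eq.~(\ref{convergensmom}).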
\section{Double subordination and the equivalence of CTRW and the transient QTM}
\label{doublesubordination}
The PDF $P({\bf x},t)$, as it is presented by Eq.~(\ref{subordination02}), depends on $P_{S_\alpha}({\bf x})$.
The form of $P_{S_\alpha}({\bf x})$ is obtained by using again the subordination approach, where the local time $S_\alpha$ is subordinated to $N$, the number of jumps performed, and the spatial process is provided by $W_N({\bf x})$, the PDF of the regular RW, i.e.
\begin{equation}
P_{S_\alpha}({\bf x})=\sum_{N=0}^\infty W_N({\bf x}) {\cal G}_{S_\alpha}(N,{\bf x}).
\label{doublesub01}
\end{equation}
${\cal G}_{S_\alpha}(N,{\bf x})$ is the probability to perform $N$ steps before reaching ${\bf x}$, provided that the value of $S_\alpha$ is known. In the previous section we have shown that in the $N\to\infty$ limit the PDF of $s=S_\alpha/N$, i.e. $\lambda(s)$, converges to $\delta(s-\Lambda)$. For $\lambda(s)$, $S_\alpha$ is the random variable and $N$ is the parameter. For ${\cal G}_{S_\alpha}$, $N$ is the random variable and $S_\alpha$ is the parameter. The convergence of $\lambda(s)$ to a $\delta$-function shows that in the $N\to\infty$ limit these two quantities are interchangeable, and then for a transient RW
\begin{equation}
{\cal G}_{S_\alpha}(N,{\bf x})\underset{S_\alpha\to\infty}{\longrightarrow}
\delta\left({S_\alpha-\Lambda N}\right),
\label{gsalpharep}
\end{equation}
independent of the value of ${\bf x}$.
The double subordination approach prescribes for the disorder-averaged PDF $\langle P({\bf x},t) \rangle$ the form
\begin{equation}
\langle P({\bf x},t) \rangle=\sum_{S_\alpha}\sum_{N=0}^\infty
W_N({\bf x}) {\cal G}_{S_\alpha}(N,{\bf x})
{\cal P}_t(S_\alpha)
\label{doublesub02}
\end{equation}
where we used Eqs.(\ref{subordination01},\ref{doublesub01}).
When taking the limit $t\to\infty$, the form of ${\cal P}_t(S_\alpha)$ in Eq.~(\ref{salphadist}) dictates that only large $S_\alpha$ need to be considered; then, according to Eq.~(\ref{gsalpharep}), only large $N$ are of interest, and finally we obtain
\begin{equation}
\langle P({\bf x},t) \rangle
\sim\int_0^\infty W_{ N}({\bf x})
\frac{t\big/\Lambda^{1/\alpha}}{\alpha} N^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t\big/\Lambda^{1/\alpha}}{N^{1/\alpha}}\right)\,dN
\qquad t\to\infty,
\label{pxtformfin}
\end{equation}
where the transition to integration is the regular practice of the subordination technique~\cite{Bouchaud}.
It is important to notice that in the case of the continuous time random walk (CTRW)~\cite{Weiss} the particle experiences a new waiting time $\tau$ at each jump, independent of previous visits, even if it is currently located at a previously visited site. This makes the CTRW a kind of mean-field approximation of the QTM; specifically, according to Eq.~(\ref{localtime}), for CTRW $S_\alpha=N$. Accordingly, only one level of subordination is needed and $P_{S_\alpha}({\bf x})$ is simply $W_N({\bf x})$, which leads to
\begin{equation}
\langle P({\bf x},t) \rangle_{CTRW}
\sim\int_0^\infty W_{ N}({\bf x})
\frac{t}{\alpha} N^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t}{N^{1/\alpha}}\right)\,dN
\qquad t\to\infty.
\label{pxtctrw}
\end{equation}
Comparison of Eq.~(\ref{pxtctrw}) and Eq.~(\ref{pxtformfin}) leads to
\begin{equation}
\langle P({\bf x},t) \rangle_{QTM} \sim \langle P({\bf x},t/\Lambda^{1/\alpha}) \rangle_{CTRW} \qquad t\to\infty,
\label{equivalence}
\end{equation}
or simply put: the disorder averaged propagator of a transient QTM is equivalent to the propagator of the CTRW taken at time $t/\Lambda^{1/\alpha}$. We have thus proven that a simple transformation of time for the CTRW
\begin{equation}
t\to t\big/\Lambda^{1/\alpha}
\label{timechange}
\end{equation}
makes this model sufficient to asymptotically represent the transient case of the QTM. Eq.~(\ref{equivalence}) states that for every situation in which the propagator of the CTRW can be computed~\cite{Barkai}, the propagator of the QTM can be computed as well. The constant $\Lambda^{-1/\alpha}$ is provided by Eq.~(\ref{lambdaconst}) and displayed in Fig.~\ref{qzeroplot} for $0\leq Q_0 <1$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./qzeroplt.pdf}
\caption{
Behavior of $\Lambda^{-1/\alpha}$, with $\Lambda=\frac{\left[1-Q_0\right]^2}{Q_0} Li_{-\alpha}(Q_0)$ (the pre-factor of the temporal transformation $t\to t/\Lambda^{1/\alpha}$), as a function of the return probability $Q_0$, for $\alpha=0.75$. The divergence for $Q_0\to 1$ signifies the limitation of this transformation strictly to the transient case.
}
\label{qzeroplot}
\end{figure}
This constant is positive and $\geq 1$ for any $0\leq Q_0<1$. In the limit $Q_0\to 1$, i.e. the approach to the recurrent case, $Li_{-\alpha}(Q_0)\sim (1-Q_0)^{-1-\alpha}$~\cite{Abramowitz} and $\Lambda^{-1/\alpha}\sim(1-Q_0)^{-(1-\alpha)/\alpha}$ diverges. This divergence signifies the limitation of the presented result to the transient case $0\le Q_0 <1$. When $Q_0=0$ the QTM is exactly described by the CTRW, since the particle never returns to a previously visited site; indeed, in this case $\Lambda^{-1/\alpha}=1$. For any $0<Q_0 <1$ the constant is greater than $1$. This means that the QTM process is faster than the CTRW, i.e. the two models attain the same PDFs, but for the QTM this is achieved on shorter time-scales. Such behavior can be attributed to the fact that the CTRW never resamples previously visited traps (the disorder is annealed), while this is not true for the QTM. Since the CTRW never resamples previously visited traps, it has a higher probability (compared to the QTM) to find deeper traps, which means that its propagation is slower than that of the QTM, on average.
For the $1$-dimensional case of a biased RW on a simple lattice with constant spacing, $W_N(x)$ is a binomial distribution that is very well approximated by the Gaussian form
\begin{equation}
W_N(x)=\frac{1}{\sqrt{2\pi 4q(1-q)N}}e^{-\frac{\left(x-(2q-1)N\right)^2}{8q(1-q)N}}\qquad \left(N \gg 1\right),
\label{gauss1dbias}
\end{equation}
where $q$ is the probability to jump one step to the right on the lattice and $1-q$ is the probability to jump to the left. The return probability for this process is $Q_0=2(1-q)$, as proven in the next section. For several values of $\alpha$ the form of $l_{\alpha,A,1}$ is explicitly known~\cite{Barkai}; specifically, for $\alpha=1/2$,
\begin{equation}
l_{1/2,1,1}(\eta)=\frac{1}{2\sqrt{\pi}}\eta^{-3/2}e^{-\frac{1}{4\eta}}.
\label{lohalfdist}
\end{equation}
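This density is normalized, as can be checked numerically. The sketch below is a crude Riemann-sum check; the integration cutoff is an assumption, and the heavy $\eta^{-3/2}$ tail accounts for the small deficit in the truncated integral:

```python
from math import sqrt, pi, exp

def l_half(eta):
    """One-sided Levy density l_{1/2,1,1} of Eq. (lohalfdist)."""
    return eta ** -1.5 * exp(-1.0 / (4.0 * eta)) / (2.0 * sqrt(pi))

# midpoint Riemann sum of the density up to eta = 2000
h = 0.002
mass = sum(l_half((k + 0.5) * h) * h for k in range(1_000_000))
# the tail beyond eta = 2000 carries about 1/sqrt(pi*2000) ~ 0.013
```

The essential singularity at $\eta\to 0$ suppresses the density near the origin, so the sum is well behaved even with a fixed step.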
Then according to Eq.~(\ref{pxtformfin}), for the $1$-dimensional case the PDF is provided by
\begin{equation}
\begin{array}{ll}
\langle P(x,t) \rangle \sim & \displaystyle
\int_0^\infty
\frac{e^{-\frac{\left(x-(2q-1)N\right)^2}{8q(1-q)N}} }{\sqrt{2\pi^2 4q(1-q)N t}} \left(\frac{2(1-q)}{(2q-1)^2 Li_{-1/2}(2(1-q))}\right)^{-1}
\\
&
\exp\left[-\frac{N^2}{4t}\left(\frac{2(1-q)}{(2q-1)^2 Li_{-1/2}(2(1-q))}\right)^{-2}\right]
\,dN\qquad (t\to\infty).
\end{array}
\label{onedbiasedpxt}
\end{equation}
In Fig.~\ref{pxtbiasfig} we compare a numerical simulation of the QTM with the theoretical result of Eq.~(\ref{onedbiasedpxt}). The comparison is performed for $t=10^3$, and the agreement is excellent even for this finite time.
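As a consistency check of Eq.~(\ref{pxtformfin}) for this one-dimensional case, one can verify numerically that the resulting $\langle P(x,t)\rangle$ integrates to unity. The sketch below (with an illustrative, assumed choice $t=100$ and crude midpoint Riemann sums) assembles the Gaussian spatial part of Eq.~(\ref{gauss1dbias}), the Lévy density of Eq.~(\ref{lohalfdist}), and $\Lambda$ from Eq.~(\ref{lambdaconst}) with $Q_0=2(1-q)$; for $\alpha=1/2$ the temporal factor of Eq.~(\ref{pxtformfin}) collapses to a Gaussian in $N$:

```python
from math import sqrt, pi, exp

q, t = 0.7, 100.0
Q0 = 2 * (1 - q)
Lam = (1 - Q0) ** 2 / Q0 * sum(Q0 ** k * sqrt(k) for k in range(1, 400))

def integrand(x, N):
    """W_N(x) times the N-density of Eq. (pxtformfin) for alpha = 1/2,
    which simplifies to (Lam/sqrt(pi t)) exp(-Lam^2 N^2 / 4t)."""
    var = 4 * q * (1 - q) * N
    gauss = exp(-(x - (2 * q - 1) * N) ** 2 / (2 * var)) / sqrt(2 * pi * var)
    levy = (Lam / sqrt(pi * t)) * exp(-((Lam * N) ** 2) / (4 * t))
    return gauss * levy

# midpoint sums in N and x; ranges chosen to cover the bulk of the mass
mass = sum(integrand(x + 0.5, n + 0.5)
           for n in range(300) for x in range(-80, 220))
```

The total probability comes out very close to $1$, as it must for any normalized propagator.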
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./pxtq07alp05.pdf}
\caption{
Comparison of the numerical simulation of the PDF for a 1d QTM with bias and the theoretical prediction of Eq.~(\ref{onedbiasedpxt}). The symbols are the numerical simulation, while the thick line is the theory, without fitting parameters. The parameters are: $A=1$, $q=0.7$, $\alpha=1/2$, and the spacing of the lattice is $1$.
}
\label{pxtbiasfig}
\end{figure}
\subsection{Moments of the QTM and non-linear response}
\label{nonlinresp}
The explicit form of the disorder averaged PDF, expressed by Eq.~(\ref{pxtformfin}), permits the evaluation of various moments $\langle |{\bf x}|^\mu \rangle$. Indeed, the approximation works in the regime where the measurement time is sufficiently large and many jumps have been performed. In this limit the probability density $W_N({\bf x})$ attains a Gaussian form and all the moments $\int |{\bf x}|^\mu W_N({\bf x})\,d{\bf x}$ can be easily computed~\cite{Winkelbauer}. Generally we can write
\begin{equation}
\int |{\bf x}|^\mu W_N({\bf x})d\,{\bf x} = B_\mu N^{\gamma_\mu}.
\label{gammamudef}
\end{equation}
The constant $B_\mu$ depends on the power $\mu$ and on the lattice, which determine the properties of the Gaussian approximation, i.e. the second moment and the mean of the Gaussian distribution. Then, according to Eq.~(\ref{pxtformfin}), the $\mu$th moment $\langle |{\bf x}|^\mu \rangle$ is provided by $\int_0^\infty (B_\mu t\big/\Lambda^{1/\alpha}\alpha)N^{\gamma_\mu-1-1/\alpha}l_{\alpha,A,1}\left(t/(\Lambda N)^{1/\alpha}\right)dN$. Since $\int_0^\infty y^q l_{\alpha,A,1}(y)dy=A^{q/\alpha}\Gamma[1-q/\alpha]/\Gamma[1-q]$ (for $q/\alpha<1$)~\cite{Barkai}, the expression for the moments of ${\bf x}$ takes the form
\begin{equation}
\langle |{\bf x}|^\mu \rangle= \frac{\Gamma[1+\gamma_\mu]}{A^{\gamma_\mu}\Gamma[1+\alpha\gamma_\mu]}\frac{B_\mu}{\Lambda^{\gamma_\mu}}
t^{\alpha\gamma_\mu}.
\label{momentexpression}
\end{equation}
The constants $\gamma_\mu$, $B_\mu$ and $Q_0$ depend only on the lattice dimension and the type of the RW on top of this lattice.
Of special interest is the behavior of the first moment when an external force is applied, i.e., the response of the system to a bias. In the QTM the force is applied in such a way that it does not affect the dwell times $\tau_{\bf x}$ but rather determines the transition probabilities between different locations~\cite{Bertin02,MonthusSec,Deborah01}. When the imposed external force $F_0$ is sufficiently weak the transition probabilities $p({\bf x - x'})$ should be proportional to $\exp(F_0({\bf x-x'})/2k_BT)$ for the transition from ${\bf x'}$ to ${\bf x}$, and to $\exp(-F_0({\bf x-x'})/2k_BT)$ for the reverse transition. Here we assume that the force is constant and applied in the direction of ${\bf x-x'}$; otherwise one needs to use the projection of the force on the ${\bf x-x'}$ direction. Since we are interested only in the limit of a weak force it is possible to expand the exponential up to first order in $F_0$. In the case of a simple binomial RW on a $1$-dimensional lattice the probability $q$ to perform a jump to the right is $q=\frac{1}{2}(1+F_0a/2k_BT)$ and the probability to jump to the left is $1-q=\frac{1}{2}(1-F_0a/2k_BT)$, where $a$ is the lattice spacing. For dimensions $d\geq 2$ a similar expansion takes place; the only difference is that $F_0$ is multiplied by some $\cos(\theta)$, where $\theta$ is the appropriate angle between the direction of the force and the local axis of the lattice.
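As a concrete illustration, the sketch below (an illustrative toy check, not the paper's simulation code) compares the first-order probability $q=\frac{1}{2}(1+F_0a/2k_BT)$ with the normalized Boltzmann weights it approximates; here `eps` stands for the dimensionless combination $F_0a/2k_BT$:

```python
# Sketch: weak-force expansion of the jump probabilities on a 1d lattice.
# "eps" is the dimensionless combination F_0 a / (2 k_B T).
import numpy as np

def bias_probability(eps):
    """First-order probability to jump right, q = (1 + F0*a/2kT)/2."""
    return 0.5 * (1.0 + eps)

eps = 0.05
q = bias_probability(eps)                      # first-order value: 0.525
w_right, w_left = np.exp(eps), np.exp(-eps)    # unnormalized Boltzmann weights
q_exact = w_right / (w_right + w_left)         # exact normalized probability

print(q, q_exact)                              # the two agree to O(eps^2)
```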
The presence of the force affects not only the constant $B_\mu$ in Eq.~(\ref{momentexpression}) but also the constant $\Lambda$ by means of $Q_0$.
Of special interest is the one-dimensional case. For $d=1$, $Q_0$ in the absence of an external force is $1$~\cite{Weiss}.
When a small external force $F_0$ is added, $Q_0$ decreases but still attains values in the vicinity of $1$ and consequently (due to the form of $\Lambda$ in Eq.~(\ref{lambdaconst})) contributes a non-trivial force dependence to the first moment.
The first moment of the one-dimensional case in the presence of a weak force $F_0$ corresponds to traps on a simple one-dimensional lattice with probability $q=\frac{1}{2}(1+F_0a/2k_BT)$ to jump to the right and $1-q$ to jump to the left.
For the spatial process $W_N(x)$ this is the case of a binomial random walk and thus for sufficiently large $N$ the Gaussian limit is attained
\begin{equation}
W_N(x)\sim\frac{\exp\left[-\frac{\left(x-(2q-1)N\right)^2}{8q(1-q)N}\right]}
{\sqrt{8\pi q(1-q)N}}
\label{binomial1dspat}
\end{equation}
and
\begin{equation}
\int_{-\infty}^{\infty} x W_N(x)\,dx = (2q-1) N
\label{binom1dmoment}
\end{equation}
meaning that $B_1=2q-1$ and $\gamma_1=1$. Eq.~(\ref{binom1dmoment}) describes the linear response of the spatial part of the QTM to the external force.
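The statement $B_1=2q-1$, $\gamma_1=1$ can also be checked from the exact binomial distribution rather than its Gaussian limit; the following short sketch (illustrative only) computes the mean displacement after $N$ steps directly:

```python
# Sketch: exact check that the mean displacement of a biased binomial walk
# after N steps is (2q - 1) N, i.e. B_1 = 2q - 1 and gamma_1 = 1 (spacing 1).
from math import comb

def mean_displacement(q, N):
    # after k right jumps out of N, the position is x = 2k - N
    return sum(comb(N, k) * q**k * (1 - q)**(N - k) * (2*k - N)
               for k in range(N + 1))

q, N = 0.525, 200
print(mean_displacement(q, N), (2*q - 1) * N)  # both equal 10 here
```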
The return probability $Q_0\underset{z\to 1}{\to}\sum_{N=0}^\infty f_N({\bf 0})={\hat f}_{z}({\bf 0})$ is provided by Eq.~(\ref{candfgenerating}), while the
Fourier transform of the jump probability $p({\bf x})$, ${\overline p}({\bf k})=\sum_{\bf x} e^{i{\bf k \cdot x}}p({\bf x})$, dictates the form of ${\hat c}_z({\bf 0})$ for dimension $d$~\cite{Weiss}
\begin{equation}
{\hat c}_z({\bf 0})=\frac{1}{(2\pi)^d}\int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi}\frac{1}{1-z{\overline p({\bf k})}}\,d^d{\bf k}.
\label{cxgenerating}
\end{equation}
For $d=1$, ${\overline{p}}(k)=q\exp(ik)+(1-q)\exp(-ik)$ and Eq.~(\ref{cxgenerating}) is
\begin{equation}
{\hat c}_z({\bf 0})=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{1-z(q\exp(i k)+(1-q)\exp(-i k))}\,dk,
\label{cxgendeq1}
\end{equation}
by changing the variable to $y=\exp(ik)$ the integral in Eq.~(\ref{cxgendeq1}) is transformed into
\begin{equation}
{\hat c}_z({\bf 0})=\frac{1}{2\pi i}\oint_{|y|=1}\frac{1}{y-zq y^2-z(1-q)}\,dy.
\label{cxgendeq1_01}
\end{equation}
For any $z<1$ the two solutions of $-zq y^2+y-z(1-q)=0$ are located on the real line, one with $y>1$ and the other with $y<1$.
This means that for all $z<1$ the integral is determined by a single pole inside the unit circle. This pole is located at $y=(1-q)/q$ for $z=1$, and the integral in Eq.~(\ref{cxgendeq1_01}) in the $z\to 1$ limit is
\begin{equation}
{\hat c}_0({\bf 0})=\frac{1}{2q-1}.
\label{cxgebdeq1_02}
\end{equation}
Then according to Eq.~(\ref{candfgenerating}), for $d=1$ the probability to return to the starting point, given that the process is biased (i.e. $q>1/2$), is
\begin{equation}
Q_0=2(1-q).
\label{qzerodeq1}
\end{equation}
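The pole bookkeeping above can be checked numerically. The sketch below (not part of the original calculation) locates the roots of $-zqy^2+y-z(1-q)$ at $z=1$, evaluates the residue to recover ${\hat c}_0({\bf 0})=1/(2q-1)$, and then obtains $Q_0=2(1-q)$ assuming the standard renewal relation ${\hat f}_z({\bf 0})=1-1/{\hat c}_z({\bf 0})$:

```python
# Sketch: pole structure behind Eqs. (cxgebdeq1_02) and (qzerodeq1) at z = 1.
import numpy as np

q = 0.7
# roots of the denominator -q y^2 + y - (1-q): y = 1 and y = (1-q)/q
roots = sorted(np.roots([-q, 1.0, -(1.0 - q)]).real)

y0 = (1.0 - q) / q                       # the pole inside |y| = 1 for q > 1/2
residue = 1.0 / (-2.0 * q * y0 + 1.0)    # 1 / (derivative of denominator at y0)
c_hat = residue                          # = 1/(2q - 1), Eq. (cxgebdeq1_02)
Q0 = 1.0 - 1.0 / c_hat                   # = 2(1 - q),   Eq. (qzerodeq1),
                                         # assuming f_hat = 1 - 1/c_hat
print(roots, c_hat, Q0)
```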
Finally, according to Eqs.~(\ref{momentexpression},\ref{binom1dmoment},\ref{qzerodeq1}) and Eq.~(\ref{lambdaconst}) we obtain that
\begin{equation}
\langle x(t) \rangle \underset{t\to\infty}{\sim}\frac{1}{A\Gamma[1+\alpha]}
\frac{2(1-q)}{(2q-1) Li_{-\alpha}\left[2(1-q)\right]} t^\alpha,
\label{xmoment1d01}
\end{equation}
and when explicitly writing the probability $q=\frac{1}{2}(1+F_0a/2k_BT)$ and using the lattice spacing $a$, $\langle x(t) \rangle$ is transformed into
\begin{equation}
\langle x(t) \rangle \underset{t\to\infty}{\sim}\frac{a}{A\Gamma[1+\alpha]}
\frac{[1-F_0a/2k_BT]}{(F_0a/2k_BT) Li_{-\alpha}\left[1-F_0a/2k_BT\right]} t^\alpha.
\label{xmoment1d01_xa01}
\end{equation}
For small $F_0\to 0$ we use the asymptotic relation $Li_{-\alpha}(1-y)\sim \Gamma[1+\alpha]y^{-\alpha-1}$~\cite{Abramowitz} and obtain the non-linear response to an externally applied small force
\begin{equation}
\langle x(t) \rangle \underset{t\to\infty}{\sim}\frac{a}{A\Gamma[1+\alpha]^2}
\left(\frac{F_0a}{2k_BT} \right)^\alpha t^\alpha.
\label{xmoment1d01_xa02}
\end{equation}
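The passage from Eq.~(\ref{xmoment1d01_xa01}) to Eq.~(\ref{xmoment1d01_xa02}) rests on the asymptotics $Li_{-\alpha}(1-y)\sim\Gamma[1+\alpha]y^{-\alpha-1}$ as $y\to0$. The brute-force sketch below (illustrative only; the polylogarithm series is summed term by term for $\alpha=1/2$) checks this relation:

```python
# Sketch: check Li_{-alpha}(1-y) ~ Gamma(1+alpha) * y^(-alpha-1) for small y.
import math

def polylog_neg(alpha, x, terms=500000):
    """Brute-force series Li_{-alpha}(x) = sum_{n>=1} n^alpha x^n for 0 < x < 1."""
    s, p = 0.0, 1.0
    for n in range(1, terms + 1):
        p *= x                      # p = x^n
        s += n**alpha * p
        if p < 1e-18:               # remaining tail is negligible
            break
    return s

alpha, y = 0.5, 1e-3
exact = polylog_neg(alpha, 1 - y)
approx = math.gamma(1 + alpha) * y**(-alpha - 1)
print(exact, approx)                # relative deviation is O(y)
```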
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{./qtmbiased1dforce02.pdf}
\caption{
Comparison of the numerical simulation of the first moment $\langle x(t)\rangle$ for a 1d QTM in the presence of an external force $F_0$ and the theoretical predictions of Eqs.~(\ref{xmoment1d01_xa01},\ref{xmoment1d01_xa02}). The symbols describe the results of the numerical simulation with $t=10^8$, $\alpha=1/2$, $a=1$ and $A=1$. The thick line is the theory as described by Eq.~(\ref{xmoment1d01_xa01}) without fitting parameters and the dashed line is the prediction of Eq.~(\ref{xmoment1d01_xa02}). For sufficiently small $F_0a/k_BT$ the two theoretical results coincide.
}
\label{biasedforcefig}
\end{figure}
A convincing comparison between the analytical results of Eqs.~(\ref{xmoment1d01_xa01},\ref{xmoment1d01_xa02}) and numerical simulation is presented in Fig.~\ref{biasedforcefig}.
It is clear from the figure that the theoretical results of Eq.~(\ref{xmoment1d01_xa01}) and Eq.~(\ref{xmoment1d01_xa02}) coincide for sufficiently small external force $F_0$.
The behavior of the first moment for small forces, as described by Eq.~(\ref{xmoment1d01_xa02}), does not satisfy linear response.
The response to the external force is anomalous: the force enters the equation with an exponent $\alpha<1$.
This behavior for a $1$-dimensional biased QTM was previously predicted by scaling analysis~\cite{Bouchaud,Bertin02} and also obtained by exploiting Renormalization Group techniques in the limit of low temperatures, i.e., $\alpha\to 0$~\cite{MonthusSec}. The non-linear response is present only due to the strength and the quenched nature of the disorder. For the annealed case with power-law waiting times the response is linear~\cite{Bouchaud}. From the treatment of the $1$-dimensional case it becomes clear that the non-linearity appears solely due to the presence of $\Lambda$ in the denominator of Eq.~(\ref{momentexpression}). According to Eq.~(\ref{lambdaconst}), $\Lambda$ depends on $Q_0$ in a non-trivial fashion. When a small external force is present it alters the probability of return $Q_0$. Of special interest are the cases where $Q_0=1$ when $F_0=0$. Addition of a small $|F_0|$ will decrease $Q_0$ and introduce a non-linear contribution due to the divergence of $\Lambda$ in the limit $Q_0\to1$. For the cases where $Q_0<1$ even in the absence of an external force, addition of a non-zero external force slightly decreases $Q_0$, which translates into a small change in $\Lambda$, and the linear response is not affected. Then, according to the classical result of P{\'o}lya~\cite{Weiss}, the non-linear response is to be expected for $d=1,2$, while for any higher dimension the strong quenched disorder will not alter the linear response to an external field.
\section{Summary}
\label{summary}
The properties of transport in the QTM have been extensively explored over the years. In this manuscript we provided an explicit mapping between the transient cases of the QTM and the widely used CTRW. This result allows one to generalize any result that is known for the CTRW to the case of the QTM. Immediate applications include first-passage properties~\cite{Redner}, super-diffusive fluctuations for anomalous transport~\cite{Lindenberg,Voituriez}, representation by means of fractional equations~\cite{Klafter}, large deviation properties~\cite{BarkaiBurov20} and many more. The non-trivial dependence of the mapping on the probability to return to the origin, $Q_0$, implies that we should expect very important differences between the QTM and the CTRW in low dimensions even when the process is transient, such as the existence of a non-linear response to an externally applied field, which was calculated here for the QTM and is absent for the CTRW.
The developed theoretical framework of double subordination and two-point probabilities has merit of its own. We hope that these methods will help in addressing the recurrent case of the QTM. Finally, we note that the existence of explicit mappings between the QTM and other models of transport in disordered media, such as the barrier model~\cite{Sollich}, may allow one to address the general case of transport in a random-potential landscape~\cite{SokolovCamb}.
{\bf Acknowledgments:} This work was supported by the Pazy foundation grant 61139927. I thank D.A. Kessler for fruitful discussions.
\section{Appendix}
\subsection{Additional terms of ${\hat{\psi}}(u)$}
\label{sbetaproof}
In Section~\ref{loctime} it was shown that when the expansion of ${\hat \psi}(u)$ is of the form ${\hat \psi}(u)\sim 1-Au^\alpha$, Eq.~(\ref{etalaplacefnl}) holds.
Here we show that additional terms in the expansion, i.e. ${\hat \psi}(u)\sim 1-Au^\alpha+Bu^\beta$ with $\beta>\alpha$, will not change this equation when $S_\alpha \to \infty$.
In such a case
\begin{equation}
\langle e^{-u\eta}\rangle = \displaystyle \prod_{\bf x} \left( 1-\frac{n_{\bf x}^\alpha}{S_\alpha}Au^\alpha +\frac{n_{\bf x}^\beta}{S_\alpha^{\beta/\alpha}}Bu^\beta\right)
\label{betprooffull01}
\end{equation}
and the multiplication will produce the terms mentioned in Sec.~\ref{loctime} and also terms of the form
$\sum_{\bf x}n_{\bf x}^\beta B u^\beta/S_\alpha ^{\beta/\alpha}$,
$\sum_{\bf x}\sum_{\bf x'}n_{\bf x}^\alpha n_{\bf x'}^\beta A B u^{\alpha+\beta}/S_\alpha^{1+\beta/\alpha}$, $\sum_{\bf x}\sum_{\bf x'}n_{\bf x}^\beta n_{\bf x'}^\beta B^2 u^{2\beta}/S_\alpha^{2\beta/\alpha}$ etc.
Since $\sum_{\bf x} n_{\bf x}^\beta=S_\beta$, the behavior of the term $\sum_{\bf x}n_{\bf x}^\beta B u^\beta/S_\alpha ^{\beta/\alpha}$ is dictated by the ratio $S_\beta/S_\alpha^{\beta/\alpha}$. For the transient case, i.e., in the presence of bias or for $d>2$, we have shown in Sec.~\ref{salphaSec} that ${\overline S_\alpha}\sim \Lambda N$ when $N\to\infty$.
This means that in the limit of many jumps, $N\to\infty$, the ratio $S_\beta/S_\alpha^{\beta/\alpha}$ decays like $N^{-\frac{\beta}{\alpha}+1}$ ($\beta>\alpha$).
Therefore, all the terms that are not of the form $\left(\sum_{\bf x} \frac{n_{\bf x}^\alpha}{S_\alpha} A u^\alpha\right)^j$ will decay to $0$ in the $N\to\infty$ limit.
We can then state that only the first two terms in the expansion of ${\hat \psi}(u)$ ($1-Au^\alpha$) are needed.
\subsection{Generating functions of two-point probabilities}
\label{twopintgen}
In Sec.~\ref{secondsalpha} three two-point probabilities were crucial for the behavior of $\beta_N({\bf x},k_1;{\bf x'},k_2)$ : {\bf I} $f_N({\bf x},1;{\bf x'},0)$, {\bf II} $M_N({\bf x},{\bf x'})$ and {\bf III} $T_N({\bf x},{\bf x'})$.
The probability $f_N({\bf x},1;{\bf x'},0)$ is the probability to start at point ${\bf 0}$ and after $N$ steps to reach the point ${\bf x}$ for the first time, without visiting ${\bf x'}$ even once. So from all the possibilities to reach ${\bf x}$ for the first time after $N$ steps we must subtract those where the point ${\bf x'}$ was visited at least once (before reaching ${\bf x}$), i.e.
\begin{equation}
f_N({\bf x},1;{\bf x'},0)=f_N({\bf x}) - \sum_{l=0}^N f_l({\bf x'},1;{\bf x},0)f_{N-l}({\bf x-x'}),
\label{app_fngen01}
\end{equation}
where $f_N({\bf x})$ is the first-passage probability defined in Eq.~(\ref{candfgenerating}). The translational invariance of the lattice was utilized. According to Eq.~(\ref{app_fngen01}) the $z$-transform of $f_N({\bf x},1;{\bf x'},0)$ is
\begin{equation}
{\hat f}_z({\bf x},1;{\bf x'},0)={\hat f}_z({\bf x}) - {\hat f}_z({\bf x'},1;{\bf x},0){\hat f}_z({\bf x-x'}).
\label{app_fngen02}
\end{equation}
By switching the places of ${\bf x}$ and ${\bf x'}$ in Eq.~(\ref{app_fngen01}) and performing a $z$-transform we obtain
\begin{equation}
{\hat f}_z({\bf x'},1;{\bf x},0)={\hat f}_z({\bf x'}) - {\hat f}_z({\bf x},1;{\bf x'},0){\hat f}_z({\bf x'-x}).
\label{app_fngen03}
\end{equation}
Substitution of Eq.~(\ref{app_fngen03}) into Eq.~(\ref{app_fngen02}) leads to an expression for ${\hat f}_z({\bf x},1;{\bf x'},0)$ in terms of a generating function of $f_N({\bf x})$
\begin{equation}
{\hat f}_z({\bf x},1;{\bf x'},0)=
\frac{{\hat f}_z({\bf x})-{\hat f}_z({\bf x'}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}.
\label{app_fngen04}
\end{equation}
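Equation~(\ref{app_fngen04}) is just the solution of the linear pair Eqs.~(\ref{app_fngen02},\ref{app_fngen03}); the sketch below (with arbitrary placeholder values standing in for the four generating functions) confirms that the closed form satisfies both relations:

```python
# Sketch: check that Eq. (app_fngen04) solves the coupled relations
# F = f_z(x) - G * f_z(x-x')   and   G = f_z(x') - F * f_z(x'-x).
import random

random.seed(0)
# placeholder values for f_z(x), f_z(x'), f_z(x-x'), f_z(x'-x)
fx, fxp, fd, fdr = [random.uniform(0.1, 0.9) for _ in range(4)]

F = (fx - fxp * fd) / (1 - fdr * fd)   # closed form, Eq. (app_fngen04)
G = fxp - F * fdr                      # then Eq. (app_fngen03) defines G

print(F, G)
```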
The probability $M_N({\bf x},{\bf x'})$ is the probability to start at ${\bf x}$ and after $N$ steps to reach ${\bf x'}$ for the first time, without returning to ${\bf x}$ on the way. Due to translational invariance of the lattice $M_N({\bf x},{\bf x'})$ is expressible in terms of $f_N({\bf x},1;{\bf x'},0)$, i.e. $M_N({\bf x},{\bf x'})=f_N({\bf x'-x},1;{\bf 0},0)$. Then according to Eq.~(\ref{app_fngen04}) the generating function of $M_N({\bf x},{\bf x'})$ is
\begin{equation}
{\hat M}_z({\bf x},{\bf x'})=
\frac{{\hat f}_z({\bf x'-x})-{\hat f}_z({\bf 0}){\hat f}_z({\bf x'-x})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}.
\label{app_mgen01}
\end{equation}
The probability $T_N({\bf x},{\bf x'})$ is the probability to return to ${\bf x}$ after $N$ steps without visiting ${\bf x'}$ on the way. Once again the translational invariance of the lattice allows us to utilize $f_N({\bf x},1;{\bf x'},0)$, and hence $T_N({\bf x},{\bf x'})=f_{N}({\bf 0},1;{\bf x-x'},0)$. Then according to Eq.~(\ref{app_fngen04}), the generating function of $T_N({\bf x},{\bf x'})$ is provided by
\begin{equation}
{\hat T}_z({\bf x},{\bf x'}) =
\frac{{\hat f}_z({\bf 0})-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}.
\label{app_tgen01}
\end{equation}
\subsection{Properties of $c_N({\bf x})$ and summation over all lattice points}
\label{fouriecxz}
The probability to find the particle at position ${\bf x}$ after $N$ steps (when starting at ${\bf 0}$), $c_N({\bf x})$, is normalized, i.e. $\sum_{\bf x}c_N({\bf x})=1$, where the summation is over all possible lattice points. This leads to the following relation
\begin{equation}
\sum_{\bf x} c_N({\bf x}) e^{i{\bf a\cdot x}}
\underset{{\bf a}\to{\bf 0}}{\longrightarrow}1
\label{afr_single01}
\end{equation}
and consequently for the generating function ${\hat c}_z({\bf x})=\sum_{N=0}^\infty z^N c_N({\bf x})$
\begin{equation}
\sum_{\bf x} {\hat c}_z({\bf x}) e^{i{\bf a\cdot x}}
\underset{{\bf a}\to{\bf 0}}{\longrightarrow}\frac{1}{1-z}.
\label{afr_single02}
\end{equation}
For the single jump probability $p({\bf x})$ the characteristic function is defined as
${\hat p}({\bf a})=\sum_{x_1}\sum_{x_2}\dots\sum_{x_d}p({\bf x})e^{i{\bf a\cdot x}}$, where ${\bf x}=(x_1,x_2,\dots,x_d)$ are all possible single steps on the lattice. Since all the jumps of the RW on the lattice are independent, $\sum_{\bf x} c_N({\bf x})e^{i{\bf a\cdot x}}=\left({\hat p}({\bf a})\right)^N$ and according to Eq.~(\ref{afr_single02})
\begin{equation}
\sum_{\bf x} {\hat c}_z({\bf x}) e^{i{\bf a\cdot x}}=
\frac{1}{1-z{\hat p}({\bf a})}
\underset{{\bf a}\to{\bf 0}}{\longrightarrow}\frac{1}{1-z}.
\label{afr_single03}
\end{equation}
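The key step behind Eq.~(\ref{afr_single03}) is that independence of the jumps gives $\sum_{\bf x} c_N({\bf x})e^{i{\bf a\cdot x}}=\left({\hat p}({\bf a})\right)^N$. A small sketch (illustrative, for a $d=1$ biased walk) checks this by building $c_N$ through repeated convolution:

```python
# Sketch: the characteristic function of c_N(x) equals p_hat(a)^N, checked
# for a biased 1d walk by iterating the single-step convolution N times.
import numpy as np

q, N, a = 0.7, 6, 0.3
step = {+1: q, -1: 1 - q}            # single-step distribution

dist = {0: 1.0}                      # distribution of the position
for _ in range(N):
    new = {}
    for x, w in dist.items():
        for dx, pw in step.items():
            new[x + dx] = new.get(x + dx, 0.0) + w * pw
    dist = new

char = sum(w * np.exp(1j * a * x) for x, w in dist.items())
p_hat = q * np.exp(1j * a) + (1 - q) * np.exp(-1j * a)
print(char, p_hat**N)                # identical up to rounding
```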
According to Eq.~(\ref{afr_single03}) the double sum $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'})$ is simply
\begin{equation}
\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'})=\underset{{\bf a}\to{\bf 0}}{\lim}\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'})e^{i{\bf a\cdot x}}e^{i{\bf a\cdot x'}}=
\underset{{\bf a}\to{\bf 0}}{\lim}\frac{1}{\left(1-z{\hat p}({\bf a})\right)^2}=\frac{1}{(1-z)^2}.
\label{afr_double01}
\end{equation}
This result is simply extended to the case of
$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x})$. Indeed,
\begin{equation}
\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x})e^{i{\bf a\cdot x}}e^{i{\bf a\cdot x'}}=\sum_{\bf x}{\hat c}_z({\bf x})e^{i2{\bf a\cdot x}}
\sum_{\bf x'}{\hat c}_z({\bf x'-x})e^{i{\bf a\cdot(x'-x)}},
\label{afr_double02}
\end{equation}
due to translational invariance the right hand side of Eq.~(\ref{afr_double02}) equals
$\frac{1}{1-z{\hat p}(2{\bf a})}\frac{1}{1-z{\hat p}({\bf a})}$ and we obtain
\begin{equation}
\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x})e^{i{\bf a\cdot x}}e^{i{\bf a\cdot x'}}=\underset{{\bf a}\to{\bf 0}}{\lim}\frac{1}{1-z{\hat p}(2{\bf a})}\frac{1}{1-z{\hat p}({\bf a})}=\frac{1}{(1-z)^2}.
\label{afr_double03}
\end{equation}
Sums of terms of the form ${\hat c}_z({\bf x'}){\hat c}_z({\bf x-x'})$ produce a similar result. Generally speaking, when the arguments of ${\hat c}_z(\dots){\hat c}_z(\dots)$ cover all possible points $({\bf x},{\bf x'})$ of the $2d$ lattice, the double summation will provide the result $1/(1-z)^2$.
We turn now to calculation of sums of the form
$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x})
{\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$. For this case the behavior of $\sum_{\bf x'}{\hat c}_z({\bf x'-x}){\hat c}_z({\bf x -x'})e^{i{\bf a\cdot x'}}$ must be inspected.
According to the convolution theorem
\begin{equation}
\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})e^{i{\bf a\cdot x'}}=\left(\frac{1}{2\pi}\right)^d
\int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi}
\frac{1}{1-z{\hat p}({\bf b})}\frac{1}{1-z{\hat p}({\bf b-a})}d^d{\bf b},
\label{afr_triple01}
\end{equation}
where $d^d{\bf b}$ is $db_1\,db_2\dots db_d$. When the ${\bf a}\to{\bf 0}$ limit is taken, the integrand on the right hand side of Eq.~(\ref{afr_triple01}) is simply $1\big/(1-z{\hat p}({\bf b}))^2$.
Moreover, the asymptotic limit of $N\to\infty$ is translated as the $z\to 1$ limit in the $z$ space. In this limit the main contribution to the integral in Eq.~(\ref{afr_triple01}) is from the values of ${\bf b}$ that are in the vicinity of ${\bf 0}$, since ${\hat p}({\bf 0})=1$ and the integrand converges to $1/(1-z)^2$. We concentrate on two types of ${\hat p}({\bf b})$ expansions in the vicinity of ${\bf b = 0}$. The first type is a linear case
\begin{equation}
{\hat p}({\bf b})\sim 1+i{\bf b \cdot B}\qquad {\bf b}\to 0.
\label{afr_tripexp01}
\end{equation}
This is the case of a RW with a bias in the ${\bf B}$ direction. Then
\begin{equation}
\left(\frac{1}{2\pi}\right)^d
\int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi}
\frac{1}{\left(1-z{\hat p}({\bf b})\right)^2}d^d{\bf b}
\underset{z\to 1}{\sim}
\left(\frac{1}{2\pi}\right)^d
\int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi}
\frac{1}{\left(1-z(1+i{\bf b\cdot B})\right)^2}d^d{\bf b},
\label{afr_tripexp01int01}
\end{equation}
and since $1\big/{\left(1-z(1+i{\bf b\cdot B}) \right)^2}=(1-z)^{-2}\left[1+i\frac{z}{1-z}{\bf b\cdot B}\right]^2\big/\left[1+\frac{z^2}{(1-z)^2}({\bf b\cdot B})^2\right]^2$
we obtain for Eq.~(\ref{afr_tripexp01int01}) (after making $\frac{z}{1-z}{\bf b}={\bf b'}$ substitution)
\begin{equation}
\left(1-z\right)^{d-2}
\left(\frac{1}{2\pi}\right)^d \int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}}\int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}}\dots\int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}}
\frac{\left[1+i{\bf b'\cdot B}\right]^2}{\left[1+({\bf b'\cdot B})^2\right]^2}d^d{\bf b'}.
\label{afr_tripexp01int02}
\end{equation}
We see that in the $z\to 1$ limit the $z$ dependence arrives from the $(1-z)^{d-2}$ pre-factor and the fact that the range of integration diverges as $1/(1-z)$. For $d=1$ extra caution is needed since the pre-factor $1/(1-z)$ diverges while the integral $\int_{-\infty}^{\infty}\left[1+ib'B\right]^2\big/\left[1+(b'B)^2\right]^2db'=0$. Exact calculation of the integral in Eq.~(\ref{afr_tripexp01int02}) for $d=1$ shows that
\begin{equation}
\frac{1}{2\pi(1-z)}\int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}} \frac{[1+ib'B]^2}{\left[1+(b'B)^2\right]^2}d\,b'=\frac{z}{1+z(z-2+B^2\pi^2 z)}\underset{z\to 1}{\longrightarrow}
\frac{1}{B^2\pi^2}
\label{afr_tripexp01d1}
\end{equation}
a constant that does not diverge in the $z\to 1$ limit. This proves that for $d=1$ in the presence of a bias ($B\neq 0$) the sum $\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})$ converges to a constant when $z\to 1$, so the double sum $\sum_{\bf x}\sum_{\bf x'}\dots$ diverges as $1/(1-z)$ (and not as $1/(1-z)^2$) in the $z\to 1$ limit.
For any $d\geq 2$ the pre-factor $(1-z)^{d-2}$ in Eq.~(\ref{afr_tripexp01int02}) does not diverge and the only divergences can come from the range of the integration when $z\to 1$.
Inspection of the function $[1+i \sum_{j=1}^d b'_j B_j]^2\big/\left[1+(\sum_{j=1}^d b'_j B_j)^2\right]^2$ shows that when $|b'_1|\to\infty$ the leading order of this function is $\sim 1/(b'_1B_1+\sum_{j=2}^db'_j B_j)^2$.
Integration over $b'_1$ provides a leading order of $1/(b'_2 B_2+\sum_{j=3}^db'_j B_j)^1$ for $|b'_2|\to\infty$.
The next integration, over $b'_2$, provides a leading order of $\log\left(\sum_{j=3}^d b'_j B_j\right)$ for the other $b'_j$s.
By continuing the integration over all the different $b'_j$ ($d$ integrals in total) we obtain that the integrals in Eq.~(\ref{afr_tripexp01int02}) are diverging as $|(1-z)^{2-d}\log\left(1-z\right)|$ when $z\to 1$.
Then from Eq.~(\ref{afr_tripexp01d1}), Eq.~(\ref{afr_tripexp01int02}) and Eq.~(\ref{afr_triple01}) it is established that
\begin{equation}
\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})
\underset{z\to 1}{\sim}
\left\{
\begin{array}{ll}
\frac{1}{B^2\pi^2} & d=1 \\
|\log\left(1-z\right)| & d\geq2
\end{array}
\right.
\label{afr_triple01fin}
\end{equation}
Finally we have shown that for any dimension of the lattice $d$, when the RW has a bias (i.e. ${\bf B\neq 0}$), the double sum
$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x})
{\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$ is growing as $|\log\left(1-z\right)|/(1-z)$ in the $z\to 1$ limit.
The second type of behavior is the case without bias, i.e.
\begin{equation}
{\hat p}({\bf b})\sim 1-\left({\bf b \cdot B}\right)^2 \qquad {\bf b}\to 0.
\label{afr_tripexp02}
\end{equation}
In a fashion similar to the derivation of Eq.~(\ref{afr_tripexp01int02}), we obtain
\begin{equation}
\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})
\underset{z \to 1}{\longrightarrow}
\left(1-z\right)^{d/2-2}
\left(\frac{1}{2\pi}\right)^d \int_{-\sqrt{\frac{z}{1-z}}\pi}^{\sqrt{\frac{z}{1-z}}\pi}\dots\int_{-\sqrt{\frac{z}{1-z}}\pi}^{\sqrt{\frac{z}{1-z}}\pi}
\frac{1}{\left[1+({\bf b'\cdot B})^2\right]^2}d^d{\bf b'}.
\label{afr_tripexp02int01}
\end{equation}
The integral on the right hand side of Eq.~(\ref{afr_tripexp02int01}) is always positive and the integration coordinates can be transformed into generalized polar coordinates. In this case the only non-constant integration is of the form $\int_0^{\sqrt{\frac{z}{1-z}}\pi|{\bf B}|}r^{d-1}/(1+r^2)^2\,dr$, which diverges as $(1-z)^{2-d/2}$ for $d>4$ (logarithmically for $d=4$) and converges for any $d<4$. Eventually, in the $z\to 1$ limit
\begin{equation}
\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})
\underset{z \to 1}{\sim} \left\{
\begin{array}{ll}
(1-z)^{-3/2} & d=1 \\
(1-z)^{-1} & d=2 \\
(1-z)^{-1/2} & d=3 \\
|\log\left(1-z\right)| & d=4 \\
\mathrm{const.} & d>4
\end{array}
\right.
\label{afr_trip02fin}
\end{equation}
We have shown that for any dimension $d>2$, when the RW has no bias (i.e. ${\bf B= 0}$), the double sum
$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x})
{\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$ grows slower than $1/(1-z)^2$ in the $z\to 1$ limit.
We have proven that for the specific case of $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x})
{\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$ and a transient RW the double sum diverges slower than $1/(1-z)^2$ in the $z\to 1$ limit. This result holds also for any double summation over ${\bf x}$ and ${\bf x'}$ of triple products of the probability densities ${\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x}){\hat c}_z({\bf x'})$ (or any permutation of the positions), again due to the properties of the convolution integrals that lead to Eqs.~(\ref{afr_triple01fin},\ref{afr_trip02fin}). When the double summation is performed over a product of more than three ${\hat c}_z({\bf x})$s the result is equivalent to several convolution integrals. Since each convolution reduces the order of divergence in $1/(1-z)$, additional convolutions will only reduce the divergences that appear in Eqs.~(\ref{afr_triple01fin},\ref{afr_trip02fin}). This means that the results of this section show that {\em any double summation over ${\bf x}$ and ${\bf x'}$ of an $n$-fold product of positional PDFs diverges slower than $1/(1-z)^2$ when $z\to 1$, if the RW is transient}.
\bibliographystyle{apsrev4-1}
\section{Introduction}
The aim of this note is to give a proof of Bott periodicity using Voiculescu's famous example \cite{Voiculescu:1983km} of `almost commuting' unitary matrices that cannot be approximated by exactly commuting unitary matrices. Indeed, in his thesis \cite[Chapter I]{Loring:1985ud} Loring explained how the properties of Voiculescu's example can be seen as arising from Bott periodicity; this note explains how one can go `backwards' and use Loring's computations combined with Atiyah's rotation trick \cite[Section 1]{Atiyah:1968ek} to prove Bott periodicity. Thus the existence of matrices with the properties in Voiculescu's example is in some sense equivalent to Bott periodicity.
A secondary aim is to explain how to interpret the above in terms of Yu's localization algebra \cite{Yu:1997kb} and the Dirac operator on the circle. The localization algebra of a topological space $X$ is a $C^*$-algebra $C_L^*(X)$ with the property that the $K$-theory of $C^*_L(X)$ is canonically isomorphic to the $K$-homology of $X$. We explain how Voiculescu's matrices, and the computations we need to do with them, arise naturally from the Dirac and Bott operators on the circle using the localization algebra. A key ingredient is a new explicit formula for the pairing between $K$-homology and $K$-theory in terms of Loring's machinery.
We do not claim any real technical originality in this note: as will be obvious, our approach to Bott periodicity is heavily dependent on Loring's work in particular. However, we think this approach is particularly attractive and concrete -- it essentially reduces the proof of the Bott periodicity theorem to some formal observations and a finite dimensional matrix computation -- and hope that this exposition is worthwhile from that point of view. We also hope it is interesting insofar as it bridges a gap between different parts of the subject. A secondary motivation comes from possible applications in the setting of controlled $K$-theory; this will be explored elsewhere, however.\\
\textbf{Acknowledgment}\\
Thank you to the anonymous referee for several helpful comments. This work was partly supported by NSF grant DMS 1564281.
\section{Preliminaries and Atiyah's rotation trick}
In this section, we set up notation and recall Atiyah's rotation trick.
Let $S^1:=\{z\in \mathbb{C}\mid |z|=1\}$. Let $S$ denote the $C^*$-algebra $C_0(S^1\setminus \{1\})$, so $S\cong C_0(\mathbb{R})$. For a $C^*$-algebra $A$, let $SA$ denote the suspension $S\otimes A\cong C_0(S^1\setminus \{1\},A)$. Let $b\in K_1(S)$ denote the Bott generator, i.e.\ $b$ is the class of the unitary $z\mapsto z^{-1}$ in the unitization of $S$. Recall moreover that for a unital $C^*$-algebra $A$, the \emph{Bott map} is defined on the class of a projection $p\in M_n(A)$ by
$$
\beta_A:K_0(A)\to K_1(SA), \quad [p] \mapsto [bp+(1-p)],
$$
where the right hand side is understood to be the class in $K_1(SA)$ of the unitary $z\mapsto z^{-1} p+1-p$ in the unitization of $C_0(S^1\setminus \{1\},M_n(A))\cong M_n(SA)$. This identifies with an element of the unitary group of the $n\times n$ matrices over the unitization of $SA$. One checks directly that the above indeed gives a well-defined map $\beta_A$ on $K$-theory that induces a map in the non-unital case in the usual way, and moreover that the collection of maps is natural in $A$ in the sense that for any $*$-homomorphism $\phi:A\to B$, the corresponding diagram
$$
\xymatrix{ K_0(A) \ar[d]^-{\phi_*} \ar[r]^-{\beta_A} & K_1(SA) \ar[d]^-{(\text{id}_S\otimes \phi)_*}\\ K_0(B) \ar[r]^-{\beta_B} & K_1(SB) }
$$
commutes (see for example \cite[Section 11.1.1]{Rordam:2000mz}).
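To make the definition concrete, the following small numerical sketch (not from the paper) checks that for a projection $p$ and $|z|=1$ the loop $z\mapsto z^{-1}p+(1-p)$ is indeed unitary, here for a rank-one projection in $M_3(\mathbb{C})$:

```python
# Sketch: the value of the Bott loop z -> z^{-1} p + (1 - p) is unitary
# whenever p is a projection and |z| = 1.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3) + 1j * rng.normal(size=3)
w /= np.linalg.norm(w)
p = np.outer(w, w.conj())                 # rank-one projection: p = p* = p^2

z = np.exp(0.7j)                          # a point on the unit circle
u = (1 / z) * p + (np.eye(3) - p)         # value of the Bott loop at z
print(np.linalg.norm(u @ u.conj().T - np.eye(3)))  # ~ 0
```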
\begin{theorem}[Bott periodicity theorem]
For every $C^*$-algebra $A$, $\beta_A$ is an isomorphism.
\end{theorem}
An important ingredient in our approach to this will be \emph{Atiyah's rotation trick} \cite[Section 1]{Atiyah:1968ek}: this allows one to reduce the proof of Bott periodicity to constructing a homomorphism $\alpha_A:K_1(SA)\to K_0(A)$ for each $C^*$-algebra $A$, so that the collection $\{\alpha_A\mid A\text{ a $C^*$-algebra}\}$ satisfies two natural axioms. As well as the fact that it simplifies the proof of Bott periodicity, a key virtue of the rotation trick (as already pointed out in \cite[Section 7]{Atiyah:1968ek}) is that the axioms satisfied by the $\alpha_A$ are easily checked for several different natural analytic and geometric constructions of a collection of homomorphisms $\alpha_A$; this paper essentially codifies the observation that Voiculescu's almost commuting matrices give yet another way of constructing an appropriate family $\{\alpha_A\}$.
In order to give a precise statement of the axioms, recall from \cite[Section 4.7]{Higson:2000bs} that for unital $C^*$-algebras $A$ and $B$, the formulas
$$
\times:K_0(A)\otimes K_0(B)\to K_0(A\otimes B),\quad [p]\otimes [q] \mapsto [p\otimes q]
$$
and
$$
\times:K_1(A)\otimes K_0(B)\to K_1(A\otimes B), \quad [u]\otimes [p]\mapsto [u\otimes p+1\otimes (1-p)]
$$
induce canonical external products on $K$-theory; moreover, applying these products to unitizations and restricting, these extend to well-defined products in the non-unital case too.
Here then is a precise statement of the reduction that Atiyah's rotation trick allows: see the exposition in \cite[Section 4.9]{Higson:2000bs} for a proof of the result as stated here.
\begin{proposition}\label{atiyah rot}
Assume that for each $C^*$-algebra $A$ there is a homomorphism $\alpha_A:K_1(SA)\to K_0(A)$ satisfying the following conditions.
\begin{enumerate}[(i)]
\item $\alpha_\mathbb{C}(b)=1$.
\item For any $C^*$-algebras $A$ and $B$, the diagram below
$$
\xymatrix{ K_1(SA)\otimes K_0(B) \ar[d]^-{\alpha_A\otimes 1} \ar[r]^-\times & K_1(S(A\otimes B)) \ar[d]^-{\alpha_{A\otimes B}} \\ K_0(A)\otimes K_0(B) \ar[r]^-{\times} & K_0(A\otimes B)}
$$
commutes; here the horizontal arrows are the canonical external products in $K$-theory discussed above, and we have used the canonical identifications
$$
SA\otimes B=(S\otimes A)\otimes B=S\otimes (A\otimes B)=S(A\otimes B)
$$
to make sense of the top horizontal arrow.
\end{enumerate}
Then $\alpha_A$ and the Bott map $\beta_A$ are mutually inverse for all $C^*$-algebras $A$. \qed
\end{proposition}
\section{Almost commuting matrices}
Our goal in this section is to construct homomorphisms $\alpha_A:K_1(SA)\to K_0(A)$ with the properties in Proposition \ref{atiyah rot} above, and thus prove Bott periodicity. To motivate our construction, we start by recalling Voiculescu's almost commuting matrices.
Let $\{\delta_0,...,\delta_{n-1}\}$ be the canonical orthonormal basis for $\mathbb{C}^n$, and define unitaries by
\begin{equation}\label{voic mat}
u_n:\delta_k\mapsto e^{2\pi i k/n} \delta_k \quad \text{and}\quad v_n:\delta_k\mapsto \delta_{k+1},
\end{equation}
where the `$k+1$' in the subscript above should be interpreted modulo $n$, or in other words $v_n:\delta_{n-1}\mapsto \delta_0$. It is straightforward to check that $\|[u_n,v_n]\|\to 0$ as $n\to\infty$ (the norm here and throughout is the operator norm). Voiculescu \cite{Voiculescu:1983km} proved the following result.
\begin{theorem}\label{voic the}
There exists $\epsilon>0$ such that for all $n$, if $u_n',v_n'$ are $n\times n$ unitary matrices that actually commute, then $\max\{\|u_n-u_n'\|,\|v_n-v_n'\|\}\geq \epsilon$. \qed
\end{theorem}
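In fact the commutator norm can be computed exactly: $[u_n,v_n]$ maps $\delta_k$ to $(e^{2\pi i(k+1)/n}-e^{2\pi ik/n})\delta_{k+1}$, so $\|[u_n,v_n]\|=|e^{2\pi i/n}-1|=2\sin(\pi/n)$. The following numerical sketch (ours, for illustration only) confirms this.

```python
import numpy as np

def voiculescu_pair(n):
    """Voiculescu's almost commuting unitaries on C^n, as in line (voic mat)."""
    k = np.arange(n)
    u = np.diag(np.exp(2j * np.pi * k / n))   # "clock": delta_k -> e^{2 pi i k/n} delta_k
    v = np.roll(np.eye(n), 1, axis=0)         # "shift": delta_k -> delta_{k+1 mod n}
    return u, v

for n in [4, 16, 64, 256]:
    u, v = voiculescu_pair(n)
    comm_norm = np.linalg.norm(u @ v - v @ u, ord=2)
    # the commutator norm is exactly 2 sin(pi/n) = O(1/n)
    assert np.isclose(comm_norm, 2 * np.sin(np.pi / n))
```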
In words, the sequence of pairs $(u_n,v_n)$ cannot be well-approximated by pairs that actually commute. Voiculescu's original proof of this fact uses non-quasidiagonality of the unilateral shift; this is closely connected to $K$-theory and Bott periodicity, but the connection is not obvious from the original proof. A more directly $K$-theoretic argument is due to Loring in his thesis \cite{Loring:1985ud}. Fix smooth functions $f,g,h:S^1\to [0,1]$ with the properties in \cite[pages 10-11]{Loring:1985ud}: the most salient of these are that
$$
f^2+g^2+h^2=f, \quad f(1)=1,\quad g(1)=h(1)=0, \quad \text{and}\quad gh=0.
$$
For a pair of unitaries $u,v$ in a unital $C^*$-algebra $B$, define the \emph{Loring element}
\begin{equation}\label{loring elt}
e(u,v):=\begin{pmatrix} f(u) & g(u)+ h(u)v \\ v^*h(u)+g(u) & 1-f(u)\end{pmatrix},
\end{equation}
which in general is a self-adjoint element of $M_2(B)$. For later use, note that the conditions on $f,g,h$ above imply the following formula, where for brevity we write $e=e(u,v)$, $f=f(u)$, $g=g(u)$, $h=h(u)$,
\begin{equation}\label{e2 form}
e^2=e+\begin{pmatrix} hvg+gv^*h & [f,hv] \\ [v^*h,f] & v^*h^2v-h^2 \end{pmatrix};
\end{equation}
in particular, if $u$ and $v$ happen to commute, this shows that $e$ is an idempotent. Similarly, one can show that if $\|uv-vu\|$ is suitably small, then $\|e^2-e\|$ is also small, so in particular the spectrum of $e$ misses $1/2$: indeed, this is done qualitatively in \cite[Proposition 3.5]{Loring:1985ud}, while a quantitative result for a specific choice of $f$, $g$, and $h$ can be found in \cite[Theorem 3.5]{Loring:2014xw}; the latter could be used to make the conditions on $t$ that are implicit in our results more explicit. Thus if $\chi$ is the characteristic function of $[1/2,\infty)$, then $\chi$ is continuous on the spectrum of $e$, and so $\chi(e)$ is a well-defined projection in $M_2(B)$. Loring shows that if $e_n\in M_{2n}(\mathbb{C})$ is the Loring element associated to the matrices $u_n,v_n\in M_n(\mathbb{C})$ as in line \eqref{voic mat}, then for all suitably large $n$,
$$
\text{rank}(\chi(e_n))-n=1.
$$
However, it is not difficult to see that if $u_n$ and $v_n$ were well-approximated by pairs of actually commuting matrices in the sense that the conclusion of Theorem \ref{voic the} fails, then for all suitably large $n$ one would have that $\text{rank}(\chi(e_n))-n=0$. Thus Loring's work in particular reproves Theorem \ref{voic the}.
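Loring's rank computation can be illustrated numerically. The sketch below is ours (not from Loring's thesis): it uses one concrete choice of $f,g,h$ satisfying the displayed conditions, namely $f=(1+\cos 2\pi x)/2$ with $g,h$ equal to $\sqrt{f-f^2}$ on complementary halves of the circle, builds $e(u_n,v_n)$, and checks that the spectrum misses $1/2$ and that the rank of $\chi(e(u_n,v_n))$ differs from $n$ by exactly one (we do not fix the sign of the difference, which depends on orientation conventions).

```python
import numpy as np

def loring_element(u_angles, v):
    """Loring element e(u, v) of line (loring elt), for u = diag(e^{2 pi i x})."""
    x = np.asarray(u_angles)
    # One concrete choice with f^2 + g^2 + h^2 = f, f(1) = 1, g(1) = h(1) = 0, gh = 0:
    f = (1 + np.cos(2 * np.pi * x)) / 2        # f - f^2 = sin^2(2 pi x) / 4
    s = np.abs(np.sin(2 * np.pi * x)) / 2      # sqrt(f - f^2)
    g = np.where(x <= 0.5, s, 0.0)             # g supported where x in [0, 1/2]
    h = np.where(x > 0.5, s, 0.0)              # h supported where x in (1/2, 1), so gh = 0
    F, G, H = np.diag(f), np.diag(g), np.diag(h)
    top = np.hstack([F, G + H @ v])
    bot = np.hstack([v.conj().T @ H + G, np.eye(len(x)) - F])
    return np.vstack([top, bot])

n = 200
u_angles = np.arange(n) / n                    # eigenvalue angles of Voiculescu's u_n
v = np.roll(np.eye(n), 1, axis=0)              # Voiculescu's shift matrix
e = loring_element(u_angles, v)
eigs = np.linalg.eigvalsh(e)
assert np.all(np.abs(eigs - 0.5) > 0.1)        # spectrum of e misses 1/2
rank = int(np.sum(eigs > 0.5))                 # rank of chi(e)
assert abs(rank - n) == 1                      # Loring's index obstruction
```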
Now, let us get back to constructing a family of homomorphisms $\alpha_A:K_1(SA)\to K_0(A)$ satisfying the conditions of Proposition \ref{atiyah rot}, and thus to prove Bott periodicity. For this, it will be convenient to have a continuously parametrised version of the sequence $(u_n)$, which we now build.
Let then $\delta_n:S^1\to \mathbb{C}$ be defined by $\delta_n(z)=\frac{1}{\sqrt{2\pi}}z^n$, so the collection $\{\delta_n\}_{n\in \mathbb{Z}}$ is the canonical `Fourier orthonormal basis' for $L^2(S^1)\cong \ell^2(\mathbb{Z})$. For each $t\in [1,\infty)$, define a unitary operator $u_t:L^2(S^1)\to L^2(S^1)$ to be diagonal with respect to this basis, and given by
$$
u_t\delta_n:=\left\{\begin{array}{ll} e^{2\pi i nt^{-1}}\delta_n & 0\leq t^{-1}n \leq 1 \\ \delta_n & \text{otherwise} \end{array}\right..
$$
Thus $u_t$ agrees with the operator of rotation by $2\pi t^{-1}$ radians on $\text{span}\{\delta_n\mid 0\leq n\leq t\}$, and with the identity elsewhere. Let $A$ be a unital $C^*$-algebra faithfully represented on some Hilbert space $H$, and represent $C(S^1)$ on $L^2(S^1)$ by multiplication operators in the canonical way. Represent $SA=\{f\in C(S^1, A)\mid f(1)=0\}$ faithfully on $L^2(S^1,H)$ as multiplication operators. Let $\chi$ be the characteristic function of $\{z\in \mathbb{C}\mid \text{Re}(z)\geq \frac{1}{2}\}$.
Note that the Bott element $b$ acts on $L^2(S^1)$ via the (backwards) bilateral shift. When compressed to the subspace $\text{span}\{\delta_n\mid 0\leq n\leq t\}$ of $L^2(S^1)$, the operators $u_t$ and $b$ are thus slight variants of Voiculescu's almost commuting unitaries from line \eqref{voic mat}.
\begin{lemma}\label{in right place}
Let $A$ be a unital $C^*$-algebra, let $\widetilde{SA}$ be the unitization of its suspension, and let $v\in \widetilde{SA}$ be a unitary operator. Then with notation as in the above discussion:
\begin{enumerate}[(i)]
\item the spectrum of the Loring element $e(u_t\otimes 1_A,v)$ (cf.\ line \eqref{loring elt}) does not contain $1/2$ for all large $t$;
\item the difference
$$
\chi(e(u_t\otimes 1_A,v))-\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
$$
is in $\mathcal{K}(L^2(S^1))\otimes M_{2}(A)$ for all $t$;
\item the function
$$
t\mapsto \chi(e(u_t\otimes 1_A,v))
$$
is operator norm continuous for all large $t$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (i), we first claim that for any $f\in C(S^1)$, $[u_t,f]\to 0$ as $t\to\infty$. Indeed, it suffices to check this for the Bott element $b(z)=z^{-1}$, as this function generates $C(S^1)$ as a $C^*$-algebra. With respect to the orthonormal basis $\{\delta_n:n\in \mathbb{Z}\}$ we have that $b$ acts by
$$
b:\delta_n\mapsto \delta_{n-1},
$$
i.e.\ $b$ is the inverse of the usual bilateral shift. On the other hand, we have that
$$
\|[u_t,b]\|=\|(u_t-bu_tb^*)b\|=\|u_t-bu_tb^*\|,
$$
and one computes directly that $u_t-bu_tb^*$ is a multiplication operator by an element of $\ell^\infty(\mathbb{Z})$ that tends to zero as $t$ tends to infinity, completing the proof of the claim.
It follows from the claim that $[u_t\otimes 1,f\otimes a]\to 0$ as $t\to\infty$ for any $f\in C(S^1)$, and any $a\in A$. Hence $[k(u_t)\otimes 1,a]\to 0$ for any $a\in C(S^1,A)$ and any $k\in C(S^1)$ (as $k(u_t)$ is in the $C^*$-algebra generated by $u_t$). Part (i) follows from this and the formula in line \eqref{e2 form}, plus the fact that $hg=gh=0$.
For part (ii), consider
$$
e(u_t\otimes 1_A,v)=\begin{pmatrix} f(u_t)\otimes1_A & g(u_t)\otimes1_A+(h(u_t)\otimes 1_A)v \\ g(u_t)\otimes1_A+v^*(h(u_t)\otimes 1_A) & (1-f(u_t))\otimes1_A \end{pmatrix}.
$$
As $e$ is norm continuous in the `input unitaries', we may assume that $v\in C(S^1,A)$ is of the form $z\mapsto \sum_{n=-M}^M z^n a_n$ for some finite $M\in \mathbb{N}$. It follows from this, the formula for $u_t$, and the facts that $h(1)=0=g(1)$ and $f(1)=1$, that there exists $N\in \mathbb{N}$ (depending on $M$ and $t$) such that the operator $e(u_t\otimes 1_A,v)$ leaves some subspace of $(L^2(S^1)\otimes H)^{\oplus 2}$ of the form
$$
(\text{span}\{\delta_{-N},...,\delta_N\}\otimes H)^{\oplus 2}
$$
invariant, and moreover that it agrees with the operator $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ on the orthogonal complement of this subspace. It follows from this that
$$
\chi(e(u_t\otimes 1_A,v))-\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
$$
is also zero on the orthogonal complement of
$$
(\text{span}\{\delta_{-N},...,\delta_N\}\otimes H)^{\oplus 2}
$$
and we are done.
For part (iii), it is straightforward to check that the functions $t\mapsto h(u_t)$, $t\mapsto f(u_t)$ and $t\mapsto g(u_t)$ are continuous, as over a compact interval in the $t$ variable, they only involve continuous changes on the span of finitely many of the eigenvectors $\{\delta_n\}_{n\in \mathbb{Z}}$. It follows from this and the formula for $e(u,v)$ that the function
$$
t\mapsto e(u_t\otimes 1_A,v)
$$
is norm continuous. The claim follows from this, the fact that $\chi$ is continuous on the spectrum of $e(u_t\otimes 1_A,v)$ for large $t$, and continuity of the functional calculus in the appropriate sense (see for example \cite[Lemma 1.2.5]{Rordam:2000mz}).
\end{proof}
\begin{corollary}\label{alpha map}
With notation as above, provisionally define
$$
\alpha_A:K_1(SA)\to K_0(A\otimes \mathcal{K})
$$
by the formula, for a unitary $u\in M_n(\widetilde{SA})$,
$$
[u]\mapsto [\chi(e(u_t\otimes 1_{M_n(A)},u))]-\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},
$$
where $t$ is chosen sufficiently large (depending on $u$) so that all the conditions in Lemma \ref{in right place} hold. Then this is a well-defined homomorphism for any $C^*$-algebra $A$.
\end{corollary}
\begin{proof}
This is straightforward to check from the formulas involved together with the universal property of $K_1$ as exposited in \cite[Proposition 8.1.5]{Rordam:2000mz}, for example.
\end{proof}
Abusing notation slightly, we identify $K_0(A\otimes \mathcal{K})$ with $K_0(A)$ via the canonical stabilization isomorphism, and thus treat $\alpha_A$ as a homomorphism from $K_1(SA)$ to $K_0(A)$. To complete our proof of Bott periodicity, it remains to check that these homomorphisms $\alpha_A$ have the properties from Proposition \ref{atiyah rot}. The second of these properties is almost immediate; we leave it to the reader to check.
The first property, that $\alpha_\mathbb{C}(b)=1$, is more substantial, and we give the proof here following computations in Loring's thesis.
\begin{proposition}\label{key computation}
With notation as above, $\alpha_\mathbb{C}(b)=1$.
\end{proposition}
\begin{proof}
We must compute the element
$$
[\chi(e(u_t,b))]-\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \in K_0(\mathcal{K})\cong \mathbb{Z}
$$
for suitably large $t$, and show that it is one. We will work with an integer value $t=N$ for $N$ suitably large. Note that with respect to the canonical basis $\{\delta_n\}_{n\in \mathbb{Z}}$ of $L^2(S^1)$, the element $b$ acts as the (inverse of the) bilateral shift. On the other hand, on this basis $u_N(\delta_n)=\delta_n$ for all $n\not\in (0,N)$. Define
$$
H_N:=\text{span}\{\delta_n\mid 1\leq n \leq N\}.
$$
It follows from the above observations, the fact that $f(1)=1$ and $h(1)=g(1)=0$, and a direct computation, that $H_N^{\oplus 2}$ is an invariant subspace of $L^2(S^1)^{\oplus 2}$ for both $e(u_N,b)$, and for $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. Moreover, these two operators agree on the orthogonal complement $(H_N^{\oplus 2})^\perp$. On $H_N^{\oplus 2}$, $e(u_N,b)$ agrees with the operator $e(u_N,b_N)$, where we abuse notation by writing $u_N$ also for the restriction of $u_N$ to $H_N$, and where $b_N:H_N\to H_N$ is the permutation operator defined by
$$
b_N:\delta_n\mapsto \delta_{n-1~\text{mod $N$}}
$$
(i.e.\ $b_N$ is the inverse cyclic shift of the canonical basis, matching the direction in which $b$ acts). From these computations, we have that if we identify $K_0(\mathcal{K}(L^2(S^1)))\cong \mathbb{Z}$ and $K_0(\mathcal{B}(H_N))\cong \mathbb{Z}$ via the canonical inclusion $\mathcal{B}(H_N)\to \mathcal{K}(L^2(S^1))$ (which induces an isomorphism on $K$-theory) then
$$
[\chi(e(u_N,b))]-\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}= [\chi(e(u_N,b_N))]-\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\in \mathbb{Z}.
$$
Thus we have reduced the proposition (and therefore the proof of Bott periodicity) to a finite-dimensional matrix computation for $N$ large: we must show that if $e_N:=e(u_N,b_N)$, then the trace of the $2N\times 2N$ matrix
$$
\chi(e_N)-\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
$$
is $1$ for all large $N$, or equivalently, that the trace of the $2N\times 2N$ matrix $\chi(e_N)$ is $N+1$ for all large $N$.
This can be computed directly, following Loring. The computation is elementary, although slightly involved; we proceed as follows.\\
\textbf{Step 1:} $\|\chi(e_N)-e_N\|=O(1/N)$ for all $N$.
For notational convenience, we fix $N$, drop the subscript $_N$, and write $f$ for $f(u_N)=f(u)$ and similarly for $g$ and $h$. We have that $|\chi(x)-x|\leq 2|x^2-x|$ for all $x\in \mathbb{R}$, whence from the functional calculus $\|\chi(e)-e\|\leq 2\|e-e^2\|$. Using the formula in line \eqref{e2 form} above, it will thus suffice to show that the norm of
$$
\begin{pmatrix} hbg+gb^*h & [f,hb] \\ [b^*h,f] & b^*h^2b-h^2 \end{pmatrix}
$$
is bounded by $C/N$, where $C>0$ is an absolute constant not depending on $N$. Note that if a function $k:S^1\to \mathbb{C}$ is Lipschitz with Lipschitz constant $\text{Lip}(k)$ , we have that
$$
\|[k,b]\|=\|bkb^*-k\|=\sup_{x\in [0,1]}|k(e^{2\pi ix})-k(e^{2\pi i (x+1/N)})|\leq \frac{2\pi\text{Lip}(k)}{N}.
$$
This, combined with the fact that $\|h\|\leq 1$ implies that
$$
\Big\|\begin{pmatrix} h[b,g]+[g,b^*]h & h[f,b] \\ [b^*,f]h & b^*(h[h,b]+[h,b]h)\end{pmatrix}\Big\|\leq \frac{4\pi}{N}(\text{Lip}(f)+\text{Lip}(g)+\text{Lip}(h))
$$
and we are done with this step.\\
\textbf{Step 2:} $\text{tr}(\chi(e_N))-\text{tr}(3e_N^2-2e_N^3)\to 0$ as $N\to \infty$.
The result of step one says that there is a constant $C>0$ such that the eigenvalues of $e_N$ are all within $C/N$ of either one or zero for all large $N$. The function $p(x)=3x^2-2x^3$ has the property that $p(0)=0$, $p(1)=1$, and $p'(0)=p'(1)=0$. Hence there is a constant $D$ such that the eigenvalues of $p(e_N)$ are all within $D/N^2$ of either $0$ or $1$. It follows that $\|\chi(e_N)-p(e_N)\|\leq D/N^2$. Hence
$$
|\text{tr}(\chi(e_N))-\text{tr}(p(e_N))|=|\text{tr}(\chi(e_N)-p(e_N))|\leq 2N\|\chi(e_N)-p(e_N)\|\leq \frac{2D}{N}.
$$
This tends to zero as $N\to\infty$, completing the proof of step 2.\\
\textbf{Step 3:} $\text{tr}(e_N^2)=N$ for all $N$.
Using the formula in line \eqref{e2 form} (with the same notational conventions used there) and rearranging a little, we get
$$
\text{tr}(e^2)=\text{tr}(e)+\text{tr}(hbg+gb^*h)+\text{tr}(b^*h^2b-h^2).
$$
From the formula for $e$, we get that $\text{tr}(e)=N$. The last two terms are both zero, using the trace property and the fact that $gh=0$.\\
\textbf{Step 4:} $\text{tr}(e_N^3)-(N-\frac{1}{2})\to 0$ as $N\to\infty$.
Again, we use the formula in line \eqref{e2 form}, multiplied by $e=e_N$ to see that
$$
\text{tr}(e^3)=\text{tr}(e^2)+\text{tr}\Big(e\begin{pmatrix} hbg+gb^*h & [f,hb] \\ [b^*h,f] & b^*h^2b-h^2 \end{pmatrix}\Big).
$$
The first term is $N$ using step 3. Multiplying the second term out, simplifying using the trace properties and that $gh=0$, we see that the trace of the second term equals
$$
\text{tr}(3h^2(f-bfb^*))=3\sum_{k=0}^{N-1} h(e^{2\pi ik/N})^2(f(e^{2\pi i k/N})-f(e^{2\pi i(k+1)/N})).
$$
Assuming (as we may) that $f$ is differentiable, this converges as $N$ tends to infinity to
$$
-3\int_0^1 h(x)^2f'(x)dx.
$$
Using that $h=0$ on $[0,1/2]$, and that $h^2=f-f^2$ on $[1/2,1]$ (plus the precise form for $f$ in \cite[pages 10-11]{Loring:1985ud}) we get that
$$
-3\int_{\frac{1}{2}}^1 (f(e^{2\pi ix})-f(e^{2\pi ix})^2)f'(e^{2\pi ix})dx=-3\int_0^1 (\lambda-\lambda^2)\, d\lambda = -\frac{1}{2}.
$$
This completes the proof of step 4.
Combining steps 2, 3, and 4 completes the argument and we are done.
\end{proof}
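The exact identity $\text{tr}(e_N)=\text{tr}(e_N^2)=N$, the half-integer shift in $\text{tr}(e_N^3)$, and the resulting count $\text{tr}(\chi(e_N))-N=\pm1$ can all be observed numerically. The sketch below is our illustration only, not part of the proof: it uses one concrete admissible choice of $f,g,h$, namely $f=(1+\cos 2\pi x)/2$ with $g^2+h^2=f-f^2$ supported on complementary half-circles, and it deliberately hedges the sign of the index, which depends on the orientation convention for the shift.

```python
import numpy as np

N = 400
x = np.arange(1, N + 1) / N                 # eigenvalue angles k/N of u_N on H_N
f = (1 + np.cos(2 * np.pi * x)) / 2         # admissible: f(1) = 1, f - f^2 = sin^2(2 pi x)/4
s = np.abs(np.sin(2 * np.pi * x)) / 2       # sqrt(f - f^2)
g = np.where(x <= 0.5, s, 0.0)              # g supported in [0, 1/2], so gh = 0
h = np.where(x > 0.5, s, 0.0)
b = np.roll(np.eye(N), -1, axis=0)          # cyclic shift delta_k -> delta_{k-1 mod N}
F, G, H = np.diag(f), np.diag(g), np.diag(h)
e = np.block([[F, G + H @ b], [b.T @ H + G, np.eye(N) - F]])

assert np.isclose(np.trace(e), N)           # tr(e_N) = N exactly
assert np.isclose(np.trace(e @ e), N)       # Step 3: tr(e_N^2) = N exactly
t3 = np.trace(e @ e @ e)
assert abs(abs(t3 - N) - 0.5) < 0.1         # Step 4: tr(e_N^3) differs from N by 1/2 in the limit
rank = int(np.sum(np.linalg.eigvalsh(e) > 0.5))
assert abs(rank - N) == 1                   # tr(chi(e_N)) = N +/- 1
```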
\section{Connection with the localization algebra}
Homomorphisms $\alpha_A$ with the properties required by Proposition \ref{atiyah rot} are perhaps more usually defined using differential, or Toeplitz, operators associated to the circle. There is a direct connection between this picture and our unitaries $u_t$, which we now explain; we will not give complete proofs here, as this would substantially increase the length of the paper, but we at least explain the key ideas and connections.
The following definition is inspired by work of Yu \cite{Yu:1997kb} in the case that $A$ is commutative. It agrees with Yu's original definition for unital commutative $C^*$-algebras.
\begin{definition}\label{loc alg}
Let $A$ be a $C^*$-algebra, and assume that $A$ is represented nondegenerately and essentially\footnote{This means that no non-zero element of $A$ acts as a compact operator, so in particular, such a representation is faithful.} on some Hilbert space $H$. Let $C_{ub}([1,\infty),\mathcal{B}(H))$ denote the $C^*$-algebra of bounded uniformly continuous functions from $[1,\infty)$ to $\mathcal{B}(H)$. The \emph{localization algebra} of $A$ is defined to be
$$
C_L^*(A):=\left\{ \begin{array}{l|l} f\in C_{ub}([1,\infty),\mathcal{B}(H)) & f(t)a\in \mathcal{K}(H) \text{ for all }t\in [1,\infty),a\in A \\ &\text{ and } \|[f(t),a]\|\to 0 \text{ for all } a\in A\end{array}\right\}.
$$
If $A=C_0(X)$ is commutative, we will write $C_L^*(X)$ instead of $C^*_L(C_0(X))$.
\end{definition}
The localization algebra of a $C^*$-algebra $A$ does not depend on the choice of essential representation $H$ up to non-canonical isomorphism, and its $K$-theory does not depend on $H$ up to canonical isomorphism (these remarks follow from \cite[Theorem 2.7]{Dadarlat:2016qc}); thus we say `the' localization algebra of $A$, even though this involves a slight abuse of terminology. Moreover, building on work of Yu \cite{Yu:1997kb} and Qiao-Roe \cite{Qiao:2010fk} in the commutative case, \cite[Theorem 4.5]{Dadarlat:2016qc} gives a canonical isomorphism
$$
K^*(A)\to K_*(C^*_L(A))
$$
from the $K$-homology groups of $A$ to the $K$-theory groups of $C^*_L(A)$ (at least when $A$ is separable).
One can define a pairing
$$
K_i(C_L^*(A))\otimes K_j(A)\to \mathbb{Z}
$$
between the $K$-theory of the localization algebra (i.e.\ $K$-homology) and the $K$-theory of a $C^*$-algebra $A$. The most complicated case, and the one relevant to the current discussion, occurs when $i=j=1$, so let us focus on this. Let $(u_t)_{t\in [1,\infty)}$ be a unitary in the unitization of $C^*_L(A)$, and let $v$ be a unitary in the unitization of $A$ (the construction also works with matrix algebras involved in exactly the same way, but we ignore this for notational simplicity). Let $H$ be the Hilbert space used in the definition of the localization algebra, and for $t\in [1,\infty)$, let
$$
e(u_t,v)\in M_2(\mathcal{B}(H))
$$
be the Loring element of line \eqref{loring elt}. One can check that for all large $t$, the spectrum of $e(u_t,v)$ does not contain $1/2$. Hence if $\chi$ is the characteristic function of $[1/2,\infty)$, we get a difference
$$
\chi(e(u_t,v))-\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\in M_2(\mathcal{B}(H))
$$
of projections for all suitably large $t$. It is moreover not difficult to check that this difference is in $M_2(\mathcal{K}(H))$, and thus defines an element in $K_0(\mathcal{K}(H))\cong \mathbb{Z}$, which does not depend on $t$ for $t$ suitably large. We may thus define
$$
\langle [u_t],[v]\rangle:=[\chi(e(u_t,v))]-\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \in K_0(\mathcal{K})\cong \mathbb{Z}
$$
for any suitably large $t$. One checks that this formula gives a well-defined pairing between $K_1(A)$ and $K^1(A)$. More substantially, one can check that it agrees with the canonical pairing between $K$-homology and $K$-theory (at least up to sign conventions).
Let us go back to the case of interest for Bott periodicity. In terms of elliptic operators, one standard model for the canonical generator of the $K$-homology group $K_1(S^1)$ of the circle is the class of the Dirac operator $D=\frac{-i}{2\pi}\frac{d}{d\theta}$, where $\theta$ is `the' angular coordinate. We consider $D$ as an unbounded operator on $L^2(S^1)$ with domain the smooth functions $C^\infty(S^1)$. Let $\chi:\mathbb{R}\to [-1,1]$ be any continuous function such that $\lim_{t\to\pm\infty}\chi(t)=\pm 1$, and for $t\in [1,\infty)$, define
$$
F_t:=\chi(t^{-1}D)
$$
using the unbounded functional calculus. Concretely, each $F_t$ is diagonal with respect to the canonical orthonormal basis $\{\delta_n\mid n\in \mathbb{Z}\}$, acting by
$$
F_t:\delta_n\mapsto \chi(t^{-1}n)\delta_n~;
$$
this follows as $D:\delta_n\mapsto n\delta_n$ for each $n$.
Using the above concrete description of the eigenspace decomposition of $F_t$, it is not too difficult to show that the function $t\mapsto F_t$ defines an element of the multiplier algebra $M(C^*_L(S^1))$ of the localization algebra $C_L^*(S^1)$. Moreover, one checks similarly that the function $t\mapsto \frac{1}{2}(F_t+1)$ maps to a projection in $M(C_L^*(S^1)) / C_L^*(S^1)$, and thus defines a class $[D]_0\in K_0(M(C_L^*(S^1))/C_L^*(S^1))$. Yu defines the $K$-homology class associated to this operator to be the image $[D]$ of $[D]_0$ under the boundary map
$$
\partial : K_0(M(C_L^*(S^1))/C_L^*(S^1)) \to K_1(C_L^*(S^1))
$$
(all this is part of a very general machine for turning elliptic differential operators into elements of $K_*(C^*_L(S^1))$: see \cite[Chapter 8]{Willett:2010ay}). This boundary map is explicitly computable (compare for example \cite[Section 12.2]{Rordam:2000mz}): the image of $[D]_0$ under this map is the class of the unitary
$$
e^{2\pi i\frac{1}{2}(F_t+1)}=-e^{\pi i \chi(t^{-1}D)}.
$$
Choosing $\chi$ to be the function which is negative one on $(-\infty,0]$, one on $[1,\infty)$, and that satisfies $\chi(t)=2t-1$ on $(0,1)$, we see that
\begin{equation}\label{ut form}
-e^{\pi i \chi(t^{-1}D)}=u_t,
\end{equation}
i.e.\ the canonical generator of the $K$-homology group $K_1(S^1)$ is given precisely by the class of $u_t$ in $K_1(C_L^*(S^1))$.
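Since both sides of line \eqref{ut form} are diagonal in the Fourier basis $\{\delta_n\}$, the identity can be verified eigenvalue-by-eigenvalue; the short numerical check below is ours, for illustration.

```python
import numpy as np

def chi(s):
    """The chosen normalizing function: -1 on (-inf, 0], 2s - 1 on (0, 1), 1 on [1, inf)."""
    return np.clip(2 * s - 1, -1, 1)

def u_t_eigenvalue(n, t):
    """Eigenvalue of u_t on delta_n, per the definition of u_t."""
    return np.exp(2j * np.pi * n / t) if 0 <= n / t <= 1 else 1.0

t = 7.5
for n in range(-20, 21):
    lhs = -np.exp(1j * np.pi * chi(n / t))        # eigenvalue of -e^{pi i chi(t^{-1} D)} on delta_n
    assert np.isclose(lhs, u_t_eigenvalue(n, t))  # agrees with u_t on delta_n
```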
The formula for $\alpha_A$ is then a specialization of the general formula above for the pairing, using the element of line \eqref{ut form} (more precisely, its amplification to $C(S^1,A)$ in an obvious sense). Thus our proof of Bott periodicity using almost commuting matrices fits very naturally into the localization picture of $K$-homology.
\section{Introduction}
\renewcommand{\thepage}{\arabic{page}}
We consider a three-dimensional Riemannian manifold $(M^3,g)$ that admits a smooth nonzero solution $f$ to the equation
\begin{align}
\nabla df=\psi Rc+\phi g, \label{ggrsdef}
\end{align}
where $\psi,\phi$ are given smooth functions of $f$ and $Rc$ is the Ricci tensor of $g$. One can easily see that if $\psi\neq0$ and $f$ is a constant function (we call this case {\it trivial}), then this is nothing but the Einstein manifold equation. In this paper, we study
some well-known classes of spaces with the function $f$ that satisfies (\ref{ggrsdef}).
The first class is when $\psi=-1$ and $\phi=\lambda$, where $\lambda$ is a constant, i.e., $(M^3,g,f)$ is a \textit{gradient Ricci soliton} satisfying the following equation:
\begin{align}
\nabla df+Rc=\lambda g. \label{soldef}
\end{align}
A gradient Ricci soliton is said to be shrinking, steady, or expanding if $\lambda$ is positive, zero, or negative, respectively.
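For orientation, the simplest example of a shrinking soliton is the Gaussian soliton: flat $\mathbb{R}^3$ with $f(x)=\frac{\lambda}{2}|x|^2$ solves (\ref{soldef}), since $\nabla df=\lambda g$ and $Rc=0$ (its Ricci eigenvalues all coincide, so it lies outside the two-distinct-eigenvalue setting studied in this paper). A short symbolic check (ours, for illustration):

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lambda')
coords = [x1, x2, x3]
f = lam * (x1**2 + x2**2 + x3**2) / 2

# On flat R^3 the Christoffel symbols vanish, so the Hessian (nabla df)
# is just the matrix of second partial derivatives, and Rc = 0.
hessian = sp.Matrix(3, 3, lambda i, j: sp.diff(f, coords[i], coords[j]))

# nabla df + Rc = lambda g, i.e. a shrinker when lambda > 0.
assert hessian == lam * sp.eye(3)
```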
The gradient Ricci soliton has significance as a singularity model of Ricci flow. Therefore, considerable effort has been devoted toward understanding its geometric properties and classifying it.
Studies have been conducted under various geometric conditions. Some of these studies are related to this paper, as stated below.
In \cite{BM}, J. Bernstein and T. Mettler classified two-dimensional complete gradient Ricci solitons. In \cite{CCZ}, any three-dimensional complete noncompact non-flat shrinker was
proved to be a quotient of the round cylinder $\mathbb{S}^2\times \mathbb{R}$; see also \cite{Iv3, NW, P}. For three-dimensional steady gradient Ricci solitons, S. Brendle showed in \cite{Br} that a steady gradient Ricci soliton that is non-flat and $\kappa$-noncollapsed is isometric to the Bryant soliton up to scaling; see also \cite{Cao1}.
In higher dimensions, gradient Ricci solitons have been studied when the Weyl tensor satisfies certain conditions. Complete locally conformally flat solitons have been studied in \cite{CC1, CWZ, CM, PW2, Z}. Gradient Ricci solitons with harmonic Weyl tensors have been studied in \cite{FG, Ki, MS, WWW}. Bach-flat cases have been studied in \cite{CCCMM, CC2}.
Next, we consider a \textit{$V$-static space} \cite{CEM} that is a Riemannian manifold $(M,g)$ that admits a smooth function $f$ satisfying
\begin{align}
\nabla df =f(Rc-\frac{R}{n-1}g)-\frac{\kappa}{n-1}g \ \ \ \ {\rm for} \ {\rm a} \ {\rm constant} \ \kappa.\label{miaodefin}
\end{align}
Note that the existence of a nonzero solution to (\ref{miaodefin}) guarantees that the scalar curvature is constant. The (vacuum) static spaces corresponding to $\kappa =0$ have been studied in general relativity since the beginning of the 20th century. Moreover, they have been the focus of a recent study \cite{QY2} related to the positive mass conjecture. In addition, the geometric significance of
this class of spaces for $\kappa \neq 0$ has been well studied in \cite{MT2, MST} and especially in \cite{CEM}.
Harmonic Weyl cases were studied in \cite{ BBR, JJ2}. More related to this article, three-dimensional Ricci-degenerate spaces in the case $\kappa=0$ were already studied in \cite{Le, JEK} by different arguments from this article. Here we mainly study the case $\kappa\neq0$.
Finally, one may consider Riemannian metrics $(M, g)$ of constant scalar curvature that admit a non-constant solution $f$ to
\begin{align}
\nabla df=f(Rc-\frac{R}{n-1}g)+Rc-\frac{R}{n}g. \label{cpedef}
\end{align}
If $M$ is a closed manifold, then $g$ is a critical point of the total scalar curvature functional defined on the space of Riemannian metrics with unit volume and with constant scalar curvature on $M$. By an abuse of terminology, we shall refer to a metric $g$ satisfying (\ref{cpedef}) as \textit{a critical point metric} even when $M$ is not closed. In \cite{Be}, the conjecture that a compact critical point metric is always Einstein was
raised. This conjecture was verified under some geometric conditions, namely locally conformally flat \cite{La}, harmonic curvature \cite{HCY}, and Bach-flat \cite{QY} conditions.
A number of studies have investigated this subject, including \cite[Section 4.F]{Be} and \cite{BR, JJ2, HY}.
In this paper, we study three-dimensional Riemannian manifolds with two distinct Ricci eigenvalues having a nonzero solution to (\ref{ggrsdef}). This paper is the result of efforts devoted toward refining and generalizing \cite{JJ}, which concerns three-dimensional $m$-quasi Einstein manifolds.
We preferentially study the common properties of these spaces for general $\psi(f)$ and $\phi(f)$. We show that when $\nabla f$ is not a Ricci-eigen vector, the metric $g$ must have a specific form. Then, we focus on each of the three specific classes stated above.
We mainly adopt a local approach and state local versions of our theorems below. Versions for complete spaces follow readily, as the three classes of spaces are real analytic. One may also note below that these local spaces have their own geometric significance.
\begin{thm}\label{metricthm1}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{ggrsdef}) with Ricci eigenvalues $\lambda_1\neq\lambda_2=\lambda_3$. Consider an orthonormal Ricci-eigen frame field $\{E_i \ | \ i=1,2,3\}$ in an open subset of $\{\nabla f\neq 0\}$ such that $E_2\perp \nabla f$ and $E_3f\neq0$. Then there exists a local coordinate system $(x_1,x_2,x_3)$ in which the metric $g$ can be written as
\begin{align}
g=g_{11}(x_1,x_3)dx_1^2+g_{33}(x_1,x_3)v(x_3)dx_2^2+g_{33}(x_1,x_3)dx_3^2 \label{startmetric22}
\end{align}
for a function $v(x_3)$ where $E_i=\frac{1}{\sqrt{g_{ii}}}\frac{\partial}{\partial x_i}$.
\end{thm}
Note that one can always choose an orthonormal Ricci-eigen frame field $\{E_i\ | \ i=1,2,3\}$ satisfying $\nabla f\perp E_2$ without loss of generality.
\begin{thm}
Let $(M^3,g,f)$ be a three-dimensional gradient Ricci soliton with Ricci-eigenvalues $\lambda_1\neq\lambda_2=\lambda_3$. Then near each point in the open dense
subset $\{\nabla f\neq 0\}$ of $M$, there exist local coordinates $(x_1,x_2,x_3)$ in which $(g,f)$ can be one of the following:
{\rm (i)} $g=dx_1^2+h(x_1)^2\tilde{g}$ where $\tilde{g}$ has constant curvature. In particular, $g$ is conformally flat.
{\rm (ii)} $g=dx_1^2+\tilde{g}$ where $\tilde{g}$ is a two-dimensional gradient Ricci soliton with potential function $\tilde{f}$. The potential function is $f=\frac{\lambda}{2}x_1^2+\tilde{f}$.
\end{thm}
In \cite{CMM}, it has been shown that a complete three-dimensional Ricci-degenerate simply connected steady gradient Ricci soliton is either isometric to the Riemannian product of $\mathbb{R}$ with a surface or locally conformally flat.
Their argument depends crucially on the completeness assumption of a soliton, which guarantees nonnegativity of sectional curvatures.
Our argument is purely local; nonetheless, our Theorem 1 holds for expanding solitons as well as steady ones.
\begin{thm} \label{th2}
Let $(M^3,g,f)$ be a three-dimensional $V$-static space $(\kappa\neq0)$ with Ricci-eigenvalues $\lambda_1\neq\lambda_2=\lambda_3$. Then near each point in the open dense
subset $\{\nabla f\neq 0\}$ of $M$, there exist local coordinates $(x_1,x_2,x_3)$ in which $(g,f)$ can be one of the following:
{\rm (i)} $g=dx_1^2+h(x_1)^2\tilde{g}$ where $\tilde{g}$ has constant curvature. In particular, $g$ is conformally flat.
{\rm (ii)} $g=\frac{1}{\{q(x_3)+b(x_1)\}^2}\{dx_1^2+(q')^2dx_2^2+dx_3^2\}$ where $q(x_3)$ and $b(x_1)$ satisfy $(q')^2-2mq^3-lq^2+\alpha q+k=0$ and
$(b')^2-2mb^3+lb^2+\alpha b+\frac{R}{6}-k=0$ for constants $m\neq0$, $l,\alpha,k$, respectively. The potential function is $f=\frac{c(x_1)}{q+b}$ where $c$ is a solution to $b''c=b'c'-\frac{\kappa}{2}$.
\end{thm}
The explicit spaces in Theorem \ref{th2} {\rm (ii)} cannot be defined on closed manifolds, and
most of these are not even complete. However, recent studies have effectively provided a geometric meaning for even local $V$-static spaces; see \cite{Yu} and \cite[Theorem 2.3]{CEM}.
The spaces in Theorem 2 are neither warped products nor conformally flat.
\begin{thm} \label{th3}
Let $(M^3,g,f)$ be a three-dimensional critical point metric with Ricci-eigenvalues $\lambda_1\neq\lambda_2=\lambda_3$. Then near each point in the open dense
subset $\{\nabla f\neq 0\}$ of $M$, there exist local coordinates $(x_1,x_2,x_3)$ in which $(g,f)$ can be one of the following:
{\rm (i)} $g=dx_1^2+h(x_1)^2\tilde{g}$ where $\tilde{g}$ has constant curvature. In particular, $g$ is conformally flat.
{\rm (ii)} $g=\frac{1}{\{q(x_3)+b(x_1)\}^2}\{dx_1^2+(q')^2dx_2^2+dx_3^2\}$ where $q$ and $b$ satisfy $(q')^2-2mq^3-lq^2+\alpha q+k=0$ and
$(b')^2-2mb^3+lb^2+\alpha b+\frac{R}{6}-k=0$ for constants $m\neq0$, $l,\alpha,k$, respectively. The potential function is $f=\frac{c(x_1)}{q+b}-1$ where $c$ is a solution to $b''c=b'c'+\frac{R}{6}$.
{\rm (iii)} $g=p^2dx_1^2+(p')^2dx_2^2+dx_3^2$ where $p:=p(x_3)$ satisfies $(p')^2=\beta p^{-1}+\gamma$ for constants $\beta<0$ and $\gamma$. The
potential function is $f=c_1p-1$ where $c_1(x_1)$ satisfies $c_1''+\gamma c_1=0$.
\end{thm}
The spaces in {\rm (iii)} of Theorem \ref{th3} are complete and their scalar curvature equals $0$, whereas most of the spaces in {\rm (ii)} are incomplete.
We presume that even local spaces of Theorem \ref{th3} may have geometric importance, as discussed above.
\bigskip
The gradient vector field $\nabla f $ plays an important role in the study of gradient Ricci solitons, $V$-static spaces, and critical point metrics. If one can show that $\nabla f$ is a Ricci-eigen vector, then the geometric equation becomes quite tractable. For most three-dimensional Ricci-degenerate spaces satisfying (\ref{ggrsdef}), $\nabla f$ is not Ricci-eigen. This fact requires careful and elaborate arguments when using $\nabla f$.
Hence, to prove our theorems, we consider orthonormal frame fields $\{E_i\ | \ i=1,2,3\}$ such that $\lambda_1=R(E_1,E_1)\neq\lambda_2=R(E_2,E_2)=\lambda_3$ and $E_2\perp\nabla f$. With regard to $E_3$, there are two possible cases: (1) $g(E_3,\nabla f)=E_3f=0$, so that $\nabla f$ is Ricci-eigen, and (2) $E_3f\neq0$. When $E_3f=0$, we show that $g$ is a warped product metric through well-known arguments.
When $E_3f\neq0$, it can be shown that there is
a local coordinate system $(x_1,x_2,x_3)$ for each $p\in\{\nabla f\neq0\}$ such that the metric $g$ can be written as $g=g_{11}dx_1^2+g_{22}dx_2^2+g_{33}dx_3^2$, where $E_i=\frac{\partial_i}{\sqrt{g_{ii}}}$. Further, we get a more concrete form of the metric on the basis of the fact that $\lambda_2=\lambda_3$ as well as fundamental properties such as the Jacobi identity.
Next, we explore each of the three classes of spaces mentioned above. We observe that there exists a Codazzi tensor whose eigenspaces coincide with those of the Ricci tensor. From the presence of this Codazzi tensor, we can show that the $\lambda_2$-eigenspace forms an integrable and umbilic distribution. This provides additional geometric information and a sufficiently good form of the metric $g$ to make conclusive arguments.
\bigskip
The remainder of this paper is organized as follows. In Section 2, we discuss some basic properties of Riemannian manifolds satisfying geometric equations. In Section 3, we classify three-dimensional gradient Ricci solitons with $E_3f\neq0$.
In Section 4, we classify $V$-static spaces and critical point metrics with $E_3f\neq0$. Finally, in Section 5, we prove our main theorems by dealing with the case where $E_3f=0$.
\section{Three-dimensional manifolds satisfying a geometric equation}
In this section, we fix our notation and discuss known basic properties of a three-dimensional manifold that admits a smooth function $f$ satisfying (\ref{ggrsdef}).
Our notational convention is as follows: for orthonormal vector fields $E_i$, $i=1, \cdots, n$ on an $n$-dimensional Riemannian manifold, the curvature components are
$R_{ijkl}:=R(E_i, E_j, E_k, E_l) = < \nabla_{E_i} \nabla_{E_j} E_k - \nabla_{E_j} \nabla_{E_i} E_k - \nabla_{[E_i, E_j]} E_k , E_l> $.
If $F$ is a function of one variable, i.e., $F=F(x)$, then we denote $\frac{\partial}{\partial x}F$ by $F'$.
As is well known, the Weyl tensor vanishes on a three-dimensional manifold.
Therefore, (\ref{ggrsdef}) gives us more geometric information than in the higher-dimensional cases $n\geq 4$.
\begin{lemma} \label{basiccodazzi}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{ggrsdef}). Then,
{\rm (i)} $2(d\psi)R+\psi(dR)+4d\phi=2Rc(\nabla \psi-\nabla f,\cdot).$
{\rm (ii)} For the tensor $A_{ijk}=(R_{jk}-\frac{R}{2}g_{jk})(\nabla_i\psi+\nabla_if)+\frac{1}{2}R_{il}(\nabla_l\psi+\nabla_lf)g_{jk}+\psi(\nabla_iR_{jk}-\frac{\nabla_iR}{4}g_{jk})$, we have $A_{ijk}=A_{jik}$.
\end{lemma}
\begin{proof}
We take the divergence of (\ref{ggrsdef}):
\begin{align*}
\psi \cdot div Ric_b=&\nabla^i\nabla_i\nabla_bf-(\nabla^i\psi)R_{ib}-(\nabla^i\phi)g_{ib}\\
=&\nabla_b\Delta f+(\nabla^if)R_{ib}-(\nabla^i\psi)R_{ib}-\nabla_b\phi.
\end{align*}
By $\Delta f=\psi R+3\phi$ and the contracted second Bianchi identity $\nabla R=2divRic$, we obtain (i).
By the Ricci identity,
\begin{align*}
-R_{ijkl}\nabla_l f=&\nabla_i\nabla_j\nabla_kf-\nabla_j\nabla_i\nabla_kf\\
=&(\nabla_i\psi)R_{jk}-(\nabla_j\psi)R_{ik}+\psi(\nabla_iR_{jk}-\nabla_jR_{ik})+(\nabla_i\phi)g_{jk}-(\nabla_j\phi)g_{ik}.
\end{align*}
Furthermore, in dimension three, $R_{ijkl}\nabla_lf=-(R_{ik}\nabla_jf+R_{jl}g_{ik}\nabla_lf-R_{il}g_{jk}\nabla_lf-R_{jk}\nabla_if)+\frac{R}{2}(g_{ik}\nabla_jf-g_{jk}\nabla_if)$.
Thus, we get
\begin{align*}
&R_{jk}(\nabla_i\psi+\nabla_if)+(\nabla_i\phi)g_{jk}+R_{il}\nabla_lfg_{jk}+\psi\nabla_iR_{jk}-\frac{R}{2}\nabla_if g_{jk}\\
=&R_{ik}(\nabla_j\psi+\nabla_jf)+(\nabla_j\phi)g_{ik}+R_{jl}\nabla_lfg_{ik}+\psi\nabla_jR_{ik}-\frac{R}{2}\nabla_jf g_{ik}.
\end{align*}
By substituting $\nabla_i\phi=-\frac{\psi}{4}\nabla_iR-\frac{\nabla_i\psi}{2}R+\frac{1}{2}R_{il}(\nabla_l\psi-\nabla_lf)$ obtained from (i) in the
above equation, we can get (ii).
\end{proof}
As mentioned in the Introduction, we consider three-dimensional manifolds with two distinct Ricci eigenvalues, say $\lambda_1\neq\lambda_2=\lambda_3$.
We can choose an orthonormal Ricci-eigen frame field $\{E_i\}$ in a neighborhood of each point in $\{\nabla f\neq 0\}$ such that $\lambda_1=R(E_1,E_1)\neq \lambda_2=\lambda_3$.
Without loss of generality, we assume that $E_2\perp \nabla f$, i.e., $E_2f=0$. We shall say that such an orthonormal Ricci-eigen frame field $\{E_i\}$ is {\em adapted}.
\medskip
Let $\{E_i\}$ be an adapted frame field. Since $\psi$ and $\phi$ are functions of $f$, $E_2\psi$ and $E_2\phi$ also vanish; then, so does $E_2R$ by (i) in Lemma \ref{basiccodazzi}. For orthonormal frame fields $\{E_i\}$, we get the following by (ii) in Lemma \ref{basiccodazzi}:
\begin{align}
&\left(\lambda_i+\frac{1}{2}\lambda_j-\frac{R}{2}\right)(\nabla_j\psi+\nabla_jf)+\psi\left(\nabla_jR_{ii}-\nabla_iR_{ji}-\frac{\nabla_jR}{4}\right)=0 \label{ijcoda}\\
&\nabla_iR_{jk}=\nabla_jR_{ik} \label{ijkcoda}
\end{align}
for distinct $i,j,k$.
\begin{lemma}\label{nabla}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{ggrsdef}) with Ricci eigenvalues $\lambda_1\neq\lambda_2=\lambda_3$. Consider an adapted frame field $\{E_i\}$ in an open subset of $\{\nabla f\neq 0\}$. Suppose $E_3f\neq0$. Then we get the following:
\begin{align*}
& \nabla_{E_1}E_1=\Gamma_{11}^3E_3,\quad \nabla_{E_1}E_2=0,\quad \nabla_{E_1}E_3=-\Gamma_{11}^3E_1\\
& \nabla_{E_2}E_1=\frac{H}{2}E_2,\quad \nabla_{E_2}E_2=-\frac{H}{2}E_1+\Gamma_{22}^3E_3,\quad \nabla_{E_2}E_3=-\Gamma_{22}^3E_2\\
& \nabla_{E_3}E_1=\frac{H}{2}E_3,\quad \nabla_{E_3}E_2=0,\quad \nabla_{E_3}E_3=-\frac{H}{2}E_1
\end{align*}
for functions $\Gamma_{ij}^k$ and $H$.
Moreover, $R_{1221}=-\frac{E_1H}{2}-\Gamma_{11}^3\Gamma_{22}^3-\frac{H^2}{4}$, $R_{1331}=-\frac{E_1H}{2}+E_3\Gamma_{11}^3-(\Gamma_{11}^3)^2-\frac{H^2}{4}$, $ R_{2332}=E_3\Gamma_{22}^3-(\Gamma_{22}^3)^2-\frac{H^2}{4}$.
\end{lemma}
\begin{proof}
Let $\nabla_{E_i}E_j:=\Gamma_{ij}^kE_k$. Using (\ref{ijcoda}) and (\ref{ijkcoda}):
\begin{itemize}
\item When $i=3,j=2$, we get $E_2\lambda_3=0$; since $E_2R=0$, this also gives $E_2\lambda_1=0.$
\item When $i=1,j=2$, we get $E_2\lambda_1=\Gamma_{11}^2(\lambda_1-\lambda_2)$, so $\Gamma_{11}^2=0$.
\item Comparing the case $i=3,j=1$ with $i=2,j=1$, we can see that $\Gamma_{21}^2=\Gamma_{31}^3$ (denoted by $\frac{H}{2}$).
\item When $i=1,j=2,k=3$, we get $\Gamma_{21}^3=0$.
\item When $i=1,j=3,k=2$, we get $\Gamma_{31}^2=0$.
\end{itemize}
From $\nabla df(E_1,E_2)=\nabla df(E_3,E_2)=0$, we get $\Gamma_{12}^3=\Gamma_{32}^3=0$. The
curvature components can be computed directly.
\end{proof}
Several facts can be easily established from the above lemma. First, since $\lambda_2=\lambda_3$, $R_{1221}$ and $R_{1331}$ must
be equal to each other. Thus, we have
\begin{align}
E_3\Gamma_{11}^3=\Gamma_{11}^3(\Gamma_{11}^3-\Gamma_{22}^3). \label{r12211331}
\end{align}
Second, from the Jacobi identity of the Lie bracket, we get
\begin{align}
E_2\Gamma_{11}^3=0,\quad E_1\Gamma_{22}^3+\frac{H}{2}(\Gamma_{22}^3-\Gamma_{11}^3)=0,\quad E_2H=0.
\end{align}
Third,
\begin{align*}
E_2E_1f=(E_1E_2-[E_1,E_2])f=0\\
E_2E_3f=(E_3E_2-[E_3,E_2])f=0.
\end{align*}
Then, by $\nabla df(E_2,E_2)=\frac{H}{2}(E_1f)-\Gamma_{22}^3(E_3f)=\psi \lambda_2+\phi$ and $E_3f\neq0$, we have $E_2\Gamma_{22}^3=0$.
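In more detail, applying $E_2$ to this identity and using $E_2\psi=E_2\phi=E_2\lambda_2=0$, $E_2H=0$, $E_2E_1f=0$, and $E_2E_3f=0$, we obtain
\begin{align*}
0=E_2\{\psi\lambda_2+\phi\}=\frac{H}{2}E_2E_1f-(E_2\Gamma_{22}^3)(E_3f)-\Gamma_{22}^3E_2E_3f=-(E_2\Gamma_{22}^3)(E_3f),
\end{align*}
and the assumption $E_3f\neq0$ forces $E_2\Gamma_{22}^3=0$.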
Finally, let $E_{ij}$ be the distribution generated by $E_i$ and $E_j$. Then, $E_{ij}$ is integrable for each pair of distinct $i,j\in\{1,2,3\}$. Thus, there exists locally a
coordinate system $(x_1,x_2,x_3)$ such that the metric $g$ can be written as
\begin{align}
g=g_{11}dx_1^2+g_{22}dx_2^2+g_{33}dx_3^2,\label{startmetric}
\end{align}
where $E_i=\frac{1}{\sqrt{g_{ii}}}\partial_i$ and $g_{ii}$ are functions of $x_j$ for $j=1,2,3$ \cite{JJ}. In this coordinate system,
\begin{align}
\Gamma_{11}^3=-\frac{E_3g_{11}}{2g_{11}},\quad \Gamma_{22}^3=-\frac{E_3g_{22}}{2g_{22}},\quad H=\frac{E_1g_{33}}{g_{33}}=\frac{E_1g_{22}}{g_{22}}. \label{gammacord}
\end{align}
\begin{lemma}\label{commonmetric}
Under the hypothesis of Lemma \ref{nabla}, there exists locally a coordinate system $(x_1,x_2,x_3)$ in which
$<\nabla_{E_2}E_2,E_1>=<\nabla_{E_3}E_3,E_1>=-\frac{H}{2}$ depends only on $x_1$.
\end{lemma}
\begin{proof}
We start from the metric in (\ref{startmetric}).
First, we show that the leaves of $E_{23}$ are totally umbilic. We denote $\nabla_{\partial_i}\partial_j=:\gamma_{ij}^k\partial_k$. In this coordinate,
\begin{align*}
<\nabla_{\partial_i}\partial_j,E_1>=\sqrt{g_{11}}\gamma_{ij}^1=-\frac{\partial_1g_{ij}}{2\sqrt{g_{11}}}
\end{align*}
for $i,j=2,3$. Since $g_{ij}=0$ for $i\neq j$ and (\ref{gammacord}), we can say that $<\nabla_{\partial_i}\partial_j,E_1>=-\frac{H}{2}g_{ij}$.
Thus, $E_{23}$ is totally umbilic. Further, we can see that $E_3H=0$ by taking the trace of the Codazzi-Mainardi equation \cite{CMM}.
\end{proof}
Now we prove Theorem \ref{metricthm1}.
\begin{pf1}
Computing $\Gamma_{11}^2=<\nabla_{E_1}E_1,E_2>=0$ in the coordinates in (\ref{startmetric}) gives $\Gamma_{11}^2=-\frac{\partial_2g_{11}}{2g_{11}\sqrt{g_{22}}}=0$.
Thus, we get $\partial_2g_{11}=0$. Similarly, by calculating $\Gamma_{32}^3=0$, we get $\partial_2g_{33}=0$. By (\ref{gammacord}), $\frac{\partial_1g_{22}}{g_{22}}=\frac{\partial_1g_{33}}{g_{33}}$.
Thus, $g_{22}=k(x_2,x_3)g_{33}$ for a function $k(x_2,x_3)$. However, note that $\partial_3\{\ln k(x_2,x_3)\}=\frac{\partial_3g_{22}}{g_{22}}-\partial_3(\ln g_{33})=-2\Gamma_{22}^3\sqrt{g_{33}}-\partial_3(\ln g_{33}).$
Then, $\partial_2\partial_3(\ln k)=0$, which means that $k(x_2,x_3)=q(x_2)v(x_3)$ for functions $v$ and $q$. By replacing $x_2$ with a new variable, which we still denote by $x_2$, we can replace $q(x_2)dx_2^2$ with $dx_2^2$. Then we get the result.
\end{pf1}
\begin{lemma}\label{23second}
Under the hypothesis of Lemma \ref{nabla}, the following statements hold:
{\rm (i)} In the local coordinate system in Theorem \ref{metricthm1}, $\partial_3f=c_1(x_1)g_{33}\sqrt{v}$ for a function $c_1\neq0$;
{\rm (ii)} For the metric $g$ in Theorem \ref{metricthm1}, if $\Gamma_{11}^3=0$, then $g_{11}=1$, $g_{33}=e^{\int_c^{x_1}H(u)du}$. Otherwise, $\partial_3g_{11}=c_2(x_1)g_{33}\sqrt{g_{11}\cdot v}$ for a function $c_2\neq 0$.
\end{lemma}
\begin{proof}
Consider the local coordinate system in Theorem \ref{metricthm1}.
From $\nabla df(E_2,E_2)=\nabla df(E_3,E_3)$, we have $E_3E_3f+\Gamma_{22}^3(E_3f)=0$. Since $E_3f\neq 0$, by (\ref{gammacord}), $\frac{\partial_3(E_3f)}{E_3f}-\frac{\partial_3g_{22}}{2g_{22}}=0$; then, we get
$\partial_3f=c_1(x_1)\sqrt{g_{22}g_{33}}=c_1(x_1)g_{33}\sqrt{v}$ for a function $c_1$ by integration.
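In more detail, the preceding relation states that a logarithmic derivative vanishes:
\begin{align*}
\partial_3\ln\left(\frac{\partial_3f}{\sqrt{g_{22}g_{33}}}\right)=\frac{\partial_3(E_3f)}{E_3f}-\frac{\partial_3g_{22}}{2g_{22}}=0,
\end{align*}
where we used $E_3f=\frac{\partial_3f}{\sqrt{g_{33}}}$; hence $\frac{\partial_3f}{\sqrt{g_{22}g_{33}}}$ is independent of $x_3$, and together with $g_{22}=g_{33}v$ this gives $\partial_3f=c_1(x_1)g_{33}\sqrt{v}$.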
Suppose that $\Gamma_{11}^3=0$. By (\ref{gammacord}), $\partial_3g_{11}=0$; thus, $g_{11}=g_{11}(x_1)$. By replacing $g_{11}(x_1)dx_1^2$ with $dx_1^2$, we may write $g=dx_1^2+g_{33}vdx_2^2+g_{33}dx_3^2$.
Then, by (\ref{gammacord}), $H(x_1)=\partial_1(\ln g_{33})$. Thus, $g_{33}=k(x_3)e^{\int_c^{x_1}H(u)du}$ for a function $k$. By changing the variable, the metric $g$ can be written as
\begin{align*}
g= dx_1^2+e^{\int_c^{x_1}H(u)du}vdx_2^2+e^{\int_c^{x_1}H(u)du}dx_3^2.
\end{align*}
Now, suppose that $\Gamma_{11}^3\neq0$. By (\ref{r12211331}) and (\ref{gammacord}), $\partial_3(\ln \Gamma_{11}^3)=-\frac{1}{2}\partial_3(\ln g_{11})+\frac{1}{2}\partial_3(\ln g_{22})$. Thus, $\Gamma_{11}^3=\sqrt{\frac{g_{33}\cdot v}{g_{11}}}\cdot c(x_1)$ for a nonzero function $c$. Then, we can get $\partial_3g_{11}=c_2(x_1)g_{33}\sqrt{g_{11}\cdot v}$ for a nonzero function $c_2$ again by (\ref{gammacord}).
\end{proof}
To analyze the metric obtained from the above lemma in greater detail, we need a Codazzi tensor, as mentioned in the Introduction.
\begin{lemma}\label{existcodazzi}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{ggrsdef}). Then $C:=a Rc+b g$ is a Codazzi tensor if and only if two functions $a$ and $b$ satisfy the following conditions:
\begin{align}
&\frac{\nabla_i a}{a}=\frac{\nabla_i\psi+\nabla_if}{\psi}\label{acond}\\
&\frac{\nabla_ib}{a}=\frac{1}{\psi}\left\{\frac{1}{2}R_{il}(\nabla_l\psi+\nabla_lf)-\frac{R}{2}(\nabla_i\psi+\nabla_if)-\frac{\nabla_iR}{4}\psi\right\}\label{bcond}
\end{align}
\end{lemma}
\begin{proof}
Let $C:=a Rc+bg$ be a Codazzi tensor. Then $0=\nabla_iC_{jk}-\nabla_jC_{ik}=a(\nabla_iR_{jk}-\nabla_jR_{ik})+(\nabla_ia)R_{jk}+(\nabla_ib)g_{jk}-(\nabla_ja)R_{ik}-(\nabla_jb)g_{ik}$. So we have
\begin{align*}
\nabla_iR_{jk}-\nabla_jR_{ik}=\frac{\nabla_ja}{a}R_{ik}-\frac{\nabla_ia}{a}R_{jk}+\frac{\nabla_jb}{a}g_{ik}-\frac{\nabla_ib}{a}g_{jk}.
\end{align*}
By (ii) in Lemma \ref{basiccodazzi},
\begin{align*}
\psi(\nabla_iR_{jk}-\nabla_jR_{ik})=(R_{ik}-\frac{R}{2}g_{ik})(\nabla_j\psi+\nabla_jf)-(R_{jk}-\frac{R}{2}g_{jk})(\nabla_i\psi+\nabla_if)\\
+\frac{1}{2}R_{jl}(\nabla_l\psi+\nabla_lf)g_{ik}
-\frac{1}{2}R_{il}(\nabla_l\psi+\nabla_lf)g_{jk}-\psi(\frac{\nabla_jR}{4}g_{ik}-\frac{\nabla_iR}{4}g_{jk}).
\end{align*}
Therefore, by comparing the coefficients of $Rc$ and $g$ in the above two equations, we get the desired result.
For the converse part, one can easily check that if $a$ and $b$ satisfy (\ref{acond}) and (\ref{bcond}), then $aRc+bg$ is a Codazzi tensor.
\end{proof}
In general, it is not easy to obtain $C$ explicitly. However, in subsequent sections,
we will show that if $(M^3,g,f)$ is one of the manifolds listed in the Introduction, then $C$ can be obtained explicitly.
\begin{lemma}\cite{De} \label{derdlem}
For a Codazzi tensor $C$ on a Riemannian manifold $M$, let $M_C:=\{x\in M\ |$ the number of distinct eigenvalues of $C_x$ is constant in a neighborhood of $x\}$. Then the following hold:
{\rm (i)} $M_C$ is an open dense subset of $M$, and the statements below hold in each connected component of $M_C$.
{\rm (ii)} Given distinct eigenfunctions $\lambda, \mu$ of $C$ and local vector fields $v, u$ such that $C v = \lambda v$, $Cu = \mu u$ with $|u|=1$, it holds that
$ \ \ \ \ \ v(\mu) = (\mu - \lambda) <\nabla_u u, v > $.
{\rm (iii)} For each eigenfunction $\lambda$, the $\lambda$-eigenspace distribution is integrable and its leaves are totally umbilic submanifolds of $M$.
{\rm (iv)} If $\lambda$-eigenspace $V_{\lambda}$ has dimension bigger than one, the eigenfunction $\lambda$ is constant along the leaves of $V_{\lambda}$.
{\rm (v)} Eigenspaces of $C$ form mutually orthogonal differentiable distributions.
\end{lemma}
Statement (ii) of Lemma \ref{derdlem} holds not only for distinct eigenfunctions but also for equal ones, as long as the eigenvectors are orthogonal, i.e., $Cv=\lambda v$, $Cu=\lambda u$, $u\perp v$.
In general, a Riemannian manifold satisfying (\ref{ggrsdef}) is not real analytic, but if $(M,g)$ is one of the spaces stated in the Introduction, i.e., gradient soliton, $V$-static, or critical point metric, then we can show that $(M,g)$ is real analytic in harmonic coordinates; see \cite{HPW} or \cite{CM2}. Thus, if $f$ is not a constant, then $M_C\cap \{\nabla f\neq0\}$ is open and dense in $M$.
\begin{lemma}\label{giftlem}
Let $(M^3,g,f)$ be a three-dimensional real analytic Riemannian manifold satisfying (\ref{ggrsdef}) with $\lambda_1\neq \lambda_2=\lambda_3$. Consider an adapted frame field $\{E_i\}$ in an open subset of $\{\nabla f\neq0\}$.
Suppose that $E_3f\neq0$ and there exists a Codazzi tensor $\mathcal{C}$ whose eigenspaces coincide with the eigenspaces of the Ricci tensor. Let $\mu_i$ denote the eigenvalue of $\mathcal{C}$ corresponding to $E_i$ for $i=1,2,3$. Then, either $\mu_2$ is a constant or $\sqrt{g_{11}}(\mu_2-\mu_1)=c_3(x_1)$ for a positive function $c_3$.
\end{lemma}
\begin{proof}
First, note that $E_3\mu_2=0$ from (ii) in the above lemma.
Lemma \ref{nabla} and Lemma \ref{derdlem} give the following:
\begin{eqnarray} \label{e1f0}
-\frac{H}{2}=<\nabla_{E_2}E_2,E_1>=\frac{E_1\mu_2}{\mu_2-\mu_1}.
\end{eqnarray}
Thus,
\begin{align}
E_j\left(\frac{E_1\mu_2}{\mu_2-\mu_1}\right)=\frac{E_jE_1\mu_2}{\mu_2-\mu_1}+\frac{(E_1\mu_2)(E_j\mu_1)}{(\mu_2-\mu_1)^2}=0 \label{noneigenst1}
\end{align}
for $j=2,3$. However, note that
\begin{align*}
E_jE_1\mu_2=(\nabla_{E_j}E_1-\nabla_{E_1}E_j+E_1E_j)\mu_2=-(\nabla_{E_1}E_j)\mu_2.
\end{align*}
The second equality is due to the fact that $E_j\mu_2=0$ for $j=2,3$. Then, (\ref{noneigenst1}) can be expressed as
\begin{align}
(E_1\mu_2)\{E_j\mu_1+\Gamma_{11}^j(\mu_2-\mu_1)\}=0. \label{codazzigift}
\end{align}
Since $E_3\mu_2=0$, if $E_1\mu_2=0$, then $\mu_2$ is a constant.
Otherwise,
\begin{align*}
0=E_3\mu_1+\Gamma_{11}^3(\mu_2-\mu_1)=\frac{\partial_3\mu_1}{\sqrt{g_{33}}}-\frac{\partial_3g_{11}}{2g_{11}\sqrt{g_{33}}}(\mu_2-\mu_1).
\end{align*}
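Since $\partial_3\mu_2=0$, this can be rewritten with logarithmic derivatives as
\begin{align*}
\partial_3\ln(\mu_2-\mu_1)=-\frac{\partial_3\mu_1}{\mu_2-\mu_1}=-\frac{\partial_3g_{11}}{2g_{11}}=-\partial_3\ln\sqrt{g_{11}}.
\end{align*}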
By integrating, we get $\sqrt{g_{11}}(\mu_2-\mu_1)=c_3(x_1)$ for a positive function $c_3$.
\end{proof}
\section{Gradient Ricci soliton with $E_3f\neq0$}
In this section, we consider gradient Ricci solitons with $E_3f\neq0$, i.e., $\psi=-1$ and $\phi=\lambda$ in (\ref{ggrsdef}). First, we recall
some basic properties of gradient Ricci solitons. Then, we observe that there exists a Codazzi tensor.
\begin{lemma} \label{solitonformulas}
For any gradient Ricci soliton $(M,g,f)$, we have:
\smallskip
{\rm (i)} $\frac{1}{2} dR = R(\nabla f, \cdot ) $, where $R$ on the left-hand side denotes the scalar curvature and $R(\cdot, \cdot)$ is the Ricci tensor.
\smallskip
{\rm (ii)} $R + |\nabla f|^2 - 2\lambda f$ is constant.
\end{lemma}
\begin{lemma}
Let $(M^3,g,f)$ be a three-dimensional gradient Ricci soliton. Then
\begin{align}
\mathcal{T}=e^{-f}(Rc-\frac{R}{2}g)
\end{align}
is a Codazzi tensor.
\end{lemma}
\begin{proof}
This is already shown in \cite{CMM}. One can easily check that $a=e^{-f}$ and $b=-e^{-f}\frac{R}{2}$ satisfy the conditions in Lemma \ref{existcodazzi}.
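In detail, with $\psi=-1$, condition (\ref{acond}) reads
\begin{align*}
\frac{\nabla_ia}{a}=-\nabla_if=\frac{\nabla_i\psi+\nabla_if}{\psi},
\end{align*}
which clearly holds for $a=e^{-f}$; condition (\ref{bcond}) for $b=-e^{-f}\frac{R}{2}$ follows similarly, using $\frac{1}{2}\nabla_iR=R_{il}\nabla_lf$ from Lemma \ref{solitonformulas}.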
\end{proof}
As mentioned earlier, the presence of this Codazzi tensor immediately gives additional information about the geometry of $(M,g,f)$.
By Lemma \ref{derdlem},
\begin{align*}
0=E_3\mu_2=E_3\left\{e^{-f}\left(\lambda_2-\frac{R}{2}\right)\right\}=E_3\left\{e^{-f}\left(-\frac{\lambda_1}{2}\right)\right\}.
\end{align*}
Thus, $E_3\lambda_1=\lambda_1(E_3f)$. In the local coordinate system $(x_1,x_2,x_3)$ defined in Theorem \ref{metricthm1}, it is equivalent to
\begin{align}
\lambda_1=c_4(x_1)e^f \label{solitonlambda1}
\end{align}
for a function $c_4$ by integration.
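Indeed, $E_3\lambda_1=\lambda_1(E_3f)$ states precisely that $\lambda_1e^{-f}$ does not vary in the $x_3$-direction:
\begin{align*}
\partial_3\left(\lambda_1e^{-f}\right)=e^{-f}\left(\partial_3\lambda_1-\lambda_1\partial_3f\right)=\sqrt{g_{33}}\,e^{-f}\left(E_3\lambda_1-\lambda_1E_3f\right)=0.
\end{align*}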
Further, since $E_3R=2\lambda_2(E_3f)$ by Lemma \ref{solitonformulas},
\begin{align}
E_3\lambda_2=\frac{1}{2}(E_3R-E_3\lambda_1)=\left(\lambda_2-\frac{\lambda_1}{2}\right)(E_3f). \label{sole3lambda2}
\end{align}
Let us first consider the case where $\mu_2$ is not a constant.
\begin{lemma}\label{gamma113}
If $\mu_2$ is not a constant, then $E_3f=0$, which contradicts our assumption that $E_3f\neq0$.
\end{lemma}
\begin{proof}
Assuming that $\mu_2$ is not a constant, we have $e^{-f}(\lambda_2-\lambda_1)=\mu_2-\mu_1=\frac{c_3(x_1)}{\sqrt{g_{11}}}$ from Lemma \ref{giftlem}. Thus, $\lambda_2=\left(c_4+\frac{c_3}{\sqrt{g_{11}}}\right)e^{f}$ by (\ref{solitonlambda1}). Suppose that $\Gamma_{11}^3\neq0$. Taking the $\partial_3$-derivative,
\begin{align}
\partial_3\lambda_2=&-\frac{1}{2}c_3(g_{11})^{-\frac{3}{2}}(\partial_3g_{11})e^f+\left(c_4+\frac{c_3}{\sqrt{g_{11}}}\right)e^f(\partial_3f).
\end{align}
However, $\partial_3\lambda_2=\left(\lambda_2-\frac{\lambda_1}{2}\right)\partial_3f$ by (\ref{sole3lambda2}). By the result $\partial_3f=c_1g_{33}\sqrt{v}$ in Lemma \ref{23second},
\begin{align*}
-\frac{1}{2}c_3(g_{11})^{-\frac{3}{2}}c_2g_{33}\sqrt{g_{11}v}e^f+\left(c_4+\frac{c_3}{\sqrt{g_{11}}}\right)e^fc_1g_{33}\sqrt{v} \\
=\left\{\left(c_4+\frac{c_3}{\sqrt{g_{11}}}\right)e^f-\frac{c_4}{2}e^f\right\}c_1g_{33}\sqrt{v}.
\end{align*}
Thus, we get $\frac{c_1c_4}{2}-\frac{c_3c_2}{2g_{11}}=0$. Therefore, if $c_2c_3\neq 0$, then $g_{11}=\frac{c_2c_3}{c_1c_4}$ is a function of $x_1$ only, which means that $\Gamma_{11}^3=-\frac{\partial_3g_{11}}{2g_{11}\sqrt{g_{33}}}=0$.
Thus, $c_2c_3$ must be zero. However, we are then in a contradictory situation where $\mu_1=\mu_2$ or $\Gamma_{11}^3=0$. Therefore, we can see that our assumption that $\Gamma_{11}^3$ is not zero is incorrect.
Now, suppose that $\Gamma_{11}^3=0$. By Lemma \ref{23second}, there exists locally a coordinate system $(x_1,x_2,x_3)$ in which
\begin{align}
g=dx_1^2+e^{\int_c^{x_1}Hdu}v(x_3)dx_2^2+e^{\int_c^{x_1}Hdu}dx_3^2. \label{113vanish}
\end{align}
By Lemma \ref{nabla},
\begin{align*}
c_4(x_1)e^f=\lambda_1=2R_{1221}=-H'(x_1)-\frac{H(x_1)^2}{2}.
\end{align*}
Hence, either $f$ is a function of $x_1$ only or $c_4=0=-H'-\frac{H^2}{2}$, which implies that $\lambda_1=0$. If $\lambda_1=0$, then $\mu_2=e^{-f}(\lambda_2-\frac{R}{2})=e^{-f}
(-\frac{\lambda_1}{2})=0$. However, we assumed that $\mu_2$ is not a constant. Hence, $f$ should depend only on $x_1$.
\end{proof}
Now, we suppose that $\mu_2$ is a constant. Since $\mu_2=e^{-f}(\lambda_2-\frac{R}{2})=e^{-f}(-\frac{\lambda_1}{2})$, we have $\lambda_1=a e^f$ for a constant $a$. From $\frac{E_1\mu_2}
{\mu_2-\mu_1}=-\frac{H}{2}=-\frac{E_1g_{33}}{2g_{33}}$, we obtain $H=0$ and $\partial_1g_{33}=0$. Hence, the metric $g$ can be written as
\begin{align}
g=g_{11}(x_1,x_3)dx_1^2+v(x_3)dx_2^2+dx_3^2.
\end{align}
\begin{lemma} \label{solitonresult}
Let $(M,g,f)$ be a three-dimensional gradient Ricci soliton with $\lambda_1\neq\lambda_2=\lambda_3$. Suppose that $E_3f\neq0$ and $\mu_2$ is
a constant. Then, there exists locally a coordinate system $(x_1,x_2,x_3)$ such that the metric $g$ is
\begin{align}
g=dx_1^2+k'(x_3)^2dx_2^2+dx_3^2, \label{solmetricresult}
\end{align}
where $k$ is a solution of
\begin{align}
k''-\frac{(k')^2}{2}+\lambda k=C
\end{align}
for a constant $C$. Further, the potential function $f=\frac{\lambda}{2}x_1^2+k(x_3)$. In particular, $(\tilde{g},\tilde{f})=(k'(x_3)^2dx_2^2+dx_3^2,k)$ is a two-dimensional gradient
Ricci soliton. Conversely, any metric $g$ and potential function $f$ of the above form satisfy (\ref{soldef}).
\end{lemma}
\begin{proof}
Since $H=0$, the curvature components are as follows:
\begin{align*}
R_{1221}=-\Gamma_{11}^3\Gamma_{22}^3,\quad R_{1331}=E_3\Gamma_{11}^3-(\Gamma_{11}^3)^2,\quad R_{2332}=E_3\Gamma_{22}^3-(\Gamma_{22}^3)^2.
\end{align*}
From $\nabla df(E_2,E_2)=\lambda-\lambda_2$,
\begin{align}
-\Gamma_{22}^3(E_3f)=\lambda -(E_3\Gamma_{22}^3)+(\Gamma_{22}^3)^2+\Gamma_{11}^3\Gamma_{22}^3. \label{soso1}
\end{align}
Suppose that $\lambda_1\neq 0$. Taking the $E_3$-derivative of $\lambda_1=-2\Gamma_{11}^3\Gamma_{22}^3=ae^f$, we get $E_3f=\frac{E_3\Gamma_{11}^3}{\Gamma_{11}^3}+\frac{E_3\Gamma_{22}^3}{\Gamma_{22}^3}$. By (\ref{soso1}), $\lambda+(\Gamma_{22}^3)^2+\Gamma_{11}^3\Gamma_{22}^3+\frac{E_3\Gamma_{11}^3}{\Gamma_{11}^3}\Gamma_{22}^3=0$. Then,
$\lambda+2\Gamma_{11}^3\Gamma_{22}^3=0$ by (\ref{r12211331}). This implies that $f$ is a constant function, which is impossible. Hence, we can
conclude that $\lambda_1=0$. If $\Gamma_{22}^3=0$, then $\lambda_1=\lambda_2=0$, contradicting $\lambda_1\neq\lambda_2$. Hence,
$\Gamma_{11}^3=-\frac{E_3g_{11}}{2g_{11}}$ must be zero, which implies that $g_{11}$ does not depend on $x_3$. Thus, the metric $g$ can now be written as
\begin{align*}
g=dx_1^2+v(x_3)dx_2^2+dx_3^2.
\end{align*}
We can easily see that $\lambda_2=-\frac{v''}{2v}+\frac{1}{4}\left(\frac{v'}{v}\right)^2$. From $\nabla df=-Rc+\lambda g$,
\begin{align}
\partial_1\partial_1f&=\lambda, \label{e1e1}\\
\partial_1\partial_3f&=0, \label{e1e3}\\
\frac{v'}{2v}(\partial_3f)&=\frac{v''}{2v}-\frac{1}{4}\left(\frac{v'}{v}\right)^2+\lambda, \label{e2e22}\\
\partial_3\partial_3f&=\frac{v''}{2v}-\frac{1}{4}\left(\frac{v'}{v}\right)^2+\lambda. \label{e3e3}
\end{align}
From (\ref{e1e3}), $\partial_3f$ depends only on $x_3$. Then, after a coordinate change if necessary, we get $v=(\partial_3f)^2$ by (\ref{e2e22}) and (\ref{e3e3}). Similarly, $f=\frac{\lambda}{2}x_1^2+k(x_3)$ by (\ref{e1e1}) and (\ref{e1e3}). Let $p(x_3):=(\partial_3f)(x_3)$. Then, we
get $p'=\frac{p''}{p}+\lambda$ by (\ref{e2e22}). Let us consider (ii) in Lemma \ref{solitonformulas}. Substituting what we have obtained thus far, we get $-2\frac{p''}{p}+(k')^2-2\lambda k=C$
for a constant $C$. Since $k'=p$ and $\frac{p''}{p}=p'-\lambda$, we finally obtain
\begin{align}
k''-\frac{(k')^2}{2}+\lambda k=C \label{ksol}
\end{align}
for a constant $C$. One can easily check that $(\tilde{g}=k'(x_3)^2dx_2^2+dx_3^2,\tilde{f}=k)$ satisfies $\nabla d\tilde{f}+Rc=\lambda \tilde{g}$ as long as $k$ is a solution to (\ref{ksol}); the converse part also follows by direct computation.
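To see this, note that for $\tilde{g}=k'(x_3)^2dx_2^2+dx_3^2$ and $\tilde{f}=k$, both nontrivial components of $\nabla d\tilde{f}+Rc=\lambda\tilde{g}$ reduce to the single equation
\begin{align*}
k''-\frac{k'''}{k'}=\lambda,
\end{align*}
which is obtained from (\ref{ksol}) by differentiating in $x_3$ and dividing by $k'$.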
\end{proof}
\section{$V$-static and critical point metric with $E_3f\neq0$}
In this section, we consider the $V$-static ($\kappa\neq0$) and critical point metric cases simultaneously. These two kinds of spaces can
be treated through the general equation
\begin{align}
\nabla df=(a_1+f)Rc+\left(a_2-\frac{R}{2}f\right)g \label{generaldef}
\end{align}
for constants $a_1$ and $a_2$. Note that the scalar curvature $R$ is constant in these cases.
\begin{lemma}
Let $(M,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{generaldef}). Then, the $(0,2)$-tensor
\begin{align}
D:=(a_1+f)^2Rc+\left\{\frac{1}{2}|\nabla f|^2-(a_2+a_1R)f-\frac{R}{4}f^2\right\}g
\end{align}
is a Codazzi tensor.
\end{lemma}
\begin{proof}
One can easily check that $a=(a_1+f)^2$ and $b=\frac{1}{2}|\nabla f|^2-(a_2+a_1R)f-\frac{R}{4}f^2$ satisfy the conditions in Lemma \ref{existcodazzi}.
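In detail, with $\psi=a_1+f$, condition (\ref{acond}) reads
\begin{align*}
\frac{\nabla_ia}{a}=\frac{2\nabla_if}{a_1+f}=\frac{\nabla_i\psi+\nabla_if}{\psi},
\end{align*}
which clearly holds for $a=(a_1+f)^2$; condition (\ref{bcond}) then follows from (\ref{generaldef}) together with the constancy of $R$.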
\end{proof}
Recall that, by Theorem \ref{metricthm1}, there exists locally a coordinate system $(x_1,x_2,x_3)$ in which the metric $g$ is as follows:
\begin{align}
g=g_{11}(x_1,x_3)dx_1^2+g_{33}(x_1,x_3)v(x_3)dx_2^2+g_{33}(x_1,x_3)dx_3^2. \label{startmetric1}
\end{align}
\begin{lemma}\label{e3mu2tam}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{generaldef}) with $\lambda_1\neq\lambda_2=\lambda_3$. Consider an adapted frame field $\{E_i\}$ in
an open subset of $\{\nabla f\neq 0\}$. Suppose that $E_3f\neq0$. Then, there exists locally a coordinate system $(x_1,x_2,x_3)$ such that
\begin{align*}
(a_1+f)^3=\frac{c_5(x_1)}{3\lambda_2-R},\quad (3\lambda_2-R)^2=\frac{p_2(x_3)}{g_{33}^3}
\end{align*}
for positive functions $c_5$ and $p_2$.
\end{lemma}
\begin{proof}
We start with the metric $g$ in (\ref{startmetric1}). By the property of the Codazzi tensor described in Lemma \ref{derdlem},
\begin{align*}
0=E_3\mu_2=&E_3\{(a_1+f)^2\lambda_2+\frac{1}{2}|\nabla f|^2-(a_2+a_1R)f-\frac{R}{4}f^2\}\\
=&(E_3f)(3\lambda_2-R)(a_1+f)+(a_1+f)^2(E_3\lambda_2).
\end{align*}
Thus, $\frac{\partial_3f}{a_1+f}=-\frac{\partial_3\lambda_2}{3\lambda_2-R}$, i.e., $\partial_3\ln|a_1+f|=-\frac{1}{3}\partial_3\ln|3\lambda_2-R|$. By integrating, we get $(a_1+f)^3=\frac{c_5(x_1)}{3\lambda_2-R}$ for a positive function $c_5$. Again, by Lemma \ref{derdlem},
\begin{align*}
E_1\mu_2=(\mu_2-\mu_1)<\nabla_{E_2}E_2,E_1>=(a_1+f)^2(3\lambda_2-R)(-\frac{H}{2}).
\end{align*}
However, direct computation gives
\begin{align}
E_1\mu_2=E_1\{(a_1+f)^2\lambda_2+\frac{1}{2}|\nabla f|^2-(a_2+a_1R)f-\frac{R}{4}f^2\}=(a_1+f)^2(E_1\lambda_2). \label{e1mu2dir}
\end{align}
Thus, we have $\frac{E_1\lambda_2}{3\lambda_2-R}=-\frac{H}{2}=-\frac{E_1g_{33}}{2g_{33}}$. By integrating, we get $(3\lambda_2-R)^2=\frac{p_2(x_3)}{g_{33}^3}$ for a positive function $p_2$.
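In more detail, since $R$ is constant, the relation $\frac{E_1\lambda_2}{3\lambda_2-R}=-\frac{E_1g_{33}}{2g_{33}}$ can be written as
\begin{align*}
\frac{1}{3}\partial_1\ln|3\lambda_2-R|=-\frac{1}{2}\partial_1\ln g_{33},
\end{align*}
so that $(3\lambda_2-R)^2g_{33}^3$ is independent of $x_1$.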
\end{proof}
\begin{lemma}\label{metriclemtam}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{generaldef}) with $\lambda_1\neq\lambda_2=\lambda_3$. Consider an adapted frame field $\{E_i\}$ in an open
subset of $\{\nabla f\neq 0\}$. Suppose that $E_3f\neq0$ and $\mu_2$ is not a constant. Then, there exists a local coordinate system in which
\begin{align}
g=\frac{1}{\{q(x_3)+b(x_1)\}^2}\{dx_1^2+(q')^2dx_2^2+dx_3^2\} \label{113notzerometric}
\end{align}
for nonconstant functions $q$ and $b$ such that $b'=-\frac{H}{2}$. The potential function $f=c_8(q+b)^{-1}-a_1$ for a function $c_8$.
\end{lemma}
\begin{proof}
First, suppose that $\Gamma_{11}^3=0$. Then, the metric $g$ can be written as $g=dx_1^2+e^{\int_c^{x_1}H(u)du}vdx_2^2+e^{\int_c^{x_1}H(u)du}dx_3^2$ by Lemma \ref{23second}. Thus, $\lambda_1=-H'-\frac{H^2}{2}$ depends only on $x_1$. Since $R$ is constant, $\lambda_2=\frac{R-\lambda_1}{2}$ is then also a function of $x_1$ only, and this implies that $E_3f=0$ by Lemma \ref{e3mu2tam}. Therefore, $\Gamma_{11}^3$ cannot be identically zero.
Now, we assume that $\Gamma_{11}^3\neq0$. Since we assumed that $\mu_2$ is not a constant, by Lemma \ref{giftlem}, we have
\begin{align}
\sqrt{g_{11}}(\mu_2-\mu_1)=\sqrt{g_{11}}(a_1+f)^2(3\lambda_2-R)=c_3(x_1)\label{findff}
\end{align}
for a positive function $c_3$. Then, we can get $g_{11}=p_3(x_3)c_6(x_1)g_{33}$ for positive functions $p_3$ and $c_6$ by Lemma \ref{e3mu2tam}.
By replacing $c_6\,dx_1^2$ with $dx_1^2$ through a change of the variable $x_1$, we can write the metric $g$ as
\begin{align*}
g=p_3g_{33}dx_1^2+vg_{33}dx_2^2+g_{33}dx_3^2.
\end{align*}
By (\ref{gammacord}), $ H(x_1)=\frac{\partial_1g_{33}}{g_{33}\sqrt{g_{11}}}=\frac{1}{\sqrt{p_3}}g_{33}^{-\frac{3}{2}}\partial_1g_{33}.$
By integrating, we get $g_{33}=\frac{1}{\{\int (-H/2)dx_1+p_4(x_3)\}^2p_3}$ for a function $p_4$. By changing the variable,
\begin{align*}
g=\frac{1}{\{q(x_3)+b(x_1)\}^2}dx_1^2+\frac{v}{(q+b)^2p_3}dx_2^2+\frac{1}{(q+b)^2}dx_3^2
\end{align*}
for functions $q$ and $b$, where $b'=-\frac{H(x_1)}{2}$. Assuming that $\Gamma_{11}^3\neq 0$, (\ref{r12211331}) gives $\partial_3g_{11}=c_7(x_1)\sqrt{g_{11}g_{22}g_{33}}$ as in the proof of Lemma \ref{23second}.
Thus, we can get $q'=C\sqrt{\frac{v}{p_3}}$ for a constant $C$. Hence, the metric $g$ can be written as in (\ref{113notzerometric}). By (\ref{findff}) and Lemma \ref{e3mu2tam}, $c_3(a_1+f)=\sqrt{g_{11}}(a_1+f)^3(3\lambda_2-R)=\sqrt{g_{11}}c_5$. Thus, the potential function $f=\frac{c_8}{q+b}-a_1$ for a function $c_8(x_1)$. Note that if $q'=0$, then $q$ is a constant, which implies that $\Gamma_{11}^3=-\frac{E_3g_{11}}{2g_{11}}=0$. Thus, $q$ cannot be a constant. Further, if $b$ is a constant, then $H=0$, which implies that $\mu_2$ is a constant. Hence, $b$ also cannot be a constant.
\end{proof}
The curvature components in this coordinate system are calculated as follows:
\begin{align*}
&H=-2b',\quad \Gamma_{11}^3=q',\quad \Gamma_{22}^3=q'-\frac{q''}{q'}(q+b),\\
&R_{1221}=(q+b)(q''+b'')-(q')^2-(b')^2,\\
&R_{2332}=2(q+b)q''-\frac{q'''}{q'}(q+b)^2-(q')^2-(b')^2.
\end{align*}
By Lemma \ref{e3mu2tam},
\begin{align*}
\sqrt{p_2}(q+b)^3=\sqrt{p_2}g_{33}^{-\frac{3}{2}}=3\lambda_2-R=c_5(a_1+f)^{-3}=c_5c_8^{-3}(q+b)^3.
\end{align*}
Thus, $\sqrt{p_2}=c_5c_8^{-3}$ is a constant, and the Ricci eigenvalues are
\begin{align*}
\lambda_2=-m(q+b)^3+\frac{R}{3},\quad \lambda_1=2m(q+b)^3+\frac{R}{3}
\end{align*}
for a constant $m\neq0$.
\begin{lemma}\label{cpeode}
The functions $q$ and $b$ in Lemma \ref{metriclemtam} are solutions of the following ODEs, respectively:
\begin{align}
&(q')^2-2mq^3-lq^2+\alpha q+k=0 \label{solq}\\
&(b')^2-2mb^3+lb^2+\alpha b+\frac{R}{6}-k=0 \label{solb}
\end{align}
for constants $m\neq 0,l,\alpha,k$. And $c_8$ satisfies $ b''c_8=b'c_8'+a_2+\frac{a_1R}{2}$. Conversely, any metric $g$ and $f$ of the form in Lemma \ref{metriclemtam} satisfying the above differential equations satisfy (\ref{generaldef}).
\end{lemma}
\begin{proof}
From $\lambda_1=2R_{1221}$ and $\lambda_2=R_{1221}+R_{2332}=\frac{1}{2}\lambda_1+R_{2332}$, we get
\begin{align}
& m(q+b)^3+\frac{R}{6}=(q+b)(q''+b'')-(q')^2-(b')^2. \label{lambda1tam}\\
& 2m(q+b)^3-\frac{R}{6}=\left\{\frac{q'''}{q'}(q+b)-2q''\right\}(q+b)+(q')^2+(b')^2 \label{lambda2tam}
\end{align}
Assigning $(E_1,E_1)$ and $(E_2,E_2)$ to (\ref{generaldef}),
\begin{align}
&2m(q+b)^3-\frac{R}{6}=\left\{\frac{c_8''}{c_8}(q+b)-\frac{c_8'b'}{c_8}-b''-\frac{1}{c_8}\left(a_2+\frac{a_1R}{2}\right)\right\}(q+b)+(q')^2+(b')^2 \label{lambda1tam2}\\
& m(q+b)^3+\frac{R}{6}=\left\{\frac{b'c_8'}{c_8}+q''+\frac{1}{c_8}\left(a_2+\frac{a_1R}{2}\right)\right\}(q+b)-(q')^2-(b')^2 \label{lambda2tam2}
\end{align}
By (\ref{lambda1tam}) and (\ref{lambda2tam2}), we get
\begin{align}
b''c_8=b'c_8'+a_2+\frac{a_1R}{2}\label{relbc8}
\end{align}
By (\ref{lambda2tam}) and (\ref{lambda1tam2}), we get
\begin{align}
\left(\frac{q'''}{q'}-\frac{c_8''}{c_8}\right)(q+b)=2q''-b''-\frac{c_8'b'}{c_8}-\frac{1}{c_8}\left(a_2+\frac{a_1R}{2}\right). \label{ddee}
\end{align}
By (\ref{relbc8}), $\frac{c_8'b'}{c_8}+\frac{1}{c_8}\left(a_2+\frac{a_1R}{2}\right)=b''$ and $\frac{c_8''}{c_8}=\frac{b'''}{b'}$. Thus (\ref{ddee}) becomes
\begin{align}
\left(\frac{q'''}{q'}-\frac{b'''}{b'}\right)(q+b)=2(q''-b'') \label{ddee2}
\end{align}
Adding (\ref{lambda1tam}) and (\ref{lambda2tam}),
\begin{align*}
3m(q+b)^3=&\left(b''-q''+\frac{q'''}{q'}(q+b)\right)(q+b)\\
=&\frac{q'''}{q'}(q+b)^2-\frac{1}{2}\left(\frac{q'''}{q'}-\frac{b'''}{b'}\right)(q+b)^2\quad \because(\ref{ddee2}).
\end{align*}
Since $q=q(x_3)$ and $b=b(x_1)$, we get $-\frac{b'''}{b'}+6mb=\frac{q'''}{q'}-6mq=l$ for a constant $l$. By integrating,
\begin{align}
(q')^2-2mq^3-lq^2+\alpha q+k_1=0,\quad (b')^2-2mb^3+lb^2+\beta b+k_2=0 \label{ddee3}
\end{align}
for constants $\alpha,\beta,k_i$. Putting (\ref{ddee3}) in (\ref{ddee2}) shows that $\alpha=\beta$. Assigning these to (\ref{generaldef}), we get $k_1+k_2=\frac{R}{6}$.
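The integration step can be checked by differentiating: from the first equation of (\ref{ddee3}),
\begin{align*}
2q'q''-6mq^2q'-2lqq'+\alpha q'=0,\quad\text{i.e.}\quad q''=3mq^2+lq-\frac{\alpha}{2},
\end{align*}
so $q'''=(6mq+l)q'$ and $\frac{q'''}{q'}-6mq=l$; the computation for $b$ from the second equation of (\ref{ddee3}) is analogous.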
The converse part
can be easily checked by direct computations.
\end{proof}
\begin{remark}
The metric $g$ in Lemma \ref{metriclemtam} cannot be defined on a compact manifold. For $g$ to be defined on a compact manifold, $\tilde{g}=\frac{(q')^2}{(q+b)^2}dx_2^2+\frac{1}{(q+b)^2}dx_3^2$ for a given $x_1$ would have to be a metric on a sphere.
By $dx_3=\frac{dq}{q'}$ and (\ref{solq}), $\tilde{g}$ becomes $ \tilde{g}=\frac{2mq^3+lq^2-\alpha q-k}{(q+b)^2}dx_2^2+\frac{1}{(q+b)^2(2mq^3+lq^2-\alpha q-k)}dq^2$. Let $r:=\frac{1}{(q+b)}$. Then, $\tilde{g}=f(r)dx_2^2+\frac{1}{f(r)}dr^2$, where $f(r)=(l-6mb)+(lb^2-2mb^3+\alpha b-k)r^2
+(6mb^2-2bl-\alpha)r+\frac{2m}{r}$. Let $dt:=\frac{dr}{\sqrt{f(r)}}$. Then, $t=\int \frac{dr}{\sqrt{f(r)}}$ and the metric $\tilde{g}$ becomes
$dt^2+f(t(r))dx_2^2$. Suppose that $\tilde{g}$ is a metric
on a sphere. This means that there exist two points $a\neq b$ such that $f(a)=f(b)=0$, $f'(a)+f'(b)=0$, and $f'(a),f'(b)\neq0$. However, note that
\begin{align*}
\frac{df}{dt}=(lb^2-2mb^3+\alpha b-k)2r(\frac{dr}{dt})+(6mb^2-2bl-\alpha)(\frac{dr}{dt})-\frac{2m}{r^2}(\frac{dr}{dt}).
\end{align*}
Since $\frac{dr}{dt}=\sqrt{f(r)}$, if $f(a)=0$, then $f'(a)=0$. Thus, $g$ cannot be a compact metric.
\end{remark}
\medskip
Now, we consider the case where $\mu_2$ is a constant. Then $\frac{E_1\mu_2}{\mu_2-\mu_1}=<\nabla_{E_2}E_2,E_1>=- \frac{H}{2}=-\frac{E_1g_{33}}{2g_{33}}=0$, so $E_1g_{33}=0$. Hence, the metric $g$ in Theorem \ref{metricthm1} can be written as
\begin{align*}
g=g_{11}(x_1,x_3)dx_1^2+v(x_3)dx_2^2+dx_3^2.
\end{align*}
\begin{lemma}\label{cpe2233}
Let $(M^3,g,f)$ be a three-dimensional Riemannian manifold satisfying (\ref{generaldef}) with $\lambda_1\neq\lambda_2=\lambda_3$. Suppose that $E_3f\neq0$ and $\mu_2$ is a constant. Then, $(M^3,g,f)$ must be a critical point metric, and there exists a local coordinate system $(x_1,x_2,x_3)$ such that
\begin{align}
g=p^2dx_1^2+(p')^2dx_2^2+dx_3^2,\quad f=c_1p-1,
\end{align}
where $p(x_3)$ satisfies $(p')^2=\beta p^{-1}+\gamma$ for constants $\beta<0$ and $\gamma$, and $c_1(x_1)$ satisfies $c_1''+\gamma c_1=0$. Conversely, any
metrics $g$ and $f$ in the above form satisfy (\ref{cpedef}).
\end{lemma}
\begin{proof}
If $\Gamma_{11}^3 \neq 0$, then $\partial_3g_{11}=c_2\sqrt{g_{11}v}$ by Lemma \ref{23second}. By integrating, $g_{11}=c_2^2\left(\int\frac{\sqrt{v}}{2}dx_3+a(x_1)\right)^2$ for
a function $a$. After rescaling the coordinates $x_1$ and $x_2$ (so that $c_2^2dx_1^2$ becomes $dx_1^2$ and $4dx_2^2$ becomes $dx_2^2$), we may write
\begin{align}
g=(p(x_3)+a(x_1))^2dx_1^2+(p')^2dx_2^2+dx_3^2 \label{ddee33}
\end{align}
for a function $p(x_3)$. In this coordinate, the curvature components are $R_{1221}=-\frac{p''}{p+a}$ and $R_{2332}=-\frac{p'''}{p'}$.
However, note that $E_1\mu_2=0$ means that $\partial_1\lambda_2=0$ by (\ref{e1mu2dir}). Hence, $a(x_1)$ must be a constant function. Therefore, the metric is
\begin{align*}
g=p^2dx_1^2+(p')^2dx_2^2+dx_3^2
\end{align*}
and the Ricci curvatures are $\lambda_1=-2\frac{p''}{p}$, $\lambda_2=-\frac{p''}{p}-\frac{p'''}{p'}$. By $\partial_3f=c_1g_{33}\sqrt{v}$ in Lemma \ref{commonmetric}, $f=c_1(p+c_2)$ for a function $c_2$.
Then,
\begin{align*}
0=\nabla df(E_3,E_1)=E_3E_1f=-\frac{p'}{p^2}(c_1'c_2+c_1c_2').
\end{align*}
Thus, $f=c_1p+\alpha$ for a constant $\alpha$. From $\nabla df(E_1,E_1)$ and $\nabla df(E_2,E_2)$,
\begin{align}
c_1''+c_1(p')^2=-2p''(a_1+c_1p+\alpha)+a_2p-\frac{R}{2}p(c_1p+\alpha) \label{1111}\\
c_1p''=(a_1+c_1p+\alpha)(-\frac{p''}{p}-\frac{p'''}{p'})+a_2-\frac{R}{2}(c_1p+\alpha) \label{2222}.
\end{align}
Taking the $\partial_3$-derivative of (\ref{1111}) and using $R=-4\frac{p''}{p}-2\frac{p'''}{p'}$, we get
$c_1p''=\frac{p''}{p}(a_1+c_1p+\alpha)+\frac{R}{4}a_1+\frac{R}{8}\alpha+\frac{a_2}{4}$. Comparing with (\ref{2222}),
\begin{align}
(a_1+\alpha)p''+(\frac{R}{2}a_1+a_2)p=0\\
Ra_1+3a_2-\frac{R}{2}\alpha=0.
\end{align}
If $a_1+\alpha\neq0$, then $\frac{p''}{p}=\frac{p'''}{p'}=-\frac{\frac{R}{2}a_1+a_2}{a_1+\alpha}$, which implies that $\lambda_1=\lambda_2$, a contradiction. Therefore, $a_1+\alpha=\frac{R}{2}a_1+a_2=0$. These equations cannot be satisfied in $V$-static spaces, but only in critical point spaces under the condition $R=0$. If $R=0$, then $-2\frac{p''}{p}-\frac{p'''}{p'}=0$; hence, $p$ is a solution of $(p')^2=\beta p^{-1}+\gamma$ for constants $\beta<0$ and $\gamma$. By (\ref{1111}), $c_1$ satisfies $c_1''+\gamma c_1=0$.
Now, suppose that $\Gamma_{11}^3=0$. Then, the metric $g$ can be written as
\begin{align*}
g=dx_1^2+p(x_3)^2dx_2^2+dx_3^2.
\end{align*}
From $\nabla df(E_3,E_1)=\partial_3\partial_1f=0$, we have $f=f_1(x_1)+f_3(x_3)$ for functions $f_1(x_1)$ and $f_3(x_3)$. On the other hand, $\nabla df(E_1,E_1)=\partial_1\partial_1f=a_2-\frac{R}{2}f$ implies that $R=0$, because we assumed that $E_3f\neq0$. However, note that if $R=0$, then $\lambda_1=\lambda_2=0$, which is a contradiction.
The converse part can be easily checked.
\end{proof}
\begin{remark}
The metric $g$ in Lemma \ref{cpe2233} cannot be defined on a compact manifold. For the metric $g$ to be defined on a compact manifold, there should exist two points $a\neq b$ such
that $p'(a)=p'(b)=0$ and $p''(a)+p''(b)=0$ \cite{Pe}. If $p'(a)=p'(b)=0$, then $0=(p'(a))^2=\beta p(a)^{-1}+\gamma=\beta p(b)^{-1}+\gamma=(p'(b))^2=0$, which implies that $p(a)=p(b)$. However, then $p''(a)+p''(b)=-\frac{\beta}{2}(\frac{1}{p(a)^2}+\frac{1}{p(b)^2})=-\frac{\beta}{p(a)^2}$ cannot be
zero unless $\beta=0$.
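Here the value of $p''$ used above follows from differentiating $(p')^2=\beta p^{-1}+\gamma$:
\begin{align*}
2p'p''=-\beta p^{-2}p',\quad\text{so}\quad p''=-\frac{\beta}{2p^2}.
\end{align*}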
\end{remark}
\begin{remark}
In \cite{Ko}, Kobayashi studied the behavior of the solution to $k=(r')^2+\frac{2a}{n-2}r^{2-n}+\frac{R}{n(n-1)}r^2$. Our equation in Lemma \ref{cpe2233} corresponds to the case ${\rm (IV.1)}$ in the list of \cite[p.~670]{Ko}.
One can check that the metric $g$ in Lemma \ref{cpe2233} is complete by simple calculus; see \cite[Lemma 4.5]{JJ}.
\end{remark}
\section{Three-dimensional Ricci-degenerate manifolds with $E_3f=0$}
In this section, we deal with the case where $E_3f=0$. Unlike the previous section, this section covers the three spaces simultaneously. First, consider an adapted frame $\{E_i, i=1,2,3\}$. Then $E_3f=0$ implies
that $\nabla f$ is parallel to $E_1$. We may set $\frac{\nabla f}{|\nabla f|}=E_1$. Then we can prove the following lemma by a standard argument; see \cite{CMMR} or \cite{Ki}.
\begin{lemma}\label{threesolb1}
Let $(M^3,g,f)$ be a three-dimensional Ricci-degenerate Riemannian manifold with $\lambda_1\neq\lambda_2=\lambda_3$ satisfying (\ref{ggrsdef}). Let $c$ be a regular value of $f$ and
$\Sigma_c=\{x|f(x)=c\}$. If $\frac{\nabla f}{|\nabla f|}=E_1$, then the following hold.
{\rm (i)} $R$ and $|\nabla f|^2$ are constant on a connected component of $\Sigma_c$.
{\rm (ii)} There is a function $s$ locally defined with $s(x)=\int\frac{df}{|\nabla f|}$, so that $ds=\frac{df}{|\nabla f|}$ and $E_1=\nabla s$.
{\rm (iii)} $\nabla_{E_1}E_1=0$.
{\rm (iv)} $\lambda_1$ and $\lambda_2$ are constant on a connected component of $\Sigma_c$ and so depend on the local variable $s$ only.
{\rm (v)} Near a point in $\Sigma_c$, the metric $g$ can be written as
$\ \ \ g= ds^2 + \sum_{i,j > 1} g_{ij}(s, x_2, \cdots, x_n) dx_i \otimes dx_j$, where
$x_2, \cdots, x_n$ is a local coordinate system on $\Sigma_c$.
{\rm (vi)} $\nabla_{E_i}E_1=\zeta(s) E_i, \textrm{$i=2,3$ with }\zeta(s)=\frac{\psi \lambda_i+\phi}{|\nabla f|}$ and
$g(\nabla_{E_i}E_i, E_1)=-\zeta $.
\end{lemma}
\begin{proof}
By assumption, for $i=2,3$, $R(\nabla f, E_i) =0$ and Lemma \ref{basiccodazzi} (i) gives $E_i(R) =0$. Equation {\rm (\ref{ggrsdef})} gives $E_i(|\nabla f|^2) =0$. We can see $d( \frac{df}{|df |} )=0$.
$g(\nabla_{E_1} E_1 , E_1)=0$ is trivial. We can get
$g(\nabla_{E_1} E_1 , E_i)=g(\nabla_{E_1} (\frac{\nabla f}{|\nabla f |} ), E_i) =0$ from {\rm (\ref{ggrsdef})}. (i), (ii) and (iii) are proved.
As $\nabla f$ and the level surfaces of $f$ are perpendicular, we get (v).
Assigning $(E_1,E_1)$ to (\ref{ggrsdef}), we have $E_1E_1f=\psi\lambda_1+\phi$. Since $\psi\neq0$, $\lambda_1$ is a function of $s$ only. Then
$\lambda_2$ also depends only on $s$ from the fact that $R=R(s)$. So we proved {\rm (iv)} and {\rm (vi)}.
\end{proof}
\begin{lemma} \label{claim112b3}
Let $(M^3,g,f)$ be a three-dimensional Ricci-degenerate Riemannian manifold satisfying (\ref{ggrsdef}).
Suppose there exists a Codazzi tensor $C$ whose eigenspaces coincide with the Ricci eigenspaces. Suppose that $\frac{\nabla f}{|\nabla f|}= E_1$ and $ \lambda_1 \neq \lambda_2= \lambda_3$ for adapted frame fields $\{ E_j\}$,
on an open subset $U$ of $\{ \nabla f \neq 0 \}$.
\smallskip
Then for each point $p_0$ in $U$, there exists a neighborhood $V$ of $p_0$ in $U$ with coordinates $(s, x_2, x_3)$ such that $\nabla s= \frac{\nabla f }{ |\nabla f |}$ and $g$ can be written on $V$ as
\begin{equation} \label{mtr1a3}
g= ds^2 + h(s)^2 \tilde{g},
\end{equation}
where $h:=h(s)$ is a smooth function and
$\tilde{g}$ is (a pull-back of) a Riemannian metric of constant curvature on a $2$-dimensional domain with $x_2, x_3$ coordinates.
In particular, $g$ is locally conformally flat.
\end{lemma}
\begin{proof}
The metric $g$ of Lemma \ref{threesolb1} (v) can be written as
\begin{equation} \label{ggg}
g= ds^2 + g_{22}dx_2^2 + g_{23} dx_2 \odot dx_3 + g_{33}dx_3^2,
\end{equation}
where $g_{ij}$ are functions of $(x_1:=s, \ x_2, \ x_3)$.
One easily gets $E_1 =\frac{\partial }{\partial s} $.
We write $\partial_{1}:=\frac{\partial }{\partial s}$ and $\partial_{i}:=\frac{\partial }{\partial x_i}$, $i=2,3$.
We consider the second fundamental form $\tilde{ h}$ of a leaf for $E_{23}$ with respect to $E_1$;
$\tilde{ h} ( u , u ) = - < \nabla_{u} u , E_1> $. As the leaf is totally umbilic by Lemma \ref{derdlem} {\rm (ii)}, $\tilde{ h} ( u , u ) = \eta \cdot g( u , u) $ for some function $\eta$ and any $u$ tangent to a leaf.
Then, $\tilde{ h} (E_2, E_2 ) = - < \nabla_{E_2} E_2 , E_1> = \eta= \zeta $, which is a function of $s$ only by Lemma \ref{threesolb1} (vi).
For $i, j \in \{ 2,3 \}$,
\begin{eqnarray*}
\zeta g_{ij}& =\tilde{ h} ( \partial_i , \partial_{j} ) = - < \nabla_{\partial_i} \partial_{j} , \frac{\partial }{\partial s}> = - <\sum_k \Gamma^{k}_{i{j}} \partial_k , \frac{\partial }{\partial s} > \\
& = - \sum_k < \frac{1}{2} g^{kl}( \partial_i g_{lj} +\partial_{j} g_{li} - \partial_l g_{ij} )\partial_k , \frac{\partial }{\partial s} > = \frac{1}{2} \frac{\partial }{\partial s} g_{i{j}}.
\end{eqnarray*}
So, $\frac{1}{2} \frac{\partial }{\partial s} g_{i{j}} = \zeta g_{ij}$. Integrating it, for $i, j \in \{ 2,3 \}$, we get $ g_{ij} = e^{C_{ij}} h(s)^2$. Here the function $h(s)>0$ is independent of $i,j$ and each function $C_{ij}$ depends only on $x_2, x_3$.
Now $g$ can be written as $g= ds^2 + h(s)^2 \tilde{g} $, where $\tilde{g}$ can be viewed as a Riemannian metric in a domain of the $(x_2, x_3)$-plane.
From the Gauss-Codazzi equation,
$R^{g} = R^{\tilde{g}} + 2 Ric^{g}(E_1,E_1)+ \|\tilde h\|^2 - H^2$. As all the other terms are constant on each hypersurface $\{s=\mathrm{const}\}$, so is $R^{\tilde{g}}$. Therefore each hypersurface has constant curvature. Thus $\tilde{g}$ has constant curvature and
$g$ is locally conformally flat.
\end{proof}
Now, we can prove our theorems.
\begin{pf2}
Combine Lemma \ref{gamma113}, Lemma \ref{solitonresult} and Lemma \ref{claim112b3}.
\end{pf2}
\begin{pf3}
Combine Lemma \ref{metriclemtam}, Lemma \ref{cpeode}, Lemma \ref{cpe2233} and Lemma \ref{claim112b3}.
\end{pf3}
\begin{pf4}
Combine Lemma \ref{metriclemtam}, Lemma \ref{cpeode}, Lemma \ref{cpe2233} and Lemma \ref{claim112b3}.
\end{pf4}
\section{Introduction}
Simulation software is often governed by a number of parameters that affect
both the accuracy of the results and the time-to-solution. In a typical
setting, the choice of these values represents a challenging trade-off
scenario: the more accuracy is desired, the more computation is required (thus
longer simulations), and vice versa.
Users, who normally aim at a target accuracy level,
face the problem of choosing, on a given set of computing resources,
a {\em configuration} $p$ of the parameters (i.e., a tuple of values)
that fulfills the accuracy requirements while minimizing
the execution time:
$$\min_{p} \ \text{time}(\text{resources}, p) \quad
\text{subject to} \quad \text{accurate}(p). $$
The problem is exacerbated by the large space of possibilities,
the intricate relation among the parameters, and the dependence
on the actual simulated system and underlying architecture.
In general, given the dimensionality of the space of configurations,
finding the optimal values for the parameters is a daunting task,
and even experts need a considerable amount of trial and error
merely to provide rules of thumb, which are often suboptimal.
Users are left with two options: either use the (potentially) suboptimal
rules of thumb from the literature, or perform a tedious and time
consuming search, which requires knowledge from the application domain,
the solver, and the underlying computing architecture.
In this paper, we present a methodology for the automatic parameter
selection for simulation codes, aiming at both an increase in productivity
and an improved utilization of computing resources of scientific simulations.
A case study on one of the most popular methods in molecular dynamics
(the particle-particle particle-mesh method~\cite{PPPM-Hockney}) demonstrates the potential savings
offered by the methodology.
To be amenable to our methodology, a numerical code must
present the following three characteristics.
First, it has to be governed by a number of parameters that affect the efficiency
of the computation and/or the accuracy of the results; these parameters must be
exposed to the user, typically as input arguments to the simulation software.
Second, analytical formulas as functions of the parameters need to be available
for the estimation of the error incurred by a given configuration.
Finally, rough (asymptotic) cost estimates, generated either manually
or analytically, are required.
If these three requirements are satisfied, the methodology
proceeds in three steps.
\begin{enumerate}
\item The first step consists in characterizing
the parameters that represent the search space;
this involves identifying those parameters that affect performance and/or
accuracy, choosing meaningful ranges for them, and discretizing the continuous ones.
We refer to the set of all possible values for these parameters
as the ``parameter space'' or ``search space''.
\item In the second step, analytical error bounds are used to divide
the search space into accurate and inaccurate configurations, according to
whether or not they are estimated to satisfy the user
requirements; only the accurate subspace is further considered.
\item As a third step, the execution time of the method is modeled by fitting
one or more functions corresponding to the computational cost of the method to
data samplings (collected from short runs);
the combination of these functions yields a model that accurately predicts the
execution time for each configuration in the accurate subspace.
\end{enumerate}
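The three steps above can be sketched as a simple pipeline. The function and argument names below (\texttt{is\_accurate}, \texttt{runtime\_model}) are illustrative placeholders, not part of any actual tool:

```python
import itertools

def select_configuration(space, is_accurate, runtime_model):
    """Three-step sketch: enumerate the discretized search space,
    keep only configurations estimated to be accurate, and return
    the one with the smallest predicted execution time."""
    # Step 1: the caller supplies the discretized parameter space.
    configurations = itertools.product(*space.values())
    # Step 2: prune via analytical error bounds.
    accurate = [c for c in configurations if is_accurate(c)]
    # Step 3: rank by the fitted performance model.
    return min(accurate, key=runtime_model)

# Toy usage with stand-in accuracy and cost functions.
space = {"cutoff": [2.0, 3.0, 4.0], "order": [2, 4, 6]}
best = select_configuration(space,
                            is_accurate=lambda c: c[0] * c[1] >= 8,
                            runtime_model=lambda c: c[0] ** 3 + c[1])
```

The real accuracy predicate and runtime model come from steps 2 and 3 of the methodology; the brute-force enumeration shown here is only viable once the accurate subspace has been reduced.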
The description of the steps of the methodology is deliberately general. In
practice, their application will be adjusted to the properties of the method
to overcome the curse of dimensionality: While the first stage only
requires acquiring a high-level understanding of the method or software at
hand, the second and third stages require actual computation, and the
potentially high dimensionality of the search space poses challenges to an
accurate prediction and selection of parameters.
To overcome these challenges, it is critical to exploit
method-specific properties and knowledge in order to reduce the complexity of the
search and to obtain faster and more accurate predictions.
To illustrate the potential of the methodology, we developed
a prototype that implements the methodology and applied it to
the particle-particle particle-mesh (PPPM) method.
This method is governed by four parameters (three of them
affect performance, and all four affect accuracy) leading to
a large parameter space.
Moreover, the overall accuracy of the simulation may be regulated by
multiple accuracy thresholds, corresponding to different sections
of the method.
In general, in order to remove the need for a manual search of good configurations and to
simplify the user's workflow, the developers of the solvers provide rules of thumb
on how to set these parameters.
However, since the optimal choice highly depends on both the
actual simulation and the architecture, the effectiveness
of these guidelines is limited.
In contrast, as we demonstrate, when both the simulated system and the computing architecture are
taken into account, it is possible to identify configurations that lead to
close-to-optimal performance, and thus to an efficient use of resources.
The benefits of our tool are two-fold.
On the one hand, it provides the users with close-to-optimal
configurations specifically chosen for the system and architecture
of interest.
On the other hand, it does so while dispensing them from the burden of an
unpleasant and time-consuming manual search for such efficient configurations.
Moreover, the tool does not require any deep understanding of
the solvers or computing architectures. The user can thus focus
on the high-level aspects of the scientific problem.
In short, our experiments demonstrate how even expert choices for parameters might be
severely suboptimal in terms of efficiency: While the simulations deliver
the required accuracy, they do not do so in the most efficient way. In other
words, resources are underutilized.
At the expense of an initial (automated) search, our approach
yields gains in terms of productivity and better usage of resources.
This is especially relevant in the common cases where the simulations
take considerable time or many simulations with similar characteristics
are to be run.
More specifically, in our experiments we observed reductions
of time-to-solution between 10\% and 60\%. Given the length
of typical molecular dynamics simulations, this may translate to
savings of hours or even days of computation, or the execution
of twice as many simulations given a fixed core-hour budget.
\subsection{Contributions.}
The main contribution of this paper is a methodology for the automatic
parameter selection for simulation software. The requirements for its
application are that the parameters affecting accuracy and performance of the
software are exposed to the user, and that formulas for error bounds and
computational complexity of the underlying solvers are available.
The outcome of the process is a configuration of the parameters that
yields accurate enough results in (almost) minimal time.
We focus on the domain of molecular dynamics, and contribute a practical example
of the potential benefits of our approach based on a very popular method in the field,
the particle-particle particle-mesh method ({\sc pppm}{})~\cite{PPPM-Hockney}, and its implementation
from the well-known LAMMPS suite~\cite{LAMMPS-1995}.\footnote{For a list of papers citing LAMMPS,
many of which present results using this software, please visit
\url{http://lammps.sandia.gov/papers.html}.}
Usage of our prototype implementing the methodology does not require deep knowledge
of the solvers and computing architectures, and at the cost of an easily
amortized automated search, the tool provides the user with close-to-optimal
configurations.
As a result, researchers are enabled to carry out many more or larger simulations
and therefore to gain deeper scientific insights in the problem at hand.
\subsection{Outline of the paper.}
This paper is structured as follows. Section 2 provides an overview of the basic
ideas behind molecular dynamics and the PPPM method.
Sections 3, 4 and 5 discuss in detail the three steps in our methodology, with
practical examples using the PPPM method. These steps are: characterization of
the search space, identification of the accurate subspace, and sampling and modeling.
In Section 6 we present multiple experimental results, while in Section 7 we draw conclusions.
\section{Background}
This section reviews the basic ideas behind molecular dynamics and the PPPM
method, as well as research efforts related to the presented work. The readers
familiar with both molecular dynamics and PPPM may skip this section.
\subsection{Molecular Dynamics and the PPPM method}
Molecular dynamics (MD) is a well-established tool for
the study of the properties of complex particle systems at the atomistic level;
it is widely used in a variety of fields, including computational chemistry, biophysics, and
materials science.
Typical simulations consist of systems comprising between
thousands and tens of millions of particles. In order to simulate time scales
relevant for the processes being studied, and thus to obtain meaningful
evidence, these systems must evolve for at least millions of timesteps.
In practice, MD simulations are limited by computing resources,
and practitioners usually have to apply for compute time on supercomputers.
It is therefore critical to make an efficient use of the available resources.
The basic idea underlying an MD simulation is to study the movement of the
particles due to the forces acting on each of them, for a given time span.
The computation in these simulations is dominated
by the calculation of the forces exerted on each particle
(or, similarly, the calculation of the potential energy of the system).
Given a system with $n$ particles, the direct evaluation of pairwise
interactions would cost $O(n^2)$ operations (per timestep), and
is thus entirely infeasible for systems with a large number of particles.
The so-called mesh-based Ewald methods, among which we find the PPPM method,
reduce the algorithmic complexity
to $O(n \log n)$~\cite{PPPM-Hockney,PME-Darden,SPME-Essmann}.
In order to reduce the computational cost of one timestep from $O(n^2)$
to $O(n \log n)$, PPPM{} splits the interactions
into short- and long-range ones. Forces among neighboring particles within a
given {\it cutoff} radius are computed in {\it real} space by means of direct evaluation,
while forces due to the interaction of distant particles are computed
in Fourier (or {\it reciprocal}) space.
The calculation of the reciprocal space contributions, that is, the long-range interactions,
requires solving the Poisson equation in Fourier space. In order to take
advantage of the Fast-Fourier Transform (FFT) algorithm and achieve
the $O(n \log n)$ complexity, the potential of
the particles is mapped into a regular grid, computations based on FFTs
are performed, and the resulting potential is mapped back to the particles.
Depending on the specifics of how the Poisson equation is solved, multiple
flavors of PPPM{} arise. In the following, we consider two of them:
analytical differentiation ($ad${}), and $i${\bf k}{} numerical
differentiation ($i${\bf k}{}). For details on these two flavors
we refer the reader to~\cite{MeshUpI}.
A simulation based on the PPPM{} method is governed by 4 parameters:
the cutoff, which restricts the short-range contribution
to particles within a certain radius;
the size of the grid into which the particles are mapped for the
calculations of the long-range interactions;
the interpolation order, which affects the mapping of potential into the grid,
and indicates the number of grid points (per dimension)
to which the potential is mapped; and
the Ewald parameter, which controls the weight
of each (short- and long-range) contribution.
Out of these four parameters, the first three (cutoff, grid size, and
interpolation order) affect both the accuracy and execution time of the simulation,
while the Ewald parameter affects the accuracy but not the execution time.
The impact of the cutoff, grid size, and interpolation order is rather
straightforward. When the cutoff is increased, more particles are taken into
consideration, the accuracy of the real space part also increases, and
the computation becomes more expensive.
Similarly, an increase in the grid size or in the interpolation order results
in higher accuracy and computational cost for the reciprocal space part.
The role of the Ewald parameter ($\alpha${}) is more subtle. While it does not
play a role in terms of computation, and thus performance, it has a strong
influence on accuracy. For instance, for a fixed cutoff, grid, and interpolation
order, larger values of $\alpha${} improve the accuracy of the real space and
reduce that of the reciprocal space.
This fact can be used to optimize performance: Given a configuration that
attains the desired accuracy but is suboptimal in terms of performance,
the value of $\alpha${} can be modified to shift the contribution
to compute time from one part to the other.
To showcase our methodology, we choose the two types of systems
depicted in Fig.~\ref{fig:systypes}:
bulk (homogeneous) and interfacial (inhomogeneous).
Homogeneous systems, with a random distribution of particles over the entire
domain, are typically used for initial tests of accuracy and
performance models. Inhomogeneous systems are very common and the most relevant in
practice; they constitute a
class of systems complex enough to stress the effectiveness of our methodology.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/MD-boxes/csbulk.png} \hspace{2cm}
\includegraphics[scale=0.25]{figs/MD-boxes/csint.png}
\caption{Two types of systems. Left, bulk system. Right, interfacial system.}
\label{fig:systypes}
\end{figure}
As a specific implementation of the PPPM solver, we choose the PPPM
solver for dispersion interactions from the LAMMPS package, a widely-used open
source MD suite.
The choice of the
solver for dispersion is not arbitrary; dispersion forces
exist between all types of atoms and are
therefore present in every system. Of course, our approach is
applicable to other types of forces, such as the electrostatic ones.
Our tool takes as input a description of the simulation
to be run and the desired accuracy, and returns as output the estimated fastest
configuration that satisfies the accuracy constraints.
The input description includes the size of the simulation domain,
the number of particles in the domain, and whether they fill up the entire
domain (bulk), or only a box within the domain (interfacial).
The desired accuracy is expressed as either two independent thresholds
for the short- and long-range contributions ($\Delta F_{real}$ and
$\Delta F_{reciprocal}$, respectively), or a single value as a threshold
for the combined root mean square
({\footnotesize $\sqrt{\Delta F_{real}^2 + \Delta F_{reciprocal}^2}$ }),
where $\Delta F_\star$ is defined as $\sqrt{\frac{1}{N} \sum_{i=1}^N
(F_i - F_i^{\text{exact}})^2}$, $N$ the number of particles.
The tool returns the estimated optimal values for cutoff, grid size,
interpolation order, and Ewald parameter.
\subsection{Related work}
Research efforts in the domain of molecular dynamics simulations concentrate
mainly in the design of accurate and efficient methods and their
parallel scalable implementation in software packages.
The MD landscape is populated with a broad variety of methods,
from simple truncation (with or without tail correction), through
tree methods, and grid-based methods. The latter group contains, among
others, the particle-mesh Ewald (PME), smooth particle-mesh Ewald (SPME),
the particle-particle particle-mesh (PPPM{}), and the multi-level
summation (MSM) methods. Our methodology is applicable to all these methods.
The list of available molecular dynamics suites is also large. Among others, it
is worth mentioning GROMACS, NAMD,
and CHARMM~\cite{GROMACS-1995,CHARMM-2009,NAMD}. While in our case
study we consider LAMMPS, the approach is generic and fully portable to any other
suite.
Literature on the accuracy of the different methods is
abundant~\cite{Kolafa1992,Petersen1995,Deserno:1998wb,Hardy2006}.
Furthermore, there exists literature on the optimal choice of certain
parameters for accuracy. Among them, in~\cite{MeshUpII}, the authors
discuss the choice of optimal Ewald parameter given fixed cutoff,
grid size, and interpolation order for the PPPM{} method in
its $i${\bf k}{} differentiation flavor. The authors of~\cite{Stern-optimal-ewald}
perform a similar study for both $ad${} and $i${\bf k}{} differentiation in PPPM{}.
However, despite the importance of making an efficient usage
of the available resources in order to target larger systems,
the optimal choice of parameters in terms of performance
has received much less attention. An attempt that bears some
resemblance to our approach is given in~\cite{Wang-optimal-perf}.
They propose an analytical approach to finding optimal
values for all four parameters in SPME (the same four as in PPPM{}).
However, we observe two limitations in the
study. First, their approach does not take
the actual hardware into consideration. The authors work
under the assumption that every flop (arithmetic operation)
has the same cost; due to caching effects and the cost of data movement, it is
well understood that an accurate
model must take into account that the cost of flops is not constant
across a piece of software.
Second, their numerical
results do not provide any reference to understand how close the
execution times are to the optimal. As later discussed in this paper,
we determine the region in the parameter space that potentially
contains the optimal configurations, and compare the results of
our tool with the best timings in that region.
Since we take the architecture into consideration, we can also
identify close-to-optimal configurations across computing platforms.
\section{Characterization of the search space}
\label{sec:space}
The first step in our methodology for the automatic selection of parameters
is to characterize the parameter space, that is, to identify the
set of parameters $\mathcal{P}$ that play a role in the accuracy and/or performance
of the target method.
For most algorithms in computational science, the set $\mathcal{P}$ of input
parameters is a mixture of (potentially unbounded) discrete and continuous
values. For each of these parameters, a range must be specified and, for the
continuous ones, a discretization (not necessarily regular) provided. This
process originates the search space $\mathcal{S}$ of parameter configurations.
Without loss of generality, when there is freedom in the choice, the
granularity of the discretization and the considered ranges of values are set
based on the experience of practitioners and domain experts.
The objective of our methodology is to find the configuration,
that is, the one point in the (high-dimensional) space $\mathcal{S}$, that delivers
the requested accuracy in the minimum time.
\vspace{3mm}
\noindent
{\bf Example: Characterizing $\mathcal{S}$ for the {\sc pppm}{} method. }
The {\sc pppm}{} method is parameterized by
the cutoff radius ($r_c${}),
the grid size ($(n_x \times n_y \times n_z)${}),
the interpolation order ($p${}),
and the Ewald parameter ($\alpha${}).
Out of the four parameters, the interpolation order and the grid size are
already discrete, while the Ewald parameter and the cutoff are continuous.
In the LAMMPS implementation of {\sc pppm}{}, the accepted values for the interpolation order are integers from 2 to
6.
The grid size is restricted to values whose dimensions can be expressed as
products of powers of only 2, 3, and 5 (e.g., a grid
of size $60 \times 60 \times 60$ is valid, but not one of size
$66 \times 66 \times 66$, since 66 contains the prime factor 11).
To constrain the (infinite) number of possible grid sizes,
we dynamically set an upper bound based on the system
under consideration.
This upper bound is set so that
only grids containing at most 8 times as many grid points as
there are particles in the system ($2\times$ per dimension), and with a
shape proportional to the simulation domain, are allowed.
This bound is generous---the optimal grid size is typically far from
the largest allowed---and may be decreased to reduce the search time.
With respect to the continuous parameters,
the Ewald parameter must be strictly positive,
and is typically not much larger than 1.0$\sigma^{-1}${};
we allow values in the range (0.0, 1.0].
As for the cutoff, no hard constraints are imposed, other than being strictly positive;
however, it is commonly accepted that the cutoff should take a value of at least 2.0$\sigma$.
Regarding the upper bound, we allow rather large cutoffs
up to 6.0$\sigma$.
For the discretization of the Ewald parameter and the cutoff,
we choose a step size of 0.01$\sigma^{-1}${} and 0.1$\sigma$, respectively.
We recall that in both cases one can certainly explore a larger space of values;
the aforementioned bounds are flexible, and
the validity of our methodology and results is not affected by these choices.
This discretization leads to a 4-dimensional search space, where each
configuration consists of a 4-tuple ($\alpha${}, $r_c${}, $(n_x \times n_y \times n_z)${}, $p${}).
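The constraint on valid grid dimensions (products of powers of 2, 3, and 5) is straightforward to enumerate; the following sketch (a helper of ours, not part of LAMMPS) lists all valid dimensions up to a bound:

```python
def valid_grid_dims(limit):
    """Grid dimensions accepted by the FFT: products of powers of 2, 3, and 5."""
    dims = set()
    p2 = 1
    while p2 <= limit:
        p3 = p2
        while p3 <= limit:
            p5 = p3
            while p5 <= limit:
                dims.add(p5)
                p5 *= 5
            p3 *= 3
        p2 *= 2
    return sorted(dims)

# 60 = 2^2 * 3 * 5 is a valid dimension; 66 = 2 * 3 * 11 is not.
```

Taking the Cartesian product of such dimensions (filtered by the domain-shape and grid-point bounds above) yields the discrete grid-size axis of $\mathcal{S}$.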
As we discuss in the next section, the evaluation of
error estimates for all configurations in $\mathcal{S}$ is computationally too
expensive, and thus infeasible in practice due to the overhead it introduces.
Furthermore, it is expensive to develop an accurate performance model that
takes the entirety of the search space into account. Therefore we advocate for
an approach that exploits the structure of the target methods to reduce
the dimensionality of the search.
For the {\sc pppm}{} method (and an entire class of similar methods), this includes
(1) the fact that only accurate configurations are worth considering, and
(2) the fact that the study of both accuracy and performance can be split,
via a divide-and-conquer strategy, into the study of the method's components,
namely the real- and reciprocal-space contributions,
which are then composed to provide a global solution.
\section{Identification of the accurate subspace}
In this first computational stage of our methodology,
accuracy bounds
are used as a discriminant to restrict the search space to
only those configurations that result in simulations
accurate enough to merit the effort of performance modeling.
Therefore, the discretized parameter space
is split into {\em accurate} and {\em inaccurate subspaces},
$\mathcal{S_A}$ and $\mathcal{S_I}$ respectively,
and only the former is kept for further consideration.
We refer to the boundary between both subspaces as the
{\em frontier} ($\mathcal{F}$). The frontier is a Pareto-efficient
frontier comprising the configurations that are
Pareto optimal, that is, configurations that satisfy the accuracy constraints
and for which the contribution of any one of the parameters to the
computational cost cannot be reduced without increasing the
contribution of the others or compromising the accuracy of the solution
(crossing the accurate--inaccurate boundary).
To estimate the accuracy of each configuration of parameters, we
require the availability of formulas for the error bounds.
These are typically derived, and provided by the developer
of each method in the corresponding publication.
For {\sc pppm}{}, the error bounds are provided in~\cite{Rolf1}, and
consist of two formulas, for the real-space and the
reciprocal-space contributions, respectively.
We outline these formulas below.
The error of the real-space contribution is bounded by
$$\Delta F_{real} = \frac{{C} \sqrt{\pi} \alpha^5}{\sqrt{N V r_c}} %
\left( \frac{6}{r^6_c \alpha^6} + \frac{6}{r^4_c \alpha^4} + \frac{3}{r^2_c \alpha^2} + 1 \right) %
e^{-r^2_c \alpha^2},$$
where $C$ is the dispersion coefficient (dependent on the particles in the system),
$N$ is the number of particles in the system, $V$ is the volume of the system,
and $\alpha$ and $r_c$ are the Ewald parameter and the cutoff, respectively.
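This real-space bound is cheap to evaluate directly; the following sketch transcribes it term by term (the function name is ours, and the values of $C$, $N$, and $V$ must be supplied by the caller):

```python
import math

def delta_F_real(alpha, rc, C, N, V):
    """Real-space error bound Delta F_real, as given above."""
    pre = C * math.sqrt(math.pi) * alpha**5 / math.sqrt(N * V * rc)
    poly = (6.0 / (rc**6 * alpha**6) + 6.0 / (rc**4 * alpha**4)
            + 3.0 / (rc**2 * alpha**2) + 1.0)
    return pre * poly * math.exp(-rc**2 * alpha**2)

# The bound decreases both with a larger Ewald parameter and with a
# larger cutoff, due to the dominating exponential factor.
```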
The error for the reciprocal-space contribution is bounded by
$$\Delta F_{reciprocal} = C\sqrt{\frac{Q}{NV}},$$
where
$$
Q = \frac{1}{V} \sum_{{\bf k}\in \mathbb{M}}
\left\{
\sum_{{\bf m} \in \mathbb{Z}^3} \left\vert \tilde{\bf R} \left( {\bf k} + \frac{2\pi}{h}{\bf m} \right) \right\vert^2
- \frac{
\left\vert
\tilde{\bf D}({\bf k}) \sum_{{\bf m} \in \mathbb{Z}^3} \tilde{U}^2 ({\bf k} + \frac{2\pi}{h}{\bf m})
\tilde{\bf R}^{*} ({\bf k} + \frac{2\pi}{h}{\bf m})
\right\vert^2
}
{ \vert\tilde{\bf D}({\bf k})\vert^2 [\sum_{{\bf m} \in \mathbb{Z}^3} \tilde{U}^2 ({\bf k} + \frac{2\pi}{h}{\bf m})]^2 }
\right\}.
$$
\noindent
The details of the last formula are beyond the scope of this paper.
The main message is that it is not uncommon that error bounds are given by
means of complex formulas, whose evaluation might be computationally expensive
and even require numerical approximations.
To limit the cost of the evaluation of the formulas, one can make
use of the available knowledge on the specific method at hand.
For instance, in {\sc pppm}{}, since separate formulas are provided for both the real-
and reciprocal-space contributions, and some of the parameters affect only one
of the two errors, we decompose the evaluation in that of each space, and then
combine the results to obtain the error estimates for each configuration of the
four-dimensional space. This approach is general and valid for a class of
methods with similar characteristics.
In {\sc pppm}{}, the real space error is only affected by the choice of
$\alpha${} and $r_c${},
while that of the reciprocal space
is only affected by the choice of $\alpha${}, $p${}, and $(n_x \times n_y \times n_z)${}.
Figures~\ref{fig:acc-realspace} and~\ref{fig:acc-kspace}
show, respectively, the independent evaluation of the error
estimates for the real and reciprocal spaces. The figures correspond
to the {\em Large Cube} test case ({\it TestLC}{}) described in Appendix~\ref{app:scenarios}.
The figures illustrate the tradeoffs and difficulties associated with
the manual selection of parameters for a simulation run.
While higher values of the Ewald parameter
increase the real-space accuracy, they reduce the accuracy
of the reciprocal space.
Also, for a fixed target accuracy, smaller values of the Ewald parameter allow
the use of a smaller grid size or interpolation order, and hence reduce the execution time for the
reciprocal-space calculation, at the expense of setting a larger cutoff, thus
increasing the execution time for the real space, and vice versa.
It becomes apparent that identifying the accurate subspace $\mathcal{S_A}$, and then determining
the fastest configurations within $\mathcal{S_A}$
is a daunting task.
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{figs/RealSpace_error-LargeCube.pdf}
\caption{Real-space error.}
\label{fig:acc-realspace}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.50\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figs/KSpace_error-LargeCube-ad.pdf}
\caption{Reciprocal-space error.}
\label{fig:acc-kspace}
\end{subfigure}
\caption{Contour plots for the study of the accurate subspace.
The plots correspond to {\it TestLC}{} using $ad${} differentiation. In the case of the reciprocal
space, we show the error for an interpolation order of 5.}
\label{fig:acc-split}
\end{figure}
While the evaluation of the real-space error formula is inexpensive,
the evaluation of the reciprocal-space error formula is, unfortunately, still too costly, even when
only the 3D subspace $\alpha${} $\times$ $(n_x \times n_y \times n_z)${} $\times$ $p${}
is considered. In fact, the values for Fig.~\ref{fig:acc-kspace}
were calculated by means of an approximation, also provided in~\cite{Rolf1},
which is only valid for the $i${\bf k}{} differentiation, cubic domains, and
grid sizes with an equal number of grid points in each dimension.
Without this simplification, the evaluation of the entire grid
using the generalized formula would take days.
We opt for an alternative that further reduces the amount of required
evaluations of the formula. The insight is that it
suffices to identify the values that split accurate and inaccurate configurations.
That is, referring to Fig.~\ref{fig:acc-kspace}, if the user requests an
accuracy of $10^{-4} \epsilon/\sigma$, it suffices to find the corresponding contour plot;
every point below that line is accurate enough.
To this end, for each interpolation order and grid size,
we perform a binary search for this splitting value. Additionally, the search
is parallelized and carried out in place, making use of the same architecture on which
the simulation will be run.
Following this idea,
the time spent in the evaluation of the error estimates may be reduced
from days to minutes.
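For a fixed interpolation order and grid size, the binary search over the Ewald parameter can be sketched as follows; {\tt recip\_error} is a stand-in for the (expensive) reciprocal-space error evaluation, assumed to increase monotonically with $\alpha$:

```python
def splitting_alpha(recip_error, target, lo=1e-6, hi=1.0, tol=1e-3):
    """Largest alpha whose reciprocal-space error stays within `target`,
    assuming recip_error(alpha) is monotonically increasing."""
    if recip_error(lo) > target:
        return None  # no accurate alpha exists for this (order, grid) pair
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if recip_error(mid) <= target:
            lo = mid  # still accurate: the splitting value lies above mid
        else:
            hi = mid  # inaccurate: the splitting value lies below mid
    return lo
```

Each call to {\tt recip\_error} corresponds to one evaluation of the error formula, so roughly $\log_2((hi-lo)/tol) \approx 10$ evaluations suffice per (order, grid size) pair, instead of one evaluation per discretized $\alpha$ value.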
Once the error estimates for both the real and reciprocal spaces are
available, they are combined back into the single four-dimensional space $\mathcal{S}$;
it is then possible to split the full search space
into $\mathcal{S_A}$ and $\mathcal{S_I}$,
according to the target accuracy.
The splitting of the search space is carried out in one of two different ways,
depending on how the user expresses the accuracy requirements.
If the user inputs one single value for the accuracy,
those configurations where
the combined root mean square of the errors
({\footnotesize $\sqrt{\Delta F_{real}^2 + \Delta F_{reciprocal}^2}$ })
is below the accuracy threshold are considered accurate.
If, instead, the user provides individual thresholds for each component,
then the individual errors must satisfy the corresponding constraint independently.
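Both splitting criteria reduce to a simple predicate (a sketch; the helper name is ours):

```python
import math

def is_accurate(dF_real, dF_recip, target=None, targets=None):
    """Assign a configuration to S_A (True) or S_I (False)."""
    if targets is not None:
        # individual thresholds: each component must satisfy its own bound
        return dF_real <= targets[0] and dF_recip <= targets[1]
    # single threshold: combined root mean square of the errors
    return math.sqrt(dF_real**2 + dF_recip**2) <= target
```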
It is now important to point out that not all of
the parameters that influence accuracy also impact performance.
For instance, $\alpha${} only affects accuracy, and
does not directly influence the amount of computation performed.
Thus, the space of parameters can be reduced to 3-dimensional
($r_c${}, $(n_x \times n_y \times n_z)${}, $p${}) when modeling performance. At this
stage, points in this three-dimensional space are labeled as inaccurate, unless
there exists at least one value of $\alpha${} that makes the combination
accurate.
Figures~\ref{fig:acc-space-ex} and~\ref{fig:csb-acc-space-ex}
illustrate this subdivision, respectively
for {\it TestLC}{}, and the bulk system used in our
experimental results (Sec.~\ref{sec:experiments}).
In both figures, green and red dots denote the accurate and inaccurate
subspaces, respectively.
\begin{figure}[!h]
\centering
\includegraphics[width=.6\textwidth]{figs/acc-subspace/largecube/acc-gp.pdf}
\caption{Search space divided into accurate and inaccurate configurations
for {\it TestLC}{}.}
\label{fig:acc-space-ex}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=.6\textwidth]{figs/acc-subspace/csbulk/acc-gp.pdf}
\caption{Search space divided into accurate and inaccurate configurations
for the bulk system described in Sec.~\ref{subsec:exp-bulk}.}
\label{fig:csb-acc-space-ex}
\end{figure}
\section{Sampling and modeling}
\label{sec:modeling}
Once the accurate subspace $\mathcal{S_A}$ has been identified,
the third and final
step consists in determining the configurations in $\mathcal{S_A}$ that lead to the best overall performance.
A range of alternatives exists for estimating the execution time given a choice of parameters.
At one extreme, one could rely on a purely analytical approach
which consists of using the flop count
(the amount of arithmetic operations required by a given
configuration) as a metric to estimate the execution time.
Unfortunately, it is well-understood that not every flop incurs
the same cost: Different sections of a given implementation
attain different performance levels
(flops/s), and even variations of the values for the parameters
influence the use of the memory hierarchy (reuse
of data), and thus performance.
Therefore, even though flops are a metric that allows for a
fast estimation of execution time, they lead to inaccurate predictions.
At the other extreme, one may consider a purely empirical approach, where each
of the accurate configurations is executed and timed, and the fastest one is
selected. However, the number of configurations in $\mathcal{S_A}$ may vary
from several hundred to tens of thousands, depending on the particular system
and the desired accuracy.
Such an extensive search would still consume
considerable resources. Thus, this approach is reliable, but slow.
Instead, we advocate a hybrid analytical-empirical approach,
based on a reduced number of samples,
which predicts the performance for each configuration by fitting
a set of simple models to the measurements.
In Sec.~\ref{sec:static}, we present a static strategy that samples at
a set of predetermined points in the space. While rather effective and useful
to better understand the properties of the method, it presents some drawbacks
that we discuss in Sec.~\ref{sec:pros-cons}. In Sec.~\ref{subsec:adaptive-sampling}, we switch
to a dynamic sampling strategy that reduces significantly
the amount of required sampling and improves the accuracy of the estimations.
As our experiments confirm, the adaptive approach yields fast and reliable predictions.
\subsection{Static dense sampling}
\label{sec:static}
Our static approach consists in collecting samples at a set of predefined
points that cover a relatively dense subset of the space. The granularity
of the sampling obviously affects the accuracy and speed of the prediction:
the larger the number of samples, the slower and the more accurate is the
prediction, and vice versa.
If the space and the cost of sampling are too large, properties of the
method may be exploited to, for instance, reduce the dimensionality of the
space or speed up the sampling.
For the {\sc pppm}{} method, we take advantage of the fact that the method consists of two successive,
independent stages, and model each stage in isolation. The resulting models are
then combined to produce the predictions.
Similarly to the approach taken to evaluate the analytical formulas for the error bounds,
a divide-and-conquer strategy is used to decompose the performance modeling
of the {\sc pppm}{} method into that of the real and reciprocal contributions.
Thus, the real and reciprocal spaces are sampled independently, the samples are then fitted
to models, and finally the models are combined to predict the total compute
time of the accurate configurations.
\subsubsection{Sampling the real-space computation}
The only parameter that affects the time
required to compute the real-space contribution is the
cutoff.
Hence, different values for the cutoff are sampled,
while the other parameters are fixed. More specifically,
we use the following configurations:
\begin{itemize}
\item {Ewald parameter:} 0.50$\sigma^{-1}${}
\item {Interpolation order:} 2
\item {Grid size:} $1 \times 1 \times 1$
\item {Cutoff:} [2.0$\sigma$, 2.5$\sigma$, 3.0$\sigma$, ..., 6.0$\sigma$]
\end{itemize}
Here, the choice of Ewald parameter is arbitrary, and the interpolation
order and grid size are set to the smallest possible value to
minimize the time spent in the sampling. We sample at multiple
values of the cutoff in the range $[2.0\sigma,\ 6.0\sigma]$ in steps of 0.5$\sigma$,
for a total of nine data points.
\subsubsection{Sampling the reciprocal-space computation}
The time spent in computing the reciprocal-space contribution is, in principle,
affected only by the grid size and the interpolation order. Hence,
the cutoff and Ewald parameter are fixed, and the following configurations are sampled:
\begin{itemize}
\item {Ewald parameter:} 0.50$\sigma^{-1}${}
\item {Interpolation order:} [2, 3, 4, 5, 6]
\item {Grid sizes:} a function of the target system (domain size and number of particles)
\item {Cutoff:} 2.0$\sigma$.
\end{itemize}
Once again, the choice of the Ewald parameter value is arbitrary, the
cutoff is kept to a minimum, and we sample all interpolation orders in the
range $[2, 6]$, and the full set of valid grid sizes within a range determined
according to the number of particles and domain size of the target system. The
total number of data points varies with the size of the system, and is equal to five
(interpolation orders) times the number of considered grid sizes.
\vspace{5mm}
Here, and for the remainder of the paper, each sampling consists in the
execution of 1000 timesteps of the actual simulation of interest; of course,
this number is configurable. The rest of the properties of the simulation, such
as mixing rules, ensemble, temperature, etc., are fixed and configured by the user.
\subsubsection{Modeling and fitting}
Each set of samples is now fitted to a corresponding function.
The choice of the functions to be used comes either from domain expertise
or from the analysis of the method's complexity.
Since the computational cost for the evaluation of the real-space contribution
is proportional to the cube of the cutoff, we fit the function $f(c) = a + b\cdot c^3$
to the data points ($c_i$, $t(c_i)$), where the parameter $a$ accounts for the
overhead in allocation and setting of data.
As an example, we consider the test case {\em Small Interface} ({\it TestSI}{}, see App.~\ref{app:scenarios}
for details).
Figure~\ref{fig:cutoff-full-sampling} shows the measured execution time for the
real-space contribution for each of the sampled cutoff values.
The fit is satisfactory, presenting an average relative error
$\frac{1}{n} \sum_{i=1}^n (|f(c_i)-t(c_i)|/t(c_i))$ of less than 5\%.
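Since $f$ is linear in $c^3$, the fit reduces to ordinary least squares on the transformed variable $x = c^3$; a minimal, dependency-free sketch, together with the average relative error metric defined above:

```python
def fit_cubic(cutoffs, timings):
    """Least-squares fit of f(c) = a + b*c^3 (linear in x = c^3)."""
    xs = [c**3 for c in cutoffs]
    n = len(xs)
    mx, mt = sum(xs) / n, sum(timings) / n
    b = sum((x - mx) * (t - mt) for x, t in zip(xs, timings)) \
        / sum((x - mx)**2 for x in xs)
    a = mt - b * mx
    return a, b

def avg_rel_error(f, cutoffs, timings):
    """Average relative error of the fitted model at the sampled points."""
    return sum(abs(f(c) - t) / t for c, t in zip(cutoffs, timings)) / len(cutoffs)
```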
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{data/CutoffSampling-Full/SmallInterface/cutoff-full-sampling.pdf}
\caption{Fitting of the function $f(c) = a + b\cdot c^3$ to the timings for the
real-space contribution in {\it TestSI}{} ($i${\bf k}{} differentiation).
Values for parameters $a$ and $b$ are 0.44 and 0.0565, respectively.
The average relative error is less than 5\%.}
\label{fig:cutoff-full-sampling}
\end{figure}
To simplify the modeling of the reciprocal space, we consider each
interpolation order $P$ independently. Accordingly, we model the execution
time of the reciprocal space by means of multiple functions $h_P(g) = p + b \cdot g$, where
$g$ represents the number of points in the FFT grid, and $p$ accounts for the
time spent in the mapping of the particles onto the grid and back.\footnote{
Even though the computational cost of the FFT is $O(g \log g)$, with $g$ the number
of points in the grid, the empirical timings show that the implementation has a linear
cost. This comes as no surprise, since the implementation is communication-bound,
especially for large numbers of nodes.
}
Figure~\ref{fig:kspace-full-sampling} depicts the execution time for the
reciprocal space corresponding to $P=5$ and a range of grid sizes,
also for {\it TestSI}{}.
The average relative error is around 2\%.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{data/KSpace-Sampling-Full/SmallInterface/kspace-full-sampling.pdf}
\caption{Fitting of the function $h_5(g) = p + b\cdot g$ to the timings for the
reciprocal-space contribution in {\it TestSI}{} ($i${\bf k}{} differentiation).
Values for parameters $p$ and $b$ are 1.81 and $8.33\cdot10^{-5}$, respectively.
The average relative error is 2.3\%.}
\label{fig:kspace-full-sampling}
\end{figure}
\subsubsection{Prediction}
An estimate of the overall compute time is obtained by summing up the
estimates for the real-space and the reciprocal-space contributions.
We use test case {\it TestSI}{} to provide preliminary results
on the accuracy
of the predictions yielded by the described static approach.
As a reference, we measured the time spent in the computation of the real and reciprocal
space contributions for the frontier configurations and compared the timings
with the predictions for these same points.
Table~\ref{tab:full-sampling} collects the results for the fastest
five configurations based on empirical measurements (column 2).
Columns 3 and 4 show the predicted execution time for those same
configurations and the relative error in the prediction, measured as
$|t_{pred}-t_{emp}|/t_{pred}$, where $t_{emp}$ is the empirically measured
time and $t_{pred}$ is the predicted time.
The model-based predictions are quite accurate. The average relative error for
the entire set of frontier configurations (73) is 4.98\%. Moreover, and most
importantly, the methodology correctly identifies the configuration which leads
to the fastest execution.
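As a sketch of how the two models combine, the overall estimate is simply the sum of the two fitted functions. The parameter values below are those reported in Figs.~\ref{fig:cutoff-full-sampling} and~\ref{fig:kspace-full-sampling}; since only the order-5 reciprocal-space model is reported there, we evaluate configuration \#4 of Tab.~\ref{tab:full-sampling}:

```python
def predict_time(rc, p, grid, real_model, recip_models):
    """Overall estimate: real-space model f(c) = a + b*c^3 plus the
    reciprocal-space model h_p(g) = q + s*g for interpolation order p."""
    a, b = real_model
    q, s = recip_models[p]
    g = grid[0] * grid[1] * grid[2]  # number of FFT grid points
    return (a + b * rc**3) + (q + s * g)

# Fitted parameters for TestSI (ik differentiation), from the figures above.
real_model = (0.44, 0.0565)
recip_models = {5: (1.81, 8.33e-5)}
t = predict_time(4.50, 5, (10, 10, 160), real_model, recip_models)
# t is about 8.73 s, in line with the 8.754 s prediction in the table
```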
\begin{table}[!h]
\centering
\begin{tabular}{c @{\hspace*{1em}} c @{\hspace*{1em}}|@{\hspace*{1em}} c c @{\hspace*{1em}}||
@{\hspace*{1em}} c @{\hspace*{1em}}c @{\hspace*{1em}}c @{\hspace*{1em}}c} \toprule
\multicolumn{2}{c| @{\hspace*{1em}}}{\bf Empirical} &
\multicolumn{2}{c||@{\hspace*{1em}}}{\bf Prediction} &
\multicolumn{4}{c}{\bf Configuration} \\ \midrule
Ranking & Time & Time & Error & $\alpha${} & $r_c${} & $p${} & $(n_x \times n_y \times n_z)${} \\ \midrule
\#1 & 8.770s & 8.378s & (-4.68\%) & 0.56$\sigma^{-1}${} & 4.60$\sigma$ & 4 & $10 \times 10 \times 160$ \\
\#2 & 8.920s & 8.495s & (-4.99\%) & 0.60$\sigma^{-1}${} & 4.40$\sigma$ & 4 & $12 \times 12 \times 180$ \\
\#3 & 9.023s & 8.595s & (-4.98\%) & 0.63$\sigma^{-1}${} & 4.30$\sigma$ & 4 & $12 \times 12 \times 216$ \\
\#4 & 9.119s & 8.754s & (-4.17\%) & 0.58$\sigma^{-1}${} & 4.50$\sigma$ & 5 & $10 \times 10 \times 160$ \\
\#5 & 9.299s & 8.610s & (-8.00\%) & 0.64$\sigma^{-1}${} & 4.20$\sigma$ & 5 & $12 \times 12 \times 180$ \\ \bottomrule
\end{tabular}
\caption{Predictions for {\it TestSI}{} ($i${\bf k}{} differentiation) using
static sampling.}
\label{tab:full-sampling}
\end{table}
\subsection{Advantages and disadvantages of the static sampling}
\label{sec:pros-cons}
We observed a number of advantages and disadvantages of using
a static sampling. Among the advantages, we highlight the
simplicity of implementation, since the approach is system-agnostic,
that is, the system does not influence the search beyond the
selection of grid sizes to consider, and no online decisions
are necessary. Second, the accuracy of the predictions is rather
satisfactory in general. Finally, it is relevant beyond the automatic
selection of parameters: the sampling allows one to understand the
behavior of the method in practice and to expose unexpected
performance signatures. We give examples below.
On the other hand, we also identified a number of drawbacks that
may imply limited accuracy in the predictions or excessive sampling time:
\begin{enumerate}
\item Unexpectedly, the value of the cutoff does affect the execution time
of the reciprocal-space computation.
\item Unlike in Fig.~\ref{fig:kspace-full-sampling}, in certain cases
the timings for the reciprocal space may not fit a single straight line;
two shifted lines are observed instead (see Fig.~\ref{fig:kspace-shift} below).
\item The number of required samples may grow large.
\end{enumerate}
These issues are discussed in detail hereafter.
Our proposed solution, based on adaptive sampling, is developed in the next subsection.
\paragraph{Impact of $r_c${} on the reciprocal space. }
While, in principle, the cutoff should only have an impact on
the execution time of the real-space contribution, we observed that the execution time of
the reciprocal space is also affected.
This behavior is observed, for instance,
in the test case {\em Small Bulk} ({\it TestSB}{}, see App.~\ref{app:scenarios}).
As illustrated by Fig.~\ref{fig:kspace-cutoff-effect}, the difference in
execution time when calculating the reciprocal-space contribution with
two different fixed cutoffs (in this case 2.0$\sigma$ and 5.3$\sigma$) may be considerable.
Indeed, these differences carry over to the prediction of the execution
time of the overall simulation, as illustrated in Tab.~\ref{tab:cutoff-kspace}.
Columns 3 and 4 correspond to predictions after using a cutoff of 2.0$\sigma$
for the samplings of the reciprocal-space. As one can appreciate, the predictions
may be quite off. In fact, the average relative error between the measured and the
estimated execution times when using this cutoff is about 5\%.
If, instead, we take into account the range of cutoff values represented in the
configurations included in $S_A$, and choose to sample using a value for the
cutoff within that range, the overall estimation improves. In the case of {\it TestSB}{},
the cutoff in the frontier configurations ranges from 4.6$\sigma$ to 6.0$\sigma$.
If we fix the cutoff for sampling to the mid value (5.3$\sigma$),
the average relative error
(Tab.~\ref{tab:cutoff-kspace}, columns 5 and 6) is reduced to about 2\%.
We thus conclude that it is critical to dynamically choose the sampling values based
on the simulation under consideration in order to obtain highly accurate predictions.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{data/Cutoff-diff/kspace-cutoff-shift.pdf}
\caption{Difference in execution time for the calculation of the
reciprocal-space contribution when using different cutoffs.}
\label{fig:kspace-cutoff-effect}
\end{figure}
\begin{table}[!h]
\centering
\begin{tabular}{c @{\hspace*{1em}} c @{\hspace*{1em}} |
@{\hspace*{1em}} c @{\hspace*{-4em}} c |
@{\hspace*{1em}} c @{\hspace*{-4em}} c} \toprule
\multicolumn{2}{c|@{\hspace*{1em}}}{\bf Empirical} &
\multicolumn{2}{c@{\hspace*{1em}}|@{\hspace*{1em}}}{\bf Prediction ($r_c${} = 2.0$\sigma$)} &
\multicolumn{2}{c@{\hspace*{1em}} }{\bf Prediction ($r_c${} = 5.3$\sigma$)} \\ \midrule
Ranking & Time & Time & Error & Time & Error \\ \midrule
\#1 & 7.498s & 7.413s & (-1.15\%) & 7.679s & (+2.36\%) \\
\#2 & 7.542s & 7.394s & (-2.01\%) & 7.687s & (+1.88\%) \\
\#3 & 7.595s & 7.270s & (-4.47\%) & 7.610s & (+0.20\%) \\
\#4 & 7.660s & 7.520s & (-1.86\%) & 7.759s & (+1.28\%) \\
\#5 & 7.800s & 7.255s & (-7.52\%) & 7.736s & (-0.83\%) \\ \bottomrule
\end{tabular}
\caption{Results for {\it TestSB}{} ($i${\bf k}{} differentiation). The accuracy of the
predictions improves when using a cutoff closer to the range in the
frontier configurations ([4.6$\sigma$, 6.0$\sigma$]).}
\label{tab:cutoff-kspace}
\end{table}
\paragraph{Irregular behavior of the reciprocal space.}
In some test cases, we observed that the timings for the reciprocal
space do not lie on one single line, but rather on
two parallel ones (in a piecewise manner). We relate this behavior
to a switch in the data distribution in {\sc pppm}{}'s implementation~\cite{Rolf1,Rolf2},
where depending on the grid size and the number of processes used for the computation,
the FFT domain is decomposed either in slabs (faces) or pencils (columns).
More specifically, the shift occurs at the point where the number of grid points in
the $z$ dimension becomes equal to or greater than the number of processes used in
the simulation. As an example, Fig.~\ref{fig:kspace-shift} illustrates
the shift for the {\it TestLC}{} scenario. In the example, 96 processes
were used; the shift thus occurs at grid size $96\times96\times96$.
An adaptive sampling approach is required to identify the gap
and correctly fit the data.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{data/KSpace-shift/LargeCube/kspace-shift.pdf}
\caption{Samples for the execution time of the reciprocal-space contribution
in {\it TestLC}{} ($ad${} differentiation). The data is not fitted by one single line;
it requires two of them, with a similar slope but a shifted offset.}
\label{fig:kspace-shift}
\end{figure}
\paragraph{Reducing the number of samples.}
Finally, the dense static sampling
may involve a fairly large number of samples.
For instance, {\it TestSI}{} ($i${\bf k}{} differentiation)
requires around 200
samples.
While tractable, the number of required samples
will be reduced with an adaptive sampling technique.
\subsection{Adaptive sampling}
\label{subsec:adaptive-sampling}
In light of the aforementioned issues, we present here an adaptive
strategy for exploring the search space, whose decisions are guided by the
characteristics of the simulation at hand.
The new strategy is built upon the algorithm sketched in Alg.~\ref{alg:adaptive}.
Given a fitting function {\tt f}, the list of possible values for the independent
variables {\tt xs} (either the cutoff for the real space or the grids for the
reciprocal space) and an error {\tt threshold}, the algorithm commences by sampling
the minimum, maximum, and midpoint values in {\tt xs}; the function {\tt f} is
then fitted to the measurements. If the relative error at any of the
sampled points is larger than the given {\tt threshold}, the algorithm proceeds
recursively for each half of {\tt xs}; it terminates otherwise.
Next, we make use of a classic top-down algorithm for the segmentation of time
series~\cite{segmentation}
that takes the series of samples collected by
Alg.~\ref{alg:adaptive} and creates a piece-wise function consisting of one or
more instances of {\tt f} with different parameters $a_1, a_2, ..., a_n$. Six such
piece-wise functions will be created, one for the real space contribution, and
five for the reciprocal space (one per interpolation order). These functions
will then be used to model the execution time of each contribution given a
cutoff value (real space), an interpolation order and a grid size (reciprocal space).
\begin{center}
\begin{algorithm}
\begin{algorithmic}[1]
\Function{adaptive\_sampling}{f, threshold, xs}
\Function{adaptive\_sampling\_rec}{i, j}
\If{$(j-i) \le 1$}
\State \Return
\EndIf
\State midpoint = (i+j) / 2
\State timings(midpoint) = sample(midpoint)
\State x = [xs(i), xs(midpoint), xs(j)]
\State y = [timings(i), timings(midpoint), timings(j)]
\If{error(f, x, y) $>$ threshold}
\State \Call{adaptive\_sampling\_rec}{i, midpoint}
\State \Call{adaptive\_sampling\_rec}{midpoint, j}
\EndIf
\EndFunction
\State n\_xs = length(xs)
\State timings = array(n\_xs)
\State timings(1) = sample(1)
\State timings(n\_xs) = sample(n\_xs)
\State \Call{adaptive\_sampling\_rec}{1, n\_xs}
\State \Return timings
\EndFunction
\end{algorithmic}
\caption{{\bf: Adaptive sampling.}}
\label{alg:adaptive}
\end{algorithm}
\end{center}
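For concreteness, Alg.~\ref{alg:adaptive} translates almost line by line into Python. The {\tt sample} and {\tt error} helpers are stand-ins; here, {\tt linear\_fit\_error} fits a straight line through the three probed points and returns the maximum relative residual:

```python
def linear_fit_error(x, y):
    """Fit y ~ a + b*x by least squares; return the max relative residual."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx)**2 for xi in x)
    a = my - b * mx
    return max(abs(a + b * xi - yi) / yi for xi, yi in zip(x, y))

def adaptive_sampling(sample, fit_error, threshold, xs):
    """Transcription of the adaptive sampling algorithm: sample the endpoints,
    then recurse on each half only while the fit at the probed points is poor."""
    n = len(xs)
    timings = {0: sample(xs[0]), n - 1: sample(xs[n - 1])}

    def rec(i, j):
        if j - i <= 1:
            return
        mid = (i + j) // 2
        timings[mid] = sample(xs[mid])
        x = [xs[i], xs[mid], xs[j]]
        y = [timings[i], timings[mid], timings[j]]
        if fit_error(x, y) > threshold:
            rec(i, mid)
            rec(mid, j)

    rec(0, n - 1)
    return timings
```

For perfectly linear timings the recursion stops after a single midpoint, so only three samples are taken; the piecewise fit over the collected samples is then delegated to the segmentation step described above.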
\vspace{3ex}
This adaptive strategy reduces the amount of sampling. For instance,
when sampling for the reciprocal space in {\it TestSI}{},
the static full sampling (Sec.~\ref{sec:static}) and the adaptive sampling
use 18 and 3 data points per interpolation order, respectively
(see Fig.~\ref{fig:kspace-full-sampling} vs. Fig.~\ref{fig:kspace-si-dyn}).
In the ``less friendly'' {\it TestLC}{} scenario,
17 and 8 samples per interpolation order are used, respectively
(see Fig.~\ref{fig:kspace-shift} vs. Fig.~\ref{fig:kspace-lc-dyn}).
Not only is the number of samples reduced; the shift is also correctly identified,
thus improving the accuracy of the predictions.
Finally, the effects of the cutoff in the computation of the reciprocal-space
term are addressed as follows: instead of fixing the value of the cutoff to
2.0$\sigma$, we sample for the minimum and the maximum values of the cutoff
present in the accurate subspace, and interpolate for intermediate values.
To compensate for the increase in the number of samples, we further adjust
the sampling to the target system.
Concretely, since we are only interested
in accurate enough configurations, we only sample the interpolation orders present in the
accurate subspace configurations. Likewise, we adjust the range of grid sizes for
the sampling of the reciprocal space and the range of cutoffs for
the sampling of the real space.
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{data/KSpace-Sampling-dyn/SmallInterface/kspace-full-sampling.pdf}
\caption{Samples for the execution time of the reciprocal-space contribution
in {\it TestSI}{} ($i${\bf k}{} differentiation). Dynamic strategy.}
\label{fig:kspace-si-dyn}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{data/KSpace-shift-dyn/LargeCube/kspace-shift.pdf}
\caption{Samples for the execution time of the reciprocal-space contribution
in {\it TestLC}{} ($ad${} differentiation). Dynamic strategy.}
\label{fig:kspace-lc-dyn}
\end{figure}
As a result of the described adaptive sampling, we obtain accurate predictions
at a reduced sampling cost. In Tab.~\ref{tab:si-dyn}, we present results for
{\it TestSI}{} ($i${\bf k}{} differentiation).
While the static sampling required 99 samples and attained a relative error
of 4.98\%, the dynamic strategy required only 37 samples and achieved a
reduced relative error of 2.46\%.
Most importantly, the dynamic approach still selects the fastest configuration as the
optimal choice.
\begin{table}[!h]
\centering
\begin{tabular}{c @{\hspace*{1em}} c @{\hspace*{1em}}|@{\hspace*{1em}} c c @{\hspace*{1em}}||
@{\hspace*{1em}} c @{\hspace*{1em}}c @{\hspace*{1em}}c @{\hspace*{1em}}c} \toprule
\multicolumn{2}{c| @{\hspace*{1em}}}{\bf Empirical} &
\multicolumn{2}{c||@{\hspace*{1em}}}{\bf Prediction} &
\multicolumn{4}{c}{\bf Configuration} \\ \midrule
Ranking & Time & Time & Error & $\alpha${} & $r_c${} & $p${} & $(n_x \times n_y \times n_z)${} \\ \midrule
\#1 & 8.770s & 8.663s & (-1.23\%) & 0.56$\sigma^{-1}${} & 4.60$\sigma$ & 4 & $10 \times 10 \times 160$ \\
\#2 & 8.920s & 8.808s & (-1.26\%) & 0.60$\sigma^{-1}${} & 4.40$\sigma$ & 4 & $12 \times 12 \times 180$ \\
\#3 & 9.023s & 8.919s & (-1.16\%) & 0.63$\sigma^{-1}${} & 4.30$\sigma$ & 4 & $12 \times 12 \times 216$ \\
\#4 & 9.119s & 9.024s & (-1.06\%) & 0.58$\sigma^{-1}${} & 4.50$\sigma$ & 5 & $10 \times 10 \times 160$ \\
\#5 & 9.299s & 8.886s & (-4.64\%) & 0.64$\sigma^{-1}${} & 4.20$\sigma$ & 5 & $12 \times 12 \times 180$ \\ \bottomrule
\end{tabular}
\caption{Predictions for {\it TestSI}{} ($i${\bf k}{} differentiation) based on dynamic sampling.}
\label{tab:si-dyn}
\end{table}
Table~\ref{tab:lc-dyn} collects similar results for the {\it TestLC}{}
scenario ($ad${} differentiation). The average relative error is 3.72\%,
obtained with 51 samples. As in the previous example, our methodology
again selects the fastest configuration as optimal.
\begin{table}[!h]
\centering
\begin{tabular}{c @{\hspace*{1em}} c @{\hspace*{1em}}|@{\hspace*{1em}} c c @{\hspace*{1em}}||
@{\hspace*{1em}} c @{\hspace*{1em}}c @{\hspace*{1em}}c @{\hspace*{1em}}c} \toprule
\multicolumn{2}{c| @{\hspace*{1em}}}{\bf Empirical} &
\multicolumn{2}{c||@{\hspace*{1em}}}{\bf Prediction} &
\multicolumn{4}{c}{\bf Configuration} \\ \midrule
Ranking & Time & Time & Error & $\alpha${} & $r_c${} & $p${} & $(n_x \times n_y \times n_z)${} \\ \midrule
\#1 & 91.663s & 94.163s & (+2.65\%) & 0.52$\sigma^{-1}${} & 5.30$\sigma$ & 6 & $90 \times 90 \times 90$ \\
\#2 & 92.356s & 96.552s & (+4.35\%) & 0.50$\sigma^{-1}${} & 5.50$\sigma$ & 6 & $80 \times 80 \times 80$ \\
\#3 & 92.374s & 95.707s & (+3.48\%) & 0.49$\sigma^{-1}${} & 5.50$\sigma$ & 5 & $90 \times 90 \times 90$ \\
\#4 & 93.417s & 98.256s & (+4.93\%) & 0.48$\sigma^{-1}${} & 5.60$\sigma$ & 6 & $75 \times 75 \times 75$ \\
\#5 & 93.675s & 98.289s & (+4.69\%) & 0.47$\sigma^{-1}${} & 5.70$\sigma$ & 5 & $80 \times 80 \times 80$ \\ \bottomrule
\end{tabular}
\caption{Predictions for the {\it TestLC}{} scenario ($ad${} differentiation) based on dynamic sampling.}
\label{tab:lc-dyn}
\end{table}
\section{Experimental results}
\label{sec:experiments}
Through a number of case studies,
we now discuss in detail the practical
benefits of our methodology to speed up scientific codes.
For each experiment, we report the time required by our tool
to estimate the best selection of parameters,
and the speedup with respect to the parameters chosen by an expert.
Moreover, we quantify the benefits brought to the end user
in terms of the additional research enabled: for a characteristic
setting in which a scientist is granted
10 million core-hours at a supercomputer, and each simulation
is run for 50 million timesteps, we calculate how many
additional simulations become possible.
\subsection{Experimental setup}
The experiments were run on two computing clusters.
The first one, referred to as {\em Harpertown}, consists of 32 nodes;
each node comprises two four-core Intel Harpertown E5450 processors,
operating at a frequency of 3.00GHz, and is equipped with 16GB of RAM.
The second cluster is the {\em SuperMUC} supercomputer at the Leibniz
Supercomputing Center; each of its nodes comprises 16 cores based on
Sandy Bridge-EP Xeon E5-2680 processors, operating at
a frequency of 2.7GHz, and is equipped with 32GB of RAM.
In all cases, the simulations were run using LAMMPS (version 22Jan14), FFTW,
and OpenMPI.
In all cases, Lennard-Jones particles with energy $\epsilon$ and diameter
$\sigma$ were randomly placed in the domain (or box, for interfacial systems).
Then, the systems were equilibrated for 100{,}000 timesteps after minimization
using a soft potential. The simulations were run at a temperature of
0.7$\epsilon$/$k_B$ using a Nos\'e-Hoover thermostat~\cite{NoseHoover} with
damping factor of 10$\tau$.\footnote{Here $\epsilon$ is the depth of the
Lennard-Jones potential and $k_B$ the Boltzmann constant.}
For the type of systems used in our experiments, the developer of
the LAMMPS PPPM solver for dispersion recommends to set the target
accuracy to 0.001$\epsilon/\sigma$ for the real space and 0.1$\epsilon/\sigma$
for the reciprocal space. For a fair comparison against the
parameters automatically selected by our tool, he suggests setting
a cutoff of $r_c = 3.0\sigma$ and letting the solver set the other
parameters (Ewald parameter, interpolation order, and grid size).
\subsection{Case Study 1: Bulk system}
\label{subsec:exp-bulk}
As a first case study,
we consider a bulk system consisting of
256{,}000 Lennard-Jones (LJ) particles randomly placed in a domain of length
$55\sigma \times 55\sigma \times 110\sigma$.
%
The computations were carried out on 12 Harpertown
nodes (i.e., 96 processors).
To determine the benefit of our methodology for automatic parameter selection,
we compare it with a human expert's best guess, and with the empirical fastest.
The execution times for a sample of 1{,}000 timesteps with the configurations
suggested by the developer are collected in
rows 1 and 3 of Tab.~\ref{tab:bulk-results}. The empirical fastest configurations
and the corresponding timings are displayed in rows 2 and 4.
Next, we allowed our prediction tool to run the necessary samples
to estimate the execution time of the parameterizations at the
frontier configurations, and select the predicted fastest.
The sampling and prediction took about four hours.
Table~\ref{tab:bulk-results} collects the results for our tool.
%
In both cases, the predictions match the best empirically-determined
configurations.
The final choice of our tool is to use $ad${} differentiation with the following
parameters: $\alpha${}$\; =0.84\sigma^{-1}$, $r_c${}$\; =3.30\sigma$, $p${}$\; =4$,
$(n_x \times n_y \times n_z)${}$\; =45\times45\times90$. The choice not only coincides with the empirically
fastest, but also reduces execution time by 35\% with respect to the developer's best
guess.
\begin{table}[!h]
\centering
\begin{tabular}{l @{\hspace*{1em}}||@{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c} \toprule
{\bf Approach} & {\bf Diff.} & {\bf Time} & {\bf $\alpha${}} & {\bf $r_c${}} & {\bf $p${}} & {\bf $(n_x \times n_y \times n_z)${}} \\\midrule
Expert guess & $i${\bf k}{} & 69.36s & 0.965$\sigma^{-1}${} & 3.00$\sigma$ & 5 & $64 \times 64 \times 125$ \\
Empirical & $i${\bf k}{} & 44.39s & 0.810$\sigma^{-1}${} & 3.40$\sigma$ & 4 & $40 \times 40 \times 81\phantom{a}$ \\\midrule
Expert guess & $ad${} & 61.87s & 0.965$\sigma^{-1}${} & 3.00$\sigma$ & 5 & $60 \times 60 \times 120$ \\
Empirical & $ad${} & 40.16s & 0.840$\sigma^{-1}${} & 3.30$\sigma$ & 4 & $45 \times 45 \times 90\phantom{a}$ \\\midrule
Prediction & $i${\bf k}{} & 44.39s & 0.810$\sigma^{-1}${} & 3.40$\sigma$ & 4 & $40 \times 40 \times 81\phantom{a}$ \\
Prediction & $ad${} & 40.16s & 0.840$\sigma^{-1}${} & 3.30$\sigma$ & 4 & $45 \times 45 \times 90\phantom{a}$ \\\bottomrule
\end{tabular}
\caption{Results for the {\em Bulk system}. Expert choices and best empirical configurations
for each differentiation mode. The predicted best configurations
coincide with those that empirically proved to be fastest.
}
\label{tab:bulk-results}
\end{table}
%
%
Given the characteristic scenario in Sec.~\ref{sec:experiments}, with the
expert's choice of parameters one may run up to 121 simulations, while the
automatically chosen parameters enable 187 of them; that is, the scientist
can carry out 50\% more research.
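The core-hour arithmetic behind these simulation counts can be sketched as follows (assuming, as the reported counts suggest, rounding to the nearest whole simulation):

```python
# Number of 50-million-timestep simulations affordable within a
# core-hour budget, from the measured time per 1,000 timesteps.
def n_simulations(sec_per_1000_steps, n_cores,
                  budget_core_hours=10e6, timesteps=50e6):
    hours_per_sim = sec_per_1000_steps * (timesteps / 1000) / 3600
    return round(budget_core_hours / (hours_per_sim * n_cores))

# Bulk system on 96 cores (Tab. above): 61.87 s (expert) vs.
# 40.16 s (tool) per 1,000 timesteps.
expert = n_simulations(61.87, 96)   # -> 121
tool   = n_simulations(40.16, 96)   # -> 187
```

The same formula reproduces the counts reported for the other case studies.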
\subsection{Case study 2: Interfacial system}
As a second case study, we consider an interfacial system consisting of
128{,}000 LJ particles randomly placed in a box of length
$55\sigma \times 55\sigma \times 55\sigma$,
centered in a domain of length
$55\sigma \times 55\sigma \times 165\sigma$.
%
The computations were carried out on 6 of the Harpertown nodes,
that is, 48 processors in total.
As in the previous example, we first ran the experiments based
on the developer-recommended configurations, shown
in rows 1 and 3 of Tab.~\ref{tab:int-results}.
Then, we timed the configurations in the frontier (rows 2 and 4).
Finally, we ran our tool (for about three hours), which selected the configurations in rows 5 and 6.
In this case, our tool selected the best configuration for the $i${\bf k}{}
differentiation; as for the $ad${} case, while it did not find the absolute best
configuration, its choice is less than 2\% away from the optimal.
Most importantly, when compared to the expert's guess, the automatically
selected parameters yield a reduction in the execution time of 13\% and 27\%
for the $ad${} and $i${\bf k}{} differentiations, respectively.
\begin{table}[!h]
\centering
\begin{tabular}{l @{\hspace*{1em}}||@{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c} \toprule
{\bf Approach} & {\bf Diff.} & {\bf Time} & {\bf $\alpha${}} & {\bf $r_c${}} & {\bf $p${}} & {\bf $(n_x \times n_y \times n_z)${}} \\\midrule
Expert guess & $i${\bf k}{} & 60.78s & 0.92$\sigma^{-1}${} & 3.00$\sigma$ & 5 & $48 \times 48 \times 144$ \\
Empirical & $i${\bf k}{} & 44.52s & 0.77$\sigma^{-1}${} & 3.50$\sigma$ & 2 & $32 \times 32 \times 96\phantom{0}$ \\\midrule
Expert guess & $ad${} & 46.62s & 0.92$\sigma^{-1}${} & 3.00$\sigma$ & 5 & $45 \times 45 \times 144$ \\
Empirical & $ad${} & 39.79s & 0.81$\sigma^{-1}${} & 3.40$\sigma$ & 3 & $32 \times 36 \times 100$ \\\midrule
Prediction & $i${\bf k}{} & 44.52s & 0.77$\sigma^{-1}${} & 3.50$\sigma$ & 2 & $32 \times 32 \times 96\phantom{0}$ \\
Prediction & $ad${} & 40.55s & 0.74$\sigma^{-1}${} & 3.60$\sigma$ & 2 & $30 \times 30 \times 90\phantom{0}$ \\\bottomrule
\end{tabular}
\caption{Results for the {\em Interfacial system}. Expert choices and best empirical configurations
for each differentiation mode. The predicted best configuration is
extremely close to the one that empirically proved to be the fastest one,
and is considerably faster than the expert's choice.
}
\label{tab:int-results}
\end{table}
In reference to the characteristic setting outlined in Sec.~\ref{sec:experiments}, with the
developer's choice of parameters one may run up to 320 simulations, while the
automatically chosen parameters enable 370 of them; that is, the scientist
can carry out 16\% more research.
%
%
\subsection{Case study 3: Large Interfacial system}
We now turn our attention to larger simulations requiring at least
hundreds of processors. Our third case study consists of an interfacial
system including 512{,}000 particles placed in a box of length
$64\sigma \times 64\sigma \times 128\sigma$, centered in a domain of length
$64\sigma \times 64\sigma \times 256\sigma$.
The computations were carried out on 32 Harpertown
nodes (i.e., 256 processors).
Table~\ref{tab:intl-results} collects the timings for the configurations
selected by the expert user, as well as the configuration automatically
selected by our tool. Since we already demonstrated the accuracy of
our predictions, and in order to limit the usage of resources in this experiment,
we did not run the simulation for each of the configurations in the frontier.
The automatically selected parameters attain a remarkable speedup of 2.33x
with respect to the developer's best guess.
\begin{table}[!h]
\centering
\begin{tabular}{l @{\hspace*{1em}}||@{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c @{\hspace*{1em}} c} \toprule
{\bf Approach} & {\bf Diff.} & {\bf Time} & {\bf $\alpha${}} & {\bf $r_c${}} & {\bf $p${}} & {\bf $(n_x \times n_y \times n_z)${}} \\\midrule
Expert guess & $i${\bf k}{} & 165.5s & 0.947$\sigma^{-1}${} & 3.00$\sigma$ & 5 & $64 \times 64 \times 256$ \\
Expert guess & $ad${} & 146.1s & 0.947$\sigma^{-1}${} & 3.00$\sigma$ & 5 & $64 \times 64 \times 256$ \\
Prediction & $ad${} & \phantom{0}62.8s & 0.850$\sigma^{-1}${} & 3.30$\sigma$ & 3 & $54 \times 54 \times 216$ \\\bottomrule
\end{tabular}
\caption{Results for the {\em Large Interfacial system}.
Expert choices for each differentiation mode and the predicted best configuration.
The automatically selected parameters attain a speedup of 2.33x with respect to
the best expert guess.
}
\label{tab:intl-results}
\end{table}
In terms of the aforementioned characteristic scenario, instead of only 19 simulations,
the user can now run 45 of them.
\subsection{SuperMUC: Different workloads on a supercomputer}
In this final case study we take a slightly different direction
to demonstrate the applicability of our prototype when the target architecture
is a supercomputer.
To this end, we select an interfacial system with 2 million particles
placed in a box of size
$128\sigma \times 128\sigma \times 128\sigma$,
centered within a domain of size
$128\sigma \times 128\sigma \times 256\sigma$.
As in the previous examples, the desired accuracy is set to
0.001$\epsilon/\sigma$ for the real space, and 0.1$\epsilon/\sigma$ for the reciprocal space.
Now, to confer more breadth to the study, we consider simulation
runs on different numbers of cores (512, 1024, and 2048), and
thus with different workloads per core.
Due to the limited resources at our disposal to carry out
this experiment, we only consider the $ad${} differentiation.
In Tab.~\ref{tab:supermuc} we collect the results
for 1{,}000 timesteps of the simulation using the developer's
suggestion (column 2) and the configuration selected by our
tool (column 3).
In all three cases, the expert configuration is:
$\alpha${}$\; = 0.947\sigma^{-1}$, $r_c${}$\; = 3.0\sigma$, $p${}$\; = 5$, and
grid size $125 \times 125 \times 243$, while our tool selects:
$\alpha${}$\; = 0.85\sigma^{-1}$, $r_c${}$\; = 3.3\sigma$, $p${}$\; = 3$, and
grid size $108 \times 108 \times 216$.
Irrespective of the workload, as long as it reaches
a reasonable minimum of 1{,}000 particles per processor, the automatically
selected parameters achieve speedups between 11\% and 16\%.
\begin{table}[!h]
\centering
\begin{tabular}{c @{\hspace*{1em}} ||
@{\hspace*{1em}} r
@{\hspace*{1em}} r
@{\hspace*{1em}} c} \toprule
{\bf \# procs} &
\multicolumn{1}{@{\hspace*{1em}}c@{\hspace*{1em}}}{\bf Expert} &
\multicolumn{1}{c@{\hspace*{1em}}}{\bf Prediction} &
{\bf Speedup} \\\midrule
2048 & 8.19 secs. & 7.34 secs. & 11.6\% \\
1024 & 13.16 secs. & 11.36 secs. & 15.8\% \\
\phantom{0}512 & 23.96 secs. & 20.55 secs. & 16.6\% \\\bottomrule
\end{tabular}
\caption{Results for the experiments in SuperMUC. Independently of the
workload per processor, the automatically selected parameters attain
speedups between 11\% and 16\% with respect to the expert suggestion.
}
\label{tab:supermuc}
\end{table}
Given the granted 10 million core-hours, a scientist can
now run 48, 62, and 68 simulations of 50 million timesteps each,
instead of 43, 53, and 59, using 2048, 1024, and 512
processors, respectively.
\section{Conclusions}
We presented a methodology for the automatic selection of parameters for
simulation software governed by parameters that affect performance and/or
accuracy. When using such software, users face a challenging trade-off problem
between accuracy and performance, and even expert practitioners and the
actual developers spend considerable effort and time to find relatively good
choices for the parameters.
We developed a tool implementing the methodology for the {\sc pppm}{} solver
for dispersion interactions from the LAMMPS suite,
which not only spares the user from spending valuable time on trial and
error evaluation, but also finds close-to-optimal configurations that
attain the requested accuracy with close-to-minimum execution time.
The methodology proceeds in three steps. In the first step,
the parameters of interest are identified, the continuous ones
are discretized, and acceptable ranges of values are given
for each of them. The outcome of this step is a search
space $\mathcal{S}$. In the second step, the methodology splits $\mathcal{S}$
into accurate and inaccurate subspaces ($\mathcal{S_A}$
and $\mathcal{S_I}$), according to the accuracy requested by the user;
only $\mathcal{S_A}$ is further considered.
In the last step, a reduced number of samples (timings) are taken and fitted to
simple functions based on the asymptotic computational
cost of the method under study. These functions are then used to model the
performance of each configuration in the frontier $\mathcal{F}$
(the accurate configurations in the boundary between $\mathcal{S_A}$ and
$\mathcal{S_I}$) and to select the estimated fastest one.
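The three steps can be condensed into a schematic driver. This is illustrative only: \texttt{is\_accurate}, \texttt{neighbors}, and \texttt{predict\_time} are hypothetical helpers standing in for the accuracy estimator of the second step and the fitted cost model of the third.

```python
# Schematic driver for the three-step methodology.
def select_configuration(search_space, is_accurate, neighbors, predict_time):
    # Step 2: split S into the accurate (S_A) and inaccurate (S_I) subspaces.
    S_A = {c for c in search_space if is_accurate(c)}
    S_I = set(search_space) - S_A
    # Frontier F: accurate configurations bordering the inaccurate subspace.
    F = {c for c in S_A if any(n in S_I for n in neighbors(c))}
    # Step 3: rank the frontier with the performance model and return
    # the predicted fastest configuration.
    return min(F, key=predict_time)
```

On a one-dimensional toy space where accuracy improves monotonically with the parameter and cost grows with it, the selected configuration is the cheapest accurate one at the boundary, which mirrors the intuition that the fastest acceptable configuration lies on the frontier.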
We showed that in order to accurately predict performance and to
find close-to-optimal configurations,
it is crucial to deeply understand the accuracy and performance
behavior of the method. Further, the structure of the problem
may be exploited to reduce the complexity of the search and to speed up
the process, for instance, by splitting the search in a
divide-and-conquer fashion.
The application of our prototype, which completes the search in at most a few
hours, is much faster than manual trial and error of many different
configurations, and finds close-to-optimal configurations that achieve speedups
with respect to the best expert guesses ranging from 1.10x to 2.33x. The
corresponding reduction of time-to-solution allows practitioners to perform
much more research, that is, to run many more simulations within a given
core-hour budget, and thus to gain deeper insight in their investigations.
\begin{acknowledgements}
The authors gratefully acknowledge financial support from the Deutsche
Forschungsgemeinschaft (German Research Association) through grant GSC 111,
and from the Hans Hermann Voss-Stiftung through grant OPSF224--ADAMS.
We thank RWTH Aachen IT Center and the Gauss Centre for Supercomputing/Leibniz
Supercomputing Centre (project ID: pr94te) for providing the necessary computing resources,
as well as Daniel Tameling and Rolf Isele-Holder for their support
on the usage of LAMMPS and the PPPM method.
\end{acknowledgements}
\section{Introduction}
\label{sec:introduction}
Theoretical investigations of the critical behavior of the random field Ising model (RFIM) have a decades-long history.\cite{imry-ma75,nattermann98} One of their central themes concerns the validity of the ``dimensional reduction'' property, according to which the long-distance behavior of the RFIM in $d$ dimensions is identical to that of the system without disorder in $d-2$. First pointed out by Grinstein,\cite{grinstein76} the property was shown to emerge from perturbation expansion to all orders by Aharony \textit{et al.}\cite{aharony76} and Young.\cite{young77} Finally, Parisi and Sourlas, in a beautiful $2$-page letter,\cite{parisi79} related the critical behavior of the RFIM to a supersymmetric scalar field theory and showed that the supersymmetry leads to dimensional reduction. Phenomenological,\cite{imry-ma75,bray85} numerical,\cite{nattermann98} and rigorous studies\cite{imbrie84,bricmont87} have however established that the dimensional-reduction prediction is wrong, at least in low dimensions ($d\leq 3$). It has also been understood that the superfield construction of Ref.~[\onlinecite{parisi79}] loses its validity when multiple metastable states are present, which is the case in the region of interest.\cite{parisi84b} In addition, it has been shown that dimensional reduction follows from supersymmetry not only in perturbative expansions as in Ref.~[\onlinecite{parisi79}] but in a nonperturbative manner.\cite{cardy83,klein83,klein84}
Despite the elegance of the formalism, no steps beyond the original superfield formulation have so far proven useful,\cite{footnote0} and the goal of the present work is to provide a solution by combining superfield formalism and functional renormalization group (FRG). This is part of our ongoing program\cite{tarjus04,tissier06,tarjus08,tissier08} to build a consistent and comprehensive theory of the long-distance physics of random field models, and more generally of disordered systems, based on the nonperturbative functional renormalization group (NP-FRG). In papers I\cite{tarjus08} and II\cite{tissier08} of this series, we showed that the breakdown of dimensional reduction is related to the appearance of a nonanalytic dependence of the effective action (or Gibbs free-energy functional in the terminology of magnetic systems) in the dimensionless fields. However, our approach, which was based on a ``replica method'' for handling the average over the random field, could not address the pending question of supersymmetry and its breaking. (In consequence, there was no means to avoid explicit breaking of the supersymmetry in the regulators and in the approximation scheme.) As the effective Hamiltonian (or bare action) of the model in the presence of a random field has always multiple minima in the region of interest (near the critical point), it would seem that a superfield approach and the dimensional-reduction predictions can never be valid. Our objective is nonetheless to give a meaning to the superfield formalism even in cases where multiple minima are present and/or supersymmetry is broken, and to investigate the validity of the dimensional-reduction results as one varies spatial dimension. As will be shown, a resolution of the problem requires an FRG formulation.
Our starting point is that the fundamental flaw of the Parisi-Sourlas supersymmetric construction\cite{parisi79} has in fact a two-fold origin:
(1) the presence of metastable states, which can be described at zero temperature as the multiplicity of solutions of the stochastic field equation associated with the extremization of the bare action of the RFIM, is not counterbalanced by a means to select the ground state and
(2) the use of a single copy of the original disordered system does not give access to the full functional dependence of the cumulants of the renormalized disorder and is thereby unable to account for the rare events, such as avalanches and droplets, that characterize random-field systems.
The second aspect has already been addressed in our previous investigation through the NP-FRG and it requires introducing copies or replicas of the original system with the same disorder but different applied sources (resulting in an \textit{explicit} breaking of the permutational symmetry between replicas). We will show in the companion paper that the spontaneous breaking of supersymmetry (more precisely of ``superrotational invariance'') that comes with the breakdown of dimensional reduction is precisely linked to the emergence of a strong enough nonanalytic dependence of the cumulants of the renormalized disorder. Curing the first problem on the other hand implies a way to properly select the ground state among all solutions of the stochastic field equation. We propose a resolution of this problem through the introduction of a weighting factor and the construction of a superfield theory in a curved superspace. In the present paper, we relate ground-state dominance to a formal property that we call ``Grassmannian ultralocality''. This finding may have value for other problems in which a generating functional is built from the solutions of a stochastic field equation, as in other disordered or glassy systems,\cite{glasses} in turbulence\cite{turbulence} or in nonabelian gauge field theories.\cite{gribov78,esposito04,zinnjustin89}
The outline of the article is as follows. In Sec.~\ref{sec:model} we introduce the superfield formalism for the $T=0$ equilibrium long-distance properties of the RFIM, as proposed by Parisi and Sourlas,\cite{parisi79} and we discuss the symmetries, supersymmetries and the associated Ward-Takahashi identities.
In Sec.~\ref{sec:ultralocality} we discuss the property of ``Grassmannian ultralocality'' and its connection to the fact that a unique solution of the stochastic field equation is taken into account in the computation of the generating functionals; we next formulate a procedure to properly select the ground state, which is relevant for the $T=0$ equilibrium long-distance properties of the RFIM, through the insertion of a weighting factor and the consideration of a curved superspace.
We show in Sec.~\ref{sec:cumulants} that a complete description of the renormalized random functional that describes the physics of the RFIM, including rare events such as avalanches and droplets, requires introducing copies of the original system with independently controlled sources so that the hierarchy of cumulants with their full functional dependence can be generated. We then consider the properties of the superfield theory with multiple copies, its (super)symmetries and the associated Ward-Takahashi identities.
In the following section, Sec.~\ref{sec:ultralocal}, we explore the formal consequences of the property of ``Grassmannian ultralocality''.
We next describe in Sec.~\ref{sec:NPFRG} the generalization to superfields in a curved superspace of our previously developed NP-FRG approach in the effective action formalism. We carefully discuss the issue of the infrared regulator and we derive the exact FRG equations for the cumulants of the renormalized disorder; we also consider the implication for these equations of the hypothesis of ``Grassmannian ultralocality''.
In Sec.~\ref{sec:ground-state_dominance}, we detail how our formalism allows one to describe ground-state dominance at long distance in the NP-FRG. We show that the limit of infinite curvature, which corresponds to a vanishing auxiliary temperature, gives back the FRG equations for the cumulants of the renormalized disorder under the property of ``Grassmannian ultralocality''.
We finally conclude with a summary and a discussion. Additional information is provided in several appendices.
In the companion paper, we apply the NP-FRG in the superfield formalism to describe supersymmetry and its spontaneous breaking in the critical behavior of the RFIM. We introduce a nonperturbative approximation scheme to the exact FRG equations for the cumulants and solve the flow equations numerically to determine the critical exponents and the fixed-point properties as a function of space dimension $d$. This provides a resolution of the long-standing puzzles associated with the long-distance physics of the RFIM.
A short account of this work has appeared in Ref.~[\onlinecite{tissier11}].
\section{The Parisi-Sourlas superfield formalism for the RFIM}
\label{sec:model}
\subsection{Model and generating functionals}
The starting point is the field-theoretical description of the RFIM in terms of a scalar field
$\varphi(x)$ in a $d$-dimensional space and a bare action $S[\varphi;h]$ given by
\begin{equation}
\label{eq_ham_dis}
S= \int_{x} \bigg\{ \frac
{1}{2} \left(\partial_{\mu} \varphi(x) \right) ^2 + U_B(\varphi(x)) -
h(x) \varphi(x) \bigg\} ,
\end{equation}
where $ \int_{x} \equiv \int d^d x$, $U_B(\varphi)= (\tau/2) \varphi^2 + (u/4!) \varphi^4$, and $h(x)$ is a
random source (a random magnetic field in the language of magnetic systems) that is taken with a Gaussian distribution characterized by a zero mean and a variance $\overline{h(x)h(y)}=\Delta_B \ \delta^{(d)}(x-y)$. A (ultra-violet) momentum cutoff $\Lambda$,
associated with an inverse microscopic lengthscale such as a lattice
spacing, is also implicitly considered.
The Parisi-Sourlas construction goes as follows.\cite{parisi79} Taking advantage of the fact that at long-distance the thermal fluctuations are negligible compared to those induced by disorder (formally, the critical behavior is controlled by a zero-temperature fixed point\cite{villain84,fisher86,nattermann98}), one can focus on the ground-state configuration which is solution of the stochastic field equation
\begin{equation}
\label{eq_stochastic}
\dfrac{\delta S[\varphi;h]}{\delta \varphi(x)} = J(x),
\end{equation}
where we have added an external source (a magnetic field) $J$ conjugate to the $\varphi$ field.
Provided the solution of Eq.~(\ref{eq_stochastic}) is unique, the equilibrium (Green's) correlation functions of the $\varphi$ field are obtained from appropriate derivatives of the generating functional
\begin{equation}
\label{eq_generating_func1}
\begin{aligned}
\mathcal Z_h[\hat{J},J]=\int& \mathcal D\varphi \; \delta\left[\dfrac{\delta S_B[\varphi]}{\delta \varphi}-h-J\right] \; \left|\det \dfrac{\delta^2 S_B[\varphi]}{\delta \varphi \delta \varphi}\right| \\& \times \exp \int_{x} \hat{J}(x) \varphi(x) ,
\end{aligned}
\end{equation}
where $S_B[\varphi] = \int_{x}\{(1/2) (\partial_{\mu} \varphi(x))^2 + U_B(\varphi(x))\}$; the delta functional $\delta[\;]$ enforces that $\varphi$ is solution of Eq.~(\ref{eq_stochastic}) and the absolute value of the functional determinant of $\delta^2 S_B[\varphi]/(\delta \varphi(x) \delta \varphi(y))$ is the associated jacobian. Due to the postulated uniqueness of the solution, the absolute value can be dropped and the functional can be built through standard field-theoretical techniques.\cite{zinnjustin89} One first introduces auxiliary fields: a bosonic ``response'' field $\hat{\varphi}(x)$ associated with the integral representation of the delta functional and two fermionic ``ghost'' fields $\psi(x)$ and $\bar{\psi}(x)$ [satisfying $\psi^2=\bar{\psi}^2=\psi \bar\psi +\bar\psi \psi=0$] to exponentiate the determinant. We also introduce two fermionic sources $\bar{K}(x),K(x)$ linearly coupled to the ghost fields. This leads to
\begin{equation}
\label{eq_generating_func2}
\begin{aligned}
&\mathcal Z_h[\hat{J},J, \bar{K},K] = \exp \left( \mathcal{W}_{h}[\hat{J},J, \bar{K},K]\right) \\&= \mathcal N \int \mathcal D\varphi \mathcal D\hat{\varphi} \mathcal D\psi \mathcal D\bar{\psi} \exp \big\{ \int_{x} - \hat{\varphi}(x) \dfrac{\delta S_B[\varphi]}{\delta \varphi(x)}\\& + \hat{\varphi}(x) \left[ h(x) + J(x)\right] + \hat{J}(x) \varphi(x) + \psi(x) \bar{K}(x) \\&+ K(x) \bar{\psi}(x) + \int_{x}\int_{y}\bar{\psi}(x) \dfrac{\delta^2 S_B[\varphi]}{\delta \varphi(x) \delta \varphi(y)} \psi(y)\big\},
\end{aligned}
\end{equation}
where $\mathcal N$ is an irrelevant constant factor that will be dropped in the following and $\mathcal{W}_h$ is a random functional that depends on the bare quenched disorder (\textit{i.e.} the bare random field $h$). One then performs the average of the partition function (and not of its logarithm as in the standard approach to disordered systems) over the Gaussian disorder, which provides
\begin{equation}
\begin{aligned}
\label{eq_generating_func3}
\mathcal Z&[\hat{J},J, \bar{K},K]= \overline{\mathcal Z_h[\hat{J},J, \bar{K},K]}\\&= \int \mathcal D\varphi \mathcal D\hat{\varphi} \mathcal D\psi \mathcal D\bar{\psi} \exp \big\{-S_{ss}[\varphi,\hat{\varphi},\psi,\bar{\psi}] + \\& \int_{x} \left( \hat{J}(x) \varphi(x) + \psi(x) \bar{K}(x)+K(x) \bar{\psi}(x) + J(x) \hat{\varphi}(x)\right) \big\},
\end{aligned}
\end{equation}
where
\begin{equation}
\label{eq_susyaction1}
\begin{aligned}
S_{ss}= & \int_{x}\hat{\varphi}(x) \dfrac{\delta S_B[\varphi]}{\delta \varphi(x)} - \int_{x}\int_{y}\bar{\psi}(x) \dfrac{\delta^2 S_B[\varphi]}{\delta \varphi(x) \delta \varphi(y)} \psi(y) \\&- \frac{\Delta_B}{2}\int_{x}\hat{\varphi}(x)^2\,.
\end{aligned}
\end{equation}
We define $W[\hat{J},J, \bar{K},K]= \log\mathcal Z[\hat{J},J, \bar{K},K]$, which is the generating functional of the connected Green's functions. An important feature is that due to the identity $\mathcal Z[\hat{J}=0,J, \bar{K}=0,K=0]=1$, the $\varphi$-field connected correlation functions of the original problem are obtained by functional derivatives of $W$ with respect to $\hat{J}$ that are further evaluated for $K=\bar{K}=\hat{J}=0$. For instance,
\begin{equation}
\label{eq_physical_disconn}
\overline{\left\langle \varphi(x)\right\rangle \left\langle \varphi(y)\right\rangle} - \overline{\left\langle \varphi(x)\right\rangle}\;\overline{\left\langle \varphi(y)\right\rangle} = \frac{\delta^2 W}{\delta \hat{J}(x) \delta \hat{J}(y)}\rvert_{K=\bar{K}=\hat{J}=0},
\end{equation}
where, since we consider the zero-temperature limit, $\left\langle \varphi(x)\right\rangle$ is equal to the solution of the stochastic field equation, Eq.~(\ref{eq_stochastic}).
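Equation~(\ref{eq_physical_disconn}) can be made concrete in a $d=0$ sketch with a quadratic bare action (our illustrative choice, where everything is exactly solvable): with $S_B(\varphi)=m\varphi^2/2$, the unique solution is $\varphi_*(h+J)=(h+J)/m$, so two $\hat J$-derivatives of $W$ at $\hat J=0$ must reproduce the disconnected correlator $\Delta_B/m^2$.

```python
import numpy as np

# d=0 check of eq_physical_disconn for S_B(phi) = m*phi^2/2:
# phi_*(h+J) = (h+J)/m with h Gaussian of variance D, so the disconnected
# correlator is D/m^2.  W(Jhat) = log of the disorder-averaged exp(Jhat*phi_*).
m, D, J = 2.0, 0.5, 0.3
h = np.linspace(-12.0, 12.0, 200001)
dh = h[1] - h[0]
gauss = np.exp(-h**2 / (2 * D)) / np.sqrt(2 * np.pi * D)

def W(Jhat):
    return np.log(np.sum(gauss * np.exp(Jhat * (h + J) / m)) * dh)

eps = 1e-3
d2W = (W(eps) - 2 * W(0.0) + W(-eps)) / eps**2   # second Jhat-derivative at 0
print(d2W, D / m**2)   # the two numbers agree
```

The quadrature over the Gaussian disorder stands in for the overline average; the values of $m$, $\Delta_B$, and $J$ are arbitrary.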
The next step of the construction is to introduce a superspace by adding to the $d$-dimensional Euclidean space with coordinates $x=\left\lbrace x^\mu\right\rbrace $ two anti-commuting Grassmann coordinates $\theta,\bar{\theta}$ (satisfying $\theta^2=\bar{\theta}^2=\theta \bar{\theta}+\bar{\theta}\theta=0$).\cite{zinnjustin89} By letting $\underline{x}=(x,\theta,\bar{\theta})$ denote the coordinates, the associated (super)metric is given by
\begin{equation}
\label{eq_metric}
d\underline{x}^2 = dx^2 + \frac{4}{\Delta_B}d\bar{\theta} d\theta = g_{mn}dx^m dx^n,
\end{equation}
where $\{m\}\equiv \{\mu,\theta,\bar{\theta}\}$, $dx^2=dx^{\mu} dx^{\mu}$, and a summation over repeated indices is implied; the metric tensor $g_{mn}$ satisfies: $g_{\mu \nu}=\delta_{\mu \nu},\; g_{\theta \bar{\theta}}= - g_{\bar{\theta} \theta}=-2/\Delta_B $, with all other components equal to zero. With the notations $\partial_m= \partial/\partial x^m, \partial_{\mu}= \partial/\partial x^{\mu}, \partial_{\theta}=\partial / \partial \theta$, etc..., one can express the corresponding ``super-Laplacian'' as
\begin{equation}
\label{eq_superlaplacian}
\Delta_{ss}= g^{mn}\partial_m \partial_n = \partial_\mu \partial_\mu+\Delta_B \partial_\theta \partial_{\bar{\theta}},
\end{equation}
where $g^{mp}g_{pn}=\delta^m_n$. After introducing the superfield
\begin{equation}
\label{eq_superfield}
\Phi(\underline{x})=\varphi(x) + \bar{\theta} \psi(x)+ \bar{\psi}(x) \theta + \bar{\theta}\theta \hat{\varphi}(x)
\end{equation}
and the supersource
\begin{equation}
\label{eq_supersource}
\mathcal J(\underline x)=J(x) + \bar{\theta} K(x)+ \bar{K}(x) \theta + \bar{\theta}\theta \hat{J}(x),
\end{equation}
the generating functional can be cast in the form\cite{parisi79}
\begin{equation}
\begin{aligned}
\label{eq_part_func_PS}
\mathcal Z[\mathcal J]&=\int\mathcal D\Phi \exp \left(-S_{ss}[\Phi] + \int_{\underline x} \mathcal J(\underline x) \Phi(\underline x)\right) \\&=\exp (W[\mathcal J])
\end{aligned}
\end{equation}
with
\begin{equation}
\label{eq_susyaction2}
S_{ss}[\Phi]=\int_{\underline{x}}[-\frac 12 \Phi(\underline{x})\Delta_{ss}\Phi(\underline{x})+U_{B}(\Phi(\underline{x}))],
\end{equation}
and with $ \int_{\underline{x}} \equiv \int_x \iint d\theta d\bar{\theta}$. By a Legendre transform, one introduces the effective action $\Gamma[\Phi]$ which is the generating functional of the $1$-particle irreducible (1PI) correlation functions or proper vertices:\cite{zinnjustin89}
\begin{equation}
\label{eq_legendre_1copy}
\Gamma[\Phi] = -W[\mathcal J] + \int_{\underline x} \mathcal J(\underline x) \Phi(\underline x),
\end{equation}
where the (classical) superfield $\Phi$ and the supersource $\mathcal J$ are related through
\begin{equation}
\label{eq_legendre_phi}
\Phi(\underline x)= \frac{\delta W[\mathcal J ] }{\delta \mathcal J(\underline x)}
\end{equation}
and
\begin{equation}
\label{eq_legendre_J_1copy}
\mathcal{J}(\underline x)=\frac{\delta \Gamma[\Phi] }{\delta \Phi(\underline x)},
\end{equation}
which can also easily be expressed in terms of the components of the classical superfield, $\phi, \hat{\phi}, \psi, \bar{\psi}$, and of the supersource, $\hat{J}, J, \bar{K}, K$. (Note that for simplicity we keep the same notation for the superfield $\Phi$ and its classical value, and similarly for the ghost fields $\psi, \bar \psi$; we only make a distinction for the bosonic fields, \textit{i.e.} $\phi=\langle\varphi\rangle$ and $\hat \phi=\langle\hat \varphi\rangle$.)
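As a consistency check (a short worked step using the Berezin rules $\int d\bar{\theta}\,\bar{\theta}=\int d\theta\,\theta=1$, all other one-dimensional Grassmann integrals vanishing), only the $\bar{\theta}\theta$ component of an integrand survives the Grassmann integration, and the source and potential terms of Eq.~(\ref{eq_part_func_PS}) and Eq.~(\ref{eq_susyaction2}) reduce to their component forms:

```latex
\iint d\theta d\bar{\theta}\; \mathcal J(\underline{x})\,\Phi(\underline{x})
   = \hat{J}(x)\varphi(x) + J(x)\hat{\varphi}(x)
     + \psi(x)\bar{K}(x) + K(x)\bar{\psi}(x),
\qquad
\iint d\theta d\bar{\theta}\; U_B(\Phi(\underline{x}))
   = \hat{\varphi}(x)\,U_B'(\varphi(x)) - \bar{\psi}(x)\,U_B''(\varphi(x))\,\psi(x),
```

where the identity $(\bar{\theta}\psi+\bar{\psi}\theta)^2=2\bar{\theta}\theta\,\psi\bar{\psi}$ handles the quadratic term in the Taylor expansion of $U_B$. Together with $\iint d\theta d\bar{\theta}\,[-\frac{\Delta_B}{2}\Phi\,\partial_{\theta}\partial_{\bar{\theta}}\Phi]=-\frac{\Delta_B}{2}\hat{\varphi}(x)^2$ from the Grassmann part of the super-Laplacian, this reproduces the component action of Eq.~(\ref{eq_susyaction1}) and the source terms of Eq.~(\ref{eq_generating_func3}).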
\subsection{Symmetries, supersymmetries, and Ward-Takahashi identities}
The action $S_{ss}[\Phi]$ is invariant under a large group of transformations:
(i) sign changes associated with a $Z_2$ discrete symmetry $\Phi \rightarrow - \Phi$ (the critical behavior we aim at describing is associated with a spontaneous breaking of this $Z_2$ symmetry, \textit{i.e.} a paramagnetic-to-ferromagnetic transition in the language of magnetism);
(ii) rotations and translations in the $d$-dimensional Euclidean space with infinitesimal generators $L_{\mu \nu}=x^\mu \partial_{\nu}-x^\nu \partial_{\mu}$ and $\partial_{\mu}$, where $\mu, \nu=1,...,d$, respectively;
(iii) transformations of the symplectic group with generators acting on the Grassmann subspace: $\bar t= \bar{\theta}\partial_{\theta}$, $t=\theta \partial_{\bar{\theta}}$ (associated with ``rotations'' in the $2$-dimensional Grassmann subspace) and $N=\bar{\theta}\partial_{\bar{\theta}}-\theta \partial_{\theta}$ (corresponding to the ``ghost-number conservation''); the generators satisfy the commutation relations $[\bar t,t]=N$, $[N,\bar t]=2 \bar t$, and $[N, t]= -2 t$;
(iv) translations in the $2$-dimensional Grassmann subspace with generators $\partial_{\theta},\partial_{\bar{\theta}}$: these are linear, BRS-type, symmetries;\cite{zinnjustin89}
(v) ``superrotations'' that preserve the supermetric and can be represented by the generators $\bar {\mathcal Q}_\mu=-x^\mu \partial_\theta + \frac{2}{\Delta_B} \bar{\theta} \partial_{\mu}$ and $\mathcal Q_\mu=x^\mu \partial_{\bar{\theta}}+ \frac{2}{\Delta_B} \theta \partial_{\mu}$.
The last two sets of transformations mix bosonic and fermionic fields and for this reason represent ``fermionic symmetries''.\cite{zinnjustin89} They are associated with supersymmetries: the translations in the $2$-dimensional Grassmann subspace (iv) form a supergroup and the Euclidean rotations (ii), the symplectic group (iii) and the superrotations (v) form another supergroup known as the orthosymplectic supergroup $OSp(2,d)$.\cite{osp2d}
The linearly realized continuous symmetries and supersymmetries (ii)-(v) lead to a set of Ward-Takahashi (WT) identities that can be expressed either at the level of the generating functional of the Green's functions or that of the effective action.\cite{zinnjustin89} For instance, the invariance of the action under the superrotations gives rise to the following WT identities:
\begin{equation}
\label{eq_ward_0}
\int_{\underline{x}}\Phi(\underline{x})\mathcal Q_{\underline{x}}\Gamma_{\underline{x}}^{(1)}[\Phi]=0
\end{equation}
and for $p > 1$
\begin{equation}
\begin{aligned}
\label{eq_ward_p}
\bigg(\mathcal Q_{\underline{x}_1}+...&+ \mathcal Q_{\underline{x}_p} \bigg)\Gamma_{\underline{x}_1...\underline{x}_{p}}^{(p)}[\Phi] \\&+ \int_{\underline{x}_{p+1}}\Phi(\underline{x}_{p+1})\mathcal Q_{\underline{x}_{p+1}}\Gamma_{\underline{x}_1...\underline{x}_{p+1}}^{(p+1)}[\Phi]=0,
\end{aligned}
\end{equation}
where we have used the notation $\mathcal Q_{\underline{x}}$ to indicate a generic component $\mathcal Q_{\mu}$ of the generator of the superrotation acting on the superspace coordinate $\underline{x}$; the proper (1PI) vertices are defined as
\begin{equation}
\label{eq_derivative_supervertex}
\Gamma_{\underline{x}_1, ... ,\underline{x}_p}^{(p)}[\Phi]=\frac{\delta^p \Gamma [\Phi]}{\delta \Phi(\underline{x}_1)... \delta \Phi(\underline{x}_p)}.
\end{equation}
Similar identities are obtained with the generator $\bar {\mathcal Q}_{\underline{x}}$ (and additional ones with the generators $\partial_{\mu}$, $L_{\mu \nu}$, $\partial_{\theta},\partial_{\bar{\theta}}$, $\bar t, t$, and $N$).
At this point, it is important to stress that it is the existence of the superrotation invariance (v) which leads to the property of dimensional reduction in the critical behavior of the RFIM, with the two Grassmann dimensions acting as negative dimensions.\cite{parisi79,cardy83} One should however keep in mind that the whole formal construction collapses when the stochastic field equation, Eq.~(\ref{eq_stochastic}), has more than one solution, which is usually the case in the region of interest.\cite{parisi84b} In the following, we show how to generalize the Parisi-Sourlas construction in order to preserve the relevance of the superfield formalism.
\section{``Grassmannian ultralocality'' and ground-state dominance}
\label{sec:ultralocality}
\subsection{On the ``ultralocality'' of the random generating functional in the Grassmann subspace}
Before presenting the method to select the ground-state configuration, which should dominate at $T=0$, we discuss an important property of the random generating functional in the superfield framework. To lighten the notation we consider the case of a $d=0$ Euclidean space, but the same considerations apply equally to any number of Euclidean dimensions. When the stochastic field equation [Eq.~(\ref{eq_stochastic})], which can be rewritten as
</gr-replace>
\begin{equation}
\label{eq_stochastic_app}
\frac{\partial S_B(\varphi)}{\partial \varphi} = h+J \, ,
\end{equation}
has a unique solution, say $\varphi_*(h+J)$, the generating functional of the (Green's) correlation functions of the $\varphi$ field is simply expressed as
\begin{equation}
\label{eq_generating1_app}
\mathcal Z_h(\hat J,J)= e^{\hat J \varphi_*(h+J)} \, , \end{equation}
or, after adding the fermionic sources $K, \bar K$ and using the auxiliary fields [see Eq.~(\ref{eq_generating_func2})], as
\begin{equation}
\label{eq_generating2_app}
\mathcal Z_h(\hat J,J, \bar K, K) \equiv \mathcal Z_h[\mathcal J]= e^{\hat J \varphi_*(h+J) - \bar K K S_B^{(2)} (\varphi_*(h+J))^{-1}}
\end{equation}
where $S_B^{(2)} (\varphi)$ is the second derivative of the bare action and $\mathcal J$ is the supersource introduced in Eq.~(\ref{eq_supersource}). As a consequence, the (random) generating functional $\mathcal W_h=\log \mathcal Z_h$ is equal to
\begin{equation}
\label{eq_generating3_app}
\mathcal W_h[\mathcal J]= \hat J \, \varphi_*(h+J)-\bar K K \, S_B^{(2)} (\varphi_*(h+J))^{-1} \, .
\end{equation}
It is straightforward to show that this precisely corresponds to an ``ultralocal'' form in the Grassmann subspace,\cite{footnote001}
\begin{equation}
\label{eq_ultralocal_1}
\mathcal W_h[\mathcal J]=\int_{\underline{\theta}}W_h(\mathcal J(\underline{\theta}))
\end{equation}
where $\underline \theta$ collects the two Grassmann coordinates $\bar \theta$ and $\theta$ and $\int_{\underline \theta}\equiv\int \int d\theta d\bar \theta$; indeed,
\begin{equation}
\label{eq_ultralocal_app}
\int_{\underline{\theta}}W_h(\mathcal J(\underline{\theta}))= \hat J \, W_h^{(1)}(J)-\bar K K \, W_h^{(2)} (J)
\end{equation}
with $W_h^{(1)}(J)=\varphi_*(h+J)$ and $W_h^{(2)}(J)=\partial \varphi_*(h+J)/ \partial J=S_B^{(2)} (\varphi_*(h+J))^{-1}$ as it should be. The whole formal construction is fully justified in this case.
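The relations above can be illustrated numerically in a minimal $d=0$ sketch (the convex quartic action and parameter values are our illustrative choice, not taken from the text):

```python
# d=0 toy with a convex bare action S_B(phi) = phi^2/2 + g*phi^4/4 (g > 0):
# S_B'(phi) = phi + g*phi^3 is monotonic, so Eq. (eq_stochastic_app) has a
# unique solution phi_*(h+J), and W_h^(2)(J) = d(phi_*)/dJ = 1/S_B''(phi_*).
g = 0.5

def phi_star(hJ):
    p = 0.0
    for _ in range(60):                       # Newton iteration (monotonic cubic)
        p -= (p + g * p**3 - hJ) / (1.0 + 3.0 * g * p * p)
    return p

def S2(p):                                    # S_B''(phi)
    return 1.0 + 3.0 * g * p * p

h, J, eps = 0.3, 0.2, 1e-6
dphi_dJ = (phi_star(h + J + eps) - phi_star(h + J - eps)) / (2 * eps)
print(dphi_dJ, 1.0 / S2(phi_star(h + J)))     # the two numbers agree
```

The Newton iteration stands in for solving Eq.~(\ref{eq_stochastic_app}); any root finder would do in this convex case.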
What happens when the stochastic field equation has several solutions? Let us sort the solutions and label them with an index $\alpha$; $\alpha=0$ denotes the ground state, which is unique for a continuous distribution of the random field, aside from exceptional values of $h+J$ for which a coexistence of several ground states may take place.
The sought-for generating functional that describes the equilibrium situation at $T=0$ only takes into account the contribution of the ground state. As a result, one has exactly the same relations as above, with $\varphi_*(h+J)$ replaced by the ground-state solution $\varphi_0(h+J)$. Actually, the notation $\varphi_0(h+J)$ is a little misleading as the ground state is generally a piecewise function of $h+J$ with discontinuities occurring at special values of the latter. These discontinuities are known as ``avalanches'' (or ``shocks'') and have been observed for instance in numerical computations of the ground state of the RFIM on $2$- and $3$-dimensional lattices.\cite{frontera00,machta03,machta05,dahmen07} The ground-state solution $\varphi_0(h+J)$ should rather be written as $\sum_i \mathcal C_{0,i}(h+J) \varphi_{0,i}(h+J)$ with $\mathcal C_{0,i}$ the characteristic function of the $i$th interval along the axis $h+J$ in which the ground state $\varphi_{0,i}$ is a continuous function of $h+J$ (see also Appendix~\ref{appendix:toy}). More generally, including a single solution in the generating functional $\mathcal Z_h(\hat J,J, \bar K, K)$, provided this solution is piecewise defined for \textit{all} values of $h+J$, also leads to the ``ultralocal property'' in the Grassmann subspace for $\mathcal W_h[\mathcal J]$, Eq.~(\ref{eq_ultralocal_1}).
On the other hand, one easily realizes that this property cannot be true when several different solutions are included in the generating functional, no matter how one chooses to weight these solutions. The weighting of the solutions that corresponds to the above Parisi-Sourlas supersymmetric construction, Eq.~(\ref{eq_generating_func2}), is given by the sign of the determinant of the Hessian $S_B^{(2)}$.\cite{parisi82} Then, setting to zero the fermionic sources for simplicity, one has
\begin{equation}
\label{eq_generating4_app}
\mathcal Z_h(\hat J,J)= \sum_{\alpha,i} (-1)^{n(\alpha)} \mathcal C_{\alpha,i}(h+J) e^{\hat J \varphi_{\alpha,i}(h+J)},
\end{equation}
where $n(\alpha)$ is the index of the $\alpha$th solution, \textit{i.e.} the associated number of negative eigenvalues of the Hessian $S_B^{(2)}$, and $\mathcal C_{\alpha,i}(h+J)$ is the characteristic function of the $i$th interval over which $\varphi_{\alpha}=\varphi_{\alpha,i}$ is a continuous function of $h+J$ [at the boundaries of $\mathcal C_{\alpha,i}(h+J)$ the solution $\varphi_{\alpha}$ has a discontinuous jump or possibly ceases to exist]. Obviously,
\begin{equation}
\label{eq_ultralocal_breakdown1_app}
\mathcal W_h(\hat J,J)= \log \sum_{\alpha} (-1)^{n(\alpha)} e^{\hat J \varphi_{\alpha}(h+J)},
\end{equation}
where $\varphi_{\alpha}$ denotes $\sum_i \mathcal C_{\alpha,i}(h+J) \varphi_{\alpha,i}(h+J)$, is not ``ultralocal'' in the Grassmann coordinates, \textit{i.e.} cannot be put in the form of Eqs.~(\ref{eq_generating3_app}-\ref{eq_ultralocal_app}) [observe in particular that the form in Eq.~(\ref{eq_generating3_app}) implies a linear dependence on $\hat J$, which is not satisfied by the above expression]. When $\hat J =0$, one finds that $\mathcal Z_h=1$ and $\mathcal W_h=0$, just as in the case where a unique solution is taken into account.\cite{footnote01} However, the result now follows from the property that the sum of all solutions weighted by the sign of the determinant of the Hessian is a topological invariant which is equal to $1$ in the present case.\cite{kurchan02}
Note that contrary to the toy model discussed in Ref.~[\onlinecite{parisi82}], in which the breakdown of the supersymmetric formalism is related to the fact that the stochastic equation has no solutions for a certain range of the random field, the stochastic equation associated with the present scalar field theory in a Gaussian distributed random field always has at least one solution (the ground state always exists).
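The sign-weighted sum over solutions behind $\mathcal Z_h=1$ can be checked in a minimal $d=0$ example (an illustrative double-well action of our choosing):

```python
import numpy as np

# d=0 double-well: S_B(phi) = phi^4/4 - phi^2/2, so S_B'(phi) = phi^3 - phi
# and S_B''(phi) = 3*phi^2 - 1.  For every value of h+J, the sum over all
# real solutions of S_B'(phi) = h+J weighted by sgn S_B'' equals 1
# (the topological invariant behind Z_h = 1 at hat-J = 0).
def weighted_sum(hJ):
    roots = np.roots([1.0, 0.0, -1.0, -hJ])           # phi^3 - phi - (h+J) = 0
    real = roots[np.abs(roots.imag) < 1e-9].real
    return int(np.sum(np.sign(3.0 * real**2 - 1.0)))

for hJ in [-2.0, -0.3, 0.0, 0.1, 0.3, 2.0]:
    print(hJ, weighted_sum(hJ))   # prints 1 for every value of h+J
```

Inside the spinodal region ($|h+J|<2/3^{3/2}$) three stationary points contribute with signs $+,-,+$; outside it a single minimum contributes with sign $+$; the invariant is $1$ in both cases.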
\subsection{Selecting the ground state}
A natural procedure to select the ground state is to add in the generating functional, Eq.~(\ref{eq_generating_func1}), a weighting factor that strongly favors the solution with the smallest action. This can be done through a Boltzmann-like factor, namely,
\begin{equation}
\label{eq_generating_func1beta}
\begin{aligned}
&\mathcal Z_h^{(\beta)}[\hat{J},J]=\int \mathcal D\varphi \; \delta\left[\dfrac{\delta S_B[\varphi]}{\delta \varphi}-h-J\right] \; \det\left[ \dfrac{\delta^2 S_B[\varphi]}{\delta \varphi \delta \varphi}\right]\; \times \\& \exp \big [-\beta \left ( S_B[\varphi]- \int_{x} [J(x) +h(x)] \varphi(x) \right) + \int_{x} \hat{J}(x) \varphi(x)\big ]
\end{aligned}
\end{equation}
where $\beta$ is the inverse of an auxiliary temperature (the actual temperature is equal to zero). Note that $\mathcal Z_h^{(\beta)}$ is \textit{not} the same as the partition function obtained from the equilibrium Boltzmann-Gibbs measure at a temperature $T=1/\beta$. Even if $\beta$ is large enough that only minima contribute to Eq.~(\ref{eq_generating_func1beta}), the latter expression only includes the contribution of the minima whereas the Boltzmann-Gibbs measure also takes into account the contribution of the basins of attraction of the minima (roughly speaking, the thermal fluctuations around the minima).
With the above generating functional, one finds that the average of the field $\varphi$ is given by
\begin{equation}
\label{eq_generating_func2beta}
\begin{aligned}
\langle\varphi(x)\rangle=\frac{\delta \log \mathcal Z_h^{(\beta)}}{\delta \hat J(x)}\bigg \vert_{\hat J=0}=\frac{1}{\mathcal Z_h^{(\beta)}[\hat J=0,J]}\frac{\delta \mathcal Z_h^{(\beta)}}{\delta \hat J(x)}\bigg \vert_{\hat J=0}.
\end{aligned}
\end{equation}
The simplicity of the Parisi-Sourlas formalism is however lost as $\mathcal Z_h^{(\beta)}[\hat J=0,J]\neq 1$. As a consequence, simply considering the average over the disorder of $\mathcal Z_h^{(\beta)}$ is no longer sufficient to generate the $\varphi$-field correlation functions of the original problem.
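The role of the Boltzmann-like factor can be illustrated in the same $d=0$ double-well sketch (illustrative action and parameters of our choosing): at small $\beta$ all stationary points contribute to $\langle\varphi\rangle$, while at large $\beta$ the ground state dominates.

```python
import numpy as np

# d=0 double-well S_B(phi) = phi^4/4 - phi^2/2: in Eq. (eq_generating_func1beta)
# each stationary point (real root of phi^3 - phi = h+J) enters with weight
# sgn[S_B''] * exp(-beta*(S_B(phi) - (h+J)*phi)), so the source-derivative
# <phi> crosses over to the ground-state solution as beta grows.
def S(p, hJ):
    return p**4 / 4 - p**2 / 2 - hJ * p

def stationary_points(hJ):
    roots = np.roots([1.0, 0.0, -1.0, -hJ])
    return roots[np.abs(roots.imag) < 1e-9].real

def avg_phi(hJ, beta):
    p = stationary_points(hJ)
    act = S(p, hJ)
    w = np.sign(3.0 * p**2 - 1.0) * np.exp(-beta * (act - act.min()))
    return float(np.sum(w * p) / np.sum(w))    # overall factor cancels in the ratio

hJ = 0.1                                        # slight tilt: ground state near +1
ground = stationary_points(hJ)[np.argmin(S(stationary_points(hJ), hJ))]
print(avg_phi(hJ, 2.0), avg_phi(hJ, 50.0), ground)
# the beta = 50 average is close to the ground state; the beta = 2 one is not
```

Subtracting the minimal action before exponentiating is only for numerical stability; the common factor drops out of the ratio.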
Interestingly, the superfield formalism can still prove useful. After having introduced one bosonic and two fermionic auxiliary fields as before, added two fermionic sources, grouped all the sources in a supersource as in Eq.~(\ref{eq_supersource}), and similarly grouped all the fields in a superfield as in Eq.~(\ref{eq_superfield}), the generating functional can be rewritten as
\begin{equation}
\begin{aligned}
\label{eq_part_funcbeta}
\mathcal Z_h^{(\beta)}[\mathcal J]&=\int\mathcal D\Phi \exp \bigg (- \int \int d\theta d\bar\theta [1+\beta \bar \theta \theta] S_{B}[\Phi(\underline \theta)] \\&+ \int_{x} \int \int d\theta d\bar\theta [1+\beta \bar \theta \theta] \left [h(x) + \mathcal J(x,\underline \theta) \right ] \Phi(x, \underline \theta)\bigg ) \\&=\exp (\mathcal W_h^{(\beta)}[\mathcal J]).
\end{aligned}
\end{equation}
The construction may appear rather formal and intractable, but it turns out that Eq.~(\ref{eq_part_funcbeta}) can be expressed in a way that will prove efficient for studying the symmetries of the theory and investigating its long-distance properties. It is convenient to introduce a superspace combining the $d$ Euclidean and the $2$ Grassmannian dimensions in which the Grassmann subspace is now curved. To be more specific, we replace the metric tensor of the Parisi-Sourlas formalism (see section II-A) by
\begin{equation}
\begin{aligned}
\label{eq_metric_curved}
&g_{\theta \bar\theta}=- g_{\bar\theta \theta}=- (1-\beta \bar \theta \theta)\,,\\&
g_{\theta \theta}= g_{\bar\theta \bar\theta}=0\,,
\end{aligned}
\end{equation}
keeping $g_{\mu \nu}=\delta_{\mu \nu}$ for the Euclidean sector and all cross-components between Euclidean and Grassmannian coordinates equal to zero. So long as we are not interested in mixing bosonic and fermionic directions (mixing occurs when considering superrotations, see section II-A), there is no need to introduce the factor $2/\Delta_B$ in the supermetric as was done in the Parisi-Sourlas formalism above.
The properties of the curved superspace are discussed in Ref.~[\onlinecite{wschebor09}]. In a nutshell, one follows the usual prescriptions of Riemannian geometry: in order to satisfy the isometries of the curved Grassmann subspace, one should (i) contract Grassmann indices with either the metric tensor or its inverse, (ii) integrate over the Grassmann coordinates with a measure $\sqrt{\mathrm{ sdet}g}\,d\theta d\bar \theta=(1+ \beta\bar \theta \theta)\, d\theta d\bar \theta$, where $\mathrm{sdet}g=(1+ \beta\bar \theta \theta)^2$ is the superdeterminant of the metric tensor in the Grassmann sector, (iii) use, if necessary, covariant derivatives as well as the proper Laplacian operator $\Delta_{\underline \theta}=g^{mn}(\partial_m\partial_n -\Gamma^p_{mn}\partial_p)$, where $m,n,p=\theta, \bar \theta$, summation over repeated indices is implied, and the $\Gamma^p_{mn}$'s are the Christoffel symbols with nonzero components $\Gamma^{\theta}_{\theta \bar \theta}=-\Gamma^{\theta}_{\bar\theta \theta}=\beta \theta$ and
$\Gamma^{\bar \theta}_{\theta \bar \theta}=-\Gamma^{\bar \theta}_{\bar\theta \theta}=\beta \bar \theta$; the Laplacian for the Grassmann subspace is then explicitly given by
\begin{equation}
\begin{aligned}
\label{eq_laplacian_curved}
\Delta_{\underline \theta}=2(1+ \beta\bar \theta \theta)(\partial_{\theta}\partial_{\bar \theta}-\beta \bar \theta \partial_{\bar \theta}-\beta \theta \partial_{\theta}).
\end{aligned}
\end{equation}
It is easy to show that the parameter $\beta$ (inverse of an auxiliary temperature) is, up to a factor of $1/6$, equal to the Ricci scalar curvature of the Grassmann subspace.\cite{wschebor09}
We now come back to the discussion of the previous subsection concerning ``ultralocality'' in the Grassmann subspace. For ease of notation, we consider again a $d=0$ Euclidean subspace and we momentarily drop the two fermionic sources. The random generating functional $\mathcal W_h^{(\beta)}[\mathcal J]$ corresponding to the superfield construction in Eqs.~(\ref{eq_generating_func2beta}) and (\ref{eq_part_funcbeta}) is expressed as
\begin{equation}
\begin{aligned}
\label{eq_ultralocal_breakdown_beta}
&\mathcal W_h^{(\beta)}(\hat J,J) =\\& - \beta \big [ S_{B}[\varphi_0(h+J)]-(h+J)\varphi_0(h+J) \big ] +\hat J\, \varphi_{0}(h+J) +\\& \log \big ( 1+\sum_{\alpha\neq 0} (-1)^{n(\alpha)} e^{- \beta \Delta S_{\alpha,0}(h+J) +\hat J \left [\varphi_{\alpha}(h+J)-\varphi_{0}(h+J)\right ]}\big ),
\end{aligned}
\end{equation}
where $\Delta S_{\alpha,0}= (S_{B}[\varphi_{\alpha}] - S_{B}[\varphi_0])-(h+J)(\varphi_{\alpha}-\varphi_0)$.
Assume first that the parameter $\beta$ can be taken sufficiently large to ensure $\Delta S_{\alpha,0}\gg 1/\beta$ for all solutions $\alpha\neq 0$, so that all contributions but that of the ground state can be neglected in Eq.~(\ref{eq_ultralocal_breakdown_beta}). It is then easy to show that the random functional can be put in an ``ultralocal'' form appropriate for the curved Grassmann subspace,
\begin{equation}
\begin{aligned}
\label{eq_ultralocal_beta1}
&\mathcal W_h^{(\beta)}[\mathcal J] =\int_{\underline \theta}W_h^{(\beta)}[\mathcal J(\underline \theta)],
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\label{eq_ultralocal_beta2}
W_h^{(\beta)}[J]= -\left( S_{B}[\varphi_0(h+J)]-(h+J)\varphi_0(h+J)\right)
\end{aligned}
\end{equation}
is actually independent of $\beta$ and where the integral over $\underline \theta$ now involves the metric factor $(1+\beta \bar \theta \theta)$, \textit{i.e.} $\int_{\underline \theta}\equiv \int \int (1+\beta \bar\theta \theta) d\theta d\bar\theta$.
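The consistency of Eqs.~(\ref{eq_ultralocal_beta1}) and (\ref{eq_ultralocal_beta2}) rests on the envelope property of the ground state: since $S_B'[\varphi_0]=h+J$, the combination $S_B[\varphi_0(h+J)]-(h+J)\varphi_0(h+J)$ has $J$-derivative equal to $-\varphi_0(h+J)$. A minimal numerical check in a $d=0$ double-well toy model (the specific quartic action is our illustrative choice, not part of the text):

```python
import numpy as np

# Envelope check: with S_B(phi) = phi^4/4 - phi^2/2 tilted by h+J,
# F(h+J) = S_B(phi_0(h+J)) - (h+J)*phi_0(h+J) and S_B'(phi_0) = h+J,
# so dF/dJ = -phi_0(h+J) exactly (within an interval where phi_0 is continuous).
def phi0(hJ):                                 # ground state of the tilted well
    roots = np.roots([1.0, 0.0, -1.0, -hJ])
    p = roots[np.abs(roots.imag) < 1e-9].real
    return float(p[np.argmin(p**4 / 4 - p**2 / 2 - hJ * p)])

def F(hJ):
    p = phi0(hJ)
    return p**4 / 4 - p**2 / 2 - hJ * p

h, J, eps = 0.2, 0.05, 1e-6
dF = (F(h + J + eps) - F(h + J - eps)) / (2 * eps)
print(dF, -phi0(h + J))   # the two numbers agree
```

This is why the $\bar\theta\theta$ component of the ultralocal decomposition automatically carries the correct source-derivative $\varphi_0$.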
In fact, no matter how large (but finite) $\beta$ is, the above assumption may not be valid for all realizations of the random field. Excited states with $\Delta S_{\alpha,0}\lesssim 1/\beta$ may be present. The vast majority of them involve only local changes of configuration with respect to the ground state.\cite{frontera01,zumsande08} As a result, the term $\hat J (\varphi_{\alpha}-\varphi_{0})$ remains small in $d$ Euclidean dimensions, provided $\hat J$ of course stays small or finite (here, we are ultimately interested in the limit $\hat J=0$). However, rare excitations, also known as ``droplets'',\cite{droplets} may involve a large-scale reorganization of the ground-state configuration while satisfying $\Delta S_{\alpha,0}\lesssim 1/\beta$ and therefore provide a significant contribution to $\mathcal W_h^{(\beta)}$. In such a situation, the ``ultralocality'' in the curved Grassmann subspace is broken and the random generating functional has the general form
\begin{equation}
\begin{aligned}
\label{eq_ultralocal_broken_beta}
&\mathcal W_h^{(\beta)}[\mathcal J]=\\& \int_{\underline \theta}W_h^{(\beta)}[\mathcal J(\underline{\theta}),(1+\frac{\beta}{2} \bar{\theta} \theta)\partial_{\underline{\theta}}\mathcal J(\underline{\theta}), \Delta_{\underline{\theta}}\mathcal J(\underline{\theta})],
\end{aligned}
\end{equation}
where $\partial_{\underline{\theta}}$ is a short-hand notation for designating either $\partial_{\bar \theta}$ or $\partial_{\theta}$, and we recall that $\Delta_{\underline{\theta}}=2 (1+\beta \bar{\theta} \theta)( \partial_{\theta}\partial_{\bar{\theta}}-\beta \bar \theta \partial_{\bar{\theta}}-\beta \theta \partial_{\theta})$ is the Laplacian in the curved Grassmann subspace (see above). Note that the functional dependence on the Euclidean coordinates is completely general at this point.
The droplets being nonetheless rare events, and the other excitations being essentially local, we expect that the deviation from ``ultralocality'' is small when $\beta$ is large, \textit{i.e.},
\begin{equation}
\label{eq_ultralocal_broken2_beta}
W_h^{(\beta)}\simeq W_h^{(\beta)UL}[\mathcal J(\underline{\theta})] + {\rm corrections},
\end{equation}
with the corrections going to zero (and $W_h^{(\beta)UL}$ going to a well defined limit) when $\beta \rightarrow \infty$. In the following, we will check the correctness of this behavior and show how in the FRG flow the contributions coming from errors in selecting the ground state actually become subdominant as one approaches the critical fixed point.
\section{Cumulants of the renormalized disorder and the need for multiple copies}
\label{sec:cumulants}
\subsection{Why the need for multiple copies?}
A central quantity is the random (``free energy'') functional $\mathcal W_h^{(\beta)}[ \mathcal J]$, introduced above. This random functional is characterized by its (functional) probability distribution or, alternatively, by the infinite set of its cumulants (if of course the cumulants exist). Dealing with cumulants has the advantage of involving an average over the bare disorder: as a result, one recovers the translational (and rotational) invariance in Euclidean space which is otherwise broken by the space-dependent random field. In the following, we will therefore consider a formalism based on cumulants. However, a crucial point when working with such disorder-averaged quantities is that one does not want to lose track of the rare events that characterize systems with quenched disorder. For random-field models, these rare events are expected to take the form of ``avalanches'' or ``shocks'' that are seen in the dependence of the ground state on the applied source at zero temperature and of low-energy excitations known as ``droplets'' at nonzero temperature (see above). As shown in previous work,\cite{balents-fisher93,BLbalents93, balents96,BLchauve00,CUSPledoussal,BLbalents04,BLbalents05,CUSPledoussal09,BLledoussal10,tarjus04,tissier06,tarjus08,tissier08} these phenomena show up in the cumulants of the renormalized disorder as a singular dependence on the arguments (for an illustration in a simple zero-dimensional toy model, see Appendix~\ref{appendix:toy}). Describing such features therefore requires the functional dependence of the cumulants for \textit{generic arguments}. For instance, a complete characterization of the random functional $\mathcal W_h^{(\beta)}[ \mathcal J]$ implies the knowledge of all its cumulants, $\mathcal W_1^{(\beta)}[ \mathcal J_1]$, $\mathcal W_2^{(\beta)}[ \mathcal J_1, \mathcal J_2]$, $\mathcal W_3^{(\beta)}[ \mathcal J_1, \mathcal J_2, \mathcal J_3]$, ..., which are defined as
\begin{equation}
\label{eq_cumW1}
\mathcal W_1^{(\beta)}[ \mathcal J_1]= \overline{\mathcal W_h^{(\beta)}[ \mathcal J_1]}
\end{equation}
\begin{equation}
\begin{aligned}
\label{eq_cumW2}
\mathcal W_2^{(\beta)}[ \mathcal J_1, \mathcal J_2]= &\overline{\mathcal W_h^{(\beta)}[ \mathcal J_1]\mathcal W_h^{(\beta)}[ \mathcal J_2]}\\&-\overline{\mathcal W_h^{(\beta)}[ \mathcal J_1]}\,\, \overline{\mathcal W_h^{(\beta)}[ \mathcal J_2]},
\end{aligned}
\end{equation}
etc. \textit{Generic}, \textit{i.e.} independently tunable, arguments require the introduction of several copies or replicas of the original system, each with the same bare disorder (random field) but coupled to \textit{different} external supersources. It is worth stressing that this is \textit{not} what is done in the Parisi-Sourlas supersymmetric approach nor in the conventional replica trick. In the former, a single copy of the system is considered (see section \ref{sec:model}) and in the latter the sources acting on the replicas are all taken equal. As a consequence, in both cases, one has only access to cumulants in the specific configuration with all arguments equal. Quite differently in the present formalism, we consider multiple copies or replicas and supersources that explicitly break the (permutational) symmetry among these replicas.\cite{mouhanna10} We note in passing that, by construction, this takes care of the problem coming from having to average the logarithm of the partition function over disorder when $\beta\neq 0$.
\subsection{Multi-copy superfield formalism}
\label{sec:multi-copy}
The cumulants of $\mathcal{W}_h^{(\beta)}$ for generic arguments can be obtained from the average over the bare disorder of the extension of Eq.~(\ref{eq_part_funcbeta}) to an arbitrarily large number $n$ of copies submitted to independently controllable (super)sources, namely,
\begin{equation}
\begin{aligned}
\label{eq_part_func_multicopy0}
\exp (\mathcal W^{(\beta)}[\{\mathcal J_a\}]) &=\overline{\prod_{a=1}^n \mathcal{Z}_h^{(\beta)}[\mathcal J_a]}\\&=\overline{\exp(\sum_{a=1}^n\mathcal W_h^{(\beta)}[ \mathcal J_a])}.
\end{aligned}
\end{equation}
With the help of the curved superspace introduced above and by combining Eq.~(\ref{eq_part_funcbeta}) and Eq.~(\ref{eq_part_func_multicopy0}), we end up with a superfield theory associated with the following partition function:
\begin{equation}
\begin{aligned}
\label{eq_part_func_multicopy}
&\mathcal Z^{(\beta)}[\{\mathcal J_a\}]=\exp (\mathcal W^{(\beta)}[\{\mathcal J_a\}]) \\&= \int \prod_{a=1}^{n}\mathcal D\Phi_a \exp \bigg(-S^{(\beta)}[\{\Phi_a\}] + \sum_{a=1}^{n} \int_{\underline x} \mathcal J_a(\underline x) \Phi_a(\underline x)\bigg),
\end{aligned}
\end{equation}
where the multicopy action is given by
\begin{equation}
\begin{aligned}
\label{eq_superaction_multicopy}
S^{(\beta)}&[\{\Phi_a\}] = \sum_{a=1}^{n} \int_{\underline{x}} [ \frac 12 (\partial_{\mu}\Phi_a(\underline{x}))^2+U_{B}(\Phi_a(\underline{x}))] \\&- \frac{\Delta_B}{2}\sum_{a_1=1}^{n}\sum_{a_2=1}^{n} \int_{x}\int_{\underline{\theta}_1\underline{\theta}_2} \Phi_{a_1}(x,\underline{\theta}_1)\Phi_{a_2}(x,\underline{\theta}_2)
\end{aligned}
\end{equation}
and it should be kept in mind that the integral over the Grassmann coordinates includes a metric factor due to the curvature $\beta$ (as a result, $\int_{\underline x}\equiv \int_x \int \int (1+\beta \bar\theta \theta)d\theta d\bar\theta$). Note that for $n\geq 2$ the first (kinetic) and last (disorder-induced) terms in the above expression of the multicopy action can no longer be combined and simply expressed with the super-Laplacian, as was the case for the $1$-copy action $S_{ss}[\Phi]$ in the absence of curvature ($\beta=0$), which is defined in Eq.~(\ref{eq_susyaction2}).
The cumulants of $\mathcal{W}_h^{(\beta)}$ can now be generated by the expansion in increasing number of sums over copies,
\begin{equation}
\begin{aligned}
\label{eq_gen_func_multicopy}
\mathcal W^{(\beta)}[\{\mathcal J_a\}] = \sum_{p\geq1}\sum_{a_1=1}^n...\sum_{a_p =1}^n
\frac {1}{p!} \mathcal{W}_{p}^{(\beta)}[\mathcal{J}_{a_1},...,\mathcal{J}_{a_p}],
\end{aligned}
\end{equation}
where the $p$th cumulant $\mathcal{W}_{p}^{(\beta)}$ is fully symmetric under any permutation of its arguments (and independent of $n$). As is obvious from the above equation, at least $p$ distinct copies must be considered to describe the $p$th cumulant with generic arguments, so that an arbitrarily large number of copies (or replicas) is needed to generate all cumulants.
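For illustration, the first terms of this expansion read explicitly
\begin{equation*}
\mathcal W^{(\beta)}[\{\mathcal J_a\}] = \sum_{a=1}^n \mathcal W_1^{(\beta)}[\mathcal J_a] + \frac{1}{2}\sum_{a_1=1}^n\sum_{a_2=1}^n \mathcal W_2^{(\beta)}[\mathcal J_{a_1},\mathcal J_{a_2}] + \cdots,
\end{equation*}
so that with a single copy ($n=1$) the second cumulant would only ever appear through the diagonal combination $\mathcal W_2^{(\beta)}[\mathcal J,\mathcal J]$; two independent supersources are needed to probe $\mathcal W_2^{(\beta)}$ with generic arguments.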
By using Eq.~(\ref{eq_ultralocal_broken_beta}), the first cumulant can be formally rewritten as
\begin{equation}
\begin{aligned}
\label{eq_cumW1_formal}
\mathcal W_1^{(\beta)}[ \mathcal J_1]= \int_{\underline{\theta}_1}W_1^{(\beta)}[1],
\end{aligned}
\end{equation}
where $W_1^{(\beta)}[1]$ is a short-hand notation for indicating a functional of $\mathcal J_1(\underline{\theta}_1)$, $(1+\frac{\beta}{2} \bar{\theta}_1 \theta_1) \partial_{\underline{\theta}_1}\mathcal J_1(\underline{\theta}_1)$, and $\Delta_{\underline{\theta}_1}\mathcal J_1(\underline{\theta}_1)$ (the functional character comes from the Euclidean dependence that remains completely general at this point); there is no additional explicit dependence on Grassmann coordinates in $W_1^{(\beta)}$. The second cumulant reads
\begin{equation}
\begin{aligned}
\label{eq_cumW2_formal}
\mathcal W_2^{(\beta)}[ \mathcal J_1, \mathcal J_2]=\int_{\underline{\theta}_1}\int_{\underline{\theta}_2} W_2^{(\beta)}[1,2],
\end{aligned}
\end{equation}
with an obvious extension of the above notation, and similar expressions hold for the higher-order cumulants. A physical interpretation of the cumulants when the random functional $\mathcal W_h^{(\beta)}$ is ``ultralocal'' in the Grassmann coordinates will be given in the next section.
\subsection{Legendre transform and effective action in a curved superspace}
Due to the specific form of the source term in the presence of curvature, \textit{i.e.}, explicitly,
$\sum_a \int_{x}\int \int d\theta d\bar{\theta} (1+\beta \bar{\theta}\theta) \Phi_a(x,\underline\theta) \mathcal J_a(x,\underline\theta)$, the functional $\mathcal W^{(\beta)}[\{\mathcal J_a\}]$ does not exactly generate the average of the superfields; rather, one has an extra metric factor,
\begin{equation}
\label{eq_magn}
\frac{\delta \mathcal W^{(\beta)}[\{\mathcal J_f\}]}{\delta \mathcal J_a(x,\underline\theta)}=(1+\beta\bar \theta \theta)\Phi_a(x,\underline\theta).
\end{equation}
The Legendre transform defining the effective action is then expressed as
\begin{equation}
\label{eq_legendre_multicopy}
\Gamma^{(\beta)}[\{\Phi_a\}]=-\mathcal W^{(\beta)}[\{\mathcal J_a\}]+\sum_a \int_x \int_{\underline\theta} \Phi_a(x,\underline\theta) \mathcal J_a(x,\underline\theta).
\end{equation}
From this equation, one can determine the relation between the second functional derivatives $\Gamma^{(2)}$ and $\mathcal W^{(2)}$ (where we have omitted the superscript $(\beta)$ to alleviate the notation, as we shall do each time superscripts indicating functional derivatives are involved):
\begin{equation}
\label{eq_legendre_invert0}
\begin{aligned}
&\frac{\delta\Phi_a(x_1,\underline\theta_1)}{\delta\Phi_b(x_2,\underline\theta_2)}=\delta_{ab}\delta^{(d)}(x_1-x_2)\delta_{\underline\theta_1 \underline\theta_2}\\&=\sum_c\int d\bar\theta_3 d\theta_3\int_{x_3} \frac{\delta\Phi_a(x_1,\underline\theta_1)}{\delta
\mathcal J_c(x_3,\underline\theta_3)}\, \frac{\delta \mathcal J_c(x_3,\underline\theta_3)}{\delta\Phi_b(x_2,\underline\theta_2)}\\
&=\sum_c\int_{\underline\theta_3}(1-\beta \bar \theta_3\theta_3)\int_{x_3}
\frac{\delta\Phi_a(x_1,\underline\theta_1)}{\delta \mathcal J_c(x_3,\underline\theta_3)}\, \frac{\delta
\mathcal J_c(x_3,\underline\theta_3)}{\delta\Phi_b(x_2,\underline\theta_2)}
\end{aligned}
\end{equation}
so that
\begin{equation}
\label{eq_legendre_invert}
\begin{aligned}
&(1+\beta \bar\theta_1 \theta_1) \delta_{ab}\delta^{(d)}(x_1-x_2)\delta_{\underline\theta_1 \underline\theta_2}=\sum_c\int_{\underline\theta_3}\int_{x_3}(1-2\beta\bar\theta_3\theta_3)\\&
\times \frac{\delta^2\mathcal W^{(\beta)}}{\delta \mathcal J_a(x_1,\underline\theta_1) \delta \mathcal J_c(x_3,\underline\theta_3)}
\, \frac{\delta^2 \Gamma^{(\beta)}}{\delta \Phi_c(x_3,\underline\theta_3)\delta\Phi_b(x_2,\underline\theta_2)},
\end{aligned}
\end{equation}
where $\delta_{\underline{\theta}_1 \underline{\theta}_2}=\delta_{\bar{\theta}_1 \bar{\theta}_2} \delta_{{\theta}_1 {\theta}_2}= (\bar{\theta}_1 - \bar{\theta}_2) ({\theta}_1 - {\theta}_2)$ (with this definition, $\int_{\underline{\theta}_2}\delta_{\underline{\theta}_1 \underline{\theta}_2}= 1+ \beta \bar \theta_1 \theta_1$). $\Gamma^{(2)}$ and $\mathcal W^{(2)}$ are thus essentially inverse operators, provided that the effect of the curvature of the Grassmann subspace is appropriately taken into account.
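The curvature bookkeeping in identities such as $\int_{\underline{\theta}_2}\delta_{\underline{\theta}_1 \underline{\theta}_2}= 1+ \beta \bar \theta_1 \theta_1$ can be checked mechanically. The following sketch (an illustration of the algebra only, not part of the original derivation; it uses the convention $\int_{\underline\theta} f = \partial_{\theta}\partial_{\bar\theta}\big[(1+\beta\bar\theta\theta)f\big]$ in terms of left derivatives, and all names are ours) implements a finite Grassmann algebra:

```python
from itertools import product

class G:
    """Element of a Grassmann algebra over generators 0..n-1.

    Stored as {sorted index tuple: coefficient}; the empty tuple is the
    unit. Products pick up the sign of the permutation sorting the merged
    index list (anticommutation); a repeated generator gives zero.
    """
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    def __add__(self, o):
        t = dict(self.terms)
        for k, v in o.terms.items():
            t[k] = t.get(k, 0) + v
        return G(t)

    def __sub__(self, o):
        return self + (-1) * o

    def __rmul__(self, c):  # scalar * element
        return G({k: c * v for k, v in self.terms.items()})

    def __mul__(self, o):
        t = {}
        for (k1, v1), (k2, v2) in product(self.terms.items(), o.terms.items()):
            if set(k1) & set(k2):
                continue  # theta_i^2 = 0
            idx, sign = list(k1 + k2), 1
            for i in range(len(idx)):  # bubble sort, tracking the sign
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            t[key] = t.get(key, 0) + sign * v1 * v2
        return G(t)

def d(x, i):
    """Left derivative (= Berezin integral) with respect to generator i."""
    t = {}
    for k, v in x.terms.items():
        if i in k:
            p = k.index(i)
            t[k[:p] + k[p + 1:]] = t.get(k[:p] + k[p + 1:], 0) + ((-1) ** p) * v
    return G(t)

one = G({(): 1})
th1, bth1, th2, bth2 = (G({(i,): 1}) for i in range(4))  # theta1, bar1, theta2, bar2
beta = 0.5

# delta_{theta1 theta2} = (bar_theta1 - bar_theta2)(theta1 - theta2)
delta = (bth1 - bth2) * (th1 - th2)
# curved integral over (theta2, bar_theta2): d_theta2 d_bar2 [(1 + beta bar2 theta2) f]
res = d(d((one + beta * (bth2 * th2)) * delta, 3), 2)
# expected: 1 + beta * bar_theta1 * theta1
expected = one + beta * (bth1 * th1)
assert res.terms == expected.terms
```

Running the final assertion confirms the metric factor $(1+\beta\bar\theta_1\theta_1)$ quoted in the text; the same machinery can be used to check the normalization $\int_{\underline\theta} \bar\theta\theta = 1$.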
Like the generating functional $\mathcal W^{(\beta)}[\{ \mathcal J_a\}]$, the effective action $\Gamma^{(\beta)}[\{\Phi_a\}]$ can be expanded in increasing number of unrestricted sums over copies (considering now the superfields $\{\Phi_a\}$ as fundamental variables in place of the supersources $\{ \mathcal J_a\}$):
\begin{equation}
\begin{aligned}
\label{eq_multicopy1_app}
\Gamma^{(\beta)}[\{\Phi_{a}\}] = \sum_{p\geq1} \sum_{a_1=1}^n...\sum_{a_p =1}^n \frac{(-1)^{p-1}}{p!}
\mathsf \Gamma_p^{(\beta)}[\Phi_{a_1},...,\Phi_{a_p}],
\end{aligned}
\end{equation}
where $\mathsf \Gamma_p^{(\beta)}$ is a fully symmetric functional of its $p$ arguments whose functional form is independent of the number $n$ of copies. The sign $(-1)^{p-1}$ is chosen for further convenience.
The above expansion is similar to that in number of unrestricted (free) replica sums developed in Ref.~[\onlinecite{tarjus08}]. In consequence, one can apply, \textit{mutatis mutandis}, the results of
Ref.~[\onlinecite{tarjus08}] and relate the $\mathsf{\Gamma}_p^{(\beta)}$'s to the cumulants $\mathcal W_p^{(\beta)}$ as follows [we drop for simplicity the superscript $(\beta)$ in the expressions]. The first-order term $\mathsf{\Gamma}_1^{(\beta)}[\Phi]$ is the Legendre transform of $\mathcal W_1^{(\beta)}[\mathcal J]$, namely,
\begin{equation}
\label{legendre_Gamma_1}
\mathsf{\Gamma}_1[\Phi] = - \mathcal W_1[\mathcal J] + \int_{\underline x} \mathcal J(\underline x) \Phi(\underline x),
\end{equation}
with
\begin{equation}
\label{legendre_Phi_curved}
(1+\beta \bar\theta \theta) \Phi(\underline x)=\frac{\delta \mathcal W_1 [\mathcal J ] }{\delta \mathcal J(\underline x)}= \mathcal W_{1;\underline x}^{(1)}[\mathcal J],
\end{equation}
whereas the second-order term $\mathsf{\Gamma}_2^{(\beta)}[\Phi_1, \Phi_2]$ is given by
\begin{equation}
\label{eq_cumG2}
\mathsf{\Gamma}_2[\Phi_1, \Phi_2] =\mathcal W_2[\mathcal J[ \Phi_1],\mathcal J[\Phi_2]],
\end{equation}
where $\mathcal J[ \Phi]$ is the \textit{nonrandom} source defined via the inverse of the Legendre transform relation in Eq. (\ref{legendre_Phi_curved}), \textit{i.e.},
\begin{equation}
\label{eq_nonrandom_source_curved}
(1+\beta \bar \theta \theta) \mathcal J[\Phi](\underline x)=\mathsf{\Gamma}_{1;\underline x}^{(1)} [\Phi].
\end{equation}
The above expression of $\mathsf{\Gamma}_2$ motivates our choice of signs for the terms of the expansion of the effective action $\Gamma$, Eq.~(\ref{eq_multicopy1_app}): $\mathsf{\Gamma}_2[\Phi_1, \Phi_2]$ is directly the second cumulant of $\mathcal W_h[\mathcal J]$ (with the proper choice of $\mathcal J[ \Phi]$).
For the higher-order terms, one finds after some lengthy but straightforward manipulations (see Appendix~\ref{appendix:expansion_copies})
\begin{equation}
\begin{split}
\label{eq_cumG3}
&\mathsf{\Gamma}_3 [ \Phi_1, \Phi_2, \Phi_3] = - \mathcal W_3[\mathcal J[ \Phi_1],\mathcal J[ \Phi_2],\mathcal J[ \Phi_3]] + \int_{\underline x}\int_{ \underline x'}\\&(1-\beta \bar \theta \theta)(1-\beta \bar \theta' \theta') \bigg \{\mathcal W_{2;\underline x,.}^{(10)}[\mathcal J[ \Phi_1], \mathcal J[ \Phi_2]] \mathsf{\Gamma}_{1;\underline x \underline x'}^{(2)}[\Phi_1] \\& \times \mathcal W_{2;\underline x',.}^{(10)}[\mathcal J[ \Phi_1],\mathcal J[ \Phi_3]] + perm (123)\bigg \} ,
\end{split}
\end{equation}
where $perm (123)$ denotes the two additional terms obtained by circular permutations of the superfields $ \Phi_1, \Phi_2, \Phi_3$, and $\mathsf{\Gamma}_{1}^{(2)}$ is the inverse of $\mathcal W_1^{(2)}$ through an equation similar to Eq.~(\ref{eq_legendre_invert}), and so on. This procedure leads to a unique functional form for $\mathsf \Gamma_p^{(\beta)}$, which is expressed in terms of cumulants of $\mathcal W_h^{(\beta)}$ of order $p$ or less. (Note that this guarantees that the functional form is independent of the number $n$ of copies.)
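Explicitly, the inverse relation defining $\mathsf{\Gamma}_{1}^{(2)}$ takes the same form as Eq.~(\ref{eq_legendre_invert}) stripped of the copy sum, namely,
\begin{equation*}
(1+\beta \bar\theta_1 \theta_1)\, \delta^{(d)}(x_1-x_2)\,\delta_{\underline\theta_1 \underline\theta_2}=\int_{\underline x_3}(1-2\beta\bar\theta_3\theta_3)\, \mathcal W^{(2)}_{1;\underline x_1 \underline x_3}[\mathcal J]\, \mathsf{\Gamma}^{(2)}_{1;\underline x_3 \underline x_2}[\Phi],
\end{equation*}
with $\Phi$ and $\mathcal J$ related through Eq.~(\ref{legendre_Phi_curved}).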
As illustrated by Eq.~(\ref{eq_cumG3}), $\mathsf{\Gamma}_p^{(\beta)}[ \Phi_1, ..., \Phi_p]$ for $p\geq 3$ cannot be directly taken as the $p$th cumulant of a physically accessible random functional, in particular not of the disorder-dependent Legendre transform of $\mathcal W_h^{(\beta)}[\mathcal J[\Phi]]$. However, since it can be expressed in terms of such cumulants of order equal to or lower than $p$, we will, for simplicity, call the $\mathsf{\Gamma}_p^{(\beta)}$'s ``cumulants of the renormalized disorder'' (which is strictly true for $p=2$) in what follows. See also section V-B below.
\subsection{Symmetries and WT identities for multiple copies}
\label{sec:symmetries}
The presence of curvature in the Grassmann subspace and of multiple copies still allows invariance of the theory under a large group of transformations. The multi-copy action, which is given in Eq.~(\ref{eq_superaction_multicopy}), is indeed invariant under the following symmetries:
(i) The permutations $S_n$ among copies and a global $Z_2$ symmetry;
(ii) the global rotations and translations in the $d$-dimensional Euclidean space;
(iii) the symplectic transformations with generators $\bar t=\bar{\theta}\partial_{\theta},t= \theta \partial_{\bar{\theta}}$ and $N=\bar{\theta}\partial_{\bar{\theta}}-\theta \partial_{\theta}$ acting on the $2$-dimensional curved Grassmann subspace, independently for each copy;
(iv) the two isometries of the curved Grassmann subspace that generalize the translations of flat space, independently for each copy; their generators are $(1-\beta \bar \theta \theta)\partial_{\theta}$ and $(1-\beta \bar \theta \theta)\partial_{\bar \theta}$.
(v) It was shown in Ref.~[\onlinecite{wschebor09}] that the above isometries (iii) and (iv) are the only possible ones in the presence of a nonzero curvature $\beta$. However, when the curvature is set to zero and, in addition, when restricting the supersources such that in all copies except a given copy $a$, the components $\hat{J}_b = \bar{K}_b = K_b = 0$ ($b\neq a$), the partition function in Eq.~(\ref{eq_part_func_multicopy}) is invariant under the superrotations considered in section III-B. In this case, one is effectively back to a 1-copy system since it is found that
\begin{equation}
\begin{aligned}
\label{eq_part_func_1copy}
\mathcal Z^{(\beta=0)}&[\mathcal J_a, \{\mathcal J_b=J_b\}]=\\& \int\mathcal D\Phi_a \exp \big[-S_{SS}[\Phi_a] + \int_{\underline x} \mathcal J_a(\underline x) \Phi_a(\underline x)\big],
\end{aligned}
\end{equation}
where $S_{SS}$ is given in Eq.~(\ref{eq_susyaction2}) and where we have used that the $1$-copy partition function $\mathcal Z^{(\beta=0)}[\mathcal J_b=J_b,h]=1$ for all copies $b\neq a$. Invariance under the superrotations follows directly.
The above continuous (super) symmetries imply a set of WT identities satisfied by the generating functionals $\mathcal W^{(\beta)}[\{\mathcal J_a\}]$ and $\Gamma^{(\beta)}[\{\Phi_a\}]$. Denoting by $\mathcal D_{\underline{\theta}}$ any one of the generators acting on the Grassmann coordinates $\underline{\theta}$ in a chosen copy $a$, one finds that
\begin{equation}
\label{eq_ward_multicopycurved_W}
\int_{\underline{x}}\mathcal J_a(\underline{x})\mathcal D_{\underline{\theta}}\big (\left [1-\beta \bar\theta \theta \right]\mathcal W_{a \underline{x}}^{(1)}[\{ \mathcal J_f\}]\big )=0,
\end{equation}
where we have again dropped the superscript $(\beta)$ to avoid confusion with the superscripts indicating functional derivation. This WT identity carries over to the effective action:
\begin{equation}
\label{eq_ward_multicopycurved0_gamma}
\int_{\underline{x}} \left (1-\beta \bar\theta \theta \right) \Gamma_{a \underline{x}}^{(1)}[\{ \Phi_f\}]\mathcal D_{\underline{\theta}} \Phi_a(\underline{x})=0,
\end{equation}
where we recall that the integral over the Grassmann coordinates comes with a factor $(1+\beta \bar\theta \theta)$.
Through differentiation with respect to the superfield $\Phi_a$, the above equation leads to WT identities for the 1PI vertices. For the symplectic transformations, by using the fact that Eq.~(\ref{eq_ward_multicopycurved0_gamma}) can be rewritten as
\begin{equation}
\label{eq_ward_multicopycurved1_Gamma}
\int_{\underline{x}} \left (1-\beta \bar\theta \theta \right) \Phi_a(\underline{x})\, \bar t \, \Gamma_{a \underline{x}}^{(1)}[\{ \Phi_f\}]=0,
\end{equation}
one finds for $p\geq 1$
\begin{equation}
\begin{aligned}
\label{eq_wardcurved1_p}
&\big(\sum_{q=1}^p \delta_{a a_q}\, \bar t_q\big) \Gamma_{(a_1\underline{x}_1)...(a_p \underline{x}_{p})}^{(p)}[\{\Phi_f\}] \\&+ \int_{\underline{x}_{p+1}}\Phi_a(\underline{x}_{p+1})\, \bar t_{p+1}\Gamma_{(a_1\underline{x}_1)...(a_{p+1}\underline{x}_{p+1})}^{(p+1)}[\{\Phi_f\}]=0,
\end{aligned}
\end{equation}
where $\bar t_q=\bar \theta_q \partial_{\theta_q}$, and similarly for
the transformations $t$ and $N$.
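For instance, for $p=1$ the identity reads
\begin{equation*}
\delta_{a a_1}\, \bar t_1\, \Gamma_{(a_1\underline{x}_1)}^{(1)}[\{\Phi_f\}] + \int_{\underline{x}_2}\Phi_a(\underline{x}_2)\, \bar t_2\, \Gamma_{(a_1\underline{x}_1)(a \underline{x}_2)}^{(2)}[\{\Phi_f\}]=0,
\end{equation*}
which is just a single functional differentiation of Eq.~(\ref{eq_ward_multicopycurved1_Gamma}) with respect to $\Phi_{a_1}(\underline{x}_1)$.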
For the generalized translations $(1-\beta \bar \theta \theta)\partial_{\theta}$ and $(1-\beta \bar \theta \theta)\partial_{\bar \theta}$, a little more care is needed; Eq.~(\ref{eq_ward_multicopycurved0_gamma}) can now be reexpressed as
\begin{equation}
\label{eq_ward_multicopycurved2_Gamma}
\int_{\underline{x}} \left (1-\beta \bar\theta \theta \right) \Phi_a(\underline{x})\,\partial_{\theta}\big([1-\beta \bar \theta \theta] \Gamma_{a \underline{x}}^{(1)}[\{ \Phi_f\}]\big)=0,
\end{equation}
so that the WT identities for the 1PI vertices become
\begin{equation}
\begin{aligned}
\label{eq_wardcurved2_p}
&\big(\sum_{q=1}^p \delta_{a a_q}\, \partial_{\theta_q}[1-\beta \bar \theta_q \theta_q]\big) \Gamma_{(a_1\underline{x}_1)...(a_p \underline{x}_{p})}^{(p)}[\{\Phi_f\}] + \int_{\underline{x}_{p+1}}\\&\Phi_a(\underline{x}_{p+1})\partial_{\theta_{p+1}}[1-\beta \bar \theta_{p+1} \theta_{p+1}]\Gamma_{(a_1\underline{x}_1)...(a_{p+1}\underline{x}_{p+1})}^{(p+1)}[\{\Phi_f\}]=0
\end{aligned}
\end{equation}
and similarly with the other generator. These WT identities generalize those already encountered at the $1$-copy level for a flat Grassmann subspace in Sec. II-B.
In addition, when the curvature $\beta$ is set to zero and the supersources are restricted as discussed above, the superrotational invariance of the partition function also leads to WT identities. After making use of the invariance under translations and symplectic transformations in the (flat) Grassmann subspace, one finds that the solution of the Legendre relation for a supersource $\mathcal J_b(\underline{x})\equiv J_b(x)$, \textit{i.e.},
\begin{equation}
\label{eq_legendre_reduction1copy}
\Gamma_{(b,\underline{x})}^{(1)}[\Phi_b,\{\Phi_f\}']\big \vert_{\beta=0}= J_b(x),
\end{equation}
where $\left\lbrace \Phi_f\right\rbrace'$ denotes the set of all copy superfields but $\Phi_b$, satisfies $\Phi_b(\underline{x})\equiv \phi_b(x)$ (with $\psi_b(x) = \bar{\psi}_b(x) = \hat{\phi}_b(x)=0$); one then derives the following WT identities for the superrotation invariance:
\begin{equation}
\label{eq_ward_0_1copy}
\int_{\underline{x}}\Phi_a(\underline{x})\mathcal Q_{\underline{x}}\Gamma_{\underline{x}}^{(1)}[\Phi_a,\{\Phi_b=\phi_b\}]\big \vert_{\beta=0}=0,
\end{equation}
where the field components $\psi_b(x), \bar{\psi}_b(x), \hat{\phi}_b(x)$ have been set to zero in all copies $b\neq a$ and $\mathcal Q_{\underline{x}}$ is defined in section II-B. A similar expression holds with the generator $\bar{\mathcal Q}_{\underline{x}}$. By functional differentiation, WT identities are also obtained for the higher-order 1PI vertices. This will be further exploited in the next section.
\section{Exploring the formal consequences of ``Grassmannian ultralocality''}
\label{sec:ultralocal}
\subsection{Expansion in ``ultralocal'' cumulants}
Assume now that the random functional $\mathcal{W}_h^{(\beta)}$ is ``ultralocal'' in the Grassmann coordinates, which, as discussed before, implies that a unique configuration is included in the computation of the random generating functional for each realization of the random field. In this case, the expansion of the generating functional $\mathcal W^{(\beta)}[\{\mathcal J_a\}]$ in increasing number of sums over copies, Eq.~(\ref{eq_gen_func_multicopy}), coincides with a ``multilocal'' expansion in Grassmann coordinates, \textit{i.e.},
\begin{equation}
\begin{aligned}
\label{eq_multilocal_multicopy}
\mathcal W_p^{(\beta)}[\mathcal J_1,..., \mathcal J_p] = \int_{\underline{\theta}_1}...\int_{\underline{\theta}_p}
W_{p}^{(\beta)}[\mathcal{J}_{1}(\underline{\theta}_1),...,\mathcal{J}_{p}(\underline{\theta}_p)],
\end{aligned}
\end{equation}
where $W_{p}^{(\beta)}$ no longer depends on the derivatives of the supersources in the Grassmann directions and is the $p$th cumulant of the ``ultralocal'' functional $W_h^{(\beta)}[\mathcal{J}(\underline{\theta})]$ defined in Eq.~(\ref{eq_ultralocal_beta1}). Note that due to the assumed uniqueness of the solution included in the computation of the random generating functional, $W_h[\mathcal{J}(\underline{\theta})]$ as well as its cumulants,
\begin{equation}
\label{eq_cumW1_1copy}
W_1[ \mathcal J_{1}(\underline{\theta}_1)]= \overline{W_h[ \mathcal J_{1}(\underline{\theta}_1)]},
\end{equation}
\begin{equation}
\label{eq_cumW2_1copy}
\begin{aligned}
W_2[ \mathcal J_{1}(\underline{\theta}_1), \mathcal J_{2}(\underline{\theta}_2)]= &\overline{W_h[ \mathcal J_{1}(\underline{\theta}_1)]W_h[ \mathcal J_{2}(\underline{\theta}_2)]}\\&-\overline{W_h[ \mathcal J_{1}(\underline{\theta}_1)]}\, \overline{W_h[ \mathcal J_{2}(\underline{\theta}_2)]},
\end{aligned}
\end{equation}
etc., are independent of the curvature $\beta$ (see section III-B). By an abuse of language, we will characterize the cumulants obtained from an ``ultralocal'' random functional as ``ultralocal'' in the following.
A physical interpretation of $\mathcal W_h^{(\beta)}$ and its cumulants is next obtained by restricting the supersources to their physical component, $\mathcal J(\underline{x})\equiv J(x)$, which plays the role of an applied magnetic field. [In other words, this amounts to considering supersources that are uniform in the Grassmann subspace, \textit{i.e.}, to setting $\theta = \bar \theta =0$ in the defining expression in Eq.~(\ref{eq_supersource}).] $W_h$ is then given by Eq.~(\ref{eq_ultralocal_beta2}) (properly generalized to $d$-dimensional Euclidean space) and, as stated above, is independent of the auxiliary parameter $\beta$. Its first derivative is by construction equal to the ground-state configuration $\varphi_0[h+J]$. The first cumulant $W_1[J]$ then gives access to the thermodynamics and its first derivative is the average of the physical field (the ``magnetization''), which corresponds at zero temperature to the ground-state configuration. Its higher-order derivatives
\begin{equation}
\label{eq_notation_partderiv_W1}
W_{1; x_1 ... x_p}^{(p)}[ J]=\frac{\delta^p W_1[ J]}{\delta J( x_1)... \delta J( x_p)}
\end{equation}
correspond to Green's functions that are related to what is known in the literature on disordered systems as ``connected'' correlation functions of the $\varphi$ field. For instance, the second derivative is related to the linear response of the magnetization to a change of the applied magnetic field.
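In this notation, the linear-response statement reads, explicitly,
\begin{equation*}
W_{1;x_1 x_2}^{(2)}[J]=\frac{\delta\, \overline{\varphi_0[h+J](x_1)}}{\delta J(x_2)},
\end{equation*}
\textit{i.e.}, the zero-temperature ``connected'' two-point function is the disorder-averaged response of the ground-state magnetization to a local change of the applied source.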
The higher-order cumulants describe the distribution of the renormalized disorder. Through differentiation, and after setting all sources equal ($J_1=J_2=\cdots=J$), they generate the so-called ``disconnected'' correlation functions of the original system. We introduce the notation
\begin{equation}
\label{eq_notation_partderiv_W2}
\begin{split}
W&_{p;x_{11}..x_{1q_1},...,x_{p 1}..x_{pq_p}}^{(q_1...q_p)}[J_1... J_p]=\\& \frac{\delta^{q_1+...+q_p} W_p[J_1... J_p]}{\delta J_1(x_{11})..\delta J_1(x_{1q_1})...\delta J_p (x_{p 1})..\delta J_p (x_{pq_p})}.
\end{split}
\end{equation}
Then, for instance, $W_{2; x_1,x_2}^{(11)}[J, J]$ is equal to the standard two-point ``disconnected'' correlation function defined in Eq.~(\ref{eq_physical_disconn}), which, in magnetic systems, is directly accessible to experimental measurements.\cite{footnote011}
\subsection{Expansion of the effective action}
From the above equations and the definition of the Legendre transform, one derives that the effective action has also a ``multilocal'' expansion in Grassmann coordinates, which corresponds to having an ``ultralocal'' form for each term of the expansion of $\Gamma^{(\beta)}$ in number of copies [Eq.~(\ref{eq_multicopy1_app})], \textit{i.e.},
\begin{equation}
\begin{aligned}
\label{eq_cumulant_p_ultralocal}
\mathsf \Gamma_p^{(\beta)}[\Phi_{a_1},...,\Phi_{a_p}] = \int_{\underline{\theta}_1}...\int_{\underline{\theta}_p}
\Gamma_{p}^{(\beta)}[\Phi(\underline{\theta}_1),...,\Phi(\underline{\theta}_p)],
\end{aligned}
\end{equation}
where $\Gamma_{p}^{(\beta)}$ does not contain any additional explicit dependence on Grassmann coordinates (nor any dependence on the derivatives of the superfields in the Grassmann directions). As before, the dependence on the Euclidean coordinates, which actually still makes the $\Gamma_p^{(\beta)}$'s nonlocal functionals of the superfields, is left implicit. The proof is easily derived from the expression of the $\mathsf \Gamma_p^{(\beta)}$'s in terms of the $\mathcal W_q^{(\beta)}$'s with $q \leq p$ and the ``ultralocal'' property of the latter. A more specific discussion of the relation between the $\Gamma_p^{(\beta)}$'s and the $W_p^{(\beta)}$'s is provided below.
From the above property one can derive expressions for the proper (1PI) vertices, defined as
\begin{equation}
\Gamma_{(a_1,\underline{x}_1), ... ,(a_p,\underline{x}_p)}^{(p)}[\left\lbrace \Phi_a \right\rbrace ]=\frac{\delta^p \Gamma^{(\beta)} [\left\lbrace \Phi_a \right\rbrace ]}{\delta \Phi_{a_1}(\underline{x}_1)... \delta \Phi_{a_p}(\underline{x}_p)},
\end{equation}
which will prove useful in the following. After specializing to fields in the physical subspace, $\Phi_a(\underline{x})\equiv \phi_a(x)$, which are relevant for the equilibrium behavior of the RFIM, and introducing the notation
\begin{equation}
\begin{aligned}
\label{eq_deriv_Gamma_p}
\Gamma&_{p;x_{11}..x_{1q_1},...,x_{p 1}..x_{pq_p}}^{(q_1...q_p)}[\phi_1... \phi_p]=\\& \frac{\delta^{q_1+...+q_p} \Gamma_p[\phi_1... \phi_p]}{\delta \phi_1(x_{11})..\delta \phi_1(x_{1q_1})...\delta \phi_p (x_{p 1})..\delta \phi_p (x_{pq_p})},
\end{aligned}
\end{equation}
one obtains the following expression for the $1$-point proper vertex,
\begin{equation}
\label{eq_expand_Gamma_1}
\begin{aligned}
&\Gamma_{(a_1\underline{x}_1)}^{(1)}[\left\lbrace \phi_a \right\rbrace ]=(1+\beta\overline \theta_1\theta_1)\bigg \{\Gamma_{1;x_1}^{(1)}[\phi_{a_1}]- \beta \, \times \\& \sum_{a_2}\Gamma_{2;x_1, .}^{(10)}[\phi_{a_1},\phi_{a_2}]+ \beta^2\sum_{a_2,a_3}\Gamma_{3;x_1,.,.}^{(100)}[\phi_{a_1},\phi_{a_2},\phi_{a_3}] + \cdots \bigg \},
\end{aligned}
\end{equation}
and for the $2$-point proper vertex,
\begin{equation}
\begin{aligned}
\label{eq_expand_Gamma_2}
\Gamma^{(2)}_{(a_1\underline{x}_1),(a_2\underline{x}_2)}&[\left\lbrace \phi_a \right\rbrace ] \\&= \delta_{a_1a_2}\widehat{\Gamma}_{a_1;\underline{x}_1\underline{x}_2}^{(2)}[\left\lbrace \phi_a \right\rbrace ] +
\widetilde{\Gamma}_{(a_1\underline{x}_1),(a_2\underline{x}_2)}^{(2)}[\left\lbrace \phi_a \right\rbrace ]
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
\label{eq_expand_Gamma_2hat}
\widehat{\Gamma}_{a_1;\underline{x}_1\underline{x}_2}^{(2)}[\left\lbrace \phi_a \right\rbrace ] =&(1+\beta\overline \theta_1\theta_1)\delta_{\underline{\theta}_1\underline{\theta}_2} \bigg \{\Gamma_{1;x_1x_2}^{(2)}[\phi_{a_1}]\\& - \beta \sum_{a_2}\Gamma_{2;x_1x_2, .}^{(20)}[\phi_{a_1},\phi_{a_2}]+ \cdots \bigg \},
\end{aligned}
\end{equation}
where $\delta_{\underline{\theta}_1 \underline{\theta}_2}$ is defined below Eq.~(\ref{eq_legendre_invert}), and
\begin{equation}
\begin{aligned}
\label{eq_expand_Gamma_2tilde}
&\widetilde{\Gamma}_{(a_1\underline{x}_1)(a_2\underline{x}_2)}^{(2)}[\left\lbrace \phi_a \right\rbrace ]=-(1+\beta\overline \theta_1\theta_1)(1+\beta\overline \theta_2\theta_2)\times \\&\bigg \{\Gamma_{2;x_1,x_2}^{(11)}[\phi_{a_1},\phi_{a_2}] - \beta \sum_{a_3}\Gamma_{3;x_1,x_2, .}^{(110)}[\phi_{a_1},\phi_{a_2},\phi_{a_3}]+ \cdots \bigg \}.
\end{aligned}
\end{equation}
As the order increases, the formulas go along the same lines but become more involved; for instance, one finds
\begin{equation}
\begin{aligned}
\label{eq_expand_Gamma_3}
&\Gamma_{(a_1\underline{x}_1),(a_2\underline{x}_2),(a_3\underline{x}_3)}^{(3)}[\left\lbrace \phi_a \right\rbrace ] = \delta_{a_1a_2a_3}(1+\beta\overline \theta_1\theta_1) \delta_{\underline{\theta}_1\underline{\theta}_2\underline{\theta}_3}\\& \bigg \{\Gamma_{1;x_1x_2x_3}^{(3)}[\phi_{a_1}] - \beta \sum_{a_4}\Gamma_{2;x_1x_2x_3,.}^{(30)}[\phi_{a_1},\phi_{a_4}]+ \cdots \bigg \}- \\&\bigg( \delta_{a_1a_2}(1+\beta\overline \theta_1\theta_1)(1+\beta\overline \theta_2\theta_2)\delta_{\underline{\theta}_1\underline{\theta}_2} \bigg \{ \Gamma_{2;x_1x_2,x_3}^{(21)}[\phi_{a_1},\phi_{a_3}]- \\&\beta \sum_{a_4}\Gamma_{3;x_1x_2,x_3,. }^{(210)}[\phi_{a_1},\phi_{a_3},\phi_{a_4}]+ \cdots \bigg \}+ perm(123) \bigg) +\\&(1+\beta\overline \theta_1\theta_1)(1+\beta\overline \theta_2\theta_2)(1+\beta\overline \theta_3\theta_3)\bigg \{ \Gamma_{3;x_1,x_2,x_3}^{(111)}[\phi_{a_1},\phi_{a_2},\phi_{a_3}]\\& - \beta \sum_{a_4}\Gamma_{4;x_1,x_2,x_3,.}^{(1110)}[\phi_{a_1},\phi_{a_2},\phi_{a_3},\phi_{a_4}]+ \cdots \bigg \},
\end{aligned}
\end{equation}
etc., where $\delta_{\underline{\theta}_1 \underline{\theta}_2 \underline{\theta}_3}=\delta_{\underline{\theta}_1 \underline{\theta}_2} \delta_{\underline{\theta}_2 \underline{\theta}_3}$ and $perm(123)$ denotes the two additional terms obtained by circular permutations of the indices $1, 2, 3$.
\subsection{Interpretation of the $\Gamma_p$'s}
The relations between the $\Gamma_p$'s and the $W_p$'s follow from Eqs.~(\ref{legendre_Gamma_1}-\ref{eq_cumG3}) and Eq.~(\ref{eq_multilocal_multicopy}). They are straightforward for the first terms, but get more involved as the order increases. To obtain full information, it is sufficient to consider configurations of the superfields that are uniform in the Grassmann subspace. (Note that since the $W_p$'s are independent of $\beta$, so are the $\Gamma_p$'s.)
More precisely, one obtains that $\Gamma_1[ \phi]$ is the Legendre transform of $W_1[J]$, namely,
\begin{equation}
\label{legendre_gamma_1}
\Gamma_1[ \phi ] = - W_1[J] + \int_{x} J(x) \phi( x),
\end{equation}
with
\begin{equation}
\label{legendre_phi}
\phi(x)=\frac{\delta W_1 [J ] }{\delta J(x)}= W_{1;x}^{(1)}[J] .
\end{equation}
The second-order term is given by
\begin{equation}
\label{eq_cumg2}
\Gamma_2[\phi_1, \phi_2] = W_2[J[ \phi_1], J[\phi_2]],
\end{equation}
where $J[ \phi]$ is the (physical) \textit{nonrandom} source defined via the inverse of the Legendre transform relation in Eq.~(\ref{legendre_phi}), \textit{i.e.},
\begin{equation}
\label{eq_nonrandom_source}
J[\phi]( x)= \Gamma_{1;x}^{(1)} [\phi],
\end{equation}
and the third-order one by
\begin{equation}
\begin{split}
\label{eq_cumg3}
\Gamma_3 [ \phi_1, \phi_2,& \phi_3] = - W_3[J[ \phi_1], J[ \phi_2], J[ \phi_3]] + \\& \int_{x y} \bigg \{ W_{2; x,.}^{(10)}[J[ \phi_1], J[ \phi_2]] \left( W_{1}^{(2)}[J[ \phi_1]]\right)^{-1} _{ x\, y} \\& \times W_{2; y,.}^{(10)}[J[ \phi_1], J[ \phi_3]] + perm (123)\bigg \} ,
\end{split}
\end{equation}
etc., where $perm (123)$ denotes the two additional terms obtained by circular permutations of the fields $ \phi_1, \phi_2, \phi_3$. As stated above, by an abuse of language, we will generically call the $\Gamma_p$'s ``cumulants of the renormalized disorder''.
As we did in Ref.~[\onlinecite{tarjus08}], it is also instructive to introduce a ``renormalized random field'' $\breve{h}[ \phi]( x)$ as the derivative of a random free-energy functional (it can equivalently be defined at the level of superfields),
\begin{equation}
\label{def_ren_randomfield}
\breve{h}[ \phi](x)=- \frac{\delta }{\delta \phi(x)}\left(W_h[J[ \phi]] - \overline{W_h[J[ \phi]]} \right) ,
\end{equation}
where $J[\phi]$ is the nonrandom source given by Eq.~(\ref{eq_nonrandom_source}). The first moment of $\breve{h}[ \phi]$ is equal to zero by construction, and it is easy to derive that the $p$th cumulant ($p\geq 2$) is given by the derivative with respect to $ \phi_1, ..., \phi_p$ of $W_p[J[ \phi_1], ..., J[ \phi_p]]$; this derivative in turn can be related to derivatives of the $\Gamma_q$'s with $q\leq p$; for instance,
\begin{equation}
\label{eq_cumhren2}
\overline{\breve{h}[ \phi_1](x_1) \breve{h}[ \phi_2](x_2)}= \Gamma_{2; x_1,x_2}^{(11)}[ \phi_1, \phi_2].
\end{equation}
The cumulants of order $p\geq 3$ are given by $\Gamma_{p; x_1,...,x_p}^{(1...1)}[\phi_1,...,\phi_p]$ plus additional terms involving higher derivatives of $\Gamma_q$'s with $q<p$; again, by an abuse of language, we will simply refer to $\Gamma_{p}^{(1...1)}$ as the ``$p$th cumulant of the renormalized random field''.\cite{tarjus08}
\subsection{WT identities for the superrotational invariance}
Finally, we make use of the above developments to derive explicitly the WT identities associated with superrotational invariance when $\beta=0$ and the multi-copy theory is reduced to a one-copy problem by an appropriate choice of the supersources (see above). We start with Eq.~(\ref{eq_ward_0_1copy}), which we functionally differentiate to introduce higher-order 1PI vertices, and we consider configurations of the superfields that are uniform in the Grassmann subspace but nonuniform in the Euclidean subspace. Thanks to the expressions in section V-B, we decompose each identity into components associated with different polynomials in the Grassmann coordinates. We then find several types of relations: first, relations merely expressing the translational invariance in Euclidean space, namely,
\begin{equation}
\begin{aligned}
\partial_{1\mu}\Gamma_{1;x_1}^{(1)}[\phi] = - \int_{x_2}\phi(x_2)
\partial_{2\mu}\Gamma_{1;x_1x_2}^{(2)}[\phi],
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\left( \partial_{1\mu} + \partial_{2\mu}\right) \Gamma_{1;x_1x_2}^{(2)}[\phi] = - \int_{x_3}\phi(x_3)
\partial_{3\mu}\Gamma_{1;x_1x_2x_3}^{(3)}[\phi],
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
&\left( \partial_{1\mu} + \partial_{2\mu}\right) \Gamma_{2;x_1,x_2}^{(11)}[\phi,\phi] = \\&- \int_{x_3}\phi(x_3)
\partial_{3\mu}\left( \Gamma_{2;x_1x_3, x_2}^{(21)}[\phi,\phi] + \Gamma_{2;x_1,x_2x_3}^{(12)}[\phi,\phi]\right) ,
\end{aligned}
\end{equation}
etc.
Secondly, we also obtain more specific and interesting identities that relate $\Gamma_p$ and $\Gamma_{p+1}$, \textit{e.g.},
\begin{equation}
\label{eq_ward_susy_nonunif_2}
\begin{aligned}
\partial_{1\mu}\Gamma_{2;x_1,x_2}^{(11)}[\phi,\phi] &- \frac{\Delta_B}{2} (x_1^\mu-x_2^\mu)\Gamma_{1;x_1x_2}^{(2)}[\phi] = \\&- \int_{x_3}\phi(x_3)
\partial_{3\mu}\Gamma_{2;x_1x_3,x_2}^{(21)}[\phi,\phi],
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label{eq_ward_susy_nonunif_3}
&\partial_{1\mu}\Gamma_{3;x_1,x_2,x_3}^{(111)}[\phi,\phi,\phi] - \frac{\Delta_B}{2}\times \\&\big[ (x_1^\mu-x_2^\mu)\Gamma_{2;x_1x_2,x_3}^{(21)}[\phi,\phi] + (x_1^\mu-x_3^\mu)\Gamma_{2;x_1x_3,x_2}^{(21)}[\phi,\phi] \big] \\& = - \int_{x_4}\phi(x_4)
\partial_{4\mu}\Gamma_{3;x_1x_4,x_2,x_3}^{(211)}[\phi,\phi,\phi],
\end{aligned}
\end{equation}
etc. For a uniform physical field $\phi(x) = \phi$, Eq.~(\ref{eq_ward_susy_nonunif_2}) becomes
\begin{equation}
\label{eq_ward_susy_unif_2}
\begin{aligned}
\partial_{1\mu}\Gamma_{2;x_1,x_2}^{(11)}(\phi,\phi) - \frac{\Delta_B}{2} (x_1^\mu-x_2^\mu)\Gamma_{1;x_1x_2}^{(2)}(\phi) = 0,
\end{aligned}
\end{equation}
which after Fourier transforming and using the translational and rotational invariance in Euclidean space leads to
\begin{equation}
\label{eq_ward_susy_momentum_2}
\Gamma_{2}^{(11)}(q^2;\phi,\phi) = \Delta_B \partial_{q^2}\Gamma_{1}^{(2)}(q^2;\phi),
\end{equation}
with the obvious notation: $\Gamma_{2;q_1,q_2}^{(11)}(\phi,\phi)=(2\pi)^d \delta^{(d)}(q_1+q_2)\Gamma_{2}^{(11)}(q_1^2;\phi,\phi)$, etc. Similar identities are derived for the higher orders.
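To make the Fourier-transform step explicit (a brief consistency check; the Fourier convention used here, $\Gamma(x_1-x_2)=\int_q e^{iq\cdot(x_1-x_2)}\Gamma(q^2)$, is stated only for illustration): in momentum space $\partial_{1\mu}$ acts as $iq_\mu$ while, after an integration by parts in $q$, $(x_1^\mu-x_2^\mu)$ acts as $i\,\partial_{q_\mu}=2i\,q_\mu\,\partial_{q^2}$, so that Eq.~(\ref{eq_ward_susy_unif_2}) becomes
\begin{equation*}
i q_\mu \left[ \Gamma_{2}^{(11)}(q^2;\phi,\phi) - \Delta_B\, \partial_{q^2}\Gamma_{1}^{(2)}(q^2;\phi)\right] = 0,
\end{equation*}
from which Eq.~(\ref{eq_ward_susy_momentum_2}) follows for all $q\neq 0$.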
\section{NP-FRG in the superfield formalism}
\label{sec:NPFRG}
\subsection{Nonperturbative FRG}
The main purpose of the present work is to develop a formalism that allows one to study the supersymmetry, more specifically the invariance under superrotations, and its spontaneous breaking in the RFIM. To this end we upgrade our NP-FRG approach\cite{tarjus04,tissier06,tarjus08,tissier08} to a superfield formulation and use the tools developed in the previous sections to extend the Parisi-Sourlas formalism to the relevant situations in which multiple minima of the bare action may be present. The key points involve:
(1) adding an infrared regulator that enforces a progressive account of the fluctuations while ensuring that the initial condition of the RG flow satisfies the (super) symmetries of the action in Eq.~(\ref{eq_superaction_multicopy}) and corresponds to a unique solution of the stochastic field equation,
(2) considering copies of the original disordered system that give access to the full functional field dependence of the renormalized cumulants of the disorder,
(3) selecting the ground state through the introduction of a proper weighting factor and use of a curved superspace,
(4) using the WT identities associated with the (super) symmetries to ensure that neither the regulator nor the approximations explicitly break the latter.
Extending our previous work to the superfield theory, we introduce a generating functional of the correlation functions at the running scale $k$ for an arbitrary number $n$ of copies of the system (coupled to the same random field but submitted to different external sources) and a weighting factor involving the auxiliary parameter $\beta$,
\begin{equation}
\begin{aligned}
\label{eq_part_func}
\mathcal Z_k^{(\beta)}[\big \{ \mathcal J_a \big \}&]= \int \prod_{a=1}^n\mathcal D\Phi_a \exp \big \lbrace - \Delta S_k^{(\beta)}[\left\lbrace \Phi_a \right\rbrace] \\&- S^{(\beta)}[\left\lbrace \Phi_a \right\rbrace]+ \sum_{a=1}^n \int_{\underline{x}}\mathcal J_a(\underline x) \Phi_a(\underline x)\big \rbrace,
\end{aligned}
\end{equation}
where $S^{(\beta)}[\{\Phi_a \}]$ is defined in Eq.~(\ref{eq_superaction_multicopy}) and $\int_{\underline{x}}$ involves the curved superspace measure as in the preceding sections. As previously discussed, the $n$-copy action is invariant under the $S_n$ permutational symmetry, the $Z_2$ symmetry, the translations and rotations in the Euclidean space, and, separately for each copy, under the isometries of the curved Grassmann subspace. The regulator is as usual taken quadratic in the superfields. Demanding that it satisfies the above symmetries and requiring in addition that it keeps the multilocal form of the bare action in Eq.~(\ref{eq_superaction_multicopy}) imply the following form:
\begin{equation}
\label{eq_regulator}
\Delta S_k^{(\beta)}=\frac 12 \sum_{a_1,a_2=1}^n\int_{\underline{x}_1}\int_{\underline{x}_2} \Phi_{a_1}(\underline{x}_1)\mathcal R_{k,a_1a_2}(\underline{x}_1,\underline{x}_2)\Phi_{a_2}(\underline{x}_2),
\end{equation}
where $\mathcal R_{k,a_1a_2}$ denotes infrared cutoff functions satisfying
\begin{equation}
\begin{aligned}
\label{eq_replicatedregulator}
\mathcal R_{k,a_1a_2}(\underline{x}_1,\underline{x}_2) = &\delta_{a_1a_2}\delta_{\underline{\theta}_1 \underline{\theta}_2}(1-\beta \bar \theta_1 \theta_1) \widehat{R}_k(|x_1-x_2|)\\&+\widetilde{R}_k(|x_1-x_2|),
\end{aligned}
\end{equation}
where $\delta_{\underline{\theta}_1 \underline{\theta}_2}$ is defined below Eq.~(\ref{eq_legendre_invert}). The infrared cutoff functions are chosen such that the integration over modes with momentum $\vert q \vert \ll k$ is suppressed (see below); these functions must go to zero when $k\rightarrow 0$ so that full integration is recovered in this limit.\cite{berges02,tarjus04,tarjus08}
[More precisely, we shall see below that $\widetilde{R}_k(q^2)$ goes to zero except for $q=0$, which nonetheless does not alter the property that the regularized theory converges to the full theory when $k \rightarrow 0$.]
The central quantity of our NP-FRG approach is the so-called ``effective average action'' $\Gamma_k^{(\beta)}$,\cite{wetterich93,berges02} which is the generating functional of the 1PI (proper) vertices\cite{zinnjustin89} at scale $k$ and is obtained from $\mathcal W_k^{(\beta)}\equiv \log \mathcal Z_k^{(\beta)}$ by a modified Legendre transform [compare with Eq.~(\ref{eq_legendre_multicopy})],
\begin{equation}
\begin{aligned}
\label{eq_legendre_effective_average_action}
&\Gamma_k^{(\beta)}[\left\lbrace \Phi_a \right\rbrace] = \\&-\mathcal W_k^{(\beta)}[\left\lbrace \mathcal J_a \right\rbrace] +\sum_{a=1}^n \int_{\underline x} \mathcal J_a(\underline x) \Phi_a(\underline x)-\Delta S_k^{(\beta)}[\left\lbrace \Phi_a \right\rbrace].
\end{aligned}
\end{equation}
Its flow with the infrared scale $k$ is described by an exact RG equation (ERGE),\cite{wetterich93,berges02} which, after accounting for the curvature, reads
\begin{equation}
\begin{split}
\label{eq_ERGE_functional}
\partial_t \Gamma_k^{(\beta)}[\left\lbrace \Phi_a \right\rbrace]&=\frac 12 \sum_{a_1,a_2=1}^n\int_{\underline{x}_1}\int_{\underline{x}_2} (1-\beta \bar{\theta}_1 \theta_1) (1-\beta \bar{\theta}_2 \theta_2) \\&\times \partial_t \mathcal R_{k;a_1 a_2}(\underline{x}_1,\underline{x}_2) \mathcal P_{k;(a_1,\underline{x}_1)(a_2,\underline{x}_2)}^{(\beta)}[\left\lbrace \Phi_a \right\rbrace] ,
\end{split}
\end{equation}
where $t=\log(k/\Lambda)$; the (modified) propagator $\mathcal P_{k}^{(\beta)}$ is defined as the inverse of $\Gamma_k^{(2)} + \mathcal R_k$ in the sense defined in Eq.~(\ref{eq_legendre_invert}), \textit{i.e.},
\begin{equation}
\label{eq_full_propagator}
\begin{aligned}
&\sum_{a_3}\int_{\underline x_3}(1-2\beta\bar\theta_3\theta_3)\, \mathcal P_{k;(a_1,\underline{x}_1)(a_3,\underline{x}_3)}^{(\beta)} \bigg (\Gamma^{(2)}_{k;(a_3,\underline{x}_3)(a_2,\underline{x}_2)} \\&+ \mathcal R_{k;(a_3,\underline{x}_3)(a_2,\underline{x}_2)}\bigg ) =(1+\beta\bar\theta_1\theta_1) \delta_{a_1 a_2} \delta_{\underline x_1 \underline x_2} \, ,
\end{aligned}
\end{equation}
where $\Gamma_k^{(2)}[\left\lbrace \Phi_a \right\rbrace]$ is the second functional derivative of the effective average action with respect to the superfields and $\delta_{\underline x_1 \underline x_2}\equiv \delta^{(d)}(x_1-x_2) \delta_{\underline\theta_1 \underline\theta_2}$ [with, again, $\int_{\underline \theta_2}\delta_{\underline \theta_1 \underline\theta_2}=(1+\beta\bar\theta_1\theta_1)$]. Here and in the rest of this section, we omit the superscript $(\beta)$ in the functional derivatives of $\Gamma_k$ in order to avoid the awkward proliferation of superscripts. (Note that, since translational invariance is explicitly broken in the curved Grassmann space, Fourier transforming in the Grassmann subspace is of no use when $\beta \neq 0$.)
Due to the properties of the infrared cutoff functions, the effective average action reduces to the standard effective action $\Gamma^{(\beta)}[\left\lbrace \Phi_a \right\rbrace]$ when $k\rightarrow 0$. The initial condition at the microscopic (UV) scale, when $k\rightarrow \Lambda$, is more subtle. By using the definition of the effective average action, Eq.~(\ref{eq_legendre_effective_average_action}), and of the regulator, Eqs.~(\ref{eq_regulator},\ref{eq_replicatedregulator}), and after a change of integration variables, one obtains the expression:
\begin{equation}
\begin{aligned}
\label{eq_UVscale}
\exp&(-\Gamma_k^{(\beta)}[\left\lbrace \Phi_a \right\rbrace])=\int \prod_{a=1}^n\mathcal D\chi_a \exp \big \{- S^{(\beta)}[\left\lbrace \Phi_a +\chi_a\right\rbrace]\\& + \sum_{a=1}^n \int_{\underline{x}} \Gamma_{k;\underline{x}}^{(1)}[\left\lbrace \Phi_a +\chi_a\right\rbrace] \chi_a(\underline{x}) - \frac{1}{2}\int_{x_1 x_2} \widehat{R}_k(|x_1-x_2|) \\& \times \sum_{a=1}^n \int_{\underline{\theta}}\chi_a(x_1,\underline{\theta})\chi_a(x_2,\underline{\theta}) - \frac{1}{2}\int_{x_1 x_2} \widetilde{R}_k(|x_1-x_2|) \\& \times \big( \sum_{a=1}^n \int_{\underline{\theta}} \chi_a(x_1,\underline{\theta})\big) \big( \sum_{a=1}^n \int_{\underline{\theta}} \chi_a(x_2,\underline{\theta})\big)\big \}.
\end{aligned}
\end{equation}
If one requires that $\widehat{R}_k(|x|)$ diverges for all $x$ when $k \rightarrow \Lambda$ while $\widetilde{R}_k(|x|)$ stays bounded, the term in $\widehat{R}_k$ in the above expression acts as a delta functional for all the superfield variables $\chi_a$. As a consequence,
\begin{equation}
\Gamma_\Lambda^{(\beta)} [\left\lbrace \Phi_a \right\rbrace]= S^{(\beta)}[\left\lbrace \Phi_a \right\rbrace],
\end{equation}
\textit{i.e.}, the effective average action reduces to the bare action defined in Eq.~(\ref{eq_superaction_multicopy}).
\subsection{Properties and role of the cutoff functions}
We first go back from Eq.~(\ref{eq_part_func}) to the original formulation with the $\varphi_a$ fields in the presence of disorder (setting the fermionic sources to zero). The generating functional can then be expressed as
\begin{equation}
\begin{aligned}
\label{eq_part_func_regulator}
&\mathcal Z_k^{(\beta)}[\{ \hat J_a, J_a\}]\propto \int \mathcal D h\; \frac{\exp \big[\frac{-\vert h_q\vert^2}{2(\Delta_B-\widetilde{R}_k(q^2))}\big]}{\sqrt{\det(\Delta_B-\widetilde R_k)}}\prod_{a=1}^n \int \mathcal D\varphi_a \\&\delta\left [\dfrac{\delta S_B[\varphi_a]}{\delta \varphi_a(x)}+\int_y \widehat{R}_k(x-y)\varphi_a(y) -h(x)-J_a(x)\right ] \times \\& \det \left(\dfrac{\delta^2 S_B[\varphi_a]}{\delta \varphi_a \delta \varphi_a}+\widehat{R}_k\right)\; \exp \int_{x} \hat{J}_a(x) \varphi_a(x).
\end{aligned}
\end{equation}
One can see that $\widehat{R}_k$ is added to the second functional derivative $S_B^{(2)}[\varphi]$ of the bare action and plays the role of an infrared cutoff at scale $k$ for the fluctuations of the $\varphi$ field, as in the NP-FRG of pure systems.\cite{berges02} More specifically, in the present case a large enough cutoff function $\widehat{R}_k$ ensures that the operator $S_B^{(2)}[\varphi]+\widehat{R}_k$ is positive definite, which guarantees that the stochastic field equation has a unique solution. This is certainly true at the UV scale $\Lambda$, where $\widehat{R}_{k\rightarrow \Lambda}$ diverges. At the beginning of the flow, the regularized theory is therefore ``ultralocal'' in the Grassmann subspace. Since, as discussed in section V, the ``ultralocal'' cumulants $W_p$ and $\Gamma_p$ are independent of the curvature $\beta$ when uniqueness of the solution is enforced, the regularized theory at the start of the RG flow is also invariant under the superrotations.
The second cutoff function, $\widetilde{R}_k$ (required to be always positive), reduces the variance of the random magnetic source, \textit{i.e.}, it reduces the fluctuations of the bare disorder. From this one gets the additional constraint at the beginning of the flow, when the random field distribution is not yet renormalized, that $\Delta_B \geq \widetilde{R}_{\Lambda}(q^2)$ for all $q$'s. Note that, as one follows the RG flow by reducing the IR scale $k$, Eq.~(\ref{eq_part_func_regulator}) remains valid but is of little practical use: it is indeed more convenient to work with renormalized quantities, namely a renormalized disorder and a renormalized action or effective action. When $k$ decreases, so does the cutoff function $\widehat{R}_k$, which presumably leads at some point to a breaking of the ``ultralocal'' property when $\beta$ is finite. One should however keep in mind that the relevant solutions are associated with a renormalized random functional and no longer with the bare action, so that the deviation from ``Grassmannian ultralocality'' may be small and may even vanish along the flow (see below).
Finally, the superfield formalism offers a way to constrain the cutoff functions by relating them. As noted before, the action in Eq.~(\ref{eq_superaction_multicopy}) is invariant under the superrotations when the supersources are taken as uniform in the Grassmann subspace (\textit{i.e.}, $\hat{J}_b= \bar{K}_b = K_b = 0$) for all copies but one and when the curvature $\beta$ is set to zero. The regulator $\Delta S_k$ can be made explicitly invariant under the same conditions by choosing $\mathcal R_{k,aa}$ to be a function of the super-Laplacian $\Delta_{SS}$ only. As a result,\cite{footnote1}
\begin{equation}
\label{eq_susy_regulator}
\widetilde{R}_k(q^2)=-\Delta_B\, \partial_{q^2}\widehat{R}_k(q^2),
\end{equation}
where $q$ denotes the momentum in Euclidean $d$-dimensional space. Note that the above expression involves the strength of the bare disorder whereas, as also discussed above, a more general relation should allow for a proportionality factor expressed in terms of renormalized quantities rather than bare ones. This will be discussed in section VII-B.
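As a simple illustration of Eq.~(\ref{eq_susy_regulator}) (the specific cutoff below is chosen only as an example satisfying the stated requirements; it is not a choice made in this work), take a Litim-type cutoff:
\begin{equation*}
\widehat{R}_k(q^2)=(k^2-q^2)\,\theta(k^2-q^2) \quad \Longrightarrow \quad \widetilde{R}_k(q^2)=\Delta_B\,\theta(k^2-q^2),
\end{equation*}
where the term involving $\delta(k^2-q^2)$ drops out because it is multiplied by $(k^2-q^2)$. This pair satisfies $\widetilde{R}_k(q^2)\leq \Delta_B$ and illustrates the property noted earlier that $\widetilde{R}_k(q^2)$ vanishes as $k\rightarrow 0$ for any fixed $q\neq 0$ but not at $q=0$.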
\subsection{ERGE for the cumulants}
Aiming to derive exact FRG flow equations for the ``cumulants'' $\mathsf \Gamma_{kp}^{(\beta)}$, we first rewrite the ERGE for the effective average action more explicitly,
\begin{equation}
\begin{split}
\label{eq_ERGE_functional_explicit}
&\partial_t \Gamma_k^{(\beta)}[\left\lbrace \Phi_a \right\rbrace]=\frac 12 \int_q \bigg \{ \partial_t \widehat R_{k}(q^2) \int_{\underline{\theta}_1}(1-2 \beta \bar{\theta}_1 \theta_1) \sum_{a_1} \\& \mathcal P_{k;(a_1,-q \underline{\theta}_1)(a_1,q \underline{\theta}_1)}^{(\beta)}[\left\lbrace \Phi_a \right\rbrace] + \partial_t \widetilde R_{k}(q^2) \int_{\underline{\theta}_1}\int_{\underline{\theta}_2} (1-\beta \bar{\theta}_1 \theta_1) \\& (1-\beta \bar{\theta}_2 \theta_2) \sum_{a_1 a_2} \mathcal P_{k;(a_1,-q \underline{\theta}_1)(a_2,q \underline{\theta}_2)}^{(\beta)}[\left\lbrace \Phi_a \right\rbrace] \bigg \},
\end{split}
\end{equation}
and we use the expansion in increasing number of sums over copies. For the expansion of the full propagator $\mathcal P_{k}^{(\beta)}$ appearing on the right-hand side, it is convenient to introduce the following notation. A generic matrix $\mathcal A_{(a_1 \underline x_1)(a_2 \underline x_2)}[\left\lbrace \Phi_a\right\rbrace ]$ can be decomposed as
\begin{equation}
\label{eq_genericdecompos}
\mathcal A_{a_1a_2}[\left\lbrace \Phi_a\right\rbrace ]=\delta_{a_1a_2} \widehat{\mathcal A}_{a_1}[\left\lbrace \Phi_a\right\rbrace ] + \widetilde{\mathcal A}_{a_1a_2}[\left\lbrace \Phi_a\right\rbrace ],
\end{equation}
where we have dropped the explicit dependence on the coordinates. In the above expression, it is understood that the second term $\widetilde{\mathcal A}_{a_1a_2}$ no longer contains any explicit Kronecker symbol. Each component can now be expanded in increasing number of sums over copies,
\begin{equation}
\label{eq_widehatA}
\widehat{\mathcal A}_{a_1}[\left\lbrace \Phi_a\right\rbrace ]=\widehat{\mathcal A}^{[0]}[ \Phi_{a_1} ]+\sum_{b} \widehat{\mathcal A}^{[1]}[ \Phi_{a_1} \vert \Phi_b ]+\cdots
\end{equation}
\begin{equation}
\label{eq_widetildeA}
\widetilde{\mathcal A}_{a_1a_2}[\left\lbrace \Phi_a\right\rbrace ]=\widetilde{\mathcal A}^{[0]}[ \Phi_{a_1}, \Phi_{a_2}]+\sum_{b}\widetilde{\mathcal A}^{[1]}[ \Phi_{a_1}, \Phi_{a_2} \vert \Phi_{b}]+\cdots,
\end{equation}
where the superscripts in square brackets denote the order in the expansion (and should not be confused with superscripts in parentheses indicating functional derivatives). The $\widehat{\mathcal A}^{[p]}$'s and $\widetilde{\mathcal A}^{[p]}$'s are symmetric under any permutation of their arguments, and their functional form is independent of the number $n$ of copies (taken as arbitrarily large).
After inserting Eq.~(\ref{eq_multicopy1_app}) for $\Gamma_{k}^{(\beta)}$ and the above expressions applied to the propagator $\mathcal{P}_k^{(\beta)}$, we obtain a hierarchy of ERGE's for the cumulants $\mathsf \Gamma_{kp}^{(\beta)}$. The first equations read:
\begin{equation}
\label{eq_flow_Gamma1}
\begin{split}
&\partial_t \mathsf \Gamma_{k 1}^{(\beta)}\left[ \Phi_1\right ]=
\dfrac{1}{2} \int_{ q} \bigg \{ \partial_t \widehat{R}_k(q^2) \int_{\underline{\theta}_1}(1-2 \beta \bar{\theta}_1 \theta_1) \times \\& \big (\widehat{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_1)}^{[0]}[ \Phi_1] +\widetilde{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_1)}^{[0]}[ \Phi_1, \Phi_1] \big ) + \partial_t \widetilde R_{k}(q^2) \times \\& \int_{\underline{\theta}_1}\int_{\underline{\theta}_2} (1-\beta \bar{\theta}_1 \theta_1)(1-\beta \bar{\theta}_2 \theta_2)\widehat{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_2)}^{[0]}[\Phi_1] \bigg \} ,
\end{split}
\end{equation}
\begin{equation}
\label{eq_flow_Gamma20}
\begin{split}
\partial_t &\mathsf \Gamma_{k 2}^{(\beta)}\left[ \Phi_1 , \Phi_2\right ]= -
\dfrac{1}{2} \int_{ q} \bigg \{ \partial_t \widehat{R}_k(q^2) \int_{\underline{\theta}_1}(1-2 \beta \bar{\theta}_1 \theta_1) \times \\&\big (\widehat{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_1)}^{[1]}[ \Phi_1\vert \Phi_2] +\widetilde{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_1)}^{[1]}[ \Phi_1, \Phi_1 \vert \Phi_2] \big ) + \\& \partial_t \widetilde R_{k}(q^2) \int_{\underline{\theta}_1}\int_{\underline{\theta}_2} (1-\beta \bar{\theta}_1 \theta_1)(1-\beta \bar{\theta}_2 \theta_2) \times \\& \big (\widehat{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_2)}^{[1]}[ \Phi_1\vert \Phi_2] + \widetilde{\mathcal P}_{k;(-q \underline{\theta}_1)(q \underline{\theta}_2)}^{[0]}[ \Phi_1, \Phi_2] \big )\\&+ perm (12)\bigg \},
\end{split}
\end{equation}
and so on, where $perm (12)$ denotes the expression obtained by permuting $ \Phi_1$ and $ \Phi_2$; for clarity we have omitted the superscript $(\beta)$ in the right-hand side.
The components of the propagator, $\widehat{\mathcal P}_{k}^{[p]}$ and $\widetilde{\mathcal P}_{k}^{[p]}$, can be expressed in terms of second derivatives of the cumulants $\mathsf \Gamma_{kq}$ by inserting into Eq.~(\ref{eq_full_propagator}) the expressions (\ref{eq_widehatA},\ref{eq_widetildeA}) with $\mathcal A$ equal to the second functional derivative of Eq.~(\ref{eq_multicopy1_app}). The algebraic manipulations are given in Appendix~\ref{appendix:expansion_copies}. One finds for instance that the propagator $\widehat{\mathcal P}_{k}^{[0]}$ can be symbolically expressed as
\begin{equation}
\label{eq_hatpropagator}
\widehat {\mathcal P}_{k}^{[0]}[ \Phi_1 ]=\left( \mathsf{\Gamma} _{k 1}^{(2)}[ \Phi_1 ]+\widehat R_k U\right) ^{-1},
\end{equation}
which means that
\begin{equation}
\label{eq_hatpropagator_definition}
\begin{aligned}
&\int_{\underline{x}_3}(1-2\beta \bar \theta_3 \theta_3)\, \widehat{\mathcal P}_{k;\underline{x}_1 \underline{x}_3}^{[0]} \bigg (\mathsf \Gamma^{(2)}_{k1;\underline{x}_3 \underline{x}_2}+ \widehat R_{k;x_3 x_2} U_{\underline{\theta}_3 \underline{\theta}_2}\bigg ) \\& =U_{\underline{\theta}_1 \underline{\theta}_2} \delta^{(d)}( x_1 - x_2) \, ,
\end{aligned}
\end{equation}
where we have introduced $U_{\underline{\theta}_1 \underline{\theta}_2}=(1 + \beta \bar \theta_1 \theta_1) \delta_{\underline{\theta}_1 \underline{\theta}_2}$, and that $\widetilde{\mathcal P}_{k}^{[0]}$ is given by
\begin{equation}
\begin{aligned}
\label{eq_tildepropagator}
&\widetilde {\mathcal P}_{k;\underline{x}_1 \underline{x}_2}^{[0]}[ \Phi_1, \Phi_2 ]=\int_{\underline{x}_3}\int_{ \underline{x}_4}(1 -2 \beta \bar \theta_3 \theta_3)(1 -2 \beta \bar \theta_4 \theta_4)\\& \widehat {\mathcal P}_{k;\underline{x}_1 \underline{x}_3}^{[0]}[ \Phi_1 ] \big( \mathsf{\Gamma} _{k2;\underline{x}_3, \underline{x}_4}^{(11)}[ \Phi_1, \Phi_2 ]-(1 + \beta \bar \theta_3 \theta_3)(1 + \beta \bar \theta_4 \theta_4)\\&
\times \widetilde R_{k;x_3 x_4} \big ) \widehat {\mathcal P}_{k;\underline{x}_4 \underline{x}_2}^{[0]}[ \Phi_2 ].
\end{aligned}
\end{equation}
Eqs.~(\ref{eq_hatpropagator}) and (\ref{eq_tildepropagator}) can be inserted in Eq.~(\ref{eq_flow_Gamma1}), which provides an ERGE for $\mathsf \Gamma_{k 1}^{(\beta)}$ expressed solely in terms of cumulants associated with the effective average action.
After introducing the short-hand notation $\widetilde{\partial}_t$ to indicate a derivative acting only on the cutoff functions (\textit{i.e.}, $\widetilde{\partial}_t \equiv \partial_t \widehat{R}_k\, \delta/\delta \widehat{R}_k + \partial_t \widetilde{R}_k \, \delta/\delta \widetilde{R}_k$), Eq. (\ref{eq_flow_Gamma20}) can now be rewritten as
\begin{equation}
\label{eq_flow_Gamma2_final}
\begin{split}
&\partial_t \mathsf \Gamma_{k2}^{(\beta)}\left[ \Phi_1 , \Phi_2\right ]= \dfrac{1}{2} \widetilde{\partial}_t \bigg \{ \int_{\underline{x}_3}\int_{ \underline{x}_4}(1 -2 \beta \bar \theta_3 \theta_3)(1 -2 \beta \bar \theta_4 \theta_4) \times \\&\bigg [ \widehat{\mathcal P}_{k;\underline{x}_3 \underline{x}_4}^{[0]}\left[ \Phi_1 \right ] ( \mathsf \Gamma _{k2;\underline{x}_4 \underline{x}_3,.}^{(20)}\left[ \Phi_1, \Phi_2 \right ] - \mathsf \Gamma _{k3;\underline{x}_4, \underline{x}_3,.}^{(110)}\left[ \Phi_1, \Phi_1, \Phi_2 \right ]) \\&+ \widetilde{\mathcal P}_{k;\underline{x}_3 \underline{x}_4}^{[0]}\left[ \Phi_1, \Phi_1 \right ] \mathsf \Gamma _{k2;\underline{x}_4 \underline{x}_3,.}^{(20)}\left[ \Phi_1, \Phi_2 \right ] +\dfrac{1}{2} \widetilde{ \mathcal P}_{k;\underline{x}_3 \underline{x}_4}^{[0]}\left[ \Phi_1, \Phi_2 \right ]\times \\&(\mathsf \Gamma _{k2;\underline{x}_4, \underline{x}_3}^{(11)}\left[ \Phi_1, \Phi_2 \right ] - (1 + \beta \bar \theta_4 \theta_4)(1 + \beta \bar \theta_3 \theta_3)\widetilde{R}_{k;x_4x_3} ) \\& + perm (12) \bigg ]\bigg \},
\end{split}
\end{equation}
where, again, $perm (12)$ denotes the expression obtained by permuting $ \Phi_1$ and $ \Phi_2$. Similar ERGE's are obtained for the higher-order cumulants. A graphical representation of the hierarchy of ERGE's is provided in Appendix~\ref{appendix:graphical}.
This provides a hierarchy of exact RG equations for the cumulants of the renormalized disorder (including the first one, which leads to a description of the thermodynamics). One should note that (i) the cumulants are functionals of the superfields and contain full information on the complete set of 1PI correlation functions, (ii) the flow equations are coupled, the $(p+1)$th cumulant appearing in the right-hand side of the equation for the $p$th cumulant, and (iii) to obtain the flow equation for $\mathsf \Gamma_{kp}[\Phi_1,...,\Phi_p]$ with its full functional dependence on the $p$ field arguments, one needs to consider at least $p$ copies of the original system. Formally, the whole hierarchy of flow equations for the cumulants can thus be obtained by considering an arbitrarily large number of copies.
\subsection{ERGE for the cumulants under the hypothesis of ``Grassmannian ultralocality''}
As such, the hierarchy of ERGE's obtained above is expressed in terms of superfields and superspace coordinates and is intractable in general. It is instructive to consider the simplification that arises under the hypothesis of ``Grassmannian ultralocality''. This is achieved by considering the flow equations when the cumulants are evaluated for ``physical'' configurations $\Phi_a(\underline x)=\phi_a(x)$ of the superfields, \textit{i.e.}, configurations that are uniform in the Grassmann subspace. Then, from Eq.~(\ref{eq_cumulant_p_ultralocal}), $\mathsf \Gamma_{kp}^{(\beta)}\left[ \Phi_1 ,..., \Phi_p\right ]=\beta^p\, \Gamma_{kp}^{(\beta)}\left[ \phi_1 ,...,\phi_p\right ]$. We next insert the results of section V in the ERGE's and introduce the following ``hat'' and ``tilde'' propagators,
\begin{equation}
\label{eq_hatP_zero}
\widehat {P}_{k}^{[0]}[\phi ]=\left(\Gamma _{k1}^{(2)}[ \phi ]+\widehat R_k\right) ^{-1}
\end{equation}
and
\begin{equation}
\label{eq_tildeP_zero}
\widetilde {P}_{k}^{[0]}[\phi_1, \phi_2 ]= \widehat {P}_{k}^{[0]}[ \phi_1 ](\Gamma _{k2}^{(11)}[\phi_1, \phi_2 ]-\widetilde R_k ) \widehat {P}_{k}^{[0]}[ \phi_2 ],
\end{equation}
which are obtained from Eqs.~(\ref{eq_hatpropagator}-\ref{eq_tildepropagator}) with, as in Eqs.~(\ref{eq_expand_Gamma_2hat},\ref{eq_expand_Gamma_2tilde}), $\widehat {\mathcal P}_{k;\underline \theta_1 \underline \theta_2}^{[0]}[\phi]=(1+\beta \bar\theta_1 \theta_1) \delta_{\underline \theta_1 \underline \theta_2}\widehat {P}_{k}^{[0]}[\phi]$ and $\widetilde {\mathcal P}_{k;\underline \theta_1 \underline \theta_2}^{[0]}[\phi_1,\phi_2]=(1+\beta \bar\theta_1 \theta_1)(1+\beta \bar\theta_2 \theta_2) \widetilde {P}_{k}^{[0]}[\phi_1,\phi_2]$.
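For uniform field configurations (a special case in which translational invariance in Euclidean space allows Fourier transforming; we state this only as a sketch of the structure), Eqs.~(\ref{eq_hatP_zero},\ref{eq_tildeP_zero}) reduce to algebraic relations at each momentum $q$:
\begin{equation*}
\widehat{P}_{k}^{[0]}(q^2;\phi)=\left[\Gamma_{k1}^{(2)}(q^2;\phi)+\widehat{R}_k(q^2)\right]^{-1},
\end{equation*}
\begin{equation*}
\widetilde{P}_{k}^{[0]}(q^2;\phi_1,\phi_2)=\widehat{P}_{k}^{[0]}(q^2;\phi_1)\left[\Gamma_{k2}^{(11)}(q^2;\phi_1,\phi_2)-\widetilde{R}_k(q^2)\right]\widehat{P}_{k}^{[0]}(q^2;\phi_2),
\end{equation*}
so that the operator inversion in Eq.~(\ref{eq_hatP_zero}) becomes a simple scalar inversion momentum by momentum.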
The exact flow equation for the first cumulant of the renormalized disorder is finally obtained as
\begin{equation}
\label{eq_flow_Gamma1_ULapp}
\begin{split}
&\partial_t\Gamma_{k1}\left[\phi_1\right ]= \\&-
\dfrac{1}{2} \tilde{\partial}_t \int_{x_2x_3}\widehat{P}_{k;x_2x_3}^{[0]}[\phi_1] \big(\Gamma_{k2;x_2,x_3}^{(11)}\left[\phi_1,\phi_1\right ] - \widetilde{R}_{k;x_2x_3}\big),
\end{split}
\end{equation}
where the explicit dependence on the Euclidean coordinates has been momentarily reinstated. One similarly derives an ERGE for the second cumulant,
\begin{equation}
\label{eq_flow_Gamma2_ULapp}
\begin{split}
&\partial_t\Gamma_{k2}\left[\phi_1,\phi_2\right ]=\\&
\dfrac{1}{2} \tilde{\partial}_t \int_{x_3x_4}\big \{- \widehat{P}_{k;x_3x_4}^{[0]}\left[\phi_1\right ] \Gamma_{k3;x_3,.,x_4}^{(101)}\left[\phi_1,\phi_2,\phi_1\right ]+\\& \widetilde{P}_{k;x_3x_4}^{[0]}\left[\phi_1,\phi_1\right ] \Gamma_{k2;x_3x_4,.}^{(20)}\left[\phi_1,\phi_2\right ]+ \frac{1}{2}\widetilde{P}_{k;x_3x_4}^{[0]}\left[\phi_1,\phi_2\right ] \\& \times \left( \Gamma_{k2;x_3,x_4}^{(11)}\left[\phi_1,\phi_2\right ] - \widetilde{R}_{k;x_3x_4}\right) + perm(12)\big \},
\end{split}
\end{equation}
where $perm (12)$ denotes the expression obtained by permuting $\phi_1$ and $\phi_2$.
Generically, the flow of $\Gamma_{kp}\left[\phi_1,...,\phi_p\right ]$ involves three types of quantities: the propagators $\widehat{P}_{k}^{[0]}$ and $\widetilde{P}_{k}^{[0]}$, second functional derivatives of $\Gamma_{kp}$ in which all the arguments are different, and second functional derivatives of $\Gamma_{k(p+1)}$ with two of their arguments equal to each other (for a graphical representation of the hierarchy of ERGE's, see Appendix~\ref{appendix:graphical}). We will come back in more detail to the structure of these flow equations in the following paper, but we note for future reference that these equations are independent of the auxiliary parameter $\beta$, and so is their solution if the initial condition is itself ``ultralocal''.
Finally, we point out that the above ERGE's coincide with those previously derived without the superfield formalism by means of an expansion in number of free replica sums, when evaluated at $T=0$.\cite{tarjus04, tarjus08} The same is true for the ERGE for all higher-order cumulants. It is however important to stress that our previous replica approach provides no obvious way to make the superrotational invariance (or lack of it) explicit and, therefore, neither to relate the two infrared cutoff functions $\widetilde R_k$ and $\widehat R_k$ nor to provide guidance for devising nonperturbative approximations to the ERGE's that do not explicitly break the underlying supersymmetry (see the companion paper).
\section{Ground-state dominance in the NP-FRG}
\label{sec:ground-state_dominance}
\subsection{Taking the limit of infinite curvature (zero auxiliary temperature)}
We now come to a central step of our approach. Introducing the auxiliary temperature $\beta^{-1}$ has allowed us to place the superfield formalism on firm ground. However, being ultimately interested in studying the ground-state properties of the system, we would like to take the limit $\beta^{-1} \rightarrow 0$ in the exact flow equations derived above. To make the dependence on $\beta$ explicit in the ERGE's, we first rescale the Grassmann coordinates and, accordingly, the auxiliary fields: $(\bar \theta, \theta) = \beta^{-1/2}(\bar \omega, \omega)$, $(\bar \psi, \psi) = \beta^{1/2}(\widetilde{\bar \psi}, \widetilde \psi)$, and $\hat \phi=\beta \widetilde{\hat \phi}$, so that $\Phi(\underline \theta)\equiv \widetilde \Phi(\underline \omega)$ with $\widetilde \Phi(\underline \omega)=\phi +\bar \omega \widetilde \psi+\widetilde{\bar \psi} \omega+\bar \omega \omega \widetilde{\hat \phi}$.
The cumulants associated with the effective average action can be formally written as
\begin{equation}
\label{eq_}
\mathsf \Gamma_{kp}^{(\beta)}[\Phi_{a_1},...,\Phi_{a_p}]=\int_{\underline \theta_1}...\int_{\underline \theta_p} \Gamma_{kp}^{(\beta)}[1,...,p],
\end{equation}
where in the right-hand side, as in Eqs.~(\ref{eq_cumW1_formal},\ref{eq_cumW2_formal}), $q \in \{1,...,p\}$ denotes
\begin{equation}
\{\Phi_{a_q}(\underline{\theta}_q), (1+\frac{\beta}{2} \bar{\theta}_q \theta_q)\, \partial_{\underline \theta_q}\Phi_{a_q}(\underline{\theta}_q), \Delta_{\underline{\theta}_q}\Phi_{a_q}(\underline{\theta}_q)\}.
\end{equation}
When changing Grassmann coordinates from $\underline\theta$ to $\underline\omega$, we assume that
$\Gamma_{kp}^{(\beta)}[1,...,p]=\Gamma_{kp}^{(\beta)}[\tilde 1,...,\tilde p]$, where $\tilde 1$ denotes $\{\widetilde \Phi_{a_1}(\underline{\omega}_1)$, $(1+\frac{1}{2} \bar{\omega}_1\omega_1 ) \partial_{\underline{\omega}_1}\widetilde \Phi_{a_1}(\underline{\omega}_1), \Delta_{\underline{\omega}_1}\widetilde \Phi_{a_1}(\underline{\omega}_1) \}$, and so on, with the same functional form for $\Gamma_{kp}^{(\beta)}$. This guarantees that the derivatives of the ``non-ultralocal'' contributions to the $\Gamma_{kp}^{(\beta)}$'s do not come with increasing factors of $\beta$ which would completely spoil the limit of infinite $\beta$. We shall support this argument by studying the first ``non-ultralocal'' corrections (see below).
Then, taking into account that an integral over a Grassmann variable acts like a derivative leads to
\begin{equation}
\label{eq_betadependence_cumulant}
\mathsf \Gamma_{kp}^{(\beta)}[\Phi_{a_1},...,\Phi_{a_p}]=\beta^p \int_{\underline \omega_1}...\int_{\underline \omega_p} \Gamma_{kp}^{(\beta)}[\tilde 1,...,\tilde p].
\end{equation}
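The overall factor $\beta^p$ in Eq.~(\ref{eq_betadependence_cumulant}) can be checked on a single Grassmann pair (a one-line consistency check using the Berezin rule $\int d\theta\,\theta=1$): since Grassmann measures transform inversely to the variables, the rescaling $(\bar\theta,\theta)=\beta^{-1/2}(\bar\omega,\omega)$ implies
\begin{equation*}
\int d\bar\theta\, d\theta = \beta \int d\bar\omega\, d\omega, \qquad \beta\,\bar\theta\theta = \bar\omega\omega,
\end{equation*}
so that the curvature factor appearing in the measure is left unchanged while each of the $p$ Grassmann integrals contributes one factor of $\beta$.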
In addition, by using the identity $\delta /\delta \Phi_a(\underline\theta) \equiv \delta /\delta \widetilde \Phi_a(\underline\omega)$, which can be proven by considering the components of the superfield in the old and the new coordinate system, we arrive at
\begin{equation}
\begin{aligned}
\label{eq_betadependence_derivativecumulant}
&\frac{\delta ^q}{\delta \Phi_{b_1}(\underline\theta_{b_1})...\delta \Phi_{b_q}(\underline\theta_{b_q})}\mathsf \Gamma_{kp}^{(\beta)}[\Phi_{a_1},...,\Phi_{a_p}]\\&=\beta^{p-q}\int_{\underline \omega_1}...\int_{\underline \omega_p} \frac{\delta ^q}{\delta\widetilde \Phi_{b_1}(\underline\omega_{b_1})...\delta\widetilde \Phi_{b_q}(\underline\omega_{b_q})}\Gamma_{kp}^{(\beta)}[\tilde 1,...,\tilde p],
\end{aligned}
\end{equation}
where the functional dependence on the Euclidean coordinates is left implicit.
After the above preliminaries, we consider the ERGE's for the cumulants, \textit{e.g.} Eqs.~(\ref{eq_flow_Gamma1},\ref{eq_flow_Gamma2_final}) for the first two cumulants. We have to study the explicit $\beta$ dependence of the propagators $\widehat{\mathcal P}_{k}^{[0]}$ and $\widetilde{\mathcal P}_{k}^{[0]}$. After changing the Grassmann coordinates in Eqs.~(\ref{eq_hatpropagator_definition},\ref{eq_tildepropagator}) and using Eq.~(\ref{eq_betadependence_derivativecumulant}), we easily obtain
\begin{equation}
\label{eq_betadependence_hatP0}
\widehat {\mathcal P}_{k;\underline{\theta}_1 \underline{\theta}_2}^{[0]}[ \Phi_1 ] =\frac{1}{\beta} \widehat {\mathcal P}_{k;\underline{\omega}_1 \underline{\omega}_2}^{[0]}[ \widetilde \Phi_1 ]
\end{equation}
and
\begin{equation}
\label{eq_betadependence_tildeP0}
\widetilde {\mathcal P}_{k;\underline{\theta}_1 \underline{\theta}_2}^{[0]}[ \Phi_1, \Phi_2 ]=\widetilde {\mathcal P}_{k;\underline{\omega}_1 \underline{\omega}_2}^{[0]}[ \widetilde \Phi_1, \widetilde \Phi_2 ],
\end{equation}
where $\widetilde \Phi$ is the superfield defined with the new coordinate system and the rescaling of the auxiliary fields (see above).
The ERGE's for the cumulants can then be reexpressed as
\begin{equation}
\label{eq_flow_Gamma1_scaled}
\begin{split}
&\partial_t \int_{\underline{\omega}_1} \Gamma_{k 1}^{(\beta)}\left[\tilde1\right ]=
\dfrac{1}{2} \mathrm{Tr} \bigg \{ \partial_t \widehat{R}_k \int_{\underline{\omega}_1}(1-2 \bar{\omega}_1 \omega_1) \times \\& \big (\frac{1}{\beta} \widehat{\mathcal P}_{k;\underline{\omega}_1 \underline{\omega}_1}^{[0]}[\widetilde \Phi_1] +\widetilde{\mathcal P}_{k;\underline{\omega}_1 \underline{\omega}_1}^{[0]}[\widetilde \Phi_1,\widetilde \Phi_1] \big ) + \partial_t \widetilde R_{k} \times \\& \int_{\underline{\omega}_1}\int_{\underline{\omega}_2}(1- \bar{\omega}_1 \omega_1)(1- \bar{\omega}_2 \omega_2)\widehat{\mathcal P}_{k;\underline{\omega}_1 \underline{\omega}_2}^{[0]}[\widetilde \Phi_1] \bigg \} ,
\end{split}
\end{equation}
\begin{equation}
\label{eq_flow_Gamma2_final_scaled}
\begin{split}
&\partial_t \int_{\underline{\omega}_1}\int_{ \underline{\omega}_2} \Gamma_{k2}^{(\beta)}\left[ \tilde1 , \tilde2\right ]= \dfrac{1}{2} \widetilde{\partial}_t \mathrm{Tr} \bigg \{ \int_{\underline{\omega}_3}\int_{ \underline{\omega}_4}(1-2 \bar{\omega}_3 \omega_3)\times \\&(1-2 \bar{\omega}_4 \omega_4)\bigg [ \frac{1}{\beta}\, \widehat{\mathcal P}_{k;\underline{\omega}_3 \underline{\omega}_4}^{[0]}\left[ \widetilde \Phi_1 \right ] \mathsf \Gamma _{k2;\underline{\omega}_4 \underline{\omega}_3,.}^{(20)}\left[ \widetilde \Phi_1, \widetilde \Phi_2 \right ] -\\& \widehat{\mathcal P}_{k;\underline{\omega}_3 \underline{\omega}_4}^{[0]}\left[ \widetilde \Phi_1 \right ] \mathsf \Gamma _{k3;\underline{\omega}_4, \underline{\omega}_3,.}^{(110)}\left[ \widetilde \Phi_1, \widetilde \Phi_1, \widetilde \Phi_2 \right ]+ \widetilde{\mathcal P}_{k;\underline{\omega}_3 \underline{\omega}_4}^{[0]}\left[ \widetilde \Phi_1, \widetilde \Phi_1 \right ]\times \\& \mathsf \Gamma _{k2;\underline{\omega}_4 \underline{\omega}_3,.}^{(20)}\left[ \widetilde \Phi_1, \widetilde \Phi_2 \right ] +\dfrac{1}{2} \widetilde{ \mathcal P}_{k;\underline{\omega}_3 \underline{\omega}_4}^{[0]}\left[ \widetilde \Phi_1, \widetilde \Phi_2 \right ]\bigg (\mathsf \Gamma _{k2;\underline{\omega}_4, \underline{\omega}_3}^{(11)}\left[ \widetilde \Phi_1, \widetilde \Phi_2 \right ] \\& - (1+ \bar{\omega}_4 \omega_4)(1+ \bar{\omega}_3 \omega_3)\widetilde{R}_{k} \bigg ) + perm (12) \bigg ]\bigg \},
\end{split}
\end{equation}
where $\mathrm{Tr}$ indicates a trace over the Euclidean momenta (which are not shown explicitly). Similar expressions are obtained for the higher-order cumulants. One actually finds a structure that is analogous to that derived in our previous replica approach\cite{tarjus08} when considering the (bath) temperature $T$: the cumulant of order $p$ comes with a factor $T^{-p}$, its $q$th derivative with a factor $T^{-(p-q)}$, the ``hat'' propagator has a factor $1/T$ and the ``tilde'' propagator no explicit factor of $T$. The limit $\beta \rightarrow \infty$ allows one to drop the terms in the above flow equations that have an explicit factor of $1/\beta$, namely the first terms on the right-hand sides. It should however be kept in mind that, as in the ``thermal'' case when $T\rightarrow 0$, the limit $1/\beta \rightarrow 0$ is expected to be nonuniform in the (super)field dependence. This will be discussed further down and in the following paper.
Having introduced the change of Grassmann coordinates and the formal way to study the limit $\beta \rightarrow \infty$, we can go one step beyond in the analysis of the ERGE's. To do so, we consider superfields that are uniform in the Grassmann subspace, \textit{i.e.} $\Phi_a=\widetilde \Phi_a=\phi_a$, and we generalize the results of sections V-B and VI-B to the case where the cumulants $\mathsf \Gamma_{kp}^{(\beta)}$ have a generic ``non-ultralocal'' component,
\begin{equation}
\begin{aligned}
\label{eq_cumulant_p_nonultralocal}
\mathsf \Gamma_{kp}^{(\beta)}[\Phi_{1},...,\Phi_{p}] = \beta^p\int_{\underline{\omega}_1}...\int_{\underline{\omega}_p}&
\bigg (\Gamma_{kp}^{(\beta)UL}[\widetilde \Phi_1(\underline{\omega}_1),...,\widetilde \Phi_p(\underline{\omega}_p)]\\&+ \Gamma_{kp}^{(\beta)NUL}[\widetilde 1,...,\widetilde p] \bigg ),
\end{aligned}
\end{equation}
where $\Gamma_{kp}^{(\beta)NUL}$ involves derivatives of the superfields in the Grassmann directions and is equal to zero when the superfields are uniform in the Grassmann subspace: as a result, $\mathsf{\Gamma}_{kp}^{(\beta)}[\phi_{1},...,\phi_{p}]=\beta^p\Gamma_{kp}^{(\beta)UL}[\phi_1,...,\phi_p]$ . The second functional derivative of the effective average action can be decomposed as in Eq.~(\ref{eq_genericdecompos}) and, by applying the WT identities associated with invariance under the isometries of the curved Grassmann subspace (see section IV-D and Appendix~\ref{appendix:nonultralocal}), one finds the following general structure for the ``hat'' and ``tilde'' components:
\begin{equation}
\begin{split}
\label{eq_hatGamma2_unifNUL}
\widehat {\Gamma}_{k;a_1\underline{\omega}_1 \underline{\omega}_2}^{(2)}[ \{ \phi_a\}]& =(1+ \bar{\omega}_1 \omega_1)\delta_{\underline{\omega}_1 \underline{\omega}_2}\widehat {A}_{k;a_1}[ \{ \phi_a\}] \\&+ (1+\bar \omega_1 \omega_2+\bar \omega_2 \omega_1)\widehat {B}_{k;a_1}[ \{ \phi_a\}]
\end{split}
\end{equation}
and
\begin{equation}
\label{eq_tildeGamma2_unifNUL}
\widetilde {\Gamma}_{k;(a_1\underline{\omega}_1)(a_2 \underline{\omega}_2)}^{(2)}[ \{\phi_a\}]=(1+ \bar{\omega}_1 \omega_1) (1+ \bar{\omega}_2 \omega_2)\widetilde {C}_{k;a_1a_2}[ \{\phi_a\}],
\end{equation}
where we recall that a factor $1/\beta$ is present when changing variables from $\underline \theta$ to $\underline \omega$ in $\widehat {\Gamma}_{k}^{(2)}$.
In addition, it can be proven that $\widehat{A}_{k;a_1}$ and $\widetilde{C}_{k;a_1a_2}$ are obtained only from the ``ultralocal'' part of the cumulants, namely (leaving again implicit the dependence on the Euclidean coordinates),
\begin{equation}
\begin{aligned}
\label{eq_expand_Gamma_2hatNUL1}
\widehat{A}_{k;a_1}[\left\lbrace \phi_a \right\rbrace ] =\Gamma_{k1}^{UL(2)}[\phi_{a_1}] - \beta \sum_{a_2}\Gamma_{k2}^{UL(20)}[\phi_{a_1},\phi_{a_2}]+ \cdots,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label{eq_expand_Gamma_2tildeNUL}
\widetilde{C}_{k;a_1a_2}[\left\lbrace \phi_a \right\rbrace ]=-&\Gamma_{k2}^{UL(11)}[\phi_{a_1},\phi_{a_2}]\\& + \beta \sum_{a_3}\Gamma_{k3}^{UL(110)}[\phi_{a_1},\phi_{a_2},\phi_{a_3}]+ \cdots,
\end{aligned}
\end{equation}
whereas $\widehat{B}_{k;a_1}$, whose precise expression is not particularly illuminating at this point, involves the ``non-ultralocal'' part of the cumulants. As a result, $\widehat{B}_{k;a_1}$ is equal to zero when the effective average action is purely ``ultralocal'' in the Grassmann subspace. The demonstration of the above property and of Eqs.~(\ref{eq_hatGamma2_unifNUL}-\ref{eq_expand_Gamma_2tildeNUL}) is provided in Appendix~\ref{appendix:nonultralocal}.
The full propagator $\mathcal P_{k;(a_1\underline{\omega}_1)(a_2 \underline{\omega}_2)}$, which is the inverse of $\Gamma_k^{(2)} +\mathcal R_k$, has the same structure as in Eqs.~(\ref{eq_hatGamma2_unifNUL},\ref{eq_tildeGamma2_unifNUL}) with
\begin{equation}
\begin{split}
\label{eq_hatP0_unifNUL}
\widehat {\mathcal P}_{k;a_1\underline{\omega}_1 \underline{\omega}_2}[ \{ \phi_a\}]& =(1+ \bar{\omega}_1 \omega_1)\delta_{\underline{\omega}_1 \underline{\omega}_2}\widehat {P}_{k;a_1}[ \{ \phi_a\}] \\&+ (1+\bar \omega_1 \omega_2+\bar \omega_2 \omega_1)\widehat {Q}_{k;a_1}[ \{ \phi_a\}]
\end{split}
\end{equation}
and
\begin{equation}
\label{eq_tildeP0_unif}
\widetilde {\mathcal P}_{k;(a_1\underline{\omega}_1)(a_2 \underline{\omega}_2)}[ \{\phi_a\}]=(1+ \bar{\omega}_1 \omega_1) (1+ \bar{\omega}_2 \omega_2)\widetilde {P}_{k;a_1a_2}[ \{\phi_a\}],
\end{equation}
where $\widehat {P}_{k;a_1}$ and $\widetilde {P}_{k;a_1a_2}$ are expressed only in terms of ``ultralocal'' cumulants whereas $\widehat {Q}_{k;a_1}$ also involves ``non-ultralocal'' terms and therefore vanishes when the latter go to zero. All these quantities can be expanded in increasing numbers of sums over copies. The zeroth-order components of the propagator, $\widehat {P}_{k}^{[0]}$, $\widetilde{P}_{k}^{[0]}$ and $\widehat{Q}_{k}^{[0]}$, are then expressed as
\begin{equation}
\label{eq_hatP_zero_NUL}
\widehat {P}_{k}^{[0]}[\phi ]=\left(\Gamma _{k1}^{UL(2)}[ \phi ]+\widehat R_k\right) ^{-1},
\end{equation}
which denotes an inversion in the sense of operators in Euclidean space,
\begin{equation}
\label{eq_tildeP_zero_NUL}
\widetilde {P}_{k}^{[0]}[\phi_1, \phi_2 ]= \widehat {P}_{k}^{[0]}[ \phi_1 ](\Gamma _{k2}^{UL(11)}[\phi_1, \phi_2 ]-\widetilde R_k ) \widehat {P}_{k}^{[0]}[ \phi_2 ],
\end{equation}
and
\begin{equation}
\begin{split}
\label{eq_hatQ_zero_NUL}
&\widehat{Q}_{k}^{[0]}[\phi ]=\\&-\left(\Gamma _{k1}^{UL(2)}[ \phi ]+\widehat R_k\right) ^{-1}\widehat B_k^{[0]}[\phi]\left(\Gamma _{k1}^{UL(2)}[ \phi ]-\widehat B_k^{[0]}[\phi]+\widehat R_k\right) ^{-1}
\end{split}
\end{equation}
where the Euclidean indices are omitted for simplicity. Note the (expected) correspondence between the above expressions for $\widehat {P}_{k}^{[0]}$ and $\widetilde {P}_{k}^{[0]}$ and those derived under the hypothesis of ``Grassmannian ultralocality'', Eqs.~(\ref{eq_hatP_zero}) and (\ref{eq_tildeP_zero}).
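As a quick numerical illustration of the three zeroth-order propagator formulas above: for uniform fields all operators are diagonal in momentum, so the operator inversions reduce to elementwise divisions on a $q$-grid. The vertex functions, regulator profiles and parameter values below are stand-ins of our own choosing, not results of the paper:

```python
import numpy as np

# Illustrative evaluation of the zeroth-order propagators for uniform fields.
# Gamma2, Gamma11, Rhat, Rtilde, Bhat are stand-in profiles (our assumptions).

q2 = np.linspace(0.01, 4.0, 50)         # grid of q^2 (in units of k^2)
Gamma2  = q2 + 0.5                       # stand-in for Gamma_{k1}^{UL(2)}(q)
Gamma11 = -1.0 / (1.0 + q2)              # stand-in for Gamma_{k2}^{UL(11)}(q)
Rhat    = np.exp(-q2)                    # "hat" regulator profile
Rtilde  = 0.3 * np.exp(-q2)              # "tilde" regulator profile
Bhat    = -0.1 * np.ones_like(q2)        # stand-in for Bhat_k^{[0]} = -Y_k

Phat   = 1.0 / (Gamma2 + Rhat)                      # hat propagator P^{[0]}
Ptilde = Phat * (Gamma11 - Rtilde) * Phat           # tilde propagator
Qhat   = -Phat * Bhat / (Gamma2 - Bhat + Rhat)      # non-ultralocal part
```

Setting `Bhat` to zero makes `Qhat` vanish identically, consistent with $\widehat{Q}_k^{[0]}$ being sourced only by the ``non-ultralocal'' part of the cumulants.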
Reinstalling the explicit dependence on the Euclidean momenta, we find the following \textit{exact} RG flow equations for the ``ultralocal'' parts of the cumulants,
\begin{equation}
\label{eq_flow_Gamma1_scaledUL}
\begin{split}
&\partial_t \Gamma_{k 1}^{UL(\beta)}\left[\phi_1\right ]=-
\dfrac{1}{2} \tilde{\partial}_t \int_{qq'} \widehat{P}_{k;qq'}^{[0]}[\phi_1] \bigg( \Gamma_{k2;q,q'}^{UL(11)}\left[\phi_1,\phi_1\right ] \\&- \delta^{(d)}(q-q') \widetilde{R}_{k}(q^2)\bigg)+\dfrac{1}{2\beta}\int_{q}\partial_t \widehat{R}_k(q^2)\widehat{Q}_{k;-qq}^{[0]}[\phi_1],
\end{split}
\end{equation}
\begin{equation}
\label{eq_flow_Gamma2_final_scaledUL}
\begin{split}
&\partial_t\Gamma_{k2}^{UL(\beta)}\left[\phi_1,\phi_2\right ]=\dfrac{1}{2} \tilde{\partial}_t \int_{qq'}\bigg \{\widetilde{P}_{k;qq'}^{[0]}\left[\phi_1,\phi_1\right ] \Gamma_{k2;qq',.}^{UL(20)}\left[\phi_1,\phi_2\right ]\\&- \widehat{P}_{k;qq'}^{[0]}\left[\phi_1\right ] \Gamma_{k3;q,.,q'}^{UL(101)}\left[\phi_1,\phi_2,\phi_1\right ] + \frac{1}{2}\widetilde{P}_{k;qq'}^{[0]}\left[\phi_1,\phi_2\right ] \times \\& \left( \Gamma_{k2;q,q'}^{UL(11)}\left[\phi_1,\phi_2\right ] - (2\pi)^{d}\delta^{(d)}(q-q') \widetilde{R}_{k}(q^2)\right) + perm(12)\bigg \}\\&+\dfrac{1}{2\beta} \tilde{\partial}_t \int_{qq'}\bigg \{\widehat{Q}_{k;qq'}^{[0]}\left[\phi_1\right ] \Gamma_{k2;q'q,.}^{UL(20)}\left[\phi_1,\phi_2\right] + perm(12)\bigg \},
\end{split}
\end{equation}
and similarly for the higher orders.
Setting $1/\beta=0$ in the above ERGE's allows one to get rid of the terms that involve the ``non-ultralocal'' contributions to the effective average action. The $\beta \rightarrow \infty$ limit therefore coincides with the RG equations for the cumulants obtained under the hypothesis of ``Grassmannian ultralocality'' (compare with section VI-D). This shows that the ERGE's derived under the assumption of ``Grassmannian ultralocality'' describe the renormalization of the RFIM at equilibrium with a proper selection of the ground state. This is an important piece in the resolution of the problem associated with the long-distance physics of the model.
\subsection{Illustration of the RG flow for ``non-ultralocal'' contributions}
We now illustrate the structure of the FRG flow for the ``non-ultralocal'' components of the cumulants by looking at the lowest-order correction to the first cumulant, assuming that all other cumulants are purely ``ultralocal''. This allows us to verify that this flow is well-behaved, thereby giving a direct confirmation to the arguments used in the previous subsection.
More specifically, we consider
\begin{equation}
\begin{aligned}
\label{eq_cumulant_1_nonultralocal_correction}
&\mathsf \Gamma_{k1}[\Phi_{1}] = \beta \int_{\underline{\omega}_1}
\bigg (\Gamma_{k1}^{UL}[\widetilde \Phi_1(\underline{\omega}_1)]\\&+\frac{1}{2} Y_{k}[\widetilde \Phi_1(\underline{\omega}_1)](1+ \bar \omega_1 \omega_1)\partial_{ \omega_1}\widetilde \Phi_1(\underline{\omega}_1)\partial_{\bar \omega_1}\widetilde \Phi_1(\underline{\omega}_1) \bigg ),
\end{aligned}
\end{equation}
and for $p \geq 2$,
\begin{equation}
\begin{aligned}
\label{eq_cumulant_p_ultralocal_correction}
\mathsf \Gamma_{kp}[\Phi_{1},...,\Phi_{p}] = \beta^p\int_{\underline{\omega}_1}...\int_{\underline{\omega}_p}
\Gamma_{kp}^{UL}[\widetilde \Phi_1(\underline{\omega}_1),...,\widetilde \Phi_p(\underline{\omega}_p)],
\end{aligned}
\end{equation}
where we have omitted the superscript $(\beta)$.
The FRG equations for the ``ultralocal'' components are the same as in Eqs.~(\ref{eq_flow_Gamma1_scaledUL},\ref{eq_flow_Gamma2_final_scaledUL}) and the propagators $\widehat P_k^{[0]}[\phi]$, $\widetilde P_k^{[0]}[\phi_1,\phi_2]$ and $\widehat Q_k^{[0]}[\phi]$ are given in eqs.~(\ref{eq_hatP_zero_NUL}-\ref{eq_hatQ_zero_NUL}) with $\widehat B_k^{[0]}[\phi]$ now simply equal to
\begin{equation}
\label{eq_tildeB_Y}
\widehat B_k^{[0]}[\phi]=-Y_k[\phi].
\end{equation}
The most convenient way to obtain the flow of $Y_k[\phi]$ is to differentiate the ERGE for $\mathsf \Gamma_{k1}$ in Eq.~(\ref{eq_flow_Gamma1_scaled}) twice and evaluate the resulting expression for a superfield configuration that is uniform in the Grassmann subspace (\textit{i.e.} $\Phi=\phi$).
The final RG equation for $Y_k[\phi]$ reads
\begin{equation}
\begin{aligned}
\label{eq_flow_correction}
&\partial_t Y_k\left[\phi \right]=
\tilde \partial_t \mathrm{Tr} \bigg\{\widetilde{P}_{k}^{[0]}\left[\phi,\phi \right] \bigg (\frac{1}{2} Y_k^{(2)}\left[\phi \right]+Y_k^{(1)}\left[\phi \right]^2 \times \\&
\big [\widehat Q_k^{[0]}[\phi]-\widehat{P}_{k}^{[0]}\left[\phi \right]\big ] \bigg )
+2 \big [\widehat Q_k^{[0]}[\phi]-\widehat{P}_{k}^{[0]}\left[\phi \right]\big ]\bigg (\widetilde{P}_{k}^{[0]}\left[\phi,\phi \right] \times \\&
\Gamma_{k1}^{UL(3)}\left[\phi \right ]- \widehat{P}_{k}^{[0]}\left[\phi \right] \Gamma_{k2}^{UL(21)}\left[\phi,\phi \right]\bigg )Y_k^{(1)}\left[\phi \right]
+\widehat Q_k^{[0]}[\phi] \times \\&
\bigg (\widetilde{P}_{k}^{[0]}\left[\phi,\phi \right ]\Gamma_{k1}^{UL(3)}\left[\phi \right ]^2+\Gamma_{k2}^{UL(22)}\left[\phi,\phi \right]-2\widehat{P}_{k}^{[0]}\left[\phi \right]\times \\&
\Gamma_{k1}^{UL(3)}\left[\phi \right ] \Gamma_{k2}^{UL(21)}\left[\phi,\phi \right]\bigg )\bigg \}
+\frac{1}{\beta} \tilde \partial_t \mathrm{Tr} \bigg\{ \frac{1}{2} Y_k^{(2)}\left[\phi \right]\widehat Q_k^{[0]}[\phi]\\&
+ \frac{1}{4} Y_k^{(1)}\left[\phi \right]^2\big [\widehat Q_k^{[0]}[\phi]-\widehat{P}_{k}^{[0]}\left[\phi \right]\big ]\big [7\widehat Q_k^{[0]}[\phi]-3\widehat{P}_{k}^{[0]}\left[\phi \right]\big ] \\&
+2 Y_k^{(1)}\left[\phi \right]\widehat Q_k^{[0]}[\phi]\big [\widehat Q_k^{[0]}[\phi]-\widehat{P}_{k}^{[0]}\left[\phi \right]\big ]\Gamma_{k1}^{UL(3)}\left[\phi \right]
+\\&
\frac{1}{2} \widehat Q_k^{[0]}\left[\phi \right]^2\Gamma_{k1}^{UL(3)}\left[\phi \right]^2 \bigg \},
\end{aligned}
\end{equation}
where $\mathrm{Tr}$ denotes a trace over Euclidean momenta (which are left implicit in the right-hand side of the equation).
Provided that $Y_k[\phi]$ converges to a finite (nondiverging) fixed-point value, its contribution to the flow of the ``ultralocal'' components of the cumulants, which appears through the propagator $\widehat Q_k^{[0]}[\phi]$ in Eqs.~(\ref{eq_flow_Gamma1_scaledUL}) and (\ref{eq_flow_Gamma2_final_scaledUL}), can indeed be neglected when $\beta \rightarrow \infty$. This will be shown in the companion paper. (More generally, it can be shown that the contribution becomes subdominant at long distance, \textit{i.e.} when $k \rightarrow 0$, even when $\beta$ is large but finite.)
\subsection{Scaling dimensions near a zero-temperature fixed point and asymptotic dominance of the ground state}
To search for the fixed point that controls the critical behavior of the RFIM associated with the spontaneous breaking of the (global) $Z_2$ symmetry, the NP-FRG flow equations must first be recast in a scaled form. This can be done by introducing appropriate scaling dimensions.\cite{tarjus04,tarjus08} Near a zero-temperature fixed point, it is convenient to introduce a ``renormalized temperature''. As shown in Refs.~[\onlinecite{tarjus04},\onlinecite{tarjus08}], this can be done by considering (i) the strength of the renormalized random field $\Delta_k$, which can be defined from the vertex $\Gamma_{k2}^{(11)}$ evaluated for a specific configuration of the fields and reduces to the bare value $\Delta_B$ at the UV scale $\Lambda$, and (ii) the amplitude of the field renormalization constant $Z_k$, which is as usual obtained from $\Gamma_{k1}^{(2)}$ for a specific field configuration and is equal to $1$ at the UV scale. Specifically,
\begin{equation}
\label{eq_running_temperature}
T_k\propto \frac{Z_{k} k^2}{ \Delta_{k}}.
\end{equation}
An associated running exponent $\theta_k$ is defined from
\begin{equation}
\label{eq_running_theta}
\theta_k = \partial_t \log T_k
\end{equation}
whereas the (running) anomalous dimension $\eta_k$ is obtained as
\begin{equation}
\label{eq_running_eta}
\eta_k= -\partial_t \log Z_{k}.
\end{equation}
One may also introduce a running exponent $\bar{\eta}_k=2-\theta_k+\eta_k$ from
\begin{equation}
\label{eq_running_etabar}
\bar{\eta}_k-2\eta_k= \partial_t \log \Delta_{k}.
\end{equation}
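The definitions above are mutually consistent: since $T_k\propto Z_k k^2/\Delta_k$, one has $\theta_k = 2 + \eta_k - \bar\eta_k$, i.e. $\bar\eta_k = 2-\theta_k+\eta_k$ as stated. A quick numerical check with power-law model flows (the trial exponent values are our own, chosen only for the sketch):

```python
import numpy as np

# Numerical check of the exponent identity theta = 2 + eta - bar_eta implied
# by T_k ∝ Z_k k^2 / Delta_k, using power-law model flows for Z_k and Delta_k.

eta, bar_eta = 0.5, 1.0                  # trial exponents (assumed values)
k = np.logspace(-3, 0, 200)              # k/Lambda
Z = k**(-eta)                            # eta_k = -d log Z_k / d log k
Delta = k**(bar_eta - 2*eta)             # d log Delta_k / d log k = bar_eta - 2 eta
T = Z * k**2 / Delta                     # running temperature (up to a constant)

theta = np.gradient(np.log(T), np.log(k))   # theta_k = d log T_k / d log k
```

For these pure power laws `theta` is constant and equal to $2+\eta-\bar\eta$ on the whole grid.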
Note that the two anomalous dimensions $\eta$ and $\bar \eta$ describe the spatial decay of the ``connected'' and ``disconnected'' pair correlation functions (see section \ref{sec:ultralocal}) at criticality; this translates into
\begin{equation}
\begin{aligned}
\widehat P_k^{[0]}(q=0) \sim k^{-(2-\eta)}\, , \; \widetilde P_k^{[0]}(q=0) \sim k^{-(4-\bar \eta)}\, ,
\end{aligned}
\end{equation}
with $\eta \leq \bar \eta \leq 2 \eta$\cite{nattermann98,soffer85}.
A systematic way to proceed is to introduce, on top of the canonical scaling dimensions for the effective average action and the related cumulants and of a rescaling of the Euclidean momenta and coordinates ($\hat q=q/k$, $\hat x=kx$), a rescaling of the Grassmann coordinates via a renormalized curvature $\beta_k \propto 1/T_k$, with $T_k$ defined in Eq.~(\ref{eq_running_temperature}):
\begin{equation}
\label{eq_grassmann_rescaling}
\widehat\theta=\left (\frac{\beta}{\beta_k}\right )^{\frac{1}{2}} \theta \, , \qquad \widehat{\bar\theta}=\left (\frac{\beta}{\beta_k}\right )^{\frac{1}{2}} \bar \theta ,
\end{equation}
where
\begin{equation}
\label{eq_running_curvature}
\beta_k=\beta \frac{ \Delta_{k}/\Delta_B}{Z_{k} (k/\Lambda)^2}.
\end{equation}
The renormalized curvature reduces to the (bare) curvature $\beta$ at the UV scale $\Lambda$ and diverges as $k^{-\theta}$ in the IR. It is worth pointing out that a symmetry between the rescaling of the Euclidean and of the Grassmann coordinates exists in the case where the dimensional reduction applies: then, $\theta=2$ so that $k^{\theta/2}=k$ and all coordinates are rescaled by the same factor.
We can now introduce dimensionless quantities. The dimensionless superfield is defined through
\begin{equation}
\label{eq_superfield_dimension}
\Phi(x,\underline \theta)=\left( \frac{\beta_k k^{d-2}}{\beta Z_{k}}\right)^{1/2}\Phi_{ren}(\hat x, \widehat{\underline \theta}),
\end{equation}
which implies for the components:
\begin{equation}
\label{eq_field_dimension}
\phi(x)=\left( \frac{\beta_k k^{d-2}}{\beta Z_{k}}\right)^{1/2}\varphi(\hat x),
\end{equation}
\begin{equation}
\label{eq_response_field_dimension}
\hat \phi(x)=\left( \frac{k^{d-2}}{Z_{k} \beta_k}\right)^{1/2}\hat \varphi(\hat x),
\end{equation}
etc.
The ``ultralocal'' component of the $p$th cumulant is rescaled as
\begin{equation}
\label{eq_cumulant_dimension}
\Gamma_{kp}^{UL}[\phi_1,...,\phi_p]= k^d(\beta_k)^p\, \gamma_{kp}^{UL}[\varphi_1,...,\varphi_p] ,
\end{equation}
and the ``non-ultralocal'' cumulant can similarly be put in a dimensionless form. For instance, the term $\widehat B_k^{[0]}[\phi]$ appearing in the definition of the propagator $\widehat Q_k^{[0]}[\phi]$ is expressed as
\begin{equation}
\label{eq_NUL_dimension}
\widehat B_k^{[0]}[\phi]=\beta_k Z_k k^2 \hat b_k[\varphi].
\end{equation}
The dimensionless quantities will be systematically denoted by lower-case letters (except for the superfield and for the coordinates, for obvious reasons).
In addition, and following the discussion in section VI-B, the cutoff functions are chosen according to
\begin{equation}
\label{eq_scaled_hatR}
\widehat{R}_k(q^2)=Z_k k^2 s(\hat q^2),
\end{equation}
with $s(x)$ such that $s(0)>0$ and $s(x\rightarrow \infty)=0$ (see the companion paper and Refs.~[\onlinecite{tarjus04},\onlinecite{tarjus08}]), and
\begin{equation}
\label{eq_scaled_wideR}
\widetilde{R}_k(q^2)=-\Delta_k s'(\hat q^2),
\end{equation}
with $s'(x)$ the derivative of $s(x)$; the above form of the regulator then satisfies the superrotational invariance whenever $\Delta_k/Z_k=\Delta_B$.
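A concrete shape function satisfying these requirements can be checked numerically. The exponential profile and all parameter values below are our own choices for illustration (the companion paper specifies the actual function used):

```python
import numpy as np

# Sketch of a shape function s(x) with s(0) > 0 and s(x -> inf) -> 0,
# and the resulting cutoff pair Rhat, Rtilde built as in the text.

alpha = 2.0
s  = lambda x: alpha * x / np.expm1(x)                                 # s(0+) -> alpha > 0
sp = lambda x: alpha * (np.expm1(x) - x * np.exp(x)) / np.expm1(x)**2  # exact s'(x)

Zk, Dk, k = 1.2, 0.8, 0.5                 # stand-in values for Z_k, Delta_k, k
q = np.linspace(0.05, 5.0, 100)
Rhat   = Zk * k**2 * s(q**2 / k**2)       # hat cutoff, Z_k k^2 s(qhat^2)
Rtilde = -Dk * sp(q**2 / k**2)            # tilde cutoff, -Delta_k s'(qhat^2)
```

Since $s'(x)<0$ for $x>0$ with this profile, `Rtilde` is positive, and by construction $\widetilde R_k = -(\Delta_k/Z_k)\,\partial_{q^2}\widehat R_k$.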
We focus here on the ``ultralocal'' components of the cumulants. The ERGE's for the latter can then be put in a scaled form which has the following generic structure:
\begin{equation}
\label{eq_ERGE_dimensionlessform_p}
\begin{split}
\partial_t \gamma_{kp}^{UL}&[\varphi_1,...,\varphi_p] =- p (2+ \eta_k -\bar \eta_k) \gamma_{kp}^{UL}[\varphi_1,...,\varphi_p]+ \\& \frac{1}{2}\sum_{a=1}^p(d-4+\bar\eta_k)\int_{\hat q_a} \varphi_a(\hat q_a) \gamma_{kp;\hat q_a}^{UL(1)}[\varphi_1,...,\varphi_p] \\&+ \mathcal F_{\gamma_p}^{UL}[\varphi_1,...,\varphi_p]+\frac{1}{\beta_k}\mathcal F_{\gamma_p}^{NUL}[\varphi_1,...,\varphi_p],
\end{split}
\end{equation}
where $\mathcal F_{\gamma_p}^{UL}$ is a dimensionless beta-functional expressed only in terms of (dimensionless) ``ultralocal'' components and $\mathcal F_{\gamma_p}^{NUL}$ is a dimensionless beta-functional that contains (dimensionless) ``non-ultralocal'' components. Explicit expressions for the first cumulants are easily derived from Eqs.~(\ref{eq_flow_Gamma1_scaledUL},\ref{eq_flow_Gamma2_final_scaledUL}) and will be given in the following paper. Here, only the general structure is needed.
Fixed points are then found by setting the left-hand side of Eq.~(\ref{eq_ERGE_dimensionlessform_p}) to zero. This will be studied in detail in the companion paper. At this point we would only like to stress that the ``non-ultralocal'' contributions to the flow of the dimensionless renormalized cumulants come with a factor $1/\beta_k$. Even if $\beta^{-1}$ is not taken equal to zero, and provided the ``non-ultralocal'' contributions remain bounded (see following paper), the flow leads to a fixed point characterized by the property of ``Grassmannian ultralocality'': one indeed expects $1/\beta_k\propto T_k$ to flow to zero, \textit{i.e.} $\theta_{k\rightarrow 0 }>0$, as already shown in computer studies\cite{nattermann98} and in Refs.~[\onlinecite{tissier06},\onlinecite{tissier08}]. Ground-state dominance is thus found asymptotically as the flow goes to the zero-temperature fixed point. Guided by our previous work\cite{tissier06,tissier08} and by that of Balents, Ledoussal and coworkers on the random manifold model,\cite{BLchauve00,BLbalents04,BLbalents05,BLledoussal10} we anticipate that when the fixed point is characterized by a nonanalytic, cusp-like, dependence of the cumulants of the renormalized random field, the approach to the $k\rightarrow 0$ limit (which coincides with the fixed point obtained by setting right away $\beta^{-1}= 0$ in the ERGE's) involves a boundary layer, generically in $\vert\varphi_a - \varphi_b \vert /T_k$.\cite{footnote2} This boundary layer is physically associated with the presence of rare, power-law distributed, ``droplet'' excitations above the ground state at the running scale $k$.\cite{footnote3}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have extended our nonperturbative FRG approach of disordered systems, which was presented in the previous articles of this series.\cite{tarjus08,tissier08} The objective was to discuss the property of dimensional reduction and its breakdown in the RFIM from the Parisi-Sourlas\cite{parisi79} perspective of an underlying supersymmetry and its breaking. To this end, we have reformulated the FRG in a superfield formalism.
We have identified two sources of problems in the Parisi-Sourlas supersymmetric formalism. One stems from the presence of metastable states which, due to the resulting multiplicity of solutions to the stochastic field equation that describes the long-distance physics of the RFIM, prevents the selection of the ground state in the random generating functional. The other one was already discussed in our previous papers and comes from the fact that a single copy of the original system is considered. As such, the formalism is therefore unable to describe rare events, such as avalanches and droplets, that manifest themselves as nonanalyticities in the field dependence of the cumulants of the renormalized disorder.
We have provided ways to cure both problems, through the introduction of a weighting factor with an auxiliary temperature and through the use of multiple copies. The resulting theory involves superfields in a curved superspace whose curvature is related to the inverse of the auxiliary temperature. The presence of metastable states leads to a breakdown of a formal property which we have called ``Grassmannian ultralocality'' and is associated with the fact that a unique solution is incorporated in the random generating functional. On the other hand, as will be discussed in detail in the companion paper, nonanalyticities originating from the presence of avalanches in the ground state as one varies the applied source trigger a spontaneous breaking of supersymmetry, more precisely of ``superrotational invariance''.
Through the introduction of an appropriate infrared regulator which guarantees that the stochastic field equation has a unique solution at the initial condition of the flow and which can be chosen to satisfy all (super) symmetries of the theory, we have derived exact RG flow equations for the cumulants of the renormalized disorder in the effective average action formalism. We have shown that in the limit of infinite curvature, \textit{i.e.} of zero auxiliary temperature, the hierarchy of exact FRG equations coincides with that obtained under the hypothesis of ``Grassmannian ultralocality''. Through this procedure, the ground state is properly selected and is the only solution that contributes to the random generating functional. We have moreover found that the corrections to ``ultralocality'' that are present for a finite curvature become subdominant at long distance as one approaches the expected zero-temperature fixed point.
In the following paper, we investigate the exact FRG equations derived for the cumulants in the limit of infinite curvature and show that nonanalytic behavior of the effective average action in its field arguments is related to the appearance of a spontaneous breaking of the superrotational invariance. We also devise and study a nonperturbative approximation scheme that does not break explicitly the superrotational invariance and allows us to study the critical behavior of the RFIM and the breakdown of dimensional reduction as a function of the spatial dimension $d$.
\section{Introduction}
\label{sec_intro}
\vspace{-0.1in}
With a huge increase of interest in co-existence and spectrum sharing among radar and communications, joint radar-communications (JRC) strategies are being actively developed for 5G and beyond wireless systems \cite{jointRadCom_review_TCOM,SPM_JRC_2019,SPM_Zheng_2019,chiriyath2017radar,Eldar_SPM_JRC_2020,Canan_SPM_2020}. A promising approach to practical JRC implementation is to design dual-functional radar-communications (DFRC) systems, which employ a single hardware that can simultaneously perform radar sensing and data transmission with a co-designed waveform \cite{DFRC_SPM_2019,DFRC_Waveform_Design,SPM_JRC_2019}. Orthogonal
frequency-division multiplexing (OFDM) has been widely investigated as a DFRC waveform due to its wide availability in wireless communication systems and its potential to achieve high radar performance \cite{RadCom_Proc_IEEE_2011,General_Multicarrier_Radar_TSP_2016,ICI_OFDM_TSP_2020}. In the literature, estimator design for OFDM radar sensing has been studied in both single-antenna \cite{RadCom_Proc_IEEE_2011,Firat_OFDM_2012,ICI_OFDM_TSP_2020} and multiple-input multiple-output (MIMO) \cite{mmWave_JRC_TAES_2019,MIMO_OFDM_radar_TAES_2020} settings.
In high-mobility scenarios, such as millimeter-wave (mmWave) vehicular JRC systems \cite{SPM_JRC_2019}, Doppler-induced intercarrier interference (ICI) can significantly degrade the performance of OFDM from both radar and communications perspectives \cite{OFDM_ICI_TVT_2017,ICI_OFDM_radar_2015,multiCFO_TWC_2018}. To improve OFDM radar performance, various ICI mitigation approaches have been proposed \cite{Firat_OFDM_2012,OFDM_ICI_TVT_2017,ICI_OFDM_radar_2015,ICI_OFDM_TSP_2020}. In \cite{OFDM_ICI_TVT_2017}, the ICI effect is eliminated via an all-cell Doppler correction (ACDC) method, which requires the OFDM symbol matrix to be rank-one and therefore leads to a substantial loss in data rate. Similarly, the ICI mitigation technique in \cite{ICI_OFDM_radar_2015} imposes certain constraints on transmit symbols, which impedes dual-functional operation. An alternating maximization approach is designed in \cite{ICI_OFDM_TSP_2020} to reduce the complexity of high-dimensional maximum-likelihood (ML) search by assuming that the number of targets is known a priori. The work in \cite{Firat_OFDM_2012} considers a single-target scenario and proposes a pulse compression technique to compensate for ICI-induced phase rotations across OFDM subcarriers.
In this paper, we propose an ICI-aware delay-Doppler-angle estimation algorithm for a MIMO-OFDM radar with arbitrary transmit symbols in a generic multi-target scenario.
The main ingredient is to re-formulate radar sensing as a joint carrier frequency offset (CFO)\footnote{Borrowing from the OFDM communications literature \cite{multiCFO_TWC_2018}, we use Doppler and CFO interchangeably throughout the text.} and channel estimation problem, which allows us to decontaminate the ICI effect from the resulting channel estimates, leading to high-accuracy delay-Doppler estimation.
To that end, we first perform angle estimation using the MUSIC high-resolution direction finding algorithm \cite{MUSIC_1986} based on the observation that the spatial covariance matrix does not depend on target delays and Dopplers. To suppress mutual multi-target interferences \cite{Est_MIMO_radar_2013} in the spatial domain, we then devise an APES-like spatial filtering approach \cite{Est_MIMO_radar_2008,jointRadCom_review_TCOM} that performs joint Doppler/CFO and unstructured radar channel estimation for each estimated target angle separately. Finally, the delay-Doppler of each target can be estimated from the target-specific channel estimates by exploiting the OFDM time-frequency structure. Simulations are carried out to demonstrate the performance of the proposed algorithm in high-mobility scenarios. To the best of the authors' knowledge, this is the first algorithm for MIMO-OFDM radar that takes into account the effect of ICI in estimating multiple target parameters without imposing any structure on the data symbols.
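The angle-estimation step just described rests on the eigenstructure of the spatial covariance. A toy sketch of textbook MUSIC on a half-wavelength ULA follows; the array size, target angles, noise level and snapshot count are arbitrary choices of ours, not the paper's simulation setup:

```python
import numpy as np

# Toy MUSIC sketch: estimate two target angles from the eigendecomposition
# of the spatial covariance of a 16-element ULA (all parameters assumed).

rng = np.random.default_rng(0)
N_R, L, snaps = 16, 2, 400                  # antennas, targets, snapshots
true_deg = np.array([-20.0, 35.0])          # ground-truth angles

def steer(deg):                             # half-wavelength ULA steering vector
    return np.exp(1j * np.pi * np.arange(N_R) * np.sin(np.deg2rad(deg)))

A = np.stack([steer(d) for d in true_deg], axis=1)            # N_R x L
S = rng.standard_normal((L, snaps)) + 1j * rng.standard_normal((L, snaps))
noise = 0.05 * (rng.standard_normal((N_R, snaps)) + 1j * rng.standard_normal((N_R, snaps)))
X = A @ S + noise
R = X @ X.conj().T / snaps                  # sample spatial covariance

_, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :N_R - L]                         # noise subspace
grid = np.linspace(-90.0, 90.0, 1801)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(g))**2 for g in grid])

peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(grid[peaks[np.argsort(P[peaks])[-L:]]])   # L largest peaks
```

The noise subspace is taken from the smallest $N_{\rm R}-L$ eigenvalues, and the pseudospectrum peaks recover the two angles to within the grid resolution.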
\vspace{-0.03in}
\textit{Notations:} Uppercase (lowercase) boldface letters are used to denote matrices (vectors). $(\cdot)^{*}$, $(\cdot)^{T}$ and $(\cdot)^{H}$ represent conjugate, transpose and Hermitian transpose operators, respectively. $\Re \left\{ \cdot \right \}$ denotes the real part. The $\thnew{n}$ entry of a vector $ \mathbf{x} $ is denoted as $\left[ \mathbf{x} \right]_n$, while the $\thnew{(m,n)}$ element of a matrix $ \mathbf{X} $ is $\left[ \mathbf{X} \right]_{m,n}$. $\projrange{ \mathbf{X} } = \mathbf{X} ( \mathbf{X} ^H \mathbf{X} )^{-1} \mathbf{X} ^H$ represents the orthogonal projection operator onto the column space of $ \mathbf{X} $ and $\odot$ denotes the Hadamard product.
\vspace{-0.2in}
\section{OFDM Radar System Model}
\vspace{-0.1in}
Consider an OFDM DFRC transceiver that communicates with an OFDM receiver while concurrently performing radar sensing using the backscattered signals for target detection \cite{RadCom_Proc_IEEE_2011,DFRC_SPM_2019}. The DFRC transceiver is equipped with an $ N_{\rm{T}} $-element transmit (TX) uniform linear array (ULA) and an $ N_{\rm{R}} $-element receive (RX) ULA.
We assume co-located and perfectly decoupled TX/RX antenna arrays so that the radar receiver does not suffer from self-interference due to full-duplex radar operation \cite{Interference_MIMO_OFDM_Radar_2018,RadCom_Proc_IEEE_2011,OFDM_Radar_Phd_2014,80211_Radar_TVT_2018}. In this section, we derive OFDM transmit and receive signal models and formulate the multi-target parameter estimation problem.
\vspace{-0.1in}
\subsection{Transmit Signal Model}\label{sec_transmit}
\vspace{-0.1in}
We consider an OFDM communication frame consisting of $M$ OFDM symbols, each of which has a total duration of $ T_{\rm{sym}} = T_{\rm{cp}} + T$ and a total bandwidth of $N \Delta f = B$. Here, $ T_{\rm{cp}} $ and $T$ denote, respectively, the cyclic prefix (CP) duration and the elementary symbol duration, $ \Delta f = 1/T$ is the subcarrier spacing, and $N$ is the number of subcarriers \cite{RadCom_Proc_IEEE_2011}. Then, the complex baseband transmit signal for the $\thnew{m}$ symbol is given by
\begin{equation}\label{eq_ofdm_baseband}
s_{m} (t) = \frac{1}{\sqrt{N}} \sum_{n = 0}^{N-1} x_{n,m} \, e^{j 2 \pi n \Delta f t} \rect{\frac{t - m T_{\rm{sym}} }{ T_{\rm{sym}} }}
\end{equation}
where $ x_{n,m} $ denotes the complex data symbol on the $\thnew{n}$ subcarrier for the $\thnew{m}$ symbol \cite{General_Multicarrier_Radar_TSP_2016}, and $\rect{t}$ is a rectangular function that takes the value $1$ for $t \in \left[0, 1 \right]$ and $0$ otherwise. Assuming a single-stream beamforming model \cite{80211_Radar_TVT_2018,mmWave_JRC_TAES_2019,MIMO_OFDM_Single_Stream}, the transmitted signal over the block of $M$ symbols for $t \in \left[0, M T_{\rm{sym}} \right]$ can be written as
\begin{equation}\label{eq_passband_st}
\Re \left\{ \mathbf{f}_{\rm{T}} \sum_{m = 0}^{M-1} s_{m} (t) e^{j 2 \pi f_c t} \right\}
\end{equation}
where $ f_c $ and $ \mathbf{f}_{\rm{T}} \in \complexset{ N_{\rm{T}} }{1}$ denote, respectively, the carrier frequency and the TX beamforming vector.
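As an aside for implementers (our own sketch, not part of the derivation), sampling \eqref{eq_ofdm_baseband} at the instants $t = \ell T/N$ within one symbol reduces the modulator to a scaled unitary inverse DFT of the data symbols, which is how the transmit signal is typically generated in practice. A minimal numpy check, with a toy value of $N$:

```python
import numpy as np

N = 64  # number of subcarriers (toy value; the simulation setup later uses N = 2048)
rng = np.random.default_rng(0)
# Random QPSK data symbols x_{n,m} for a single OFDM symbol
x = (rng.choice([1.0, -1.0], N) + 1j * rng.choice([1.0, -1.0], N)) / np.sqrt(2)

# Direct evaluation of s_m(t) at the sample instants t = l*T/N
# (Delta_f * t = n*l/N, so T cancels and the rect window equals 1)
n = np.arange(N)
s_direct = np.array([np.sum(x * np.exp(2j * np.pi * n * l / N)) / np.sqrt(N)
                     for l in range(N)])

# Equivalent implementation via the unitary inverse DFT
s_ifft = np.sqrt(N) * np.fft.ifft(x)

assert np.allclose(s_direct, s_ifft)
```

The assertion confirms that the direct evaluation of the sum and the IDFT-based implementation coincide sample by sample.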
\vspace{-0.1in}
\subsection{Receive Signal Model}\label{sec_radar_rec}
Suppose there exists a point target in the far-field, characterized by a complex channel gain $\alpha$ (including path loss and radar cross section effects), an azimuth angle $\theta$, a round-trip delay $\tau$ and a normalized Doppler shift $\nu = 2 v/c$ (leading to a time-varying delay $\tau(t) = \tau - \nu t$), where $v$ and $c$ denote the radial velocity and speed of propagation, respectively.
In addition, let $ \mathbf{a}_{\rm{T}} (\theta) \in \complexset{ N_{\rm{T}} }{1}$ and $ \mathbf{a}_{\rm{R}} (\theta) \in \complexset{ N_{\rm{R}} }{1}$ denote, respectively, the steering vectors of the TX and RX ULAs, i.e., $\left[ \mathbf{a}_{\rm{T}} (\theta) \right]_i = e^{j \frac{2 \pi}{\lambda} d i \sin(\theta)}$ and $\left[ \mathbf{a}_{\rm{R}} (\theta) \right]_i = e^{j \frac{2 \pi}{\lambda} d i \sin(\theta)}$,
where $\lambda$ and $d = \lambda/2$ denote the signal wavelength and antenna element spacing, respectively. Given the transmit signal model in \eqref{eq_passband_st}, the backscattered signal impinging onto the $\thnew{i}$ element of the radar RX array can be expressed as
\begin{align}\nonumber
& y_i(t) = \alpha \left[ \mathbf{a}_{\rm{R}} (\theta) \right]_i \mathbf{a}_{\rm{T}} ^T(\theta) \mathbf{f}_{\rm{T}} \sum_{m = 0}^{M-1} s_{m} \big(t - \tau(t)\big) e^{-j 2 \pi f_c \tau} e^{j 2 \pi f_c \nu t }~.
\end{align}
We make the following standard assumptions: \textit{(i)} the CP duration is larger than the round-trip delay of the furthermost target\footnote{We focus on small surveillance volumes where the targets are relatively close to the radar, such as vehicular applications.}, i.e., $ T_{\rm{cp}} \geq \tau$, \cite{Firat_OFDM_2012,OFDM_Radar_Phd_2014,SPM_JRC_2019}, \textit{(ii)} the Doppler shifts satisfy $ \lvert \nu \rvert \ll 1/N$ \cite{Firat_OFDM_2012,ICI_OFDM_TSP_2020}, and \textit{(iii)} the time-bandwidth product (TBP) $B M T_{\rm{sym}} $ is sufficiently low so that the wideband effect can be ignored, i.e., $ s_{m} (t - \tau(t)) \approx s_{m} (t - \tau)$ \cite{OFDM_ICI_TVT_2017}. Under this setting, sampling $y_i(t)$ at $t = m T_{\rm{sym}} + T_{\rm{cp}} + \ell T / N$ for $\ell = 0, \ldots, N-1$ (i.e., after CP removal for the $\thnew{m}$ symbol) and neglecting constant terms, the time-domain signal received by the $\thnew{i}$ antenna in the $\thnew{m}$ symbol can be written as \cite{ICI_OFDM_TSP_2020}
\begin{align}\label{eq_rec_bb2}
y_{i,m}[\ell] &= \alpha \left[ \mathbf{a}_{\rm{R}} (\theta) \right]_i \mathbf{a}_{\rm{T}} ^T(\theta) \mathbf{f}_{\rm{T}} \, e^{j 2 \pi f_c m T_{\rm{sym}} \nu } e^{j 2 \pi f_c T \frac{\ell}{N} \nu} \\ \nonumber &~~\times \frac{1}{\sqrt{N}} \sum_{n = 0}^{N-1} x_{n,m} \, e^{j 2 \pi n \frac{\ell}{N}} e^{-j 2 \pi n \Delta f \tau} ~.
\end{align}
\subsection{Fast-Time/Slow-Time Representation with ICI}
For the sake of convenience, let us define, respectively, the frequency-domain and temporal steering vectors and the ICI phase rotation matrix as
\begin{align} \label{eq_steer_delay}
\mathbf{b} (\tau) & \triangleq \transpose{ \left[ 1, e^{-j 2 \pi \Delta f \tau}, \ldots, e^{-j 2 \pi (N-1) \Delta f \tau} \right] } \\ \label{eq_steer_doppler}
\mathbf{c} (\nu) & \triangleq \transpose{ \left[ 1, e^{-j 2 \pi f_c T_{\rm{sym}} \nu }, \ldots, e^{-j 2 \pi f_c (M-1) T_{\rm{sym}} \nu } \right] } \\ \label{eq_ici_D}
\mathbf{D} (\nu) &\triangleq \diag{1, e^{j 2 \pi f_c \frac{T}{N} \nu}, \ldots, e^{j 2 \pi f_c \frac{T(N-1)}{N} \nu} } ~.
\end{align}
Accordingly, the radar observations in \eqref{eq_rec_bb2} can be expressed as
\begin{align} \label{eq_ym}
\mathbf{y} _{i,m} &= \alpha \, \left[ \mathbf{a}_{\rm{R}} (\theta) \right]_i \mathbf{a}_{\rm{T}} ^T(\theta) \mathbf{f}_{\rm{T}} \mathbf{D} (\nu) \mathbf{F} _N^{H} \Big( \mathbf{x} _m \odot \mathbf{b} (\tau) \left[ \mathbf{c} ^{*}(\nu)\right]_m \Big)
\end{align}
where $ \mathbf{F} _N \in \complexset{N}{N}$ is the unitary DFT matrix with $\left[ \mathbf{F} _N \right]_{\ell,n} = \frac{1}{\sqrt{N}} e^{- j 2 \pi n \frac{\ell}{N}} $, $ \mathbf{y} _{i,m} \triangleq \left[ y_{i,m}[0] \, \ldots \, y_{i,m}[N-1] \right]^T$ and $ \mathbf{x} _m \triangleq \left[ x_{0,m} \, \ldots \, x_{N-1,m} \right]^T$.
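For concreteness, the building blocks $\mathbf{b}(\tau)$, $\mathbf{c}(\nu)$ and $\mathbf{D}(\nu)$ in \eqref{eq_steer_delay}--\eqref{eq_ici_D} can be generated in a few lines of numpy. This is an illustrative sketch of ours; the parameter values below are toy choices, not the simulation setup of Sec.~\ref{sec_sim}:

```python
import numpy as np

def b_delay(tau, N, delta_f):
    """Frequency-domain steering vector b(tau)."""
    return np.exp(-2j * np.pi * np.arange(N) * delta_f * tau)

def c_doppler(nu, M, f_c, T_sym):
    """Temporal (slow-time) steering vector c(nu)."""
    return np.exp(-2j * np.pi * f_c * np.arange(M) * T_sym * nu)

def D_ici(nu, N, f_c, T):
    """Diagonal ICI phase-rotation matrix D(nu)."""
    return np.diag(np.exp(2j * np.pi * f_c * (T / N) * np.arange(N) * nu))

# Toy parameters (arbitrary assumptions for this sketch)
N, M, f_c, delta_f = 16, 8, 60e9, 24.41e3
T = 1 / delta_f
T_sym = 1.25 * T                  # elementary duration plus a 25% cyclic prefix
nu = 2 * 30 / 3e8                 # normalized Doppler of a 30 m/s target

b = b_delay(1e-7, N, delta_f)
c = c_doppler(nu, M, f_c, T_sym)
D = D_ici(nu, N, f_c, T)
# All entries are unit-modulus phase terms; the first entry of each is 1
assert np.allclose(np.abs(b), 1) and np.allclose(np.abs(c), 1)
assert b[0] == 1 and c[0] == 1 and D[0, 0] == 1
```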
Aggregating \eqref{eq_ym} over $M$ symbols and considering the presence of multiple targets, the OFDM radar signal received by the $\thnew{i}$ antenna over a frame can be written in a fast-time/slow-time compact matrix form as
\begin{align} \label{eq_ym_all_multi}
\mathbf{Y} _i = \sum_{k=0}^{K-1} \alpha^{(i)}_k \underbrace{ \mathbf{D} (\nu_k)}_{\substack{\rm{ICI} } } \mathbf{F} _N^{H} \Big( \mathbf{X} \odot \mathbf{b} (\tau_k) \mathbf{c} ^{H}(\nu_k) \Big) + \mathbf{Z} _i
\end{align}
for $i = 0,\ldots, N_{\rm{R}} -1$, where $\alpha^{(i)}_k \triangleq \alpha_k \, \left[ \mathbf{a}_{\rm{R}} (\theta_k) \right]_i \mathbf{a}_{\rm{T}} ^T(\theta_k) \mathbf{f}_{\rm{T}} $, $(\alpha_k, \tau_k, \nu_k, \theta_k)$ are the parameters of the $\thnew{k}$ target,
$ \mathbf{Y} _i \triangleq [ \mathbf{y} _{i,0} \, \ldots \, \allowbreak \mathbf{y} _{i,M-1} ] \in \complexset{N}{M}$, $ \mathbf{X} \triangleq \left[ \mathbf{x} _0 \, \ldots \, \mathbf{x} _{M-1} \right] \in \complexset{N}{M}$ and $ \mathbf{Z} _i \in \complexset{N}{M}$ is the additive noise matrix with $\vecc{ \mathbf{Z} _i} \sim {\mathcal{CN}}({ {\boldsymbol{0}} }_{NM}, \allowbreak \sigma^2 { \boldsymbol{\mathrm{I}} }_{NM} ) $. In \eqref{eq_ym_all_multi}, each column contains fast-time samples within a particular symbol and each row contains slow-time samples at a particular range bin. The diagonal phase rotation matrix $ \mathbf{D} (\nu)$ quantifies the ICI effect in fast-time domain, leading to Doppler-dependent phase-shifts across fast-time samples of each OFDM symbol, similar to the CFO effect in OFDM communications \cite{Visa_CFO_TSP_2006,multiCFO_TSP_2019}. Fig.~\ref{fig_ici_comp_range_profile} illustrates the effect of ICI on the range profile of an OFDM radar.
Given the transmit data symbols $ \mathbf{X} $, the problem of interest for OFDM radar sensing is to estimate channel gains $\{\alpha_k\}_{k=0}^{K-1}$, azimuth angles $\{\theta_k\}_{k=0}^{K-1}$, delays $\{\tau_k\}_{k=0}^{K-1}$ and Doppler shifts $\{\nu_k\}_{k=0}^{K-1}$ from the received $ N_{\rm{R}} \times N \times M$ space/fast-time/slow-time data cube $\{ \mathbf{Y} _i\}_{i=0}^{ N_{\rm{R}} -1}$ in \eqref{eq_ym_all_multi}.
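To make the forward model concrete, the multi-target data cube of \eqref{eq_ym_all_multi} can be simulated directly from its definition. The following numpy sketch is our own illustration with deliberately small toy dimensions; the beamformer, gains and target parameters are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, N_T, N_R, K = 32, 8, 4, 4, 2
f_c, delta_f = 60e9, 24.41e3
T = 1 / delta_f
T_sym = 1.25 * T
F = np.fft.fft(np.eye(N), norm="ortho")          # unitary DFT matrix F_N

a = lambda theta, n: np.exp(1j * np.pi * np.arange(n) * np.sin(theta))  # ULA steering
b = lambda tau: np.exp(-2j * np.pi * np.arange(N) * delta_f * tau)
c = lambda nu: np.exp(-2j * np.pi * f_c * np.arange(M) * T_sym * nu)
Dcol = lambda nu: np.exp(2j * np.pi * f_c * (T / N) * np.arange(N) * nu)[:, None]

X = rng.choice([1.0, -1.0], (N, M)) + 1j * rng.choice([1.0, -1.0], (N, M))
f_T = a(np.deg2rad(30.0), N_T).conj()            # TX beam towards 30 degrees
targets = [  # (alpha, tau, nu, theta) -- toy values
    (1.0, 0.4e-6, 2 * 30 / 3e8, np.deg2rad(30.0)),
    (0.5, 0.9e-6, -2 * 20 / 3e8, np.deg2rad(-10.0)),
]

Y = np.zeros((N_R, N, M), complex)               # space/fast-time/slow-time cube
for alpha, tau, nu, theta in targets:
    gain = alpha * a(theta, N_T) @ f_T           # alpha_k^(i) without the RX factor
    comp = Dcol(nu) * (F.conj().T @ (X * np.outer(b(tau), c(nu).conj())))
    Y += gain * a(theta, N_R)[:, None, None] * comp[None, :, :]
Y += 0.01 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))
assert Y.shape == (N_R, N, M)
```

The diagonal action of $\mathbf{D}(\nu)$ is implemented here as elementwise multiplication with a phase column, and the Hadamard product $\mathbf{X} \odot \mathbf{b}(\tau)\mathbf{c}^H(\nu)$ as `X * np.outer(...)`.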
\begin{figure}
\centering
\vspace{-0.2in}
\includegraphics[width=0.7\linewidth]{Figures/fig_1_ici_comparison.eps}
\vspace{-0.1in}
\caption{\footnotesize Range profiles of MIMO-OFDM radar with the parameters given in Sec.~\ref{sec_sim} for two different target velocities. The scenario contains $3$ targets having the same velocity $v$, the ranges $(60, 100, 150) \, \rm{m}$, the angles $(25^\circ, 30^\circ, 35^\circ)$ and the SNRs (i.e., $\abs{\alpha_k}^2/\sigma^2$) $(30, 5, 0) \, \rm{dB}$, respectively.
}
\label{fig_ici_comp_range_profile}
\vspace{-0.22in}
\end{figure}
\section{Parameter Estimation via APES Spatial Filtering}\vspace{-0.1in}
The ML approach to \eqref{eq_ym_all_multi} requires a computationally prohibitive $3K$-dimensional search over the parameter space. Additionally, the number of targets $K$ is unknown a priori. To tackle this challenging estimation problem, we devise a three-stage low-complexity algorithm as described in the following subsections.
\subsection{Step 1: Angle Estimation via MUSIC}\vspace{-0.1in}
For mathematical convenience, we consider the space/fast-time snapshot of the data cube in \eqref{eq_ym_all_multi} corresponding to the $\thnew{m}$ OFDM symbol:
\begin{align}\label{eq_ybbar}
\widebar{\boldY} _m &\triangleq \left[ \mathbf{y} _{0,m} \, \ldots \, \mathbf{y} _{ N_{\rm{R}} -1,m} \right] \in \complexset{N}{ N_{\rm{R}} } \\ \nonumber
&= \sum_{k=0}^{K-1} \alpha_k \, \mathbf{a}_{\rm{T}} ^T(\theta_k) \mathbf{f}_{\rm{T}} \mathbf{D} (\nu_k) \mathbf{F} _N^{H} \diag{ \mathbf{x} _m} \\ \nonumber &~~~~~~~~\times \mathbf{b} (\tau_k) \left[ \mathbf{c} ^{*}(\nu_k) \right]_m \mathbf{a}_{\rm{R}} ^T(\theta_k) + \widebar{ \mathbf{Z} } _{m}
\end{align}
for $m=0,\ldots,M-1$. We propose to perform angle estimation using the MUSIC algorithm \cite{MUSIC_1986}. To this end, we first construct the spatial covariance matrix (SCM) of the data cube in \eqref{eq_ybbar} as
\vspace{-0.05in}
\begin{equation}\label{eq_SCM_obs}
\mathbf{R} \triangleq \sum_{m=0}^{M-1} \widebar{\boldY} _m^H \widebar{\boldY} _m ~.
\vspace{-0.05in}
\end{equation}
Under the assumption of spatially non-overlapping targets (i.e., $ \mathbf{a}_{\rm{R}} ^H(\theta_{k_1}) \mathbf{a}_{\rm{R}} (\theta_{k_2}) \approx 0, \, \forall \, k_1 \neq k_2$), the SCM can be approximated as (the proof is omitted due to space limitations)
\vspace{-0.05in}
\begin{equation}
\mathbf{R} \approx \norm{ \mathbf{X} }_F^2 \sum_{k=0}^{K-1} \abs{\alpha_k}^2 \abs{ \mathbf{a}_{\rm{T}} ^T(\theta_k) \mathbf{f}_{\rm{T}} }^2 \mathbf{a}_{\rm{R}} ^{*}(\theta_k) \mathbf{a}_{\rm{R}} ^T(\theta_k) + \sigma^2 { \boldsymbol{\mathrm{I}} }
\vspace{-0.05in}
\end{equation}
which is independent of target delays and Dopplers. Hence, creating the MUSIC spectrum from the SCM in \eqref{eq_SCM_obs} does not require target delay-Doppler information. Assuming $ N_{\rm{R}} > K$, let the eigendecomposition of the SCM be denoted as $ \mathbf{R} = \mathbf{U} _s \mathbf{\Lambda} _s \mathbf{U} ^H_s + \mathbf{U} _n \mathbf{\Lambda} _n \mathbf{U} ^H_n$, where the diagonal matrix $ \mathbf{\Lambda} _s$ contains the $K$ largest eigenvalues, $ \mathbf{\Lambda} _n$ contains the remaining $ N_{\rm{R}} -K$ eigenvalues, and $ \mathbf{U} _s$ and $ \mathbf{U} _n$ have the corresponding eigenvectors as their columns. Then, the MUSIC spectrum can be computed as
\vspace{-0.05in}
\begin{align} \label{eq_spatial_spectrum}
f(\theta) &= \frac{ 1 }{ \mathbf{a}_{\rm{R}} ^T(\theta) \mathbf{U} _n \mathbf{U} _n^H \mathbf{a}_{\rm{R}} ^{*}(\theta) } ~.
\vspace{-0.05in}
\end{align}
Let $ \mathcal{S} = \{ { \widehat{\theta} }_0, \ldots, { \widehat{\theta} }_{K-1} \}$ be the set of estimated angles in Step~1, which correspond to the peaks of the MUSIC spectrum\footnote{To prevent performance degradation at low SNRs due to spurious peaks and misidentification of signal and noise subspaces, improved versions of MUSIC can be employed, e.g., \cite{SSMUSIC_2002}.} in \eqref{eq_spatial_spectrum}.
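The Step~1 pipeline (SCM, eigendecomposition, noise-subspace spectrum) can be sketched end-to-end on a toy single-target scene. This is our own illustration; the array size, snapshot count and angle grid are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N_R, K, n_snap = 8, 1, 200
theta_true = np.deg2rad(25.0)

def a_rx(theta):  # RX ULA steering vector with d = lambda/2 spacing
    return np.exp(1j * np.pi * np.arange(N_R) * np.sin(theta))

# Rows of Y are spatial snapshots s_t * a_R^T(theta) plus noise,
# mimicking the row structure of the space/fast-time snapshots
s = rng.standard_normal((n_snap, 1)) + 1j * rng.standard_normal((n_snap, 1))
Y = s @ a_rx(theta_true)[None, :]
Y += 0.1 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))

R = Y.conj().T @ Y                       # sample SCM (sum over snapshots)
eigvals, V = np.linalg.eigh(R)           # eigenvalues in ascending order
U_n = V[:, :N_R - K]                     # noise subspace: N_R - K smallest

# MUSIC pseudo-spectrum f(theta) = 1 / (a^T U_n U_n^H a^*)
grid = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
spec = np.array([1.0 / np.real(a_rx(t) @ U_n @ U_n.conj().T @ a_rx(t).conj())
                 for t in grid])
theta_hat = grid[np.argmax(spec)]
assert abs(np.rad2deg(theta_hat) - 25.0) <= 1.0
```

Note that the conjugation pattern $\mathbf{a}_{\rm{R}}^T(\theta) \mathbf{U}_n \mathbf{U}_n^H \mathbf{a}_{\rm{R}}^{*}(\theta)$ matches the SCM convention above, whose signal subspace is spanned by $\mathbf{a}_{\rm{R}}^{*}(\theta)$.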
\subsection{Step 2: Joint Doppler/CFO and Unstructured Channel Estimation via APES Beamforming}\vspace{-0.1in}
In Step~2, we formulate a joint Doppler/CFO and channel estimation problem for each ${ \widehat{\theta} } \in \mathcal{S} $, assuming the existence of a single target at a given angle. Invoking the assumption of spatially non-overlapping targets, we treat interference from the other targets as noise and consider a single-target model in \eqref{eq_ybbar} for each ${ \widehat{\theta} } \in \mathcal{S} $. To that end, let
\begin{equation}\label{eq_H_dec}
\mathbf{H} = \left[ \mathbf{h} _0 \, \ldots \, \mathbf{h} _{M-1} \right] \in \complexset{L}{M}
\end{equation}
denote the unstructured, single-target radar channels in the time domain with $L$ taps, collected over $M$ OFDM symbols. Here, $L \leq N T_{\rm{cp}} / T$ due to the CP requirement. Based on this unstructured representation, \eqref{eq_ybbar} can be re-written as
\vspace{-0.05in}
\begin{align} \label{eq_ym_all_single2}
\widebar{\boldY} _m = \mathbf{D} (\nu) \widebar{\boldX} _m \mathbf{h} _m \mathbf{a}_{\rm{R}} ^T({ \widehat{\theta} }) + \widebar{ \mathbf{Z} } _{m}
\vspace{-0.05in}
\end{align}
where $ \widebar{\boldX} _m \triangleq \mathbf{F} _N^{H} \diag{ \mathbf{x} _m} \mathbf{F} _{N,L}$, $ \mathbf{F} _{N,L} \in \complexset{N}{L}$ denotes the first $L$ columns of $ \mathbf{F} _N$ and $ \widebar{ \mathbf{Z} } _{m}$ contains noise and interferences from other targets in $ \mathcal{S} $. According to \eqref{eq_ybbar}, the frequency-domain radar channels have the form
\begin{equation}\label{eq_H_def}
\mathbf{F} _{N,L} \mathbf{H} = \widebar{\alpha} \, \mathbf{b} (\tau) \mathbf{c} ^{H}(\nu)
\end{equation}
with $ \widebar{\alpha} \triangleq \alpha \, \mathbf{a}_{\rm{T}} ^T({ \widehat{\theta} }) \mathbf{f}_{\rm{T}} $ representing the complex channel gain including the transmit beamforming effect. \vspace{-0.1in}
\begin{remark}[Duality Between OFDM Communications and OFDM Radar in \eqref{eq_ym_all_single2}]\label{remark_down_comm}
Based on the observation that radar targets can be interpreted as uncooperative users from a communications perspective (as they transmit information to the radar receiver via reflections in an unintentional manner \cite{jointRadCom_review_TCOM,chiriyath2017radar}), we point out an interesting duality between the OFDM radar signal model with ICI in \eqref{eq_ym_all_single2} and an OFDM communications model with CFO (e.g., \cite[Eq.~(5)]{multiCFO_TWC_2018} and \cite[Eq.~(4)]{zhang2014blind}). Precisely, $ \mathbf{D} (\nu)$ represents CFO between the OFDM transmitter and receiver for a communications setup, while it quantifies the ICI effect due to high-speed targets for OFDM radar. Similarly, $ \widebar{\boldX} _m$ represents data/pilot symbols for communications and probing signals for radar\footnote{For radar sensing, every symbol acts as a pilot due to dual-functional operation on a single hardware platform.}. In addition, $ \mathbf{h} _m$ represents the time-domain channel for communications and the structured (delay-Doppler parameterized) channel for radar.
\end{remark}
\vspace{-0.1in}
In light of Remark~\ref{remark_down_comm}, we re-formulate the radar delay-Doppler estimation problem as a communication channel estimation problem, where the objective is to jointly estimate the unstructured time-domain channels $ \mathbf{H} $ and the CFO $\nu$ from \eqref{eq_ym_all_single2}. To perform channel estimation in \eqref{eq_ym_all_single2}, we propose an APES-like beamformer \cite{Est_MIMO_radar_2008}
\begin{align} \label{eq_apes}
\mathop{\mathrm{min}}\limits_{ \mathbf{w} , \mathbf{H} , \nu} &~~ \sum_{m=0}^{M-1}
\normbig{ \widebar{\boldY} _m \mathbf{w} ^{*} - \mathbf{D} (\nu) \widebar{\boldX} _m \mathbf{h} _m }^2 \\ \nonumber
\mathrm{s.t.}&~~ \mathbf{w} ^H \mathbf{a}_{\rm{R}} ({ \widehat{\theta} }) = 1
\end{align}
where $ \mathbf{w} \in \complexset{ N_{\rm{R}} }{1}$ is the APES spatial beamforming vector for an estimated angle ${ \widehat{\theta} } \in \mathcal{S} $. The optimal channel estimate for the $\thnew{m}$ symbol in \eqref{eq_apes} for a given $ \mathbf{w} $ and $\nu$ is given by
\begin{equation}\label{eq_hm_est}
\widehat{ \mathbf{h} } _m = \Big( \widebar{\boldX} _m^H \widebar{\boldX} _m \Big)^{-1} \widebar{\boldX} _m^H \mathbf{D} ^H(\nu) \widebar{\boldY} _m \mathbf{w} ^{*} ~.
\end{equation}
Plugging \eqref{eq_hm_est} back into \eqref{eq_apes} yields
\begin{align} \label{eq_apes2}
\mathop{\mathrm{min}}\limits_{ \mathbf{w} , \nu} &~~ \mathbf{w} ^T \mathbf{Q} (\nu) \mathbf{w} ^{*} ~~~~~~ \mathrm{s.t.}~~ \mathbf{w} ^H \mathbf{a}_{\rm{R}} ({ \widehat{\theta} }) = 1
\end{align}
where
$ \mathbf{Q} (\nu) \triangleq \mathbf{R} - \mathbf{\Sigma} (\nu)$
is the residual SCM, $ \mathbf{R} $ is the SCM of the observed data cube in \eqref{eq_SCM_obs} and
\begin{equation}\label{eq_cfo_covariance}
\mathbf{\Sigma} (\nu) \triangleq \sum_{m=0}^{M-1} \Big( \projrange{ \widebar{\boldX} _m} \mathbf{D} ^H(\nu) \widebar{\boldY} _m \Big)^H \Big( \projrange{ \widebar{\boldX} _m} \mathbf{D} ^H(\nu) \widebar{\boldY} _m \Big)
\end{equation}
is the SCM of the CFO-compensated observed data component that lies in the subspace spanned by the columns of the pilot symbol matrices $ \widebar{\boldX} _m$.
For a given CFO $\nu$, the optimal beamformer in \eqref{eq_apes2} can be obtained in closed-form as \cite{Est_MIMO_radar_2008}
\begin{equation}\label{eq_what}
\widehat{ \mathbf{w} } = \frac{ \mathbf{Q} ^{*}(\nu)^{-1} \mathbf{a}_{\rm{R}} ({ \widehat{\theta} }) }{ \mathbf{a}_{\rm{R}} ^H({ \widehat{\theta} }) \mathbf{Q} ^{*}(\nu)^{-1} \mathbf{a}_{\rm{R}} ({ \widehat{\theta} }) }~.
\end{equation}
Substituting \eqref{eq_what} into \eqref{eq_apes2}, the CFO can be estimated as
\begin{equation}\label{eq_nuhat}
{ \widehat{\nu} } = \arg \max_{\nu} ~~ \mathbf{a}_{\rm{R}} ^H({ \widehat{\theta} }) \mathbf{Q} ^{*}(\nu)^{-1} \mathbf{a}_{\rm{R}} ({ \widehat{\theta} }) ~.
\end{equation}
Finally, plugging \eqref{eq_what} and \eqref{eq_nuhat} into \eqref{eq_hm_est}, the channel estimates can be expressed as
\begin{equation}\label{eq_hm_est2}
\widehat{ \mathbf{h} } _m = \frac{ \Big( \widebar{\boldX} _m^H \widebar{\boldX} _m \Big)^{-1} \widebar{\boldX} _m^H \mathbf{D} ^H({ \widehat{\nu} }) \widebar{\boldY} _m \mathbf{Q} ({ \widehat{\nu} })^{-1} \mathbf{a}_{\rm{R}} ^{*}({ \widehat{\theta} }) }{ \mathbf{a}_{\rm{R}} ^T({ \widehat{\theta} }) \mathbf{Q} ({ \widehat{\nu} })^{-1} \mathbf{a}_{\rm{R}} ^{*}({ \widehat{\theta} }) }~.
\end{equation}
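Putting Step~2 together, the following toy numpy sketch simulates the single-target model of \eqref{eq_ym_all_single2}, forms $ \mathbf{Q} (\nu) = \mathbf{R} - \mathbf{\Sigma} (\nu)$ per \eqref{eq_cfo_covariance}, and grid-searches the CFO objective of \eqref{eq_nuhat}. All dimensions are deliberately small, and the search grid is an assumption of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, M, N_R = 16, 4, 8, 4
f_c, delta_f = 60e9, 24.41e3
T = 1 / delta_f; T_sym = T               # cyclic prefix ignored in this sketch
nu_true = 2 * 40 / 3e8                   # normalized Doppler of a 40 m/s target
aR = np.exp(1j * np.pi * np.arange(N_R) * np.sin(np.deg2rad(20.0)))

F = np.fft.fft(np.eye(N), norm="ortho")  # unitary DFT matrix F_N

def D(nu):                               # ICI phase-rotation matrix
    return np.diag(np.exp(2j * np.pi * f_c * (T / N) * np.arange(N) * nu))

# Simulate Ybar_m = D(nu) Xbar_m h_m aR^T + noise for M symbols
Ys, Xs = [], []
for m in range(M):
    x = rng.choice([1.0, -1.0], N) + 1j * rng.choice([1.0, -1.0], N)
    Xbar = F.conj().T @ np.diag(x) @ F[:, :L]
    h = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    Ym = D(nu_true) @ np.outer(Xbar @ h, aR)
    Ym += 0.05 * (rng.standard_normal(Ym.shape) + 1j * rng.standard_normal(Ym.shape))
    Ys.append(Ym); Xs.append(Xbar)

R = sum(Ym.conj().T @ Ym for Ym in Ys)   # SCM of the observed data

def cfo_objective(nu):                   # aR^H Q^*(nu)^{-1} aR with Q = R - Sigma(nu)
    Sigma = np.zeros((N_R, N_R), complex)
    for Ym, Xbar in zip(Ys, Xs):
        P = Xbar @ np.linalg.pinv(Xbar)  # orthogonal projector onto range(Xbar)
        A = P @ D(nu).conj().T @ Ym
        Sigma += A.conj().T @ A
    Q = R - Sigma
    return np.real(aR.conj() @ np.linalg.solve(Q.conj(), aR))

grid = np.linspace(0.0, 2 * nu_true, 81)
nu_hat = grid[np.argmax([cfo_objective(nu) for nu in grid])]
assert abs(nu_hat - nu_true) <= grid[1] - grid[0]
```

At the true CFO, $\mathbf{D}^H(\nu)$ removes the ICI ramp so that the signal falls entirely inside the span of $ \widebar{\boldX} _m$; the residual $ \mathbf{Q} (\nu)$ then shrinks in the $\mathbf{a}_{\rm{R}}$ direction and the objective peaks.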
\subsection{Step 3: Delay-Doppler Recovery from Unstructured Channel Estimates}\vspace{-0.1in}
Given the unstructured channel estimates $ \widehat{ \mathbf{H} } \triangleq \left[ \widehat{ \mathbf{h} } _0, \ldots, \widehat{ \mathbf{h} } _{M-1} \right]$ obtained in \eqref{eq_hm_est2}, we aim to estimate channel gain, delay and Doppler shift via a least-squares (LS) approach by exploiting the structure in \eqref{eq_H_def} as follows:
\begin{align} \label{eq_apes_step2}
\mathop{\mathrm{min}}\limits_{\alpha, \tau, \nu} &~~
\normbig{ \mathbf{F} _{N,L} \widehat{ \mathbf{H} } - \widebar{\alpha} \, \mathbf{b} (\tau) \mathbf{c} ^H(\nu) }_F^2 ~.
\end{align}
In \eqref{eq_apes_step2}, delay and Doppler estimates ${ \widehat{\tau} }$ and ${ \widehat{\nu} }$ can be obtained simply via 2-D FFT (i.e., IFFT and FFT across the columns and rows of $ \mathbf{F} _{N,L} \widehat{ \mathbf{H} } $, respectively). Then, channel gain can be estimated as $ \widehat{ \widebar{\alpha} } = \mathbf{b} ^H({ \widehat{\tau} }) \mathbf{F} _{N,L} \widehat{ \mathbf{H} } \mathbf{c} ({ \widehat{\nu} }) / (\norm{ \mathbf{b} ({ \widehat{\tau} })}^2 \norm{ \mathbf{c} ({ \widehat{\nu} })}^2 )$. The overall algorithm is summarized in Algorithm~\ref{alg_apes}.
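Because the structured channel in \eqref{eq_H_def} is a single outer product, the 2-D FFT recovery in \eqref{eq_apes_step2} reduces to a peak search. A self-contained numpy check of this step (our illustration; the parameters are chosen so the true delay and Doppler fall exactly on FFT bins):

```python
import numpy as np

N, M = 64, 32
f_c, delta_f = 60e9, 24.41e3
T = 1 / delta_f; T_sym = T               # cyclic prefix ignored in this sketch
k_tau, k_nu = 10, 5                      # chosen so the peaks fall on FFT bins
tau = k_tau / (N * delta_f)
nu = k_nu / (M * f_c * T_sym)

b = np.exp(-2j * np.pi * np.arange(N) * delta_f * tau)     # b(tau)
c = np.exp(-2j * np.pi * f_c * np.arange(M) * T_sym * nu)  # c(nu)
G = (0.7 + 0.2j) * np.outer(b, c.conj())  # ideal F_{N,L} H = alpha b(tau) c^H(nu)

# IFFT across the fast-frequency (delay) axis, FFT across the slow-time axis
profile = np.fft.fft(np.fft.ifft(G, axis=0), axis=1)
n_hat, m_hat = np.unravel_index(np.argmax(np.abs(profile)), profile.shape)
assert (n_hat, m_hat) == (k_tau, k_nu)

tau_hat = n_hat / (N * delta_f)
nu_hat = m_hat / (M * f_c * T_sym)
assert np.isclose(tau_hat, tau) and np.isclose(nu_hat, nu)
```

Off-bin delays and Dopplers would spread over adjacent bins; zero-padding or local interpolation around the peak refines the estimates in that case.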
\begin{algorithm}
\caption{APES Filtering for ICI-Aware MIMO-OFDM Radar}
\label{alg_apes}
\begin{algorithmic}
\State \textbf{Input:} Space/fast-time/slow-time data cube $\{ \mathbf{Y} _i\}_{i=0}^{ N_{\rm{R}} -1}$ in \eqref{eq_ym_all_multi}.
\State \textbf{Output:} Delay-Doppler-angle triplets $\{ ({ \widehat{\tau} }_k, { \widehat{\nu} }_k, { \widehat{\theta} }_k) \}_{k=0}^{K-1}$.
\State \textbf{Step~1:} Estimate target angles by identifying the peaks in the MUSIC spatial spectrum in \eqref{eq_spatial_spectrum}.
\State \textbf{Step~2:} For each estimated angle ${ \widehat{\theta} }$:
\State \hskip1.0em Estimate the Doppler/CFO from \eqref{eq_nuhat}.
\State \hskip1.0em Estimate the time-domain channels $ \widehat{ \mathbf{H} } $ via \eqref{eq_hm_est2}.
\State \textbf{Step~3:} For each estimated angle ${ \widehat{\theta} }$, estimate delay-Doppler in \eqref{eq_apes_step2} from the unstructured channel estimates $ \widehat{ \mathbf{H} } $.
\end{algorithmic}
\normalsize
\end{algorithm}
\vspace{-0.2in}
\section{Simulation Results}\label{sec_sim}
\vspace{-0.1in}
To demonstrate the performance of Algorithm~\ref{alg_apes}, we consider an OFDM system with $ f_c = 60 \, \rm{GHz}$, $B = 50 \, \rm{MHz}$, $N = 2048$, $ \Delta f = 24.41 \, \rm{kHz}$, $ T_{\rm{sym}} = 51.2 \, \rm{\mu s}$, $M = 64$, $ N_{\rm{T}} = 8$ and $ N_{\rm{R}} = 8$.
The data symbols $ \mathbf{X} $ are randomly generated from the QPSK alphabet and the transmit beamformer is set to point towards $30^\circ$, i.e., $ \mathbf{f}_{\rm{T}} = \mathbf{a}_{\rm{T}} ^{*}(30^\circ)$.
To illustrate the output of Algorithm~\ref{alg_apes}, we first consider a scenario consisting of $K=3$ targets with the ranges $(60, 100, 150) \, \rm{m}$, the velocities $(-60, 30, 120) \, \rm{m/s}$, the angles $(10^\circ, 25^\circ, 45^\circ)$ and the SNRs (i.e., $\abs{\alpha_k}^2/\sigma^2$) $(30, 15, 25) \, \rm{dB}$. Fig.~\ref{fig_music_step1} shows the MUSIC spectrum \eqref{eq_spatial_spectrum} obtained in Step~1 of Algorithm~\ref{alg_apes}.
In addition, Fig.~\ref{fig_range_vel_step3} demonstrates the range-velocity profiles obtained in Step~3 for each target angle along with the results of standard 2-D FFT \cite{RadCom_Proc_IEEE_2011}. It is observed that the proposed algorithm can successfully separate multiple target reflections in the angular domain, estimate their Dopplers/CFOs for ICI elimination and accurately recover delays and Dopplers.
\begin{figure}
\centering
\vspace{-0.2in}
\includegraphics[width=0.8\linewidth]{Figures/MUSIC_BF2.eps}
\vspace{-0.1in}
\caption{\footnotesize MUSIC spatial spectrum of OFDM radar in Step~1 along with the results of ordinary beamforming (BF) $ \mathbf{a}_{\rm{R}} ^T(\theta) \mathbf{R} \mathbf{a}_{\rm{R}} ^{*}(\theta)$.}
\label{fig_music_step1}
\vspace{-0.15in}
\end{figure}
Second, we compare the performance of Algorithm~\ref{alg_apes} with the 2-D FFT benchmark \cite{RadCom_Proc_IEEE_2011} in a single-target scenario with $(R, v, \theta) = (80\, \rm{m}, 70\, \rm{m/s}, 30^\circ)$ and an SNR of $-5 \, \rm{dB}$ using $M=8$ symbols. Since no previous estimator exists for ICI-aware MIMO-OFDM radar, 2-D FFT is applied to the fast-time/slow-time snapshot obtained by receive beamforming of the data cube towards the true target angle. Fig.~\ref{fig_single_target_comparison} shows the range and velocity RMSEs of Algorithm~\ref{alg_apes} and the 2-D FFT benchmark with respect to SNR and target velocity over $100$ Monte Carlo noise realizations. As seen from the figure, ICI-induced high side-lobe levels in the delay-Doppler domain significantly degrade the performance of the 2-D FFT algorithm, while the proposed algorithm can suppress the ICI effect by exploiting the signal structure within an APES beamforming framework.
\begin{figure}
\centering
\vspace{-0.15in}
\includegraphics[width=1.1\linewidth]{Figures/rv_apes_2d_fft.eps}
\vspace{-0.3in}
\caption{\footnotesize Range-velocity profiles obtained by standard 2-D FFT after receive beamforming towards $10^\circ$ and those obtained by Algorithm~\ref{alg_apes} in Step~3 as the output of 2-D FFT of target-specific frequency-domain channel estimates for Target~1, Target~2 and Target~3.}
\label{fig_range_vel_step3}
\vspace{-0.1in}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.82\linewidth]{Figures/aa.eps} \vspace{-0.15in}
\caption{Range and velocity estimation performance of Algorithm~\ref{alg_apes} and the 2-D FFT benchmark.}
\label{fig_single_target_comparison}
\vspace{-0.1in}
\end{figure}
\vspace{-0.15in}
\section{Conclusion}\vspace{-0.15in}
This paper addresses the parameter estimation problem for a MIMO-OFDM radar in the presence of non-negligible ICI caused by high-mobility targets. Based on an APES spatial filtering approach, a novel delay-Doppler-angle estimation algorithm is proposed by re-formulating radar sensing as a communication channel estimation problem. Simulation results show that the proposed algorithm enables high-accuracy multi-target parameter estimation under strong ICI by separating individual target reflections in the angular domain.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\begin{abstract}
In this paper, we describe a Bayesian deep neural network (DNN) for predicting FreeSurfer segmentations of structural MRI volumes, in minutes rather than hours. The network was trained and evaluated on a large dataset (n = 11,480), obtained by combining data from more than a hundred different sites, and also evaluated on another completely held-out dataset (n = 418). The network was trained using a novel spike-and-slab dropout-based variational inference approach. We show that, on these datasets, the proposed Bayesian DNN outperforms previously proposed methods, in terms of the similarity between the segmentation predictions and the FreeSurfer labels, and the usefulness of the estimated uncertainty of these predictions. In particular, we demonstrate that the prediction uncertainty of this network at each voxel is a good indicator of whether the network has made an error and that the uncertainty across the whole brain can predict the manual quality control ratings of a scan. The proposed Bayesian DNN method should be applicable to any new network architecture for addressing the segmentation problem.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} brain segmentation, deep learning, magnetic resonance imaging, bayesian neural networks, variational inference, automated quality control}
\end{abstract}
\section{Introduction}
Identifying which voxels in a structural magnetic resonance imaging (sMRI) volume correspond to different brain structures (i.e. segmentation) is an essential processing step in neuroimaging analyses. These segmentations are often generated using the FreeSurfer package \citep{fischl2012FreeSurfer}, a process which can take a day or more for each subject \citep{Runtimes}. The computational resources for doing this at a scale of hundreds to thousands of subjects are beyond the capabilities of the computational resources available to most researchers. This has led to an interest in the use of deep neural networks as a general approach for learning to predict the outcome of a processing task, given the input data, in a much shorter time period than the processing would normally take. In particular, several deep neural networks have been trained to perform segmentation of brain sMRI volumes \citep{ronneberger2015u,roy2018quicknat,fedorov2017end,fedorov2017almost,li2017compactness,dolz20183d}, taking between a few seconds and a few minutes per volume. These networks predict a manual or an automated segmentation from the structural volumes (\cite{roy2018quicknat}, \cite{fedorov2017end}, \cite{fedorov2017almost}, and \cite{dolz20183d} used FreeSurfer, and \cite{petersen2010alzheimer} used GIF \citep{cardoso2015geodesic}).
These networks, however, have been trained on a limited number (on the order of hundreds) of examples from a limited number of sites (i.e. locations and/or scanners), which can lead to poor cross-site generalization for complex segmentation tasks with a large number of classes \citep{mcclure2018distributed}. This includes several of the recent DNNs proposed for fine-grain sMRI segmentation. (Note: We focus on DNNs which predict $>$30 classes.)
\cite{roy2018quicknat} performed 33 class segmentation using 581 sMRI volumes from the IXI dataset to train an initial model and then fine-tuned on 28 volumes from the MALC dataset \citep{marcus2007open}. They showed an approximately 9.4\% average Dice loss on out-of-site data from the ADNI-29 \citep{mueller2005alzheimer}, CANDI \citep{kennedy2012candishare}, and IBSR \citep{rohlfing2012image} datasets.
\cite{fedorov2017almost} used 770 sMRI volumes from HCP \citep{van2013wu} to train an initial model and then fine-tuned on 7 volumes from the FBIRN dataset \citep{keator2016function}. \cite{li2017compactness} performed a 160 class segmentation using 443 sMRI volumes from the ADNI dataset \citep{petersen2010alzheimer} for training. \cite{fedorov2017almost} and \cite{li2017compactness} did not report test results for sites that were not used during training.
These results show that it is possible to train a neural network to carry out segmentation of a sMRI volume. However, they provide a limited indication of whether such a network would work on data from any new site not encountered in training. While fine-tuning on labelled data from new sites can improve performance, even while using small amounts of data \citep{fedorov2017almost,roy2018quicknat,mcclure2018distributed}, a robust neural network segmentation tool should generalize to new sites without any further effort.
As part of the process of adding segmentation capabilities to the ``Nobrainer'' tool \footnote{\url{https://github.com/neuronets/nobrainer}}, we trained a network to predict FreeSurfer segmentations given a training set of $\sim$10,000 sMRI volumes. This paper describes this process, as well as a quantitative and qualitative evaluation of the performance of the resulting model.
Beyond the segmentation performance of the network, a second aspect of interest to us is to understand whether it is feasible for a network to indicate how confident it is about its prediction at each location in the brain. We expect the network to make errors, be it because of noise, unusual positioning of the brain, very different contrast than what it was trained on, etc. Because our model is probabilistic and seeks to learn uncertainties, we expect it to be less confident in its predictions in such cases. It is also possible that, for certain locations, there are very similar brain structures labelled as different regions in different people. In such locations, there would be a limit to how well the network could perform, the Bayes error rate \citep{hastie2005elements}. Additionally, the network should be less confident for examples that are very different from those seen in the training set (e.g., contain large artifacts). While prediction uncertainty can be computed for standard neural networks, as done by \cite{dolz20183d}, these uncertainty estimates are often overconfident \citep{guo2017calibration,mcclure2017representing}. Bayesian neural networks (BNNs) have been proposed as a solution to this issue. One popular BNN approach is Monte-Carlo (MC) Bernoulli Dropout \citep{srivastava2014dropout,gal2016dropout}. Using this method, \cite{li2017compactness,roy2018bayesian} showed that the segmentation performance of the BNN predictions was better for voxels with low dropout sampling-based uncertainties and that injected input noise can lead to increased uncertainty. \cite{roy2018bayesian} also found that using MC Bernoulli dropout decreased the drop in segmentation performance from 9.4\% to 7.8\% on average when testing on data from new sites compared to \cite{roy2018quicknat}. However, MC Bernoulli dropout does not learn dropout probabilities from data, which can lead to not properly modeling the uncertainty of the predicted segmentation.
Recent work has shown that these dropout probabilities can be learned using a concrete relaxation \citep{gal2017concrete}. Additionally, learning individual uncertainties for each weight has been shown to be beneficial for many purposes (e.g. pruning and continual learning) \citep{blundell2015weight,nguyen2018variational,mcclure2018distributed}. In this paper, we propose using both learned dropout uncertainties and individual weight uncertainties.
Finally, we test the hypothesis that overall prediction uncertainty across an entire image reflects its ``quality'', as measured by human quality control (QC) scores. Given the effort required to produce such scores, there have been multiple attempts to either crowdsource the process \citep{Keshavan363382} or automate it \citep{esteban2017mriqc}. The latter, in particular, does not rely on segmentation information, so we believe it is worthwhile to test whether uncertainty derived from segmentation is more effective.
\section{Methods}
\begin{table} [h!]
\centering
\begin{tabular}{|c|c|}
\hline
{\bf Dataset} & {\bf Number of Examples} \\
\hline
CoRR \citep{zuo2014open} & 3,039 \\
\hline
OpenfMRI \citep{poldrack2013toward} & 1,873 \\
\hline
NKI \citep{nooner2012nki} & 1,136 \\
\hline
SLIM \citep{liu2017longitudinal} & 1,003 \\
\hline
ABIDE \citep{di2014autism} & 992 \\
\hline
HCP \citep{van2013wu} & 956 \\
\hline
ADHD200 \citep{bellec2017neuro} & 719 \\
\hline
CMI \citep{alexander2017open} & 611 \\
\hline
SALD \citep{wei2018structural} & 477 \\
\hline
Buckner \citep{biswal2010toward} & 183 \\
\hline
HBNSSI \citep{o2017healthy} & 178 \\
\hline
GSP \citep{holmes2015brain} & 152 \\
\hline
Haxby \citep{haxby2011common,nastase2017attention} & 55 \\
\hline
Gobbini \citep{di2017neural} & 51 \\
\hline
ICBM \citep{mazziotta2001probabilistic} & 45 \\
\hline
Barrios \citep{barrios} & 10 \\
\hline
\end{tabular}
\caption{The number of examples used from different datasets.}
\label{n}
\end{table}
\subsection{Data}
\subsubsection{Imaging Datasets}
We combined several datasets (Table \ref{n}), many of which themselves contain data from multiple sites, into a single dataset with 11,480 T1 sMRI volumes. In-site validation and test sets were created from the combined dataset using an 80-10-10 training-validation-test split. This resulted in a training set of 9,184 volumes, a validation set of 1,148 volumes, and a test set of 1,148 volumes. The training set was used for training the networks, the validation set for setting DNN hyperparameters (e.g., Bernoulli dropout probabilities), and the test set was used for evaluating the performance of the DNNs on new data from the same sites that were used for training.
We additionally used 418 sMRI volumes from the NNDSP dataset \citep{lee2018automated} as a held-out dataset to test generalization of the network to an unseen site. Each NNDSP sMRI volume was also given a QC score from 1 to 4 by two raters (with a third rater if the two scores differed by more than 1), higher scores corresponding to worse scan quality, as described in \cite{blumenthal2002motion}. If a volume had a QC score greater than 2, it was labeled as a bad quality scan; otherwise, it was labeled as a good quality scan.
\subsubsection{Segmentation Target}
We computed 50-class FreeSurfer \citep{fischl2012FreeSurfer} segmentations, as in \cite{fedorov2017almost}, for all subjects in each of the datasets described earlier. These were used as the labels for prediction. Although FreeSurfer segmentations may not be perfectly correct compared to manual, expert segmentations, using them allowed us to create a large training dataset, which could not feasibly be labeled by hand.
FreeSurfer-trained networks can also outperform FreeSurfer segmentations when compared to expert segmentations \citep{roy2018quicknat}.
The trained network could be fine-tuned with small amounts of expert-labeled data, which would likely improve the results \citep{roy2018quicknat,mcclure2018distributed}.
\subsubsection{Data Pre-processing}
The sMRI volumes were resampled to 1mm isotropic cubic volumes of 256 voxels per side and the voxel intensities were normalized according to FreeSurfer's mri\_convert with the conform flag. After resampling, input volumes were individually z-scored across voxels. We then split each sMRI volume into 512 non-overlapping $32\times32\times32$ sub-volumes, similarly to \citep{fedorov2017end,fedorov2017almost}, to be used as inputs for the neural network. The prediction target is the corresponding segmentation sub-volume. This resulted in 512 pairs, $({\mathbf{x}}, {\mathbf{y}})$, of sMRI and label sub-volumes, respectively, for each sMRI volume.
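The sub-volume extraction described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation (the function name \texttt{extract\_subvolumes} is ours), not the actual Nobrainer pipeline code:

```python
import numpy as np

def extract_subvolumes(volume, size=32):
    """Split a cubic volume into non-overlapping size^3 sub-volumes.

    For a (256, 256, 256) input, 256/32 = 8 blocks per axis,
    giving 8^3 = 512 sub-volumes, as described above.
    """
    d = volume.shape[0]
    assert volume.shape == (d, d, d) and d % size == 0
    n = d // size
    return (volume
            .reshape(n, size, n, size, n, size)
            .transpose(0, 2, 4, 1, 3, 5)   # group the three block indices
            .reshape(-1, size, size, size))

# z-score across voxels, then split
vol = np.random.rand(256, 256, 256).astype(np.float32)
vol = (vol - vol.mean()) / vol.std()
subvols = extract_subvolumes(vol)
```

The reshape/transpose trick avoids an explicit triple loop over block indices.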
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Layer & Filter & Padding & Dilation ($l$) & Non-linearity \\
\hline
1 & $96\textnormal{x}3^3$ & 1 & 1 & ReLU \\
\hline
2 & $96\textnormal{x}3^3$ & 1 & 1 & ReLU \\
\hline
3 & $96\textnormal{x}3^3$ & 1 & 1 & ReLU \\
\hline
4 & $96\textnormal{x}3^3$ & 2 & 2 & ReLU \\
\hline
5 & $96\textnormal{x}3^3$ & 4 & 4 & ReLU \\
\hline
6 & $96\textnormal{x}3^3$ & 8 & 8 & ReLU \\
\hline
7 & $96\textnormal{x}3^3$ & 1 & 1 & ReLU \\
\hline
8 & $50\textnormal{x}1^3$ & 0 & 1 & Softmax \\
\hline
\end{tabular}
\caption{The MeshNet dilated convolutional neural network architecture used for brain segmentation.}
\label{Arch}
\end{table}
\subsection{Convolutional Neural Network}
\subsubsection{Architecture}
Several deep neural network architectures have been proposed for brain segmentation, such as U-net \citep{ronneberger2015u}, QuickNAT \citep{roy2018quicknat}, HighResNet \citep{li2017compactness} and MeshNet \citep{fedorov2017end,fedorov2017almost}. We chose MeshNet because of its relatively simple structure, its lower number of learned parameters, and its competitive performance, since the computational cost of Bayesian neural networks scales based on structural complexity and number of parameters.
MeshNet uses dilated convolutional layers \citep{yu2015multi} due to the 3D structural nature of sMRI data. Applying a discrete volumetric dilated convolutional layer to one input channel for one weight filter can be expressed as:
\begin{equation}
({\mathbf{w}}_f*_l{\mathbf{h}})_{i,j,k} = \sum_{\tilde{i}=-a}^a \sum_{\tilde{j}=-b}^b \sum_{\tilde{k}=-c}^c w_{f,\tilde{i},\tilde{j},\tilde{k}}h_{i-l\tilde{i},j-l\tilde{j},k-l\tilde{k}} = ({\mathbf{w}}_f*_l{\mathbf{h}})_{\mathbf{v}} = \sum_{{\mathbf{t}} \in \mathcal{W}_{abc}} w_{f,{\mathbf{t}}}h_{{\mathbf{v}}-l{\mathbf{t}}}.
\end{equation}
\noindent where ${\mathbf{h}}$ is the input to the layer; $a$, $b$, and $c$ are the bounds for the $i$, $j$, and $k$ axes of the filter with weights ${\mathbf{w}}_f$; and $(i, j, k)$ is the voxel, ${\mathbf{v}}$, at which the convolution is computed. The set of indices for the elements of ${\mathbf{w}}_f$ can be defined as $\mathcal{W}_{abc} = \{-a,...,a\}\times\{-b,...,b\}\times\{-c,...,c\}$. The dilation factor, number of filters, and other details of the MeshNet-like architecture that we used for all experiments are shown in Table \ref{Arch}. Note that we increased the number of filters per layer from 72 to 96, compared to \cite{fedorov2017almost} and \cite{mcclure2018distributed}, since we greatly increased the number of training volumes.
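The dilated convolution equation can be checked against a direct, unvectorized NumPy translation. The helper below is purely illustrative (one filter, one input channel, out-of-bounds positions treated as zero), not the network's implementation:

```python
import numpy as np

def dilated_conv_at_voxel(w, h, v, l):
    """Direct evaluation of (w *_l h)_v for one filter and one input
    channel, per the equation above. `w` has odd shape (2a+1, 2b+1, 2c+1);
    input positions outside `h` are treated as zero (zero padding)."""
    a, b, c = (s // 2 for s in w.shape)
    out = 0.0
    for ti in range(-a, a + 1):
        for tj in range(-b, b + 1):
            for tk in range(-c, c + 1):
                i, j, k = v[0] - l * ti, v[1] - l * tj, v[2] - l * tk
                if all(0 <= x < s for x, s in zip((i, j, k), h.shape)):
                    out += w[ti + a, tj + b, tk + c] * h[i, j, k]
    return out

h = np.arange(27, dtype=float).reshape(3, 3, 3)
ident = np.zeros((3, 3, 3)); ident[1, 1, 1] = 1.0  # identity filter
shift = np.zeros((3, 3, 3)); shift[0, 1, 1] = 1.0  # reads h at offset +l on axis i
```

With dilation $l>1$, the filter taps are spaced $l$ voxels apart, enlarging the receptive field without adding weights.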
\subsubsection{Maximum a Posteriori Estimation}
When training a neural network, the weights of the network, ${\mathbf{w}}$, are often learned using maximum likelihood estimation (MLE). For MLE, $\log p(\mathcal{D}|{\mathbf{w}})$ is maximized where $\mathcal{D} = \{({\mathbf{x}}_1, {\mathbf{y}}_1),...,({\mathbf{x}}_N, {\mathbf{y}}_N)\}$ is the training dataset and $({\mathbf{x}}_n, {\mathbf{y}}_n)$ is the $n$th input-output example. This often overfits, however, so we used a prior on the network weights, $p({\mathbf{w}})$, to obtain a maximum a posteriori (MAP) estimate, by optimizing $\log p({\mathbf{w}}|\mathcal{D})$:
\begin{equation}
\label{L_MAP}
{\mathbf{w}}^* = \underset{{\mathbf{w}}}{\mathrm{argmax}}\sum^N_{n=1} \log p({\mathbf{y}}_n|{\mathbf{x}}_n,{\mathbf{w}}) + \log p({\mathbf{w}}).
\end{equation}
We used a fully factorized Gaussian prior (i.e. $p(w_{f,\tilde{i},\tilde{j},\tilde{k}}) = \mathcal{N}(0,1)$). This results in the MAP weights being learned by minimizing the softmax cross-entropy with L2 regularization. At test time, this point estimate approximation, ${\mathbf{w}}^*$, is used to make a prediction for new examples:
\begin{equation}
p({\mathbf{y}}_{test}|{\mathbf{x}}_{test}) \approx p({\mathbf{y}}_{test}|{\mathbf{x}}_{test},{\mathbf{w}}^*)
\end{equation}
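Under the unit Gaussian prior, maximizing Eq. \ref{L_MAP} is equivalent to minimizing softmax cross-entropy plus an L2 penalty. A minimal NumPy sketch of that objective (illustrative names, not the training code):

```python
import numpy as np

def map_loss(logits, labels, weights, l2=1.0):
    """Negative log-posterior (up to constants): softmax cross-entropy
    from -log p(y|x,w), plus 0.5*l2*||w||^2 from the N(0,1) prior."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels].sum()
    l2_term = 0.5 * l2 * sum(float((w ** 2).sum()) for w in weights)
    return nll + l2_term

logits = np.zeros((4, 2))            # uniform predictions over 2 classes
labels = np.array([0, 1, 0, 1])
loss = map_loss(logits, labels, weights=[np.ones(2)])
```

For uniform logits the cross-entropy term is $N\log 2$, so the toy loss is $4\log 2 + 1$.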
\begin{figure} [b]
\centering
\includegraphics[width=0.8\textwidth]{mc_sampling_w_labels.png}
\caption{Illustration of generating a prediction from a Bayesian neural network using Monte Carlo sampling (modified from \cite{blundell2015weight}). A standard neural network ({\bf A}, top left) has one weight for each of its connections (${\mathbf{w}}^*$), learned from the training set and used in generating a prediction for a test example. A Bayesian neural network ({\bf B}, bottom left) has, instead, a posterior distribution for each weight, parameterized by theta ($q_{\theta}({\mathbf{w}})$). The process of training starts with an assigned prior distribution for each weight, and returns an approximate posterior distribution. At test time ({\bf C}, right), a weight sample ${\bf w_1}$ (red) is drawn from the posterior distribution of the weights, and the resulting network is used to generate a prediction $p(y|x,{\bf w_1})$ for an example $x$. The same can be done for samples ${\bf w_2}$ (blue) and ${\bf w_3}$ (green), yielding predictions $p(y|x,{\bf w_2})$ and $p(y|x,{\bf w_3})$, respectively. The three networks are treated as an ensemble and their predictions averaged.}
\label{fig:mc_sampling}
\end{figure}
\subsubsection{Approximate Bayesian Inference}
In Bayesian inference for neural networks, a distribution of possible weights is learned instead of just a MAP point estimate. Using Bayes' rule, $p({\mathbf{w}}|\mathcal{D})=p(\mathcal{D}|{\mathbf{w}})p({\mathbf{w}})/p(\mathcal{D})$, where $p({\mathbf{w}})$ is the prior over weights. However, directly computing the posterior, $p({\mathbf{w}}|\mathcal{D})$, is often intractable, particularly for DNNs. As a result, an approximate inference method must be used.
One of the most popular approximate inference methods for neural networks is variational inference, since it scales well to large DNNs. In variational inference, the posterior distribution $p({\mathbf{w}}|\mathcal{D})$ is approximated by a learned variational distribution of weights $q_\theta({\mathbf{w}})$, with learnable parameters $\theta$. This approximation is enforced by minimizing the Kullback-Leibler divergence (KL) between $q_\theta({\mathbf{w}})$, and the true posterior, $p({\mathbf{w}}|\mathcal{D})$, ${\mathrm{KL}}[q_\theta({\mathbf{w}})||p({\mathbf{w}}|\mathcal{D})]$, which measures how $q_\theta({\mathbf{w}})$ differs from $p({\mathbf{w}}|\mathcal{D})$ using relative entropy. This is equivalent to maximizing the variational lower bound \citep{hinton1993keeping,graves2011practical,blundell2015weight,kingma2015variational,gal2016dropout,molchanov2017variational,louizos2017multiplicative}, also known as the evidence lower bound (ELBO),
\begin{equation}
\label{L_ELBO}
\mathcal{L}_{ELBO}(\theta) = \mathcal{L}_{\mathcal{D}}(\theta) - \mathcal{L}_{KL}(\theta),
\end{equation}
where $\mathcal{L}_{\mathcal{D}}(\theta)$ is
\begin{equation}
\label{L_D}
\mathcal{L}_{\mathcal{D}}(\theta) = \sum^N_{n=1} \mathbb{E}_{q_\theta({\mathbf{w}})}[\log p({\mathbf{y}}_n|{\mathbf{x}}_n,{\mathbf{w}})]
\end{equation}
and $\mathcal{L}_{KL}(\theta)$ is the KL divergence between the variational distribution of weights and the prior,
\begin{equation}
\label{L_KL}
\mathcal{L}_{KL}(\theta) = {\mathrm{KL}}[q_\theta({\mathbf{w}})||p({\mathbf{w}})]
\end{equation}
\noindent which measures how $q_\theta({\mathbf{w}})$ differs from $p({\mathbf{w}})$ using relative entropy.
Maximizing $\mathcal{L}_{\mathcal{D}}$ seeks to learn a $q_\theta({\mathbf{w}})$ that explains the training data, while minimizing $\mathcal{L}_{KL}$ (i.e. keeping $q_\theta({\mathbf{w}})$ close to $p({\mathbf{w}})$) prevents learning a $q_\theta({\mathbf{w}})$ that overfits to the training data.
The objective function in Eq. \ref{L_ELBO} is usually impractical to compute for deep neural networks, due to both: (1) being a full-batch approach and (2) integrating over $q_\theta({\mathbf{w}})$. (1) is often dealt with by using stochastic mini-batch optimization \citep{robbins1951stochastic} and (2) is often approximated using Monte Carlo sampling. As discussed in \cite{graves2011practical,kingma2015variational}, these methods can be used to perform stochastic gradient variational Bayes (SGVB) in deep neural networks. For each parameter update, an unbiased estimate of $\nabla_\theta \mathcal{L}_{\mathcal{D}}$ for a mini-batch, $\{({\mathbf{x}}_1,{\mathbf{y}}_1),...,({\mathbf{x}}_M,{\mathbf{y}}_M)\}$, is calculated using one weight sample, ${\mathbf{w}}_m$, from $q_\theta({\mathbf{w}})$ for each mini-batch example. This results in the following approximation to Eq. \ref{L_ELBO}:
\begin{equation}
\mathcal{L}_{ELBO}(\theta) \approx \mathcal{L}_{\mathcal{D}}^{SGVB}(\theta) - \mathcal{L}_{KL}(\theta),
\end{equation}
where
\begin{equation}
\mathcal{L}_{\mathcal{D}}(\theta) \approx \mathcal{L}_{\mathcal{D}}^{SGVB}(\theta) = \frac{N}{M} \sum_{m = 1}^M \log p({\mathbf{y}}_m|{\mathbf{x}}_m,{\mathbf{w}}_m).
\end{equation}
At test time, the weights, ${\mathbf{w}}$, would ideally be marginalized out, $p({\mathbf{y}}_{test}|{\mathbf{x}}_{test}) = \int p({\mathbf{y}}_{test}|{\mathbf{x}}_{test},{\mathbf{w}}) q_{\theta}({\mathbf{w}}) d{\mathbf{w}}$, when making a prediction for a new example. However, this is often impractical to compute for DNNs, so a Monte-Carlo approximation is often used. This results in the prediction for a new example being made by averaging the predictions of multiple weight samples from $q_{\theta}({\mathbf{w}})$ (Figure \ref{fig:mc_sampling}):
\begin{equation}
p({\mathbf{y}}_{test}|{\mathbf{x}}_{test}) \approx \frac{1}{N_{MC}}\sum_n^{N_{MC}}p({\mathbf{y}}_{test}|{\mathbf{x}}_{test},{\mathbf{w}}_n)
\end{equation}
\noindent where ${\mathbf{w}}_n \sim q_{\theta}({\mathbf{w}})$.
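This Monte Carlo prediction average can be sketched as follows; the toy \texttt{sample\_weights} and \texttt{predict\_fn} below are stand-ins for drawing from $q_{\theta}({\mathbf{w}})$ and running the network, respectively:

```python
import numpy as np

def mc_predict(sample_weights, predict_fn, x, n_samples=10):
    """Approximate p(y|x) by averaging predictions over weight samples
    drawn from the variational posterior, as in the equation above."""
    preds = [predict_fn(x, sample_weights()) for _ in range(n_samples)]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)

def sample_weights():
    # stand-in for w ~ q_theta(w): a single Gaussian weight
    return rng.normal(loc=1.0, scale=0.1)

def predict_fn(x, w):
    # stand-in network: logistic output as a two-class distribution
    p = 1.0 / (1.0 + np.exp(-w * x))
    return np.array([1.0 - p, p])

p_hat = mc_predict(sample_weights, predict_fn, x=2.0, n_samples=100)
```

Because each sample's prediction is a valid distribution, the average is too.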
\paragraph{MC Bernoulli Dropout}
For MC Bernoulli dropout (BD) \citep{gal2016dropout}, we drew weights from $q_\theta({\mathbf{w}})$ by drawing a Bernoulli random variable $b_{i,j,k}\sim Bern(p_{l})$, where $i,j,k$ are the indices of the volume axes, for every element of the input ${\mathbf{h}}$ of layer $l$, and then elementwise multiplying ${\mathbf{b}}$ and ${\mathbf{h}}$ before applying the next dilated convolutional layer. This effectively sets the filter weights to zero when applied to a dropped element. \cite{gal2016dropout} approximated the KL divergence between this Bernoulli variational distribution and a zero-mean Gaussian by replacing the variational distribution with a mixture of Gaussians, resulting in an L2-like penalty. However, this can lead to pathological behaviour due to Bernoulli distributions not having support over all real numbers \citep{hron2018variational}. In Bernoulli dropout, $p_{l}$ codes for the uncertainty of the weights and is often set layerwise via hyperparameter search. (For our experiments, we found the best value of $p_{l}$ to be 0.9 after searching over the values of 0.95, 0.9, 0.75, and 0.5 using the validation set.) However, Bayesian models would ideally learn how uncertain to be for each weight.
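The test-time mask sampling can be illustrated as below; this is a sketch in which \texttt{keep\_prob} plays the role of $p_l$, not the network code itself:

```python
import numpy as np

def mc_bernoulli_dropout(h, keep_prob, rng):
    """Sample one Bernoulli variable per element of the layer input and
    zero out dropped elements. Unlike standard dropout, the mask is also
    sampled at test time, and predictions are averaged over samples."""
    b = (rng.random(h.shape) < keep_prob).astype(h.dtype)
    return h * b

rng = np.random.default_rng(0)
h = np.ones((32, 32, 32))
out = mc_bernoulli_dropout(h, keep_prob=0.9, rng=rng)
```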
\paragraph{Spike-and-Slab Dropout with Learned Model Uncertainty}
We propose a form of dropout that both learns the dropout probability for each filter, using a concrete relaxation of dropout \citep{gal2017concrete}, and an individual uncertainty for each weight, using fully factorized Gaussian (FFG) filters \citep{graves2011practical,blundell2015weight,molchanov2017variational,nguyen2018variational,mcclure2018distributed}. This is in contrast to previous spike-and-slab dropout methods, which did not learn the model (or epistemic) uncertainty \citep{der2009aleatory,kendall2017uncertainties} from data, either by learning the dropout probabilities or by learning the variance parameters of the Gaussian components of the weights \citep{mcclure2017representing}. In our proposed method, we assume that each of the $F$ filters is independent (i.e. $p({\mathbf{w}}) = \prod_{f=1}^F p({\mathbf{w}}_f)$), as done in previous FFG methods \citep{graves2011practical,blundell2015weight,molchanov2017variational,nguyen2018variational,mcclure2018distributed}. We then decompose each filter into a dropout-based component, $b_f$, and a Gaussian component, ${\mathbf{g}}_f$, such that ${\mathbf{w}}_f = b_f {\mathbf{g}}_f$. Per this decomposition, we perform variational inference on the joint distribution of $\{b_1,...,b_F,{\mathbf{g}}_1,...,{\mathbf{g}}_F\}$, instead of on $p({\mathbf{w}})$ directly \citep{titsias2011spike,mcclure2017representing}. We then assume each element of ${\mathbf{g}}_f$ is independent (i.e. $p({\mathbf{g}}_f) = \prod_{{\mathbf{t}} \in \mathcal{W}_{abc}} p(g_{f,{\mathbf{t}}})$), and that each weight is Gaussian (i.e. $g_{f,{\mathbf{t}}} \sim \mathcal{N}(\mu_{f,{\mathbf{t}}},\sigma_{f,{\mathbf{t}}}^2)$) with learned parameters $\mu_{f,{\mathbf{t}}}$ and $\sigma_{f,{\mathbf{t}}}$. Instead of drawing each $b_{f}$ from $Bern(p_{l})$, we draw them from a concrete distribution \citep{gal2017concrete} with a learned dropout probability, $p_{f}$, for each filter:
\begin{equation}
b_{f}=\mathrm{sigmoid}\Big( \frac{1}{t} \big(\log p_{f} - \log(1 - p_{f}) + \log u - \log(1 - u)\big) \Big)
\end{equation}
\noindent where $u \sim Unif(0,1)$. This concrete distribution converges to the Bernoulli distribution as the sigmoid scaling parameter, $t$, goes to zero. (In this paper, we used $t=0.02$.) As discussed in \cite{kingma2015variational} and \cite{molchanov2017variational}, randomly sampling each $g_{f,{\mathbf{t}}}$ for each mini-batch example can be computationally expensive, so we used the fact that the sum of independent Gaussian variables is also Gaussian to move the noise from the weights to the convolution operation, as in \cite{mcclure2018distributed}. For dilated convolutions and the proposed spike-and-slab variational distribution, this is described by:
\begin{equation}
({\mathbf{w}}_f*_l {\mathbf{h}})_{\mathbf{v}} = b_f ({\mathbf{g}}_f *_l {\mathbf{h}})_{\mathbf{v}}
\end{equation}
where
\begin{equation}
({\mathbf{g}}_f *_l {\mathbf{h}})_{\mathbf{v}} \sim \mathcal{N}(\mu_{f,{\mathbf{v}}}^*,(\sigma_{f,{\mathbf{v}}}^*)^2),
\label{eq:w_f}
\end{equation}
\begin{equation}
\mu_{f,{\mathbf{v}}}^* = \sum_{{\mathbf{t}} \in \mathcal{W}_{abc}}\mu_{f,{\mathbf{t}}}h_{{\mathbf{v}}-l{\mathbf{t}}},
\end{equation}
and
\begin{equation}
(\sigma_{f,{\mathbf{v}}}^*)^2 = \sum_{{\mathbf{t}} \in \mathcal{W}_{abc}}\sigma_{f,{\mathbf{t}}}^2h_{{\mathbf{v}}-l{\mathbf{t}}}^2.
\end{equation}
For this spike-and-slab dropout (SSD) implementation, we used a spike-and-slab prior, instead of the Gaussian prior used by \cite{gal2016dropout} and \cite{gal2017concrete}. Using a spike-and-slab prior with MC Bernoulli dropout was discussed in \cite{gal2016uncertainty}, but not implemented. As in the variational distribution, each filter is independent in the prior. Per the spike-and-slab decomposition discussed above, the KL-divergence term of the ELBO can be written as
\begin{equation}
\mathcal{L}_{KL}(\theta) = \sum_{f=1}^F {\mathrm{KL}}[q_{p_f}(b_f)q_{{\mathbf{\mu}},{\mathbf{\sigma}}}({\mathbf{g}}_f)||p(b_f)p({\mathbf{g}}_f)],
\end{equation}
\noindent where $\theta = \bigcup_f^F \bigcup_{{\mathbf{t}} \in \mathcal{W}_{abc}} \{p_f,\mu_{f,{\mathbf{t}}},\sigma_{f,{\mathbf{t}}}\}$ are the learned parameters and $p(b_f)$ and $p({\mathbf{g}}_f)$ are priors. Assuming that each weight in a filter is independent, as commonly done in the literature \citep{graves2011practical,blundell2015weight,nguyen2018variational}, allows the term to be rewritten as
\begin{equation}
\mathcal{L}_{KL}(\theta) = \sum_{f=1}^F ({\mathrm{KL}}[q_{p_f}||p(b_f)] + \sum_{{\mathbf{t}} \in \mathcal{W}_{abc}} {\mathrm{KL}}[q_{{\mathbf{\mu}},{\mathbf{\sigma}}}(g_{f,{\mathbf{t}}})||p(g_{f,{\mathbf{t}}})]) .
\end{equation}
\noindent For ${\mathrm{KL}}[q_{p_f}||p(b_f)]$, we used the KL-divergence between two Bernoulli distributions,
\begin{equation}
{\mathrm{KL}}[q_{p_f}(b_f)||p(b_f)] = p_f \log\frac{p_f}{p_{prior}} + (1-p_f) \log\frac{1-p_f}{1-p_{prior}},
\end{equation}
\noindent since we used a relatively small sigmoid scaling parameter. Using $p(g_{f,{\mathbf{t}}}) = \mathcal{N}(\mu_{prior},\sigma_{prior}^2)$,
\begin{equation}
{\mathrm{KL}}[q_{{\mathbf{\mu}},{\mathbf{\sigma}}}(g_{f,{\mathbf{t}}})||p(g_{f,{\mathbf{t}}})] = \log\frac{\sigma_{prior}}{\sigma_{f,{\mathbf{t}}}} + \frac{\sigma_{f,{\mathbf{t}}}^2 + (\mu_{f,{\mathbf{t}}}-\mu_{prior})^2}{2\sigma_{prior}^2} - \frac{1}{2}. \end{equation}
\noindent For this paper, the spike-and-slab prior parameters were set as $p_{prior} = 0.5$, $\mu_{prior} = 0$, and $\sigma_{prior} = 0.1$. $p_{prior} = 0.5$ corresponds to a maximum entropy prior (i.e. in the absence of new data, be maximally uncertain). Alternatively, a $p_{prior}$ close to $0$ is a sparsity prior (i.e. in the absence of data, do not use a filter).
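Both closed-form KL terms above are straightforward to compute; an illustrative NumPy version:

```python
import numpy as np

def kl_bernoulli(p_q, p_prior):
    """KL[Bern(p_q) || Bern(p_prior)], per the equation above."""
    return (p_q * np.log(p_q / p_prior)
            + (1 - p_q) * np.log((1 - p_q) / (1 - p_prior)))

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    """KL[N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2)], per the equation above."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)
```

Both terms are zero exactly when the variational factor matches the prior, and positive otherwise.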
\subsection{Implementation Details}
The DNNs were implemented using Tensorflow \citep{abadi2016tensorflow}. During training, the parameters of each DNN were updated using Adam \citep{kingma2014adam} with an initial learning rate of 1e-4. A mini-batch size of 32 subvolumes with data parallelization across four 12GB NVIDIA Titan X Pascal GPUs was used for training, and a mini-batch size of 8 subvolumes on one 12GB NVIDIA Titan X Pascal GPU was used for validation and testing.
\subsection{Quantifying performance}
\subsubsection{Segmentation performance measure}
To measure the quality of the produced segmentations, we calculated the Dice coefficient, which is defined by
\begin{equation}
Dice_c = \frac{2|\hat{{\mathbf{y}}}_c \cdot {\mathbf{y}}_c|}{||\hat{{\mathbf{y}}}_c||^2 + ||{\mathbf{y}}_c||^2} = \frac{2TP_c}{2TP_c + FN_c + FP_c},\end{equation}
\noindent where $\hat{{\mathbf{y}}}_c$ is the binary segmentation for class $c$ produced by a network, ${\mathbf{y}}_c$ is the ground truth produced by FreeSurfer, and $TP_c$, $FN_c$, and $FP_c$ are the numbers of true positives, false negatives, and false positives for class $c$, respectively. We calculate the Dice coefficient separately for each class $c=1,\ldots,50$, and average across classes to compute the overall performance of a network for one sMRI volume.
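A reference implementation of the per-class Dice computation, operating on integer label arrays (illustrative only):

```python
import numpy as np

def dice_per_class(pred, target, n_classes=50):
    """Dice coefficient for each class from integer label volumes,
    using 2*TP / (2*TP + FN + FP); classes absent from both volumes
    are returned as NaN so they can be excluded from averages."""
    dices = np.full(n_classes, np.nan)
    for c in range(n_classes):
        p, t = pred == c, target == c
        denom = p.sum() + t.sum()
        if denom > 0:
            dices[c] = 2.0 * np.logical_and(p, t).sum() / denom
    return dices

# toy example: 1-D label arrays with three possible classes
pred = np.array([0, 0, 1, 1])
target = np.array([0, 1, 1, 1])
d = dice_per_class(pred, target, n_classes=3)
```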
\subsubsection{Uncertainty measure}
We quantify the uncertainty of a prediction, $p({\mathbf{y}}_{m,c}|x_m)$, using the aleatoric uncertainty \citep{der2009aleatory,kendall2017uncertainties}, measured as the entropy of the softmax across the 50 output classes,
\begin{equation}
H({\mathbf{y}}_{m,c}|x_m) = -\sum_{c=1}^{50} p({\mathbf{y}}_{m,c}|x_m) \ \log p({\mathbf{y}}_{m,c}|x_m).
\end{equation}
\noindent We calculate the uncertainty for each output voxel separately, and the uncertainty for one sMRI volume by averaging across all output voxels not classified as background (i.e. given the unknown label).
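The voxelwise entropy is a one-liner over the class axis; a sketch (the small \texttt{eps} guards against $\log 0$):

```python
import numpy as np

def voxelwise_entropy(probs, eps=1e-12):
    """Entropy of the predictive distribution at each voxel; `probs`
    has shape (..., n_classes) and sums to 1 over the last axis."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

uniform = np.full((2, 50), 1.0 / 50)  # maximally uncertain voxels
onehot = np.eye(50)[:3]               # fully confident voxels
```

Entropy ranges from 0 (a one-hot prediction) to $\log 50$ (uniform over the 50 classes).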
\begin{table} [h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
Method & In-site & Out-of-site \\
\hline
MAP & 0.7790 $\pm$ 0.0576 & 0.7333 $\pm$ 0.0498 \\
\hline
BD & 0.7764 $\pm$ 0.0506 & 0.7369 $\pm$ 0.0474 \\
\hline
SSD & 0.8373 $\pm$ 0.0471 & 0.7921 $\pm$ 0.0444 \\
\hline
\end{tabular}
\caption{The average and standard deviation of the class Dices across test volumes for the maximum a posteriori (MAP), MC Bernoulli dropout (BD), and spike-and-slab dropout (SSD) network on the in-site and out-of-site test sets.}
\label{dices}
\end{table}
\section{Results}
\subsection{Segmentation performance}
We trained MAP, MC Bernoulli Dropout (BD), and Spike-and-Slab Dropout (SSD) Meshnet-like CNNs on the 9,184 sMRI volumes in the training set. We then applied our networks to produce segmentations for both the in-site test set and the out-of-site test data. For the BD and SSD networks, 10 MC samples were used for test predictions. The means and standard deviations across volumes for the average Dice across all 50 classes are shown in Table \ref{dices}. Dice scores for each label for the in-site and out-of-site test sets are shown in Figures \ref{dices_in} and \ref{dices_out}, respectively. We found that, compared to MAP and BD, SSD significantly increased the Dice for both the in-site ($p<1e-6$) and out-of-site ($p<1e-6$) test sets, per a paired t-test across test volumes. We found that SSD had a 5.7\% drop in performance from the in-site test set to the out-of-site test set, whereas the MAP network had a drop of 6.2\% and BD a drop of 5.4\%. This is better than the drops of 9.4\% and 7.8\% on average reported in the literature by \cite{roy2018quicknat} and \cite{roy2018bayesian}, respectively. In Figures \ref{fig:test} and \ref{fig:nndsp}, we show selected example segmentations for the SSD network for volumes that have Dice scores similar to the average Dice score across the respective dataset.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{dices_in_small.png}
\caption{Average Dice scores and standard errors across in-site test volumes for each label for the maximum a posteriori (MAP), MC Bernoulli dropout (BD), and spike-and-slab dropout (SSD) networks.}\label{dices_in}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{dices_out_small.png}
\caption{Average Dice scores and standard errors across out-of-site test volumes for each label for the maximum a posteriori (MAP), MC Bernoulli dropout (BD), and spike-and-slab dropout (SSD) networks.}\label{dices_out}
\end{figure}
\begin{figure}[!h]
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_anat_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_aseg_axial.png}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_pred_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_difference_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_entropy_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.05\textwidth}
\includegraphics[scale=0.072]{colorbar.png}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_anat_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_aseg_sagittal.png}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_pred_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_difference_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_entropy_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.05\textwidth}
\includegraphics[scale=0.072]{colorbar.png}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_anat_coronal.png}
\caption*{structural}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_aseg_coronal.png}
\caption*{FreeSurfer}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_pred_coronal.png}
\caption*{prediction}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_difference_coronal.png}
\caption*{error}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{test_entropy_coronal.png}
\caption*{uncertainty}
\end{subfigure}
~
\begin{subfigure}[b]{0.05\textwidth}
\includegraphics[scale=0.072]{colorbar.png}
\caption*{}
\end{subfigure}
\caption{In-site segmentation results for the spike-and-slab dropout (SSD) network for a test subject with average Dice performance. The columns show, respectively, the structural image used as input, the FreeSurfer segmentation used as a prediction target, the prediction made by our network, the voxels where there was a mismatch between prediction and target, and the prediction uncertainty at each voxel.}
\label{fig:test}
\end{figure}
\begin{figure}[!h]
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_anat_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_aseg_axial.png}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_pred_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_difference_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_entropy_axial.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.05\textwidth}
\includegraphics[scale=0.072]{colorbar.png}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_anat_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_aseg_sagittal.png}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_pred_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_difference_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_entropy_sagittal.png}
\end{subfigure}
~
\begin{subfigure}[b]{0.05\textwidth}
\includegraphics[scale=0.072]{colorbar.png}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_anat_coronal.png}
\caption*{structural}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_aseg_coronal.png}
\caption*{FreeSurfer}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_pred_coronal.png}
\caption*{prediction}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_difference_coronal.png}
\caption*{error}
\end{subfigure}
~
\begin{subfigure}[b]{0.16\textwidth}
\includegraphics[scale=0.25]{nndsp_entropy_coronal.png}
\caption*{uncertainty}
\end{subfigure}
~
\begin{subfigure}[b]{0.05\textwidth}
\includegraphics[scale=0.072]{colorbar.png}
\caption*{}
\end{subfigure}
\caption{Out-of-site segmentation results for the spike-and-slab dropout (SSD) network for a test subject with average Dice performance. The columns show, respectively, the structural image used as input, the FreeSurfer segmentation used as a prediction target, the prediction made by our network, the voxels where there was a mismatch between prediction and target, and the prediction uncertainty at each voxel.}
\label{fig:nndsp}
\end{figure}
\subsection{Utilizing Uncertainty}
\subsubsection{Predicting segmentation errors from uncertainty}
Ideally, an increase in DNN prediction uncertainty indicates an increase in the probability that that prediction is incorrect. To evaluate whether this is the case for the trained brain segmentation DNN, we performed a receiver operating characteristic (ROC) analysis. In this analysis, voxels are ranked from most uncertain to least uncertain and one considers, at each rank, what fraction of the voxels were also misclassified by the network. An ROC curve can then be generated by plotting the true positive rate vs.\ the false positive rate for different uncertainty thresholds used to predict misclassification. The area under this curve (AUC) typically summarizes the results of the ROC analysis. The average ROC and AUCs across volumes for MAP, BD, and SSD for the in-site and out-of-site test sets are shown in Figure \ref{entropy_to_error}. Compared to MAP and BD, SSD significantly improved the AUC for both the in-site ($p<1e-6$) and out-of-site ($p<1e-6$) test sets, per a paired t-test across test set volumes.
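The AUC for this analysis can be computed without explicitly sweeping thresholds, via the Mann-Whitney U statistic. The sketch below is our illustrative NumPy version, not the exact analysis code:

```python
import numpy as np

def auc_uncertainty_vs_error(uncertainty, is_error):
    """AUC for predicting misclassified voxels from their uncertainty,
    computed via the Mann-Whitney U statistic (equal to the area under
    the ROC curve obtained by sweeping an uncertainty threshold)."""
    u = np.asarray(uncertainty, dtype=float)
    e = np.asarray(is_error, dtype=bool)
    order = u.argsort()
    ranks = np.empty(len(u))
    ranks[order] = np.arange(1, len(u) + 1)
    for val in np.unique(u):            # tied values share their mean rank
        m = u == val
        ranks[m] = ranks[m].mean()
    n_pos, n_neg = e.sum(), (~e).sum()
    return (ranks[e].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# perfectly separable toy case: errors carry the highest uncertainty
auc_perfect = auc_uncertainty_vs_error([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
# uninformative case: all voxels equally uncertain
auc_chance = auc_uncertainty_vs_error([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])
```

An AUC of 1.0 means every misclassified voxel is more uncertain than every correct one; 0.5 is chance level.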
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{entropy_to_errors_in.png}
\includegraphics[width=0.4\textwidth]{entropy_to_errors_out.png}
\caption{Receiver operating characteristic (ROC) curves for predicting errors for the in-site and out-of-site test sets from the voxel uncertainty of the maximum a posteriori (MAP), MC Bernoulli dropout (BD), and spike-and-slab dropout (SSD) networks.}
\label{entropy_to_error}
\end{figure}
\subsubsection{Predicting scan quality from uncertainty}
Ideally, the output uncertainty for inputs not drawn from the training distribution should be relatively high. This could potentially be useful for a variety of applications. One particular application is the detection of bad quality sMRI scans, since the segmentation DNN was trained on relatively good quality scans. To test whether high vs low quality scans can be distinguished, we performed an ROC analysis on the held-out NNDSP dataset, for which manual quality control ratings are available. We also performed the same analysis using MRIQC (v0.10.5) \cite{esteban2017mriqc}, a recently published method that combines a wide range of automated QC algorithms. To statistically test whether any method significantly outperformed the others, we performed bootstrap sampling of the AUC for predicting scan quality from average uncertainty by sampling out-of-site test volumes. We performed 10,000 bootstrap samples, each with 418 volumes. The average ROC and AUC for the MAP, BD, SSD, and MRIQC methods are shown in Figure \ref{qc_roc}. The MAP, BD, and SSD networks all have significantly higher AUCs than MRIQC ($p=1.369\times10^{-4}$, $p=1.272\times10^{-5}$, and $p=1.381\times10^{-6}$, respectively). Additionally, SSD had a significantly higher AUC than both MAP and BD ($p=1.156\times10^{-3}$ and $p=1.042\times10^{-3}$, respectively).
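The bootstrap comparison can be sketched as follows. This is a simplified version of the procedure in the text: it recomputes each method's quality-prediction AUC on volumes resampled with replacement and reports a one-sided exceedance fraction. All function names, arrays, and data here are hypothetical illustrations, not the paper's actual pipeline.

```python
import random

def auc(scores, labels):
    """Pairwise AUC: P(score_pos > score_neg), ties counting one half.
    Assumes both classes are present in `labels`."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_pvalue(u_a, u_b, quality, n_boot=2000, seed=0):
    """Fraction of volume-level bootstrap resamples in which method A's
    quality-prediction AUC fails to exceed method B's (a one-sided p-value).

    u_a, u_b: average per-volume uncertainty for two methods (same volumes)
    quality:  1 if the volume was manually flagged as low quality, else 0
    """
    rng = random.Random(seed)
    n = len(quality)
    worse = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        # Skip degenerate resamples containing only one quality class.
        if len({quality[i] for i in idx}) < 2:
            continue
        q = [quality[i] for i in idx]
        if auc([u_a[i] for i in idx], q) <= auc([u_b[i] for i in idx], q):
            worse += 1
    return worse / n_boot
```

With 418 volumes per resample, as in the text, class-degenerate resamples are vanishingly rare; the guard matters only for toy inputs.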
\section{Discussion}
Segmentation of structures in sMRI volumes is a critical pre-processing step in many neuroimaging analyses. However, these segmentations are currently generated using tools that can take a day or more per subject \citep{Runtimes}, such as FreeSurfer. This computational cost can be prohibitive when scaling analyses up from hundreds to thousands of subjects. DNNs have recently been proposed to perform sMRI segmentation in seconds to minutes. In this paper, we developed a Bayesian DNN, using spike-and-slab dropout, with the goals of increasing the similarity of the DNN's predictions to the FreeSurfer segmentations and generating useful uncertainty estimates for these predictions.
\begin{figure} [!h]
\centering
\includegraphics[width=0.5\textwidth]{entropy_to_qc.png}
\caption{Receiver operating characteristic (ROC) curves for predicting scan quality for the NNDSP out-of-site test set from the average non-background voxel uncertainty of the maximum a posteriori (MAP), MC Bernoulli dropout (BD), and spike-and-slab dropout (SSD) networks and from MRIQC scores.}\label{qc_roc}
\end{figure}
In order to evaluate the proposed Bayesian network, we trained a standard deep neural network (DNN), using MAP estimation, to predict FreeSurfer segmentations from structural MRI (sMRI) volumes. We trained on a little under 10,000 sMRIs, obtained by combining approximately 70 different datasets (many of which, in turn, contain images from several sites, e.g. NKI, ABIDE, ADHD200). We used a separate test set of more than 1,000 sMRIs, drawn from the same datasets. The resulting standard DNN performs at the same level as state-of-the-art networks \citep{fedorov2017almost}. This result, however, was obtained by testing on over an order of magnitude more test data, and many more sites, than those papers. We also tested performance on a completely separate dataset (NNDSP) from a site not encountered in training, which contained 418 sMRI volumes. While Dice performance dropped slightly, the drop was smaller than observed in other studies \citep{roy2018quicknat,roy2018bayesian}; this suggests that we may be achieving better generalization by training on our larger and more diverse dataset, and we plan on testing this on more datasets from novel sites in the future. This is particularly important to us, as this network is meant to be used within an off-the-shelf tool\footnote{\url{https://github.com/neuronets/nobrainer}}.
We demonstrated that the estimated uncertainty for the prediction at each voxel is a good indicator of whether the standard network makes an error there, both in-site and out-of-site. The tool that produces the predicted segmentation volume for an input sMRI will also produce an uncertainty volume. We anticipate this being useful at various levels, e.g. to refine other tools that rely on segmentation images, or to improve prediction models based on sMRI data (e.g. modifying the calculation of cortical thickness or surface area, or voxel selection and weighting in regression \citep{roy2018bayesian} or classification models).
We also demonstrated that the average prediction uncertainty across voxels in the brain is an excellent indicator of manual quality control ratings. Furthermore, it outperforms the best existing automated solution \citep{esteban2017mriqc}. Since automation is already used in large repositories (e.g. OpenMRI), we plan on offering our tool as an additional quality control measure.
Finally, we showed that a new Bayesian DNN using spike-and-slab dropout with learned model uncertainty was significantly better than previous approaches. This spike-and-slab method increased segmentation performance and improved the usefulness of output uncertainties compared both to a MAP DNN method and to an MC Bernoulli dropout method, which has previously been used in the brain segmentation literature \citep{li2017compactness,roy2018bayesian}. These results show that Bayesian DNNs are a promising method for building brain segmentation and automated sMRI quality control tools. We have also made a version of ``Nobrainer'', incorporating the networks trained and evaluated in this paper, available for download and use within a Singularity/Docker container\footnote{\url{https://github.com/neuronets/kwyk}}.
We believe segmentation performance can be improved further, given that we did not use registration. One option would be to use various techniques for data augmentation (e.g. variation of image contrast, since contrast is quite heterogeneous across scans, rotations/translations of existing examples, or addition of realistic noise). Another would be to eliminate the need to divide the brain into sub-volumes, which loses some global information; this will become more feasible on GPUs with more memory. Finally, we plan on post-processing the results (e.g. ensuring coherence between predictions for adjacent voxels, and leveraging off-the-shelf brain and tissue masking code).
\section*{Acknowledgments}
This research was supported (in part) by the Intramural Research Program of the NIMH (ZICMH002968). This work utilized the computational resources of the NIH HPC Biowulf cluster (\url{http://hpc.nih.gov}). JK's and SG's contribution was supported by NIH R01 EB020740.
\bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction}
Scalar-tensor models of gravity possess a vast phenomenology, and are widely studied within the context of inflation, dark energy, black hole phenomenology, dark matter and beyond \cite{Clifton:2011jh, Joyce:2014kja, Nojiri:2017ncd}. One interesting proposal is the existence of pseudo-vacuum solutions in which Lorentz invariance is partially broken, in such a way that the scalar degree of freedom does not relax to a constant vacuum expectation value on the background spacetime. Such states open the door to self-tuning mechanisms in which the dynamical degree(s) of freedom can cancel an arbitrary vacuum energy, leaving the metric unaffected \cite{Weinberg:1988cp, Padilla:2015aaa}. This idea was pioneered in Refs. \cite{Charmousis:2011bf, Charmousis:2011ea}, in which Minkowski space solutions were obtained despite the presence of an arbitrary vacuum energy. The scalar field equation derived from this so-called `Fab-Four' action possesses a particular structure such that it is trivially satisfied when the metric is exactly Minkowski space\footnote{More precisely, Milne spacetime.}. In this case, the scalar field remains dynamical and the Friedmann equation relates the scalar field dynamics to the vacuum energy \cite{Copeland:2012qf, Appleby:2015ysa, Babichev:2015qma, Kaloper:2013vta, Copeland:2021czt,Appleby:2012rx}. A different type of degeneracy was subsequently explored in Ref. \cite{Appleby:2018yci}, in which actions were constructed for which the scalar field and one of the Einstein equations are equivalent when the metric is de Sitter \cite{Emond:2018fvv, Appleby:2020njl, Linder:2020xey, Bernardo:2021hrz, Bernardo:2021izq, Linder:2022iqi} or Minkowski space \cite{Appleby:2020dko, Bernardo:2021bsg}. In this work we focus on the latter class of models, dubbed `well-tempering', and specifically those that admit Minkowski space vacuum solutions \cite{Appleby:2020dko, Bernardo:2021bsg}. 
Modern cosmology requires the existence of low energy de Sitter rather than Minkowski space, but in this work we treat Minkowski space as a useful test bed.
The requirement that an exact, static vacuum solution exists for the metric despite the presence of an arbitrarily large vacuum energy imposes stringent conditions on the form that the scalar-tensor action can possess \cite{Charmousis:2011bf, Appleby:2018yci}. Demanding that Minkowski space is a solution to the field equations regardless of the energy density of the vacuum overconstrains the dynamics, and this ansatz can only be realised if the field equations have some form of redundancy when it is imposed \cite{Appleby:2020dko}. This gives rise to a degeneracy condition, which imposes an exact relationship between different terms in the scalar field Lagrangian \cite{Appleby:2020dko, Bernardo:2021bsg}. However, fixing the Lagrangian precisely so that the model admits a flat spacetime solution can be interpreted as its own form of fine tuning, separate from the Cosmological Constant problem. It is therefore natural to question how the dynamics of this class of models changes if we relax the degeneracy condition, and what is the fate of the vacuum solutions. These questions are the focus of this work.
We focus on the simplest model in Ref. \cite{Appleby:2020dko} that can give rise to a Minkowski space solution, and then adjust the action such that the degeneracy condition is broken whilst preserving the core features of the model -- shift symmetry and Galilean invariance. We argue that the tadpole is crucial in eliminating dynamical fixed points from the system, ensuring that even if we do not impose the degeneracy condition, the metric does not relax to the standard Cosmological Constant-driven de Sitter fixed point. Instead, the expansion rate can evolve to a Minkowski space solution asymptotically. This behaviour is quite generic for the cubic Galileon model, subject to the presence of the tadpole and shift symmetry.
The paper will proceed as follows. In section \ref{sec:2} we introduce the action and field equations that will be used throughout this work, and elucidate the role of the tadpole in precluding the existence of de Sitter solutions. In section \ref{sec:3} we briefly review the degenerate Minkowski vacuum solutions obtained in Ref. \cite{Appleby:2020dko}. Taking a simple model as `fiducial', we relax the strict degeneracy condition in multiple ways in section \ref{sec:4}, finding that the Minkowski space solution is preserved in the asymptotic future. For balance, we include an example of breaking the degeneracy condition such that the Minkowski space solution is lost completely. We close with a discussion of our results and the future hurdles that this class of models must overcome to be considered as viable cosmological models. Related asymptotic de Sitter solutions have been recently constructed in the literature \cite{Khan:2022bxs}, which are similar in spirit to this work.
\textit{Supplementary Material.} A Mathematica notebook which can be used to reproduce the results of the paper can be downloaded from \href{https://github.com/reggiebernardo/notebooks}{GitHub} \cite{reggie_bernardo_4810864}.
\textit{Conventions.} We work with the mostly-plus metric signature $(-, +, +, +)$. A dot over a variable means a derivative with respect to the cosmic time $t$ while a prime corresponds to differentiation with respect to the dimensionless time $\tau$. Subscripts on the scalar potentials $K(X)$, $V(\phi)$, $G_3(X)$, and $F(\phi)$ denote differentiation with respect to their arguments $\phi$ or $X$.
\section{Cosmology with the Tadpole}
\label{sec:2}
Throughout this work we will use the following action and corresponding field equations, obtained after imposing a flat Friedmann-Lema\^{i}tre-Robertson-Walker metric:
\begin{equation} \label{eq:action} S = \int \sqrt{-g} d^{4}x \left[ {(M_{\rm pl}^{2} + F(\phi)) R \over 2} + K(X) + V(\phi) - G_{3}(X) \Box \phi - \lambda^{3}\phi - \Lambda + {\cal L}_{\rm m} \right] \end{equation}
\begin{eqnarray}
3H^2(\mpl +F)&=& \rho + \Lambda+2XK_X-K -V +6H\dot\phi XG_{3X}
-3 F_{\phi} H\dot\phi + \lambda^{3}\phi \label{eq:fullfried}\\
-2\dot H\,(\mpl +F)&=& \rho + P + \ddot\phi\,( F_{\phi} -2XG_{3X}) \nonumber \\
&\qquad& \phantom{ggggg} -H\dot\phi\,(F_{\phi} -6XG_{3X})
+2XK_X + 2X F_{\phi\phi} \label{eq:fulldh}\\
0&=&\ddot\phi\,\left[K_X+2XK_{XX}+6H\dot\phi(G_{3X}+XG_{3XX})\right]\notag\\
&\qquad&+3H\dot\phi\,K_X+\lambda^3 -V_{\phi} +6XG_{3X}(\dot H+3H^2)
-3 F_{\phi} (\dot H+2H^2) \label{eq:fullddphi}
\end{eqnarray}
\noindent where $K(X)$ and $G_{3}(X)$ are arbitrary functions of $X = -\left( \partial_\mu \phi \right) \left( \partial^\mu \phi \right) /2$ and $F(\phi)$, $V(\phi)$ are arbitrary functions of $\phi$. Subscripts denote differentiation with respect to that variable. Due to the importance of the tadpole $\lambda^{3}\phi$ in this work, we separate it from $V(\phi)$. We have included a perfect fluid contribution ${\cal L}_{m}$ with density and pressure $\rho$, $P = w \rho$. We will initially fix $\rho = P = 0$, but keep the Cosmological Constant $\Lambda$ arbitrary and non-zero. We re-introduce matter in section \ref{sec:matter}.
When faced with a dynamical system such as ($\ref{eq:fulldh}$) and ($\ref{eq:fullddphi}$), the first step is to determine the fixed points at which the fields approach constant values. The tadpole plays a unique role in the dynamics of $\phi$ and $H$, in that it generically prevents the relaxation of the fields to Cosmological Constant-driven vacuum expectation values. To see this, we turn to a simple example. We fix $F(\phi) = 0$, $V(\phi) = 0$, $G_{3}(X)=0$ and $K(X) = \epsilon X$, where $\epsilon$ is a dimensionless constant that can be rescaled to unity via a field redefinition (we decline to do so, retaining the freedom of choosing the sign of the kinetic term). The field equations are then particularly simple:
\begin{eqnarray}
3\mpl H^2 &=&\Lambda+\epsilon {\dot{\phi}^{2} \over 2}
+ \lambda^{3}\phi \label{eq:fried}\\
-2\mpl \dot H\, &=&
\,
\epsilon \,\dot{\phi}^{2} \label{eq:dh}\\
0&=& \epsilon \ddot\phi\, +3\epsilon H\dot\phi\, +\lambda^3
\, . \label{eq:ddphi}
\end{eqnarray}
\noindent If $\lambda = 0$, then the system admits an exact vacuum solution $3 \mpl H^{2} = \Lambda$, $\phi = 0$. When $\lambda \neq 0$ this is not a solution, and in fact it is clear that $H={\rm constant}$ is not a solution to this system: $\dot{H}=0$ implies $\dot{\phi} = 0$, which is inconsistent with the scalar field equation. Assuming there exists a solution to this system that is analytic about some $t_{0}$, which we take without loss of generality to be $t_{0}=0$, we can expand as
\begin{eqnarray} \phi(t) &=& \sum_{n=0}^{\infty} \phi_{n}t^{n} \\
H(t) &=& \sum_{n=0}^{\infty} H_{n} t^{n} \, .
\end{eqnarray}
\noindent The field equations, expanded up to ${\cal O}(t^{2})$, are
\begin{eqnarray} \label{eq:O0fr} {\cal O}(t^{0}) \qquad : \qquad & & 3M_{\rm pl}^{2} H_{0}^{2} = \Lambda + {\epsilon \over 2} \phi_{1}^{2} + \lambda^{3} \phi_{0} \\
\label{eq:O0dH} & & -2M_{\rm pl}^{2} H_{1} = \epsilon \phi_{1}^{2} \\
\label{eq:O0sfe} & & 2\epsilon \phi_{2} + 3\epsilon H_{0}\phi_{1} + \lambda^{3} = 0 \\
\label{eq:O1fr} {\cal O}(t) \qquad : \qquad & & 6M_{\rm pl}^{2} H_{0}H_{1} = 2\epsilon \phi_{1}\phi_{2} + \lambda^{3} \phi_{1} \\
\label{eq:O1dH} & & -4M_{\rm pl}^{2} H_{2} = 4\epsilon \phi_{1}\phi_{2} \\
\label{eq:O1sfe} & & 6\epsilon \phi_{3} + 3\epsilon \left(H_{1}\phi_{1} + 2 H_{0}\phi_{2}\right) = 0 \\
\label{eq:O2fr} {\cal O}(t^{2}) \qquad : \qquad & & 3M_{\rm pl}^{2} \left(H_{1}^{2} + 2H_{0}H_{2}\right) = {\epsilon \over 2} \left(4\phi_{2}^{2} + 6\phi_{1}\phi_{3}\right) + \lambda^{3} \phi_{2} \\
\label{eq:O2dH} & & -6M_{\rm pl}^{2} H_{3} = \epsilon \left(4\phi_{2}^{2} + 6\phi_{1}\phi_{3}\right) \\
\label{eq:O2sfe} & & 12\epsilon \phi_{4} + 3\epsilon \left(H_{2}\phi_{1} + 2H_{1}\phi_{2} \right) = 0 \, .
\end{eqnarray}
\noindent We must specify two initial conditions out of three degrees of freedom -- $H_{0}$, $\phi_{0}$ and $\phi_{1}$, then the final one is fixed by the order ${\cal O}(t^{0})$ Friedmann equation ($\ref{eq:O0fr}$). If we search for a $H = {\rm constant}$ solution, we must ensure that $H_{i} = 0$ for all $i > 0$. The $\dot{H}$ equation ($\ref{eq:O0dH}$) forces $\phi_{1} = 0$, which from ($\ref{eq:O0sfe}$) gives $\phi_{2} = -\lambda^{3}/2\epsilon$. $H_{2}$ is zero from ($\ref{eq:O1dH}$) -- $\phi_{1} = 0$ -- but $H_{3}$ is non-zero from ($\ref{eq:O2dH}$) -- $H_{3} = -2\epsilon\phi_{2}^{2}/(3M_{\rm pl}^{2})$. The time dependence of $H_{3}$ is sourced by $\phi_{2} \sim \lambda^{3}$, which confirms that in the presence of a tadpole there is no de Sitter solution. Once we abandon the idea of an exact de Sitter vacuum solution, we can take $\phi_{1} \neq 0$ in which case the time dependence of $H$ becomes a function of a combination of $\Lambda$, $\phi_{0}$ and $\phi_{1}$. One can always fix $H_{0} = 0$ as an initial condition, but since the metric is not static this seems to be an arbitrary choice.
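The absence of a de Sitter solution can also be checked numerically. The following sketch integrates the system (\ref{eq:fried})--(\ref{eq:ddphi}) with a hand-rolled fourth-order Runge--Kutta step, in units $M_{\rm pl} = \lambda = \epsilon = \Lambda = 1$ and with (hypothetical) initial data chosen to satisfy the Friedmann constraint: $H$ drifts away from its initial value, as the series argument above predicts.

```python
def rhs(state, eps=1.0, lam3=1.0, mpl2=1.0):
    """State = (phi, psi, H) with psi = dphi/dt, for the system
    eps dpsi/dt = -3 eps H psi - lam^3  and  -2 mpl^2 dH/dt = eps psi^2."""
    phi, psi, H = state
    return (psi,
            -3.0 * H * psi - lam3 / eps,
            -eps * psi * psi / (2.0 * mpl2))

def rk4(state, dt, n_steps):
    """Standard fourth-order Runge-Kutta integration of `rhs`."""
    for _ in range(n_steps):
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt * (a + 2*b + 2*c + d) / 6.0
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

# Initial data on the Friedmann constraint 3 H^2 = Lambda + psi^2/2 + phi
# with Lambda = 1, psi(0) = 0, H(0) = 1  =>  phi(0) = 3 H^2 - Lambda = 2.
phi, psi, H = rk4((2.0, 0.0, 1.0), dt=1e-3, n_steps=2000)
print(H)   # H has drifted below 1: no de Sitter fixed point
```

The Friedmann constraint is conserved by the evolution equations, so it also serves as a numerical accuracy check along the trajectory.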
Although we took a particularly simple example, we expect this behaviour to be generic. We will argue that static vacuum solutions are not generically present when $\lambda \neq 0$, although we discuss the generality of this statement below. The degeneracy condition considered in Ref. \cite{Appleby:2018yci} allows the metric to exist in a constant vacuum state, but the presence of the tadpole requires that the scalar field must remain dynamical. If we drop the degeneracy condition, then we can expect both the metric and scalar field to evolve.
As a counterpoint to the example in this section, cases arise where a static vacuum can be realized even when the tadpole is present. The caveat here is that these solutions cancel out the Cosmological Constant via fine tuning rather than by a dynamical mechanism. As an example, when the scalar field in the dynamical system (\ref{eq:fried}), (\ref{eq:dh}), and (\ref{eq:ddphi}) is granted a bare mass $m_\phi \neq 0$, the `constant field' solution
\begin{equation}
\left( 3 M_\text{pl}^2 H^2 = \Lambda - \dfrac{\lambda^6}{2m_\phi^2} \,, \quad \phi = - \dfrac{\lambda^3}{m_\phi^2} \right)
\end{equation}
is present. As expected, the tadpole contribution can eliminate the standard Cosmological Constant-driven vacuum state if $\lambda^{6} = 2\Lambda m_{\phi}^{2}$. However, for this fixed point to exist the Cosmological Constant must be cancelled by fine tuning, to produce a low energy (Minkowski) vacuum. This is a manifestation of Weinberg's no-go theorem which requires cancellation between disparate contributions to the Cosmological Constant \cite{Weinberg:1988cp, Padilla:2015aaa}. Such cases do not have the behaviour that we are searching for in this work -- a time dependent dynamical cancellation of $\Lambda$. Keeping this in mind, for the rest of the paper we focus on a model space for which degenerate vacuum solutions exist as asymptotic states.
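This fixed point can be verified directly. With the $+V(\phi)$ sign convention of the action (\ref{eq:action}), the bare mass corresponds to $V(\phi) = -\frac{1}{2}m_{\phi}^{2}\phi^{2}$; setting $\dot\phi = \ddot\phi = 0$ in the mass-deformed version of (\ref{eq:ddphi}) and substituting into (\ref{eq:fried}) reproduces the quoted fixed point:
\begin{eqnarray}
0 = \lambda^{3} + m_{\phi}^{2}\phi &\;\Rightarrow\;& \phi = -{\lambda^{3} \over m_{\phi}^{2}} \, , \nonumber \\
3 M_{\rm pl}^{2} H^{2} \;=\; \Lambda + {1 \over 2}m_{\phi}^{2}\phi^{2} + \lambda^{3}\phi &=& \Lambda + {\lambda^{6} \over 2m_{\phi}^{2}} - {\lambda^{6} \over m_{\phi}^{2}} \;=\; \Lambda - {\lambda^{6} \over 2m_{\phi}^{2}} \, . \nonumber
\end{eqnarray}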
\section{Degenerate Field Equations}
\label{sec:3}
In a series of papers \cite{Appleby:2018yci, Emond:2018fvv, Appleby:2020njl, Appleby:2020dko, Linder:2020xey, Bernardo:2021izq, Bernardo:2021bsg, Linder:2022iqi}, a set of vacuum solutions to the system of equations was obtained by imposing exact de Sitter or Minkowski ans\"{a}tze and then searching for models for which the ansatz is a solution. Models with such vacuum solutions must solve a degeneracy condition, which relates the scalar field potentials in the action. These vacuum states are unusual, in the sense that the metric does not respond to the scalar field dynamics or any vacuum energy present, at least at the level of the background.
For example, in Ref. \cite{Appleby:2020dko} a Minkowski space solution was derived. Fixing $F(\phi) = V(\phi) = 0$ and $\rho = P = 0$, the condition for the dynamical system ($\ref{eq:fullfried}$), ($\ref{eq:fulldh}$), ($\ref{eq:fullddphi}$) to admit an exact Minkowski space solution is the degeneracy condition
\begin{equation} \label{eq:deg} G_{3X} = -{1 \over \lambda^{3}} K_{X} \left( K_{X} + 2X K_{XX} \right) ,
\end{equation}
\noindent with additional conditions $G_{3X} \neq 0$, $K_{X} \neq 0$, $K_{X} + 2XK_{XX} \neq 0$, $\lambda \neq 0$. The simplest example can be found by taking $K(X) = \epsilon X$ and $G_{3}(X) = -\epsilon^{2}X/\lambda^{3}$. The field equations become
\begin{eqnarray}
3\mpl H^2 &=&\Lambda + {\epsilon \over 2} \dot{\phi}^{2} - {3 \epsilon^{2} \over \lambda^{3}} H\dot\phi^{3} + \lambda^{3}\phi \label{eq:friedt}\\
-2\mpl \, \dot H &=&
{ \epsilon^{2} \over \lambda^{3}}\dot\phi^{2} \ddot\phi - {3 \epsilon^{2} \over \lambda^{3}} H\dot\phi^{3}
+2\epsilon X \label{eq:fulldht}\\
0&=&\ddot\phi\,\left[\epsilon - {6\epsilon^{2} \over \lambda^{3}} H\dot\phi\right]+3\epsilon H\dot\phi + \lambda^3 - {3\epsilon^{2} \over \lambda^{3}}\dot\phi^{2}(\dot H+3H^2) \, .
\label{eq:fullddphit}
\end{eqnarray}
\noindent For this system there exists an exact Minkowski space solution with $H = 0$ identically. Inserting $H = \dot{H} = 0$ into ($\ref{eq:fulldht}$) and ($\ref{eq:fullddphit}$), both equations reduce to
\begin{equation}\label{eq:ddotp} \ddot{\phi} = -\lambda^{3}/\epsilon . \end{equation}
\noindent This is the meaning of degeneracy -- the number of independent dynamical equations must reduce by one when we impose an ansatz that fixes one of the dynamical fields -- in this case the expansion rate $H$. By introducing dimensionless variables $\varphi = \phi/\lambda$ and $\tau = \lambda t$, the field evolves as
\begin{equation} \label{eq:phi_wtvac} \varphi = -{\tau^{2} \over 2\epsilon} + c_{1}\tau + c_{0} \end{equation}
\noindent where $c_{1}$, $c_{0}$ are dimensionless constants. The constants $c_{0}$, $c_{1}$ -- which are integration constants of the dynamics of $\phi$ -- cancel the vacuum energy in the Friedmann equation ($\ref{eq:friedt}$). The dynamical nature of the field $\varphi$ is due to the presence of the tadpole; equation ($\ref{eq:ddotp}$) is sourced by $\lambda$. We expect both the scalar field and metric to remain dynamical in this model, but the metric admits a static vacuum state due to the imposition of the degeneracy condition -- we have demanded that such a solution exists by imposing ($\ref{eq:deg}$).
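The cancellation can be seen explicitly: inserting (\ref{eq:phi_wtvac}) into the $h=0$ Friedmann equation, written in the dimensionless variables with $\tilde{\Lambda} = \Lambda/\lambda^{4}$, all $\tau$-dependence drops out,
\begin{equation}
0 = \tilde{\Lambda} + {\epsilon \over 2}(\varphi')^{2} + \varphi = \tilde{\Lambda} + {\epsilon \over 2}\left(c_{1} - {\tau \over \epsilon}\right)^{2} - {\tau^{2} \over 2\epsilon} + c_{1}\tau + c_{0} = \tilde{\Lambda} + {\epsilon c_{1}^{2} \over 2} + c_{0} \, ,
\end{equation}
\noindent so the integration constants need only satisfy $c_{0} = -\tilde{\Lambda} - \epsilon c_{1}^{2}/2$: an arbitrarily large vacuum energy is absorbed by a shift of the field.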
It is natural to ask what happens if we relax the degeneracy condition. Based on our understanding of the tadpole, we expect that in the absence of a mechanism enforcing a static metric solution, both the metric and scalar field can be dynamical. For the purposes of cosmology, we are not interested in the existence of Minkowski space solutions, or even exact vacuum solutions. We expect that the standard general relativity algebraic relation $3M_{\rm pl}^{2} H^{2} = \Lambda$ is not necessarily a solution to the dynamical system when the tadpole is present, and we also know that Minkowski space is a dynamical attractor when the degeneracy equation is exactly imposed. One might then hope that violating the degeneracy condition yields a dynamical solution for the metric as well as the scalar field, in such a way that $H$ evolves towards the low energy state $H(t) \to 0$ rather than towards the general relativity $\Lambda$-driven de Sitter vacuum solution, without an exact degeneracy condition having to be imposed.
\begin{table*}[h!]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
Model & Section & Asymptotic State & Dynamical $\cancel{\Lambda}$ & Ghost Free \\
\hline
Unequal Mass Terms $\kappa \neq \lambda$ & \ref{sec:unequal_mass} & Minkowski & \checkmark & \checkmark \\
Linear Coupling $\phi R$ & \ref{sec:coupling} & Minkowski & \checkmark & \checkmark \\
Nonlinear Coupling $\phi^{n}R$ & \ref{sec:coupling} & Minkowski & \checkmark & \checkmark \\
Including Matter $\rho \neq 0$ & \ref{sec:matter} & Minkowski & \checkmark & \checkmark \\
Scalar Field Mass $m_{\phi}^{2}\phi^{2}$ & \ref{sec:5} & de Sitter/Unstable & \checkmark & \checkmark \\
\hline
\end{tabular} \\
\caption{Summary of all models considered in this work, in which the degeneracy condition ($\ref{eq:deg}$) is broken by the addition of different terms to the scalar-tensor action. The asymptotic late time states of the dynamical system are given, as well as the ability of the scalar field to dynamically cancel the effect of $\Lambda$ and the perturbative (no-ghost) stability condition.
}
\label{tab:models}
\end{table*}
\section{Breaking the Degeneracy Condition}
\label{sec:4}
In this section we consider multiple ways in which the degeneracy relation might be broken. We take the exact Minkowski solution and model of the previous section as a fiducial case, then depart from it in various ways. We initially fix the matter contribution to be zero but introduce pressureless dust in section \ref{sec:matter}.
We provide a summary of the models considered in this work, and their important properties, in Table \ref{tab:models}.
\subsection{Unequal Mass Scales}
\label{sec:unequal_mass}
First we allow the mass scales associated with $G_{3}(X)$ and the tadpole to differ, and fix $V(\phi) = F(\phi) = 0$. Hence the first system that we consider has the following action
\begin{equation}\label{eq:ac1} S = \int \sqrt{-g} d^{4}x \left[ {M_{\rm pl}^{2} R \over 2} + \epsilon X + {\epsilon^{2} \over \kappa^{3}} X \Box \phi - \lambda^{3}\phi - \Lambda \right] \, . \end{equation}
\noindent To admit an exact Minkowski vacuum solution we require $\lambda = \kappa$, but we do not impose that relation here. We introduce dimensionless quantities $\alpha = \lambda/\kappa$ and $\Delta = M_{\rm pl}/\lambda$, $h = H/\lambda = d\log{a}/d\tau$, $\tilde{\Lambda} = \Lambda/\lambda^{4}$ and the resulting field equations are
\begin{eqnarray}
3\Delta^{2} h^2 &=& \tilde{\Lambda} + {\epsilon \over 2} (\varphi')^{2} - 3 \epsilon^{2}\alpha^{3} h(\varphi')^{3} + \varphi \label{eq:fr0}\\
-2\Delta^{2} \, h' &=&
\epsilon^{2}\alpha^{3} (\varphi')^{2} \varphi'' - 3 \epsilon^{2} \alpha^{3} h (\varphi')^{3}
+ \epsilon (\varphi')^{2} \label{eq:dh0}\\
0&=& \varphi''\,\left[\epsilon - 6\epsilon^{2}\alpha^{3} h\varphi'\right]+3\epsilon h\varphi' + 1 - 3\epsilon^{2}\alpha^{3} (\varphi')^{2} (h'+3h^2)
\, . \label{eq:sf0}
\end{eqnarray}
The ansatz $h = h' =0$ is no longer a solution to this system because ($\ref{eq:dh0}$), ($\ref{eq:sf0}$) are not equivalent upon insertion of this ansatz. There are no regular fixed points to this system -- $\varphi'' = \varphi' = 0$ is not a solution to ($\ref{eq:sf0}$). There is also no $h' = 0$, $\varphi'' = 0$, $\varphi' = {\rm constant}$ solution, since the Friedmann equation is not consistent with this ansatz -- $\varphi$ would be the only time-dependent term in the equation.
However, there exists an asymptotic solution to this system of equations such that for $\tau \gg 1$
\begin{eqnarray} \varphi &=& \tau^{2} \sum_{n=0}^{\infty} \varphi_{n}\tau^{-n} \label{eq:phi_series} \\
h &=& \tau^{-1} \sum_{n=0}^{\infty} h_{n}\tau^{-n} \, . \label{eq:h_series}
\end{eqnarray}
\noindent Then, order by order in a $\tau \gg 1$ expansion we have
\begin{eqnarray} \label{eq:O0frI} {\cal O}(h_{0}, \varphi_{0}) \qquad : \qquad & & 0 = \varphi_{0} + 2 \epsilon \varphi_{0}^{2} - 24 \epsilon^{2}\alpha^{3} h_{0}\varphi_{0}^{3} \\
\label{eq:O0dHI} & & 0 = \epsilon \varphi_{0}^{2} + 2 \epsilon^{2}\alpha^{3}\varphi_{0}^{3} - 6 \epsilon^{2} \alpha^{3} h_{0} \varphi_{0}^{3} \\
\label{eq:O2sfeI} & & 0 = 1 + 2 \epsilon\varphi_{0} + 6\epsilon h_{0}\varphi_{0} - 12 \epsilon^{2}\alpha^{3}h_{0}\varphi_{0}^{2} - 36 \epsilon^{2}\alpha^{3}h_{0}^{2}\varphi_{0}^{2} \\
\nonumber & & \\
\label{eq:O1frI} {\cal O}(h_{1}, \varphi_{1}) \qquad : \qquad & & 0 = \varphi_{1} - 24\epsilon^{2}\alpha^{3}h_{1} \varphi_{0}^{3} + 2 \epsilon\varphi_{0}\varphi_{1} - 36 \epsilon^{2}\alpha^{3} h_{0} \varphi_{0}^{2}\varphi_{1} \\
\label{eq:O1dHI} & & 0 = -24\epsilon^{2}\alpha^{3}h_{1}\varphi_{0}^{3} + 4 \epsilon \varphi_{0}\varphi_{1} + 8 \epsilon^{2}\alpha^{3}\varphi_{0}^{2}\varphi_{1} - 36 \epsilon^{2}\alpha^{3} h_{0} \varphi_{0}^{2}\varphi_{1} \\
\label{eq:O3sfeI} & & 0 = 6\epsilon h_{1} \varphi_{0} - 72\epsilon^{2}\alpha^{3}h_{0}h_{1}\varphi_{0}^{2} + 3 \epsilon h_{0}\varphi_{1} - 36 \epsilon^{2}\alpha^{3}h_{0}^{2}\varphi_{0}\varphi_{1} \, .
\end{eqnarray}
\noindent The equations can be solved in triplets -- at the lowest order ($\ref{eq:O0frI}$), ($\ref{eq:O0dHI}$) and ($\ref{eq:O2sfeI}$) provide a system of dependent equations that can be solved for $\varphi_{0}$, $h_{0}$ -- these are the field equations to lowest order in the field expansion. Then, ($\ref{eq:O1frI}$), ($\ref{eq:O1dHI}$) and ($\ref{eq:O3sfeI}$) correspond to the field equations at next-to-leading order and can be solved for $h_{1}$ and $\varphi_{1}$ etc. At the lowest order we have
\begin{eqnarray} & & \varphi_{0} = {-1 \pm \sqrt{1 + 8\alpha^{3}} \over 8\alpha^{3} \epsilon} \label{eq:phi0_db1} \\
& & h_{0} = \dfrac{1 + 2 \alpha^3 \pm \sqrt{1 + 8 \alpha^3}}{6\alpha^3} \, . \label{eq:h0_db1}
\end{eqnarray}
\noindent For $\alpha = 1$, $h_{0} =0$ and $\varphi_{0} = -1/(2\epsilon)$ as expected. For $\alpha > 1$, $h_{0}$ is positive for both branches and $h \to 0^{+}$, but if $0 < \alpha < 1$ then $h_{0}$ is negative on one branch and $h \to 0^{-}$. Following the $\varphi_{0} < 0$ branch for which $h_0 \rightarrow 0^{+}$ as $\alpha \rightarrow 1^{+}$, at subsequent orders we have $\varphi_1 = h_1 = h_2 = \varphi_3 = h_3 = 0$, $\varphi_2 = -\tilde{\Lambda}$, $\varphi_4 \neq 0$, and $h_4 \neq 0$.
In this example, both $h$ and $\varphi$ now possess non-trivial time dependence, but in such a way that $h \varphi' \simeq {\rm constant}$. The Minkowski vacuum state is asymptotically preserved in the sense that $h \to 0$ as $\tau \to \infty$, specifically we have $h = h_{0}\tau^{-1} + {\cal O}(\tau^{-2})$. The behaviour of this solution is similar to the exact, degenerate solution obtained in Ref. \cite{Appleby:2020dko} in the sense that asymptotically $\varphi'' \simeq {\rm constant}$, but there is no exact Minkowski space solution. Still, we have a spacetime that evolves to a low energy state regardless of the presence and magnitude of $\Lambda$. The expansion rate $h(\tau)$ will be sensitive to $\Lambda$, but the Cosmological Constant will only determine how fast the metric evolves to $h \to 0$. Furthermore, $\Lambda$ only enters at lower order in the $\tau$ expansion.
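These leading-order roots can be verified mechanically. The sketch below substitutes (\ref{eq:phi0_db1}) and (\ref{eq:h0_db1}) into the triplet (\ref{eq:O0frI})--(\ref{eq:O2sfeI}) and confirms that the residuals vanish to machine precision for both branches, illustrating that the three equations are mutually dependent.

```python
from math import sqrt

def residuals(alpha, eps, sign):
    """Substitute the closed-form (phi0, h0) for the chosen branch
    (sign = +1 or -1) into the three leading-order equations; all
    three residuals should vanish if the roots are consistent."""
    a = alpha ** 3
    s = sqrt(1.0 + 8.0 * a)
    phi0 = (-1.0 + sign * s) / (8.0 * a * eps)
    h0 = (1.0 + 2.0 * a + sign * s) / (6.0 * a)
    r1 = phi0 + 2*eps*phi0**2 - 24*eps**2*a*h0*phi0**3
    r2 = eps*phi0**2 + 2*eps**2*a*phi0**3 - 6*eps**2*a*h0*phi0**3
    r3 = (1 + 2*eps*phi0 + 6*eps*h0*phi0
          - 12*eps**2*a*h0*phi0**2 - 36*eps**2*a*h0**2*phi0**2)
    return r1, r2, r3

for alpha in (0.5, 1.0, 2.0):
    print(alpha, max(abs(r) for r in residuals(alpha, 1.0, -1.0)))
```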
\subsection{Coupling to Ricci Scalar}
\label{sec:coupling}
Next we consider a non-minimal coupling to the Ricci scalar which breaks the degeneracy condition. The action and dimensionless field equations read
\begin{equation} S = \int \sqrt{-g} d^{4}x \left[ {\left( \mpl + M \phi \right) R \over 2} + \epsilon X + {\epsilon^{2} \over \lambda^{3}} X \Box \phi - \lambda^{3}\phi - \Lambda \right] \end{equation}
\noindent and
\begin{eqnarray}
3 h^2 \left(\Delta ^2+\varphi \mathcal{M}\right) &=& \tilde{\Lambda }-3 h \varphi ' \left(\mathcal{M}+\epsilon ^2 \left(\varphi '\right)^2\right)+\varphi +\frac{1}{2} \epsilon \left(\varphi '\right)^2 \label{eq:frIII}\\
-2 h' \left(\Delta ^2+\varphi \mathcal{M}\right) &=&-h \mathcal{M} \varphi '-3 h \epsilon ^2 \left(\varphi '\right)^3+\mathcal{M} \varphi ''+\epsilon ^2 \left(\varphi '\right)^2 \varphi ''+\epsilon \left(\varphi '\right)^2 \label{eq:dhIII}\\
0&=& \varphi '' \left(\epsilon -6 h \epsilon ^2 \varphi '\right) - h^2 \left(6 \mathcal{M}+9 \epsilon ^2 \left(\varphi '\right)^2\right) \nonumber \\
& & \phantom{ggggggggggggg} -3 h' \left(\mathcal{M}+\epsilon ^2 \left(\varphi '\right)^2\right)+3 h \epsilon \varphi '+1 \label{eq:sfIII}
\end{eqnarray}
where $M = \mathcal{M} \lambda$.
The system admits a power law expansion of the form of ($\ref{eq:phi_series}$) and ($\ref{eq:h_series}$), with the dominant, leading order terms given by $\varphi_0 = -1/2\epsilon$ and $h_0 = 0$. There is a second solution, with $\varphi_{0} =1/4\epsilon$ and $h_{0} = 1$, but $\varphi$ is generically negative when $\tilde{\Lambda} > 0$ so we do not pursue this branch. Following the branch for which $h_0 = 0$ we have at leading order, $\varphi \sim \tau^2$ and $h \sim \tau^{-3}$. The subdominant terms can be evaluated at progressive orders in $\tau$; the next few are given by $\varphi_1 = h_1 = \varphi_3 = h_3 = 0$, $\varphi_2 = - \tilde{\Lambda} - \mathcal{M}/\epsilon$, $h_2 = \mathcal{M}/3$, $\varphi_4 = - \mathcal{M}^2/9\epsilon$, and $h_4 = -2 \mathcal{M}^2/9$. This supports the existence of an asymptotic Minkowski state despite the departure from degeneracy through the presence of an explicit Ricci coupling. The time dependence of $h$ is sourced by $\mathcal{M}$ at each order in the expansion.
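The quoted coefficients can be cross-checked by substituting the truncated series into the field equations and confirming that the expansion cancels order by order. A minimal symbolic sketch (assuming the series forms $\varphi = \sum_k \varphi_k \tau^{2-k}$ and $h = \sum_k h_k \tau^{-1-k}$, and checking the scalar field and Friedmann equations only; the remaining equation follows similarly):

```python
import sympy as sp

tau, x = sp.symbols('tau x', positive=True)          # x stands in for 1/tau
eps, M, Lam, Delta = sp.symbols('epsilon M Lam Delta', positive=True)

# truncated series with the coefficients quoted in the text:
# phi_0 = -1/(2 eps), phi_2 = -Lam - M/eps, phi_4 = -M^2/(9 eps),
# h_2 = M/3, h_4 = -2 M^2/9 (odd coefficients vanish)
phi = -tau**2/(2*eps) - (Lam + M/eps) - M**2/(9*eps*tau**2)
h = M/(3*tau**3) - 2*M**2/(9*tau**5)

dphi, ddphi, dh = sp.diff(phi, tau), sp.diff(phi, tau, 2), sp.diff(h, tau)

# residuals of the scalar field equation (sfIII) and Friedmann equation (frIII)
sf = (ddphi*(eps - 6*h*eps**2*dphi) - h**2*(6*M + 9*eps**2*dphi**2)
      - 3*dh*(M + eps**2*dphi**2) + 3*h*eps*dphi + 1)
fr = (3*h**2*(Delta**2 + M*phi) - Lam + 3*h*dphi*(M + eps**2*dphi**2)
      - phi - eps*dphi**2/2)

# expand in powers of 1/tau; orders down to tau^(-4) (sf) and tau^(-2) (fr) cancel
sf_x = sp.expand(sf.subs(tau, 1/x))
fr_x = sp.expand(x**2 * fr.subs(tau, 1/x))
assert all(sp.expand(sf_x.coeff(x, k)) == 0 for k in range(5))
assert all(sp.expand(fr_x.coeff(x, k)) == 0 for k in range(5))
```

Higher orders do not vanish, as expected for a truncated series.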
The result of this section -- the existence of an asymptotic vacuum state -- can be generalised to a coupling of the form $\beta^{2-n} \phi^n R$ for some mass scale $\beta$ and constant $n$. In this case, the dimensionless Einstein and scalar field equations read
\begin{eqnarray}
3 h^2 \left(\Delta ^2 b^n+b^2 \varphi ^n\right) &=& \frac{1}{2} b^n \left(2 \tilde{\Lambda }-6 h \epsilon ^2 \left(\varphi '\right)^3+\epsilon \left(\varphi '\right)^2\right)+\varphi b^n-3 b^2 h n \varphi ^{n-1} \varphi ' \label{eq:frV}\\
-2 h' \left(\Delta ^2 b^n+b^2 \varphi ^n\right) &=& \epsilon b^n \left(\varphi '\right)^2 \left(\epsilon \varphi ''+1\right)+b^2 n \varphi ^{n-1} \varphi '' \nonumber \\
& & \phantom{g} +b^2 (n-1) n \varphi ^{n-2} \left(\varphi '\right)^2 -h \varphi ' \left(3 \epsilon ^2 b^n \left(\varphi '\right)^2+b^2 n \varphi ^{n - 1}\right) \label{eq:dhV}\\
0&=& \varphi '' \left(\epsilon -6 h \epsilon ^2 \varphi '\right) - 3 b^{2-n} n \left(2 h^2+h'\right) \varphi ^{n-1} \nonumber \\
& & \phantom{ggggggggggggg} -\left(9 h^2 \epsilon ^2 \left(\varphi '\right)^2+3 \epsilon ^2 h' \left(\varphi '\right)^2-3 h \epsilon \varphi '-1\right)
\label{eq:sfV}
\end{eqnarray}
where $\beta = b \lambda$. It can be checked that ($\ref{eq:frV}$), ($\ref{eq:dhV}$), and ($\ref{eq:sfV}$) reduce to ($\ref{eq:frIII}$), ($\ref{eq:dhIII}$), and ($\ref{eq:sfIII}$) in the special case $\beta = M$ and $n = 1$. For $n = 2$, we find that the asymptotic series ansatz $\varphi \sim \tau^2$ and $h \sim \tau^{-1}$ (equations ($\ref{eq:phi_series}$) and ($\ref{eq:h_series}$)) solves the system with non-zero coefficients $\varphi_0 \neq -1/2\epsilon$ and $h_0 \neq 0$. This implies the existence of an asymptotic Minkowski state; however, in contrast with the previous case $n = 1$, the asymptotic state is not the well tempered vacuum for which the scalar field evolves according to equation ($\ref{eq:phi_wtvac}$). For $n = 3$, we have confirmed that an asymptotic series ansatz $\varphi \sim \tau$ and $h \sim \tau^{-1}$ solves the system of equations, indicating that an asymptotic Minkowski state exists, although now with different time dependence in the scalar field compared to the well-tempered solution. For general $n$, we conjecture an asymptotic series solution $\varphi \sim \tau^{2/(n - 1)}$ and $h \sim \tau^{-1}$ and note that $n = 3$ is the last case where $\varphi$'s dominant term in the expansion is of integer order. We return to further implications of $n \neq 1$ in the Discussion.
\subsection{Including Matter}
\label{sec:matter}
A most natural way to break the degeneracy is with the inclusion of matter fields. In reality, the exact degeneracy equations are never truly satisfied and the dynamics is always `off-shell'\footnote{Following \cite{Charmousis:2011bf}, `on-shell' indicates that the metric is evaluated exactly at the vacuum state and `off-shell' away from it.} due to the presence of dark matter, baryons and radiation. We consider the field equations of the model ($\ref{eq:ac1}$) in the presence of matter:
\begin{eqnarray}
3 \Delta ^2 h^2&=&\varrho + \tilde{\Lambda }-3 h \alpha^3 \epsilon ^2 \left(\varphi '\right)^3+\varphi +\frac{1}{2} \epsilon \left(\varphi '\right)^2 \label{eq:frIV}\\
-2 \Delta ^2 h'&=& \varrho + p -3 h \alpha^3 \epsilon ^2 \left(\varphi '\right)^3+ \alpha^3 \epsilon ^2 \left(\varphi '\right)^2 \varphi ''+\epsilon \left(\varphi '\right)^2 \label{eq:dhIV}\\
0&=& \varphi '' \left(\epsilon -6 h \alpha^3 \epsilon ^2 \varphi '\right) -9 h^2 \alpha^3 \epsilon ^2 \left(\varphi '\right)^2- 3 \alpha^3 \epsilon ^2 h' \left(\varphi '\right)^2+3 h \epsilon \varphi '+1 \label{eq:sfIV} \\
0 &=& \varrho ' + 3 h (p+\varrho ) \label{eq:meqIV}
\end{eqnarray}
where $\rho = \lambda^4 \varrho$ and $P = \lambda^4 p$. Note that equations ($\ref{eq:dhIV}$) and ($\ref{eq:sfIV}$) cannot be equivalent when $\varrho + p \neq 0$, so matter generically breaks the degeneracy of the field equations. We let $\varrho$ represent pressureless dust and fix $p=0$, ignoring any radiation component although its presence would not alter our conclusions. We solve this full system $\left(h, \varphi, \varrho \right)$ beginning from a matter era, i.e., $3 H^2 = \rho$, and show that the state falls to a Minkowski vacuum. This supports the existence of an asymptotic Minkowski state when degeneracy is broken due to the presence of matter. In our simulations, we adopt mass scales $\tilde{\Lambda} \ll \Delta^2$ and take the matter era initial conditions $h \sim 2/(3\tau)$, $\varphi \sim - \tilde{\Lambda}$, and $3\Delta^2 h^2 \sim \varrho$.
Figure \ref{fig:hphi_db4} shows numerical solutions to the field equations for various choices of $\tilde{\Lambda}$. Other parameter choices lead to qualitatively similar profiles.
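Since equations ($\ref{eq:dhIV}$) and ($\ref{eq:sfIV}$) are linear in $(h', \varphi'')$, the system can be integrated by solving a $2\times 2$ linear system at each step. A minimal numerical sketch (parameter values and matter-era initial data as quoted above; the solver, integration window and tolerances are our choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# parameters as in the text: Delta = 10^2, Lambda~ = 1, alpha = 11/10, eps = 1
Delta, Lam, alpha, eps = 1e2, 1.0, 1.1, 1.0

def rhs(tau, y):
    """State y = (h, phi, psi = phi', rho); eqs. (dhIV) and (sfIV) are linear
    in (h', phi''), so solve the 2x2 system at each step; (meqIV) gives rho'."""
    h, phi, psi, rho = y
    a3 = alpha**3
    A = np.array([[-2*Delta**2,        -a3*eps**2*psi**2],
                  [3*a3*eps**2*psi**2, -(eps - 6*h*a3*eps**2*psi)]])
    b = np.array([rho - 3*h*a3*eps**2*psi**3 + eps*psi**2,
                  -9*h**2*a3*eps**2*psi**2 + 3*h*eps*psi + 1])
    dh, ddphi = np.linalg.solve(A, b)
    return [dh, psi, ddphi, -3*h*rho]        # pressureless dust, p = 0

# matter-era initial data: h ~ 2/(3 tau), phi ~ -Lambda~, 3 Delta^2 h^2 = rho
tau0, h0 = 1.0, 2.0/3.0
y0 = [h0, -Lam, 0.0, 3*Delta**2*h0**2]
sol = solve_ivp(rhs, (tau0, 50.0), y0, method='LSODA', rtol=1e-8, atol=1e-10)
h_end, rho_end = sol.y[0, -1], sol.y[3, -1]
```

Note that these initial data satisfy the Friedmann constraint ($\ref{eq:frIV}$) exactly, since the choice $\varphi_i = -\tilde{\Lambda}$ cancels the vacuum energy term when $\varphi'_i = 0$.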
\begin{figure}[h!]
\center
\subfigure[ ]{
\includegraphics[width = 0.475 \textwidth]{figs/h_v_tau_w_mass.pdf}
}
\subfigure[ ]{
\includegraphics[width = 0.475 \textwidth]{figs/phi_v_tau_w_mass.pdf}
}
\caption{Solutions to the coupled gravity-matter equations ($\ref{eq:frIV}$), ($\ref{eq:dhIV}$), ($\ref{eq:sfIV}$), and ($\ref{eq:meqIV}$) with $\tilde{\Lambda} = 10^{0}$ (solid-blue), $\tilde{\Lambda} = 10^{1}$ (medium-dashed-red), $\tilde{\Lambda} = 10^{-1}$ (short-dashed-black), with fixed $\Delta = 10^2$ and $\alpha = 11/10$.}
\label{fig:hphi_db4}
\end{figure}
This confirms our assertion of the presence of an asymptotic Minkowski state, which was always reached by the solutions despite beginning from a matter universe. Peculiarly, for solutions with $\alpha = 1$ the expansion rate $h$ seems to first turn negative before transitioning to the degenerate vacuum where the scalar field decelerates, $\ddot{\phi} \sim -\lambda^3/\epsilon$. Admitting a heavier tadpole compared to the braiding, i.e. $\alpha > 1$, resolves this and ensures that $h$ remains positive (cf. Figure \ref{fig:hphi_db4}, left panel). The inclusion of matter preserves the existence of a Minkowski attractor that is approached asymptotically.
It is useful to look at how the different densities change during the transition from a matter universe. This is shown in Figure \ref{fig:rho_db4} for $\tilde{\Lambda} / \Delta^2 = 10^{-4}$ and $\alpha = 11/10$. Varying the parameters does not change the overall profile that is presented here. At early times, the Hubble expansion scales with matter as is demanded by the initial conditions. A matter era then persists for a period of time followed by a sharp drop in the expansion rate as the scalar field decelerates and approaches the asymptotic behaviour $\varphi \sim \tau^{2}$, $h \sim \tau^{-1}$.
\begin{figure}[h!]
\center
\includegraphics[width = 0.475 \textwidth]{figs/hrho_v_tau_w_mass.pdf}
\caption{Solution to the coupled gravity-matter equations ($\ref{eq:frIV}$), ($\ref{eq:dhIV}$), ($\ref{eq:sfIV}$), and ($\ref{eq:meqIV}$) with $\tilde{\Lambda} = 10^{0}$, $\Delta = 10^2$, and $\alpha = 11/10$.}
\label{fig:rho_db4}
\end{figure}
At no point have we introduced any unusual fine tuning of mass parameters. The matter contribution is nonzero throughout in Figure \ref{fig:rho_db4}. The Minkowski state is approached regardless of any degeneracy condition being satisfied, despite the presence of an arbitrary vacuum energy and the presence of matter. However, since the model approaches a Minkowski space vacuum state asymptotically, we do not expect it to provide a viable cosmic history. One might hope that alternative methods of breaking the degeneracy condition impact the expansion rate $h$ such that an approximate pseudo-de Sitter state is asymptotically realised. We provide one such example in the following section.
\subsection{Introducing Scalar Field Mass}
\label{sec:5}
If we introduce a mass term for the Galileon, the action and cosmological field equations (in dimensionless units) are
\begin{equation} S = \int \sqrt{-g} d^{4}x \left[ {M_{\rm pl}^{2} R \over 2} + \epsilon X + {\epsilon^{2} \over \lambda^{3}} X \Box \phi - \lambda^{3}\phi - {m_{\phi}^{2} \over 2}\phi^{2} - \Lambda \right] \end{equation}
\noindent and
\begin{eqnarray}
3\Delta^{2} h^2 &=& \tilde{\Lambda} + {\epsilon \over 2} (\varphi')^{2} - 3 \epsilon^{2} h(\varphi')^{3} + \varphi + {\tilde{m}_{\phi}^{2} \over 2} \varphi^{2} \label{eq:frII}\\
-2\Delta^{2} \, h' &=&
\epsilon^{2} (\varphi')^{2} \varphi'' - 3 \epsilon^{2} h (\varphi')^{3}
+ \epsilon (\varphi')^{2} \label{eq:dhII}\\
0&=& \varphi''\,\left[\epsilon - 6\epsilon^{2} h\varphi'\right]+3\epsilon h\varphi' + 1 - 3\epsilon^{2} (\varphi')^{2} (h'+3h^2) + \tilde{m}_{\phi}^{2}\varphi \label{eq:sfII}
\end{eqnarray}
\noindent where $\tilde{m}_{\phi} = m_{\phi}/\lambda$. We have fixed $\alpha = 1$, so the Galileon has an exact $h=0$ solution when $\tilde{m}_{\phi} = 0$.
This system of equations is non-linear and has multiple scales, so it is very difficult to make any general statements. We take the following mass scales $\tilde{\Lambda} \sim {\cal O}(1)$, $\tilde{m}_{\phi} \sim {\cal O}(1)$ and $\Delta \gg 1$. With this choice, the two mass scales associated with the field $\phi$ -- $\lambda$ and $m_{\phi}$ -- are of the same magnitude, which in turn are of the same order as the Cosmological Constant (which is arbitrary, so far). All mass scales are smaller than the Planck mass, which dictates the size of $\Delta$. At least, with this choice we are not fine tuning any mass scales in the action and are working in a sub-Planckian regime. Note that the degenerate vacuum solution obtained in Ref. \cite{Appleby:2020dko} does not rely on $\lambda$ being of the same order of magnitude as $\Lambda$, so we could introduce a hierarchy such that $\Delta \gg \tilde{\Lambda} \gg 1$; however, in the absence of any reason to do so, we do not impose one between the mass scale associated with the field $\phi$ and $\Lambda$.
We fix the initial conditions $\varphi_{i} = \varphi'_{i} = 0$ and, from the Friedmann equation,
\begin{equation}\label{eq:hi} h_{i}^{2} = {\tilde{\Lambda} \over 3 \Delta^{2}} \ll 1 \end{equation}
\noindent which is the standard general relativity vacuum state. This would be an exact solution if $\lambda = 0$. Under these conditions we can obtain an approximate solution to these equations using the fact that $\Delta \gg 1$ and $h \ll 1$. Anticipating that in this regime $h$ is slowly rolling, the scalar field equation can be approximated as
\begin{equation} \epsilon \varphi'' + 3 \epsilon h_{i} \varphi' + \tilde{m}_{\phi}^{2}\varphi \simeq -1 \, . \end{equation}
\noindent Using the initial conditions $\varphi_{i} = \varphi'_{i} = 0$, we have as a leading order approximation
\begin{equation} \varphi \simeq {1 \over \tilde{m}_{\phi}^{2}} \left[ e^{-3 h_{i}\tau/2} \cos \left({\tilde{m}_{\phi} \tau \over \sqrt{\epsilon}}\right) - 1 \right] \, . \end{equation}
\noindent In turn, the expansion rate $h$ has an oscillating component, sourced by $\varphi$. Note that in this model, the scalar field exhibits oscillatory behaviour and hence is potentially bounded. This behaviour is supported by our numerical solutions, provided the mass scales satisfy $\tilde{\Lambda} \sim {\cal O}(\tilde{m}_{\phi})$ and $\Delta \gg 1$. Some examples are shown in Figure \ref{fig:ds_db2}, where we fix $\Delta = 10^{2}$, $\tilde{\Lambda} = 1$, $\epsilon = 1$ and $\tilde{m}_{\phi} = 0.8, 0.7, 0.5$ (cf. red-dashed, green-solid and gold-dotted lines, respectively) then proceed to numerically evolve the dynamical system with initial conditions $\varphi_{i} = \varphi'_{i} = 0$ and $h_{i}^{2} = \tilde{\Lambda}/(3\Delta^{2})$. For suitable initial conditions and mass scales, the scalar field undergoes damped oscillations, and the expansion rate $h$ settles into an asymptotically frozen, mildly oscillating phase (cf. red-dashed lines).
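The closed-form expression above can be sanity-checked against the linearised scalar field equation it approximates (not the full nonlinear system). A short sketch with $\epsilon = 1$, $\tilde{m}_{\phi} = 0.8$ and $h_{i} = \sqrt{\tilde{\Lambda}/(3\Delta^{2})}$ as in the text; the integration window and tolerances are our choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, m, Delta, Lam = 1.0, 0.8, 1e2, 1.0
h_i = np.sqrt(Lam/(3*Delta**2))

# linearised scalar field equation: eps*phi'' + 3*eps*h_i*phi' + m^2*phi = -1
def rhs(tau, y):
    phi, psi = y
    return [psi, -(1 + m**2*phi)/eps - 3*h_i*psi]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
tau = np.linspace(0.0, 100.0, 1000)
phi_num = sol.sol(tau)[0]

# closed-form damped-oscillator approximation quoted in the text (eps = 1)
phi_approx = (np.exp(-1.5*h_i*tau)*np.cos(m*tau/np.sqrt(eps)) - 1.0)/m**2
err = np.max(np.abs(phi_num - phi_approx))   # dominated by the O(h_i/m) sine term
```

The residual deviation is of order $h_{i}/\tilde{m}_{\phi}$, coming from the sine component and the small frequency shift neglected in the closed form.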
\begin{figure}[h!]
\center
\subfigure[ ]{
\includegraphics[width = 0.475 \textwidth]{figs/h_vs_tau_mphi.pdf}
}
\subfigure[ ]{
\includegraphics[width = 0.475 \textwidth]{figs/varphi_vs_tau_mphi.pdf}
}
\caption{Numerical solutions to equations ($\ref{eq:frII}$), ($\ref{eq:dhII}$), and ($\ref{eq:sfII}$) with $\tilde{m}_{\phi} = 0.5$ (dotted gold), $\tilde{m}_{\phi} = 0.7$ (solid green), $\tilde{m}_{\phi} = 0.8$ (dashed red), fixing $\tilde{\Lambda} = 1$ and $\Delta = 10^2$.}
\label{fig:ds_db2}
\end{figure}
However, when $\tilde{m}_{\phi}$ is sufficiently low compared to the vacuum energy, i.e., $\tilde{\Lambda} > \tilde{m}_\phi$, the solution becomes unstable, with $h(\tau)$ descending to negative values and the scalar field amplitude growing rapidly. This is shown in Figure \ref{fig:ds_db2} (cf. dotted gold lines, $\tilde{m}_{\phi} = 0.5$).
The dynamics of $h$ is tied to the envelope of $\varphi$. Both $h$ and the envelope of $\varphi$ are initially decreasing functions of $\tau$, and the relative rate at which they decrease is important. If $h$ approaches zero while the amplitude of $\varphi$ remains sufficiently large, then $h$ crosses zero and continues catastrophically towards arbitrarily negative values. It is at the $h=0$ crossing that the scalar field envelope starts to grow. However, if the amplitude of $\varphi$ decays sufficiently quickly, then $h$ becomes frozen to an approximately constant value. Neither $h$ nor $\varphi$ is actually constant; they continue to oscillate with decreasing amplitude. Still, $h$ can become frozen into an apparent de Sitter like state.
\begin{figure}[h!]
\center
\includegraphics[width = 0.475 \textwidth]{figs/h_varphi_mphi.pdf}
\caption{$h$ as a function of $\varphi$ for the system ($\ref{eq:frII}$), ($\ref{eq:dhII}$), and ($\ref{eq:sfII}$) with $\tilde{\Lambda} = 1$, $\Delta = 10^2$ and varying $\tilde{m}_{\phi}$. If the mass of the scalar field is sufficiently low, the expansion rate $h$ evolves beyond $h=0$ to arbitrarily negative values. Above a particular threshold value of $\tilde{m}_{\phi}$, the expansion rate freezes to an approximately constant state (cf. black-solid, red-dashed lines).}
\label{fig:h_varphi}
\end{figure}
In Figure \ref{fig:h_varphi} we present $h$ as a function of $\varphi$ for different values of $\tilde{m}_{\phi}$. For each dynamical track, we fix the same initial conditions $\varphi_{i} = \varphi'_{i} = 0$ and $h_{i}$ given by ($\ref{eq:hi}$), fix $\tilde{\Lambda} = 1$, $\Delta = 10^{2}$, $\epsilon = 1$ and allow $\tilde{m}_{\phi}$ to vary over the range $0.5 < \tilde{m}_{\phi} < 0.8$. If the scalar field mass is sufficiently low, then $h$ evolves beyond $h=0$ and to negative values, and the envelope of the $\varphi$ oscillations grows (cf. yellow-dashed, green-solid lines). If $\tilde{m}_{\phi}$ is large, then the envelope of $\varphi$ oscillations decays more rapidly and $h$ becomes frozen to an approximately constant value (cf. black-solid, red-dashed tracks).
Unfortunately, for this state to mimic the observed late-time accelerating epoch of the Universe, the parameters in the model must be fine tuned. In particular, the mass of the scalar field must take a precise value -- too large, and $h$ would freeze to an unacceptably large value; too small, and $h$ would slide catastrophically to arbitrarily negative values. Still, the dynamics of this particular case possess some welcome features -- there is a range of parameter values for which the scalar field and $h$ are bounded, and $h$ can freeze to a dynamical low energy state. There is no longer a stable $h=0$ Minkowski space solution, however. In contrast to the previous examples in this section, the presence of a mass term eliminates the possibility of a flat spacetime solution, even asymptotically.
We acknowledge that the question as to whether these models inherently require fine tuning remains open. In this section, we have provided a simple example for which the fields asymptotically freeze to `constant' values (although there is no exact, constant solution, and the fields continue to evolve indefinitely). The end point of the dynamics in this class of models depends on some combination of the $(R, \phi, X, \Box \phi)$-functional dependence of the Lagrangian, coupling constants and the initial conditions of the fields. In contrast, the end-point in the standard model is generically the algebraic relation $3H^{2} = 8\pi G \Lambda$, making fine tuning inevitable. Although the simple example provided in this section has its own stability issues, the dynamical nature of the solution opens up the possibility of ameliorating fine tuning, as the fields $h$, $\varphi$ can approximately freeze and their asymptotic values become a function of the initial conditions, age/dynamics of the Universe in addition to the coupling constants in the action.
Hence, this class of models rephrases the Cosmological Constant problem in a novel way. It remains an open question whether a `natural' end point to the dynamics of the expansion rate $h$ can be realised. The current work constitutes an interesting step forward in this regard, as we found that the strict enforcement of the degeneracy condition is not required. This means that we do not need to impose any exact conditions on the coupling constants of the theory, removing this particular element of fine tuning from the proposal. The idea that fields can `freeze' into approximately constant values which depend on the prior dynamics of the system is made possible with asymptotic degenerate states. These are not constant attractor solutions predictable solely from coupling constants of the theory.
To end, we comment that a Vainshtein mechanism sourced by the braiding $\left(\mathcal{L}\sim \left( \partial \phi \right)^2 \Box \phi\right)$ can be expected to settle in at small distances where nonlinearities become the dominant contributions to the equations of motion. These scales are irrelevant to the current discussion, but may nonetheless be considered in a different work which looks at spherically symmetric solutions in degenerate and non-degenerate models.
\section{Discussion}
\label{sec:6}
The existence of an exact Minkowski solution despite the presence of an arbitrary vacuum energy for the class of models studied in this work requires a degeneracy relation, which must be solved exactly \cite{Appleby:2020dko, Bernardo:2021bsg}. In this work we have shown that even when it is not, we do not expect the standard relation $3M_{\rm pl}^{2} H^{2} = \Lambda$, $\phi = {\rm constant}$ to necessarily be a solution to the dynamical system when the tadpole is present. Rather, when the degeneracy equation is broken the metric will also be time dependent in tandem with the scalar field. The Minkowski solution is a dynamical attractor when the degeneracy condition holds exactly, and we have found that it is also an attractor without imposing any exact relation between terms in the Lagrangian. We have considered some simple ways in which the degeneracy condition can be broken, and studied the dynamics that result when both the metric and scalar field are free to evolve.
We confirmed that departures from degeneracy force the metric into a dynamical state, but nonetheless for most cases an asymptotic Minkowski solution was retained. Unsurprisingly, this may not be precisely the well tempered vacuum, particularly in cases where the shift symmetry is broken in the Einstein equation. The functional time dependence of the solution is related to the presence or absence of shift symmetry in the degenerate model. The model in section \ref{sec:unequal_mass} and the linear coupling $\sim \phi R$ in section \ref{sec:coupling} preserve the symmetry $\phi \to \phi +c$ and hence exhibit $\varphi \sim \tau^{2}$ behaviour on approach to the Minkowski asymptote. Adding a non-linear coupling $\sim \phi^{n}R$ or mass term $m_{\phi}^{2}\phi^{2}$ changes mass dimension operators in the action and correspondingly the time dependence of the scalar field. Degeneracy breaking with a mass term can produce a low energy de Sitter phase which could potentially reconcile the background evolution with observations. Unfortunately, the solution faces something of a cliff edge towards an unstable $h < 0$ state. Admittedly, this may be considered as another kind of fine tuning, but it opens the possibility of having a consistent cosmology independent of a large Cosmological Constant. Overall, our results broaden the horizon for what could be classified as self-tuning models, and the tadpole plays a major role. Similar generic dynamical behaviour was exploited in Ref. \cite{Khan:2022bxs}, which found asymptotic de Sitter solutions without a degeneracy condition being imposed. It would be of interest to study the generality of Ref. \cite{Khan:2022bxs} and consider how common asymptotic de Sitter states are.
Throughout this work we have used `shift symmetry' as a loosely defined label. A conservative shift symmetric model is made up of only derivatives of the scalar field in the Lagrangian such that one can define a Noether current $J^\alpha \left[ \partial \phi\left(x\right) \right]$ satisfying a conservation law $ \nabla_\alpha J^\alpha = 0 $ which corresponds to the scalar field equation. In the presence of a tadpole, one may instead write down $\nabla_\alpha \left( J^\alpha - \lambda^3 x^\alpha \right) = 0$, identifying a conserved charge $Q = J^t - \lambda^3 t$ which is related to the Noether charge by a time reparametrization. This manifests at the level of the action, or the field equations, upon the application of a shift transformation which only artificially influences the dynamics through the initial conditions entering the Hamiltonian constraint at an arbitrary time. In this way, we regard the exactly degenerate model in section \ref{sec:3} to be shift symmetric `on-shell' and its vacuum state is insensitive to the value of the Cosmological Constant. In the nondegenerate models of section \ref{sec:4}, and similarly the model in section \ref{sec:3} `off-shell', the dynamics will depend subdominantly on the Cosmological Constant as the system asymptotes to the degenerate vacuum. The mass term and nonlinear conformal couplings are also considered as shift symmetry breaking in this regard, as they alter the dynamical behaviour on approach to the degenerate solution.
We emphasize that the models considered here must be further studied to ascertain whether they could indeed represent viable cosmologies. For one, viable models should not only have a consistent background, but also possess only healthy perturbations on top of it. The degenerate model (section \ref{sec:3}), when evaluated `on-shell', is ghost-free but suffers from a Laplace instability after a time $t \gtrsim \left( M_\text{pl}/\lambda^3 \right)^{1/2}$ \cite{Appleby:2020dko}. The models considered in this work cannot be evaluated on an exact static background, but asymptotically we can determine their stability. For example, for the model in which we relaxed the mass scales (section \ref{sec:unequal_mass}), once the fields $H$ and $\phi$ have relaxed to their $t \gg \lambda^{-1}$ asymptotic forms $\phi \sim \lambda^{3} t^{2}$ and $H \sim t^{-1}$, we can deduce that after a time $t \gtrsim \left( M_\text{pl} \kappa^{3/2}/\lambda^{9/2} \right)^{1/2}$ in this state the scalar perturbations of the dynamical fields will similarly exhibit Laplace instability. A lighter tadpole compared to the braiding can keep the Laplace stability in check while the system asymptotically evolves to the Minkowski vacuum for a longer period, but never indefinitely. This, among other effects at linear cosmological perturbations, can be studied conveniently using the effective field theory formalism (Appendix \ref{sec:eft}). Studying the field equations in the regime in which the Laplace condition is violated is an interesting open problem.
Other future directions may be considered. First, it would be interesting to see if similar departures from the Fab Four can be obtained. Second, models containing light dynamical fields must suppress fifth forces to satisfy Solar system constraints and be considered viable. Screening mechanisms are a potential loophole, but the question of whether these models exhibit this nonlinear feature is unresolved. Given that the presence of matter breaks the degeneracy condition, a `static' metric is unlikely to solve the coupled scalar-Einstein field equations in the presence of a central mass. Hence what spacetime replaces the role of the Schwarzschild metric remains to be found. The lack of cosmological constraints on degenerate and (now) nearly-degenerate models must also be given attention, once a viable mechanism to generate late-time accelerated expansion has been introduced. Finally, studying the conditions under which all fields remain bounded could help to reconcile the self tuning mechanism with high energy physics.
\section*{Acknowledgements}
The authors would like to thank Eric Linder, Arnaz Khan, and Andy Taylor for helpful suggestions and discussions. SAA is supported by an appointment to the JRG Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government, and was also supported by the Korean Local Governments in Gyeongsangbuk-do Province and Pohang City.
\section{Introduction}
Two-dimensional causal dynamical triangulations (CDT) \cite{al}
provides a well defined path integral representation
of two-dimensional projectable Ho\v{r}ava-Lifshitz quantum gravity
(HL \cite{hl}),
as was recently shown in \cite{agsw}. 2d CDT coupled to conformal
field theories with central charges $c=1/2$ and $c=4/5$ as well as
$c\geq 1$ have been investigated numerically \cite{aal,aal1,gauss}.
However, it has not yet been possible to provide exact solutions
of the gravity theory coupled to a well defined continuum matter theory
despite the existence of a matrix formulation \cite{cdtmatrix}\footnote{To
be precise, CDT has been solved when coupled to some ``non-standard''
hard dimer models \cite{aggs,az}, but it is unknown if these dimer models have
an interesting continuum limit. Also, ``generalized CDT'' models coupled to
ordinary hard dimer models have been solved \cite{aggs,az1},
using matrix models.}. Here we will take
a first such step and solve CDT coupled to gauge theories.
Gauge theories are simple in two dimensions since there
are no propagating field degrees of freedom. However, if the
geometry is non-trivial there can still be non-trivial
dynamics, involving a finite number of degrees of freedom.
In the CDT case we consider space-time with the topology
of a cylinder, space being compactified to $S^1$, and we
thus have non-trivial dynamics associated with the holonomies
of $S^1$. This has been studied in great detail in flat space-time
(see \cite{2dgauge} and references therein).
We will use the results from these studies
to solve CDT coupled to gauge theory. The rest of this
article is organized in the following way. In Sec.\ \ref{review}
we review the part of \cite{2dgauge} that we need for the
construction of the CDT quantum Hamiltonian. In Sec.\ \ref{H} we
find the lattice transfer matrix and the corresponding continuum
Hamiltonian and finally in Sec.\ \ref{cosmo} we discuss
``cosmological'' applications.
\section{2d gauge theories on a cylinder}\label{review}
Let us first heuristically understand the Hamiltonian for
gauge theory on the cylinder, the gauge group $G$ being a
simple Lie group (we can think of $G=SU(N)$ if needed). The action is
\beq\label{j1}
S_{YM}= \frac{1}{4} \int dtdx \;(F^a_{\m\n})^2,~~~~ \m,\n=0,1,
\eeq
where $F^a_{01}=E_1^a$ is the chromo-electric field. Quantizing in
the temporal gauge, $A_0^a=0$, say, one obtains the Hamiltonian
\beq\label{j2}
\hH = \oh \int dx \; (\hE_1^a)^2,
~~~\hE_1^a\equiv -i \frac{\del}{\del A^a_1(x)},
\eeq
and this Hamiltonian acts on physical states, i.e.\ wavefunctions
which satisfy the Gauss law
\beq\label{j3}
(D_1 \hE^1)^a\Psi(A) =0,
\eeq
where $D_1$ denotes the covariant derivative.
Since $D_1E^1$ are the generators of gauge transformations, \rf{j3}
just tells us that $\Psi(A)$ is gauge invariant. But on $S^1$ the
only gauge invariant functions are class functions of
the holonomies and any class function can be expanded in
characters of irreducible unitary representations of the group.
Let $T_R^a$ denote the Lie algebra generators of the representation $R$,
where $\tr_R T_R^aT_R^a= C_2(R) $, the value of quadratic Casimir
for the representation $R$. For a holonomy
\beq\label{j4}
U_R(A) = P \e^{i g \oint dx A^a_1(x)T_R^a}, ~~~\chi_R (U) \equiv \tr_R U_R,
\eeq
where $g$ is the gauge coupling, one easily finds that the action
of $\hH$ on the wavefunction $\chi_R(U(A))$ is
\beq\label{j5}
\hH \chi_R(U(A)) =\oh g^2 L C_2(R) \chi_R(U(A)).
\eeq
From this we conclude that on
the gauge invariant wave functions we can write\footnote{\label{foot1} We are
clearly not very precise here when discussing the quantization
(that is why we started this section with the word ``heuristic'').
We have still available a time independent gauge transformation
which we can use to gauge the holonomy $U(A)$ to a Cartan
subalgebra of $G$, i.e.\ to diagonalize $U(A)$, and further
to permute the diagonal elements. Strictly speaking $\hH$
should be defined on this subspace which is the orbifold
$T^{N-1}/S_N$ for $G=SU(N)$. We refer the reader to
\cite{2dgauge} for details.}
\beq\label{j6}
\hH = \oh g^2 L \Del_G
\eeq
where $\Del_G$ is the Laplace-Beltrami operator on the group $G$
(here $SU(N)$), and further that the gauge invariant eigenfunctions
are the irreducible characters of $G$.
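This can be made concrete for $G=SU(2)$: class functions depend only on a single conjugation angle $\psi$, the character of the $n$-dimensional irreducible representation is $\chi_n(\psi) = \sin(n\psi)/\sin\psi$ with $n = 2j+1$, and the radial part of the Laplace--Beltrami operator (in the round-metric normalisation, which is our choice here and differs from the one implicit in \rf{j6} by an overall constant) has eigenvalue $-(n^2-1) = -4j(j+1)$ on $\chi_n$, proportional to the quadratic Casimir. A small symbolic check:

```python
import sympy as sp

psi = sp.symbols('psi', positive=True)
n = sp.symbols('n', integer=True, positive=True)

# SU(2) character of the n-dimensional irrep (n = 2j+1), as a class function
chi = sp.sin(n*psi)/sp.sin(psi)

# radial part of the Laplace-Beltrami operator on SU(2) ~ S^3, round metric
lap_chi = sp.diff(chi, psi, 2) + 2*sp.cot(psi)*sp.diff(chi, psi)

# eigenvalue -(n^2 - 1) = -4 j (j+1), proportional to the quadratic Casimir
res = sp.simplify(lap_chi + (n**2 - 1)*chi)
assert res == 0
```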
Let us now quantize the theory using a lattice, i.e.\
using a (regularized) path integral. The lattice partition function
is defined as
\beq\label{j7}
Z(g) = \int \prod_\ell dU_\ell \prod_{{\rm plaquettes}} Z_P[U_P],
\eeq
where to each link $\ell$ we associate a $U_\ell \in G$, and $U_P$ is
the product of the $U_\ell$'s around the plaquette (since we always take
the trace of $U_P$ it does not matter which link is first in the product
as long as we keep the orientation). One writes $U_\ell = \e^{i a g A_\ell^b t^b}$
where $\ell$ signifies a link in direction $\m=0$ or $\m=1$, $a$ is
the length of a lattice link and we choose $t^a$ to be generators of
the Lie algebra of $G$ in the fundamental representation, normalized
to $\tr t^b t^c = \oh \del^{bc}$. This establishes a formal relation
between the gauge fields $A_\ell$ and the group variables $U_\ell$.
One has a large choice for $Z_P[U_P]$, the only requirement being
that $Z(g)$ in \rf{j7} should formally become the continuum
path integral when the lattice spacing is taken to zero. Often
the so-called Wilson action is used where
\beq\label{j8}
Z_P[U_P] = \e^{\bt \tr (U_P+U_P^{-1})},~~~~\bt = \frac{1}{4g^2 a^2}.
\eeq
In the limit where $a\to 0$ one has
$ \tr (U_P+U_P^{-1}) \to 1-a^4 g^2(F_{\m\n}^b)^2 +O(a^6)$, thus leading
to the correct naive continuum limit in \rf{j7} if $\bt$ scales as
in \rf{j8}. For the purpose of extracting the Hamiltonian
it is convenient for us
to use a different $Z_P[U_P]$, the so-called heat kernel action
\beq\label{j9}
Z_P[U_P] = \la U_P| e^{-\oh g^2 A_P \Del_G}|I\ra = \sum_R d_R \chi_R(U_P) \,
\e^{-\oh g^2 A_P C_2(R)},
\eeq
where $A_P = a_t a_s$ denotes the area of the plaquette with spatial
lattice link length $a_s$ and time-like link length $a_t$ (we will
usually think of $a_s=a_t$), $I$ denotes
the identity element in $G$ and, as above $\Del_G$ the Laplace-Beltrami
operator on $G$. Using $U_\ell = I + i a g A_\ell^a t^a + \cdots$
in the limit $a\to 0$,
and $\sum_R d_R \chi_R(U_P) =\del(U_P-I)$, one can show
that the continuum Yang-Mills action is formally reproduced.
The convenient property of the heat kernel action in 2d is that
it is additive, i.e.\ if we integrate over a link in \rf{j7} the
action is unchanged: write $U_{P_1}= U_4U_3U_2U_1$ and
$U_{P_2} = U^{-1}_4U_7U_6U_5$, then
\beq\label{j10}
\int dU_4 Z_{P_1} [U_{P_1}] Z_{P_2}[U_{P_2}] = Z_{P_1+P_2} [U_{P_1+P_2}],
\eeq
where $U_{P_1+P_2} = U_7U_6U_5U_3U_2U_1$, see Fig.\ \ref{fig1}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=14cm]{fig1.eps}
\end{center}
\vspace{-5mm}
\caption{Integrating out the link $U_4$ using the heat kernel
action. The graphic notation is such that one has cyclic matrix multiplication
on loops and if an arrow is reversed (oriented link $\ell \to -\ell$)
then $U_{-\ell} = U^{-1}_\ell$.}
\label{fig1}
\end{figure}
The relation follows from the orthogonality of the group characters:
\beq\label{j11}
\int dU \; \chi_R (XU) \chi_{R'} (U^{-1} Y) =
\frac{1}{d_R} \del_{RR'} \chi_R(XY).
\eeq
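For a finite group such as $Z_N$ the group integrals in \rf{j10} and \rf{j11} become averages over the $N$ group elements, and the additivity of the heat kernel action can be checked directly. Below is a short numerical sketch (not part of the derivation above; the group order, areas and holonomies are arbitrary illustration values):

```python
import cmath
from math import cos, exp, pi

N = 5                   # order of the cyclic group Z_N
tau1, tau2 = 0.3, 0.7   # heat kernel "areas" (1/2 g^2 A_P), illustration values

def Z_P(k, tau):
    # Heat kernel weight for Z_N: characters chi_n(k) = exp(2*pi*i*n*k/N),
    # Casimirs C_n = 2*(1 - cos(2*pi*n/N)), and all d_n = 1 (abelian group).
    return sum(cmath.exp(2j * pi * n * k / N)
               * exp(-tau * 2 * (1 - cos(2 * pi * n / N)))
               for n in range(N))

# Gluing two plaquettes along a shared link: the group integral becomes
# an average over the N elements of Z_N, and the plaquette areas add.
k1, k2 = 2, 4
lhs = sum(Z_P((u + k1) % N, tau1) * Z_P((k2 - u) % N, tau2)
          for u in range(N)) / N
rhs = Z_P((k1 + k2) % N, tau1 + tau2)
assert abs(lhs - rhs) < 1e-12
```

The check rests on the discrete analog of the character orthogonality relation: the average of $\e^{2\pi i(n-m)u/N}$ over $u$ is $\del_{nm}$.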
Let us now consider a lattice with $t$ links in the time direction
and $l$ links in the spatial direction. We have two boundaries,
with gauge field configurations $\{U_\ell\}$ and $\{U'_\ell\}$, which
we can choose to keep fixed (Dirichlet-like boundary conditions (BC)),
integrate over (free BC),
or identify and integrate over (periodic BC). We write, using
Dirichlet BCs
\beq\label{j12}
Z(g,\{U'_\ell\},\{U_\ell\}) = \la \{U'_\ell\}|\, \hT^t\,
|\{U_\ell\}\ra,~~~~\hT = \e^{-a_t \hH}
\eeq
where $\hT$ is the transfer matrix, giving us the transition
amplitude between link configurations at neighboring
time slices. However in 2d we can restrict $\hT$ to be
an operator only acting on the holonomies since we can
use \rf{j10} to integrate out the temporal
links $U_\ell$ which connect two time slices. We obtain
\beq\label{j13}
\la U'| \hT | U\ra = \int dU_0\; \la U'U_0 U^{-1} U_0^{-1}| \,
\e^{-l a_s a_t \oh g^2\Del_G} | e\ra,
\eeq
where we have not integrated over the last temporal link $U_0$ and
$U$ is the holonomy for the links at the time slice $t'$, say, and
$U'$ the holonomy for the links at the neighbor time slice $t'+1$
(see Fig. \ref{fig2}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{fig2.eps}
\end{center}
\vspace{-5mm}
\caption{Integrating out the temporal links in a time-slab,
except a last link $U_0$. The temporal links $U_0^{-1}$ and $U_0$
are identified on the cylinder,
and the result is $Z_P[U_P]$,
$U_P=U_0(U_5U_4U_3U_2U_1)U_0^{-1}(U_1'U_2'U_3'U_4'U_5')$
using the heat kernel action. }
\label{fig2}
\end{figure}
Using $\la U' U^{-1} | \e^{-\Del_G}|e\ra = \la U' | \e^{-\Del_G}|U\ra$ we
can write the transfer matrix elements as
\beq\label{j14}
\la U'|\, \e^{- a_t (l a_s \oh g^2\Del_G)} \hP |U\ra =
\la U'|\hP\; \e^{- a_t (l a_s \oh g^2\Del_G)} \hP |U\ra,
\eeq
where the projection operator $\hP$ is defined by
\beq\label{j15}
\hP |U\ra = \int dG\, |G U G^{-1}\ra.
\eeq
$\hP$ commutes with $\Del_G$, a fact which allows us
to write the right hand side
of eq.\ \rf{j14} and thus ensures that we can restrict the
action of the transfer matrix even further, namely to the class functions
on $G$. To make this explicit consider an arbitrary state
\beq\label{j15a}
|\Psi \ra = \int dU \; |U\ra \Psi(U),~~~~\Psi(U) = \la U|\Psi\ra,
\eeq
i.e.\ $(\hP \Psi)(U) = \int dG \Psi(G^{-1} U G) $ which is clearly a
class function.
Denote the length of the lattice $L =a_s l$. From \rf{j12}
and \rf{j14} it follows that
\beq\label{j16}
\hH = \oh g^2 L \Del_G.
\eeq
This agrees with the continuum expression \rf{j6}.
We have reviewed how the lattice theory, even
if no gauge fixing is imposed, nevertheless makes it
possible and natural to restrict the transfer matrix and
the corresponding Hamiltonian to class functions of the holonomies.
Of course, it is only for the heat kernel action that
one derives an $\hH$ formally identical to the continuum
Hamiltonian even before the lattice spacings are taken to zero.
The above arguments could be repeated for any reasonable action,
e.g.\ the Wilson action mentioned above, and in the limit where
$a_s,a_t\to 0$ one would obtain \rf{j16}. Finally, the derivation
can be repeated also for Abelian groups or discrete groups like
$Z_N$ groups, resulting in an expression like \rf{j16}
with an appropriate group Laplacian $\Del_G$, in the Abelian case
without the issue of reduction of domain of $\Del_G$.
\section{Coupling to geometry}\label{H}
The covariant version of the Yang-Mills theory \rf{j1} is
\beq\label{j17}
S_{YM} = \oq \int d^2 x \sqrt{g(x)}\; F_{\m\n}^a (F^{\m\n})^a.
\eeq
We want a path integral formulation which includes also
the integration over geometries. Here the CDT formulation is
natural: one is summing over geometries which have cylindrical
geometry and a time foliation, each geometry being defined by a
triangulation and the sum over geometries in the path
integral being performed by summing over all triangulations
with topology of the cylinder and a time foliation. The coupling
of gauge fields to a geometry via dynamical triangulations
(where the length of a link is $a$) is
well known \cite{DTgauge}: one uses the triangles as plaquettes.
Thus the 2d partition function becomes
\beq\label{j18}
Z(\Lam,g,l',l,\{U'_\ell\},\{U_\ell\}) =
\sum_{\cT} \e^{- \oh N_{\cT} \Lam \frac{\sqrt{3}}{4}a^2} Z^{G}_{\cT}(\bt),
\eeq
where the summation is over CDT triangulations $\cT$,
with an ``entrance'' boundary consisting of $l$ links
and an ``exit'' boundary consisting of $l'$ links,
$\Lam$ is the lattice cosmological constant, $N_{\cT}$ the
number of triangles in $\cT$,
and the gauge partition function for a given triangulation $\cT$
is defined as
\beq\label{j19}
Z_{\cT}^{G}(g, \{U'_\ell\},\{U_\ell\}) =
\int \prod_\ell dU_\ell \; \prod_P Z_P[U_P].
\eeq
The integration is over links and $\prod_P$ is the product
over plaquettes (here triangles) in $\cT$. For the plaquette
action defining $Z_P[U_P]$ we have again many choices, and for
convenience we will use the heat kernel action \rf{j9}.
We can introduce a transfer matrix $\hT$, which connects
geometry and fields at time label $t'$ to geometry and fields
at time label $t'+1$, and if the (discretized) universe
has $t+1$ time labels we can write
\beq\label{j20}
Z(\Lam,g,l',l,\{U'_\ell\},\{U_\ell\}) =
\la \{U'_\ell\}, l' | T^t | \{U_\ell\},l\ra,~~~~T= \e^{-a \hH}.
\eeq
The one-dimensional geometry at $t'$ is characterized by
the number $l$ of links (each of length $a$), and on these
links we have field configurations $\{U_\ell\}$. Similarly
the geometry at $t'+1$ has $l'$ links and field configurations
$\{U'_\ell\}$. For fixed $l$ and $l'$ the number of
plaquettes (triangles) in the spacetime cylinder ``slab'' between $t'$ and
$t'+1$ is $l+l'$ and the number of temporal links is also $l+l'$.
There are a number of possible triangulations of the slab for fixed
$l$ and $l'$, namely
\beq\label{j21}
N(l',l) = \frac{1}{l+l'}\binom{l+l'}{l}.
\eeq
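The count \rf{j21} is straightforward to tabulate; a minimal sketch follows (note that, because of the $1/(l+l')$ symmetry factor, the result need not be an integer, and it is manifestly symmetric under $l \leftrightarrow l'$ since $\binom{l+l'}{l}=\binom{l+l'}{l'}$):

```python
from fractions import Fraction
from math import comb

def N_slab(lp, l):
    # Number of triangulations of a slab with l "entrance" links and
    # lp "exit" links: N(l',l) = binom(l+l', l) / (l+l').
    return Fraction(comb(l + lp, l), l + lp)

assert N_slab(1, 1) == 1
assert N_slab(2, 3) == 2
assert N_slab(4, 7) == N_slab(7, 4)   # symmetric in l and l'
```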
For each of these triangulations we can integrate over the
$l+l'$ temporal link variables $U_0$, as we did for a fixed lattice
and we obtain as in that case
\beq\label{j22a}
\la U' |\hP\; \e^{-a (a (l+l') \frac{\sqrt{3}}{8} g^2\Del_G)} \hP |U\ra,
\eeq
where $U'$ and $U$ are the holonomies corresponding
to $\{U'_\ell\}$ and $\{U_\ell\}$, respectively, and $\hP$
is the projection operator \rf{j15} to class functions coming from
the last integration over a temporal link $U_0$. The factor $\sqrt{3}/8$
rather than the factor 1/2 appears because we are using equilateral
triangles rather than squares as in Sec.\ \ref{review}. In order
to have unified formulas we make a redefinition $g^2 \sqrt{3}/4 \to g^2$
and thus we have the matrix element
\beq\label{j22}
\la U' |\hP\; \e^{-a (a (l+l') \oh g^2\Del_G)} \hP |U\ra.
\eeq
If we did not have the matter fields the transfer matrix would be
\beq\label{j23}
\la l'| \hT_{{\rm geometry}} |l\ra = N(l',l) \e^{-a((l+l')a \oh \Lam)},
\eeq
where we have made a redefinition $\Lam \sqrt{3}/4 \to \Lam$,
similar to the one made for $g^2$, in order to be in accordance
with notations in other articles.
The limit where $a \to 0$ and $L' =a \,l'$ and $L= a\,l$
are kept fixed has been studied \cite{2dcdt} and one finds
\beq\label{j23a}
\hT_{{\rm geometry}} = \e^{-a (\hH_{{\rm cdt}}+ O(a))},~~~~
\hH_{{\rm cdt}} = -\frac{d^2}{dL^2} L + \Lam L.
\eeq
From the definition \rf{j20} of $\hH$ and \rf{j22} it follows that
\beq\label{j24}
\hH = \hH_{{\rm cdt}} + \oh g^2 L \Del_G,
\eeq
acting on the Hilbert space which is the tensor product
of the Hilbert space of square integrable class functions on $G$ and
the Hilbert space of the square integrable functions on $R_+$ with measure
$d\m(L) = L dL$.
We have obtained the Hamiltonian \rf{j24} using the path integral,
starting out with a lattice regularization. Alternatively one
can use that the classical 2d YM action \rf{j1} on the (flat) cylinder
can be formulated in terms of the holonomies $U(t)$ defined in
eq.\ \rf{j4} (see \cite{rajeev} for details):
\beq\label{jx1}
S_{YM} = \frac{1}{2} \int dx dt\; \tr E_1^2 =
\frac{1}{2g^2 L} \int dt \;\tr (i U^{-1} \prt_0 U)^2.
\eeq
Let us now couple the YM theory to geometry as in \rf{j17}.
One observes that $\tilde{E}= E^1/\sqrt{g}=E_1 \sqrt{g}$ behaves as a scalar
under diffeomorphisms. Thus $D_1 \tilde{E} =0$, where $D_1$ is
the ordinary gauge covariant derivative as in \rf{j3}.
This implies that the derivation
in \cite{rajeev} which led to \rf{jx1} for flat spacetime
is essentially unchanged. As we are interested in HL projectable
2d quantum geometries we assume the geometry is
defined by a lapse function $N(t)$, a shift function $N_1(x,t)$
and a spatial metric $\g(x,t)$. Then $\sqrt{g(x,t)} = N(t) \sqrt{\g(x,t)}$,
and introducing the spatial length $L(t) = \int dx \sqrt{\g(x,t)}$
one obtains instead of \rf{jx1}
\beq\label{jx2}
S_{YM} = \oh \int dt dx \; \sqrt{g(x,t)} \, \tr \tilde{E}^2 = \frac{1}{2g^2}
\int dt \;\frac{\tr (i U^{-1} \prt_0 U)^2}{N(t) L(t)}.
\eeq
Combined with the results from \cite{agsw} for the HL-action
one can write the total action as
\beq\label{jx3}
S_{TOT} = \int dt \;\left[\frac{1}{2N(t) L(t)}
\left(\oh (\prt_0 L)^2 +
\frac{1}{g^2} \tr (i U^{-1} \prt_0 U)^2\right)+
{\Lambda} N(t) L(t)\right].
\eeq
This classical action leads to the quantum Hamiltonian \rf{j24}.
Let us now study the quantum Hamiltonian \rf{j24} in more detail.
Since the eigenfunctions of $\Del_G$ after projection with $\hP$
are just the characters $\chi_R(U)$ on $G$
and they have eigenvalues $C_2(R)$, we can solve the eigenvalue equation
for $\hH$ by writing $\Psi(L,U) = \psi_R(L) \chi_R(U)$.
For $\hH_{{\rm cdt}}$ we have \cite{2dcdt,physrep}
\beq\label{j25}
\hH_{{\rm cdt}} \psi_n (L,\Lam) = \ep_n \psi_n(L,\Lam), ~~~~~
\ep_n= 2n \sqrt{\Lam},~~n>0,
\eeq
where the eigenfunctions are of the form
$\Lam \,p_n(L\sqrt{\Lam}) \e^{-\sqrt{\Lam} L}$,
$p_n(x)$ being a polynomial of degree $n-1$.
The corresponding solution for $\psi_R(L)$ is obtained by the
substitution
\beq\label{j25a}
\Lam \to \Lam_R= \Lam + \oh g^2 C_2(R),
\eeq
i.e.\
\beq\label{j26}
\hH \Psi_{n,R} = E(n,R) \Psi_{n,R},~~~E(n,R) = 2n \sqrt{\Lam_R},~~n>0
\eeq
\beq\label{j27}
\Psi_{n,R}(L,U) = \Lam_R\, p_n(L \sqrt{\Lam_R})\, \e^{-L \sqrt{\Lam_R}} \,
\chi_R(U),
\eeq
with the reservation that the correct variable is not really
the group variable $U$ but rather the conjugacy class corresponding
to $U$. In the simplest case of $SU(2)$ the group manifold can
be identified with $S^3$ and $\Del_G$ is the Laplace-Beltrami
operator on $S^3$. The conjugacy classes are labeled by the geodesic
distance $\tht$ to the north pole and the representations are labeled
by $R=j$ and we have\footnote{\label{foot2}
Using the lattice we have effectively
performed a quantization using the fact that $SU(2)$ is a compact group.
However, as already mentioned in footnote \ref{foot1}, there are
subtleties associated with the quantization, more precisely
whether one chooses first to project to the algebra and
quantize there, or first to quantize using the group
variables and then project to the holonomies. We refer to \cite{2dgauge}
for a detailed discussion.}
\beq\label{j28}
C_j = j(j+1),~~~~\chi_j (\tht) =
\frac{\sin (j+\oh)\tht}{\sin \oh \tht},~~~j=0, \oh, 1, \ldots
\eeq
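As a concrete illustration of \rf{j25a}--\rf{j28}, the low-lying spectrum for $G=SU(2)$ can be tabulated directly; a short sketch, with arbitrary illustration values for $\Lam$ and $g^2$:

```python
from math import sqrt

Lam, g2 = 1.0, 0.5   # illustration values for Lambda and g^2

def E(n, j):
    # eq. (j26) with Lambda_R from (j25a); C_j = j(j+1) for SU(2), eq. (j28)
    Lam_R = Lam + 0.5 * g2 * j * (j + 1)
    return 2 * n * sqrt(Lam_R)

# Lowest levels: geometric excitations n = 1,2,... in each
# representation j = 0, 1/2, 1, 3/2, ...
levels = sorted(E(n, j / 2) for n in (1, 2, 3) for j in range(4))
print(levels[:5])
```

Note how the matter representation only enters through the shift $\Lam \to \Lam_R$ inside the square root.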
As already mentioned the above results are also valid in
simpler cases. If $G=U(1)$ one has
\beq\label{j29}
U(\tht) = \e^{i \tht},~~~~\Del_G = -\frac{d^2}{d \tht^2},
\eeq
\beq\label{j30}
C_n = n^2,~~~~\chi_n( \tht) = \e^{i n \tht},~~~~n=0,\pm 1,\pm2, \ldots.
\eeq
and if $G=Z_N$, the discrete cyclic group of order $N$,
\beq\label{j31}
U(k) = \e^{\frac{2\pi i}{N} \, k},~~~~~
(\Del_G)_{k,k'} = 2\del_{k,k'}-\del_{k,k'+1}-\del_{k,k'-1}, ~~~k=0,\ldots,N-1,
\eeq
\beq\label{j32}
C_n = 2\left(1-\cos\left(\frac{2\pi}{N} \, n\right)\right),~~~~
\chi_n(k) = \e^{i \frac{2\pi n}{N}\,k},~~~n=0,1,\ldots,N-1.
\eeq
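The eigenvalues in \rf{j32} can be checked against the matrix form of the $Z_N$ Laplacian numerically; a small sketch, using the convention in which $\Del_G$ is positive (the discrete analog of $-d^2/d\tht^2$ for $U(1)$), so that its eigenvalues are the $C_n$ above:

```python
import numpy as np

N = 6
# Positive Laplacian on Z_N (discrete analog of -d^2/dtheta^2):
# (Delta)_{k,k'} = 2 delta_{k,k'} - delta_{k,k'+1} - delta_{k,k'-1}, mod N.
Lap = np.zeros((N, N))
for k in range(N):
    Lap[k, k] = 2.0
    Lap[k, (k + 1) % N] -= 1.0
    Lap[k, (k - 1) % N] -= 1.0

eig = np.sort(np.linalg.eigvalsh(Lap))
C = np.sort([2.0 * (1.0 - np.cos(2.0 * np.pi * n / N)) for n in range(N)])
assert np.allclose(eig, C)     # matches eq. (j32)
assert abs(eig[0]) < 1e-10     # the trivial character has eigenvalue 0
```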
\section{The ground state of the universe}\label{cosmo}
In CDT the disk amplitude is defined as
\beq\label{j33}
W_\Lam (L) = \int_0^\infty dt\;\la L |\, \e^{-t \hH_{\rm cdt}}|L'\to 0\ra.
\eeq
It is a version of the Hartle-Hawking wave function. One
can calculate $W_\Lam(L)$ \cite{al}:
\beq\label{j34}
W_\Lam(L) = \frac{\e^{-\sqrt{\Lam}L}}{L}.
\eeq
This function satisfies
\beq\label{j35}
\hH_{{\rm cdt}} W_\Lam (L) = 0,
\eeq
and one can view \rf{j35} as the Wheeler-DeWitt equation. Formally
$W_\Lam(L) \propto \psi_0(L)$ in the notation used in eq.\ \rf{j25}, but it was
not included as an eigenfunction in the listing in \rf{j25}
since it does not belong to the
Hilbert space $L^2(R_+)$ with measure $LdL$.
If we couple the theory of fluctuating geometries to gauge fields as above
we have to decide what kind of boundary condition to
impose in the limit $L'\to 0$ in \rf{j33}.
A possible interpretation of this ``singularity''
in the discrete setting is that all the vertices of
the first time slice at time $t'=1$ have additional temporal
links joining a single vertex at time $t'=0$ (see Fig.\ \ref{fig3}).
We can view this as an explicit, discretized, realization of
the matter part of the Hartle-Hawking boundary condition.
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{fig3.eps}
\end{center}
\vspace{-5mm}
\caption{The ``beginning of the universe'' at $t'=0$ and
the connection to the first loop at $t'=1$. }
\label{fig3}
\end{figure}
Denote by $\{U^{(0)}_\ell\}$, $\ell =1,\ldots,l$ the gauge fields on
these temporal links and by $\{U_\ell\}$, $\ell =1,\ldots,l$
the gauge fields on the spatial links
constituting the first loop at time $t'=1$ and denote by $U(1)$
the corresponding holonomy at time $t'=1$.
The contribution to the matter partition function coming from
this first ``big bang'' part of the universe is then
\beq\label{j36}
\int \prod_{k=1}^{l} dU^{(0)}_k \; \prod_{k'=1}^l Z_{P_{k'}}[U_{P_{k'}}] =
Z_{{\rm disk}} [U(1)] = \la U(1)| \, \e^{-\oh g^2 l a^2 \Del_G} |I\ra ,
\eeq
where we have integrated out the temporal links $\{U^{(0)}_\ell\}$.
The matter partition function can now be written (after integrating
out the temporal links in the rest of the lattice too) as the integral
over the $t$ holonomies $U(1),U(2),\ldots,U(t)$:
\beq\label{j37}
\int \prod_{i=1}^t \left( dU(i) \la U(i)| \,
\e^{-\oh g^2 (l_i+l_{i-1}) a^2 \Del_G}|U(i-1)\ra\right),
\eeq
where $U(0) \equiv I$ and $l_0=0$.
From this expression it is natural to say that the universe
starts out in the matter state $|I\ra$, or expanded in characters:
\beq\label{j38}
\la U| I\ra = \del(U-I) = \sum_R d_R \chi_R(U).
\eeq
This wave function is not normalizable if the group has
infinitely many representations, but neither is $W_\Lam(L)$ as
we just saw. Combining the two we might define the Hartle-Hawking wavefunction
for 2d CDT coupled to gauge fields as
\beq\label{j39}
W(L,U) = \int_0^\infty dT \, \la L,U|\,\e^{-T \hH} |L=0,U=I\ra
= \sum_R d_R \chi_R(U) \; W_{\Lam_R}(L),
\eeq
where $\Lam_R$ is defined in eq.\ \rf{j25a}.
We have explicitly:
\beq\label{j40}
W(L,k) = \sum_{r=0}^{N-1} \e^{\frac{2\pi i r k}{N}} \;
\frac{\exp\left(-L \sqrt{\Lambda +
g^2[ 1 -\cos(2\pi r/N)]}\,\right)}{L},
\eeq
for the $Z_N$ theory,
\beq\label{j41}
W(L,\theta) =
\sum_{r=-\infty}^\infty \e^{ir\theta}\;
\frac{\exp \left(-L\sqrt{\Lambda+\oh r^2g^2 }\right)}{L},
\eeq
for the $U(1)$ theory, and
\beq\label{j42}
W(L,\theta) =
\sum_{k=0}^{\infty}
\frac{\sin\left(\frac{(k+1)\tht}{2}\right)}{\sin \frac{\tht}{2}}\;
\frac{\exp \left(-L\sqrt{\Lambda+\frac{1}{8} g^2 k(k+2) }\right)}{L},
\eeq
for the $SU(2)$ theory.
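In practice these character sums converge very fast, since the terms fall off roughly like $\e^{-L g |r|}$ for large $|r|$. A short sketch evaluating a truncation of the $U(1)$ wave function \rf{j41} (illustration values for $\Lam$ and $g^2$; `r_max` is a truncation parameter, not part of the formula):

```python
from math import cos, exp, sqrt

def W_u1(L, theta, Lam=1.0, g2=0.5, r_max=200):
    # Truncated character sum for W(L, theta), eq. (j41);
    # the terms with r and -r combine into 2*cos(r*theta).
    w = exp(-L * sqrt(Lam)) / L
    for r in range(1, r_max + 1):
        w += 2 * cos(r * theta) * exp(-L * sqrt(Lam + 0.5 * r ** 2 * g2)) / L
    return w

print(W_u1(1.0, 0.3))
```

Since the sum is even in $\theta$, `W_u1(L, theta)` equals `W_u1(L, -theta)`, and doubling `r_max` leaves the result unchanged to machine precision.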
We have tried to define the initial matter state $|I \ra$ in the
Hartle-Hawking spirit as coming from ``no boundary'' conditions
by closing the universe into a disk. Even if the ``initial''
(Big Bang) state is then a simple tensor product $|L=0\ra \otimes |I\ra$,
the corresponding Hartle-Hawking wave function is the result
of a non-trivial interaction between matter and geometry.
However, we cannot claim that the model
points to such a ``no boundary'' condition in a really
{\it compelling} way. From a continuum point of view it should not
make a difference if we, rather than implementing the continuum
statement $L' \to 0$ by adding a little cap, had implemented it
by insisting that the first time slice had $l=2$ or $l=3$, say.
The calculation of $W_\Lam(L)$ is insensitive to such details. However,
if our universe really started with such a microscopic loop, there
is no reason that we should not choose the matter ground state,
i.e.\ the trivial, constant, character as the initial state.
In this case absolutely nothing happens with matter during the time evolution
of the universe. It just stays in this state and the state
does not influence the geometry. Clearly the state $| I\ra$ is
much more interesting and more in accordance with the picture
we have of the Big Bang of the real 4d world where
matter and geometry have interacted.
Even if the arguments for the state $|I\ra$ are not compelling, as
just mentioned, it is nevertheless encouraging that the
``natural'' Hartle-Hawking like boundary condition leads to a non-trivial
interaction between geometry and matter.
\vspace{.5cm}
\noindent {\bf Acknowledgments.}
The authors acknowledge support from the ERC-Advance grant 291092,
``Exploring the Quantum Universe'' (EQU) as well as the support
from FNU, the Free Danish Research Council, from the grant
``quantum gravity and the role of black holes''. Finally this research
was supported in part by the Perimeter Institute of Theoretical Physics.
Research at Perimeter Institute is supported by the Government of Canada
through Industry Canada and by the Province of Ontario through the
Ministry of Economic Development \& Innovation.
\section{Introduction}
Near-term quantum computing technologies will hopefully allow quantum computing
systems to reliably solve tasks that are beyond the capabilities of classical
systems~\citep{preskill2018quantum}. One of the most promising applications of
quantum computing is hybrid quantum-classical learning systems, which utilize
parameterized quantum operations and classical optimization algorithms to solve
the learning tasks. Prior works have demonstrated that parametrized quantum
circuits (PQCs)~\citep{benedetti2019parameterized} are able to handle a variety
of supervised and unsupervised tasks such as
classification~\citep{schuld2020circuit,havlivcek2019supervised,schuld2019quantum,li2021vsql}
and generative
modeling~\citep{liu2018differentiable,zhu2019training,larose2020robust,huang2021experimental},
as well as provide proofs of their learning advantages~\citep{du2020expressive,jerbi2021parametrized}.
Some recent work~\citep{jerbi2021parametrized,skolik2021quantum} further shows
that PQCs can be used to construct quantum policies to solve the more complex
reinforcement learning problems, with an empirical learning advantage over
standard deep neural networks (DNNs).
One of the key aspects behind the success of PQC-based algorithms is the
architectural designs of hybrid quantum learning frameworks. While prior
works~\citep{jerbi2021parametrized,skolik2021quantum,perez2020data} have either
identified some essential components, or empirically analyzed the influence of
different design choices on the learning performances, the development of
high-performance architectures of hybrid quantum systems nevertheless relies on
human ingenuity. On the other hand, architecture search methods, which aim to
automate the process of discovering and evaluating the architecture of complex
systems, have been extensively explored in classical learning systems, e.g.,
neural architecture search (NAS)~\citep{elsken2019neural}. More specifically,
recent works~\citep{lu2019nsga, ding2021optimizing} on combining genetic
algorithms with gradient-based optimization have demonstrated superior
performance in NAS and more generally optimizing deep neural networks. In the
context of quantum computing, common architecture search approaches, such as
greedy algorithms~\citep{grimsley2019adaptive,tang2021qubit}, evolutionary
algorithms~\citep{las2016genetic,du2020quantum,lu2021markovian}, reinforcement
learning~\citep{niu2019universal,kuo2021quantum}, and gradient-based
learning~\citep{zhang2020differentiable} have also been attempted to solve tasks
such as quantum control, variational quantum eigensolver, and quantum error
mitigation. However, most of these approaches focus on optimizing either
specific pure quantum circuits or single-qubit quantum operations, instead of
more complex multi-qubit hybrid systems. Overall, automated search and
optimization of architectures of hybrid quantum learning systems have not been
sufficiently explored yet.
In this work, we aim to explore using genetic algorithms to automatically design
the architecture of hybrid quantum-classical systems that can solve complex RL
problems. We propose EQAS-PQC, an Evolutionary Quantum Architecture Search
framework for constructing complicated quantum circuits based on some
fundamental PQC circuits. We adopt the ideas of successful approaches in NAS
using genetic algorithms, which have more flexible architecture search spaces
and require less prior knowledge about architecture design. In our experiments,
we consider the benchmark RL environments from OpenAI Gym, which has been widely
used for RL research. Experimental results show that agents trained by using our
method significantly outperform the ones from prior work. We further analyze the
top-performing PQC architectures found by our method to identify the common
patterns that appear during the search process, which may provide insights for
the future development of hybrid quantum-classical learning systems.
\section{Preliminaries and Related Work}
In this section, we introduce the basic concepts of quantum computation related
to this work, and give a detailed description of parametrized quantum circuits
and their applications.
\subsection{Quantum Computation Basics}
An $n$-qubit quantum system is generally represented by a complex Hilbert space
of $2^n$ dimensions. Under the bra-ket notation, the quantum state of the system
is denoted as a vector $\ket{\psi}$, which has unit norm
$\braket*{\psi}{\psi}=1$, where $\bra{\psi}$ is the conjugate transpose and
$\braket*{\psi}{\psi}$ represents the inner-product. The computation basis
states are represented as the tensor products of single-qubit computational
basis states, {\em e.g.}, the two-qubit state $\ket{01}=\ket{0}\otimes\ket{1}$ where
$\ket{0}=\begin{bmatrix} 1\\0 \end{bmatrix}$ and $\ket{1}=\begin{bmatrix} 0\\1
\end{bmatrix}$.
A unitary operator $U$ acting on qubits is called a quantum gate. Some common
quantum gates are frequently used in this work, namely the single-qubit Pauli
gates Pauli-$X$, Pauli-$Y$, Pauli-$Z$ and their associated rotation operators
$R_x$, $R_y$, $R_z$. The matrix representations of Pauli gates are
\begin{equation}
X = \begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix},
Y = \begin{bmatrix}
0 & -i \\ i & 0
\end{bmatrix},
Z = \begin{bmatrix}
1 & 0 \\ 0 & -1
\end{bmatrix}.
\end{equation}
Given a rotation angle $\theta$, the matrix representations of rotation
operators are
\begin{align}
\notag R_x(\theta) &= \begin{bmatrix}
\cos\frac{\theta}{2} & -i\sin\frac{\theta}{2} \\
-i\sin\frac{\theta}{2} & \cos\frac{\theta}{2}
\end{bmatrix},\\
\notag R_y(\theta) &= \begin{bmatrix}
\cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\
\sin\frac{\theta}{2} & \cos\frac{\theta}{2}
\end{bmatrix},\\
R_z(\theta) &= \begin{bmatrix}
e^{-i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}}
\end{bmatrix}.
\end{align}
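These matrices are easy to work with numerically; a minimal sketch checking the standard identity $R_x(\theta)=\cos(\theta/2)I - i\sin(\theta/2)X$ and unitarity (the angle is an arbitrary illustration value):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
I2 = np.eye(2, dtype=complex)

def R_x(theta):
    # Rotation about the x-axis, generated by the Pauli-X matrix.
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

theta = 0.7
assert np.allclose(R_x(theta),
                   np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X)
assert np.allclose(R_x(theta) @ R_x(theta).conj().T, I2)   # unitary
```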
If a quantum state of a composite system cannot be written as a product of the
states of its components, we call it an entangled state. Entanglement can be
created by applying controlled-Pauli-$Z$ gates to the input qubits.
A projective measurement of quantum states is described by an observable, $M$,
which is a Hermitian operator on the state space of the quantum system being
observed. The observable has a spectral decomposition
\begin{equation}
M = \sum_m mP_m,
\end{equation}
where $P_m$ is the projector onto the eigenspace of $M$ with eigenvalue $m$. Upon
measuring the state $\ket{\psi}$, the probability of getting result $m$ is given
by
\begin{equation}
p(m) = \bra{\psi}P_m\ket{\psi},
\end{equation}
and the expectation value of the measurement is
\begin{equation}
E(M) = \sum_m m\cdot p(m) = \bra{\psi}M\ket{\psi}.
\end{equation}
For a more detailed introduction to basic concepts of quantum computation, we
refer the readers to \citet{nielsen2002quantum}.
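As a concrete instance of the measurement formulas above, consider a single qubit measured in the computational basis, i.e.\ the observable $M=Z$ with projectors onto $\ket{0}$ and $\ket{1}$; a short sketch (the state amplitude is an arbitrary illustration value):

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)        # observable M = Z
P_plus = np.diag([1.0, 0.0]).astype(complex)    # projector onto |0>, m = +1
P_minus = np.diag([0.0, 1.0]).astype(complex)   # projector onto |1>, m = -1

# |psi> = cos(a)|0> + sin(a)|1>
a = 0.6
psi = np.array([np.cos(a), np.sin(a)], dtype=complex)

p0 = np.real(psi.conj() @ P_plus @ psi)    # p(+1) = <psi|P_{+1}|psi>
p1 = np.real(psi.conj() @ P_minus @ psi)   # p(-1) = <psi|P_{-1}|psi>
expect = np.real(psi.conj() @ Z @ psi)     # E(Z) = <psi|Z|psi>

assert abs(p0 + p1 - 1.0) < 1e-12
assert abs(expect - (p0 - p1)) < 1e-12     # E(M) = sum_m m * p(m)
```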
\subsection{Parametrized Quantum Circuits}
Given a fixed $n$-qubit system, a parametrized quantum circuit (PQC) is defined
by a unitary operation $U(s, \theta)$ that acts on the current quantum states
$s$ considering the trainable parameters $\theta$. In this work, we mainly
consider two types of PQCs: variational PQCs
(V-PQCs)~\citep{kandala2017hardware,benedetti2019parameterized} and
data-encoding PQCs (D-PQCs)~\citep{schuld2021effect,perez2020data}. The V-PQCs
are composed of single-qubit rotations $R_x$, $R_y$, $R_z$ with the rotation
angles as trainable parameters. The D-PQCs have a similar structure with
rotations, but the angles are the input data $d$ scaled by a trainable parameter
$\lambda$. The structures of both PQCs are depicted in Fig.~\ref{fig:pqc}, which
we describe in detail later in Sec.~\ref{sec:encode}.
A recent work~\citep{jerbi2021parametrized} proposes to use an
alternating-layered architecture~\citep{schuld2021effect,perez2020data} to
implement parameterized quantum policies for RL, which basically applies an
alternation of V-PQC (followed by an entanglement) and D-PQC till the target
depth. While this architecture is simple and effective, it is easy to see
that this general design can be readily modified and possibly improved by
changing the placement of its components. In this work, we aim to optimize the
design of such PQC-based systems with architecture search methods.
\subsection{Quantum Architecture Search}
Early research~\citep{spector2004automatic} has shown the usage of genetic
programming to solve specific quantum computing problems from an evolutionary
perspective. Prior
works~\citep{tang2021qubit,las2016genetic,du2020quantum,lu2021markovian,niu2019universal,kuo2021quantum,zhang2020differentiable}
have explored the usage of common architecture search approaches in various
quantum computing applications such as quantum control, variational quantum
eigensolver, and quantum error mitigation.
However, most of these works focus on specific quantum computing problems and
try to optimize the quantum circuits in a hardware-efficient manner.
More recently, a few approaches have been proposed to optimize the architectures
involving parameterized quantum circuits. \citet{grimsley2019adaptive} proposed
a method that iteratively adds parameterized gates and re-optimizes the circuit
using gradient descent. \citet{ostaszewski2021structure} proposed an
energy-based searching method for optimizing both the structure and parameters
for single-qubit gates and demonstrated its performance on a variational quantum
eigensolver. In this work, we take one step further and propose a more general
architecture search framework for hybrid quantum-classical systems with both
parameterized and non-parameterized quantum operators, which aims to solve
challenging learning problems such as RL.
\begin{figure*}[t]
\makebox[\linewidth]{\hspace{3em}
\Qcircuit @C=1em @R=.8em {
& & & \mathbf{x}_1(\theta) & & & & & \mathbf{x}_2(d,\lambda) &&&&& \mathbf{x}_3 &&&&&&& \mathbf{x}_0(\psi) \\
&\\
\lstick{\ket{0}_0} &\qw & \gate{R_x(\theta_{0,0})} & \gate{R_y(\theta_{0,1})} & \gate{R_z(\theta_{0,2})} &\qw &\qw &\qw &\gate{R_x(\lambda_{0}d_0)} &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\ctrl{3} &\qw &\qw &\qw &\multigate{3}{\mathbf{x}_1(\psi)} &\qw & \meter \gategroup{3}{3}{6}{5}{1.0em}{--} \gategroup{3}{9}{6}{9}{1.0em}{--} \gategroup{3}{13}{6}{16}{1.0em}{--} \gategroup{3}{20}{6}{22}{1.0em}{--} \\
\lstick{\ket{0}_1} &\qw & \gate{R_x(\theta_{1,0})} & \gate{R_y(\theta_{1,1})} & \gate{R_z(\theta_{1,2})} &\qw &\qw &\qw &\gate{R_x(\lambda_{1}d_1)} &\qw &\qw &\qw &\ctrl{-1} &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\ghost{\mathbf{x}_1(\psi)} &\qw & \meter\\
\lstick{\ket{0}_2} &\qw & \gate{R_x(\theta_{2,0})} & \gate{R_y(\theta_{2,1})} & \gate{R_z(\theta_{2,2})} &\qw &\qw &\qw &\gate{R_x(\lambda_{2}d_2)} &\qw &\qw &\qw &\qw &\ctrl{-1} &\ctrl{1} &\qw &\qw &\qw &\qw &\ghost{\mathbf{x}_1(\psi)} &\qw & \meter\\
\lstick{\ket{0}_3} &\qw & \gate{R_x(\theta_{3,0})} & \gate{R_y(\theta_{3,1})} & \gate{R_z(\theta_{3,2})} &\qw &\qw &\qw &\gate{R_x(\lambda_{3}d_3)} &\qw &\qw &\qw &\qw &\qw &\ctrl{-1} &\ctrl{-1} &\qw &\qw &\qw &\ghost{\mathbf{x}_1(\psi)} &\qw & \meter\\
}}
\caption{
\textbf{Illustration of a simple 4-qubit PQC architecture in the search space of EQAS-PQC.} This architecture, whose genome encoding is $1-2-3-0$, is composed of 4 operations: 1) Variational PQC ($\mathbf{x}_1$) performs rotations on each qubit according to parameters $\theta$; 2) Data-encoding PQC ($\mathbf{x}_2$) performs rotations on each qubit according to the input data $d$ and scaling parameter $\lambda$; 3) Entanglement ($\mathbf{x}_3$) performs circular entanglement on all the qubits; 4) Measurement ($\mathbf{x}_0$) adds another Variational PQC ($\mathbf{x}_1$) and performs a measurement to obtain the observable values.
}
\label{fig:pqc}
\end{figure*}
\section{Method}
We propose EQAS-PQC, an Evolutionary Quantum Architecture Search framework for
constructing quantum learning models based on some fundamental PQC circuits.
While the proposed framework can be generally applied to various learning
problems, in this work, we choose to focus on the challenging RL problems in
order to better illustrate the benefit of our method. In this section, we
describe the major components of EQAS-PQC including encoding scheme and search
process in detail.
\subsection{Encoding and Search Space} \label{sec:encode}
Biologically inspired methods such as genetic algorithms (GAs) have been
successfully used in many search and optimization problems for decades. In most
cases, GAs refer to a class of population-based computational paradigms, which
simulate the natural evolution process to evolve programs by using genetic
operations ({\em e.g.}, crossover and mutation) to optimize some pre-defined fitness or
objective functions. From this perspective, we may view the architectures of
quantum circuits as \textit{phenotypes}. Since the genetic operations usually
work with \textit{genotypes}, which are representations where the genetic
operations can be easily applied, we need to define an encoding scheme as the
interface for abstracting the architectures to genomes, where the genes are
different quantum operations.
The existing architectures of parameterized quantum
policies~\citep{jerbi2021parametrized} can be viewed as a composition of
functional quantum circuits that specify some computational schemes on a single
qubit or multiple qubits. In EQAS-PQC, we define four basic operation encodings
$\mathbf{x} = \{\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3\}$, and
the corresponding genes are represented as integers $\{0,1,2,3\}$. An
illustration of a simple PQC architecture in the search space of EQAS-PQC is
depicted in Fig.~\ref{fig:pqc}. More specifically, given a fixed $n$-qubit
state, we define the following operations:
\begin{itemize}
\item $\mathbf{x}_1$: Variational PQC - A circuit with single-qubit rotations
$R_x$, $R_y$, $R_z$ performed on each qubit, with the rotation angles as
trainable parameters.
\item $\mathbf{x}_2$: Data-encoding PQC - A circuit with single-qubit
rotations $R_x$ performed on each qubit, with the rotation angles being the input
data scaled by trainable parameters.
\item $\mathbf{x}_3$: Entanglement - A circuit that performs circular
entanglement to all the qubits by applying one or multiple controlled-Z gates.
\item $\mathbf{x}_0$: Measurement - A Variational PQC followed by measurement.
\end{itemize}
The outputs are computed by weighting the observables by another set of
trainable parameters for each output, with optional activation functions such as
\textsc{Softmax}. The architecture encoding/decoding terminates upon reaching
$\mathbf{x}_0$.
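The decoding rule above can be sketched in Python (a hypothetical illustration; the function and operation names are ours, not from the EQAS-PQC implementation):

```python
def decode_genome(genome):
    """Decode a genome (list of integer genes) into a list of operations.

    Genes 1, 2, 3 map to the Variational PQC, Data-encoding PQC and
    Entanglement operations; gene 0 (Measurement) terminates decoding,
    so any genes after the first 0 are ignored.
    """
    ops = {1: "variational", 2: "data_encoding", 3: "entanglement", 0: "measurement"}
    architecture = []
    for gene in genome:
        architecture.append(ops[gene])
        if gene == 0:  # Measurement ends the architecture
            break
    return architecture

# The example genome 1-2-3-0 from the figure:
print(decode_genome([1, 2, 3, 0]))
# -> ['variational', 'data_encoding', 'entanglement', 'measurement']
```

Note that any genes following the terminator are discarded, which is why distinct genomes can decode to the same architecture.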
It is easy to see that the search space of EQAS-PQC depends on the maximum
length of the genomes. Since decoding terminates upon reaching
$\mathbf{x}_0$, the same architecture can be decoded from
different genomes. The size of the search space is therefore the sum, over all
possible lengths up to the maximum, of the number of ways to fill the preceding
positions with operations other than $\mathbf{x}_0$. In other words, given a
maximum genome length $n$, the search space of EQAS-PQC is
\begin{equation}
\Omega_{\mathbf{x},n}=\sum_{i=1}^{n}(|\mathbf{x}|-1)^{i-1}
\end{equation}
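As a sanity check, the search-space size given by the equation above can be computed directly (a sketch; `num_ops` corresponds to $|\mathbf{x}|$):

```python
def search_space_size(num_ops, max_len):
    # A genome of decoded length i ends with the terminator x_0, so each
    # of the i - 1 preceding positions holds one of the (num_ops - 1)
    # non-terminating operations; sum over all lengths up to max_len.
    return sum((num_ops - 1) ** (i - 1) for i in range(1, max_len + 1))

# With the four EQAS-PQC operations and maximum length 5:
print(search_space_size(4, 5))  # 1 + 3 + 9 + 27 + 81 = 121
```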
\begin{table*}
\caption{\textbf{RL environment specifications and hyperparameters.} (*: The reward
function of MountainCar-v0 has been modified from the standard version as in
OpenAI Gym, following the practices in \citet{jerbi2021parametrized,
duan2016benchmarking}.)}
\label{tab:gym}
\centering
\begin{tabular}{cccccc}
\toprule
Environment & \# States/Qubits & Reward & Learning rates & $\gamma$ & Observables \\
\midrule
CartPole-v1 & $4$ & $+1$ & $0.01, 0.1, 0.1$ & $1.0$ & [$Z_0Z_1Z_2Z_3$]\\
MountainCar-v0 & $2$ & $-1+height^*$ & $0.01,0.1,0.01$ & $1.0$ & [$Z_0,Z_0Z_1,Z_1$]\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=.49\linewidth]{figures/cartpole.png}
\includegraphics[width=.49\linewidth]{figures/mountaincar.png}
\caption{\textbf{Learning performance of EQAS-PQC on benchmark RL
environments.} We plot the average learning curves (smoothed by a temporal
window of 10 episodes) over 10 randomly-initialized EQAS-PQC agents and
\textsc{Softmax}-PQC agents in two benchmark RL environments (CartPole-v1 and
MountainCar-v0) from OpenAI Gym. The shaded areas represent the standard
deviation of the average collected reward.}
\label{fig:rl}
\end{figure*}
\subsection{Search Process}
Similar to many other genetic algorithms, EQAS-PQC iteratively generates a
population of candidates (architectures) through genetic operations on the given
parents, and selects parents for the next generation based on fitness
evaluation. In this work, we adopt the Non-Dominated Sorting Genetic Algorithm
II (NSGA-II)~\citep{deb2000fast} to optimize the search process, with the
average collected rewards as the objective. NSGA-II has been successfully
employed in various single- and multi-objective optimization problems including
NAS~\citep{lu2019nsga}.
The goal of EQAS-PQC is to discover diverse sequential combinations of quantum
operators and optimize the process with respect to the objective. Towards this
goal, we elaborate on the following components of the search process:
\subsubsection*{Crossover.} \hspace{1em} We use the two-point crossover to
perform recombination of the parent architectures to generate new offspring.
This method randomly chooses two crossover points and swaps the genes between
them across the two parents, and has been widely used in search problems such as NAS. The intuition
is that sequential architectures for learning models usually require different
substructures for the beginning (input), middle (modeling), and ending (output)
of the architecture. From this perspective, the two-point crossover can
hopefully separate the three parts of the model and improve the architecture
through recombination.
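A minimal sketch of two-point crossover on integer genomes follows (assuming equal-length parents; the `cuts` argument is our addition, used to make the operation deterministic for illustration):

```python
import random

def two_point_crossover(parent_a, parent_b, cuts=None, rng=random):
    """Swap the segment between two cut points across the two parents."""
    assert len(parent_a) == len(parent_b)
    if cuts is None:  # choose two random cut points
        cuts = sorted(rng.sample(range(len(parent_a) + 1), 2))
    i, j = cuts
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

a, b = [1, 1, 1, 1, 1, 0], [3, 3, 3, 3, 3, 0]
print(two_point_crossover(a, b, cuts=(2, 4)))
# -> ([1, 1, 3, 3, 1, 0], [3, 3, 1, 1, 3, 0])
```

The two cut points partition each genome into beginning, middle, and end segments, matching the intuition described above.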
\subsubsection*{Mutation.} \hspace{1em} To enhance the diversity of architectures in the
population, we also add polynomial mutation, which has been widely used in
evolutionary optimization algorithms as a variation operator. Given the specific
encoding, the mutation is operated in integer space, and will allow the search
to potentially reach any possible genomes in the search space.
\subsubsection*{Duplicate elimination.} \hspace{1em} It is worth noting that, given the
proposed encoding scheme, some different genomes may be decoded to the same
architecture, {\em e.g.}, any operations after $\mathbf{x}_0$ do not change the
architecture. To maintain the diversity of population, we additionally eliminate
those duplicate architectures with different genomes.
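Since genomes that agree up to and including the first $\mathbf{x}_0$ decode to the same architecture, duplicate elimination can be sketched by comparing truncated genomes (our illustration, not the paper's implementation):

```python
def canonical(genome):
    """Truncate a genome at the first 0 (Measurement); genes after the
    terminator do not affect the decoded architecture."""
    out = []
    for gene in genome:
        out.append(gene)
        if gene == 0:
            break
    return tuple(out)

def eliminate_duplicates(population):
    # Keep only the first genome seen for each decoded architecture.
    seen, unique = set(), []
    for genome in population:
        key = canonical(genome)
        if key not in seen:
            seen.add(key)
            unique.append(genome)
    return unique

# [1, 2, 0, 3] and [1, 2, 0, 1] decode to the same architecture:
print(len(eliminate_duplicates([[1, 2, 0, 3], [1, 2, 0, 1], [3, 0]])))  # -> 2
```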
\subsubsection*{Fitness evaluation.} \hspace{1em} For each generation, we decode the
population to different architectures, and use the architectures to construct
\textsc{Softmax}-PQC~\citep{jerbi2021parametrized} policies for the RL agents.
While EQAS-PQC can be easily extended to optimize for multiple objectives, in
this work, we demonstrate by using the learning performance of the RL agents as
a single objective for fitness evaluation. The learning performance is computed
as the average episode reward to represent the area under the learning curve.
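The fitness of an architecture can then be sketched as the mean episode reward over a training run, which for a fixed number of episodes is proportional to the area under the learning curve (a simplified illustration of the objective described above):

```python
def fitness(episode_rewards):
    # Mean episode reward over the whole training run; with a fixed
    # episode count this is proportional to the area under the curve.
    return sum(episode_rewards) / len(episode_rewards)

# An agent that learns faster accumulates more reward overall:
print(fitness([0, 50, 100, 100]) > fitness([0, 0, 50, 100]))  # -> True
```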
\section{Experiments}
In this section, we describe the experimental setup and implementation details
of EQAS-PQC on the classical benchmark RL environments. We also show the
empirical results of our method compared to the prior work (\textsc{Softmax}-PQC
by \citet{jerbi2021parametrized}) to demonstrate the advantage of using EQAS-PQC
against commonly-used alternating-layer PQC architectures.
\subsection{RL Environments}
In this work, we consider two classical RL benchmark environments from the
OpenAI Gym~\citep{brockman2016openai}: CartPole and MountainCar. Both
environments have continuous state spaces and discrete action spaces, and have
been widely used in RL research, including prior works on quantum
RL~\citep{jerbi2021parametrized,skolik2021quantum}. The CartPole task is to
prevent the pole from falling over by controlling the cart. For MountainCar, the
goal is to drive up the mountain by driving back and forth to build up momentum.
More detailed description can be found in \citet{brockman2016openai}. The
specifications are presented in Table~\ref{tab:gym}, where the reward is the
step reward and $\gamma$ is the discount factor for future rewards.
\subsection{Implementation Details}
\subsubsection*{Search algorithm.} \hspace{1em} For both environments,
EQAS-PQC uses a population size of 20 and runs for 20 generations. The maximum
length of architecture is set to 30. We also scale the total number of episodes
by a factor of 0.8 in the search process to improve the efficiency of evolution.
The main search framework is implemented using the
\textit{pymoo} library~\citep{blank2020pymoo}.
\subsubsection*{RL training during search.} \hspace{1em} For each generated
PQC policy, we learn the policy for a single trial, and calculate the average
episode rewards as its learning performance. We set the hyperparameters such as
learning rates and observables following the general practice in
\citet{jerbi2021parametrized}, which are also summarized in Table~\ref{tab:gym}.
All the agents are trained using REINFORCE~\citep{williams1992simple}, which is
a basic Monte Carlo policy gradient algorithm. We additionally apply the
value-function baseline~\citep{greensmith2004variance} in MountainCar to
stabilize the Monte Carlo process, which has been commonly used in recent RL
methods~\citep{duan2016benchmarking}. The quantum circuits are implemented using
Cirq~\citep{hancock2019cirq} and the learning process is simulated using
TensorFlow~\citep{abadi2016tensorflow} and TensorFlow
Quantum~\citep{broughton2020tensorflow}.
\subsubsection*{Performance evaluation.} \hspace{1em} For the final results,
we take the best performing architecture for each environment and evaluate it
for 10 trials (500 episodes for CartPole and 1000 episodes for MountainCar). To
compare with prior work, we also evaluate the alternating-layer architecture
(\textsc{Softmax}-PQC) as used in \citet{jerbi2021parametrized}, which can be
viewed as a special case in the search space of EQAS-PQC.
\subsection{Results}
We evaluate the general performance of the proposed EQAS-PQC; the experimental
results are presented as follows. There are two goals for our experiment: 1) to
show that our method is able to find PQC architectures with better learning
performance as well as a similar computation cost to prior work; 2) to discover
the performance-critical design choices of PQCs in addition to the commonly-used
alternating-layer architecture. To illustrate the above points, we first apply
EQAS-PQC to two classical RL benchmark environments to obtain the best-performing
architectures, and then conduct two analyses of the resulting
architectures.
\subsubsection*{Learning Performance}\hspace{1em}
We evaluate and visualize the average learning performance over 10 trials of the
best-performing architecture searched by EQAS-PQC and the one used in
\textsc{Softmax}-PQC~\citep{jerbi2021parametrized}, as shown in
Fig.~\ref{fig:rl}. The corresponding genomes of the EQAS-PQC architectures for
the two RL environments are:
\begin{itemize}
\item CartPole: \\
$3-3-2-3-3-1-2-1-3-2-3-2-0$
\item MountainCar: \\
$3-1-2-3-1-2-2-2-3-2-1-1-1-3-2-0$
\end{itemize}
To ensure a fair comparison, for \textsc{Softmax}-PQC, we use the depth of 6,
resulting in an architecture of length 19, which is longer than the
architectures found by EQAS-PQC for both environments. Thus, we can conclude
that our method is able to find PQC architectures that significantly outperform
the standard alternating-layer PQC.
\begin{figure}
\centering
\includegraphics[width=.99\linewidth]{figures/prob.png}
\caption{Probability distribution of quantum operations in top-performing PQC
architectures. We select 20 top-performing architectures searched by EQAS-PQC
(10 for each RL environment), and calculate the probability distributions of
operations at each position in the architecture.}
\label{fig:prob}
\end{figure}
\subsubsection*{Probability Distribution of Quantum Operations}\hspace{1em} We
also want to understand why the architectures found by EQAS-PQC achieve better
performance. To illustrate this, we calculate the probability
distribution of all the encoded operations at each position in the architecture,
and visualize the results in Fig.~\ref{fig:prob}. The probabilities are smoothed by
a window of length 5 and the fitted lines are polynomials.
From the plot, we can first see that the Variational PQC has a similar frequency
as entanglement, which aligns with the design of alternating-layer PQC. However,
the frequency of Data-encoding PQC has an obvious decreasing trend, indicating
that it is better to have more Data-encoding PQCs at the beginning of the
architecture. This finding is intuitive and mirrors general
machine learning modeling practice, where data input usually comes at the
beginning of the model. Finally, the probability of Measurement does not
increase till the end of architecture, meaning that most of the optimal
architectures have a similar length of around 20. This also shows an advantage
of PQCs: shallow architectures with a small number of qubits are able to
handle challenging RL problems, as has also been demonstrated in prior
work~\citep{jerbi2021parametrized}.
\section{Conclusion and Future Work}
In this work, we propose EQAS-PQC, an evolutionary quantum architecture search
framework for parameterized quantum circuits. EQAS-PQC uses the population-based
genetic algorithm to evolve PQC architectures by exploring the search space of
quantum operations. Experimental results show that our method can significantly
improve the performance of hybrid quantum-classical systems in solving benchmark
reinforcement learning problems. In addition, we also analyze the probability
distributions of quantum operations in top-performing architectures, and
identify design choices that are essential to the performance of PQC-based
architectures.
One limitation of our work is that the experiments are conducted using a
simulation backend for quantum circuits. For future work, we expect to extend
our work to real quantum computers and to incorporate more objectives into the
evolutionary search, such as quantum noise and hardware efficiency.
\begin{acks}
This work is supported by the National Science Foundation under Grant No.
1617087. Any opinions, findings, and conclusions expressed in this publication
are those of the authors and do not necessarily reflect the views of the
National Science Foundation. The authors would like to thank Sofiene Jerbi for
providing implementation details for reproducing baseline results. The
authors would like to thank Edward Pantridge, Thomas Helmuth, Ryan Boldi,
and Anil Saini for their valuable comments and helpful suggestions.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction }
Future $e^{+}e^{-}$ colliders with center-of-mass energies at the TeV level will play a key role in understanding electroweak symmetry breaking and physics beyond the standard model~\cite{ILC, CLIC}.
To construct a detector that fulfils the physics requirements at a linear collider, pioneering prototypes equipped with new technologies have been constructed and large data sets have been collected in cosmic ray and test beam experiments~\cite{CALICEetal}.
Full detector simulation~\cite{Mokka, SIDSimu} and reconstruction algorithms~\cite{JCPaper, PFA} are being developed, and benchmark physics channels~\cite{LoI, SiDLoI} are being analyzed.
Druid (Display Root module Used for lInear collider Detector) was developed to support these studies.
Following the idea of the Particle Flow Algorithm~\cite{JCPaper, PFA}, ultra-high granularity calorimeters are employed in linear collider detector designs; by separating and measuring each jet particle in the best-suited sub-detector, a very good jet energy resolution can be achieved.
Nowadays, with the development of micro-electronics, the granularity of the calorimeters designed for linear colliders has increased dramatically compared to previous experiments.
For example, in the physics prototypes constructed by the CALICE collaboration, the silicon-tungsten electromagnetic calorimeter holds 10k channels in a cube with a side length of 20~cm~\cite{CALICEEcal}; this number of channels is about one eighth of that of the CMS electromagnetic calorimeter~\cite{CMSEcal}.
The prototypes of both the digital and the semi-digital hadron calorimeter have over half a million channels, a world record for the number of calorimeter channels in experimental physics~\cite{CALICEDHCAL, CALICESDHCAL}.
The ultra-high granularity calorimeter provides an unprecedented level of detail for the recorded showers, enabling new approaches for shower analysis and jet reconstruction algorithm development.
Besides the global event topology and detector geometry, Druid emphasizes high-precision and flexible display.
Event/shower details can be examined through the zoom and rotation options.
Many display options are defined in the Druid graphical user interface, providing easy access to different event information.
Several examples can be found in later sections.
In this paper we present the performance, the dependencies, the display objects and options of Druid.
At the end of this paper, we demonstrate two examples of using Druid to debug reconstruction algorithms.
\section{Druid dependencies: LCIO, GDML and TEve}
Druid serves as a bridge between the displayed TEve~\cite{MatevPaper} objects and the information stored in LCIO data files and GDML geometry description files.
TEve is a framework for object management, providing hierarchical data organization, object interaction, and visualization through a ROOT~\cite{root} Graphical User Interface.
It is intensively used in LHC event displays.
LCIO~\cite{lciohome} is the standard linear collider data format, while GDML~\cite{gdmlpage} is an XML-based geometry description format used to exchange geometry data between applications.
Druid has been optimized to have minimal dependencies: only ROOT (version 5.28.00 or higher) and LCIO are required.
Druid has been integrated into the ilcsoft~\cite{ilcsoft}.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{DruidNotePlots/CALICE_TB.eps}
\caption{40 GeV $\pi^{+}$ shower recorded at the CALICE test beam experiment~\cite{CALICEetal}. The experimental setup included the prototypes of an electromagnetic calorimeter (ECAL, with $1\times{1}~cm^{2}$ cells), a hadronic calorimeter (HCAL, with $3\times{3}~cm^{2}$ cells) and a tail catcher (TCMT, $5\times{100}~cm^{2}$ long strips). Their hits are displayed with corresponding sizes and colored by hit energy. This image also shows the misalignment between the ECAL and HCAL prototypes.}
\label{CALICETBevt}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.95\columnwidth]{DruidNotePlots/ttHCLIC.eps}}
\caption{One TeV ttH event at the Compact Linear Collider~\cite{CLIC} Silicon Detector. One such event weighs approximately 1~MB in the LCIO data format.}
\label{TTH}
\end{figure}
Druid has been intensively tested on fully simulated/reconstructed data from the International Large Detector~\cite{LoI} (ILD) and the Silicon Detector~\cite{SiDLoI} (SiD), and on CALICE~\cite{CALICEetal} test beam data, see Figs.~\ref{CALICETBevt} and~\ref{TTH}.
On a 2.8~GHz laptop, it takes about 5 seconds to launch Druid on any data file.
The time needed to display an event depends on its data size; for example, about 3 seconds are required to display a one-TeV ttH event at the Compact Linear Collider, such as the one in Fig.~\ref{TTH}.
\section{Event objects}
Event information is stored in different collections in a LCIO file.
These collections can be classified into MCTruth level, Digitization level and the reconstruction level.
Accordingly, Druid defines the TEve objects and groups them following the same hierarchy.
The correspondence is summarized in Table~\ref{tab1}.
\begin{table}
\caption{LCIO Collections versus corresponding TEve object}
\begin{center}
\begin{tabular}{ c | c | c }
\hline
Level & LCIO Collection & TEve Object \\
\hline
& MCParticle & TEveTrack \\
MCTruth & SimuCalorimeterHit & TEveBox \\
& SimTrackerHit & TEvePointSet \\
\hline
Digitization & CalorimeterHit & TEveBox \\
& TrackHit & TEvePointSet \\
\hline
& TrackAssignedHit & TEvePointSet \\
Reconstruction & Vertex & TEvePointSet \\
& ClusterHit & TEveBox \\
& ReconstructedParticle & TEveTrack \\
\hline
\end{tabular}
\end{center}
\label{tab1}
\end{table}
The TEveTracks corresponding to MCParticle and ReconstructedParticle collections are organized into groups according to their particle type, while low energy objects are grouped for easy masking.
For the detector hits collections, the TEveBoxs and TEvePointSets are divided into groups according to the subdetectors.
The size, color and style of the TEve objects can be set based on various information.
For example, TEveTracks can be colored by particle type, and TEveBoxes corresponding to CalorimeterHits can be colored by hit energy, time, or the type of particle that induced the hit, see Fig.~\ref{coloroption}.
For the detector geometry, the GDML file can be written by the simulation software~\cite{Mokka, SIDSimu}.
It records all the geometry information of the simulation: the size, the material, the orientation and the shape of every volume.
Druid displays each volume as a polyhedron whose color and transparency are determined by its material, allowing a detailed verification of the detector geometry.
The latest release of Druid includes the GDML files for the five most recent full detector concepts as well as the CALICE test beam prototypes.
The full detector geometry is usually too detailed for a display focused on the event information.
Two options are employed in Druid to reduce the workload of the geometry display.
First, any ``unwanted" volume can be hidden, see Fig.~\ref{Dismount}.
Secondly, Druid employs the ``display depth", a global parameter referring to the hierarchy of the geometry description in the GDML file, to interactively mask geometry details.
For users focusing on the event information, the lowest display depth (the default), at which the contours of the sub-detectors are displayed, is usually sufficient.
\begin{figure}
\centering
\includegraphics[width=0.55\columnwidth]{DruidNotePlots/TurnoffGeo.eps}
\hspace{0.1in}
\includegraphics[width=0.40\columnwidth]{DruidNotePlots/SidTracker.eps}
\caption{Controlling the level of detail of the geometry display with the hide/dismount option and the display depth: more details are displayed at a higher display depth. \textbf{Left:} International Large Detector with the yoke, the coil and part of the calorimeter dismounted, at the lowest display depth. \textbf{Right:} the tracking system of the Silicon Detector at the second lowest display depth.}
\label{Dismount}
\end{figure}
\section{Display options}
\subsection{Options inherited from TEve}
Druid inherits many display options from TEve with hot-key access, such as zoom, rotation, return to the original orientation and scale, as well as the black \& white background color switch.
To focus on the inner part of the display, a cut-away view can be used to remove part of the display.
One example is given in Fig.~\ref{ttb_cutview}, where one eighth of the detector is removed.
TEve allows text information to be attached to each displayed object.
When an object is picked with the cursor, the attached text is printed in the display, see Fig.~\ref{siduds}.
\begin{figure}
\centerline{\includegraphics[width=0.95\columnwidth]{DruidNotePlots/ttbar_Blackb3_time2.eps}}
\caption{500~GeV ttbar event at ILD: calorimeter hits are colored according to their deposit time}
\label{ttb_cutview}
\end{figure}
Besides these options, many interactive actions can be accessed at the ROOT GUI interface.
The interface is divided into three pages: the file page, the eve page and the Druid option panel.
The file page browses the file directory.
The second page browses all the generated TEve objects, displayed or hidden.
For any TEve object, its display/hide status can be switched individually or by groups.
Druid remembers the status of display/hide by the name of the collections when navigating to a new event.
Options such as changing the illumination settings, tuning the geometry display depth, and setting the reference point/frame are also available on the second page.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{DruidNotePlots/Text.eps}
\caption{A simulated $\tau$ jet ($\tau^{-} \rightarrow e^{-} + \bar{\nu_{e}} + \nu_{\tau} $) from a 500~GeV $\tau\tau$ event at ILD. The calorimeter hits are colored according to their deposit time. The text attached to the $\bar{\nu_{e}}$ has been printed on the display, showing basic information such as its particle type and 4-momentum. The option panel is shown in the left part of this image.}
\label{siduds}
\end{figure}
\subsection{Options defined in Druid}
The third page includes the options defined by Druid, for example event navigation, object color/size settings and the adjustment of cut parameters.
There are also several options using buttons:
(1). Select rotation center.
(2). Regenerate the color: generate another color according to the object index.
Here the index means the order of a given object in its collection, for example the clusters.
(3). Switch between scenarios: the minimal scenario which ignores all the intermediate reconstructed objects such as digitized hits, tracks and clusters and the maximal scenario that displays every possible collection.
To give the fastest performance, the minimal scenario is set as the default.
To accelerate the speed of event display as well as to focus on interesting physics information, Druid defines several cuts with interactively adjustable parameters, for example the cut on the minimal transverse momentum on the MCParticle list and the cut on the energy of calorimeter hits.
As discussed in the introduction, the display of calorimeter hits is of special importance for linear collider event displays.
Three different types of calorimeter hits are defined in LCIO: the simulated, digitized/recorded and clustered calorimeter hits.
The default size and orientation of calorimeter hits are set corresponding to different detector geometry concepts.
The hit size can be changed independently for each type and the hit color can be specified according to different information.
The simulated hits can be colored by the energy, by the type or index of the particle that induces the hit, by the deposit time, or with a uniform color.
Fig.~\ref{coloroption} shows a simulated $\tau$ jet displayed with different color options.
The digitized calorimeter hits are colored by energy, and the clustered hits are colored according to the index of the cluster.
A global factor can be adjusted to scale the calorimeter hit color when it is colored by hit energy or deposit time.
Once colored by index, the color can be regenerated to give a better separation of nearby hits induced by the same kind of particle, see Fig.~\ref{coloroption}(c).
Once a cut or a hit size/color configuration has been adjusted, the name and statistics of the affected collections are printed on the terminal.
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{DruidNotePlots/ColorOptionSmall.eps}}
\caption{Simulated $\tau$ jet ($\tau \rightarrow \nu + \pi^{0} + \pi^{+}$) at the calorimeter. Hit color are defined according to:
(a), Type of particle that induces the hit; (b), Energy of the hit; (c), Index of particle; (d), Deposit time of the hit.
}
\label{coloroption}
\end{figure}
\section{Example application: debugging reconstruction code}
One of the most important applications for Druid is the debugging of reconstruction code.
Here we demonstrate two examples with PandoraPFA, the most successful Particle Flow reconstruction algorithm developed for linear colliders.
The first example is the reconstruction of a $\tau$ jet: we recall the same $\tau$ jet as in Fig.~\ref{coloroption}.
Fig.~\ref{Reccompare}(a) shows the reconstructed particles and their corresponding clusters: two photons and one pion have been reconstructed.
The cluster hits are displayed as cubes with 5~mm side length.
In Fig.~\ref{Reccompare}(b), Druid overlays the reconstructed objects with MCTruth objects: SimCalorimeterHits and MCParticle.
The blue straight line indicates a neutrino generated from $\tau$ decay.
The SimCalorimeterHits are colored according to the particle type, red for pions and yellow for photons.
Most of the SimCalorimeterHits have been attached to the reconstructed particles.
Therefore, for this $\tau$ jet, the output of the reconstruction algorithm agrees with the MCTruth information.
By reading the attached text information, a further comparison of the reconstructed and MCTruth energies for each particle is possible.
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{DruidNotePlots/TauCompare1.eps}}
\caption{A $\tau$ jet ($\tau \rightarrow \nu + \pi^{0} + \pi^{+}$) reconstructed with PandoraPFA.
(a) Reconstructed objects: ReconstructedParticle and corresponding cluster.
(b) MCTruth and reconstructed objects: SimCalorimeterHit, MCParticle, ReconstructedParticle and its cluster.
}
\label{Reccompare}
\end{figure}
The second example is a 100~GeV $\pi$ shower.
Fig.~\ref{PionReccompare}(a) shows the simulated detector hits.
The $\pi$ hits the calorimeter and creates a hadronic cluster composed of two electromagnetic sub-clusters and several sail-through tracks, as well as a separate small cluster deposited by a backscattered charged particle.
The reconstructed objects are shown in Fig.~\ref{PionReccompare}(b), where PandoraPFA divided these hits into four clusters:
the leading cluster is associated with the track and reconstructed as a charged particle with energy equal to the track momentum (the total energy).
The remaining three clusters are reconstructed as nearby neutral particles, creating a significant amount of double-counted energy.
In fact, Fig.~\ref{PionReccompare}(b) shows a typical case of Particle Flow Algorithm double counting.
Reducing this kind of confusion is the main challenge for Particle Flow Algorithm development.
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{DruidNotePlots/PionCompare.eps}}
\caption{A $\pi$ shower reconstructed with PandoraPFA.
(a) MCTruth objects: SimCalorimeterHits and SimTrackerHits.
(b) Reconstructed object: ReconstructedParticles and corresponding clusters.
}
\label{PionReccompare}
\end{figure}
\section{Summary}
Druid, a dedicated event display for linear colliders, has been developed.
For the event data, Druid not only displays the global event topology but also provides close views of event/shower details, with options to emphasize different information.
Reading GDML files written by the simulation software, Druid can display the detailed simulated detector geometry, with practical options to control the level of detail and to browse the geometry.
It has been heavily used in geometry verification, data analysis and reconstruction algorithm development.
\acknowledgments
We are grateful to Norman Graf for the suggestion of using GDML files; to Henri Videau for his suggestions on the display style settings; and to Jean-Claude Brient for his continuous support of this project.
Special thanks go to Matevz Tadel and Alja Tadel for all their support and discussions.
The research leading to these results has received funding from the European Commission under the FP7 Research Infrastructures project AIDA, grant agreement no. 262025.
\section{Capture components} \label{capture component} \par
The following two lemmas are preliminary results about the symmetry of the parameters. These are used later to prove the main results of this article. \\
\begin{lem}
For any $ \lambda $ in a hyperbolic component the attracting (super-attracting) cycles of $ f_\lambda $ and $ f_{\bar {\lambda}} $ are complex conjugates; so are their multipliers (trivial for the super-attracting case). \\
\end{lem}
\begin{proof}
For any $ f_\lambda$ in $ \mathcal F$ we have
\begin{align*}
\overline{f_\lambda^k (\lambda i )}
&= f_{\bar{\lambda}}^k(\overline{\lambda i }) \\
&= f_{\bar{\lambda}}^k(-i \bar{\lambda}) \\
&= f_{\bar{\lambda}}^k(i \bar{\lambda}) \ \text{for} \ k \in Z^{+}
\end{align*}
Therefore the orbits of the asymptotic values of $ f_\lambda $ and $ f_{\bar {\lambda}} $ are conjugates. \\
Suppose $ \lambda $ is in a hyperbolic shell component and suppose $ z_p $ is a periodic point of $ f_\lambda $ with period $p.$ Then $ \bar z_p $ is a periodic point of $ f_{\bar \lambda} .$ Their multipliers are $ \rho(\lambda) = [f_\lambda^p]'( z_p) $ and $\rho(\bar \lambda) = [f_{\bar{\lambda}}^p]' (\bar{z_p}) .$ Thus it is clear that $ \rho(\bar \lambda) = \overline {\rho(\lambda)}. $
\end{proof}
\vspace{0.5 cm}
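The conjugation symmetry in this lemma is easy to test numerically. The following sketch (an illustration only; the sample parameter is an arbitrary assumption) iterates $ f_\lambda(z) = \lambda \tan z^2 $ on the asymptotic value $ \lambda i $ and checks that the orbit for $ \bar\lambda $ is the complex conjugate:

```python
import cmath

def f(lam, z):
    # one step of the family f_lambda(z) = lambda * tan(z^2)
    return lam * cmath.tan(z * z)

lam = 0.3 + 0.2j          # arbitrary sample parameter
z = lam * 1j              # asymptotic value lambda * i of f_lambda
w = lam.conjugate() * 1j  # asymptotic value of f_{conj(lambda)} (f is even)
for _ in range(6):
    z = f(lam, z)
    w = f(lam.conjugate(), w)
    # the two orbits are complex conjugates of each other
    assert abs(z.conjugate() - w) < 1e-9
```

The assertion holds because $ \tan $ has real Taylor coefficients, so $ \overline{\lambda \tan z^2} = \bar\lambda \tan \bar z^2 $.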
\begin{lem}
Suppose $ \lambda $ is in a hyperbolic component and $ \{ z_p \}, \ p \geq 1, $ is an attracting periodic cycle of $ f_\lambda . $ Then $ f_{-\lambda} $ and $ f_{\pm \lambda i} $ have attracting periodic cycles $ \{- z_p \} $ and $ \{\mp z_p i \} $ respectively. Moreover, these cycles arise from $ \{ z_p \} $ by the linear conjugacies $ z \mapsto - z $ and $ z \mapsto \mp i z ,$ so their multipliers all equal the multiplier $ \rho(\lambda) $ of the attracting cycle of $ f_\lambda . $ \\
\end{lem}
\begin{proof}
Let $ z_0 $ be a periodic point of $ f_\lambda $ of period $p \geq 1.$ Then $ f_{-\lambda}^p (- z_0) = - f_\lambda^p (z_0) = - z_0 . $ Suppose that there exists an integer $ q < p $ such that $ f_{-\lambda}^q (- z_0) = - z_0 .$ Then $ - f_{\lambda}^q ( z_0) = - z_0 ,$ so that $ f_{\lambda}^q ( z_0) = z_0 ,$ contradicting the hypothesis. \\
Now we prove that $ f_{\pm \lambda i} $ has the attracting periodic cycle $ \{\mp z_p i \}.$ Since $ f_{\lambda i} ( - i z ) = \lambda i \tan ( - z^2 ) = - i f_\lambda (z) $ for all $ z ,$ induction gives $ f_{ \lambda i}^p ( - z_0 i ) = - i f_\lambda^p (z_0) = - z_0 i $ for $ p \geq 1.$ Suppose that there exists an integer $ q < p $ such that $ f_{ \lambda i}^q (- z_0 i ) = - z_0 i .$ Then $ - i f_{ \lambda }^q ( z_0 ) = - z_0 i $ implies that $ f_{ \lambda }^q ( z_0 ) = z_0 $, contradicting the hypothesis. The proof for $ f_{-\lambda i} $ follows similarly. For the multiplier map, note that $ f_{-\lambda}'(-z) = 2(-\lambda)(-z) \sec^2 z^2 = f_\lambda'(z) $ and $ f_{\pm \lambda i}'(\mp z i) = 2(\pm \lambda i)(\mp z i) \sec^2 (-z^2) = f_\lambda'(z) ,$ so evaluating along the conjugated cycles gives
\begin{align*}
&\rho(-\lambda) = \prod_{i=0}^{p-1}f_{-\lambda}'(-z_i) = \prod_{i=0}^{p-1}f_{\lambda}'(z_i) = \rho(\lambda), \\
& \rho(\pm \lambda i) = \prod_{i=0}^{p-1}f_{\pm \lambda i}'(\mp z_i i) = \prod_{i=0}^{p-1} f_{\lambda }'(z_i) = \rho(\lambda).
\end{align*}
\end{proof}
\vspace{0.5 cm}
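The functional equations behind this symmetry, $ f_{-\lambda}(-z) = -f_\lambda(z) $ and $ f_{\pm\lambda i}(\mp i z) = \mp i f_\lambda(z) ,$ can be spot-checked numerically at arbitrary points (a sketch; the sample parameter and points are assumptions, not values from the text):

```python
import cmath

def f(lam, z):
    # f_lambda(z) = lambda * tan(z^2)
    return lam * cmath.tan(z * z)

lam = 0.7 - 0.4j  # arbitrary sample parameter
for z in (0.5 + 0.3j, -1.1 + 0.2j, 0.9j):
    # f_{-lam}(-z) = -f_lam(z)
    assert abs(f(-lam, -z) + f(lam, z)) < 1e-9
    # f_{i lam}(-i z) = -i f_lam(z)
    assert abs(f(1j * lam, -1j * z) + 1j * f(lam, z)) < 1e-9
    # f_{-i lam}(i z) = i f_lam(z)
    assert abs(f(-1j * lam, 1j * z) - 1j * f(lam, z)) < 1e-9
```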
\begin{prop}
Let $ \mathcal B_n = \{ \lambda \in \mathbb C^* : f_\lambda^n ( \lambda i ) = 0 \} $ and $ \mathcal B = \bigcup_{n=1}^{\infty} \mathcal B_n . $ Then $ \mathcal B $ is the set of pre-zeros of the maps in $\mathcal F$. \label{coding prezeros}
\end{prop}
\vspace{0.5 cm}
\begin{proof}
For $n = 1$ set $ \mathcal B_1 = \{ c_{1_k} = \sqrt{k \pi} : k = \pm 1, \pm 2, \ldots \} ,$ so that $ \pm \mathcal B_1 $ contains all pre-images of 0. For $ n= 2 $ set $ \mathcal B_2 = \{ \lambda : f_\lambda^2 ( \pm \lambda i) = 0 \} ,$ so that $ f_\lambda ( \pm \lambda i ) = \pm \sqrt{k\pi } \ $ for some $k \in \mathbb Z^*.$ Suppose that $f_{\lambda}(\lambda i)=p_k = \sqrt{k \pi}.$ Then $ \lambda $ can be determined by solving $ f_\lambda ( \pm \lambda i ) = \pm \sqrt{k\pi }, \ k \in \mathbb Z^* .$ The numerical solution $ \lambda = (x,y) $ can be obtained by iterating $ \lambda_{m+1} = \phi (\lambda_m) = ( \phi_1 ( \lambda_m ) , \phi_2 (\lambda_m) ) ,$ where $ \phi$ is defined below. From $ \lambda_{m+1} = \pm \sqrt { \arctan \frac{ p_k}{\lambda_m} } $ we get the following formulas. \\
Let $ X_{k,j} = \frac {1}{2} \arctan \frac { -2x p_k}{x^2 + y^2 - p_k^2 } + j \pi , \ j \in \mathbb Z, \ \ Y_k = \frac {1}{2} \ln \frac { \sqrt { 4x^2y^2 + ( p_k^2 + x^2 - y^2 ) ^2 } }{ x^2 + ( p_k + y)^2 } . $ \\
From above we have the following:\\
$ \phi_{1,j} ( \lambda_m) = \sqrt { \frac { X_{k,j} + \sqrt { X_{k,j}^2 + Y_k^2 } } {2} }, \ \ \ \phi_{2,j} ( \lambda_m ) = \frac { Y_k} { |Y_k|} \sqrt { \frac { - X_{k,j} + \sqrt { X_{k,j}^2 + Y_k^2 } } {2} } . $ \\
We have introduced the index $j$ to indicate the branch of the solution of the arctangent. We get $ \mathcal B_2 = \{ \lambda_{k,j} \in \mathbb C \ | \ \lambda_{k,j} $ is a solution of $ f_\lambda ( \pm \lambda i ) = \pm p_k, \ k \in \mathbb Z^* \} .$ As $ k \rightarrow \infty $ we have $ f_\lambda ( \pm \lambda i ) \rightarrow \infty ,$ which implies that the solutions satisfy $ \lambda_{k,j} \rightarrow s_j = \sqrt {\frac {( 2j + 1) \pi } { 2 } }, \ j \in \mathbb Z ,$ as $ k \rightarrow \infty .$ \\
Similarly $ \mathcal B_3 $ consists of all solutions $ \lambda $ of $ f_\lambda^2 ( \pm \lambda i ) = \pm p_ k, \ k \in \mathbb Z^* ,$ and for each such solution one more index is introduced. In general any point in $ \mathcal B_p $ can be coded as $ \lambda_{k, j_1, j_2, \ldots, j_{p-1}} ,$ where the indices are determined by the branches of the solution in all intermediate steps. The point $ \lambda_{k, j_1, j_2, \ldots, j_{p-1}} $ lies in a neighborhood of $ s_{k, j_1, j_2, \ldots, j_{p-1}} $ for large enough $ j_{p-1} ,$ where $ s_{k, j_1, j_2, \ldots, j_{p-1}} $ is a pre-pole of order $ p .$ Thus each point in $ \mathcal B_p $ can be indexed in a suitable way to recognize the pre-zeros of this family of maps.
\end{proof}
\vspace{0.5 cm}
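A concrete way to locate one of these pre-zeros numerically: a real parameter in $ \mathcal B_2 $ satisfies $ f_\lambda(\lambda i) = -\lambda\tan\lambda^2 = p_1 = \sqrt\pi $ with $ \lambda^2 \in (\pi/2, \pi) ,$ and can be found by bisection. This is a sketch of one real solution only, not the full indexed family; the bracket endpoints are assumptions.

```python
import math

def step(lam):
    # f_lambda(lambda * i) = lambda * tan(-lambda^2) = -lambda * tan(lambda^2)
    return -lam * math.tan(lam * lam)

p1 = math.sqrt(math.pi)  # target: a point of B_1, so that f_lambda^2(lambda i) = 0
a, b = 1.26, 1.77        # bracket with lambda^2 in (pi/2, pi); step(a) > p1 > step(b)
assert step(a) > p1 > step(b)
for _ in range(80):      # bisection on step(lam) - p1
    m = 0.5 * (a + b)
    if step(m) > p1:
        a = m
    else:
        b = m
lam = 0.5 * (a + b)
assert abs(step(lam) - p1) < 1e-8                   # f_lambda(lambda i) = sqrt(pi)
assert abs(lam * math.tan(step(lam) ** 2)) < 1e-6   # f_lambda^2(lambda i) ~ 0
```

Bisection works here because $ -\lambda\tan\lambda^2 $ is continuous on the bracket (no pole of $ \tan $ is crossed) and changes side of $ p_1 $ at the endpoints.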
There is a unique point in each capture component such that the corresponding asymptotic values of $ f_\lambda $ are mapped to the origin by finitely many iterations of $ f_\lambda $. These points are called the \textit{centers} of the components. The following proposition characterizes the centers of the capture components; the result can be used to index the capture components. \\
\begin{prop}
For each $n$ and each $ c_{n_k} \in \mathcal B_n, \ k \in \mathbb Z, \ $ there is a capture component $ \mathcal C_{n_k} $ containing $ c_{n_k} ,$ where $ f_{c_{n_k}}^n (c_{n_k} i ) = 0 . $ The point $ c_{n_k} $ is called the center of the component $ \mathcal C_{n_k}. $ \label{code cap com}
\end{prop}
\vspace{0.5 cm}
\begin{proof}
At each point $ c_{1_k} \in \pm \mathcal B_1 $ we have $ f_{c_{1_k}} (c_{1_k} i ) = 0 , $ so $ f_{c_{1_k}} $ has only one super-attracting periodic cycle and the asymptotic values $ \pm \lambda i = \pm (c_{1_k} i) $ are pre-periodic. The origin is a super-attracting fixed point for every $ \lambda \in \mathbb C^* .$ For $ \lambda $ in a hyperbolic component in the parameter space, the forward orbit of the asymptotic values must be in the stable set (Fatou set). Using quasi-conformal conjugacy and the B\"ottcher map, there is an open set $ U $ such that for all $ \lambda' \in U, \ \lambda' \neq \lambda, \ f_{\lambda' } $ is quasi-conformally conjugate to $ f_\lambda . $ Therefore the dynamical behavior of $ f_{\lambda'} $ coincides with the dynamical behavior of $ f_\lambda $ on their Julia sets. Let $ \mathcal C_{1_k} $ be the largest neighborhood of $ c_{1_k} $ to which this quasi-conformal conjugacy can be extended. Then $ \mathcal C_{1_k} $ is a capture component with center at $ c_{1_k} .$ We see in Theorem~\ref{cap sim con} that $ c_{1_k } $ is the unique point in $ \mathcal C_{1_k} $ such that $ c_{1_k} i $ is pre-periodic. As $ k \rightarrow \infty $ the points of $ \mathcal B_1 $ tend to $ \infty $ along the real and imaginary axes. \\
Using the indexing of the pre-zeros from Proposition~\ref{coding prezeros} we get that $ \lambda_{k, j_1, j_2 \ldots j_{p-1}} $ is the solution of the equation $ f_{\lambda}^{ p} (\lambda i ) = 0 $ [or $ f_{\lambda}^{ p} ( - \lambda i ) = 0] $ with $ \lambda = c_{k, j_1, j_2 \ldots j_{p-1}} .$ Then $ c_{k, j_1, j_2 \ldots j_{p-1}} = \lambda_{k, j_1,j_2 \ldots j_{p-1}} $ is the center of the capture component $ \mathcal C_{k, j_1,j_2 \ldots j_{p-1}} .$
\end{proof}
\vspace{0.5 cm}
Using the indices that refer to the branches of the arctangent, one can give a coding of the capture components based on how many iterations $f_\lambda$ takes to map the asymptotic values to the immediate basin of zero. Using Proposition~\ref {code cap com} we can give a more precise definition of a capture component as follows. \\
\begin{defn}
The \textit{capture components} of depth $ i \geq 1 $ are the connected components $ \mathcal C_{ n_1, n_2, \ldots, n_i} $ of $ \mathcal C ,$ where $ \mathcal C_{ n_1, n_2, \ldots, n_i}= \{ \lambda \in \mathbb C : f_\lambda^i ( \pm \lambda i) \in \mathcal A_\lambda^0(0) $ and $ f_\lambda^{i-1} ( \pm \lambda i) \notin \mathcal A_\lambda^0(0) \} $ and $ \mathcal A_\lambda^0(0) $ is the immediate attracting basin of zero. The indices $ n_1, \ldots, n_i $ indicate the inverse branches of the arctangent that map 0 back to $ \pm \lambda i .$ The component $ \mathcal C_0 ,$ which contains the origin, is the only capture component of depth zero. \\
\end{defn}
\begin{lem}
Let $ \mathcal C_k $ be a capture component containing $ \lambda \in \mathcal B_k , \ k \in \mathbb N .$ Let $ \mathcal A_\lambda^0(0) $ be the immediate attracting basin of zero corresponding to $ f_\lambda . $ Then for all $ \lambda \in \mathcal C_k , \ f_\lambda^{k} ( \lambda i ) \in \mathcal A_\lambda^0(0) .$ \\
\end{lem}
\begin{proof}
Let $ \lambda_0 $ be the center of $ \mathcal C_k ,$ so that $ f_{\lambda_0}^k ( \lambda_0 i ) = 0 .$ Let us define $ g( \lambda ) = f_\lambda^k ( \lambda i ), \ \lambda \in \mathcal C_k . $ Then $ g(\lambda) $ is a well-defined holomorphic map from $ \mathcal C_k $ to $ \bigcup_{ \lambda \in \mathcal C_k } \mathcal A_\lambda^0(0) . $
Thus $ g( \mathcal C_k ) $ is a connected set in $ \bigcup_{ \lambda \in \mathcal C_k } \mathcal A_\lambda^0(0) $ containing 0. But the only connected component of $ \bigcup_{ \lambda \in \mathcal C_k } \mathcal A_\lambda^0(0) $ containing 0 is the immediate attracting basin of 0 of $ f_{\lambda_0}.$ Therefore $ f_\lambda^{ k } (\lambda i) $ is in the immediate basin of 0 for all $ \lambda \in \mathcal C_k. $
\end{proof}
\vspace{0.5 cm}
The connectivity of the capture components is proved in the following theorem. We use the technique of {\em quasi-conformal surgery} introduced by Douady \cite{branner1988surgery_63}; see Branner--Fagella \cite{branner2014quasiconformal_64} for a full discussion of quasi-conformal surgery.\\
\begin{thm}
Any capture component in $ \mathcal C $ is simply connected. \label{cap sim con} \\
\end{thm}
\begin{proof}
Let $ \lambda_0 \in \mathcal C $ so that $ f^k_{\lambda_0} ( \lambda_0 i) = 0 $. Let $ \psi_{\lambda_0} $ be the B\"ottcher map $\psi_{\lambda_0}: \mathcal A_{\lambda_0}^0(0) \rightarrow \mathbb D $ which conjugates $ f_{\lambda_0}$ on the immediate basin of 0 to the map $ z \mapsto z^2 $ in $ \mathbb D. $ Then $ \lambda \mapsto \psi_{\lambda} (f_{\lambda}^{ k}(\lambda i)) $ is a holomorphic map from a neighborhood $U$ of $ \lambda_0 $ to a neighborhood of the origin for some $k.$
Define the map $ \Phi : U \rightarrow \mathbb D $ by $ \Phi ( \lambda ) = \psi_\lambda ( f_\lambda^{{k+1} } ( \lambda i )) ,$ where $ \psi_\lambda $ is the B\"ottcher map conjugating $ f_\lambda $ near 0 to the map $ z \mapsto z^2 $ in a neighborhood of the origin. We claim that $ \Phi $ is a proper map and a local homeomorphism. Let $ \lambda_0 \in U $ and set $ z_0 = \Phi ( \lambda_0 ) .$ The idea of the surgery construction is the following: for any point $z$ near $ z_0 ,$ we can build a map $ f_{\lambda(z)} $ such that $ \Phi(\lambda(z)) = z ;$ in other words, we can find a local inverse of $ \Phi $. This proves that the component containing the center is open. \\
Let $ \mathcal A_{\lambda_0} (0) = f^{-k}_{\lambda_0} ( \mathcal A_{\lambda_0}^0 (0) ) $ and let $W_{\lambda_0}$ be the connected component of $ \mathcal A_{\lambda_0} (0) $ containing $ f_{\lambda_0}^k ( \lambda_0 i ) .$ Let $ V_{\lambda_0} $ be any small neighborhood of $ f_{\lambda_0}^{k+1} ( \lambda_0 i ) $ contained in $ \mathcal A_{\lambda_0}^0 (0) ,$ and let $ B_{\lambda_0} \subset W_{\lambda_0} $ be the pre-image of $ V_{\lambda_0} $ containing $ f_{\lambda_0}^{ {k }} ( \lambda_0 i ) ;$ note that by continuity it is the pre-image given by the itinerary of the center $\lambda_0$. For any $ 0< \epsilon < \min \{ |z_0| , 1-|z_0| \} $ we consider $ D ( z_0, \epsilon ),$ the disk of radius $ \epsilon $ centered at $ z_0.$ For any $ z \in D ( z_0, \epsilon ) , $ choose a diffeomorphism $ \delta_z : B_{\lambda_0} \rightarrow V_{\lambda_0} $ with the following properties: \\
i) $ \delta_{z_0} = f_{\lambda_0} ; $ \\
ii) $ \delta_z $ coincides with $ f_{\lambda_0} $ in a neighborhood of $ \partial B_{\lambda_0} $ for any $z$; \\
iii) $ \delta_z ( f^{{k}}_{\lambda_0} ( \lambda_0 i)) = \psi^{-1}_{\lambda_0} (z) ; $ \\
We consider the following mapping for any $ z \in D(z_0, \epsilon )$: \\
$ G_z : \mathbb C \rightarrow \mathbb C : $
$$
G_z ( w ) = \left\{
\begin{array}{ll}
\delta_z (w) & \text{if } w \in B_{\lambda_0}, \\
f_{\lambda_0} (w) & \text{if } w \notin B_{\lambda_0}.
\end{array}
\right.
$$
We construct an invariant almost complex structure $ \sigma_z $ with bounded dilatation ratio. Let $ \sigma_0 $ be the standard complex structure of $ \mathbb C . $ We define a new almost complex structure $ \sigma_z $ in $ \mathbb C $ by
$$
\sigma_z ( w ) = \left\{
\begin{array}{lll}
(\delta_z)^* \sigma_0 & \text{ on } \ B_{\lambda_0} \\
(f_{\lambda_0}^{k})^ * \sigma_z & \text { on } \ f_{\lambda_0}^{-k} ( B_{\lambda_0} ), \ \ \forall k \geq 1 \text { (where defined)} \\
\sigma_0 & \text { on } \ \mathbb C \setminus \{ \cup_{k\geq 1 } f_{\lambda_0}^{-k}( B_{\lambda_0}) \cup
B_{\lambda_0}\}
\end{array}
\right.
$$
By construction $ \sigma_z $ is $ G_z $-invariant, i.e.\ $ ( G_z )^* \sigma_z = \sigma_z ,$ and $ \sigma_z $ has bounded distortion since $ \delta_z $ is a diffeomorphism and $ f_{\lambda_0 } $ is holomorphic in the attracting basin. Applying the Measurable Riemann Mapping Theorem we obtain a quasi-conformal map $ \phi_z : \mathbb C \rightarrow \mathbb C $ that integrates the complex structure $ \sigma_z ,$ i.e.\ $ (\phi_z)^* \sigma_0 = \sigma_z .$ The map $ \phi_z $ is uniquely determined up to an affine transformation; therefore it can be determined by what it does to two points. We assume $ \phi_z $ fixes the origin and maps the two asymptotic values to a pair of points symmetric with respect to the origin. Then the map $ F_z = \phi_z \circ G_z \circ {\phi_z}^{-1} $ is meromorphic with 0 as a fixed critical point of multiplicity two. $F_z$ respects the dynamics: it has a super-attracting periodic cycle. Moreover $ F_z $ is quasi-conformally conjugate to $ G_z $ in the respective basins of attraction and is conformally conjugate to $ G_z $ everywhere else. Then $ F_z $ is a meromorphic map of the form $ \nu \tan (a_2 z^2 + a_0 )$ for some $ a_2, \ a_0 \in \mathbb C, \ a_2 \neq 0 $ \cite{fagella2017dynamics_57}. Doing a suitable change of variable and composing with an affine transformation, if necessary, we can get a $ \lambda(z) \in \mathbb C^* $ such that $ f_{\lambda(z) } = ( c \phi_z) \circ G_z \circ ( c\phi_z)^{-1} $ for a suitable constant $ c .$ \\
By construction $ \phi_{z_0} $ is the identity map. Therefore there exists a continuous map $ \lambda : D( z_0, \epsilon ) \rightarrow U , \ z \mapsto \lambda (z) ,$ such that $ \lambda(z_0) = \lambda_0 $ and $ f_{\lambda(z)} = \phi_z \circ G_{z} \circ {\phi_z}^{-1} .$ Moreover $ \phi_z $ is holomorphic on $ \mathcal A_{\lambda_0}^0 (0) ,$ conjugating $ f_{\lambda_0} $ to $ f_{\lambda(z)} .$ Hence from the following commutative diagram: \\
$$ \begin{CD}
{\mathcal A_{\lambda(z)}^0(0) }@< \phi_z << {\mathcal A_{\lambda_0 }^0 (0)} @> \psi_{\lambda_0} >> { \mathbb D } \\
@VV f_{\lambda(z)} V @VVf_{\lambda_0} V @VV z \mapsto z^2 V\\
{\mathcal A_{\lambda(z)}^0(0) } @< \phi_z<< {\mathcal A_{\lambda_0 }^0(0)}@>\psi_{\lambda_0}>> { \mathbb D }
\end{CD}$$ \\
we have that $ \psi_{\lambda(z)} = \psi_{\lambda_0} \circ \phi_z^{-1} $ is the B\"ottcher Coordinate of $ \mathcal A_{\lambda(z)}^0(0) .$ \\
Finally we conclude that $ \Phi ( \lambda (z) ) = \psi_{\lambda(z)} ( f_{\lambda(z)}^{\circ (k+1)} (\lambda(z) i )) = z, \ $ since $ f_{\lambda(z)}^{\circ (k+1)} (\lambda(z) i ) = \phi_z \circ G^{\circ (k+1) }_z \circ \phi^{-1}_z (\lambda(z) i) = \phi_z \circ G_z ( f_{\lambda_0}^{\circ k} (\lambda_0 i ) ) = \phi_z \circ \psi_{\lambda_0}^{-1} (z) = \phi_z \circ \phi^{-1}_z \circ \psi_{\lambda(z)}^{-1} (z) = \psi_{\lambda(z)}^{-1} (z) . $ \\
By the Riemann--Hurwitz formula, $ \Phi $ is a covering map of degree one. Therefore $ \Phi^{-1} ( z ) $ is a compact set and $ \Phi : \mathcal C_k \to \mathbb D $ is a proper map. This completes the proof.
\end{proof}
\vspace{0.5 cm}
\begin{lem}
Assume $ \lambda \in \mathbb R $ or $ \lambda \in \Im ,$ and $ |\lambda| < \sqrt {\frac{\pi}{4}} .$ Then $ \lambda \in \mathcal C_0. $ \\
\end{lem}
\begin{proof}
We will prove the lemma when $ \lambda $ is on the positive real axis; for the imaginary axis the proof follows similarly. We see that $ \lambda = \sqrt {\frac{\pi}{4}} $ is a repelling fixed point of $ f_ \lambda .$ Suppose $ 0 < \lambda < \sqrt {\frac{\pi}{4}}. $ \\
It follows that $ | f_ \lambda ( \lambda i ) | = | - \lambda \tan \lambda^2 | < | \lambda | $ and
$$ | f_ \lambda^2 ( \lambda i) | < | \lambda \tan \lambda^2 | < | \lambda| $$
$$ \vdots $$
$$ | f_ \lambda^n ( \lambda i) | < | \lambda| .$$
Therefore the post-singular orbit of $ f_ \lambda $ is bounded in the dynamical plane for $ 0 < \lambda < \sqrt {\frac{\pi}{4}} .$ For any small neighborhood $ I_ \lambda \subset \mathbb R $ in the parameter space with $ | \lambda| < \sqrt {\frac{\pi}{4}} ,$ the maps $ f_ \lambda^n ( \lambda i) $ form a normal family on $ I_ \lambda .$ Therefore $ I_ \lambda $ must be in a hyperbolic component in the parameter space. In Proposition~\ref{shellcomponent_empty_intersection_real_imaginary_line} of the next section we prove that there is no shell component intersecting $ \mathbb R $ or $ \Im .$ Thus $ I_ \lambda $ must be in one of the capture components. As $ \lambda $ can be chosen arbitrarily close to the origin, $ I_ \lambda \subset \mathcal C_0 . $ Hence $ \lambda \in \mathcal C_0 .$
\end{proof}
\vspace{0.5 cm}
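The orbit bound used in this proof is simple to confirm numerically for sample real parameters with $ 0 < \lambda < \sqrt{\pi/4} \approx 0.886 $ (the sample values are illustrative assumptions only):

```python
import cmath
import math

def f(lam, z):
    # f_lambda(z) = lambda * tan(z^2)
    return lam * cmath.tan(z * z)

assert math.sqrt(math.pi / 4) > 0.886
for lam in (0.3, 0.6, 0.85):  # sample parameters below sqrt(pi/4)
    z = lam * 1j              # asymptotic value lambda * i
    for _ in range(20):
        z = f(lam, z)
        # the post-singular orbit never escapes the disk |z| < lambda
        assert abs(z) < lam
```

After the first step the orbit is real with $ |z| \leq \lambda \tan \lambda^2 < \lambda ,$ since $ \lambda^2 < \pi/4 $ gives $ \tan \lambda^2 < 1 .$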
\section{Arrangement of the hyperbolic shell components at a virtual center}\label{shell component} \par
We investigate the combinatorial structure of the hyperbolic components that are not capture components in the parameter space. We denote shell components by $\Omega_p,$ where $p$ is the period of the attracting cycle. \\
\begin{prop}
Suppose $ \lambda \in \mathbb R $ or $ \lambda \in \Im $. Then $ \lambda $ is not in any hyperbolic shell component in the parameter space. \label{shellcomponent_empty_intersection_real_imaginary_line} \\
\end{prop}
\begin{proof}
We prove this by contradiction. Suppose $ \lambda \in \mathbb R $ and $\lambda $ is in a shell component. Then $ f_\lambda $ has an attracting periodic cycle of period $ p \geq 1 ,$ and one of the asymptotic values is attracted to this attracting cycle (not to the super-attracting fixed point). We label the components of the immediate attracting basin $ \{U_i\}_{i = 0}^{p-1} $ so that the asymptotic value $ \lambda i \in U_1 ,$ and denote the periodic point in $ U_i $ by $ z_i .$ By our assumption $ \lambda i \in \Im $ and
\begin{center}
$ f_\lambda (\lambda i ) = \lambda \tan (\lambda i ) ^2 = - \lambda \tan \lambda^2 \in \mathbb R .$
\end{center}
This implies that $ f_\lambda^{\circ n} ( \lambda i ) \in \mathbb R \ $ for all $ n \in \mathbb N .$ As $ f_\lambda^{\circ pn} ( \lambda i ) \rightarrow z_1 $ as $ n \rightarrow \infty ,$ all periodic points in this cycle are on $ \mathbb R .$ As $ f_\lambda (\lambda i ) \in U_2 \cap \mathbb R, \ f_\lambda^{\circ np} ( f_\lambda (\lambda i ) ) \in U_2 \cap \mathbb R $ for all $n$ and $ f_\lambda^{\circ np} ( f_\lambda (\lambda i ) ) \rightarrow z_2 $ as $ n \rightarrow \infty .$ Since $ U_2 $ is simply connected and symmetric about the real line, there is an interval $ I \subset U_2 $ containing both $ f_\lambda (\lambda i ) $ and $z_2. $ We take a branch $g$ of $ f_\lambda ^{-1} $ so that $ g (I) $ is an interval in $ U_1 $ that contains $ \lambda i ;$ thus $ g(I) \subset \Im .$ Let $ \gamma $ be a path in $ U_1 $ joining $ z_1 $ and $ \lambda i .$ Using the symmetry of $ f_\lambda $ with respect to the real and imaginary axes, the sets $ - \bar U_1 , \bar U_1 , -U_1 $ are also in the stable domain and have non-empty intersection with $ U_1 .$ Therefore $ U_1 = -U_1= -\bar U_1 = \bar U_1 , $ and the degree of $ f_\lambda : U_1 \rightarrow U_2 $ is at least two. Since $ U_1 $ is a bounded periodic component, $ U_1 $ must contain the critical point, the origin. Contradiction! A similar argument gives a contradiction when $ \lambda \in \Im . $
\end{proof}
\vspace{0.5 cm}
\begin{defn}
Let $ \rho_\lambda $ denote the multiplier of an attracting or neutral periodic cycle of $ f_\lambda .$ If $ \Omega_p $ is an arbitrary shell component and $ \Delta^ * $ is the unit disk punctured at the origin, the multiplier map $ \rho: \Omega_p \rightarrow \Delta^* $ is defined by $ \lambda \mapsto \rho_\lambda .$ For each $ \alpha \in \mathbb R $ the \textit{internal ray}\index{internal ray} $ R(\alpha) $ is defined by $ R(\alpha ) = \rho^{-1}(re^{2\pi i\alpha }),\ 0 < r < 1 .$
\end{defn}
\vspace{0.5 cm}
The following two theorems describe important topological properties of the hyperbolic shell components. The theorems are proved in Fagella--Keen \cite{fagella2017dynamics_57}. \\
\begin{thm}
For each shell component $ \Omega_p $ of $ \mathcal H, \ $ the multiplier map $ \rho_\lambda: \Omega_p \rightarrow \Delta^* $ is a covering map. \label{covering map} \\
\end{thm}
\begin{thm}
For any $ \lambda^* $ and $ \lambda $ in the component $\Omega_p $, there exists a unique quasi-conformal map $g$ such that $ f_{\lambda^*} \circ g = g \circ f_\lambda. $
\end{thm}
\vspace{0.5 cm}
Let $ H_l $ denote the left half plane. From Theorem~\ref{covering map} we have that the multiplier map $ \rho_\lambda : \Omega \to \mathbb D^* $ is a universal covering. Hence there is a conformal homeomorphism $ \phi : H_l \to \Omega ,$ unique up to precomposition with a M\"obius transformation, such that $ ( \rho_\lambda \circ \phi )(w) = e^w . $ Under the map $ \phi : H_l \to \Omega ,$ the boundary of $ \Omega $ corresponds to the imaginary axis. \\
Now we are in a position to characterize the virtual centers on the boundary of a shell component. We use the following formal definition from \cite{fagella2017dynamics_57}. \\
\begin{defn}
Let $ \Omega $ be a shell component and $ \rho: \Omega \to \mathbb D^* $ be the multiplier map. A point $ \lambda \in \partial \Omega $ is called a \textit{virtual center} \index{virtual center} if for any sequence $ \lambda_n \in \Omega $ with $ \lambda_n \to \lambda ,$ the multipliers satisfy $ \rho(\lambda_n) \to 0 . $ \\
\end{defn}
Let $ T_k = \{ w \in H_l \mid 2k \pi < \Im w < 2 (k+1 ) \pi \} ,$ where $ k = 0, \pm 1, \pm2,\ldots .$ Every $ T_k $ is a horizontal strip of $ H_l . $ Let $ V_k = \phi ( T_k ) . $ Then $ V_k $ is an open subset of $ \Omega $ obtained by cutting $ \Omega $ along $ \mathcal R(k) $ for all integers $k,$ where $\mathcal R(k)$ is the image of the boundary of the horizontal strip $ T_k $ under $\phi$. The boundary $ \partial V_k $ consists of three curves $ \mathcal R(k), \mathcal R(k+1), $ and $ \{ \phi (2 \pi i \alpha ) : k < \alpha < k+1 \} ,$ together with their endpoints. These curves are all regular simple arcs; hence $ \partial V_k $ is a Jordan curve. By the Uniformization Theorem and the Carath\'eodory theorem the conformal isomorphism $ \phi|_{V_k} $ extends to a homeomorphism of $ \overline V_k $ onto $\overline T_k . $ \\
The boundary piece of $ \Omega, \ \{ \phi (2 \pi i \alpha ) : k < \alpha < k+1 \} ,$ is a regular arc, but it may not be regular at the endpoints $ \phi ( 2 k \pi i ) $ and $ \phi (2 (k+1) \pi i ) .$ The points where the boundary of $ \Omega $ fails to be smooth are called the cusps of $ \Omega.$ Each cusp is the image under $ \phi $ of a point $ 2 k \pi i$ for some integer $k.$ Computer pictures show that the cusps of the unbounded shell components (see Proposition~\ref{shell boundary}) lie in each quadrant which contains the component. The pictures also show that there are saddle-node bifurcation points along the boundary of any component $ \Omega \in \mathcal H ,$ and there are components attached to $ \Omega $ at these points. \\
We show that if $ p > 1 ,$ the asymptotic values of the functions corresponding to the virtual centers of $ \Omega_p $ are pre-poles of order $ p-1 .$ For $ p = 1 $ the virtual center of $ \Omega_1 $ is infinity. First we prove the lemma under the assumption that the components are bounded.\\
\begin{lem}
For any bounded hyperbolic shell component $ \Omega_p $ with $ p > 1,$ the virtual center $ \lambda^* $ is finite and $ f_{\lambda^*}^{(p-1)} (\lambda^* i ) = \infty ; $ that is, $ \lambda^* i $ is a pre-pole of order $ p- 1 .$ \label{prepole} \\
\end{lem}
\begin{proof}
Let $ \lambda \in \Omega_p ,$ so that $ f_\lambda $ has an attracting periodic cycle of period $p.$ Let $ U_0 $ be the unbounded component containing an asymptotic tract, let $ z_0 \in U_0 $ be the periodic point there, and let $ z_i = f_\lambda(z_{i-1}) $ denote the attracting periodic cycle. This implies that $ U_1 $ contains $ \lambda i $ and $ z_1 .$ Denote the pre-image of $ z_0 $ in the periodic cycle by $ z_{p-1} .$ There exists $ n \in \mathbb Z \ $ such that $ f_{\lambda,n}^{-1} (z_0 ) = z_{p-1} $ and $ f_{\lambda,n}^{-1} (U_0 ) = U_{p-1} ,$ where $n$ is the index denoting the inverse branch of $ \arctan. $ Since $ U_0 $ contains an asymptotic tract, $ \partial U_{p-1} $ contains a pole $ s_n$.
We note that there is a pre-asymptotic tract at $ s_n $ in $ U_{p-1} $ containing a pre-image of either $ z = \pm i \sqrt{it} $ or $ \pm \sqrt{it} $ for large $ t > 0. $ If $ \partial U_{p-1} $ contained any other pole, there would be another pre-asymptotic tract in $ U_{p-1} $ at this pole containing the pre-image of the same segment, and $ f_{\lambda}|_{U_{p-1}} $ would not be injective. \\
Since the maps $ f_\lambda$ for $ \lambda $ in $ \Omega_p $ are quasi-conformally conjugate on their Julia sets, the pre-pole varies continuously with $ \lambda$ in $ \Omega_p. $ Suppose $ \lambda $ moves along an internal ray $ R(\alpha) $ to the virtual center $ \lambda^* $ as $ r \rightarrow 0 ,$ so that $$\lim_{\lambda \xrightarrow{\text{R}} \lambda^* } \rho_\lambda = 0.\ \ $$
Since $ (\tan z^2)' = 2 z \sec^2 z^2 $ and $ \lambda = \frac{z_i}{ \tan z_{i-1}^2 },$ it follows that
\begin{align*}
\rho_\lambda & = [f_\lambda^{p } ( z_0(\lambda))]' \\
& = \prod_{i = 1}^{p} f_\lambda' [f_\lambda^{{i-1} } ( z_0(\lambda) )] \\
& = 2^p \prod_{i = 1}^{p} \frac {2 z_i z_{i-1}}{\sin 2z_{i-1}^2 }.
\end{align*}
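The closed form above rests on the pointwise identity $ f_\lambda'(z) = \frac{4 f_\lambda(z)\, z}{\sin 2z^2} $ (equivalently $ 2\lambda z \sec^2 z^2 = 4 \lambda z \tan z^2 / \sin 2z^2 $), which a short numerical check confirms at generic points (the sample parameter and points are arbitrary assumptions):

```python
import cmath

lam = 1.2 + 0.5j  # arbitrary sample parameter

def f(z):
    # f_lambda(z) = lambda * tan(z^2)
    return lam * cmath.tan(z * z)

def fprime(z):
    # f_lambda'(z) = 2 * lambda * z * sec^2(z^2)
    return 2 * lam * z / cmath.cos(z * z) ** 2

for z in (0.4 + 0.1j, -0.8 + 0.6j, 1.1 - 0.3j):
    # each factor of the multiplier product: f'(z) = 4 f(z) z / sin(2 z^2)
    assert abs(fprime(z) - 4 * f(z) * z / cmath.sin(2 * z * z)) < 1e-9
```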
The only way some factor may tend to 0 as $ \lambda \rightarrow \lambda^* $ is for $ \sin 2z_{i-1}^2 \rightarrow \infty $ for some $i,$ or equivalently $ \Im z_{i-1}^2 \rightarrow \infty .$ Since $ z_0 $ is in the asymptotic tract, we conclude that $ \Im z_{0}^2 \rightarrow \infty .$ By hypothesis $ p > 1 $ and $ \lambda^* \neq \infty $ so that $ z_{p-i} \neq z_{p-1} $ for $ i \neq 1. $ Therefore
\begin{align*}
& \lim_{\lambda \xrightarrow{\text{R}} \lambda^* } \lambda \tan z_{p-1}^2 (\lambda) \\
&= \lim_{\lambda \xrightarrow{\text{R}} \lambda^* } z_0(\lambda) \\
&=\infty .
\end{align*}
We can say further that
\begin{align*}
&\lim_{\lambda \xrightarrow{\text{R}} \lambda^* } z_{p-1} (\lambda) \\
& = \lim_{\lambda \xrightarrow{\text{R}} \lambda^* } f_{\lambda,n}^{-1} (z_0(\lambda) ) \\
& = s_n .
\end{align*}
and $$ \lim_{\lambda \xrightarrow{\text{R}} \lambda^* } z_1(\lambda) = \lambda^* i $$ so that $ \lambda^* i $ is a prepole of order $ p-1.$
\end{proof}
\vspace{0.5 cm}
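The poles $ s_n $ appearing in the proof above satisfy $ s_n^2 = \pi/2 + n\pi ,$ i.e.\ $ s_n = \sqrt{(2n+1)\pi/2} $ on the positive real axis, which can be checked numerically (a quick sketch; the sample parameter $ \lambda = 1 $ is an assumption):

```python
import math

def f(lam, z):
    # f_lambda(z) = lambda * tan(z^2); real arguments suffice for this check
    return lam * math.tan(z * z)

for n in range(4):
    s_n = math.sqrt((2 * n + 1) * math.pi / 2)  # candidate pole of f_lambda
    # |f_lambda| blows up as z approaches s_n, consistent with s_n^2 = pi/2 + n*pi
    assert abs(f(1.0, s_n - 1e-7)) > 1e4
```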
\begin{prop}
Let $ \Omega_p $ be a hyperbolic shell component such that for $ \lambda \in \Omega_p , \ f_\lambda $ has an attracting $p$-periodic cycle $ \{z_0, z_1, z_2, \ldots, z_{p-1} \} .$ If the virtual center $ \lambda^* = \infty , $ then for $ j= 0, 1, \ldots, p-1 $ as $ \lambda $ varies along some internal ray $ R(\alpha) $ in $ \Omega_p ,$ \\
\begin{center}
$ z_j^* = \lim_{\lambda \xrightarrow{\text{R}} \lambda^* } z_j (\lambda) = \infty .$
\end{center}
\end{prop}
\vspace{0.5 cm }
\begin{proof}
For $ \lambda \in \Omega_p ,$ let $ z_0 = z_0(\lambda) $ belong to the component $ U_0$ that contains an asymptotic tract of $ \lambda i .$ Then $ \lambda i $ and $ z_1(\lambda ) $ both belong to the component $ U_1 .$ Let us assume that $ \lambda $ moves along the internal ray $ R(\alpha ) $ in $ \Omega_p $ to the limit point $ \lambda^* = \infty.$ Because $ \lambda^* $ is assumed to be a virtual center, $\displaystyle\lim_{\lambda \xrightarrow{\text{R}} \lambda^* } \rho(\lambda) = 0 .$ It follows that $ \Im z_0^2(\lambda) \rightarrow + \infty . $
We need to show that $ z^*_1 = \displaystyle \lim_{\lambda \xrightarrow{\text{R}} \lambda^* } z_1(\lambda) = \infty . $ If not, there exists a sequence $ \lambda_n \rightarrow \lambda^* $ in $ \Omega_p $ such that $ \displaystyle \lim_{n \to \infty } z_1(\lambda_n) = c \neq \infty . $ Then $ z_1(\lambda_n) = \lambda_n \tan(z_0(\lambda_n))^2 $ implies
$$ \displaystyle \lim_{n \to \infty } \tan(z_0(\lambda_n))^2 = \displaystyle \lim_{n \to \infty } \frac{z_1(\lambda_n ) }{\lambda_n } = 0 . $$
Therefore either the curve $ z_0(\lambda), \ \lambda \in R(\alpha), $ is bounded and $\displaystyle \lim_{n \to \infty } z_0(\lambda_n) = \sqrt{ m \pi} $ for some integer $m,$ or $ z_0(\lambda) $ is unbounded but comes arbitrarily close to infinitely many of the points $ \sqrt{ m \pi}, \ m \in \mathbb Z .$ Either possibility contradicts $ \Im z_0^2(\lambda) \rightarrow + \infty . $ For $ z_j , \ j = 2,3, \ldots, p-1 ,$ we can argue as follows: \\
If $ z_j \not \rightarrow \infty $ as $ \lambda \rightarrow \lambda^* $ for some $ j = 2, 3, \ldots, p-1 ,$ then there exists a sequence $ \lambda_n \rightarrow \lambda^* $ in $ \Omega_p $ such that $ \displaystyle \lim_{n \to \infty } z_j(\lambda_n) = c \neq \infty .$ Arguing as above we get that either $ z_{j-1} (\lambda) $ is bounded and $ \displaystyle \lim_{n \to \infty } z_{j-1}(\lambda_n) = \sqrt{ m \pi} $ for some integer $m,$ or $ z_{j-1}(\lambda) $ is unbounded but comes arbitrarily close to infinitely many of the points $ \sqrt{ m \pi} .$ In either case an arbitrarily small neighborhood of $ \sqrt{ m \pi} $ contains an attracting periodic point of period $p > 1$ for infinitely many $ \lambda_n ;$ that is, an arbitrarily small neighborhood of zero contains $z_j(\lambda_n ) $ for infinitely many $ \lambda_n .$ But zero is a critical point and a super-attracting fixed point. The hypothesis $ \lambda \in \Omega_p$ and the fact that the immediate basin of zero would contain an attracting periodic point of period $p > 1$ give a contradiction.
\end{proof}
\vspace{0.5 cm}
The following lemma is proved for more general families in \cite{fagella2017dynamics_57}. We state them for our families.\\
\begin{lem}
Let $ \Omega_p, \ p \geq 1,$ be a hyperbolic shell component and $ \lambda_n \in \Omega_p $ be such that $ \lambda_n \to \lambda^* ,$ where $ \lambda^* \in \partial \Omega_p . $ Let $ \{ a_n^0, a_n^1, \ldots, a_n^{p-1}\} $ be the attracting periodic cycle of $ f_{\lambda_n} $ such that $ a_n^1 $ is in the component of the immediate attracting basin that contains the asymptotic value $ \lambda_n i.$ Suppose $ | a_n^j | \to \infty $ as $ n \to \infty . $ Then $ j= 0 $ and \\
a) $ a_n^1 \to \lambda_n i $ as $ n \to \infty ; $
b) $ a_n^{p-1} $ tends to a pole of $ f_{\lambda^*} ; $
c) $ a_n^{p-i} $ tends to a pre-pole of $ f_{\lambda^*} ; $
d) the multiplier map $ \rho_n \to 0 $ as $ n \to \infty . $ \label{virtual cycle}
\end{lem}
\vspace{0.5 cm}
\begin{prop}
If $ \lambda^* i$ is a pre-pole \index{prepole} of $ f_{\lambda^*} $ of order $ p-1 , \ p > 1, $ then there exists $ \lambda $ near $ \lambda^* $ such that $ f^ {(p-1) }_\lambda (\lambda i) \in \mathcal A ,$ for a given asymptotic tract $ \mathcal A. $ \label{prepole order} \\
\end{prop}
\begin{proof}
Let $ \mathcal A $ be an asymptotic tract such that $ \mathcal A = \{ z : \Im z^2 > r, \ \Re z > 0, \ \Im z > 0 \} $ for large enough $ r > 0 .$ Let $ U $ be a small neighborhood around $ \lambda^* i$ and $ V$ be the corresponding neighborhood around $ \lambda^*$ such that $ \lambda \in V $ iff $ \lambda i \in U .$ Suppose the conclusion of the proposition fails. Then for all $ \lambda \in V , \ f^ {(p-1) }_\lambda (\lambda i) \notin \mathcal A .$ Now we can define a map $ g : V \rightarrow \mathbb C $ by $ g(\lambda ) = f_\lambda ^{(p-2)} (\lambda i) ,$ so that $ g $ is bijective and $ g (V) $ is an open set around $ s_n = f_{\lambda^*} ^{(p-2)} (\lambda^* i) .$ Choose the branch of $ f_{\lambda}^{-1} $ such that $ f_{\lambda}^{-1} (\mathcal A) $ is an open set attached at $ s_n .$ We get that $ U_\lambda = i g^{-1} ({g(V)\cap f_{\lambda}^{-1} (\mathcal A)}) $ is an open set attached at $ \lambda^* i$ with $ \lambda i \notin U_\lambda ,$ and this is true for all $ \lambda \in V .$ So for all $ \lambda \in V , \ \ f_\lambda^{-1} (U_\lambda) $ is bounded. But because $ \lambda^*i $ is in a virtual cycle of $ f_{\lambda^*} $, it follows that $ f_{\lambda^*}^{-1} (U_{\lambda^*}) $ is unbounded. Contradiction!
\end{proof}
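The role of the asymptotic tract can be illustrated numerically: as $z \to \infty$ inside $\mathcal A$ (so that $\Im z^2 \to +\infty$), $\tan z^2 \to i$ and hence $f_\lambda(z) \to \lambda i$. A minimal sketch in Python; the parameter value and sample points are hypothetical, chosen only for illustration.

```python
import cmath

# Hypothetical sample parameter; any nonzero lambda works.
lam = 0.7 + 0.3j

def f(lam, z):
    # the family f_lambda(z) = lambda * tan(z^2)
    return lam * cmath.tan(z * z)

# Points deep inside an asymptotic tract A = {Im z^2 > r, Re z > 0, Im z > 0}:
# write z = sqrt(x + i*y) with y large, so that Im z^2 = y is large.
zs = [cmath.sqrt(1 + 1j * y) for y in (10, 50, 200)]
errs = [abs(f(lam, z) - lam * 1j) for z in zs]

# tan(w) -> i as Im w -> +infinity, hence f_lambda(z) -> lam*i in the tract
assert errs[0] < 1e-6
assert errs[2] < errs[0]
assert errs[2] < 1e-12
```

The deeper the point sits in the tract, the closer its image is to the asymptotic value $\lambda i$.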
\vspace{0.5 cm}
The proof of the following lemma is modeled on the corresponding proof for the tangent family in Keen--Kotus \cite{keen1997dynamics_45}.\\
\begin{lem}
Let $ \lambda^* i$ be a pre-pole of $ f_{\lambda^*} $ of order $ p-1 $ for $ \lambda^* $ in the parameter space of the family $ \mathcal F = \{ f_\lambda (z) = \lambda \tan z^2 : \lambda \in \mathbb C^* \} .$ Then there exists $ \lambda $ near $ \lambda^* $ such that $ f_\lambda $ has an attracting periodic cycle of period $p > 1 .$ \label{prepole virtual center} \\
\end{lem}
\begin{proof}
Let us choose an arbitrarily small $ \epsilon > 0 $ and a set $ U = \mathcal B ( \lambda^* i , \epsilon ) ,$ the disk with center $\lambda^*i$ and radius $\epsilon.$ Let $ V$ be the corresponding neighborhood of $ \lambda^* $ in parameter space such that $ \lambda \in V $ iff $ \lambda i \in U .$ Let us choose an asymptotic tract $ \mathcal A = \{ z : \Im z^2 > r, \ \Re z > 0, \ \Im z > 0 \} $ of $f_\lambda$ for large enough $ r > 0 .$ For $ \lambda \in V$ consider the common pre-asymptotic tracts,
$$ \mathcal I_n = \displaystyle \cap_{ \lambda \in V } f_{\lambda,n}^{-1}(\mathcal A) $$
attached to the pole $ s_n .$ We can find $ \eta = \eta(r) > 0 $ such that $ |\arg \lambda - \arg \lambda^*| < \eta $ for $ \lambda \in V .$ Hence the angle between $ f^{-1}_{n,\lambda}(\mathbb R ) $ and the real (or imaginary) axis is bounded, and $ \mathcal I_n $ contains a triangular domain with one vertex at $ s_n .$ Let $ g : V \rightarrow \mathbb C $ be the map defined by $ g(\lambda ) = f_\lambda ^{(p-2)} (\lambda i) . $ Then $ g(V) $ is an open set containing $ s_n,$ and there exists an open set $ V^+ \subset V $ with $ V ^+ = g^{-1}( \mathcal I_n) .$ For any $ \lambda \in V^+ , \ f_\lambda ^{(p-1)} (\lambda i)$ belongs to an asymptotic tract $ \mathcal A = \{ z : \Im z^2 > r', \ \Re z > 0, \ \Im z > 0 \} $ where possibly $ r' < r .$ Moreover, for the inverse branch such that
$$ f_{{n_{p-2}},\lambda^*}^{-(p-2)}(s_n) = \lambda^* i $$
we have $ v_\lambda = f_{{n_{p-2}},\lambda}^{-(p-2)}(s_n) \neq \lambda i $ by Hurwitz's Theorem. Here $ v_\lambda $ is defined by choosing the branch by analytic continuation. The pre-image $ w_{\lambda, k } = f^{-1}_\lambda ( v_\lambda) $ is chosen so that $ w_{\lambda, k}^2 $ lies in the upper half plane; it depends continuously on $ \lambda ,$ and $ \Im w_{\lambda, k }^2 \to \infty $ as $ \lambda \rightarrow \lambda^* .$ \\
Now consider $ \zeta_\lambda = | v_\lambda - \lambda i | $ and $ B_\lambda = B( v_\lambda, \zeta_\lambda ).$ Then $ f_\lambda^{(p-2)} ( B_\lambda ) $ is a neighborhood of $ s_n $ and, taking the principal part, $ f_\lambda^{(p-1)} ( B_\lambda ) $ is a subset of $ \mathbb C \setminus D_{R_\lambda} ,$ where $ D_{R_\lambda} $ is a disk around the origin of radius $ R_\lambda \approx |f_\lambda^{(p-1)} ( \lambda i )| $ with $ R_\lambda \rightarrow \infty $ as $ \lambda \rightarrow \lambda^* .$ We need to prove that $ \Im \big(f_\lambda^{(p-1)} ( \lambda i )\big)^2> \Im w_{\lambda, k }^2 $ for all $ k \leq k_0 ,$ for some $ k_0 \in \mathbb Z .$ Let
$$ M =\displaystyle \max_{z\in \bar{U}, \lambda \in \bar{V}}| (f_\lambda^{{p-2}})' (z)|.$$
As $ | \lambda | \gg |\zeta_\lambda|, $ we have $ w_{\lambda, k} = f_\lambda^{-1} ( v_\lambda ) = f_\lambda^{-1} ( \lambda i + \zeta_\lambda ) = \sqrt {\bigg[\frac{1}{2i}\log\frac{i\zeta_\lambda }{2\lambda - i\zeta_\lambda }\bigg]} .$ \\
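This inverse-branch formula can be sanity-checked numerically: substituting $w^2 = \frac{1}{2i}\log\frac{i\zeta}{2\lambda - i\zeta}$ into $\lambda\tan w^2$ returns exactly $\lambda i + \zeta$. A minimal sketch, with hypothetical sample values for $\lambda$ and $\zeta_\lambda$:

```python
import cmath

# Hypothetical sample values: lam plays the role of lambda, zeta of zeta_lambda.
lam = 1.3 - 0.4j
zeta = 1e-3 + 2e-3j          # small displacement from the asymptotic value lam*i

# Principal branch of w^2 = (1/2i) * log( i*zeta / (2*lam - i*zeta) );
# the other branches differ by an additive pi*k, the period of tan.
w2 = (1 / 2j) * cmath.log(1j * zeta / (2 * lam - 1j * zeta))

# Then any square root w of w2 is a preimage of lam*i + zeta under
# f_lam(z) = lam * tan(z^2):
image = lam * cmath.tan(w2)
assert abs(image - (lam * 1j + zeta)) < 1e-9
```

The identity behind the check is $\arctan x = \frac{1}{2i}\log\frac{1+ix}{1-ix}$ evaluated at $x = i + \zeta/\lambda$.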
We do the following calculation to estimate $\Im w_{\lambda, k} :$
\begin{align*}
& \log\frac{i\zeta_\lambda }{2\lambda - i\zeta_\lambda } \\
& = \log \left|\frac{i\zeta_\lambda }{2\lambda - i\zeta_\lambda }\right| + i (\theta_\lambda + 2 \pi k ) , \ k \in \mathbb Z \\
& \approx \log | \zeta_\lambda | + i (\theta_\lambda + 2 \pi k ) , \ k \in \mathbb Z ,
\end{align*}
where the branches of the logarithm differ by $ 2\pi i k ,$ so that the branches of $ w_{\lambda,k}^2 $ differ by $ \pi k ,$ the period of $\tan.$
We choose a branch of the square root function so that $ \Re w_{\lambda, k} > 0$ and $ \Im w_{\lambda, k} > 0 $.\\
So $ w_{\lambda, k} \approx \frac{1}{\sqrt 2} \sqrt {(\theta_\lambda + 2\pi k ) - i \log | \zeta_\lambda | }, \ k \in \mathbb Z $ \\
$ \approx R_{k_\lambda} e^{i\frac{ \arg P_{k_\lambda}}{2}} $ where $ R_{k_\lambda} = \frac{1}{\sqrt 2} \left[ (\theta_\lambda + 2\pi k )^2 +( \log | \zeta_\lambda |)^2 \right]^{1/4} $ and $ P_{k_\lambda} = {(\theta_\lambda + 2\pi k ) - i \log | \zeta_\lambda | } .$ \\
Therefore $ | f_{n, \lambda}^{-1}( w_{\lambda, k}) - s_n | \approx \sqrt \frac{|\lambda|}{ R_{k_\lambda}}, \ k \in \mathbb Z ,$ and \\
$ | f_{{n_{p-1}},\lambda}^{-(p-1)}(w_{\lambda, k} ) - \lambda i | \geq | f_{n, \lambda}^{-1}( w_{\lambda, k}) - s_n | \cdot \displaystyle \min_{z\in \bar{U}, \lambda \in \bar{V}}| ((f_\lambda^{{p-2}})^{-1})' (z)| ,$ so that \\
$ | f_{{n_{p-1}},\lambda}^{-(p-1)}(w_{\lambda, k} ) - \lambda i | \geq \frac{1}{M} \sqrt \frac{ |\lambda| }{ R_{k_\lambda}}.$ \\
Since $ \zeta_\lambda $ is assumed small, we can find $ k_0 \in \mathbb Z $ such that $ \frac{1}{M} \sqrt \frac{ |\lambda| }{ R_{k_\lambda}} \geq \zeta_\lambda $ for all $ k \leq k_0 . $
Therefore $ | f_{\lambda}^{(p-2)}(\lambda i ) - s_n | \leq | f_{{n},\lambda}^{-1}(w_{\lambda, k}) - s_n | $ for all $ k \leq k_0 ,$\\
so that $ | f_{\lambda}^{(p-1)}(\lambda i ) | > | w_{\lambda, k} | $ for all $ k \leq k_0 .$ It follows that \\
$ \Im \big(f_\lambda^{(p-1)} (\lambda i )\big)^2> \Im w_{\lambda, k }^2 .$ \\
Now, for some fixed $ \lambda \in V^+ ,$ we will construct a domain $ \mathbb S $ inside the asymptotic tract $ \mathcal A $ such that $ f_\lambda^{p} ( \mathbb S ) \subset \mathbb S $. Let $ \tilde R_\lambda = -\frac{1}{ 2} \log | \zeta_\lambda | - \epsilon $. Take $ \mathcal A = \{ z : \Im z^2 > \tilde R_\lambda \} ,$ so that $v_\lambda \in f_\lambda( \mathcal A ) .$ Let $ I^{\pm} $ be two rays meeting at $ s_n $ such that the triangular domain $T$ between them is contained in $ f_{\lambda,n} ^{-1}( \mathcal A ) $ and such that $ f_{\lambda}^{(p-2)}(\lambda i ) \in T $. Let $ \mathcal S $ be the triangular region with vertex at $ v_\lambda $ bounded by $ \mathcal J^{\pm} = f_{n_{p-2},\lambda}^{-(p-2) } ( I^{\pm} ) $ and an arc of the boundary of $ f_\lambda( \mathcal A ),$ so that $ \lambda i \in \mathcal S .$ \\
Finally let $ \mathbb S = \cup_{k \in \mathbb Z } f_{\lambda,k} ^{-1}( \mathcal S ) .$ Then $ \mathbb S $ is an asymptotic tract whose boundary is formed by pre-images of $ \partial \mathcal S ;$ that is, it is made up of arcs $ f_{\lambda,k}^{-1}( \mathcal J^{\pm} ) $ that meet at $ w_{\lambda,k } $. Now consider $ \tilde {\mathbb S} = f_{\lambda}^{(p-1)}(\mathcal S ). $ This is a triangle with a vertex at infinity: the two sides meeting there are rays, and the third side is an arc of a circle centered at the origin whose radius is slightly smaller than $ | f_{\lambda}^{(p-1)}(\lambda i ) | .$ To prove $ f_\lambda^{ p } ( \mathbb S ) \subset \mathbb S ,$ we need to check: \\
1. $ \Im (f_\lambda^{(p-1)}(\lambda i) )^2 > \tilde R_\lambda $ and \\
2. $ f_\lambda ( I^{\pm} ) \subset \mathcal A .$ \\
Now $\lambda $ was chosen so that $ f_\lambda^{(p-1)}(\lambda i) $ lies in the asymptotic tract of the asymptotic value $ \lambda i ,$ hence by adjusting the argument we can ensure that 1 holds; 2 can be ensured by decreasing the angle between $ I^{\pm} $ if necessary.
\end{proof}
\vspace{0.5 cm}
Lemma~\ref{prepole virtual center} shows that there is a hyperbolic component attached to the point $ \lambda^* $ in the parameter plane. The proof shows that it is a shell component and therefore every virtual cycle parameter is a virtual center. \\
\begin{rem}
For a shell component, although both asymptotic values are attracted to the periodic cycle, one is preferred in the sense that it is contained in the periodic component of the Fatou set while the other is contained in a pre-periodic component. The above construction finds the preferred asymptotic value. \\
\end{rem}
We have just shown that if $ \lambda^* $ is a virtual center then it is a virtual cycle parameter. Thus the set $$ \{\infty,\pm \lambda^* i,f_{\lambda^*} (\pm \lambda^* i) , f_{\lambda^*}^2 (\pm \lambda^* i), \ldots, f_{\lambda^*}^p (\pm \lambda^* i) \} $$ is a cycle, understood with the appropriate limits. This cycle behaves like a super-attracting cycle in which the singular value is the asymptotic value.\\
Set $$ \mathcal D_p = \{ \lambda^* : f_{\lambda^*}^p ( \lambda^* i) = \infty \} , \ \ \mathcal D = \cup_p \mathcal D_p .$$ Now we can prove that for $ \lambda^* \in \mathcal D_{p-1} $ there is a quadruplet $ \{\Omega_{p_{i}}\}^4_{i=1} $ such that $ \lambda^* $ is the virtual center of the quadruplet. We will first show that each virtual cycle parameter corresponds to a virtual center of a shell component. \\
\begin{thm}
Let $ \Omega $ be a shell component of period $ p \geq 2 $ and $ \lambda^* \in \partial \Omega. $ Then $ \lambda^* $ is a virtual center if and only if $ \lambda^* $ is a virtual cycle parameter.\index{virtual cycle parameter} \label{virtual cycle parameter}\\
\end{thm}
\begin{proof}
Let $ \lambda^* $ be a virtual center. Let $ \lambda_n $ be a sequence of parameters in $ \Omega $ such that $ \lambda_n \to \lambda^* $ as $ n \to \infty ,$ and let $ \{ a_n^0, \ldots, a_n^{p-1} \} $ be the corresponding attracting cycle of $ f_{\lambda_n} .$ If one of the $ a_n^j $ tends to infinity as $ n \to \infty , $ then by Lemma~\ref{virtual cycle} we are done. Now suppose that all points of the periodic cycle converge to finite points. Since the multiplier tends to 0, the limiting cycle is super-attracting and hence contains a critical point, which contradicts the assumption that $ \lambda^* \in \partial \Omega . $ Conversely, let $ \lambda^* $ be a virtual cycle parameter. By the definition of a virtual cycle parameter, at least one of the points of the cycle is the point at infinity. The multipliers of the cycles of $ \lambda_n $ tend to the multiplier of the virtual cycle of $ \lambda^* ,$ and $ \lambda^* $ is a virtual center.
\end{proof}
\vspace{0.5 cm}
\begin{prop}
If $ \lambda^* i$ is a pre-pole of $ f_{\lambda^*} $ of order $ p-1 ,$ then there are four hyperbolic components $ \{\Omega_{p_i}\}^4_{i = 1}$ attached at $ \lambda^*$ such that $ f_\lambda $ has an attracting cycle of period $p$ for $ \lambda \in \Omega_{p_i} . $ \\
\end{prop}
\begin{proof}
We saw in Proposition~\ref{prepole order} and Lemma~\ref{prepole virtual center} that, given a virtual center at $ \lambda^* $ of order $p-1$ and an asymptotic tract, there is an unbounded periodic component containing that asymptotic tract.
Given an asymptotic tract $ \mathcal A ,$ we can choose $ \Omega_p $ at $ \lambda^* $ uniquely so that $ f_\lambda^{{p-1}} (\lambda i ) \in \mathcal A .$ As there are four asymptotic tracts, there are four hyperbolic components of period $p$ attached to a virtual center of order $p-1 . $
\end{proof}
\vspace{0.5 cm}
\begin{prop}
Suppose $ \lambda^* \in \mathcal D_{p-1} ,$ say $ f_{\lambda^*}^{(p-2)}(\lambda^* i ) = s_n .$ That is, $ \lambda^* $ is the virtual center of a quadruplet $ \{\Omega_p^i\}^4_{i=1} $ such that $ f_\lambda $ has an attracting $p$-periodic cycle for $ \lambda \in \Omega_{p}^{i} . $ Then there exists a sequence of component quadruplets \index{quadruplets} $ \{\Omega_{p,k}^i\}^4_{i=1},\ k \in \mathbb Z ,$ with virtual centers $ \lambda_k^* \in \mathcal D_p $ where $ f_{\lambda_k^*}^{(p-1)}(\lambda_k^* i) = s_k $ and $ \lambda_k^* \rightarrow \lambda^* $ as $ |s_k| \rightarrow \infty .$ \\
\end{prop}
\begin{proof}
Choose an arbitrarily small $ \epsilon > 0 ,$ let $ U = B(\lambda^* i , \epsilon ) $ be a small neighborhood of $ \lambda^* i $ in the dynamic plane, and let $ V = D(\lambda^* , \epsilon ) $ be the corresponding open set in the parameter space. Consider $ g( \lambda ) = f_\lambda^{(p-2)} ( \lambda i ) ,\ \lambda \in V .$ Then $ g(V) $ is an open set containing $ s_n .$ Taking the principal part of $ f_{\lambda^* },$ we get that $ f_{\lambda^*}^{(p-1)} (U) $ is an unbounded set, and there exists $ k_0 \in \mathbb Z $ such that $ \pm s_k \in f_{\lambda^* }^{(p-1)} (U)$ for all $ k \geq k_0 .$ The pre-images $ f_{\lambda^*}^{-1}(\pm s_k) $ are pre-poles converging to $ s_n ,$ so for large $k$ they lie in $ g(V) .$ Thus there are $ \lambda_k^* \in D(\lambda^* , \epsilon ) $ such that $ f_{\lambda_k^* }^{(p-1)} ( \lambda_k^* i ) = \pm s_k , $ so $ \lambda_k^* \in \mathcal D_p .$ Using Lemma~\ref{prepole virtual center} and Theorem~\ref{virtual cycle parameter}, we get that each $ \lambda_k^* $ is a virtual center for a quadruplet $ \{\Omega_p^i \}^4_{i=1} .$ Furthermore $ \lambda_k^* \rightarrow \lambda^* $ as $ |s_k| \rightarrow \infty . $
\end{proof}
\vspace{0.5 cm}
\begin{prop}
Let $ \lambda_n \in \mathcal D_p .$ Then each $ \lambda_n $ is a virtual center for a sequence of components $ \{\Omega_{p,n}^i\}^4_{i=1} $ with itineraries $ { \bf n_p }= ( n_1, n_2, \ldots, n_p) .$ \label{ itenaries}\\
(a) If $ (n_1 , n_2, \ldots, n_{p-1} ) $ are the same for all $ \lambda_n $ and $ n_p = n $ then the sequence $ \lambda_n $ has accumulation point in $ \mathcal D_0 = \{\infty \}.$ \\
(b) If $ (n_2, n_3, \ldots, n_p ) $ are the same for all $ \lambda_n $ and $ n_1 = n $ then the accumulation point of $ \lambda_n $ is $ \lambda \in \mathcal D_{p-1} $ where $ \lambda $ is a virtual center with itinerary $ {\bf n_{p-1} } = ( n_2, \ldots, n_p) $ with $ f_{\lambda }^{(p-2)} (\lambda i ) = s_{n_2} .$ \\
\end{prop}
\begin{proof}
Consider the set $ \mathcal S = \mathbb C \setminus \cup_{k= 1}^{p-1} \mathcal D_{k} .$ Define a map $ g: \mathcal S \rightarrow \hat {\mathbb C } $ by $ g ( \lambda ) = f_{\lambda }^{p} (\lambda i ) .$ We have removed the set $ \cup_{k= 1}^{p-1} \mathcal D_{k} $ because $g$ would have essential singularities at those points; $g$ is well-defined in $ \mathcal S .$ From the construction of the set $ \mathcal S $ we see that $g$ has poles at $ \lambda \in \mathcal D_p $ and $g$ is holomorphic elsewhere. Suppose $ \lambda' $ is an accumulation point of the $ \lambda_n \in \mathcal S .$ If $ \lambda' \not \in \cup_{k= 1}^{p-1} \mathcal D_{k} \cup \{ \infty \} ,$ then $ g$ is well-defined and holomorphic in a neighborhood of $\lambda' .$ On the other hand, $ \lambda' $ is an accumulation point of poles of $g,$ so $g$ has a non-removable singularity at $ \lambda' ,$ a contradiction. We claim that in fact $ \lambda' \in \mathcal D_0 = \{ \infty \} .$ Indeed, $ \lambda_n = f_{\lambda_n}^{-1} \circ ( f_{n_{p-1}}^{-1} \circ (\ldots f_{n_{1}}^{-1} (\infty )) ) $ implies that $ \lambda_n^2 $ lies in $ L_n ,$ where $ L_n $ is the half-open vertical strip between $ l_{n-1} = ( n - 1/2)\pi/2 + it $ and $ l_n = ( n + 1/2)\pi/2 + it , \ t \in \mathbb R, \ n \in \mathbb Z ,$ containing the line $ l_{n-1}.$ So the only accumulation point the $ \lambda_n $ can have is at $ \infty .$ \\
As $ n_1 $ varies, we can write $ \lambda_n = ( f_{n_p}^{-1} \circ f_{n_{p-1}}^{-1} \circ \ldots \circ f_{n_2}^{-1}) \circ f_n^{-1} (\infty ) .$ Therefore $ \lambda_n \rightarrow \lambda^* $ implies $ \lambda^* = ( f_{n_p}^{-1} \circ \ldots \circ f_{n_2}^{-1}) \big( \lim_{|n| \rightarrow \infty } f_n^{-1} (\infty ) \big) = ( f_{n_p}^{-1} \circ \ldots \circ f_{n_2}^{-1}) ( \lim_{|n| \rightarrow \infty } s_n) = ( f_{n_p}^{-1} \circ \ldots \circ f_{n_{2}}^{-1}) ({\infty }) \in \mathcal D_{p-1} .$
\end{proof}
\vspace{0.5 cm}
\begin{prop}
Let $ \lambda_n \in \mathcal D_p \cap \mathbb R $ (or $ \mathcal D_p \cap i \mathbb R ) . $ Then $ \lambda_n $ is a virtual center for a sequence of components $ \{\Omega^i_{p, n}\}^4_{i=1} $ with itineraries $ { \bf n_p }= ( n_1, n_2, \ldots, n_p) .$ \\
(a) If $ (n_1 , n_2, \ldots, n_{p-1} ) $ are the same for all $ \lambda_n $ and $ n_p = n $ then the sequence $ \lambda_n $ has accumulation point in $ \mathcal D_0 = \{\infty \}.$ \\
(b) If $ ( n_1 , \ldots, n_{j-1},n_{j+1}, \ldots, n_p ) $ are the same for all $ \lambda_n $ and $ n_j= n $ for $ 1 \leq j \leq p-1 ,$ then the accumulation point of $ \lambda_n $ is $ \lambda \in \mathcal D_{p-j} ,$ where $ \lambda $ is a virtual center with itinerary $ {\bf n_{p-j} } = ( n_{j+1}, \ldots, n_p) $ and $ f_{\lambda }^{\circ(p-j-1)} (\lambda i ) = s_{n_{j+1}} .$\\
\end{prop}
\begin{proof}
The proof of (a) is similar to the proof of Proposition~\ref{ itenaries} (a). To prove (b), note that if $ n_j = n $ we can write \\
$ \lambda_n = f_{n_p}^{-1} \circ \ldots \circ f_{n_{j+1}}^{-1} \circ f_n^{-1} \circ f_{n_{j-1}}^{-1} \circ \ldots \circ f_{n_1}^{-1} (\infty ) .$ Therefore $ \lambda_n \to \lambda^* $ implies \\
$ \lambda^* = f_{n_p}^{-1} \circ \ldots \circ f_{n_{j+1}}^{-1} \big( \lim_{|n| \to \infty} f_n^{-1}( f_{n_{j-1}}^{-1} \circ \ldots \circ f_{n_1}^{-1} (\infty )) \big)= f_{n_p}^{-1} \circ \ldots \circ f_{n_{j+1}}^{-1} (\infty ) $ (by part (a)). In other words, $ f_{\lambda^*}^{(p-j-1)} (\lambda^* i) = s_{n_{j+1}} .$
\end{proof}
\vspace{0.5 cm}
\section{Bifurcation at the boundaries and the boundedness of shell components}\label{bifurcation}
We have proved in Theorem~\ref{covering map} that the multiplier map $ \rho_\lambda: \Omega_p \to \mathbb D^* $ is a universal covering map which can be lifted to a conformal isomorphism $ \phi : \mathbb {H}_l \to \Omega_p ,$ where $ \mathbb{H}_l $ denotes the left half plane, so that $ ( \rho_\lambda \circ \phi )(c) = e^{c} : \mathbb{H}_l \to \mathbb D^* ,$ and the map $ \phi $ extends continuously to the boundary of $ \mathbb H_l. $ \\
\begin{defn}
We define a \textit{boundary point} $ \lambda \in \partial\Omega_p $ to be a point of internal angle $ \alpha $ if $ \lambda = \rho_\lambda^{-1} ( e^{2 \pi i \alpha} ) . $\\
\end{defn}
\begin{figure}
\centering
\includegraphics[height= 9.3 cm, width= 9.3 cm]{Hyperbolic_Component}
\caption{Arrangement of the hyperbolic components.}
\end{figure}
Suppose $\lambda_0$ is a boundary point of $ \Omega_p $ where $f_{\lambda_0}$ has a parabolic periodic cycle. If there is another component $\Omega_q$ with boundary point \index{boundary point} $\lambda_0,$ then $\Omega_q$ is called a \textit{bud} of $\Omega_p $ if $p \mid q,$ and a \textit{root} of $\Omega_p $ if $q \mid p. $ In a standard period doubling bifurcation \index{bifurcation} each attracting cycle of period $p$ bifurcates to an attracting periodic cycle of period $q = 2p.$ For maps in the $ \lambda \tan z $ family it is shown in \cite{jiang1991dynamics_52} that a non-standard period doubling bifurcation occurs, in which a single attracting cycle bifurcates to two distinct attracting cycles of the same period. This kind of bifurcation is called a cycle doubling bifurcation. \\
Keeping this in mind, we see that for each $ \alpha \in (0,1) $ the line $ P(\alpha) = \{ \phi ( t + 2\pi i \alpha ) \mid t \in ( -\infty , 0 ) \} $ corresponds to an internal ray along which the multiplier has constant argument $ 2\pi\alpha ;$ one end of the ray corresponds to a virtual center, while at the other end the multiplier tends to $ e^{2\pi i \alpha} $ (the value 1 or $-1$ when $\alpha = 0$ or $\tfrac12$). \\
Furthermore we will see that all period doubling bifurcations occur along internal rays with $ \alpha = \frac{q}{p} ,$ $ \gcd (q,p ) = 1, \ p \neq 0 ,$ and a cycle of period $n$ bifurcates into a cycle of period $np.$ Since both asymptotic values have the same forward orbit, there is only one attracting periodic cycle for $ \lambda $ in a shell component. Therefore no cycle doubling bifurcations can occur in this family. \\
The proofs of the following results follow the text in \cite{jiang1991dynamics_52}; we summarize the results here. We will see that the bud components are again shell components.\\
\begin{thm}
Let $ \Omega $ be a shell component of $ \mathcal F $ and let $ \lambda \in \partial \Omega $ with $ \rho_\lambda = e^{2\pi i \frac {q}{p} } ,$ $ \gcd (q,p) = 1, \ p \neq 0 , 1 ,$ and $ f_\lambda^p (z_0 ) = z_0 . $ Then there is a perturbation $ \tilde{ f}_\lambda $ of $ f_\lambda $ such that $ \tilde{ f}_\lambda $ has one repelling cycle at $ z_0 $ and one attracting cycle of period $p.$ \\
\end{thm}
\vspace{0.5 cm}
\begin{prop}
Let $ \Omega_n $ be an arbitrary shell component of period $n.$ Suppose $ \lambda_0 \in \partial \Omega_n $ is such that $ m_{\lambda_0} $ is a $p$-th root of unity, and let $ f(z) $ be analytic in a neighborhood of the periodic point. Then there is a perturbation $ \tilde { f}^n $ of $ { f}^n $ such that $ \tilde f $ has a finite number of attracting periodic cycles of period $np.$ \label{bud component}\\
\end{prop}
\vspace{0.5 cm}
\begin{thm}
For a given shell component $ \Omega_n $ of period $n,$ there are components $ \Omega_{np} ,$ called bud components, attached to $ \Omega_n $ at the points of internal argument $ q/p, \ p \neq 0, 1 ,$ $ \gcd(p,q) = 1. $ The period of $ \Omega_{np} $ is $np.$ \\
\end{thm}
Let $ \Omega_{np} $ be a bud component attached to $ \Omega_n $ at the boundary point $ \lambda^* $ of $ \Omega_n $ of internal argument $ \frac{q}{p} . $ The point $ \lambda^* $ is the root of $ \Omega_{np} .$ Let $ m_\lambda : \Omega_{np} \to \mathbb D^* $ be the conformal covering map induced by the multiplier. There are $n$ periodic points $ z_i ,\ i = 1,2, \ldots ,n ,$ of $ f_{\lambda^* } $ of period $n$ with $ \prod_{i = 1}^{n} f_{\lambda^* }' (z_i) = e^{2 \pi i q/p} .$ For $ \lambda \in \Omega_{np} $ and each $ i = 1,2, \ldots ,n ,$ there are $p$ periodic points $ \xi_{i j} $ of $ f_\lambda $ of period $ np $ in the $n$ disjoint neighborhoods $ N_i $ of $ z_i ,$ and $ \xi_{i j} \to z_i $ as $ \lambda \to \lambda^*.$ Therefore in the bud component $ \Omega_{np} ,$ the multiplier of the attracting cycle of period $np$ satisfies $ m_\lambda = \prod_{i = 1}^{n} \prod_{j= 1}^{p} f_{\lambda }' ( \xi_{i j}) \to \prod_{i = 1}^{n} \prod_{j= 1}^{p} f_{\lambda^* }' ( z_{i}) = \prod_{j = 1}^{p} e^{2 \pi i q/p }= e^{2 \pi i q } = 1 $ as $ \lambda \to \lambda^* . $ \\
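The limit computed above reduces to the elementary fact that the product of $p$ copies of $e^{2\pi i q/p}$ is $e^{2\pi i q} = 1$; a quick numerical check over a few hypothetical coprime pairs $(q,p)$:

```python
import cmath

# The product of p copies of exp(2*pi*i*q/p) equals exp(2*pi*i*q) = 1,
# which is the limit of the bud-component multiplier at the root.
for p, q in [(2, 1), (3, 2), (5, 3), (7, 4)]:   # hypothetical coprime samples
    root_of_unity = cmath.exp(2j * cmath.pi * q / p)
    product = root_of_unity ** p
    assert abs(product - 1) < 1e-12
```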
Therefore, like the cusps, the root $ \lambda^* $ of a component $ \Omega $ corresponds under $ \phi $ (as defined earlier, $ \phi : \mathbb H_l \to \Omega_p $) to a point $ 2k \pi i$ for some integer $k.$ The computer picture shows that the boundary of $ \Omega $ is smooth at its root $ \lambda^* . $ \\
\begin{figure}
\centering
\includegraphics[ height= 8.5 cm, width= 8.5 cm]{atanhz2zoom_2}
\caption{Arrangement of the capture and shell components along the imaginary axis, CC = Capture Component, Number = Period of the component. }
\end{figure}
Using Proposition~\ref{bud component}, we see that the buds in turn have buds. For any component $ \Omega_p $ we can locate the bud components attached to $ \Omega_p $ by following the internal rays of rational argument. Let $ \Omega_p$ be an unbounded shell component. Denote by $ \Omega_{\frac{q_1}{p_1}} $ the bud component attached to $ \Omega_p $ at its boundary point of internal argument $ \frac{q_1}{p_1} ;$ this component has period $ p_1. $ We can then locate the bud components of $ \Omega_{\frac{q_1}{p_1}} :$ the one attached to it at its boundary point of internal argument $ \frac{q_2}{p_2} $ is denoted by $ \Omega_{\frac{q_1 q_2}{p_1 p_2}} , $ and it has period $ p_1p_2.$ Suppose we are at a component $ \Omega_{\frac{q_1 q_2 \ldots q_k}{p_1 p_2 \ldots p_k}} $ of period $ p_1 p_2 \ldots p_k ,$ where the $ \frac{q_j}{p_j} $ are all in $ \mathbb Q/\mathbb Z .$ Following the internal ray of $ \Omega_{\frac{q_1 q_2 \ldots q_k}{p_1 p_2 \ldots p_k}} $ of argument $ \frac{q_{k+1}}{p_{k+1}} \in \mathbb Q/\mathbb Z,$ we can locate a bud component attached to it at the point of internal argument $ \frac{q_{k+1}}{p_{k+1}} . $ The period of this bud is $ p_1p_2 \ldots p_kp_{k+1} . $ This gives us a way to code the components. \\
However, we may need to locate a component attached to the current component at the virtual center. In this case, we saw in Proposition~\ref{prepole order} that the period of that component is the same as the period of the current component. We proved that all shell components appear in quadruplets and each quadruplet has a unique virtual center. Since the four components are attached at their shared virtual center, coding the virtual centers gives a coding of the component quadruplets. By Proposition~\ref{prepole order}, given an asymptotic tract, we can choose a shell component from the quadruplet at a given virtual center. Therefore the coding can be completed by adding another subscript $ i = 1, 2, 3, 4,$ chosen according to which asymptotic tract lies in the periodic domain. \\
\subsection{Unbounded components}
\begin{prop}
For $ \lambda $ of the form $ \lambda = \pm i\sqrt {it} $ or $ \lambda = \pm \sqrt {it} ,$ there exists some $ s > 0 $ such that for all $ t > s ,$ $ f_\lambda $ has, besides the super-attracting fixed point at the origin, exactly one attracting fixed point. These $ \lambda$ belong to unbounded shell components. \\
\end{prop}
\begin{proof}
First we will show that there is an attracting periodic cycle for such $t,$ and hence that the multiplier satisfies $ |\rho(\lambda)| < 1 .$ \\
If $\lambda = \sqrt {it}, \ t \ > 0, $ then
\begin{align*}
f_\lambda (\lambda i) &= \sqrt {it} \tan \big((\lambda i)^2\big) = \sqrt {it} \tan (-it) = - i \sqrt {it} \tanh t , \\
f_\lambda^2 ( \lambda i) &= \sqrt {it} \tan ( -i t \tanh^2 t) = - i \sqrt {it} \tanh( t \tanh^2 t) ,
\end{align*}
and so on.
Therefore
\begin{align*}
|f_\lambda^n (\lambda i)|
&= \big|\!-i \sqrt {it} \tanh \big( t \tanh^2 ( \cdots ( t \tanh^2 t) \cdots )\big)\big| \\
& \leq | \sqrt{it}| = \sqrt t .
\end{align*}
Thus $ f_\lambda^n (\lambda i) $ lies on the ray $ l = \{ -i \sqrt {iy} : y > 0 \} $ for all $n,$ and this ray is forward invariant under $ f_\lambda$ for $ \lambda = \sqrt {it}. $ Moreover $ |f_\lambda ^n ( \lambda i )| < | \sqrt {it} | $ implies that the orbit of the asymptotic value is bounded by $ | \lambda | = \sqrt t ,$ so the sequence $ \{f_\lambda^n\} $ forms a normal family, and therefore $ \lambda = \sqrt {it} $ lies in a shell component for some $ t > 0.$ Therefore the periodic points $ z_j $ of the limit function satisfy $ | z_j | < | \lambda | $ and $ z_j = -i \sqrt {ix_j} $ for some $ x_j > 0 . $ \\
Let $ \{ {z_j = -i\sqrt{ix_j} } \}_{j = 0}^{p-1}, \ p > 1 $ be the set of periodic points and $ U_j $ be the corresponding periodic components labeled such that $ U_0 $ contains an asymptotic tract. The component $U_1$ containing the asymptotic value $ \lambda i $ contains the periodic point $ z_1 $ such that $ | z_1 | > | z_j | $ for $ j \neq 1. $ This implies the asymptotic value is in the component containing the asymptotic tract and therefore the periodic component is invariant. \\
Since the central capture component is a simply connected component containing the origin, it meets the ray $ l,$ and there is some $s > 0$ such that the above holds for all $t > s .$ If $ \lambda = -\sqrt{it}, \ t> 0 ,$ imitating the above calculation we get a fixed point of the form $ i \sqrt{ix} , \ x > 0. $ For $ \lambda = i \sqrt{it} $ or $ \lambda = -i \sqrt{it} $ the proof follows by a similar argument.
\end{proof}
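The invariance of the ray and the bound $|f_\lambda^n(\lambda i)| \le \sqrt t$ used in the proof can be checked by direct iteration; a small sketch with a hypothetical value of $t$:

```python
import cmath, math

# Hypothetical sample value of t; lam = sqrt(i*t) as in the proposition.
t = 2.0
lam = cmath.sqrt(1j * t)

def f(z):
    # the family f_lambda(z) = lam * tan(z^2)
    return lam * cmath.tan(z * z)

z = lam * 1j                         # the asymptotic value lam*i
orbit = []
for _ in range(12):
    z = f(z)
    orbit.append(z)

# From the first step on, the orbit lies on the ray l = {-i*sqrt(i*y) : y > 0}
# (argument -pi/4) and is bounded by |lam| = sqrt(t).
for z in orbit:
    assert abs(cmath.phase(z) + math.pi / 4) < 1e-9
    assert abs(z) <= abs(lam)
```

Since $|\tanh x| < 1$ for $x > 0$, every point of the orbit has modulus strictly below $\sqrt t$, matching the normal-family argument above.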
\vspace{0.5 cm}
In the rest of this section our main goal is to prove that the shell components of period greater than one are bounded. To prove this we need to discuss the boundaries of the unbounded hyperbolic components; the next result describes their asymptotic behavior. We will conclude the section with the final result, Theorem~\ref{shell_comp_bounded}. \\
\begin{prop}
Let $ \Omega_1^j,\ j = 1,2,3,4 ,$ be the unbounded shell components containing $ \lambda = \pm \sqrt {it}, \ t > s > 0 ,$ for some $s$ (see 3.5 for the coding of shell components). The index $j$ denotes the asymptotic tract contained in the periodic domain of $ f_\lambda. $ Then the boundary of $ \Omega_1^j $ is asymptotic to the curve $ \pm {\sqrt {|t|}} \pm i e^{2\sqrt{| t| }}$ as $ \Re \lambda = |t| \to \infty .$ \label{shell boundary}\\
\end{prop}
\begin{proof}
We will prove this only for $ \Omega_1^1 ,$ the unbounded shell component in the first quadrant. The proof for the other components follows by symmetry. Let $ z = z(\lambda) $ be an attracting fixed point of $ f_\lambda $ for $ \lambda \in \Omega_1^1 . $ Then $ f_\lambda(z) = z$ implies that $ \lambda \tan z^2 = z . $ The multiplier map $ \rho_\lambda $ is given by $ \rho_\lambda = 2 \lambda z \sec^2 z^2 $ with $ |\rho_\lambda | < 1 ,$ or equivalently $ | 2 \lambda z \sec^2 z^2 | < 1 .$ So we have,
\begin{align*}
2 \lambda z \sec^2 z^2
&= \frac { 2 \lambda z \sin z^2 } { \sin z^2 \cos^2 z^2 } \\
& = \frac {4 \lambda z \tan z^2} { \sin 2z^2 } \\
& = \frac { 4 z^2 } { \sin 2z^2 } \\
& = \frac {2u} { \sin u}, \qquad u = 2z^2 .
\end{align*}
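The chain of identities above can be verified numerically: choosing a fixed point $z$ with $u = 2z^2$ and $\lambda = z/\tan z^2$, the derivative $2\lambda z \sec^2 z^2$ agrees with $2u/\sin u$. A minimal check with a hypothetical value of $u$:

```python
import cmath

# Hypothetical sample value of u = 2*z^2 at a fixed point.
u = 1.0 + 2.5j
z = cmath.sqrt(u / 2)
lam = z / cmath.tan(z * z)           # chosen so that lam*tan(z^2) = z

assert abs(lam * cmath.tan(z * z) - z) < 1e-12   # z is a fixed point of f_lam

# Multiplier rho = f'_lam(z) = 2*lam*z*sec^2(z^2) equals 2u/sin(u):
rho = 2 * lam * z / cmath.cos(z * z) ** 2
assert abs(rho - 2 * u / cmath.sin(u)) < 1e-9
```

The check uses only the fixed-point relation $\lambda \tan z^2 = z$ and the double-angle identity $\sin 2w = 2 \sin w \cos w$.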
\begin{figure}
\centering
\includegraphics[height= 10cm, width=10cm]{Image1}
\caption{ Curve : $ |H(u)| = 1$}
\end{figure}
So the above condition can be written as $ \big| \frac {2u} {\sin u} \big| < 1 . $ \\
Let $ H(u) = \frac { 2u } { \sin u} . $ \\
\begin{figure}
\centering
\includegraphics[height= 10cm, width=6cm]{Image2}
\end{figure}
In the $ u = x + iy $ plane the curve $ |H(u)| = 1 $ has two branches, symmetric about the $x$-axis, contained in the upper and lower half planes. The curve $ |H(u)| = 1 $ is asymptotic to $ |x| \pm ie^{2|x|} $ as $ |x| \to \infty ,$ and $ |H(u)| < 1 $ in the regions above and below these branches. Viewing these curves in the $\lambda$-plane, we obtain $ \pm \sqrt{|t|} \pm ie^{2\sqrt{|t|}} $ as $ |t| \to \infty .$ \\
\begin{figure}
\centering
\includegraphics [height= 10cm, width=12cm]{Image3}
\end{figure}
Let $ S(u) = \frac { \sqrt u } { \sqrt{2} \tan (u/2) } .$ The set of $u$ satisfying $ | S(u) | \geq 1 $ is unbounded and meets the upper and lower half planes in two unbounded, simply connected domains. If we set $ \lambda = S(u) , $ then $S$ maps each of these unbounded regions to a domain $ \Omega$ in the $ \lambda$-plane for which $ f_\lambda $ has an attracting fixed point; that is, these unbounded regions are mapped into a hyperbolic shell component $ \Omega $ of period one. Since $S$ maps the line $ x = 0, \ y \neq 0 ,$ to $\lambda = \sqrt {it} ,\ t > s > 0 ,$ for some $s,$ we have $ \Omega = \Omega_1^1 . $ The asymptotic behavior of $ \Omega_1^1 $ follows directly from the asymptotic behavior of the curve $ |H(u)| = 1 .$ By the symmetries of $ \mathcal F ,$ the boundaries of the other unbounded shell components behave in the same way.
\end{proof}
\vspace{0.5 cm}
\begin{prop}
Let $ \Omega_2 $ be the bud component \index{bud component}
tangent to $ \Omega_1 $ at $ \lambda_k $ as above. Then the virtual center $ \lambda^* $ of $ \Omega_2 $ is equal to $ s_k i ,$ where $ s_k $ denotes a pole of $ f_\lambda .$
\end{prop}
\begin{proof}
We claim that $ \lambda^* $ is finite. If not, there is a sequence $ \lambda_j $ on some internal ray in $ \Omega_2 $ with one end point at $ \lambda_k $ and the other end tending to $ \lambda^* .$ For simplicity, we omit the subscript $j$ for the sequence in parameter space and use it for the corresponding periodic points. Write $ \lambda = \lambda_1 + i \lambda_2 $ and $ z_j = x_j + iy_j, \ j = 0,1 ,$ where the $ z_j $ are the corresponding periodic points of period 2. We denote $ X_j = \Re z_j^2 $ and $ Y_j = \Im z_j^2 .$ From the equation $ z_1 = \lambda \tan z_0^2 $ we get
\medskip
\[ X_1 = \frac{ \lambda_1 \sin (2X_0) - \lambda_2 \sinh (2Y_0) } { \cos (2X_0) + \cosh (2Y_0) } \ \ \ (A) \]
\[ Y_1 = \frac{ \lambda_1 \sinh (2Y_0) + \lambda_2 \sin(2X_0) }{\cos (2X_0) + \cosh(2Y_0)} \ \ \ (B) \]
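Formulas (A) and (B) are just the real and imaginary parts of $\lambda\tan(X_0 + iY_0)$, via the identity $\tan(x+iy) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2y}$; a quick numerical check with hypothetical sample values:

```python
import cmath, math

# Hypothetical sample values for lam = lam1 + i*lam2 and z0^2 = X0 + i*Y0.
lam1, lam2 = 1.2, 0.7
X0, Y0 = 0.4, 1.1

denom = math.cos(2 * X0) + math.cosh(2 * Y0)
X1 = (lam1 * math.sin(2 * X0) - lam2 * math.sinh(2 * Y0)) / denom   # (A)
Y1 = (lam1 * math.sinh(2 * Y0) + lam2 * math.sin(2 * X0)) / denom   # (B)

# Compare with direct evaluation of lam * tan(X0 + i*Y0):
w = (lam1 + 1j * lam2) * cmath.tan(X0 + 1j * Y0)
assert abs(w.real - X1) < 1e-12
assert abs(w.imag - Y1) < 1e-12
```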
As $ \lambda \to \infty , \ Y_0 \to \infty .$ We have $ \lambda_2 \geq e^{2\sqrt t } . $ Then (A) implies that $ |X_1| \approx \lambda_2 \to \infty . $ Using periodicity we can interchange $ X_1, \ Y_1 $ with $ X_0, \ Y_0 $ in the equations (A) and (B). As $ | X_1 | \to \infty ,$ the term $ \lambda_2 \sin(2X_1) $ in the expression for $ Y_0 $ oscillates. Since $ Y_0 \to \infty ,$ the term $ | \lambda_1 \sinh (2Y_0) | $ must grow faster than $ | \lambda_2 \sin(2X_0) | ,$ which implies that $ 2 Y_0 \approx {\pm} \lambda_1 \to \infty . $ Using periodicity, we similarly get $ |X_0| \approx {\pm } \lambda_2 \to \infty $ and $ 2Y_1 \approx {\pm } \lambda_2 \to \infty .$ Therefore we can estimate the multiplier map as
\medskip
\[ |f_\lambda'(z_i)| = | 2 \lambda z_i \sec^2 z_i^2 | \approx 2 \lambda_2^2 \, e^{\pm 2\lambda_1} \]
so that the multiplier of the cycle, $ \rho_\lambda = f_\lambda'(z_0)\, f_\lambda'(z_1), $ satisfies $ |\rho_\lambda| \approx 4 \lambda_2^4 e^{\pm 4\lambda_1} $ or $ |\rho_\lambda| \approx 4 \lambda_2^4 $. So the multiplier map grows like $4 \lambda_2^4 $ along the internal ray as $ \lambda $ tends to $ \lambda^*. $ Therefore the multiplier cannot tend to zero. \\
Thus $ \lambda^* $ is finite and is a pre-pole of $ f_{\lambda^*} $ of order one. For each bifurcation parameter $ \lambda_k \in \partial \Omega_1^i \cap \partial \Omega_2^j ,$ there is some internal curve $ \gamma_k $ in $ \Omega_2^j $ with one end at $ \lambda_k $ and the other end at $ s_n $ for some $n$. From the discussion in Section~\ref{bifurcation}, the curves $ \gamma_k $ are all disjoint and lie in order. By Lemma~\ref{prepole}, each $ \gamma_k $ is in one-to-one correspondence with some $ s_n \in \partial \Omega_2 .$ Thus, by renaming the virtual center if needed, we get the conclusion.
\end{proof}
\vspace{0.5 cm}
\begin{thm}
The hyperbolic shell components $ \Omega_p $ are bounded for $ p > 1 . $ \label{shell_comp_bounded}
\end{thm}
\begin{proof}
For each integer $n$, consider the parameters $ \pm s_n , \pm s_n i $ and the period-two shell components $ \pm \Omega_{2,n}, \pm \Omega_{2,n} i , \pm \overline{ \Omega_{2,n}} , \pm \overline{ \Omega_{2,n}} i $ budding off the shell component of period one and attached to the respective virtual centers. Choose curves $ \pm \gamma_n , \pm \gamma_n i , \pm \overline{\gamma_n} , \pm \overline{\gamma_n} i $ in $ \pm \Omega_{2,n}, \pm \Omega_{2,n} i, \pm \overline{ \Omega_{2,n}} , \pm \overline{ \Omega_{2,n}} i $ respectively, such that these curves, together with the boundary arcs of $ \Omega_1^j , \ j = 1,2,3,4 ,$ enclose a region. Any shell component $ \Omega_p, \ p > 1 ,$ other than $ \pm \Omega_{2,n}, \pm \Omega_{2,n} i , \pm \overline{ \Omega_{2,n}}, \pm \overline{ \Omega_{2,n}} i $ lies in one of these bounded regions. That proves $ \Omega_p $ is bounded.
\end{proof}
\vspace{0.5 cm}
\begin{cor}
All capture components are bounded.
\end{cor}
\begin{proof}
Given a capture component $ \mathcal C_{n_k} ,$ locate its center $ c_{\bf n_k} .$ Now choose $ \pm s_{n_{k+1}} , \ \pm s_{n_{k+1}} i $ and follow the technique used in the proof of Theorem~\ref{shell_comp_bounded} to find a region that encloses $ \mathcal C_{n_k} .$ That proves the result.
\end{proof}
\pagebreak
\section{Introduction}
Natural Language Inference (NLI) is one of the most commonly used NLP tasks, particularly in the scope of evaluating models for their language understanding capabilities.
Since their emergence, pre-trained language models (PLMs) have been highly successful on standard NLI datasets, such as the Multi-Genre Natural Language Inference \cite[MultiNLI]{williams-etal-2018-broad}.
However, recent analytical studies have revealed that their success is partly due to their reliance on spurious correlations between superficial features of the input texts and gold labels in these datasets \cite{poliak-etal-2018-hypothesis,bhargava-etal-2021-generalization}.
As a result, performance usually drops on out-of-distribution datasets where such correlations do not hold.
Several proposals have been put forth to enhance the robustness of models to the known and unknown biases and improve performance on the so-called challenging datasets \cite{stacey-etal-2020-avoiding,utama-etal-2020-mind,asael-etal-2022-generative}.
One of the well-known dataset biases in NLI models is the spurious correlation of the \textit{entailment} label and high word-overlap between premise and hypothesis.
A number of challenging sets are designed to showcase the tendency of PLMs to predict entailment for most such cases.
HANS \cite{mccoy-etal-2019-right} is arguably the most widely used dataset in this group.
Constructed from human-made linguistic patterns, the dataset focuses on high-overlapping samples, the non-entailment subset of which is deemed challenging for NLI models.
Most current debiasing methods have considered the word-overlap bias as one of their main targets and have shown substantial improvements on HANS \cite{mendelson-belinkov-2021-debiasing,min-etal-2020-syntactic}.
\begin{table*}[ht!]
\centering
\scalebox{0.85}{
\begin{tabular}{lp{13cm}l}
\toprule
\bf{Overlap} & \bf{Sample} & \bf{Label} \\
\midrule
\multirow{4}{*}{Full ($1.0$)} & \par P: A little kid in blue is sledding down a snowy hill. \par H: A little kid in blue sledding. & \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
& \par P: The young lady is giving the old man a hug.
\par H: The young man is giving the old man a hug. & \multirow{2}{*}{Non-Entailment} \\
\midrule
\multirow{2}{*}{$\frac{12}{13}=0.923$} & \par P: A woman in a blue shirt and green hat looks up at the camera. \par H: A woman \colorbox{purp}{wearing} a blue shirt and green hat looks at the camera & \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
\multirow{2}{*}{$\frac{11}{12}=0.917$} & \par P: Two men in wheelchairs are reaching in the air for a basketball. \par H: Two \colorbox{purp}{women} in wheelchairs are reaching in the air for a basketball. & \multirow{2}{*}{Non-Entailment} \\
\midrule
\multirow{4}{*}{$\frac{1}{14}=0.071$} & \par P: Several young people sit at \colorbox{green}{a} table playing poker.
\par H: Youthful Human beings are gathered around \colorbox{green}{a} flat surface to play a card game.
& \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
\multirow{4}{*}{$\frac{1}{11}=0.091$} & \par P: A blond \colorbox{green}{woman} in a white dress sits in a flowering tree while holding a white bird.
\par H: The \colorbox{green}{woman} beats two eggs to make breakfast for her husband.
& \multirow{2}{*}{Non-Entailment} \\
\midrule
\multirow{4}{*}{None ($0.0$)} & \par P: A couple sits in the grass.
\par H: People are outside.
& \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
& \par P: An older women tending to a garden.
\par H: The lady is cooking dinner.
& \multirow{2}{*}{Non-Entailment} \\
\bottomrule
\end{tabular}
}
\caption{NLI examples with different degrees of word-overlap (between premise and hypothesis), where the overlap is the ratio of hypothesis words that are shared with the premise. The highlighted words are the common (in green) or different (in purple) words (the samples are picked to reflect extreme cases across the word-overlap spectrum).}
\label{tab:my_label}
\end{table*}
In this paper, we revisit the word-overlap bias in NLI and the effectiveness of existing debiasing techniques.
Despite the popularity of this type of bias, we find that some of its aspects are generally ignored in the research community.
Considering word-overlap as a feature whose value ranges from no to full overlap, and NLI as a task with the two labels entailment and non-entailment, we show that there are spurious correlations beyond the well-known one between high word-overlap and entailment.
Specifically, as shown in Figure \ref{fig:Reverse-Bias}, we see a clear bias towards non-entailment for the low and no word-overlap values (reflected in the high performance on the non-entailment label, which comes at the price of reduced performance on the entailment class).
We will refer to this type of bias as \textit{reverse} word-overlap throughout the paper.
Through a set of experiments, we demonstrate that the overlooked reverse word-overlap bias exists in popular NLI datasets, such as MNLI and SNLI, as well as in the predictions of PLMs. Moreover, our results suggest that while existing debiasing methods can mitigate the overlap bias in NLI models to some extent, they are ineffective in resolving the reverse bias.
We further analyze how NLI models employ minority instances to enhance their generalization.
Focusing on the forgettable debiasing method \cite{yaghoobzadeh-etal-2021-increasing}, we find that eliminating HANS-like examples, as well as the reverse ones, does not noticeably hurt generalization.
In search of the origin of the bias, we employ prompt-based techniques to check whether the bias stems from pre-training.
We also verify the robustness of PLMs in a few-shot learning experiment with controlled and balanced training sets.
Our results suggest that PLMs do not exhibit any bias towards a specific label.
Nevertheless, introducing a few samples triggers the bias toward the entailment label.
Furthermore, balancing the training examples with respect to their word-overlap prevents the emergence of bias to some extent.
Our contributions can be summarized as follows:
\begin{itemize}
\item
We expand our understanding of the word-overlap bias in NLI by revealing an unexplored spurious correlation between low word-overlap and non-entailment.
\item We analyze how debiasing methods work for the whole spectrum of word-overlap bias, finding that they generally fail at addressing bias for the low and non-overlapping cases.
\item To explore the origin of word-overlap bias in PLMs, we design several new experiments showing that, even when exposed to a few training examples, PLMs get biased towards predicting entailment.
\end{itemize}
\section{Natural Language Inference}
\label{sec:nli}
In NLI, a model is provided with two input sentences, namely \textit{premise} and \textit{hypothesis}.
The task for the model is to predict whether the hypothesis is true (\textit{entailment}), false (\textit{contradiction}), or undetermined (\textit{neutral}) given the premise.
\subsection{Bias in NLI Models}
Analyses of NLI models have demonstrated that they are sensitive to shortcuts that appear in their training data.
Several types of bias have been investigated in the literature, including hypothesis-only prediction, spurious correlations between certain words and labels (e.g., negation words and the non-entailment label), sensitivity to the length of the hypothesis, and lexical overlap between the premise and hypothesis \cite{gururangan-etal-2018-annotation,poliak-etal-2018-hypothesis,mccoy-etal-2019-right,wu-etal-2022-generating}.
Relying on these spurious features hampers the language understanding ability of NLI models, leading to poor performance on out-of-distribution datasets where such superficial correlations do not hold \cite{he-etal-2019-unlearn,mccoy-etal-2019-right}.
\paragraph{Word-Overlap Bias.} Among the detected dataset biases, word-overlap is a quite well-studied shortcut in the NLI task \cite{zhou-bansal-2020-towards,mendelson-belinkov-2021-debiasing}.
We define word-overlap ($wo$) as the ratio of words in the hypothesis ($h$) that are shared with the premise ($p$), i.e., $\frac{|h\cap p|}{|h|}$.
Table \ref{tab:my_label} shows examples of different degrees of word-overlap.
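For concreteness, the overlap ratio can be sketched as follows (lowercased whitespace tokenization is our assumption here, not necessarily the exact preprocessing used in the experiments):

```python
def word_overlap(premise: str, hypothesis: str) -> float:
    """Ratio of hypothesis tokens that also occur in the premise,
    i.e. |h cap p| / |h|, with hypothesis tokens counted with multiplicity."""
    premise_vocab = set(premise.lower().split())
    hyp_tokens = hypothesis.lower().split()
    if not hyp_tokens:
        return 0.0
    shared = sum(1 for tok in hyp_tokens if tok in premise_vocab)
    return shared / len(hyp_tokens)
```

For instance, `word_overlap("a b c", "a b")` gives full overlap ($1.0$) while `word_overlap("a b c", "d e")` gives none ($0.0$).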
\begin{table*}
\centering
\setlength{\tabcolsep}{21pt}
\scalebox{0.85}{
\begin{tabular}{r c c c c c }
\toprule
& MNLI-dev & HANS & HANS$+$ & HANS$-$ & W\small{A}\normalsize{NLI} \\
\midrule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{BERT}} \\
\cmidrule{2-6}
Baseline & 84.2 \small $\pm 0.3$ &
63.9 \small $\pm 1.7$ &
98.5 \small $\pm 1.2$ &
29.3 \small $\pm 4.6$ &
56.9 \small $\pm 0.6$ \\
\cmidrule{2-6}
Long-tuning & 83.4 \small $\pm 0.8$ &
65.8 \small $\pm 2.3$ &
99.0 \small $\pm 0.2$ &
32.6 \small $\pm 4.4$ &
58.0 \small $\pm 0.6$ \\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 82.7 \small $\pm 0.3$ &
73.8 \small $\pm 0.5$ &
91.8 \small $\pm 0.4$ &
55.9 \small $\pm 1.3$ &
59.0 \small $\pm 0.3$ \\
PoE & 80.0 \small $\pm 0.8$ &
66.9 \small $\pm 2.2$ &
71.6 \small $\pm 3.7$ &
62.2 \small $\pm 2.7$ &
71.6 \small $\pm 0.7$ \\
\midrule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{RoBERTa}} \\
\cmidrule{2-6}
Baseline & 87.2 \small $\pm 0.2$ &
73.3 \small $\pm 3.4$ &
98.5 \small $\pm 1.0$ &
48.2 \small $\pm 7.8$ &
59.7 \small $\pm 1.6$\\
\cmidrule{2-6}
Long-tuning & 86.9 \small $\pm 0.3$ &
73.0 \small $\pm 1.7$ &
97.8 \small $\pm 1.2$ &
48.2 \small $\pm 4.2$ &
60.3 \small $\pm 0.1$ \\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 85.6 \small $\pm 0.3$ &
78.9 \small $\pm 0.6$ &
88.1 \small $\pm 2.4$ &
69.7 \small $\pm 2.3$ &
62.0 \small $\pm 1.4$\\
PoE & 84.6 \small $\pm 0.1$ &
77.0 \small $\pm 1.5$ &
79.3 \small $\pm 6.2$ &
71.4 \small $\pm 3.7$ &
73.4 \small $\pm 0.1$ \\
\bottomrule
\end{tabular}}
\caption{The average accuracy of the baseline models and debiasing methods on the MNLI development (matched) set as the \textit{in-distribution} and W\small{A}\normalsize{NLI} and HANS as the \textit{out-of-distribution} datasets (HANS$+$ and HANS$-$ are entailment and non-entailment subsets, respectively).}
\label{tab:mnli}
\end{table*}
\begin{figure*}
\centering
\includegraphics[scale=0.2]{EMNLP 2022/images/reverse-bias}
\caption{The distribution of instances across word-overlap bins (SNLI dataset).}
\label{fig:probe-stat}
\end{figure*}
\subsection{Debiasing Methods}
\label{sec:debiasing}
Creating high-quality datasets without any spurious correlations between instances and gold labels is an arduous and expensive process \cite{gardner-etal-2021-competency}, making it almost inevitable that a dataset contains biases to some extent.
Therefore, to obtain a robust model, it is essential to take extra steps to debias against dataset artifacts.
The past few years have seen several debiasing methods \cite{karimi-mahabadi-etal-2020-end,utama-etal-2020-mind,utama-etal-2020-towards,belinkov-etal-2019-dont}.
For our experiments, we opted for three different debiasing approaches.
We evaluate the effectiveness of these techniques in mitigating the overlap bias and its reverse.
\paragraph{Long-tuning.}
\citet{tu-etal-2020-empirical} have shown that fine-tuning NLI models for more epochs can enhance the generalizability of LMs over challenging datasets. Following their suggestion, we fine-tuned the models for 20 epochs on the MNLI dataset.
\paragraph{Forgettable Examples.}
\citet{yaghoobzadeh-etal-2021-increasing} find minority examples without prior knowledge of the dataset artifacts.
In this method, the minority examples are taken to be the samples that were never learned, or were learned once and then forgotten by the model. The already trained NLI model is then fine-tuned on this subset for a few more epochs.
Following the authors' suggestion, to find the forgettable examples, we utilized a simple Siamese Bag of Words (BoW) model where the sentence representations of the premise and hypothesis are the average over their word embeddings.
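The selection criterion can be sketched with a hypothetical helper, assuming per-epoch correctness flags have been recorded for each training example (the exact bookkeeping of \citet{yaghoobzadeh-etal-2021-increasing} may differ):

```python
def is_forgettable(correct_per_epoch):
    """correct_per_epoch: list of booleans, one per epoch, recording whether
    the (weak) model classified this training example correctly.
    An example is 'forgettable' if it was never learned, or if it was
    learned at some epoch and misclassified again at a later epoch."""
    if not any(correct_per_epoch):
        return True  # never learned
    # a forgetting event: correct at epoch t, incorrect at epoch t+1
    return any(was_correct and not still_correct
               for was_correct, still_correct
               in zip(correct_per_epoch, correct_per_epoch[1:]))
```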
\paragraph{Product of Experts (PoE).}
In this method, a weak model is supposed to learn superficial features in the input. The weak learner's output is then used to normalize the main model's predictions on over-confident examples. Following previous studies \cite{karimi-mahabadi-etal-2020-end,DBLP:conf/iclr/Sanh0BR21}, we employed the following combination strategy for taking into account both weak learner and main model predictions:
\begin{equation}
y = softmax(\log p_w + \log p_m )
\end{equation}
where $p_w$ and $p_m$ are the outputs of the weak learner and the main model, respectively. The robust model is trained using a cross-entropy loss function based on $y$.
We used TinyBERT \cite{jiao-etal-2020-tinybert} as our weak learner.
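The combination above can be sketched in plain Python; in practice it operates on model logits inside the training loop, but here `p_w` and `p_m` are taken to be probability vectors for readability:

```python
import math

def log_softmax(xs):
    """Numerically stable log-softmax over a list of values."""
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

def poe_combine(p_w, p_m):
    """softmax(log p_w + log p_m): the robust prediction is the
    renormalized elementwise product of the expert probabilities."""
    combined = [math.log(w) + math.log(m) for w, m in zip(p_w, p_m)]
    return [math.exp(x) for x in log_softmax(combined)]
```

Note that when the weak learner is uninformative (uniform), the combined distribution reduces to the main model's prediction; when the weak learner is confident on a biased example, it reshapes the main model's output and thus dampens the training signal on that example.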
\subsection{Experimental Setup}
\label{sec:setup1}
\paragraph{Datasets.} In our experiments, we opted for the Multi-Genre Natural Language Inference dataset \cite[MNLI]{williams-etal-2018-broad} for training the NLI models.
The dataset contains 433k training examples.
Since the gold labels for the test set are not publicly available, we follow previous work and report results on the \textit{development-matched} (MNLI-dev in the tables).
Also, following the convention in previous studies, we merge neutral and contradiction examples into the non-entailment group.
As challenging datasets, we considered HANS \cite{mccoy-etal-2019-right} and W\small{A}\normalsize{NLI} \cite{liu-etal-2022-wanli}.
In the former dataset, each instance is curated in a way that all words of the hypothesis are also observed in the premise, irrespective of the word order.
Previous work has shown that biased NLI models tend to perform poorly on HANS, particularly for the non-entailment class \cite{yaghoobzadeh-etal-2021-increasing}.
The latter challenging set was built by employing GPT-3 \cite{GPT3-NEURIPS2020} to generate high-quality instances, followed by filtering carried out by human crowd-workers.
Quality tests on W\small{A}\normalsize{NLI} indicate that the dataset contains fewer artifacts compared to MNLI.
\paragraph{Models.} As for PLMs, we opted for the base version of BERT and RoBERTa \cite{devlin-etal-2019-bert,liu2020roberta} and fine-tuned them for three epochs as our baselines.
We trained the models with a learning rate of 2e-5, employing the Adam optimizer for three different random seeds.
The batch size was set to 32 with a max length of 128.
All the reported results are based on three random seeds.
\subsection{Results}
Table \ref{tab:mnli} shows the results for the baseline models (BERT and RoBERTa) and the three debiasing techniques on different datasets.
The bias in the baseline model is highlighted by the performance contrast across the entailment (HANS$+$) and non-entailment (HANS$-$) subsets.
As can be seen, the three debiasing methods are generally effective in softening the biased behavior, reflected by the improved performance on HANS$-$ (and, in turn, HANS), and also W\small{A}\normalsize{NLI}.
\begin{table*}
\centering
\setlength{\tabcolsep}{22pt}
\scalebox{0.85}{
\begin{tabular}{l c c c c}
\toprule
\multirow{2}{*}{\bf Overlap} &
\multicolumn{2}{c}{\textbf{BERT}} &
\multicolumn{2}{c}{\textbf{RoBERTa}}
\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
& \textbf{Entailment} & \textbf{Non-Entailment} & \textbf{Entailment} & \textbf{Non-Entailment} \\
\midrule
{Full} & 99.7 \small $\pm 0.1$ &
\underline{13.3} \small $\pm 1.4$ &
99.7 \small $\pm 0.1$ &
\underline{17.6} \small $\pm 0.9$ \\
{[0.8, 1.0)} & 92.9 \small $\pm 0.0$ &
83.0 \small $\pm 1.5$ &
95.9 \small $\pm 0.6$ &
92.7 \small $\pm 2.4$ \\
\cmidrule(lr){2-5}
{[0.6, 0.8)} & 85.2 \small $\pm 0.4$ &
86.2 \small $\pm 1.6$ &
91.5 \small $\pm 1.4$ &
84.5 \small $\pm 2.8$
\\
{[0.4, 0.6)} & 74.2 \small $\pm 0.1$ &
91.9 \small $\pm 1.1$ &
85.8 \small $\pm 2.4$ &
90.2 \small $\pm 2.4$
\\
{[0.2, 0.4)} & 64.5 \small $\pm 0.6$ &
95.1 \small $\pm 0.6$ &
78.5 \small $\pm 2.8$ &
93.8 \small $\pm 1.6$
\\
\cmidrule(lr){2-5}
{(0.0, 0.2)} & \underline{55.5} \small $\pm 1.4$ &
96.7 \small $\pm 0.5$ &
\underline{68.6} \small $\pm 3.3$ &
96.0 \small $\pm 1.2$
\\
{None} & {61.6} \small $\pm 1.3$ &
95.2 \small $\pm 0.2$ &
{77.2} \small $\pm 3.4$ &
93.6 \small $\pm 1.5$ \\
\bottomrule
\end{tabular}}
\caption{The accuracy of the two NLI models across different overlap bins and on both subsets. The lowest numbers in each column are underlined.}
\label{tab:reverse-performance}
\end{table*}
\section{Reverse Word-Overlap}
\label{sec:reverse-wo}
Considering the word-overlap bias as a spectrum, the existing studies have mainly focused on a small subset of the spectrum, i.e., the case with full word-overlap and its spurious correlation with the entailment label.
In this section, we evaluate the performance of NLI models on other areas of the spectrum and with respect to both labels (entailment and non-entailment) to broaden our insights on the robustness of these models considering the word-overlap feature.
\subsection{Probing Dataset}
\label{sec:setup}
As for this probing study, we experimented with the SNLI dataset \cite{bowman-etal-2015-large}, merging the training, development, and test sets to build a unified evaluation set.
The set was split into seven bins based on the degree of overlap.
The statistics are reported in Figure \ref{fig:probe-stat}.
As an example, the $[0.6,0.8)$ bin contains samples that have a word overlap (between premise and hypothesis) of greater than (and equal to) $0.6$ and less than $0.8$.
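The binning can be sketched as follows (the edges follow the seven bins used in our probing set; treating exact $0$ and $1$ as the separate ``None'' and ``Full'' bins reflects how the bins are reported in the tables):

```python
def overlap_bin(wo: float) -> str:
    """Map an overlap ratio in [0, 1] to one of the seven probing bins."""
    if wo == 0.0:
        return "None"
    if wo == 1.0:
        return "Full"
    edges_labels = [(0.2, "(0.0, 0.2)"), (0.4, "[0.2, 0.4)"),
                    (0.6, "[0.4, 0.6)"), (0.8, "[0.6, 0.8)"),
                    (1.0, "[0.8, 1.0)")]
    for edge, label in edges_labels:
        if wo < edge:
            return label
    return "[0.8, 1.0)"  # guards against float rounding just below 1
```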
\subsection{Results}
Unless specified otherwise, the experimental setup in this experiment is the same as the one reported in Section \ref{sec:setup1}.
Table \ref{tab:reverse-performance} reports the results across different word overlap bins for both BERT and RoBERTa and for both labels.
As expected, high contrast is observed on the full overlap subset: near-perfect NLI performance on the entailment, while poor performance on non-entailment, suggesting a strong bias towards the entailment label.
This is the conventional type of NLI bias that has been usually discussed in previous studies.
The HANS challenging dataset is constructed based on the same type of bias.
However, surprisingly, the results show that this biased behavior only exists for samples with full overlap.
In fact, no notable bias is observed even for the high overlap samples in the [$0.8$, $1$) bin.
This observation further narrows down the scope of HANS as a challenging dataset and raises questions on the robustness of models developed based on the dataset.
\paragraph{Reverse bias.} Interestingly, the results
in Table \ref{tab:reverse-performance} shed light on another inherent spurious correlation that exists between NLI performance and the degree of word-overlap.
Particularly towards the non-overlap extreme, the performance drops on entailment and increases on non-entailment samples. In the (0.0, 0.2) bin, we see the largest gap: 55.5 entailment vs 96.7 non-entailment for the BERT model.
We refer to the biased behavior of NLI models on the low word-overlapping samples towards the non-entailment label as the \textit{Reverse bias}.
It is also worth mentioning that based on the proposed results, reverse bias covers a broader range of bins in comparison with the word-overlap bias.
\begin{figure*}[h!]
\centering
\includegraphics[width=
13cm]{EMNLP 2022/images/reverse-bias-full}
\caption{The performance of the baseline and the three debiasing methods across the seven word-overlap bins for both labels and for BERT and RoBERTa. Across the spectrum, the debiasing techniques seem to be effective only on samples with high (particularly full) word-overlap on the non-entailment subset and are either ineffective (or even harmful) towards the other end of the overlapping spectrum and on the entailment subset.}
\label{fig:Reverse-Bias}
\end{figure*}
\subsection{Effectiveness of Debiasing Methods}
Figure \ref{fig:Reverse-Bias} shows the performance of the three debiasing methods (described in Section \ref{sec:debiasing}) across the seven bins in our word-overlap analysis.
As can be observed, debiasing methods improve over the baseline on the full-overlap (``Full'' in Figure \ref{fig:Reverse-Bias}) and non-entailment subset, with PoE proving the most effective.
The improvement is expected since the results on the challenging dataset, HANS, suggest the same.
This, however, comes at the price of
reduced performance on the entailment subset, specifically in the BERT model.
As we move toward the non-overlap end of the spectrum (``None'' in Figure \ref{fig:Reverse-Bias}), the performance gap between the entailment and non-entailment labels grows, mainly due to the drop in entailment performance.
Interestingly, the experimental results reveal that debiasing methods are clearly ineffective in addressing the reverse bias and perform similarly to the baseline models.
\section{Analysis}
\label{sec:analysis}
\begin{figure*}
\centering
\subfigure[Non-Entailment]{
\includegraphics[width=5.cm,height=4.cm]{images/plt1.png}\label{fig:dist-minority}}
\hspace{0.1\textwidth}
\subfigure[Entailment]{
\includegraphics[width=5.cm,height=4.cm]{images/plt2.png} \label{fig:dist-minority-ent}}
\caption{Normalized distribution of instances with respect to their word-overlap in the original training set of MNLI and the subset identified by \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\,. }
\label{fig:data-dist}
\end{figure*}
\subsection{Role of Minority Examples}
In the context of word-overlap bias, the non-entailment instances that have full overlap (between premise and hypothesis) are usually referred to as \textit{minority} examples.
\citet{tu-etal-2020-empirical} show that minority examples of the training set play a crucial role in the generalizability of language models, and eliminating them can significantly hurt performance on challenging datasets, such as HANS.
\citet{yaghoobzadeh-etal-2021-increasing} relate the forgettables with the minority examples by observing the difference in word-overlap distribution in forgettables.
We carry out a set of experiments on the \textit{forgettable} approach, where a subset of the training data is chosen for further fine-tuning of models (66$k$ in our NLI experiments for the \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, method).
We extend the forgettable analysis to the low word-overlap or reverse minority examples.
We also verify the role played by minority examples in the performance of debiasing methods.
As the first step, we compare the distribution of instances with respect to their overlap in the original training set of MNLI and its forgettable subset.
The results are shown in Figure \ref{fig:data-dist}.
As can be seen, the forgettable subset tends to have better coverage over the minority subset than the original MNLI training set. See the right side of Figure \ref{fig:dist-minority} and the left side of Figure \ref{fig:dist-minority-ent}.
One can hypothesize that better coverage of minority examples is the reason behind the effectiveness of the forgettable approach.
To verify this hypothesis, we eliminate several subsets from \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, and fine-tune the NLI models with the remaining samples.
We considered the following four settings:
\begin{itemize}
\item \textbf{Full $-$ NEnt:} Full overlap between premise and hypothesis with the non-entailment label.
\item \textbf{None $-$ Ent:} No overlap and entailment label.
\item \textbf{[0.8, 1.0] $-$ NEnt:} More than 80\% overlap and non-entailment label.
\item \textbf{[0.0, 0.2] $-$ Ent:} Less than 20\% overlap and entailment label.
\end{itemize}
The results are reported in Table \ref{tab:minority-role}.
Interestingly, we observe that removing HANS-like examples (Full$-$NEnt), which were hypothesized to play the main role in improving performance on the challenging datasets, does not affect the performance of \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, notably.
The observation is consistent even for larger subsets of high-overlapping instances ([0.8, 1]$-$NEnt).
Discarding the reverse group (low-overlapping entailment samples) yields a similar pattern. So, it can be inferred that such samples do not play the primary role in the debiasing methods' effectiveness.
This opens up questions on how NLI models extrapolate to patterns unseen during training and how debiasing methods enhance their generalization over out-of-distribution data.
This is particularly interesting in light of observations made by \citet{tu-etal-2020-empirical} that standard training does not enable such extrapolation.
We leave further investigations in this area to future work.
\begin{table*}[h!]
\centering
\setlength{\tabcolsep}{14pt}
\scalebox{0.85}{
\begin{tabular}{r c c c c c r}
\toprule
& MNLI-dev & HANS & HANS$+$ & HANS$-$ & W\small{A}\normalsize{NLI} & Eliminated \\
\midrule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{BERT}} &
\multicolumn{1}{c}{}\\
\cmidrule{2-6}
\it Baseline & 84.2 \small $\pm 0.3$ &
63.9 \small $\pm 1.7$ &
98.5 \small $\pm 1.2$ &
29.3 \small $\pm 4.6$ &
56.9 \small $\pm 0.6$ \\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 82.7 \small $\pm 0.3$ &
73.8 \small $\pm 0.5$ &
91.8 \small $\pm 0.4$ &
55.9 \small $\pm 1.3$ &
59.0 \small $\pm 0.3$ &
\\
\cmidrule{2-7}
Full~$-$~NEnt & 82.8 \small $\pm 0.4$ &
71.7 \small $\pm 0.9$ &
93.2 \small $\pm 0.4$ &
50.3 \small $\pm 2.0$ &
59.4 \small $\pm 0.5$ & 782\\
[0.8, 1.0]~$-$~NEnt & 83.2 \small $\pm 0.2$ &
72.3 \small $\pm 0.8$ &
93.5 \small $\pm 1.3$ &
51.1 \small $\pm 2.9$ &
58.8 \small $\pm 0.5$ & 6,350 \\
[0.0, 0.2]~$-$~~~~Ent & 82.9 \small $\pm 0.4$ &
73.7 \small $\pm 0.7$ &
91.9 \small $\pm 0.8$ &
55.4 \small $\pm 2.1$ &
59.5 \small $\pm 0.7$ & 1,801 \\
None~$-$~~~~Ent & 82.8 \small $\pm 0.5$ &
73.8 \small $\pm 0.8$ &
92.1 \small $\pm 1.5$ &
55.5 \small $\pm 3.1$ &
59.3 \small $\pm 0.6$ & 482 \\
\midrule
&
\multicolumn{5}{c}{\textbf{RoBERTa}}
\\
\cmidrule{2-6}
\it Baseline & 87.2 \small $\pm 0.2$ &
73.3 \small $\pm 3.4$ &
98.5 \small $\pm 1.0$ &
48.2 \small $\pm 7.8$ &
59.7 \small $\pm 1.6$\\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 85.6 \small $\pm 0.3$ &
78.9 \small $\pm 0.6$ &
88.1 \small $\pm 2.4$ &
69.7 \small $\pm 2.3$ &
62.0 \small $\pm 1.4$ & \\
\cmidrule{2-7}
Full~$-$~NEnt & 86.4 \small $\pm 0.2$ &
79.1 \small $\pm 1.3$ &
92.1 \small $\pm 1.6$ &
66.1 \small $\pm 4.0$ &
62.2 \small $\pm 0.9$ &
782 \\
[0.8, 1.0]~$-$~NEnt & 86.6 \small $\pm 0.2$ &
78.4 \small $\pm 1.0$ &
95.9 \small $\pm 0.8$ &
60.8 \small $\pm 2.8$ &
61.8 \small $\pm 0.7$ &
6,350 \\
[0.0, 0.2]~$-$~~~~Ent & 86.1 \small $\pm 0.2$ &
79.3 \small $\pm 1.3$ &
89.8 \small $\pm 1.2$ &
68.7 \small $\pm 2.1$ &
62.3 \small $\pm 0.9$ &
1,801 \\
None~$-$~~~~Ent & 86.1 \small $\pm 0.2$ &
79.1 \small $\pm 1.2$ &
88.6 \small $\pm 2.1$ &
69.5 \small $\pm 2.9$ &
62.1 \small $\pm 0.7$ &
482\\
\bottomrule
\end{tabular}}
\caption{The performance of \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, after eliminating four different subsets. \textit{Eliminated} denotes the number of eliminated examples in each setting. All the subsets tend to be in the same performance ballpark with respect to the generalizability of the model on the out-of-distribution datasets (W\small{A}\normalsize{NLI} and HANS). }
\label{tab:minority-role}
\end{table*}
\begin{table*}
\centering
\setlength{\tabcolsep}{22pt}
\scalebox{0.85}{
\begin{tabular}{c c c c c c }
\toprule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{Baseline}} \\
\cmidrule(lr){2-6}
& MNLI-dev & HANS & HANS$+$ & HANS$-$ & W\small{A}\normalsize{NLI}
\\
\midrule
Zero-shot & 42.0 & 55.3 & 57.5 & 53.1 & 58.0
\\
\midrule
$K = ~16$ & 45.6 \small $\pm ~1.2$ &
53.6 \small $\pm ~1.3$ &
73.2 \small $\pm 16.5$ &
34.4 \small $\pm 13.9$ &
54.7 \small $\pm ~2.3$ \\
$K = ~32$ & 46.9 \small $\pm ~0.6$ &
50.8 \small $\pm ~0.8$ &
98.3 \small $\pm ~1.2$ &
~3.3 \small $\pm ~2.8$ &
50.1 \small $\pm ~2.2$ \\
$K = ~64$ & 49.6 \small $\pm ~0.3$ &
50.3 \small $\pm ~0.3$ &
99.4 \small $\pm ~0.5$ &
~1.1 \small $\pm ~1.1$ &
48.4 \small $\pm ~4.3$ \\
$~K = 128$ & 52.7 \small $\pm ~0.9$ &
50.0 \small $\pm ~0.0$ &
99.9 \small $\pm ~0.2$ &
~0.1 \small $\pm ~0.2$ &
45.1 \small $\pm ~0.4$ \\
$~K = 256$ & 56.4 \small $\pm ~0.4$ &
50.7 \small $\pm ~0.8$ &
98.1 \small $\pm ~2.2$ &
~3.3 \small $\pm ~3.9$ &
50.3 \small $\pm ~0.0$ \\
$~K = 512$ & 61.4 \small $\pm ~1.1$ &
50.0 \small $\pm ~0.1$ &
100 \small $\pm ~0.0$ &
~0.1 \small $\pm ~0.1$ &
46.2 \small $\pm ~2.0$ \\
\midrule
\multicolumn{1}{c}{} & \multicolumn{5}{c}{\textbf{Balanced}}\\
\cmidrule(lr){2-6}
$K = 16$ &
44.1 \small $\pm ~0.6$ &
52.5 \small $\pm ~1.5$ &
95.6 \small $\pm ~2.6$ &
~9.3 \small $\pm ~5.7$ &
54.3 \small $\pm ~3.2$ \\
$K = 32$ & 45.7 \small $\pm ~1.3$ &
51.9 \small $\pm ~1.1$ &
82.2 \small $\pm 15.7$ &
21.5 \small $\pm 13.4$ &
52.0 \small $\pm ~1.3$ \\
$K = 64$ & 45.2 \small $\pm ~1.1$ &
52.4 \small $\pm ~1.1$ &
69.8 \small $\pm ~6.0$ &
35.1 \small $\pm ~3.8$ &
54.4 \small $\pm ~0.3$\\
$~K = 128$ &48.0 \small $\pm ~0.1$ &
51.7 \small $\pm ~0.1$ &
95.7 \small $\pm ~5.0$ &
~7.7 \small $\pm ~5.2$ &
52.8 \small $\pm ~3.3$ \\
$~K = 256$&51.3 \small $\pm ~1.3$ &
51.2 \small $\pm ~3.0$ &
84.9 \small $\pm 15.6$ &
17.5 \small $\pm 21.5$ &
51.8 \small $\pm ~3.5$\\
$~K = 512$&53.2 \small $\pm ~0.2$ &
51.3 \small $\pm ~2.8$ &
86.8 \small $\pm 10.8$ &
15.8 \small $\pm 16.5$ &
49.5 \small $\pm ~1.7$\\
\bottomrule
\end{tabular}}
\caption{Zero-shot and few-shot results of prompt-based fine-tuning for BERT. While no significant bias is seen in the zero-shot setting, only with a few task-specific examples, BERT predictions are biased towards entailment (HANS$+$ vs. HANS$-$). Balancing the training set (bottom block) slightly reduces the extent of bias.}
\label{tab:prompting}
\end{table*}
\subsection{The Origin of Word-Overlap Bias}
We conducted another experiment to see whether the vulnerability of NLI models to the word-overlap feature and the reverse bias come from pre-training or from fine-tuning on task-specific data.
To this end, we followed \citet{utama-etal-2021-avoiding} in evaluating pre-trained models under zero- and few-shot settings.
To rule out the impact of fine-tuning and verify if the pre-trained model exhibits similar biases with respect to word-overlap, we evaluated BERT in a zero-shot setting by reformulating the NLI task as a masked language modeling objective.
Following previous studies \cite{schick-schutze-2021-exploiting, utama-etal-2021-avoiding}, we transformed the NLI examples using the below template:
\begin{verbatim}
Premise ? [MASK], Hypothesis.
\end{verbatim}
\noindent where the \texttt{[MASK]} token is filled with the verbalization of the gold label.
We used a simple verbalizer with \textit{yes}, \textit{maybe}, and \textit{no} as mappings to, respectively, the entailment, neutral, and contradiction labels.
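This reformulation can be sketched as follows (the helper names and the casing adjustments are our own; the paper only specifies the template and the verbalizer):

```python
# Sketch of the cloze-style reformulation "Premise ? [MASK], Hypothesis."
# with the yes/maybe/no verbalizer. Lower-casing the hypothesis' first
# character is an assumption for readability, not part of the paper's setup.
VERBALIZER = {"entailment": "yes", "neutral": "maybe", "contradiction": "no"}

def to_prompt(premise: str, hypothesis: str) -> str:
    # Drop the premise's final period so the result reads as one sentence.
    hyp = hypothesis[0].lower() + hypothesis[1:] if hypothesis else hypothesis
    return f"{premise.rstrip('. ')} ? [MASK], {hyp}"

prompt = to_prompt("A couple sits in the grass.", "People are outside.")
# The masked LM then scores VERBALIZER[label] as the filler for [MASK].
```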
The first row of Table \ref{tab:prompting} shows the results for the zero-shot setting.
The similar performance across HANS$-$ and HANS$+$ shows that the pre-trained BERT model does not exhibit much bias towards a specific label.
Therefore, the bias stems from the fine-tuning on the task-specific instances.
The bias is reflected even with as few as 16 samples in the few-shot scenario (in which the prompt-based model is fine-tuned).
As the number of training instances increases, the gap between the entailment and non-entailment samples grows.
\paragraph{Balanced data.} We also examined the role of class imbalance in the training data on the emergence of word-overlap bias.
For this experiment, we defined four categories based on the overlap \{Full, [0.5, 1), (0.0, 0.5), and None\} and uniformly sampled $K$ instances per label.
The bottom block of Table \ref{tab:prompting} presents the results. It can be inferred that having a balanced training set can reduce the bias to some extent.
Finally, the high variance on the HANS subsets suggests that the quality of training examples and word-overlap percentage between the premise and hypothesis can have a significant impact on the bias in NLI systems.
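The balanced sampling procedure above can be sketched as follows (the category boundaries follow the text; the per-bucket bookkeeping and seeding are our assumptions):

```python
import random

# Sketch (assumed procedure): draw K instances per label, spread uniformly
# over the four overlap categories {Full, [0.5, 1), (0.0, 0.5), None}.
def categorize(wo: float) -> str:
    if wo == 1.0:
        return "Full"
    if wo == 0.0:
        return "None"
    return "[0.5, 1)" if wo >= 0.5 else "(0.0, 0.5)"

def balanced_sample(examples, k, seed=0):
    """examples: list of (word_overlap, label) pairs; returns roughly k
    samples per label, uniformly over the four overlap categories."""
    rng = random.Random(seed)
    by_bucket = {}
    for ex in examples:
        by_bucket.setdefault((ex[1], categorize(ex[0])), []).append(ex)
    per_bucket = max(1, k // 4)
    sample = []
    for bucket in by_bucket.values():
        sample.extend(rng.sample(bucket, min(per_bucket, len(bucket))))
    return sample
```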
\section{Related Work}
\paragraph{Dataset biases in NLP.}
Different categories of bias have been discovered and discussed in NLP datasets.
Earlier work has discovered that negative words are correlated with contradiction label in the SNLI dataset \cite{naik-etal-2018-stress,gururangan-etal-2018-annotation}.
Hypothesis-only \cite{gururangan-etal-2018-annotation} and word-overlap between hypothesis and premise \cite{mccoy-etal-2019-right} are other types of biases discussed in the literature of SNLI and MNLI datasets.
In particular, word overlap has also been investigated in the context of duplicate question detection on the QQP dataset \cite{zhang-etal-2019-paws}.
For both NLI and QQP, it has been shown that considerable spurious correlations exist between high word overlap and the entailment/duplicate label.
In this work, we focused on the word-overlap bias in NLI datasets and introduced an overlooked aspect of this bias: the correlation between low word overlap and the non-entailment class.
\paragraph{Challenging sets.}
In the past few years, several challenging datasets have been introduced to study the limitations of NLP models and, in particular, pre-trained language models in learning robust features and ignoring dataset biases.
Challenging datasets for NLI include HANS \cite{mccoy-etal-2019-right}, ANLI \cite{williams-etal-2022-anlizing}, MNLI-hard \cite{gururangan-etal-2018-annotation} and
Stress-tests \cite{naik-etal-2018-stress}.
Similar datasets for other tasks include PAWS \cite{zhang-etal-2019-paws,pawsx2019emnlp}, for duplicate question detection, and FEVER-Symmetric \cite{schuster-etal-2019-towards}, for stance detection.
\paragraph{Spurious correlation.}
\citet{gardner2021competency} argue that for complex language understanding tasks, any simple feature correlation should be considered spurious, e.g., ``not'' and the contradiction label in NLI.
Spurious correlations can also be defined from the viewpoint of generalizability \citep{chang-etal-2021-robustness,yaghoobzadeh-etal-2021-increasing}.
According to this definition, a feature is spurious if it works well only for specific examples.
The reverse word overlap feature described in this paper fits well within both definitions.
\citet{schwartz2022limitations} review several definitions for spurious correlations.
\paragraph{Debiasing methods.}
Many studies try to remove the spurious correlations or dataset biases either from the training dataset or the model.
Most debiasing approaches filter or down-weight training examples that are either easy or contain spurious correlations \cite{he-etal-2019-unlearn,karimi-mahabadi-etal-2020-end,utama-etal-2020-mind,DBLP:conf/iclr/Sanh0BR21}.
Others augment the training set with examples that violate the spurious correlations.
A mix of both these approaches has also been investigated by \citet{wu-etal-2022-generating}.
An alternative approach is to extend the fine-tuning either on all \cite{tu-etal-2020-empirical} or parts of training data \cite{yaghoobzadeh-etal-2021-increasing}.
\paragraph{Analysis of debiasing.}
Given the increasing interest in debiasing methods, there have been concerns about their widespread use. \citet{schwartz-stanovsky-2022-limitations} argue that excessive balancing prevents models from learning anything (in particular, important world and commonsense knowledge), making it neither practical nor desirable.
They suggest abstaining and interacting with the user when the contextual information is insufficient, and focusing on zero- and few-shot learning approaches instead of full fine-tuning.
In this paper, we showed that balancing datasets should only be taken as a partial solution for eliminating spurious correlations.
We also showed that in this context, few-shot learning might not be effective.
\citet{mendelson-belinkov-2021-debiasing} found that debiasing methods encode more extractable information about the bias in their inner representations.
This observation is explained in a concurrent work to ours in terms of the necessity and sufficiency of the biases \cite{joshi2022all}.
In this paper, we showed that, for the word-overlap bias, the selected debiasing techniques are not robust once the whole spectrum of overlap is considered.
\section{Conclusions}
In this work, we uncovered an unexplored aspect of the well-known word-overlap bias in the NLI models. We showed a spurious correlation between the low overlap instances and the non-entailment label, namely the \textit{reverse} word-overlap bias.
We demonstrated that existing debiasing methods are not effective in mitigating the reverse bias.
We found that the generalization power of debiasing methods (the forgettable approach in particular) does not stem from minority examples.
We also showed that the word-overlap bias does not seem to come from the pre-training step of PLMs.
As future work, we plan to focus on designing new debiasing methods for mitigating the reverse bias for NLI and similar tasks. Also, building specific challenging sets, similar to HANS, for the reverse bias helps to expand this line of research.
\section{Acknowledgements}
We would like to acknowledge that the idea of reverse bias was initiated in discussion with Alessandro Sordoni (MSR Montreal). Also, we want to thank the anonymous reviewers for their valuable comments, which helped us in improving the paper. Sara Rajaee is funded in part by the Netherlands Organization for Scientific Research (NWO) under project number VI.C.192.080.
\section{Limitations}
In our experiments, we have focused on two popular PLMs, BERT and RoBERTa. Using more PLMs, with diversity in the objective and architecture and evaluating their robustness is one of the extendable aspects of our work. Moreover, we evaluated three debiasing methods, but this could have been expanded to more. The other susceptible aspect to improvement is creating a more high-quality dataset for analyzing the overlap bias and its reverse. We have used SNLI as our main probing set, a crowdsourcing-based dataset that contains some noisy examples, especially in minority groups.
\section{Introduction}
Natural Language Inference (NLI) is one of the most commonly used NLP tasks, particularly in the scope of evaluating models for their language understanding capabilities.
Since their emergence, pre-trained language models (PLMs) have been highly successful on standard NLI datasets, such as the Multi-Genre Natural Language Inference \cite[MultiNLI]{williams-etal-2018-broad}.
However, recent analytical studies have revealed that their success is partly due to their reliance on spurious correlations between superficial features of the input texts and gold labels in these datasets \cite{poliak-etal-2018-hypothesis,bhargava-etal-2021-generalization}.
As a result, performance usually drops on out-of-distribution datasets where such correlations do not hold.
Several proposals have been put forth to enhance the robustness of models to the known and unknown biases and improve performance on the so-called challenging datasets \cite{stacey-etal-2020-avoiding,utama-etal-2020-mind,asael-etal-2022-generative}.
One of the well-known dataset biases in NLI models is the spurious correlation of the \textit{entailment} label and high word-overlap between premise and hypothesis.
A number of challenging sets have been designed to showcase the tendency of PLMs to predict entailment for most such cases.
HANS \cite{mccoy-etal-2019-right} is arguably the most widely used dataset in this group.
Constructed from human-made linguistic patterns, the dataset focuses on high-overlap samples, the non-entailment subset of which is deemed challenging for NLI models.
Most current debiasing methods have considered the word-overlap bias as one of their main targets and have shown substantial improvements on HANS \cite{mendelson-belinkov-2021-debiasing,min-etal-2020-syntactic}.
\begin{table*}[ht!]
\centering
\scalebox{0.85}{
\begin{tabular}{lp{13cm}l}
\toprule
\bf{Overlap} & \bf{Sample} & \bf{Label} \\
\midrule
\multirow{4}{*}{Full ($1.0$)} & \par P: A little kid in blue is sledding down a snowy hill. \par H: A little kid in blue sledding. & \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
& \par P: The young lady is giving the old man a hug.
\par H: The young man is giving the old man a hug. & \multirow{2}{*}{Non-Entailment} \\
\midrule
\multirow{2}{*}{$\frac{12}{13}=0.923$} & \par P: A woman in a blue shirt and green hat looks up at the camera. \par H: A woman \colorbox{purp}{wearing} a blue shirt and green hat looks at the camera & \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
\multirow{2}{*}{$\frac{11}{12}=0.917$} & \par P: Two men in wheelchairs are reaching in the air for a basketball. \par H: Two \colorbox{purp}{women} in wheelchairs are reaching in the air for a basketball. & \multirow{2}{*}{Non-Entailment} \\
\midrule
\multirow{4}{*}{$\frac{1}{14}=0.071$} & \par P: Several young people sit at \colorbox{green}{a} table playing poker.
\par H: Youthful Human beings are gathered around \colorbox{green}{a} flat surface to play a card game.
& \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
\multirow{4}{*}{$\frac{1}{11}=0.091$} & \par P: A blond \colorbox{green}{woman} in a white dress sits in a flowering tree while holding a white bird.
\par H: The \colorbox{green}{woman} beats two eggs to make breakfast for her husband.
& \multirow{2}{*}{Non-Entailment} \\
\midrule
\multirow{4}{*}{None ($0.0$)} & \par P: A couple sits in the grass.
\par H: People are outside.
& \multirow{2}{*}{Entailment} \\
\cmidrule(lr){2-3}
& \par P: An older women tending to a garden.
\par H: The lady is cooking dinner.
& \multirow{2}{*}{Non-Entailment} \\
\bottomrule
\end{tabular}
}
\caption{NLI examples with different degrees of word-overlap (between premise and hypothesis), where the overlap is the ratio of hypothesis words that are shared with the premise. The highlighted words are the common (in green) or different (in purple) words (the samples are picked to reflect extreme cases across the word-overlap spectrum).}
\label{tab:my_label}
\end{table*}
In this paper, we revisit the word-overlap bias in NLI and the effectiveness of existing debiasing techniques.
Despite the popularity of this type of bias, we find that some of its aspects are generally ignored in the research community.
Considering word-overlap as a feature ranging from no to full overlap, and the NLI task with the two labels of entailment and non-entailment, we show that there are spurious correlations beyond the well-known one between high word-overlap and entailment.
Specifically, as shown in Figure \ref{fig:Reverse-Bias}, we observe a clear bias towards non-entailment for low and no word-overlap values (evidenced by the high performance on the non-entailment label, which comes at the price of reduced performance on the entailment class).
We will refer to this type of bias as \textit{reverse} word-overlap throughout the paper.
Through a set of experiments, we demonstrate that the overlooked reverse word-overlap bias exists in popular NLI datasets, such as MNLI and SNLI, as well as in the predictions of PLMs. Moreover, our results suggest that while existing debiasing methods can mitigate the overlap bias in NLI models to some extent, they are ineffective in resolving the reverse bias.
We further analyze how NLI models employ minority instances to enhance their generalization.
Focusing on the forgettable debiasing method \cite{yaghoobzadeh-etal-2021-increasing}, we find that eliminating HANS-like examples, as well as their reverse counterparts, does not hurt generalization noticeably.
In search of the origin of the bias, we employ prompt-based techniques to check whether the bias stems from pre-training.
We also verify the robustness of PLMs in a few-shot learning experiment with controlled and balanced training sets.
Our results suggest that PLMs do not exhibit any bias towards a specific label.
Nevertheless, introducing a few samples triggers the bias toward the entailment label.
Furthermore, balancing the training examples with respect to their word-overlap prevents the emergence of bias to some extent.
Our contributions can be summarized as follows:
\begin{itemize}
\item
We expand our understanding of the word-overlap bias in NLI by revealing an unexplored spurious correlation between low word-overlap and non-entailment.
\item We analyze how debiasing methods work for the whole spectrum of word-overlap bias, finding that they generally fail at addressing bias for the low and non-overlapping cases.
\item To explore the origin of word-overlap bias in PLMs, we design several new experiments showing that, even when exposed to a few training examples, PLMs get biased towards predicting entailment.
\end{itemize}
\section{Natural Language Inference}
\label{sec:nli}
In NLI, a model is provided with two input sentences, namely \textit{premise} and \textit{hypothesis}.
The task for the model is to predict whether the hypothesis is true (\textit{entailment}), false (\textit{contradiction}), or undetermined (\textit{neutral}) given the premise.
\subsection{Bias in NLI Models}
Analyses of NLI models have demonstrated that they are sensitive to shortcuts that appear in their training data.
Several types of bias have been investigated in the literature, including hypothesis-only prediction, spurious correlations between certain words and labels (e.g., negation words and the non-entailment label), sensitivity to the length of hypothesis, and lexical overlap between the premise and hypothesis \cite{gururangan-etal-2018-annotation,poliak-etal-2018-hypothesis,mccoy-etal-2019-right,wu-etal-2022-generating}.
Relying on these spurious features hampers the language understanding ability of NLI models, leading to poor performance on out-of-distribution datasets where such superficial correlations do not hold \cite{he-etal-2019-unlearn,mccoy-etal-2019-right}.
\paragraph{Word-Overlap Bias.} Among the detected dataset biases, word-overlap is a quite well-studied shortcut in the NLI task \cite{zhou-bansal-2020-towards,mendelson-belinkov-2021-debiasing}.
We define word-overlap ($wo$) as the ratio of words in the hypothesis ($h$) that are shared with the premise ($p$), i.e., $\frac{|h\cap p|}{|h|}$.
Table \ref{tab:my_label} shows examples of different degrees of word-overlap.
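As a concrete sketch of this measure (tokenization is an assumption; here we use lower-cased whitespace tokens and set semantics):

```python
# wo(p, h) = |h \cap p| / |h|: the fraction of hypothesis words that also
# occur in the premise. Whitespace tokenization and set-based counting are
# simplifying assumptions for illustration.
def word_overlap(premise: str, hypothesis: str) -> float:
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(h & p) / len(h) if h else 0.0
```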
\begin{table*}
\centering
\setlength{\tabcolsep}{21pt}
\scalebox{0.85}{
\begin{tabular}{r c c c c c }
\toprule
& MNLI-dev & HANS & HANS$+$ & HANS$-$ & W\small{A}\normalsize{NLI} \\
\midrule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{BERT}} \\
\cmidrule{2-6}
Baseline & 84.2 \small $\pm 0.3$ &
63.9 \small $\pm 1.7$ &
98.5 \small $\pm 1.2$ &
29.3 \small $\pm 4.6$ &
56.9 \small $\pm 0.6$ \\
\cmidrule{2-6}
Long-tuning & 83.4 \small $\pm 0.8$ &
65.8 \small $\pm 2.3$ &
99.0 \small $\pm 0.2$ &
32.6 \small $\pm 4.4$ &
58.0 \small $\pm 0.6$ \\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 82.7 \small $\pm 0.3$ &
73.8 \small $\pm 0.5$ &
91.8 \small $\pm 0.4$ &
55.9 \small $\pm 1.3$ &
59.0 \small $\pm 0.3$ \\
PoE & 80.0 \small $\pm 0.8$ &
66.9 \small $\pm 2.2$ &
71.6 \small $\pm 3.7$ &
62.2 \small $\pm 2.7$ &
71.6 \small $\pm 0.7$ \\
\midrule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{RoBERTa}} \\
\cmidrule{2-6}
Baseline & 87.2 \small $\pm 0.2$ &
73.3 \small $\pm 3.4$ &
98.5 \small $\pm 1.0$ &
48.2 \small $\pm 7.8$ &
59.7 \small $\pm 1.6$\\
\cmidrule{2-6}
Long-tuning & 86.9 \small $\pm 0.3$ &
73.0 \small $\pm 1.7$ &
97.8 \small $\pm 1.2$ &
48.2 \small $\pm 4.2$ &
60.3 \small $\pm 0.1$ \\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 85.6 \small $\pm 0.3$ &
78.9 \small $\pm 0.6$ &
88.1 \small $\pm 2.4$ &
69.7 \small $\pm 2.3$ &
62.0 \small $\pm 1.4$\\
PoE & 84.6 \small $\pm 0.1$ &
77.0 \small $\pm 1.5$ &
79.3 \small $\pm 6.2$ &
71.4 \small $\pm 3.7$ &
73.4 \small $\pm 0.1$ \\
\bottomrule
\end{tabular}}
\caption{The average accuracy of the baseline models and debiasing methods on the MNLI development (matched) set as the \textit{in-distribution} and W\small{A}\normalsize{NLI} and HANS as the \textit{out-of-distribution} datasets (HANS$+$ and HANS$-$ are entailment and non-entailment subsets, respectively).}
\label{tab:mnli}
\end{table*}
\begin{figure*}
\centering
\includegraphics[scale=0.2]{EMNLP 2022/images/reverse-bias}
\caption{The distribution of instances across word-overlap bins (SNLI dataset).}
\label{fig:probe-stat}
\end{figure*}
\subsection{Debiasing Methods}
\label{sec:debiasing}
Creating high-quality datasets without any spurious correlations between instances and gold labels is an arduous and expensive process \cite{gardner-etal-2021-competency}, making it inevitable that a dataset contains some degree of bias.
Therefore, to have a robust model, it is essential to take extra steps for debiasing against dataset artifacts.
The past few years have seen several debiasing methods \cite{karimi-mahabadi-etal-2020-end,utama-etal-2020-mind,utama-etal-2020-towards,belinkov-etal-2019-dont}.
For our experiments, we opted for three different debiasing approaches.
We evaluate the effectiveness of these techniques in mitigating the overlap bias and its reverse.
\paragraph{Long-tuning.}
\citet{tu-etal-2020-empirical} have shown that fine-tuning NLI models for more epochs can enhance the generalizability of LMs over challenging datasets. Following their suggestion, we fine-tuned the models for 20 epochs on the MNLI dataset.
\paragraph{Forgettable Examples.}
\citet{yaghoobzadeh-etal-2021-increasing} find minority examples without prior knowledge of the dataset artifacts.
In the proposed method, minority examples are taken to be samples that are either never learned, or learned once and then forgotten, by the model. The already trained NLI model is then fine-tuned on this subset for a few more epochs.
Following the authors' suggestion, to find the forgettable examples, we utilized a simple Siamese Bag of Words (BoW) model where the sentence representations of the premise and hypothesis are the average over their word embeddings.
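The bookkeeping behind this selection can be sketched as follows (a simplification of the criterion in the text: an example is flagged if it is never classified correctly, or if a correct classification is lost at a later epoch):

```python
# Sketch (assumed bookkeeping) of selecting forgettable examples from
# per-epoch correctness records of a simple model such as the BoW Siamese net.
def forgettable_indices(correct_per_epoch):
    """correct_per_epoch: one list of per-example booleans for each epoch
    (True = classified correctly at that epoch)."""
    n = len(correct_per_epoch[0])
    forgettable = []
    for i in range(n):
        history = [epoch[i] for epoch in correct_per_epoch]
        never_learned = not any(history)
        # a correct classification at some epoch lost at the next one
        forgotten = any(
            history[t] and not history[t + 1] for t in range(len(history) - 1)
        )
        if never_learned or forgotten:
            forgettable.append(i)
    return forgettable
```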
\paragraph{Product of Experts (PoE).}
In this method, a weak model is supposed to learn superficial features in the input. The weak learner's output is then used to normalize the main model's predictions on over-confident examples. Following previous studies \cite{karimi-mahabadi-etal-2020-end,DBLP:conf/iclr/Sanh0BR21}, we employed the following combination strategy for taking into account both weak learner and main model predictions:
\begin{equation}
y = softmax(\log p_w + \log p_m )
\end{equation}
where $p_w$ and $p_m$ are the outputs of the weak learner and the main model, respectively. The robust model is trained using a cross-entropy loss function based on $y$.
We used TinyBERT \cite{jiao-etal-2020-tinybert} as our weak learner.
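A per-example sketch of this combination (pure Python, for illustration only; in practice it is applied to batched logits during training):

```python
import math

# y = softmax(log p_w + log p_m): the element-wise product of the two label
# distributions, renormalized. Probabilities are assumed strictly positive.
def poe_combine(p_w, p_m):
    logits = [math.log(w) + math.log(m) for w, m in zip(p_w, p_m)]
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Labels on which the weak learner is already confident get sharpened in y,
# which shrinks the cross-entropy gradient the main model receives on
# biased, easy-to-classify examples.
y = poe_combine([0.8, 0.1, 0.1], [0.5, 0.3, 0.2])
```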
\subsection{Experimental Setup}
\label{sec:setup1}
\paragraph{Datasets.} In our experiments, we opted for the Multi-Genre Natural Language Inference dataset \cite[MNLI]{williams-etal-2018-broad} for training the NLI models.
The dataset contains 433k training examples.
Since the gold labels for the test set are not publicly available, we follow previous work and report results on the \textit{development-matched} set (MNLI-dev in the tables).
Also, following the convention in previous studies, we merge neutral and contradiction examples into the non-entailment group.
As challenging datasets, we considered HANS \cite{mccoy-etal-2019-right} and W\small{A}\normalsize{NLI} \cite{liu-etal-2022-wanli}.
In the former dataset, each instance is curated in a way that all words of the hypothesis are also observed in the premise, irrespective of the word order.
Previous work has shown that biased NLI models tend to perform poorly on HANS, particularly for the non-entailment class \cite{yaghoobzadeh-etal-2021-increasing}.
The latter was built by employing GPT-3 \cite{GPT3-NEURIPS2020} to generate candidate instances, which were then filtered by human crowd-workers.
Quality tests on W\small{A}\normalsize{NLI} indicate that the dataset contains fewer artifacts compared to MNLI.
\paragraph{Models.} As for PLMs, we opted for the base version of BERT and RoBERTa \cite{devlin-etal-2019-bert,liu2020roberta} and fine-tuned them for three epochs as our baselines.
We trained the models with a learning rate of 2e-5 using the Adam optimizer.
The batch size was set to 32 with a max length of 128.
All the reported results are based on three random seeds.
\subsection{Results}
Table \ref{tab:mnli} shows the results for the baseline models (BERT and RoBERTa) and the three debiasing techniques on different datasets.
The bias in the baseline model is highlighted by the performance contrast across the entailment (HANS$+$) and non-entailment (HANS$-$) subsets.
As can be seen, the three debiasing methods are generally effective in softening the biased behavior, reflected by the improved performance on HANS$-$ (and, in turn, HANS), and also W\small{A}\normalsize{NLI}.
\begin{table*}
\centering
\setlength{\tabcolsep}{22pt}
\scalebox{0.85}{
\begin{tabular}{l c c c c}
\toprule
\multirow{2}{*}{\bf Overlap} &
\multicolumn{2}{c}{\textbf{BERT}} &
\multicolumn{2}{c}{\textbf{RoBERTa}}
\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
& \textbf{Entailment} & \textbf{Non-Entailment} & \textbf{Entailment} & \textbf{Non-Entailment} \\
\midrule
{Full} & 99.7 \small $\pm 0.1$ &
\underline{13.3} \small $\pm 1.4$ &
99.7 \small $\pm 0.1$ &
\underline{17.6} \small $\pm 0.9$ \\
{[0.8, 1.0)} & 92.9 \small $\pm 0.0$ &
83.0 \small $\pm 1.5$ &
95.9 \small $\pm 0.6$ &
92.7 \small $\pm 2.4$ \\
\cmidrule(lr){2-5}
{[0.6, 0.8)} & 85.2 \small $\pm 0.4$ &
86.2 \small $\pm 1.6$ &
91.5 \small $\pm 1.4$ &
84.5 \small $\pm 2.8$
\\
{[0.4, 0.6)} & 74.2 \small $\pm 0.1$ &
91.9 \small $\pm 1.1$ &
85.8 \small $\pm 2.4$ &
90.2 \small $\pm 2.4$
\\
{[0.2, 0.4)} & 64.5 \small $\pm 0.6$ &
95.1 \small $\pm 0.6$ &
78.5 \small $\pm 2.8$ &
93.8 \small $\pm 1.6$
\\
\cmidrule(lr){2-5}
{(0.0, 0.2)} & \underline{55.5} \small $\pm 1.4$ &
96.7 \small $\pm 0.5$ &
\underline{68.6} \small $\pm 3.3$ &
96.0 \small $\pm 1.2$
\\
{None} & {61.6} \small $\pm 1.3$ &
95.2 \small $\pm 0.2$ &
{77.2} \small $\pm 3.4$ &
93.6 \small $\pm 1.5$ \\
\bottomrule
\end{tabular}}
\caption{The accuracy of the two NLI models across different overlap bins and on both subsets. The lowest numbers in each column are underlined.}
\label{tab:reverse-performance}
\end{table*}
\section{Reverse Word-Overlap}
\label{sec:reverse-wo}
Considering the word-overlap bias as a spectrum, the existing studies have mainly focused on a small subset of the spectrum, i.e., the case with full word-overlap and its spurious correlation with the entailment label.
In this section, we evaluate the performance of NLI models on other regions of the spectrum and with respect to both labels (entailment and non-entailment), broadening our insight into how robust these models are to the word-overlap feature.
\subsection{Probing Dataset}
\label{sec:setup}
As for this probing study, we experimented with the SNLI dataset \cite{bowman-etal-2015-large}, merging the training, development, and test sets to build a unified evaluation set.
The set was split into seven bins based on the degree of overlap.
The statistics are reported in Figure \ref{fig:probe-stat}.
As an example, the $[0.6,0.8)$ bin contains samples that have a word overlap (between premise and hypothesis) of greater than (and equal to) $0.6$ and less than $0.8$.
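The bin assignment can be sketched as follows (the function name is ours; note that an overlap of exactly $0$ maps to ``None'', so the lowest interval is effectively open at zero, matching the $(0.0, 0.2)$ bin):

```python
# Assign a word-overlap value to one of the seven probing bins: "Full",
# five 0.2-wide half-open intervals, and "None".
def overlap_bin(wo: float) -> str:
    if wo == 1.0:
        return "Full"
    if wo == 0.0:
        return "None"
    lower = int(wo * 5) / 5  # floor to the nearest 0.2 boundary
    return f"[{lower:.1f}, {lower + 0.2:.1f})"
```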
\subsection{Results}
Unless specified otherwise, the experimental setup in this experiment is the same as the one reported in Section \ref{sec:setup1}.
Table \ref{tab:reverse-performance} reports the results across different word overlap bins for both BERT and RoBERTa and for both labels.
As expected, a high contrast is observed on the full-overlap subset: near-perfect performance on entailment but poor performance on non-entailment, suggesting a strong bias towards the entailment label.
This is the conventional type of NLI bias that has been usually discussed in previous studies.
The HANS challenging dataset is constructed based on the same type of bias.
However, surprisingly, the results show that this biased behavior only exists for samples with full overlap.
In fact, no notable bias is observed even for the high overlap samples in the [$0.8$, $1$) bin.
This observation further narrows down the scope of HANS as a challenging dataset and raises questions on the robustness of models developed based on the dataset.
\paragraph{Reverse bias.} Interestingly, the results
in Table \ref{tab:reverse-performance} shed light on another inherent spurious correlation that exists between NLI performance and the degree of word-overlap.
Particularly towards the non-overlap extreme, the performance drops on entailment and increases on non-entailment samples. In the (0.0, 0.2) bin, we see the largest gap: 55.5 entailment vs 96.7 non-entailment for the BERT model.
We refer to the biased behavior of NLI models on the low word-overlapping samples towards the non-entailment label as the \textit{Reverse bias}.
It is also worth noting that, based on these results, the reverse bias covers a broader range of bins than the conventional word-overlap bias.
\begin{figure*}[h!]
\centering
\includegraphics[width=
13cm]{EMNLP 2022/images/reverse-bias-full}
\caption{The performance of the baseline and the three debiasing methods across the seven word-overlap bins for both labels and for BERT and RoBERTa. Across the spectrum, the debiasing techniques seem to be effective only on samples with high (particularly full) word-overlap on the non-entailment subset and are either ineffective (or even harmful) towards the other end of the overlapping spectrum and on the entailment subset.}
\label{fig:Reverse-Bias}
\end{figure*}
\subsection{Effectiveness of Debiasing Methods}
Figure \ref{fig:Reverse-Bias} shows the performance of the three debiasing methods (described in Section \ref{sec:debiasing}) across the seven bins in our word-overlap analysis.
As can be observed, debiasing methods improve over the baseline on the full-overlap (``Full'' in Figure \ref{fig:Reverse-Bias}) and non-entailment subset, with PoE proving the most effective.
The improvement is expected since the results on the challenging dataset, HANS, suggest the same.
This, however, comes at the price of
reduced performance on the entailment subset, specifically in the BERT model.
As we move toward the non-overlap end of the spectrum (``None'' in Figure \ref{fig:Reverse-Bias}), the performance gap between the entailment and non-entailment labels grows, mainly due to the drop in entailment performance.
Interestingly, the experimental results reveal that debiasing methods are clearly ineffective in addressing the reverse bias and perform similarly to the baseline models.
\section{Analysis}
\label{sec:analysis}
\begin{figure*}
\centering
\subfigure[Non-Entailment]{
\includegraphics[width=5.cm,height=4.cm]{images/plt1.png}\label{fig:dist-minority}}
\hspace{0.1\textwidth}
\subfigure[Entailment]{
\includegraphics[width=5.cm,height=4.cm]{images/plt2.png} \label{fig:dist-minority-ent}}
\caption{Normalized distribution of instances with respect to their word-overlap in the original training set of MNLI and the subset identified by \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\,. }
\label{fig:data-dist}
\end{figure*}
\subsection{Role of Minority Examples}
In the context of word-overlap bias, the non-entailment instances that have full overlap (between premise and hypothesis) are usually referred to as \textit{minority} examples.
\citet{tu-etal-2020-empirical} show that minority examples of the training set play a crucial role in the generalizability of language models, and eliminating them can significantly hurt performance on challenging datasets, such as HANS.
\citet{yaghoobzadeh-etal-2021-increasing} relate forgettables to minority examples by observing the distinct word-overlap distribution of the forgettable subset.
We carry out a set of experiments on the \textit{forgettable} approach, where a subset of the training data is chosen for further fine-tuning of models (66$k$ in our NLI experiments for the \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, method).
We extend the forgettable analysis to the low word-overlap or reverse minority examples.
We also verify the role played by minority examples in the performance of debiasing methods.
As the first step, we compare the distribution of instances with respect to their overlap in the original training set of MNLI and its forgettable subset.
The results are shown in Figure \ref{fig:data-dist}.
As can be seen, the forgettable subset tends to have better coverage of minority examples than the original MNLI training set; see the right side of Figure \ref{fig:dist-minority} and the left side of Figure \ref{fig:dist-minority-ent}.
One can hypothesize that better coverage of minority examples is the reason behind the effectiveness of the forgettable approach.
To verify this hypothesis, we eliminate several subsets from \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, and fine-tune the NLI models with the remaining samples.
We considered the following four settings:
\begin{itemize}
\item \textbf{Full $-$ NEnt:} Full overlap between premise and hypothesis with the non-entailment label.
\item \textbf{None $-$ Ent:} No overlap and entailment label.
\item \textbf{[0.8, 1.0] $-$ NEnt:} More than 80\% overlap and non-entailment label.
\item \textbf{[0.0, 0.2] $-$ Ent:} Less than 20\% overlap and entailment label.
\end{itemize}
The results are reported in Table \ref{tab:minority-role}.
Interestingly, we observe that removing HANS-like examples (Full$-$NEnt), which were hypothesized to play the main role in improving performance on the challenging datasets, does not affect the performance of \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, notably.
The observation is consistent even for larger subsets of high-overlapping instances ([0.8, 1]$-$NEnt).
Discarding the reverse group (low-overlapping entailment samples) yields a similar pattern. So, it can be inferred that such samples do not play the primary role in the debiasing methods' effectiveness.
This opens up questions on how NLI models extrapolate to patterns unseen during training and how debiasing methods enhance their generalization over out-of-distribution data.
This is particularly interesting in light of observations made by \citet{tu-etal-2020-empirical} that standard training does not enable such extrapolation.
We leave further investigations in this area to future work.
\begin{table*}[h!]
\centering
\setlength{\tabcolsep}{14pt}
\scalebox{0.85}{
\begin{tabular}{r c c c c c r}
\toprule
& MNLI-dev & HANS & HANS$+$ & HANS$-$ & W\small{A}\normalsize{NLI} & Eliminated \\
\midrule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{BERT}} &
\multicolumn{1}{c}{}\\
\cmidrule{2-6}
\it Baseline & 84.2 \small $\pm 0.3$ &
63.9 \small $\pm 1.7$ &
98.5 \small $\pm 1.2$ &
29.3 \small $\pm 4.6$ &
56.9 \small $\pm 0.6$ \\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 82.7 \small $\pm 0.3$ &
73.8 \small $\pm 0.5$ &
91.8 \small $\pm 0.4$ &
55.9 \small $\pm 1.3$ &
59.0 \small $\pm 0.3$ &
\\
\cmidrule{2-7}
Full~$-$~NEnt & 82.8 \small $\pm 0.4$ &
71.7 \small $\pm 0.9$ &
93.2 \small $\pm 0.4$ &
50.3 \small $\pm 2.0$ &
59.4 \small $\pm 0.5$ & 782\\
{[0.8, 1.0]}~$-$~NEnt & 83.2 \small $\pm 0.2$ &
72.3 \small $\pm 0.8$ &
93.5 \small $\pm 1.3$ &
51.1 \small $\pm 2.9$ &
58.8 \small $\pm 0.5$ & 6,350 \\
{[0.0, 0.2]}~$-$~~~~Ent & 82.9 \small $\pm 0.4$ &
73.7 \small $\pm 0.7$ &
91.9 \small $\pm 0.8$ &
55.4 \small $\pm 2.1$ &
59.5 \small $\pm 0.7$ & 1,801 \\
None~$-$~~~~Ent & 82.8 \small $\pm 0.5$ &
73.8 \small $\pm 0.8$ &
92.1 \small $\pm 1.5$ &
55.5 \small $\pm 3.1$ &
59.3 \small $\pm 0.6$ & 482 \\
\midrule
&
\multicolumn{5}{c}{\textbf{RoBERTa}}
\\
\cmidrule{2-6}
\it Baseline & 87.2 \small $\pm 0.2$ &
73.3 \small $\pm 3.4$ &
98.5 \small $\pm 1.0$ &
48.2 \small $\pm 7.8$ &
59.7 \small $\pm 1.6$\\
\,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, & 85.6 \small $\pm 0.3$ &
78.9 \small $\pm 0.6$ &
88.1 \small $\pm 2.4$ &
69.7 \small $\pm 2.3$ &
62.0 \small $\pm 1.4$ & \\
\cmidrule{2-7}
Full~$-$~NEnt & 86.4 \small $\pm 0.2$ &
79.1 \small $\pm 1.3$ &
92.1 \small $\pm 1.6$ &
66.1 \small $\pm 4.0$ &
62.2 \small $\pm 0.9$ &
782 \\
{[0.8, 1.0]}~$-$~NEnt & 86.6 \small $\pm 0.2$ &
78.4 \small $\pm 1.0$ &
95.9 \small $\pm 0.8$ &
60.8 \small $\pm 2.8$ &
61.8 \small $\pm 0.7$ &
6,350 \\
{[0.0, 0.2]}~$-$~~~~Ent & 86.1 \small $\pm 0.2$ &
79.3 \small $\pm 1.3$ &
89.8 \small $\pm 1.2$ &
68.7 \small $\pm 2.1$ &
62.3 \small $\pm 0.9$ &
1,801 \\
None~$-$~~~~Ent & 86.1 \small $\pm 0.2$ &
79.1 \small $\pm 1.2$ &
88.6 \small $\pm 2.1$ &
69.5 \small $\pm 2.9$ &
62.1 \small $\pm 0.7$ &
482\\
\bottomrule
\end{tabular}}
\caption{The performance of \,$\mathcal{F}_{\,\scriptsize\textsc{BoW}}$\, after eliminating four different subsets. \textit{Eliminated} denotes the number of eliminated examples in each setting. All the subsets tend to be in the same performance ballpark with respect to the generalizability of the model on the out-of-distribution datasets (W\small{A}\normalsize{NLI} and HANS). }
\label{tab:minority-role}
\end{table*}
\begin{table*}
\centering
\setlength{\tabcolsep}{22pt}
\scalebox{0.85}{
\begin{tabular}{c c c c c c }
\toprule
\multicolumn{1}{c}{} &
\multicolumn{5}{c}{\textbf{Baseline}} \\
\cmidrule(lr){2-6}
& MNLI-dev & HANS & HANS$+$ & HANS$-$ & W\small{A}\normalsize{NLI}
\\
\midrule
Zero-shot & 42.0 & 55.3 & 57.5 & 53.1 & 58.0
\\
\midrule
$K = ~16$ & 45.6 \small $\pm ~1.2$ &
53.6 \small $\pm ~1.3$ &
73.2 \small $\pm 16.5$ &
34.4 \small $\pm 13.9$ &
54.7 \small $\pm ~2.3$ \\
$K = ~32$ & 46.9 \small $\pm ~0.6$ &
50.8 \small $\pm ~0.8$ &
98.3 \small $\pm ~1.2$ &
~3.3 \small $\pm ~2.8$ &
50.1 \small $\pm ~2.2$ \\
$K = ~64$ & 49.6 \small $\pm ~0.3$ &
50.3 \small $\pm ~0.3$ &
99.4 \small $\pm ~0.5$ &
~1.1 \small $\pm ~1.1$ &
48.4 \small $\pm ~4.3$ \\
$~K = 128$ & 52.7 \small $\pm ~0.9$ &
50.0 \small $\pm ~0.0$ &
99.9 \small $\pm ~0.2$ &
~0.1 \small $\pm ~0.2$ &
45.1 \small $\pm ~0.4$ \\
$~K = 256$ & 56.4 \small $\pm ~0.4$ &
50.7 \small $\pm ~0.8$ &
98.1 \small $\pm ~2.2$ &
~3.3 \small $\pm ~3.9$ &
50.3 \small $\pm ~0.0$ \\
$~K = 512$ & 61.4 \small $\pm ~1.1$ &
50.0 \small $\pm ~0.1$ &
100 \small $\pm ~0.0$ &
~0.1 \small $\pm ~0.1$ &
46.2 \small $\pm ~2.0$ \\
\midrule
\multicolumn{1}{c}{} & \multicolumn{5}{c}{\textbf{Balanced}}\\
\cmidrule(lr){2-6}
$K = 16$ &
44.1 \small $\pm ~0.6$ &
52.5 \small $\pm ~1.5$ &
95.6 \small $\pm ~2.6$ &
~9.3 \small $\pm ~5.7$ &
54.3 \small $\pm ~3.2$ \\
$K = 32$ & 45.7 \small $\pm ~1.3$ &
51.9 \small $\pm ~1.1$ &
82.2 \small $\pm 15.7$ &
21.5 \small $\pm 13.4$ &
52.0 \small $\pm ~1.3$ \\
$K = 64$ & 45.2 \small $\pm ~1.1$ &
52.4 \small $\pm ~1.1$ &
69.8 \small $\pm ~6.0$ &
35.1 \small $\pm ~3.8$ &
54.4 \small $\pm ~0.3$\\
$~K = 128$ &48.0 \small $\pm ~0.1$ &
51.7 \small $\pm ~0.1$ &
95.7 \small $\pm ~5.0$ &
~7.7 \small $\pm ~5.2$ &
52.8 \small $\pm ~3.3$ \\
$~K = 256$&51.3 \small $\pm ~1.3$ &
51.2 \small $\pm ~3.0$ &
84.9 \small $\pm 15.6$ &
17.5 \small $\pm 21.5$ &
51.8 \small $\pm ~3.5$\\
$~K = 512$&53.2 \small $\pm ~0.2$ &
51.3 \small $\pm ~2.8$ &
86.8 \small $\pm 10.8$ &
15.8 \small $\pm 16.5$ &
49.5 \small $\pm ~1.7$\\
\bottomrule
\end{tabular}}
\caption{Zero-shot and few-shot results of prompt-based fine-tuning for BERT. While no significant bias is seen in the zero-shot setting, only with a few task-specific examples, BERT predictions are biased towards entailment (HANS$+$ vs. HANS$-$). Balancing the training set (bottom block) slightly reduces the extent of bias.}
\label{tab:prompting}
\end{table*}
\subsection{The Origin of Word-Overlap Bias}
We conducted another experiment to see if the vulnerability of NLI models to the word-overlap feature and the reverse bias comes from pre-training or from fine-tuning on the task-specific data.
To this end, we followed \citet{utama-etal-2021-avoiding} in evaluating pre-trained models under zero- and few-shot settings.
To rule out the impact of fine-tuning and verify if the pre-trained model exhibits similar biases with respect to word-overlap, we evaluated BERT in a zero-shot setting by reformulating the NLI task as a masked language modeling objective.
Following previous studies \cite{schick-schutze-2021-exploiting, utama-etal-2021-avoiding}, we transformed the NLI examples using the following template:
\begin{verbatim}
Premise ? [MASK], Hypothesis.
\end{verbatim}
\noindent where the \texttt{[MASK]} token denotes the gold label.
We used a simple verbalizer with \textit{yes}, \textit{maybe}, and \textit{no} as mappings to, respectively, the entailment, neutral, and contradiction labels.
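A minimal sketch of the template and verbalizer described above; the label-word mapping follows the text, while scoring with an actual masked language model is omitted:

```python
# Verbalizer mapping MLM label words to NLI classes, as described in the text.
VERBALIZER = {"yes": "entailment", "maybe": "neutral", "no": "contradiction"}

def to_prompt(premise: str, hypothesis: str, mask_token: str = "[MASK]") -> str:
    """Render an NLI pair with the template `Premise ? [MASK], Hypothesis.`"""
    return f"{premise} ? {mask_token}, {hypothesis}."
```

In the zero-shot setting, one would compare the MLM's probabilities for \textit{yes}, \textit{maybe}, and \textit{no} at the mask position and predict the class of the highest-scoring word.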
The first row of Table \ref{tab:prompting} shows the results for the zero-shot setting.
The similar performance across HANS$-$ and HANS$+$ shows that the pre-trained BERT model does not exhibit much bias towards a specific label.
Therefore, the bias stems from the fine-tuning on the task-specific instances.
This is reflected even with as few as 16 samples in the few-shot scenario (where we have fine-tuned the prompt-based model).
As the number of training instances increases, the gap between the entailment and non-entailment samples grows.
\paragraph{Balanced data.} We also examined the role of class imbalance in the training data on the emergence of word-overlap bias.
For this experiment, we defined four categories based on the overlap \{Full, [0.5, 1), (0.0, 0.5), and None\} and uniformly sampled $K$ instances per label.
The bottom block of Table \ref{tab:prompting} presents the results. It can be inferred that having a balanced training set can reduce the bias to some extent.
Finally, the high variance on the HANS subsets suggests that the quality of training examples and word-overlap percentage between the premise and hypothesis can have a significant impact on the bias in NLI systems.
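The balanced-sampling scheme used in this experiment can be sketched as follows. This is a simplified version: the category boundaries follow the text, and `overlap_fn` is a stand-in for the actual word-overlap computation:

```python
import random
from collections import defaultdict

CATEGORIES = ["Full", "[0.5, 1)", "(0.0, 0.5)", "None"]

def overlap_category(ov: float) -> str:
    """Map an overlap ratio to one of the four categories from the text."""
    if ov == 1.0:
        return "Full"
    if ov >= 0.5:
        return "[0.5, 1)"
    if ov > 0.0:
        return "(0.0, 0.5)"
    return "None"

def balanced_sample(examples, k, overlap_fn, seed=0):
    """Sample up to k examples per label, spread uniformly over the four
    overlap categories (k // 4 per category)."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for ex in examples:
        buckets[(ex["label"], overlap_category(overlap_fn(ex)))].append(ex)
    sampled = []
    for lab in sorted({ex["label"] for ex in examples}):
        for cat in CATEGORIES:
            pool = buckets[(lab, cat)]
            sampled.extend(rng.sample(pool, min(k // len(CATEGORIES), len(pool))))
    return sampled
```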
\section{Related Work}
\paragraph{Dataset biases in NLP.}
Different categories of bias have been discovered and discussed in NLP datasets.
Earlier work has discovered that negative words are correlated with contradiction label in the SNLI dataset \cite{naik-etal-2018-stress,gururangan-etal-2018-annotation}.
Hypothesis-only \cite{gururangan-etal-2018-annotation} and word-overlap between hypothesis and premise \cite{mccoy-etal-2019-right} are other types of biases discussed in the literature on the SNLI and MNLI datasets.
In particular, word overlap has also been investigated in the context of duplicate question detection on the QQP dataset \cite{zhang-etal-2019-paws}.
For both NLI and QQP, it has been shown that considerable spurious correlations exist between high word overlap and the entailment/duplicate label.
In this work, we focused on the word-overlap bias in NLI datasets and introduced an overlooked aspect of this bias: the correlation between low word overlap and the non-entailment class.
\paragraph{Challenging sets.}
In the past few years, several challenging datasets have been introduced to study the limitations of NLP models and, in particular, pre-trained language models in learning robust features and ignoring dataset biases.
Challenging datasets for NLI include HANS \cite{mccoy-etal-2019-right}, ANLI \cite{williams-etal-2022-anlizing}, MNLI-hard \cite{gururangan-etal-2018-annotation} and
Stress-tests \cite{naik-etal-2018-stress}.
Similar datasets for other tasks include PAWS \cite{zhang-etal-2019-paws,pawsx2019emnlp}, for duplicate question detection, and FEVER-Symmetric \cite{schuster-etal-2019-towards}, for stance detection.
\paragraph{Spurious correlation.}
\citet{gardner2021competency} argue that for complex language understanding tasks, any simple feature correlation should be considered spurious, e.g., ``not'' and the contradiction label in NLI.
Spurious correlations can also be defined from the viewpoint of generalizability \citep{chang-etal-2021-robustness,yaghoobzadeh-etal-2021-increasing}.
According to this definition, a feature is spurious if it works well only for specific examples.
The reverse word overlap feature described in this paper fits well within both definitions.
\citet{schwartz2022limitations} review several definitions for spurious correlations.
\paragraph{Debiasing methods.}
Many studies try to remove the spurious correlations or dataset biases either from the training dataset or the model.
Most debiasing approaches filter or down-weight those training examples that are either easy or contain spurious correlations \cite{he-etal-2019-unlearn,karimi-mahabadi-etal-2020-end,utama-etal-2020-mind,DBLP:conf/iclr/Sanh0BR21}.
Others augment the training set with examples that violate the spurious correlations.
A mix of both these approaches has also been investigated by \citet{wu-etal-2022-generating}.
An alternative approach is to extend fine-tuning either on all of the training data \cite{tu-etal-2020-empirical} or on a subset of it \cite{yaghoobzadeh-etal-2021-increasing}.
\paragraph{Analysis of debiasing.}
Given the increasing interest in debiasing methods, there have been concerns about their widespread use. \citet{schwartz-stanovsky-2022-limitations} argue that excessive balancing prevents models from learning anything (in particular, important world and commonsense knowledge), making it neither practical nor desirable.
They suggest abstaining and interacting with the user when the contextual information is insufficient, and focusing on zero- and few-shot learning approaches instead of full fine-tuning.
In this paper, we showed that balancing datasets should only be taken as a partial solution for eliminating spurious correlations.
We also showed that in this context, few-shot learning might not be effective.
\citet{mendelson-belinkov-2021-debiasing} found that debiasing methods encode more extractable information about the bias in their inner representations.
This observation is explained in a concurrent work to ours in terms of the necessity and sufficiency of the biases \cite{joshi2022all}.
In this paper, we showed that, for the word-overlap bias, our selected debiasing techniques are not robust once the whole spectrum of overlap is considered.
\section{Conclusions}
In this work, we uncovered an unexplored aspect of the well-known word-overlap bias in the NLI models. We showed a spurious correlation between the low overlap instances and the non-entailment label, namely the \textit{reverse} word-overlap bias.
We demonstrated that existing debiasing methods are not effective in mitigating the reverse bias.
We found that the generalization power of debiasing methods (the forgettable approach in particular) does not stem from minority examples.
We also showed that the word-overlap bias does not seem to come from the pre-training step of PLMs.
As future work, we plan to focus on designing new debiasing methods for mitigating the reverse bias for NLI and similar tasks. Also, building specific challenging sets for the reverse bias, similar to HANS, would help expand this line of research.
\section{Acknowledgements}
We would like to acknowledge that the idea of reverse bias was initiated in discussion with Alessandro Sordoni (MSR Montreal). Also, we want to thank the anonymous reviewers for their valuable comments, which helped us in improving the paper. Sara Rajaee is funded in part by the Netherlands Organization for Scientific Research (NWO) under project number VI.C.192.080.
\section{Limitations}
In our experiments, we have focused on two popular PLMs, BERT and RoBERTa. Using more PLMs, with diversity in the objective and architecture and evaluating their robustness is one of the extendable aspects of our work. Moreover, we evaluated three debiasing methods, but this could have been expanded to more. The other susceptible aspect to improvement is creating a more high-quality dataset for analyzing the overlap bias and its reverse. We have used SNLI as our main probing set, a crowdsourcing-based dataset that contains some noisy examples, especially in minority groups.
\section{Introduction}\label{sec1}
Kronecker coefficients are widely studied in mathematics from many points of view (e.g. symmetric polynomials, complexity theory, combinatorics). In this contribution to the volume on non-commutativity and physics, motivated by work on tensor models in physics, we review how the structure of a class of non-commutative, associative algebras $\mathcal{K}(n)$ (one for each integer $n$) leads to new algorithms for computing Kronecker coefficients. These algorithms have a geometrical interpretation in terms of sub-lattices of a lattice whose basis vectors are labelled by bi-partite ribbon graphs. Bi-partite ribbon graphs themselves are related to holography in a fundamental way (which relates quantum field theory on one geometry to string theory on another geometry). The story we present has, as central figures, the algebra $\mathcal{K}(n)$ and stringy geometrical structures (lattices, holography). This can be viewed as part of a new direction in the study of non-commutative geometric structures in mathematical physics, which encompasses stringy holography as well as more direct approaches to the emergence of geometry from algebras in traditional non-commutative geometry \cite{Connes,Madore,Majid}. The standard mathematical approach to Kronecker coefficients is to think of them as the structure constants of the commutative fusion ring of representations of symmetric groups. Bringing the non-commutative associative algebra $\mathcal{K}(n)$ to bear on the Kronecker coefficients evidences the idea pioneered in traditional non-commutative geometry, that underlying commutative mathematical structures, there are deeper non-commutative physical constructions which carry interesting hidden information.
Following the classic work of 't Hooft \cite{tHooft} it has been understood that the combinatorics of large gauge theories with gauge symmetries such as $U(N), SO(N), Sp(N)$ simplifies in the large $N$ limit. This simplification is related to the emergence of string world-sheet combinatorics in the $1/N$ expansion of physical observables, and underlies examples of gauge-string duality such as the AdS/CFT correspondence \cite{malda}. These simplifications are based on the appearance of double-line diagrams or equivalently ribbon graphs in large $N$ computations. Ribbon graphs are also related to the mathematics of holomorphic maps between two-dimensional surfaces, and in particular a special class of such maps called Belyi maps. This leads to simple mathematical models of gauge-string duality based on the correspondence between correlators of invariant matrix polynomials in matrix models (the gauge theory) and the counting of Belyi maps (which can be viewed as a combinatorial topological string theory) \cite{dMRBelyi}.
The solution of counting problems for tensor model observables can also be formulated in terms of ribbon graph counting \cite{BenGeloun:2020yau,JoSan1}. Specifically,
we consider a complex tensor variable $ \Phi_{ijk}$, with $ i , j, k \in \{ 1, 2, \cdots , N \} $ which transforms in the three-fold tensor product $( V_N \otimes V_N \otimes V_N) $
of the fundamental representation $V_N$ of $U(N)$. The space of polynomials of degree $n$ in $\Phi$ and degree $n$ in the complex conjugate $\bar \Phi$ contains a subspace of $U(N)$ invariants whose dimension, somewhat non-trivially, turns out to be equal to the number of bi-partite ribbon graphs with $n$ edges, for $ N \ge n$. It was also understood \cite{JoSan2} that there is an associative algebra $\mathcal{K}(n)$ with basis labelled by these bi-partite ribbon graphs. The dimension of the algebra, equivalently the number of bi-partite ribbon graphs with $n$ edges, is equal to the sum of squares of Kronecker coefficients :
\begin{eqnarray}
\text{Dim} ( \mathcal{K}(n) ) = \sum_{ R_1 , R_2 , R_3 \vdash n } ( C ( R_1 , R_2 , R_3 ) )^2
\end{eqnarray}
for all triples $R_1, R_2 , R_3 $ of Young diagrams with $n$ boxes. The algebra $\mathcal{K}(n)$ has a Fourier basis $Q^{ R_1, R_2 , R_3 }_{ \tau_1 , \tau_2 } $ labelled by triples of Young diagrams along with a pair of multiplicity indices $\tau_1 , \tau_2$ which each range over
$\{ 1, 2, \cdots , C(R_1, R_2, R_3) \}$ \cite{PCA2016,JoSan2}. The Fourier basis is defined using Clebsch-Gordan coefficients of the symmetric group $S_n$ and gives the Wedderburn-Artin decomposition of $\mathcal{K}(n)$ as a direct sum of matrix algebras, where the matrix algebras exist for triples of Young diagrams having non-vanishing Kronecker coefficient. The papers \cite{DR1706,DGT1707,IMM1710} have related results on tensor model observables with a similar algebraic perspective.
In the paper \cite{BenGeloun:2020yau}, we observed that the matrix-subspace of $\mathcal{K}(n)$ associated with a triple of Young diagrams can be obtained as the eigenspace of a specified set of central elements in $\mathcal{K}(n)$, acting on $\mathcal{K}(n)$ by multiplication. This led to the realization of the square of the Kronecker coefficient for any triple $(R_1, R_2 , R_3) $ as the number of linearly independent null vectors of a certain combinatorially constructed integer matrix. Integer matrix algorithms give a constructive combinatorial algorithm for arriving at a basis in the space of null vectors. The construction has a natural interpretation in terms of quantum mechanics in a Hilbert space spanned by the bi-partite ribbon graphs with $n$ edges. Furthermore, considering an involution operator on $\mathcal{K}(n)$, we arrive at the construction of the Kronecker coefficient itself, and therefore propose
an answer to the old question of Murnaghan on a combinatorial interpretation of the Kronecker coefficient \cite{MurnaghanOnReps,StanleyConjecture}. The complexity of the combinatorial algorithm is left as an interesting problem for the future.
In section \ref{sec:MatBel} we describe the combinatorial model of gauge-string duality arising from complex matrix model correlators. In section \ref{sect:CandAlg} we explain the quantum mechanics on the algebra of bi-partite ribbon graphs. In section \ref{HNF-algo} we explain the construction of the integer matrix for every triple, whose null space has a dimension given by the square of the Kronecker coefficient for that triple. Further considerations lead us to the construction of several other pertinent integer sub-lattices in the lattice of bi-partite ribbon graphs. An interesting corollary is the identity (\ref{sumsinglets}) giving the sum of Kronecker coefficients for triples of Young diagrams with $n$ boxes, in terms of the number of ribbon graphs invariant under an involution $S$ defined in section \ref{HNF-algo}. Finally we note that the use of combinatorial structures from string theory and quantum gravity, notably combinatorial topological string theory, to address mathematical group theory questions has also more recently found applications in the proof of integrality properties of partial character sums for general finite groups \cite{IDFCTS,CTSGTA}, providing an alternative to the Galois theoretic proof for these sums.
\section{ Bi-partite graphs and matrix model correlators }\label{sec:MatBel}
Consider a Gaussian complex matrix model of a complex matrix of size $N$ with partition function
\begin{equation}
\mathcal{Z} = \int [dZ] e^{ - { 1 \over 2 } tr Z Z^{ \dagger} }
\end{equation}
where $[dZ]= \prod_{ i , j } dZ^i_j d \bar Z^i_j $ is the standard measure on $N^2$ complex variables.
Using standard formulae for multi-variable Gaussian integration,
we find that the quadratic expectation value is
\begin{align}
\langle Z^{i}_{ j} ( Z^{ \dagger} )^k_l \rangle = { 1 \over \mathcal{Z}}
\int [dZ] e^{ - { 1 \over 2 } tr Z Z^{ \dagger} } Z^i_j (Z^{\dagger})^k_l = \delta^i_l \delta_j^k
\end{align}
The complex matrix model measure is invariant under transformations by matrices $U$ in the unitary group $U(N)$. The complex matrix model finds application in describing the half-BPS sector of $\mathcal{N} =4$ super-Yang-Mills theory \cite{BBNS,CJR}. Holomorphic gauge-invariant polynomial functions of $Z$ of degree $n$ correspond to half-BPS quantum states of scaling dimension $n$. These can be parametrised using permutations in $S_n$
\begin{eqnarray}
\cO_{ \sigma } ( Z ) = \sum_{ i_1 , \cdots , i_n } Z^{ i_1}_{ i_{ \sigma(1) } } Z^{ i_2}_{ i_{ \sigma(2) } } \cdots Z^{ i_n }_{ i_{\sigma (n) } }
= \prod_{ a =1}^{n} ( {\rm{tr}} Z^a)^{ C_a ( \sigma ) }
\end{eqnarray}
where $C_{ a } ( \sigma )$ is the number of cycles of length $a$ in $\sigma$. It is easy to verify that $\cO_{ \sigma } ( Z ) = \cO_{ \gamma \sigma \gamma^{-1} } ( Z ) $ for any $ \gamma \in S_n$. The correlator of a holomorphic and an anti-holomorphic operator is defined by the integral
\begin{eqnarray}
\langle \cO_{ \sigma_1} ( Z ) \cO_{ \sigma_2 } ( Z^{ \dagger} ) \rangle
= { 1 \over \mathcal{Z}}
\int [dZ] e^{ - { 1 \over 2 } tr Z Z^{ \dagger} } \cO_{ \sigma_1 } ( Z )
\cO_{ \sigma_2} ( Z^{ \dagger} ) \, .
\end{eqnarray}
This is calculated to be \cite{CJR,BrownCMMD}
\begin{eqnarray}
&& \langle \cO_{ \sigma_1} ( Z ) \cO_{ \sigma_2 } ( Z^{ \dagger} ) \rangle
= \sum_{ \sigma_3 \in S_n } \sum_{ \gamma \in S_n } \delta ( \sigma_1 \gamma \sigma_2 \gamma^{-1} \sigma_3 ) N^{ C_{ \sigma_3 } } \cr
&& = { n! \over \vert {\cal C}_1 \vert \vert {\cal C}_2 \vert }
\sum_{ \sigma_1' \in {\cal C}_1 } \sum_{ \sigma_2' \in {\cal C}_2 } \sum_{ \sigma_3 \in S_n } \delta ( \sigma_1' \sigma_2' \sigma_3 ) N^{ C_{ \sigma_3 } } \, .
\end{eqnarray}
Here $ {\cal C}_1 $ is the conjugacy class of $\sigma_1 $, ${\cal C}_2$ is the conjugacy class of $\sigma_2$ and $C_{ \sigma_3 }$ is the number of cycles in the permutation $ \sigma_3$. We have used $ \vert{\cal C}_1 \vert , \vert {\cal C}_2 \vert $ to denote the sizes of the conjugacy classes ${\cal C}_1 , {\cal C}_2$. We can also separate out the sum over $\sigma_3$ into all the conjugacy classes, labelled by ${\cal C}_3$ in $S_n$ :
\begin{eqnarray}
\langle \cO_{ \sigma_1} ( Z ) \cO_{ \sigma_2 } ( Z^{ \dagger} ) \rangle
= { n! \over \vert {\cal C}_1 \vert \vert {\cal C}_2 \vert }
\sum_{ \sigma_1' \in {\cal C}_1 } \sum_{ \sigma_2' \in {\cal C}_2 } \sum_{ {\cal C}_3} \sum_{ \sigma_3 \in {\cal C}_3 } \delta ( \sigma_1' \sigma_2' \sigma_3 ) N^{ C ({\cal C}_3) } \, .
\end{eqnarray}
In the last expression we have used $C({\cal C}_3)$ for the number of cycles in any permutation belonging to the conjugacy class ${\cal C}_3$.
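The identity $\cO_{ \sigma } ( Z ) = \prod_{a} ( {\rm{tr}} Z^a)^{ C_a ( \sigma ) }$ above can be checked numerically for small $n$; a plain-Python sketch (permutations are 0-indexed lists, matrices are nested lists):

```python
from itertools import product

def contract(Z, sigma):
    """O_sigma(Z) = sum over i_1..i_n of Z^{i_1}_{i_sigma(1)} ... Z^{i_n}_{i_sigma(n)}."""
    n, N = len(sigma), len(Z)
    total = 0
    for idx in product(range(N), repeat=n):
        term = 1
        for a in range(n):
            term *= Z[idx[a]][idx[sigma[a]]]
        total += term
    return total

def trace_power(Z, a):
    """tr(Z^a) by repeated matrix multiplication."""
    N = len(Z)
    M = [[int(i == j) for j in range(N)] for i in range(N)]
    for _ in range(a):
        M = [[sum(M[i][k] * Z[k][j] for k in range(N)) for j in range(N)]
             for i in range(N)]
    return sum(M[i][i] for i in range(N))

def cycle_product(Z, sigma):
    """Product over cycles of sigma of tr(Z^(cycle length))."""
    seen, result = [False] * len(sigma), 1
    for i in range(len(sigma)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j, length = sigma[j], length + 1
            result *= trace_power(Z, length)
    return result
```

For example, a 3-cycle reproduces ${\rm tr} Z^3$, while a transposition times a fixed point reproduces $({\rm tr} Z^2)({\rm tr} Z)$.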
It is convenient to normalise the observables as follows :
\begin{eqnarray}\label{corrWS}
&& N^{-n} { N^{ C ( {\cal C}_1 ) + C ( {\cal C}_2 ) } \vert {\cal C}_1 \vert \vert {\cal C}_2 \vert \over n ! n! }
\langle \cO_{ \sigma_1} ( Z ) \cO_{ \sigma_2 } ( Z^{ \dagger} ) \rangle \cr
&& =\sum_{ {\cal C}_3 } \left ( { 1 \over n! } \sum_{ \sigma_1' \in {\cal C}_1 }
\sum_{ \sigma_2' \in {\cal C}_2 } \sum_{ \sigma_3 \in {\cal C}_3 }
\delta ( \sigma_1' \sigma_2' \sigma_3 ) N^{ C ( {\cal C}_1 ) + C ( {\cal C}_2 ) + C ( {\cal C}_3 ) - n } \right )
\end{eqnarray}
The sum for fixed conjugacy classes ${\cal C}_1 , {\cal C}_2, {\cal C}_3 $ has a nice geometrical interpretation in terms of holomorphic maps from a two-dimensional surface to a sphere, with three branch points on the sphere. In the inverse image of the branch points, the branching structure is described by the conjugacy classes ${\cal C}_1 , {\cal C}_2 , {\cal C}_3$. The genus $h$ of the surface is given by
\begin{eqnarray}
(2-2h ) = C ( {\cal C}_1 ) + C ( {\cal C}_2 ) + C ( {\cal C}_3 ) - n
\end{eqnarray}
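The genus formula can be evaluated directly from a pair of permutations, with $\sigma_3 = (\sigma_1\sigma_2)^{-1}$ enforcing the delta-function constraint; a small sketch:

```python
def n_cycles(sigma):
    """Number of cycles of a permutation given as a 0-indexed list."""
    seen, count = [False] * len(sigma), 0
    for i in range(len(sigma)):
        if not seen[i]:
            count += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = sigma[j]
    return count

def genus(s1, s2):
    """Genus h with 2 - 2h = C(s1) + C(s2) + C(s3) - n, where s3 = (s1 s2)^{-1}."""
    n = len(s1)
    s1s2 = [s1[s2[i]] for i in range(n)]
    s3 = [0] * n
    for i, j in enumerate(s1s2):
        s3[j] = i  # inverse of s1∘s2, so s1∘s2∘s3 = id
    chi = n_cycles(s1) + n_cycles(s2) + n_cycles(s3) - n
    return (2 - chi) // 2
```

Two transpositions in $S_2$ multiply to the identity and give a sphere ($h=0$), while a pair of 3-cycles whose product is again a 3-cycle gives a torus ($h=1$).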
The formula \eqref{corrWS} shows that the naturally normalised matrix model correlators can be interpreted as observables in a topological string with sphere (complex projective line) target, where the string path integral is localised on holomorphic maps with three branch points, which can be chosen to be $0,1, \infty $. Such maps are distinguished maps of interest in number theory, called Belyi maps. There are also nice combinatorial objects, namely bi-partite ribbon graphs, associated with these Belyi maps. These can be thought of as the inverse image of the interval $[0,1]$. For a review of the link between bi-partite ribbon graphs and Belyi maps, with references to the original literature, we refer the reader to \cite{dMRBelyi}. A good textbook discussion of bi-partite ribbon graphs and Belyi maps is in \cite{LZBelyi}.
\section{Counting, algebra and quantum mechanics of tensor model observables}
\label{sect:CandAlg}
\noindent{\bf Counting of rank $d$ tensor model observables.}
The counting of rank $d$ complex tensor observables or tensor invariants, containing $n$ copies of a complex tensor variable $\Phi$ and its conjugate $\bar \Phi$, has been performed in \cite{JoSan1}.
The enumeration method of tensor invariants used therein is based on the counting of equivalence classes
in Cartesian products of the symmetric group $S_n$ of $n$ elements, generated by certain subgroup actions. We describe it here at rank $d=3$, using the ``gauge-fixed formulation'' from \cite{JoSan1,JoSan2}, while the generalisation to any $d$ is straightforward.
All the contractions between the indices of $n$ tensors and $n$ conjugate tensors producing $U(N) \times U(N) \times U(N)$ invariants can be described by triples of permutations $\sigma_1, \sigma_2, \sigma_3 \in S_n$ depicted in Figure \ref{countingC}.
\begin{figure}[h]\begin{center}
\begin{minipage}[t]{.8\textwidth}\centering
\includegraphics[angle=0, width=6cm, height=1.8cm]{contract2.pdf}
\vspace{0.3cm}
\caption{ {\small The contraction of $n$ tensors $\Phi_{c_1c_2c_3}$ and
$n$ tensors $\bar\Phi_{c_1c_2c_3}$ identified as
permutation triple { $( \sigma _1, \sigma _2, \sigma _3)$}.}}
\label{countingC}
\end{minipage}
\end{center}
\end{figure}
Using Figure \ref{countingC} and fixing the gauge $\sigma_3={\rm id}$, we obtain tensor invariants from permutation pairs, where pairs related by conjugation with a diagonal adjoint action on $ {S}_n \times {S}_n$ are in the same equivalence class :
\begin{equation}
( \tilde \sigma _1 , \tilde \sigma _2 ) \sim (\tau \tilde \sigma _1 \tau^{-1} , \; \tau \tilde \sigma _2 \tau^{-1} \,) \,, \quad
\quad \tilde \sigma _i, \tau \in {S}_n\,.
\label{orbitad}
\end{equation}
These equivalence classes are also known to enumerate bi-partite ribbon graphs. There is thus a graphical interpretation of the rank $3$ tensor invariants in terms of bipartite ribbon graphs.
Burnside's lemma allows us to write the number of equivalence classes under a group action
in terms of the fixed points of the same action. This leads us to
\begin{eqnarray}
{\rm Rib}(n) =
\sum_{ p \,\vdash n } {\rm Sym} ( p ) \,,
\label{blem}
\end{eqnarray}
where the sum is performed over all partitions $p$ of $n$, denoted $p \,\vdash n$,
and where $ {\rm Sym} (p):= \prod_{i=1}^n (i^{p_i})(p_i!)$, with $p_i$ denoting the number of parts of $p$ equal to $i$.
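The count \eqref{blem} is straightforward to evaluate, and can be cross-checked against a brute-force Burnside count of orbits of $S_n \times S_n$ under diagonal conjugation; a sketch (here $p_i$ is the number of parts of $p$ equal to $i$):

```python
from itertools import permutations
from math import factorial

def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def sym(p):
    """Sym(p) = prod_i i^{p_i} * p_i!, with p_i the multiplicity of part i."""
    mult = {}
    for part in p:
        mult[part] = mult.get(part, 0) + 1
    out = 1
    for i, m in mult.items():
        out *= (i ** m) * factorial(m)
    return out

def rib(n):
    """Rib(n) = sum over partitions p of n of Sym(p)."""
    return sum(sym(p) for p in partitions(n))

def rib_brute(n):
    """Burnside count: orbits of S_n x S_n under simultaneous conjugation."""
    perms = list(permutations(range(n)))
    def conj(mu, s):
        inv = [0] * n
        for i, j in enumerate(mu):
            inv[j] = i
        return tuple(mu[s[inv[i]]] for i in range(n))
    fixed = sum(1 for mu in perms for a in perms for b in perms
                if conj(mu, a) == a and conj(mu, b) == b)
    return fixed // factorial(n)
```

For $n = 1, 2, 3$ this gives ${\rm Rib}(n) = 1, 4, 11$, the number of bipartite ribbon graphs with $n$ edges.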
There is another expression of the same counting as a sum
over triples $ ( R_1 , R_2 , R_3) $ of irreducible representations (irreps) \cite{JoSan2}:
\begin{eqnarray}
{\rm Rib}(n) = \sum_{ R_1 , R_2 , R_3 \vdash n } C ( R_1 , R_2 , R_3 )^2
\end{eqnarray}
The Kronecker coefficient $ C ( R_1 , R_2 , R_3 )$ is a non-negative integer
that gives the number of times $R_3$ appears in the decomposition of the tensor product $ R_1 \otimes R_2$.
\
\noindent{\bf The algebra $\mathcal{K}(n)$ of bipartite ribbon graphs.}
For the group action given in \eqref{orbitad}, we consider
the set of orbits, each of which is associated with a ribbon graph.
We introduce a label $r \in \{ 1, \dots, {\rm Rib}(n)\}$.
Consider $ {\mathbb C} ( S_n ) \otimes_{ {\mathbb C} } {\mathbb C} ( S_n )$, simply denoted
$ {\mathbb C} ( S_n ) \otimes {\mathbb C} ( S_n ) $.
For each ribbon graph labeled by $r$,
consider its orbit representative given by the pair
$( \tau_1^{(r)} , \tau_2^{(r)} )$.
The element $E_r$ of $ {\mathbb C} ( S_n ) \otimes {\mathbb C} ( S_n ) $
is defined as
\begin{eqnarray}
\label{classr}
E_r=
{ 1 \over n! } \sum_{ \mu \in S_n } \mu \tau_1^{(r)} \mu^{-1} \otimes \mu \tau_2^{(r)} \mu^{-1}
\end{eqnarray}
Now, we define the ${\mathbb C}$-vector subspace $\mathcal{K}(n) \subset {\mathbb C} ( S_n ) \otimes {\mathbb C} ( S_n )$ spanned by these elements:
\begin{equation}
\mathcal{K}(n) = {\rm Span}_{{\mathbb C}}\Big\{
E_r \,, r=1, \dots, {\rm Rib}(n)
\Big\}
\label{graphbasis}
\end{equation}
The dimension of $\mathcal{K}(n) $ is the number of bipartite ribbon graphs
\begin{equation} \text{Dim} ( \mathcal{K} ( n ) ) = {\rm Rib}(n)
\end{equation}
$ \mathcal{K}(n)$ is a permutation centralizer algebra (PCA) \cite{PCA2016} - a subspace of a permutation algebra, here $ {\mathbb C} ( S_n ) \otimes {\mathbb C} ( S_n ) $, which commutes with a sub-algebra with basis labelled by permutations, here the diagonally embedded $S_n$ permutations.
$\mathcal{K}(n) $ is also semi-simple: it has a non-degenerate symmetric bilinear pairing
given by
\begin{eqnarray}\label{Wpairing}
{\boldsymbol{\delta}} _2 : {\mathbb C}(S_n)^{\otimes 2}\times {\mathbb C}(S_n)^{\otimes 2} \to {\mathbb C}
\end{eqnarray}
which is defined in terms of the usual delta function on the group.
$ {\boldsymbol{\delta}} _2 ( \otimes_{i=1}^2 \sigma _i ; \otimes_{i=1}^2 \sigma '_i ) =
\prod_{i=1}^2 \delta ( \sigma _i \sigma '_i)$, with $\delta( \sigma )=1$, if $ \sigma ={\rm id}$, and $0$ otherwise.
This extends to linear combinations with complex coefficients.
Semi-simplicity implies that, by the Wedderburn-Artin theorem, $\mathcal{K}(n) $ admits a decomposition in simple matrix algebras. This decomposition is made manifest using what we denote as the Fourier basis
\begin{eqnarray}\label{qbasis}
Q^{R_1,R_2,R_3}_{\tau_1,\tau_2} &=&
\kappa_{R_1,R_2}
\sum_{ \sigma _1, \sigma _2 \in S_n}
\sum_{i_1,i_2,i_3, j_1,j_2}
C^{R_1,R_2; R_3 , \tau_1 }_{ i_1 , i_2 ; i_3 } C^{R_1,R_2; R_3, \tau_2 }_{ j_1 , j_2 ; i_3 }
\crcr
&\times&
D^{ R_1 }_{ i_1 j_1} ( \sigma_1 ) D^{R_2}_{ i_2 j_2 } ( \sigma_2 ) \, \sigma_1 \otimes \sigma_2
\end{eqnarray}
$D^R_{ ij} ( \sigma ) $ are the matrix elements of
the linear operator $D^R( \sigma )$ in an orthonormal basis for the irrep $R$. The indices
$ \tau_1 , \tau_2 $ run over an orthonormal basis for the multiplicity space of $R_3$ appearing in the tensor
decomposition of $ R_1 \otimes R_2$. This multiplicity is equal to the Kronecker coefficient $C ( R_1 , R_2 , R_3 )$. $\kappa_{R_1,R_2} = \frac{d(R_1)d(R_2)}{(n!)^2}$ is a normalization factor, where
$d(R_i)$ is the dimension of the irrep $R_i$. $C^{R_1,R_2; R_3 , \tau_1 }_{ i_1 , i_2 ;i_3 } $ are Clebsch-Gordan coefficients of the representations of $S_n$.
\
\noindent{\bf
Quantum mechanics of bipartite ribbon graphs.}
We define a sesquilinear form on $ {\mathbb C} ( S_n ) \otimes {\mathbb C} ( S_n )$ as
\begin{eqnarray}\label{def:innerprod}
g ( \sum_i a_i \alpha_{1i} \otimes \alpha_{2i} , \sum_j b_j \beta_{1j} \otimes \beta_{2j})
= \sum_{i,j} \bar a_i b_j \;
\delta (\alpha_{1i} ^{-1 }\beta_{1j} ) \delta ( \alpha_{2i} ^{-1} \beta_{2j} )
\end{eqnarray}
where $a_i,b_i \in {\mathbb C}$, $\alpha_{1i}, \alpha_{2i} ,\beta_{1j} , \beta_{2j} \in S_n$, and where the bar means complex conjugation.
One checks that
$g$ is nondegenerate and therefore induces an inner product on ${\mathbb C} ( S_n) \otimes {\mathbb C}(S_n)$.
We restrict $g$ to give an inner product on $\mathcal{K}(n)\subset {\mathbb C} ( S_n) \otimes {\mathbb C} ( S_n) $; consequently, $\mathcal{K}(n)$ is an algebra which is also a Hilbert space.
There is another operator that will be of prominent use in what follows.
Consider the linear conjugation operator $S: {\mathbb C}(S_n) \to {\mathbb C}(S_n)$ which maps a linear combination
$A = \sum_{i} c_i \sigma _i \in {\mathbb C}(S_n)$ to $S(A) := \sum_{i} c_i \sigma _i^{-1}$.
Extending this operation to ${\mathbb C} ( S_n ) \otimes {\mathbb C} ( S_n) $ by inverting the permutation in each tensor factor,
one easily checks that $S^2 = {\rm id}$, so $S$ is an involution.
Let us discuss the Hermitian operators in our setting
that could play the role of Hamiltonian operators.
For a conjugacy class $ {\cal C}_{\mu} \subset S_n $ labelled by $\mu \vdash n$, a partition of $n$, we have a central element
$
T_{ \mu } = \sum_{ \sigma \in {\cal C}_{ \mu } } \sigma
$
that obviously obeys $ \gamma T_\mu \gamma^{-1} = T_\mu$, for any $ \gamma \in S_n$.
We are interested in the particular partitions $\mu= [k,1^{n-k}]$, consisting of a single cycle of length $k$ and all remaining cycles of length 1.
The corresponding operator is denoted $T_k$.
From the $T_k$ one builds operators
that multiplicatively generate the centre of $\mathcal{K}(n)$ \cite{KR1911}.
At any $n \ge 2$, we define elements in ${\mathbb C} ( S_n) \otimes {\mathbb C}(S_n)$
\begin{eqnarray}\label{tki}
T^{(1)}_k = T_k \otimes 1 \,, \qquad
T^{(2)}_k = 1 \otimes T_k \,, \qquad
T^{(3)}_k = \sum_{ \sigma \in {\cal C}_k } \sigma \otimes \sigma \;.
\end{eqnarray}
\vspace{-0.4cm}
Sums of products of the $T^{(i)}_k$'s, $k=1, \dots, n$, generate the centre
$\mathcal{Z} (\mathcal{K}(n))$ of $\mathcal{K}(n)$. In fact, one does not need the entire set $k=1, \dots, n$
to generate the centre: a smaller range $k=1, \dots, k_*(n) \le n$ suffices.
\vspace{-0.4cm}
We showed that the $T_k^{(i)} $ are Hermitian operators on $\mathcal{K}(n)$
with respect to the inner product defined by \eqref{def:innerprod}:
$ g ( E_s , T_k^{ (i)} E_r )= g ( T_k^{ (i)} E_s , E_r )$.
(See proof of Proposition 3 in \cite{BenGeloun:2020yau}.)
The operators $T_k^{(i)} $, for $i$ ranging over $\{ 1,2, 3 \}$ and $k$ ranging over some subset of $\{ 2,3, \cdots , n \}$, form a set of commuting Hermitian operators on $\mathcal{K}(n)$. The commutativity follows from the fact that they are central elements of $ \mathcal{K}(n)$.
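As a concrete illustration (our own check, not taken from \cite{BenGeloun:2020yau}; the choice $n=3$, $k=2$ and all variable names are ours), the following Python sketch builds the left-multiplication matrices of $T^{(1)}_2$, $T^{(2)}_2$ and $T^{(3)}_2$ on the $36$-dimensional space ${\mathbb C}(S_3)\otimes{\mathbb C}(S_3)$, in the basis of pairs of permutations. Since that basis is orthonormal for $g$ of \eqref{def:innerprod}, Hermiticity amounts to symmetry of these real matrices.

```python
import itertools
import numpy as np

n = 3
perms = list(itertools.permutations(range(n)))        # the group S_3
idx = {p: i for i, p in enumerate(perms)}

def mul(a, b):                                        # composition (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(n))

# the conjugacy class C_2 of transpositions (one 2-cycle, one fixed point)
C2 = [p for p in perms if sum(p[i] != i for i in range(n)) == 2]

N = len(perms) ** 2                                   # dim C(S_3) x C(S_3) = 36
def pos(a, b):
    return idx[a] * len(perms) + idx[b]

# left multiplication by T_2 x 1, 1 x T_2 and sum_c c x c, as in eq. (tki)
T1, T2, T3 = (np.zeros((N, N)) for _ in range(3))
for a, b in itertools.product(perms, perms):
    col = pos(a, b)
    for c in C2:
        T1[pos(mul(c, a), b), col] += 1
        T2[pos(a, mul(c, b)), col] += 1
        T3[pos(mul(c, a), mul(c, b)), col] += 1
```

One can then check directly that the three matrices are symmetric and mutually commuting.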
We consider such sets of operators as Hamiltonians and we define a quantum-mechanical time evolution of states in $\mathcal{K} ( n ) $ of the form
\begin{eqnarray}
E_r (t) = e^{ - i t T_{ k}^{(i)} } E_r
\end{eqnarray}
where the $E_r (t)$ are time-dependent ribbon graph states.
The action of the $T_{ k}^{(i)}$'s on the ribbon graph basis yields a crucial
fact. Let $(\mathcal{M} ^{ (i)}_k )_s^t $ denote the matrix elements of $T_k^{(i)}$ as a linear operator. We have
\begin{eqnarray}\label{Ter}
T_k^{(i)} E_s = \sum_{ t } (\mathcal{M} ^{ (i)}_k )_s^t E_t
\end{eqnarray}
The matrix elements $ (\mathcal{M} ^{ (i)}_k )_s^t $ are non-negative integers
(Proposition 2 in \cite{BenGeloun:2020yau}).
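This can be seen explicitly in a small case (an illustrative computation of ours, for $n=3$ and $k=2$, with hypothetical variable names): one enumerates the orbits of the diagonal conjugation action on $S_3\times S_3$, i.e. the ribbon graph basis vectors $E_r$, acts with $T^{(1)}_2$ on each orbit sum, and reads off the matrix $\mathcal{M}^{(1)}_2$.

```python
import itertools
import numpy as np

n = 3
perms = list(itertools.permutations(range(n)))

def mul(a, b):
    return tuple(a[b[i]] for i in range(n))

def inv(a):
    return tuple(sorted(range(n), key=lambda i: a[i]))

# orbits of the diagonal conjugation action on S_n x S_n: one orbit per E_r
orbits, seen = [], set()
for pair in itertools.product(perms, perms):
    if pair not in seen:
        orb = {(mul(mul(g, pair[0]), inv(g)), mul(mul(g, pair[1]), inv(g)))
               for g in perms}
        seen |= orb
        orbits.append(sorted(orb))

C2 = [p for p in perms if sum(p[i] != i for i in range(n)) == 2]

# act with T_2^{(1)} on each orbit sum E_s and re-expand in the E_t basis
M = np.zeros((len(orbits), len(orbits)), dtype=int)
for s, orb in enumerate(orbits):
    out = {}
    for a, b in orb:
        for c in C2:
            key = (mul(c, a), b)
            out[key] = out.get(key, 0) + 1
    for t, orb2 in enumerate(orbits):
        vals = {out.get(pair, 0) for pair in orb2}
        assert len(vals) == 1        # the result is constant on each orbit
        M[t, s] = vals.pop()
```

The matrix $M$ obtained this way indeed has non-negative integer entries.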
The $T_k^{(i)} $ operators act on the Fourier basis of $ \mathcal{K}(n)$
as follows (Proposition 4 of \cite{BenGeloun:2020yau}).
\vspace{-0.5cm}
\begin{proposition}
\label{propTaQo}
For all $k \in \{ 2, 3, \cdots, n \}$, $\{ R_i \vdash n : i \in \{ 1,2,3\} \} $, $\tau_1, \tau_2 \in [\![1, C(R_1,R_2,R_3) ]\!]$, the Fourier basis elements
$ Q^{R_1, R_2, R_3}_{\tau_1 , \tau_2}$ are eigenvectors of $T_k^{(i)} $:
\begin{eqnarray}
T_k^{(i)} Q^{R_1, R_2, R_3}_{ \tau_1 , \tau_2 }
= { \chi_{R_i} ( T_k ) \over d(R_i) } Q^{R_1, R_2, R_3}_{ \tau_1 , \tau_2 } \,,
\label{t1Qo}
\end{eqnarray}
\end{proposition}
\vspace{-0.4cm}
This means that the Fourier basis $ Q^{R_1, R_2, R_3}_{\tau_1 , \tau_2}$
is an eigenbasis of the operators $T_k^{(i)}$.
Furthermore, following Proposition 5 of \cite{BenGeloun:2020yau},
we have
\vspace{-0.5cm}
\begin{proposition}
\label{PropTkFS}
For any $\widetilde k_* \in \{ k_*(n) , k_*(n) +1 , \cdots , n \} $ the list of eigenvalues of the
reconnection operators
$$ \{ T^{(1)}_{2} , T^{(1)}_{ 3} , \cdots , T^{(1)}_{ \widetilde k_* } ; T^{(2)}_{2} , T^{(2)}_{ 3} , \cdots , T^{(2)}_{ \widetilde k_* } ; T^{(3)}_{2} , T^{(3)}_{ 3} , \cdots , T^{(3)}_{ \widetilde k_*} \}$$
uniquely determines the Young diagram triples $(R_1 , R_2 , R_3 )$.
\end{proposition}
\vspace{-0.4cm}
For each partition $p$ of $n$, the sum $T_p$ of all permutations $ \sigma $ in the conjugacy class $ {\cal C}_{ p } \subset S_n$
is a central element, i.e. it lies in $ \mathcal{Z} ( {\mathbb C} ( S_n)) $. The irreducible normalized
characters of these central elements are integers:
\begin{eqnarray} \label{chiint}
\chi_R ( T_{ p } ) / d ( R ) \in \mathbb{Z}
\end{eqnarray}
The proof combines a known number-theoretic fact, that the normalized characters of a finite group
are algebraic integers, with the rationality of the characters of irreducible representations of $ S_n$,
which follows from the Murnaghan-Nakayama Lemma.
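The following Python sketch (our own illustration; the function names are ours) implements the Murnaghan-Nakayama rule via beta-numbers and makes the integrality \eqref{chiint} explicit for small $n$:

```python
from fractions import Fraction
from math import factorial

def partitions(n, maxpart=None):
    """Partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def mn_char(lam, mu):
    """chi_lam(mu) by Murnaghan-Nakayama: removing a rim hook of length k
    moves one beta-number down by k; the sign counts beta-numbers it jumps."""
    if not mu:
        return 1 if not lam else 0
    k, rest = mu[0], mu[1:]
    r = len(lam)
    beta = [lam[i] + r - 1 - i for i in range(r)]   # strictly decreasing
    bset, total = set(beta), 0
    for b in beta:
        if b >= k and b - k not in bset:
            sign = (-1) ** sum(1 for c in beta if b - k < c < b)
            nb = sorted([c for c in beta if c != b] + [b - k], reverse=True)
            nlam = tuple(x - (r - 1 - j) for j, x in enumerate(nb))
            total += sign * mn_char(tuple(x for x in nlam if x > 0), rest)
    return total

def z_mu(mu):
    """Order of the centralizer of the class mu (so |C_mu| = n!/z_mu)."""
    z = 1
    for k in set(mu):
        z *= k ** mu.count(k) * factorial(mu.count(k))
    return z

# example in S_4: chi_{(3,1)} on a transposition is 1, d(3,1) = 3,
# |C_2| = 6, so the normalized character chi_{(3,1)}(T_2)/d(R) = 2
assert mn_char((3, 1), (2, 1, 1)) == 1
assert mn_char((3, 1), (1, 1, 1, 1)) == 3
```

With these ingredients one can also cross-check the identity, used throughout, that the sum of squared Kronecker coefficients over triples of Young diagrams with $n$ boxes equals ${\rm Rib}(n) = \sum_{\mu \vdash n} z_\mu$.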
\
\noindent{\bf Stacking $T_k^{(i)}$ matrices and common eigenspace.}
Using Proposition \ref{PropTkFS}, the Fourier subspace for a given triple $ ( R_1 , R_2 , R_3 )$
is uniquely specified as the common eigenspace, with specified eigenvalues, of the reconnection operators
$T^{(i)}_k$ for $k \in \{ 2, \dots, \widetilde k_* \} $ and $i\in \{ 1,2,3 \} $,
where $ k_*(n) \le \widetilde k_* \le n $.
The numerator $ \chi_R ( T_k)$ is given by $\chi_R ( T_k) = |{\cal C}_k| \, \chi_R ( \sigma ) $ for $ \sigma \in {\cal C}_k$. The character $ \chi_R ( \sigma )$ can be computed with the combinatorial Murnaghan-Nakayama rule \cite{MurnaghanOnReps}. The dimension $d(R)$ is obtained from the hook formula for dimensions.
The vectors in the Fourier subspace for a triple $(R_1, R_2 , R_3)$ solve the following matrix equation
\begin{eqnarray}\label{stack}
\scriptsize{ \left[\begin{array}{c}\
\mathcal{M} ^{(1)}_2 - { \chi_{R_1} ( T_2 ) \over d({R_1}) } \\
\vdots \\
\mathcal{M} ^{(1)}_{\widetilde k_*} - { \chi_{R_1} ( T_{\widetilde k_*} ) \over d({R_1}) } \\
(1 \rightarrow 2) \\
(1 \rightarrow 3)
\end{array}\right]
\cdot v
= \mathbf{0}
}
\end{eqnarray}
where the notation $(1 \rightarrow j) $, $j=2,3$,
means that we replace the matrix blocks $\mathcal{M} ^{(1)}_k$ by $ \mathcal{M} ^{(j)}_k $.
This rectangular array gives the matrix elements of a linear operator mapping $ \mathcal{K}(n)$
to $3 (\widetilde k_* -1 ) $ copies of $ \mathcal{K}(n) $, using the geometric basis of ribbon graph vectors. From \eqref{chiint}, the normalized characters are integers. Renaming the integer matrix in \eqref{stack} as
$\mathcal{L}_{ R_1 , R_2 , R_3 } $, we have
\begin{eqnarray}\label{Xv}
\mathcal{L}_{ R_1 , R_2 , R_3 } \cdot v = 0
\end{eqnarray}
We then have, for each triple of Young diagrams, the problem of finding the null space of an integer matrix. Null spaces of integer matrices have integer null vector bases. These can be interpreted in terms of lattices and can be constructed via well known algorithms.
\section{The Hermite Normal Forms algorithm and lattice interpretation of kernels}\label{HNF-algo}
We are interested in solving the linear system $X \cdot v = 0$,
where $X= \mathcal{L}_{ R_1 , R_2 , R_3 } $ \eqref{Xv} has only integer matrix entries.
A crucial fact about the null spaces of integer matrices is that they have bases given as integer vectors. This follows from the theory of Hermite normal forms (HNF) and has an interpretation in terms of sub-lattices \cite{Cohen}\cite{Schrijver}.
The null space of the integer matrix $X$ is the span of a set of null vectors which can be chosen to be integer vectors, i.e. integral linear combinations of the $E_r$. A key result from the theory of integer matrices and lattices is that any integer matrix $A$ (square or rectangular; we use $ A = X^T$) has a unique HNF. This means that $ A $ has a decomposition $A= U h$, with $U $ a unimodular matrix, i.e. an integer matrix of determinant $ \pm 1$, and $h$ an integer matrix computed by
an integrality-preserving algorithm.
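In practice (a toy illustration of ours, with a made-up matrix standing in for an actual $\mathcal{L}_{R_1,R_2,R_3}$), an integer basis of the null space can be obtained by clearing denominators in a rational basis; computer algebra systems such as SymPy also provide HNF routines directly.

```python
from math import lcm
from sympy import Matrix

# toy stand-in for the integer matrix X: the third row is row1 + row2,
# so the null space is 2-dimensional
X = Matrix([
    [2, 4, -2, 0],
    [1, 2,  3, 5],
    [3, 6,  1, 5],
])

rational_basis = X.nullspace()              # basis of ker(X) over Q
integer_basis = []
for v in rational_basis:
    d = lcm(*[int(e.q) for e in v])         # least common denominator
    integer_basis.append(v * d)             # an integer null vector
```

The resulting vectors span the same null space and have integer entries, as the HNF theory guarantees.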
In the present application we have a lattice $\mathbb{Z}^{ {\rm Rib}(n) } \subset {\mathbb R}^{ {\rm Rib}(n) } $
which is interpreted as the space of integer linear combinations of the geometric ribbon graph basis vectors $E_r$ of the ribbon graph algebra $ \mathcal{K}(n)$.
Based on these facts, we provide the construction of $ C(R_1,R_2,R_3)^2 $ as the dimension of a sub-lattice of the lattice of ribbon graphs.
The HNF procedure achieves the proof of the
following theorem (Theorem 1 in \cite{BenGeloun:2020yau}):
\vspace{-0.4cm}
\begin{theorem}\label{theo:C2}
For every triple of Young diagrams $(R_1 , R_2 , R_3 ) $ with $n$ boxes, the lattice ${\mathbb Z}^{ | {\rm Rib}(n) | }$
of integer linear combinations of the geometric basis vectors $E_r$ of $ \mathcal{K} ( n ) $ contains a sub-lattice
of dimension $ ( C ( R_1 , R_2 , R_3 ))^2 $ spanned by a basis of integer null vectors of the operator
$ X$, which is $ \mathcal{L}_{ R_1, R_2 , R_3 } $ from \eqref{Xv}.
\end{theorem}
\vspace{-0.4cm}
The action of operator $S$ (see section \ref{sect:CandAlg}) on the geometrical ribbon graph basis $E_r$ or on the Fourier basis $Q$ of $ \mathcal{K} ( n )$ has key properties that will allow us to interpret the dimension of various lattice subspaces
of ${\mathbb Z}^{ | {\rm Rib}(n) | }$ in terms of sums of products of Kronecker coefficients.
Let us denote the vector space of ribbon graphs, which is the underlying vector space of the algebra $ \mathcal{K} ( n ) $,
by $ V^{ {\rm Rib}(n) } $. $ V^{ {\rm Rib}(n) } $ has a decomposition according to the eigenvalues of $S$:
\begin{eqnarray}
V^{ {\rm Rib}(n) } = V^{ {\rm Rib}(n) }_{ S=1} \oplus V^{ {\rm Rib}(n) }_{ S=-1}
\end{eqnarray}
The action of $S$ on the Fourier basis $Q$ leads to:
\begin{eqnarray}\label{VR1R2R3}
V^{ {\rm Rib}(n) } = \bigoplus_{R_1 , R_2 , R_3} V^{ {\rm Rib} (n) :\; R_1 , R_2 , R_3 }
\end{eqnarray}
where $ V^{ {\rm Rib} (n):\; R_1 , R_2 , R_3} $ has dimension $ C ( R_1 , R_2 , R_3 )^2 $
and is spanned by the $Q^{ R_1 , R_2 , R_3 }_{ \tau_1 , \tau_2 }$ for all $\tau_1$ and $\tau_2$.
Then $ V^{ {\rm Rib} (n) :\; R_1 , R_2 , R_3 }= V^{ {\rm Rib} (n) :\; R_1 , R_2 , R_3 } _{ S =1 }\oplus V^{ {\rm Rib} (n) :\;R_1 , R_2 , R_3 } _{ S =-1 }$.
Combining this with \eqref{VR1R2R3} we then have
\begin{equation}
V^{ {\rm Rib} ( n ) } = \bigoplus_{ R_1 , R_2 , R_3 } \left ( V^{ {\rm Rib} ( n ) : R_1 , R_2 , R_3 }_{ S =1 } \oplus V^{ {\rm Rib} ( n ) : R_1 , R_2 , R_3 }_{ S =-1 } \right )
\end{equation}
We can show that
\begin{eqnarray}
\text{Dim} \left ( V^{ {\rm Rib} ( n ) : R_1 , R_2 , R_3 }_{ S =-1 } \right )
&=& { C ( R_1 , R_2 , R_3) ( C ( R_1 , R_2 , R_3 ) -1 ) \over 2 } \crcr
& = & \text{Dim} \left ( P^{ R_1 , R_2 , R_3 } V^{ {\rm Rib} (n ) }_{ \pairs^-} \right )
\label{dim-}
\end{eqnarray}
with $P^{ R_1 , R_2 , R_3 }$ the projector onto $V^{ {\rm Rib} ( n ) : R_1 , R_2 , R_3 }$ and $V^{ {\rm Rib}(n) }_{ S=-1} = V^{ {\rm Rib} ( n ) }_{ \pairs^- } $. Similarly, we can show
\begin{eqnarray}
&&
\text{Dim} \left ( V^{ {\rm Rib} ( n ) : R_1 , R_2 , R_3 }_{ S =+1 } \right ) = { C ( R_1 , R_2 , R_3 ) ( C ( R_1 , R_2 , R_3 ) + 1 ) \over 2 } \cr
& & = \text{Dim} \left ( P^{ R_1 , R_2 , R_3 } V^{ {\rm Rib} (n ) }_{ \pairs^+} \right ) + \text{Dim}
\left ( P^{ R_1 , R_2 , R_3 } V^{ {\rm Rib} (n ) }_{ \singlets} \right )
\label{dim+sing}
\end{eqnarray}
with the decomposition
$V^{ {\rm Rib}(n) }_{ S=1} = V^{ {\rm Rib} ( n ) }_{ \pairs^+ } \oplus V_{ \singlets } $, that defines $ V^{ {\rm Rib} ( n ) }_{ \pairs^+ } $, the subspace spanned by $\{ ( E_r^{(n)} + E_r^{ (\bar n )} ) \}$,
and $ V_{ \singlets } $ the subspace spanned by the $\{E_r^{ (s)} \}$, the ribbons obeying $SE_r^{ (s)} = E_r^{ (s)}$.
Note that we do not have separate expressions for the two terms in the sum above
in terms of Kronecker coefficients, since we do not expect the $P^{ R_1 , R_2 , R_3 }$ to commute with the projection of $V^{ {\rm Rib}(n) }_{ S=1}$ into the separate summands
$V^{ {\rm Rib}(n) }_{ \singlets }$ and $V^{ {\rm Rib}(n) }_{ \pairs^+ }$.
Using once again the HNF procedure
and the outcome of the above discussion,
we reach the statement (Theorem 2 in \cite{BenGeloun:2020yau})
\vspace{-0.3cm}
\begin{theorem}
\label{addLatt}
For every triple of Young diagrams $(R_1 , R_2 , R_3 ) $ with $n$ boxes, there are three
constructible sub-lattices of ${\mathbb Z}^{ {\rm Rib}(n) }$ of respective dimensions
${ C ( R_1 , R_2 , R_3 ) ( C ( R_1 , R_2 , R_3 ) +1) /2 } $,
${ C ( R_1 , R_2 , R_3 ) ( C ( R_1 , R_2 , R_3 ) -1) / 2 } $,
and $C ( R_1 , R_2 , R_3 )$.
\end{theorem}
\vspace{-0.4cm}
If we perform the sum over $ R_1 , R_2 , R_3 $ in \eqref{dim+sing}, we have
$
\text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ S =+1 } \right )
= \text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ \pairs^+ } \right ) + \text{Dim} \left ( V_{ \singlets } \right )
$
and
$
\text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ S = -1 } \right )
= \text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ \pairs^- } \right )$.
Since $
\text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ \pairs^+ } \right ) = \text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ \pairs^- } \right )
$, we have
\begin{eqnarray}\label{sumsinglets}
&& \hbox{ Number of bipartite ribbon graphs with $n$ edges invariant under } S \cr
&& = \text{Dim} \left ( V^{ {\rm Rib} ( n ) }_{ \singlets } \right ) = \sum_{ R_1 , R_2 , R_3 } C ( R_1 , R_2 , R_3 )
\end{eqnarray}
While the sum, over triples of Young diagrams with $n$ boxes, of the squares of the Kronecker coefficients gives
the number of ribbon graphs with $n$ edges, the sum of the Kronecker coefficients gives the number of singlet ribbon graphs.
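As a small check of \eqref{sumsinglets} (an illustration of ours for $n=3$, with the $S_3$ character table hard-coded), one can enumerate ribbon graphs as orbits of the diagonal conjugation action and compare the number of $S$-invariant ones with the character-theoretic sum of Kronecker coefficients:

```python
import itertools
from fractions import Fraction

n = 3
perms = list(itertools.permutations(range(n)))

def mul(a, b):
    return tuple(a[b[i]] for i in range(n))

def inv(a):
    return tuple(sorted(range(n), key=lambda i: a[i]))

# ribbon graphs with n edges = orbits of diagonal conjugation on S_n x S_n
orbits, seen = [], set()
for pair in itertools.product(perms, perms):
    if pair not in seen:
        orb = frozenset((mul(mul(g, pair[0]), inv(g)),
                         mul(mul(g, pair[1]), inv(g))) for g in perms)
        seen |= set(orb)
        orbits.append(orb)

# S inverts both permutations; singlets are the orbits fixed by S
singlets = [o for o in orbits
            if frozenset((inv(a), inv(b)) for a, b in o) == o]

# sum of Kronecker coefficients from the S_3 character table:
# classes (1^3), (2,1), (3) of sizes 1, 3, 2; irreps trivial, sign, standard
sizes = [1, 3, 2]
chars = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]
kron_sum = sum(sum(Fraction(s * a * b * c, 6)
                   for s, a, b, c in zip(sizes, R1, R2, R3))
               for R1 in chars for R2 in chars for R3 in chars)
```

The two counts agree, as predicted by \eqref{sumsinglets}.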
\
\begin{center}
{ \bf Acknowledgments}
\end{center}
The research of S.R. was supported by the STFC consolidated grant ST/P000754/1 ``String Theory, Gauge Theory and Duality''.
We thank the Editors Konstantinos Anagnostopoulos, Peter Schupp, George Zoupanos, for the invitation to contribute to this special volume on ``Non-commutativity and physics''.
We thank Peter Schupp for suggestions on the introduction.
\vskip.2cm
\subsection{Level spacing distribution}
The starting point to express the level spacing of a $2\times 2$ Lorentzian matrix $\mathcal{H}$ is the
distribution of the eigenvalues (we call them $x$ and $y$). This is obtained from the distribution Eq. (\ref{DistributionOfH}) by the usual variable change of the eigenvalues \cite{Brouwer}\cite{Fritz}. We start with the case of
a centered ($\mathcal{E}=0$) and standard ($\lambda=1$) distribution:
\begin{eqnarray}
P(x,y)=\frac{1}{2\pi} \frac{|x-y|^\beta}{(1+x^2)^{(\beta/2+1)}(1+y^2)^{(\beta/2+1)}}
\end{eqnarray}
$|x-y|^\beta$ is the Jacobian of the variable substitution. It acts as a repulsion term preventing the eigenvalues from coinciding \cite{Mehta}\cite{Fritz}.
The distribution of the level spacing $s$ is therefore expressed as follows:
\begin{eqnarray*}
\mathcal{P}(s) &=& \iint\delta(s-|x-y|) \frac{|x-y|^\beta}{[(1+x^2)(1+y^2)]^{(\beta/2+1)}} \frac{dx dy}{2 \pi}
\end{eqnarray*}
After a first integration over the delta function, one obtains:
\begin{eqnarray}
\mathcal{P}(s) =\frac{s^\beta}{2\pi}[p(s)+p(-s)] \label{functionPs}
\end{eqnarray}
where the function $p(s)$ reads:
\begin{eqnarray}
p(s)=\int_{-\infty}^{+\infty} \frac{dx}{[1+x^2]^{(\beta/2+1)} [1+(s-x)^2]^{(\beta/2+1)}}
\end{eqnarray}
It is interesting to notice that $p(s)=(f \ast f)(s)$,
where $\ast$ is the convolution product and $f$ is the following function:
\begin{eqnarray}
f(x)=\frac{1}{(1+x^2)^{(\beta/2+1)}}. \label{Functionf}
\end{eqnarray}
The convolution product suggests using the Fourier transform $\mathcal{F}$ (also denoted $\hat{\ }$):
\begin{eqnarray}
\mathcal{F}(p(s))= \hat{f}^2 \label{1}
\end{eqnarray}
The Fourier transform of $f$, Eq. (\ref{Functionf}), is expressed using the modified Bessel function of the second kind, BesselK \cite{Mathematica}:
\begin{eqnarray}
\hat{f}(\omega)\propto |\omega|^{\frac{\beta+1}{2}} \text{BesselK}[\frac{\beta+1}{2},|\omega|]
\end{eqnarray}
To obtain the function $p(s)$, we take the inverse Fourier transform of Eq. (\ref{1}).
\begin{eqnarray}
p(s)\propto \mathcal{F}^{-1}\left((|\omega|^{\frac{\beta+1}{2}}\text{BesselK}[\frac{\beta+1}{2},|\omega|])^2\right) \label{Eq10}
\end{eqnarray}
The inverse Fourier transform in Eq. (\ref{Eq10}) is expressed using the Gauss hypergeometric function $\mathstrut_2 F_1$ \cite{Mathematica}:
\begin{eqnarray}
p(s)\propto\mathstrut_2 F_1(\frac{3}{2}+\beta,\frac{\beta+2}{2},\frac{\beta+3}{2},-\frac{s^2}{4}) \label{functionFs}
\end{eqnarray}
Now, we are ready to express the probability density function of the level spacing for the three ensembles in a unique compact form, deduced directly from Eqs. (\ref{functionPs}) and (\ref{functionFs}):
\begin{eqnarray}
\mathcal{P}(s)= C_\beta s^\beta\mathstrut_2 F_1(\frac{3}{2}+\beta,\frac{\beta+2}{2},\frac{\beta+3}{2},-\frac{s^2}{4}) \label{result}
\end{eqnarray}
where the normalization constant $ C_\beta$ is given as follows:
\begin{eqnarray}
C_\beta=\frac{1}{2^\beta \sqrt{\pi}} \frac{\Gamma(\frac{3}{2}+\beta)}{\Gamma(\frac{1+\beta}{2})\Gamma(\frac{3+\beta}{2})}.
\end{eqnarray}
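The closed form can be checked numerically. The following SciPy sketch (an illustration of ours, not part of the derivation) encodes Eq. (\ref{result}) and its normalization constant:

```python
from math import gamma, pi, sqrt
from scipy.integrate import quad
from scipy.special import hyp2f1

def C(beta):
    """Normalization constant C_beta."""
    return gamma(1.5 + beta) / (2**beta * sqrt(pi)
                                * gamma((1 + beta) / 2) * gamma((3 + beta) / 2))

def P(s, beta):
    """Level spacing density of the 2x2 Lorentzian ensemble, Eq. (result)."""
    return C(beta) * s**beta * hyp2f1(1.5 + beta, (beta + 2) / 2,
                                      (beta + 3) / 2, -s**2 / 4)
```

One can verify that $\mathcal{P}(s)$ integrates to one for $\beta=1,2,4$, and that for $\beta=2$ it reproduces the rational closed form given below.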
The behavior of this probability density at small level spacing $(s\rightarrow 0)$ is given by:
\begin{eqnarray}
\mathcal{P}(s) \sim C_\beta s^\beta,
\end{eqnarray}
which is the signature of the level repulsion in each of the three ensembles ($\beta=1,2 \text{ or } 4$), corresponding to the different symmetry classes. This feature is comparable to
what is found for the Wigner surmise \cite{Mehta}\cite{Fritz}.
The tail of the distribution is more interesting, since it differs from all the distributions known so far in the sense that it has a geometrical fall-off:
\begin{eqnarray}
\mathcal{P}(s)\sim \frac{4}{\pi s^2},
\end{eqnarray}
this tail is $\beta\text{-independent}$, and the exponent of the fall-off law implies that the density function has no mean.
The mean level spacing, usually defined as $\bar{s}=\int s\mathcal{P}(s) ds$, does not exist since the integral diverges. This signifies that although the distribution is narrow around its center (for small $\lambda$),
the eigenvalues can typically be arbitrarily far apart.
\section{The distribution of the level spacing at arbitrary width}
It is obvious that changing the center of the distribution only shifts all the energies by the same amount, and therefore the level spacing distribution
remains unchanged. This is not the case for the width of the distribution. Indeed, at a different energy, the width of the Lorentzian changes and
becomes an arbitrary $\lambda$. We can obtain the distribution at this energy by changing the variables in the integral we started with, or simply by noting that with
this new width, all the eigenvalues are multiplied by a factor $\lambda$; therefore, the invariant distribution is obtained for the renormalized level spacing defined as follows:
$$ \mathcal{S}=\frac{ s}{\lambda}$$
Within this definition, the distribution of the normalized variable $\mathcal{S}$ is the same as in Eq. (\ref{result}).
Usually, we prefer to normalize the lengths in the problem with the mean level spacing $\bar{s}$. Here, since the mean level spacing is divergent, we cannot use it for this task.
There is another quantity which can be used as a scaling length. It is also commonly called the mean level spacing in the literature and denoted $\Delta$.
This quantity is defined as the inverse of the density of states at the center of the Hamiltonian spectrum:
$\Delta=\frac{1}{\rho(E=\mathcal{E})}$,
where the density $\rho$ is defined as $\rho(E)=\sum_{E_i=\{x,y\}} \langle \delta(E-E_i) \rangle $.
This scale $\Delta$ is still defined for the case of a $2\times 2$ matrix and is finite. Nevertheless, it no longer has the meaning of a mean level spacing
since the spectrum is not dense. The density of states $\rho$ is $\beta$-independent and reads \cite{Brouwer}:
\begin{eqnarray}
\rho(E)=\frac{2}{\pi} \frac{\lambda}{(E-\mathcal{E})^2+\lambda^2}
\end{eqnarray}
Thus, the scale $\Delta$ at the center of the distribution reads:
\begin{eqnarray}
\Delta=\frac{\pi}{2}\lambda
\end{eqnarray}
This parameter is equivalent (up to a factor $\pi \over 2$) to the width $\lambda$, which, as stated before, we choose as a simpler and natural scaling parameter. Moreover, by choosing $\lambda$, the factors in
Eq. (\ref{result}) remain unchanged.
Hereafter, the variable $s$ will stand for the level spacing renormalized by the width of the distribution $\lambda$. With this definition, it is worth keeping in mind the important features of the level spacing
distribution at small $s$ (resistance to crossing) and at large $s$ (no mean level spacing), which hold for the three ensembles ($\beta=1$, $2$ and $ 4$):
\begin{eqnarray}
\mathcal{P}(s)\sim
\begin{cases} C_\beta s^\beta, & \mbox{ if $s \ll 1$ }\\
\frac{4}{\pi s^2 },& \mbox{ if $ s \gg 1$ } \label{Dl}
\end{cases}
\end{eqnarray}
We stress that this geometrical fall-off reflects a high degree of fluctuation of the level spacing. It is completely different from the usual decay found in the Wigner surmise or the semi-Poissonian
ensembles \cite{Pichard}\cite{Bogomolny}.
\subsection{Orthogonal case $\beta=1$}
\begin{figure}
\includegraphics[scale=0.55]{Orthogonalbis.eps}
\caption{The distribution of the renormalized level spacing of a $2\times 2$ Lorentzian orthogonal matrix ($\beta=1$). The red line is the analytical result given in Eq. (\ref{Orthogonal}). The agreement between theory and numerical simulation is excellent. }
\label{Fig1}
\end{figure}
It may be interesting to simplify formula Eq. (\ref{result}) for each of the three symmetries ($\beta=1, 2$ and $4$). We start here with the case of time-reversal symmetry, $\beta=1$. It can be shown that
the expression of the level spacing distribution boils down to the following form, using the complete elliptic integrals of the first and the second kind,
denoted respectively $\mathbb{K}$ and $\mathbb{E}$ \cite{Mathematica}:
\begin{eqnarray}
\mathcal{P}(s)= \left( \frac{8}{\pi} \right)\frac{(-4+s^2)\mathbb{E}(-\frac{s^2}{4})+(4+s^2)\mathbb{K}(-\frac{s^2}{4})}{s(4+s^2)^2} \label{Orthogonal}
\end{eqnarray}
The numerical simulation, consisting of sampling $2\times2$ Hamiltonians with a Lorentzian distribution and collecting the statistics of the level spacing, shows an excellent agreement with formula Eq. (\ref{Orthogonal}), as can be seen in
Fig. \ref{Fig1}.
The expansion at small and large level spacing can be obtained straightforwardly:
\begin{eqnarray*}
\mathcal{P}(s)\sim
\begin{cases}
\frac{3 s}{8}, & \mbox{if $s \ll 1 $ } \\
\frac{4}{\pi s^2 }, & \mbox{if $ s\gg 1 $ }
\end{cases}
\end{eqnarray*}
\subsection{Unitary case $\beta=2$}
\begin{figure}
\includegraphics[scale=0.5]{UnitaryCase.eps}
\caption{The distribution of the renormalized level spacing of a $2\times 2$ Lorentzian Unitary matrix ($\beta=2$). The red line is the analytical result given in Eq. (\ref{UnitaryResult}). The agreement between theory and simulation is excellent.}
\label{Fig2}
\end{figure}
The case of the unitary symmetry ($\beta=2$) can be written in a much simpler form. Taking into account the form of the Gauss hypergeometric function for even parameter $\beta$, one
finds the following result:
\begin{eqnarray}
\mathcal{P}(s)=\frac{4}{\pi}\frac{s^2(20+s^2)}{(4+s^2)^3} \label{UnitaryResult}
\end{eqnarray}
The behavior at small and large level spacing is in agreement with formula Eq. (\ref{Dl}):
\begin{eqnarray*}
\mathcal{P}(s)\sim
\begin{cases}
\frac{5 s^2}{4\pi}, & \mbox{if $s \ll 1 $ } \\
\frac{4}{\pi s^2 }, & \mbox{if $ s\gg 1 $ }
\end{cases}
\end{eqnarray*}
This distribution law is tested numerically (see Fig. \ref{Fig2}).
Again, we notice that this distribution has no mean level spacing.\\
\subsection{Symplectic ensemble $\beta=4$}
\begin{figure}
\includegraphics[scale=0.55]{SymplecticCase.eps}
\caption{The distribution of the renormalized level spacing of a $4\times 4$ Lorentzian matrix with the symplectic symmetry ($\beta=4$). (notice the factor $2$ due to the degeneracy of the eigenvalues). The red line is the analytical result given by Eq. (\ref{DisSymplectic}). The agreement between theory and simulation is excellent.}
\label{FigSymplectic}
\end{figure}
Again, the distribution of the level spacing for the symplectic ensemble is much simpler and can be written as a rational function:
\begin{eqnarray}
\mathcal{P}(s)=\frac{4 s^4}{\pi}\frac{(336+24 s^2+ s^4)}{(4+s^2)^5} \label{DisSymplectic}
\end{eqnarray}
The expansion at small and large level spacing is straightforward:
$$
\mathcal{P}(s) \sim \begin{cases} \frac{21 s^4}{16\pi}, & \mbox{if $ s \ll 1$ } \\
\frac{4}{\pi s^2 }, & \mbox{if $s\gg 1$} \end{cases}
$$
Now, it becomes clear that the tail of the level spacing distribution is $\beta$-independent.\\
The numerical simulation showing an excellent agreement with the distribution Eq. (\ref{DisSymplectic}) is given in Fig. \ref{FigSymplectic}.
\subsection{$N \times N$ Lorentzian Hamiltonian}
\begin{figure}
\includegraphics[scale=0.5]{Edge_of_spectrum.eps}
\caption{The exponent of the geometrical fall-off $1/x^\alpha$ of an $N\times N$ Lorentzian matrix as a function of the number of levels $n\equiv N_e$ taken to define the averaged mean level spacing. The levels are
taken at the edge of the spectrum starting from the largest eigenvalue. }
\label{Fig4}
\end{figure}
\begin{figure}
\includegraphics[scale=0.5]{n8_andN40_edgeSpectrum.eps}
\caption{Distribution of the averaged level spacing at the edge of the spectrum for $N_e=8$ and $\beta=1$ (Blue curve). The size of the matrix is $N=40$. The red curve represents the theoretical
result for $2\times2$ orthogonal Lorentzian ensembles. The figure shows a very good agreement
in the description of the tail which suggests a very high degree of fluctuation and the absence of the mean for this distribution.}
\label{EdgeOf_spectrum}
\end{figure}
\begin{figure}
\includegraphics[scale=0.5]{n8_Log_Log_edgeSpectrum.eps}
\caption{$log(\mathcal{P}(s))$ as a function of $log(s)$. The linear decay for large values of $s$ is characteristic of a geometrical fall-off of the distribution $\mathcal{P}(s)$. Red curve is the linear fit of the numerical data.
It has for equation: $y=-1.95x+0.30$. The absolute value of the slope $\alpha=1.95$ represents the exponent of the geometrical fall-off.}
\label{LogLog}
\end{figure}
It is well known \cite{Brouwer} that a large matrix taken from the Lorentzian ensemble is equivalent to a Gaussian matrix sharing the same mean level spacing $\Delta$ at the bulk of the spectrum. This result
was obtained by comparing the cluster functions \cite{Brouwer} and one can understand this as a special case of the general idea of the bulk spectrum universality \cite{Widenmuller}. Therefore, it is easy to
test that the level spacing taken from the bulk spectrum is well fitted by the Wigner surmise. The situation is not the same at the edge (tail) of the spectrum: the fluctuations are higher and
the mean level spacing diverges. Fig. \ref{EdgeOf_spectrum} shows the distribution of the averaged level spacing of an Orthogonal Lorentzian matrix at the edge of the spectrum, defined as :
$$ S=\frac{1}{N_e-1}\sum^{N_e}_{i=1} |E_i-E_{i-1}|$$
where the sum is taken over some $N_e$ levels at the edge of the spectrum (the levels $E_i$ are ordered such that $E_1$ represents the largest eigenvalue). It shows clearly that the tail falls off with a geometrical
law comparable to that of the $2\times2$ matrix studied in the previous sections. This is more visible in the plot of $\log(\mathcal{P})$ as a function of $\log(s)$ shown in Fig. \ref{LogLog}. The decay law seems quite well fitted
by $\frac{1}{s^\alpha}$, with $\alpha$ a parameter depending on the number of levels considered at the edge. As shown in Fig. \ref{EdgeOf_spectrum}, the law of the averaged level spacing has no mean,
since we mostly have $\alpha <2$. It is now important to define where the edge of the spectrum roughly starts. It can be taken as the set of levels where the distribution of the spacing between two consecutive
levels ceases to be well fitted by a Wigner surmise. In Fig. \ref{Fig4}, we give the exponent $\alpha$ for different numbers $N_e$ of levels taken from the edge of the spectrum. We see that the exponent $\alpha$ approaches the value $\alpha=2$ when we
increase $N_e$. We stress that increasing $N_e$ requires larger values of $N$ in order to count only the eigenvalues at the edge. \\
This behavior at the edge of the spectrum suggests a new physics different from that of the bulk. This was already noticed in \cite{Adel}\cite{Adel2} where the statistics of thermopower and the delay time
of a chaotic cavity were found to be different in the two situations corresponding to a Fermi energy lying in the bulk of the spectrum or in its edge. Moreover, some features and distributions can be directly obtained from the
$2\times 2$ Lorentzian Hamiltonian instead of the original $N\times N$ matrix. Indeed, the distribution of the Seebeck coefficient \cite{Adel} and the Wigner's time \cite{Adel2} at the edge
of the spectrum were found directly by considering the $2\times2$ Lorentzian Hamiltonian.
\subsection{Conclusion}
We gave the exact form of the level spacing distribution of a $2\times2$ Lorentzian matrix.
Matrices of this kind appear in chaotic scattering problems as effective matrices replacing the large matrices describing chaotic cavities.
The tail of the distribution decays slowly, which leads to the absence of a mean level spacing.
\section*{Acknowledgments}
The author is grateful to J. L. Pichard and K. Muttalib for introducing him to this subject and to RMT in general. He would like to thank G. Fleury for valuable discussions and remarks.
\\
The author acknowledges partial support of the R\'egion Basse Normandie.
\section{Introduction}
A quandle is a set $X$ together with a binary operation
$*:X\times X\to X$ satisfying certain conditions (see the definition in example \ref{exrack} below). It generalizes the operation
of conjugation on a group, and it is an algebraic structure that behaves well with respect to
Reidemeister moves, so it is very useful for defining knot/link invariants. Knot theorists
have defined a cohomology theory for quandles (see \cite{CJKS}
and \cite{tCES}) in such a way that 2-cocycles give rise to knot invariants by means of
the so-called state-sum procedure.
Quandles give rise to solutions
of the Yang-Baxter equation by setting $\sigma(x,y):=(y,x*y)$, and biquandles generalize quandles from this point of view. For biquandles there is also a cohomology theory and a state-sum procedure for producing knot/link invariants
(see \cite{CES}).
In this work, for a set theoretical solution of the Yang-Baxter equation $(X,\sigma)$, we
define a d.g. algebra $B=B(X,\sigma)$, containing the semigroup algebra
$A=k\{X\}/\langle xy=zt : \sigma(x,y)=(z,t)\rangle$,
such that $k\otimes_AB\otimes_Ak$ and $\mathrm{Hom}_{A-A}(B,k)$ are respectively
the standard homology and cohomology complexes attached to general set theoretical
solutions of the Yang-Baxter equation.
We prove that this d.g. algebra has a natural structure of d.g. {\em bialgebra}
(Theorem \ref{teobialg}). Also, depending on properties of the solution $(X,\sigma)$
(square free, quandle type, biquandle, involutive,...) this d.g. bialgebra $B$ has
natural (d.g. bialgebra) quotients, giving rise to the standard
sub-complexes computing quandle cohomology (as sub-complex of rack homology),
biquandle cohomology, etc.
As a first consequence of our construction, we give a very simple and purely algebraic proof
of the existence of a cup product in cohomology. This was known for rack cohomology
(see \cite{Cl}), where the proof was based on topological methods, but it was unknown for biquandles
or general solutions of the Yang-Baxter equation.
A second consequence is the existence of a comparison map between Yang-Baxter (co)homology
and Hochschild (co)homology of the semigroup algebra $A$. Looking carefully at this comparison map,
we prove that it factors through a complex of ``size'' $A\otimes \mathfrak{B}\otimes A$, where
$\mathfrak{B}$ is the Nichols algebra associated to the solution $(X,-\sigma)$.
This result leads to new questions, for instance when $(X,\sigma)$ is
involutive (that is $\sigma^2=\mathrm{Id}$) and the characteristic
is zero we show that this complex is acyclic (Proposition \ref{propinvo}); we wonder whether
this is true in other characteristics, and for not necessarily involutive solutions.
{\bf Acknowledgements:}
The first author wishes to thank Dominique Manchon for fruitful discussions during a visit to
Laboratoire de math\'ematiques de l'Universit\'e Blaise Pascal, where a preliminary version of
the bialgebra $B$ for racks came up. He also wants to thank Dennis Sullivan for a very pleasant
stay in Stony Brook, where the contents of this work were discussed in detail, in particular
the role of Proposition \ref{fnormal} in the whole construction.
\subsection{Basic definitions}
A set theoretical solution of the Yang-Baxter equation (YBeq) is a pair $(X, \sigma)$ where $\sigma: X\times X\rightarrow X\times X$
is a bijection satisfying
\[
(\mathrm{Id}\times\sigma)(\sigma\times \mathrm{Id})(\mathrm{Id}\times\sigma)=(\sigma\times \mathrm{Id})(\mathrm{Id}\times\sigma)(\sigma\times \mathrm{Id}):X\times X\times X\rightarrow X\times X\times X
\]
If $X=V$ is a $k$-vector space and $\sigma$ is a linear bijective map satisfying YBeq
then it is called a braiding on $V$.
\begin{ex}\label{exrack} A set $X$ with a binary operation $\triangleleft:X\times X\rightarrow X$ is called a rack if
\begin{itemize}
\item $-\triangleleft x:X\rightarrow X$ is a bijection $\forall x\in X$ and
\item $(x\triangleleft y)\triangleleft z=(x\triangleleft z)\triangleleft (y\triangleleft z)$ $\forall x,y,z \in X$.
\end{itemize}
$x\triangleleft y $ is usually denoted by $x^y$.
If $X$ also verifies that $x\triangleleft x=x$ then $X$ is called a {\em quandle}.
An important example of a rack is $X=G$ a group, with $x\triangleleft y=y^{-1}xy$.
If $(X,\triangleleft)$ is a rack, then \[
\sigma(x,y)=(y, x\triangleleft y)
\]
is a set theoretical solution of the YBeq.
\end{ex}
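Example \ref{exrack} can be checked mechanically. The following Python sketch (ours, not part of the paper) verifies that the conjugation rack on the symmetric group $S_3$ yields a set theoretical solution of the YBeq; the helper names (\texttt{tri}, \texttt{sigma}, \texttt{yb\_holds}) are our own.

```python
from itertools import permutations, product

# Conjugation rack on the symmetric group S3: x ◁ y = y⁻¹ x y.
# Elements of S3 are represented as tuples (images of 0, 1, 2).
def compose(p, q):          # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def tri(x, y):              # x ◁ y = y⁻¹ x y
    return compose(inverse(y), compose(x, y))

def sigma(x, y):            # σ(x, y) = (y, x ◁ y)
    return (y, tri(x, y))

def yb_holds(sigma, X):
    """Check (Id×σ)(σ×Id)(Id×σ) = (σ×Id)(Id×σ)(σ×Id) on X³."""
    def s12(t): a, b = sigma(t[0], t[1]); return (a, b, t[2])
    def s23(t): a, b = sigma(t[1], t[2]); return (t[0], a, b)
    return all(s23(s12(s23(t))) == s12(s23(s12(t)))
               for t in product(X, repeat=3))

S3 = list(permutations(range(3)))
print(yb_holds(sigma, S3))   # True: conjugation racks solve the YBeq
```

The same checker confirms that the flip $\sigma(x,y)=(y,x)$ is also a solution, matching the examples of the text.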
Let $M=M_X$ be the monoid freely generated by $X$ with relations \[
xy=zt
\]
$\forall x,y,z,t$ such that $\sigma(x,y)=(z,t)$. Denote $G_X$ the group with the same generators and relations.
For example, when
$\sigma= \text{flip}$ then $M=\mathbb{N}_0^{(X)}$ and $G_X=\mathbb{Z}^{(X)}$. If $ \sigma=\mathrm{Id}$ then $M$ is the free (non abelian)
monoid in $X$. If $\sigma$ comes from a rack $(X,\triangleleft)$ then $M$ is the monoid with relation
$xy=y(x\triangleleft y)$ and $G_X$ is the group with relations $x\triangleleft y=y^{-1}xy$.
\section{A d.g. bialgebra associated to $(X,\sigma)$}
Let $k$ be a commutative ring with 1.
Fix $X$ a set, and $\sigma:X\times X\to X\times X$ a solution of the YBeq.
Denote $A_\sigma(X)$, or simply $A$ if $X$ and $\sigma$ are understood,
the quotient of the free $k$ algebra on generators $X$
modulo the ideal generated by elements of the form $xy-zt$ whenever $\sigma(x,y)=(z,t)$:
\[
A:=k\langle X\rangle/\langle xy-zt : x,y\in X,\ (z,t)=\sigma(x,y)\rangle=k[M]
\]
It can be easily seen that
$A$ is a $k$-bialgebra declaring $x$ to be grouplike for any $x\in X$,
since $A$ agrees with the semigroup-algebra on $M$ (the monoid
freely generated by $X$ with relations $xy\sim zt$).
If one considers $G_X$, the group freely generated by
$X$ with relations $xy=zt$, then $k[G_X]$ is the (non commutative) localization of $A$, where one has inverted the
elements of $X$.
An example of $A$-bimodule that will be used later, which is actually a $k[G_X]$-module, is
$k$ with $A$-action determined on generators by
\[
x\lambda y=\lambda, \ \forall x,y\in X,\ \lambda\in k
\]
We define $B(X,\sigma)$ (also denoted by $B$) the algebra freely generated
by three copies of $X$, denoted $x$, $e_x$ and $x'$,
with relations as follows:
whenever $\sigma(x,y)=(z,t)$ we have
\begin{itemize}
\item $ xy\sim zt$ , $xy'\sim z't$, $x'y'\sim z't'$
\item $ xe_{y}\sim e_zt$, $ e_xy'\sim z'e_{t}$
\end{itemize}
Since the relations are homogeneous, $B$ is a graded algebra declaring
\[
|x|=|x'|=0,\ \ |e_x|=1
\]
\begin{teo}\label{teobialg}
The algebra $B$ admits the structure of a differential graded bialgebra, with $d$ the unique superderivation satisfying
\[
d(x)=d(x')=0,\ \
d(e_x)=x-x'
\]
and comultiplication determined by
\[
\Delta(x)=x\otimes x,\
\Delta(x')=x'\otimes x',\
\Delta(e_x)=x'\otimes e_x+e_x\otimes x
\]
\end{teo}
By differential graded bialgebra we mean that the differential is both a derivation with respect to multiplication, and
coderivation with respect to comultiplication.
\begin{proof}
In order to see that $d$ is well defined as a superderivation, one must check that the relations
are compatible with $d$. The first relations
are easy since
\[
d(xy-zt)=
d(x)y+xd(y)-d(z)t-zd(t)=0+0-0-0=0
\]
and similarly for the others
(this implies that $d$ is $A$-linear and $A'$-linear). For the rest of the relations:
\[
d(xe_{y}-e_zt)=
xd(e_y)-d(e_z)t
=
x(y-y')-(z-z')t
\]
\[
=xy-zt-(xy'-z't)=0
\]
\[
d(e_xy'-z'e_{t})
=(x-x')y'-z'(t-t')
=xy'-z't -(x'y'-z't')=0
\]
It is clear now that $d^2=0$ since $d^2$ vanishes on generators.
In order to see that $\Delta$ is well defined, we compute
\[
\Delta(xe_y-e_zt)
=
(x\otimes x)( y'\otimes e_y+e_y\otimes y)
-(z'\otimes e_z+e_z\otimes z)(t\otimes t)
\]
\[=
xy'\otimes xe_y+xe_y\otimes xy
-z't\otimes e_zt-e_zt\otimes zt
\]
and using the relations we get
\[=
xy'\otimes xe_y+xe_y\otimes xy
-xy'\otimes xe_y-xe_y\otimes xy=0
\]
similarly
\[
\Delta(x'e_y-e_zt')
=
(x'\otimes x')( y'\otimes e_y+e_y\otimes y)
-(z'\otimes e_z+e_z\otimes z)(t'\otimes t')
\]
\[
=
x'y'\otimes x'e_y+x'e_y\otimes x'y
-z't'\otimes e_zt'-e_zt'\otimes zt'
\]
\[=
x'y'\otimes x'e_y+x'e_y\otimes x'y
-x'y'\otimes x'e_y-x'e_y\otimes x'y=0
\]
This proves that $B$ is a bialgebra, and $d$ is (by construction) a derivation.
Let us see that it is also a coderivation:
\[
(d\otimes 1+1\otimes d)(\Delta(x))=
(d\otimes 1+1\otimes d)(x\otimes x)=0=\Delta(0)=\Delta(dx)
\]
the computation for $x'$ is the same. For $e_x$:
\[
(d\otimes 1+1\otimes d)(\Delta(e_x))=
(d\otimes 1+1\otimes d)(x'\otimes e_x+e_x\otimes x)
\]
\[=
x'\otimes (x-x')+(x-x')\otimes x
=
x'\otimes x-x'\otimes x'+x\otimes x
-x'\otimes x
\]
\[=
-x'\otimes x'+x\otimes x
=\Delta(x-x')=\Delta(de_x)
\]
\end{proof}
\begin{rem}
$\Delta$ is coassociative.
\end{rem}
For a particular element of the form
$b=e_{x_1}\dots e_{x_n}$, the formula for $d(b)$ can be computed as follows:
\[
d(e_{x_1}\dots e_{x_n})=\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\dots e_{x_{i-1}}d(e_{x_i}) e_{x_{i+1}}\dots e_{x_n}
\]
\[
=\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\dots e_{x_{i-1}}(x_i-x'_i)e_{x_{i+1}}\dots e_{x_n}
\]
\[=
\overbrace{\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\dots e_{x_{i-1}}x_ie_{x_{i+1}}\dots e_{x_n}}^{I}
-\overbrace{\sum_{i=1}^{n} (-1)^{i+1}e_{x_1}\dots e_{x_{i-1}}x'_ie_{x_{i+1}}\dots e_{x_n}}^{II}
\]
If one wants to write it in a normal form (say, every $x$ on the right, every $x'$ on the left,
and the $e_x$'s in the middle), then one should use the relations in $B$: this might be a very
complicated formula, depending on the braiding. We give examples in some particular cases.
Let us denote $\sigma(x,y)=(\sigma^1\!(x,y), \sigma^2(x,y))$.
\begin{comment}
Using the relations in $B$ one has
\[
I=\sum^n_{i=1}(-1)^{i+1}e_{x_1}\dots e_{x_{i-1}}e_{y_{i+1}^1}\dots e_{y_n^1}y_{n,i}^2
\]
where
\[
\begin{array}{rcl}
y_{i+1,i}&=&(\sigma^1\!(x_i, x_{i+1}), \sigma^2(x_i, x_{i+1}))\\
y_{i+2,i}&=& (\sigma^1\!(y_{i+1,i}^2, x_{i+2}), \sigma^2(y_{i+1,i}^2, x_{i+2}))\\
y_{i+3,i}&=&(\sigma^1\!(y_{i+2,i}^2,x_{i+3}),\sigma^2(y_{i+2,i}^2,x_{i+3}))\\
\vdots&&\vdots\\
y_{n,i}&=&(\sigma^1\!(y_{n-1,i}^2,x_n),\sigma^2(y_{n-1,i}^2,x_n))
\end{array}
\]
and similarly
\[
II=\sum^n_{i=1}(-1)^{i+1}(z_{1,i}^1)'e_{z_{1,i}^2}\dots e_{z_{i-2,i}^2}e_{z_{i-1,i}^2}e_{x_i+1}\dots e_{x_n}
\]
where
\[\begin{array}{rcl}
z_{i-1,i}&=&(\sigma^1\!(x_{i-1},x_i),\sigma^2(x_{i-1},x_i))
\\
z_{i-2,i}&=&(\sigma^1\!(x_{i-2},z_{i-1,i}^1),\sigma^2(x_{i-2},z_{i-1,i}^1))
\\
\vdots&&\vdots
\\
z_{1,i}&=&(\sigma^1\!(x_1,z_{2,i}^1),\sigma^2(x_1,z_{2,i}^1))
\end{array}\]
\[
\partial f(x_1,\dots,x_n)=f(d(e_{x_1}\dots e_{x_n}))=\]
\[
\sum^n_{i=1}(-1)^{i+1}\left(f(x_1,\dots, x_{i-1},y_{i+1,i}^1,\dots,
y_{n,i}^1)y_{n,i}^2-(z_{1,i}^1)'f(z_{1,i}^2,\dots,z_{i-1,i}^2,x_{i+1},\dots, x_n)\right)
\]
\end{comment}
\begin{ex} In low degrees we have
\begin{itemize}
\item $d(e_x)=x-x'$
\item $d(e_xe_y)=(e_zt-e_xy)-(x'e_y-z'e_t)$, where as usual $\sigma(x,y)=(z,t)$.
\item $d(e_{x_1}e_{x_2}e_{x_3})=A_I-A_{II}$
where
$A_I=e_{\sigma^1\!(x_1,x_2)}e_{\sigma^1\!(\sigma^2(x_1,x_2),x_3)}\sigma^2(\sigma^2(x_1,x_2),x_3)-e_{x_1}e_{\sigma^1\!(x_2,x_3)}
\sigma^2(x_2,x_3)+e_{x_1}e_{x_2}x_3$
$A_{II}= x_1'e_{x_2}e_{x_3}-\sigma^1\!(x_1,x_2)'e_{\sigma^2(x_1,x_2)}e_{x_3}+
\sigma^1\!(x_1,\sigma^1\!(x_2,x_3))'e_{\sigma^2(x_1,\sigma^1\!(x_2,x_3))}e_{\sigma^2(x_2,x_3)}$
In particular, if $f:B\to k$ is an $A$-$A'$ linear map, then
\[
f(d(e_{x_1}e_{x_2}e_{x_3}))=
f(e_{\sigma^1\!(x_1,x_2)}e_{\sigma^1\!(\sigma^2(x_1,x_2),x_3)})
-f(e_{x_1}e_{\sigma^1\!(x_2,x_3)})+f(e_{x_1}e_{x_2})
\]\[-f(e_{x_2}e_{x_3})+f(e_{\sigma^2(x_1,x_2)}e_{x_3})-
f(e_{\sigma^2(x_1,\sigma^1\!(x_2,x_3))}e_{\sigma^2(x_2,x_3)})
\]
Erasing the $e$'s we notice the relation with the cohomological complex given in \cite{CES},
see Theorem \ref{teocomplejo} below.
\end{itemize}
If $X$ is a rack and $\sigma$ the braiding defined by $\sigma(x,y)=(y,x\triangleleft y)=(y,x^y)$, then:
\begin{itemize}
\item $d(e_x)=x-x'$
\item $d(e_xe_y)=
( e_{y}x^{y}
- e_{x}y)
-(x'e_{y}-y'e_{x^y})$
\item $d(e_xe_ye_z)=
e_xe_yz
-e_xe_zy^z
+e_ye_zx^{yz}
-x'e_ye_z
+y'e_{x^y}e_z
-z'e_{x^z}e_{y^z}$.
\item In general, expressions I and II are
\[
I=\sum_{i=1}^{n} (-1)^{i+1} e_{x_1}\dots e_{x_{i-1}}e_{x_{i+1}}\dots e_{x_n}x_{i}^{x_{i+1}\dots x_{n}}
\]
\[
II=\sum_{i=1}^{n} (-1)^{i+1}x'_ie_{x_1^{x_i}}\dots e_{x_{i-1}^{x_i}}e_{x_{i+1}}\dots e_{x_n}
\]
then
\[
\partial f(x_1,\dots,x_n)=f(d(e_{x_1}\dots e_{x_n}))=\]
\[
\sum_{i=1}^{n} (-1)^{i+1} \left(f(x_1,\dots, x_{i-1},x_{i+1},\dots, x_n)x_{i}^{x_{i+1}\dots x_{n}}-x'_if({x_1}^{x_i},\dots
, x_{i-1}^{x_i},x_{i+1},\dots, x_n)\right)
\]
Let us consider $k\otimes_{k[M']} B\otimes_{k[M]} k$;
then $d$ represents the canonical differential of rack homology, and
$\partial f (e_{x_1}\dots e_{x_n})=f(d(e_{x_1}\dots e_{x_n}))$ gives the traditional rack cohomology structure.
In particular, taking trivial coefficients:
\[
\partial f(x_1,\dots,x_n)=f(d(e_{x_1}\dots e_{x_n}))=\]
\[
\sum_{i=1}^{n} (-1)^{i+1} \left(f(x_1,\dots, x_{i-1},x_{i+1},\dots, x_n)-f({x_1}^{x_i},\dots, x_{i-1}^{x_i},x_{i+1}\dots, x_n)\right)
\]
\end{itemize}
\end{ex}
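The last formula is easy to test numerically. As a sanity check (ours, not part of the paper), the sketch below implements the rack differential with trivial coefficients for the dihedral rack on $\mathbb{Z}/3$ (assumed as the example rack; all function names are ours) and verifies that it squares to zero:

```python
from itertools import product
import random

X = [0, 1, 2]
def tri(x, y):
    return (2 * y - x) % 3                   # dihedral rack on Z/3: x ◁ y = 2y − x

def partial(f):
    """Rack-cohomology differential with trivial coefficients."""
    def df(xs):
        total = 0
        for i in range(len(xs)):             # 0-based i plays the role of i+1 above
            rest = xs[:i] + xs[i + 1:]                          # drop the i-th entry
            acted = tuple(tri(a, xs[i]) for a in xs[:i]) + xs[i + 1:]  # act on the prefix
            total += (-1) ** i * (f(rest) - f(acted))
        return total
    return df

random.seed(0)
vals = {(x,): random.randint(-5, 5) for x in X}  # an arbitrary 1-cochain
f1 = lambda xs: vals[xs]
d1 = partial(f1)          # C^1 -> C^2
d2 = partial(d1)          # C^2 -> C^3
print(all(d2(t) == 0 for t in product(X, repeat=3)))   # True: ∂∘∂ = 0
```

The vanishing of $\partial\circ\partial$ on this example reflects the rack axiom $(x\triangleleft y)\triangleleft z=(x\triangleleft z)\triangleleft(y\triangleleft z)$.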
\begin{teo}\label{teocomplejo}
Taking in $k$ the trivial $A'$-$A$-bimodule, the complexes associated to set theoretical Yang-Baxter solutions
defined in \cite{CES} can be recovered as
\[
(C_\bullet(X,\sigma), \partial)\simeq (k\otimes_{A'} B_\bullet \otimes_{A} k, \partial=id_k\otimes_{A'} d\otimes_{A}id_k)
\]
\[
(C^\bullet(X,\sigma), \partial^*)\simeq (\mathrm{Hom}_{A'-A}(B, k), \partial^*=d^*)
\]
\end{teo}
In the proof of the theorem we will first assume Proposition \ref{fnormal},
which says that one has a left $A'$-linear and right $A$-linear
isomorphism:
\[B\cong A'\otimes TE\otimes A\]
where $A'=TX'/(x'y'=z't': \sigma(x,y)=(z,t))$ and $A=TX/(xy=zt: \sigma(x,y)=(z,t))$.
We will prove
Proposition \ref{fnormal} later.
\begin{proof}
In this setting every expression in $x,x',e_x$, using the relations defining $B$, can be written as
$x'_{i_1}\cdots x'_{i_n} e_{x_1}\cdots e_{x_k} x_{j_1}\cdots x_{j_l}$; tensoring leaves the expression
$$1\otimes e_{x_1} \cdots e_{x_k}\otimes 1$$
This shows that $T=k\otimes_{k[M']} B\otimes_{k[M]} k\simeq T\{e_x\}_{x\in X}$, where $\simeq$ means isomorphism of $k$-modules.
This also induces isomorphisms of complexes
\[
(C_\bullet(X, \sigma), \partial)\simeq (k\otimes_{A'} B_\bullet \otimes_{A} k, \partial= id_k\otimes_{A'} d\otimes_{A}id_k)
\]
\[
(C^\bullet(X, \sigma), \partial^*)\simeq (\mathrm{Hom}_{A'-A}(B, k), d^*)
\]
\end{proof}
Now we will prove Proposition \ref{fnormal}:
Call
$Y=\langle x,x',e_x\rangle_{x\in X}$ the free monoid with unit $1$ on the letters $x,x',e_x$ ($x\in X$), and $k\langle Y\rangle$ the $k$-algebra associated to $Y$.
Let us define
$w_1=xy'$, $w_2=xe_y$ and $w_3=e_xy'$.
Let $S=\{r_1,r_2,r_3\}$ be the reduction system defined as follows: each $r_i:k\langle Y\rangle\rightarrow k\langle Y\rangle$ is a family of
$k$-module endomorphisms
fixing all monomials except that
$r_1(xy')=z't$,\ \ $r_2(xe_y)=e_zt$ and
$r_3(e_xy')=z'e_t$, where as usual $\sigma(x,y)=(z,t)$.
Note that $S$ has more than three elements: each $r_i$ is a family of reductions.
\begin{defi}
A reduction $r_i$ {\em acts trivially} on an element $a$ if $w_i$ does not appear in $a$, i.e.\ every monomial $Aw_iB$ appears in $a$ with coefficient $0$.
\end{defi}
Following \cite{B}, $a\in k\langle Y\rangle$ is called {\em irreducible} if no monomial $Aw_iB$ appears in $a$ for $i\in\{1,2,3\}$. Call
$k_{irr}\langle Y\rangle$ the $k$-submodule of irreducible elements of $k\langle Y\rangle$.
A finite sequence of reductions is called {\em final} in $a$ if $r_{i_n}\circ \dots \circ r_{i_1}(a)\in k_{irr}\langle Y\rangle$.
An element $a\in k\langle Y\rangle$ is called {\em reduction-finite} if, for every sequence of reductions, $r_{i_n}$ acts trivially on
$r_{i_{n-1}}\circ \dots \circ r_{i_1}(a)$ for sufficiently large $n$.
If $a$ is reduction-finite, then any maximal sequence of reductions, such that each $r_{i_j}$
acts nontrivially on $r_{i_{j-1}}\circ\dots\circ r_{i_1}(a)$, will be finite, and hence a final
sequence. It follows that the reduction-finite elements form
a $k$-submodule of $k\langle Y\rangle$.
An element $a\in k\langle Y\rangle$ is called {\em reduction-unique} if it is reduction-finite and its image under every final
sequence of reductions is the same.
This common value will be denoted $r_s(a)$.
\begin{defi}
Given a monomial $a\in k\langle Y \rangle$ we define the disorder degree of
$a$ by $\hbox{disdeg}(a)=\sum_{i=1}^{n_x}rp_i+\sum_{i=1}^{n_{x'}}lp_i$, where
$rp_i$ is the position of the $i$-th letter ``$x$'' counting from right to left, and $lp_i$ is the position of the $i$-th letter ``$x'$''
counting from left to right.
If $a=\sum_{i=1}^{n} k_i a_i$ where the $a_i$ are monomials in the letters of $X,X', e_X$ and $k_i\in k\setminus\{0\} $, we set
\[\hbox{disdeg}(a):=\sum_{i=1}^{n}\hbox{disdeg}(a_i)\]
\end{defi}
\begin{ex}\begin{itemize}
\item $\hbox{disdeg}(x_1e_{y_1}x_2z'_1x_3z'_2)=(2+4+6)+(4+6)=22$
\item $\hbox{disdeg}(xe_yz')=3+3=6$ and $\hbox{disdeg}(x'e_yz)=1+1=2$
\item $\hbox{disdeg}(\prod_{i=1}^{n}x'_i\prod_{i=1}^{m}e_{y_i}\prod^{k}_{i=1}z_i)=\frac{n(n+1)}{2}+\frac{k(k+1)}{2}$
\end{itemize}
\end{ex}
The reduction $r_1$ lowers the disorder degree by two, and the reductions $r_2$ and $r_3$ lower it by one.
\begin{rem}\begin{itemize}
\item $k_{irr}\langle Y\rangle=\{\sum A'e_B C: A' \ \hbox{word in}\ X',\ e_B\ \hbox{word in the}\ e_x,\ C \ \hbox{word in}\ X\}$.
\item $k_{irr}\langle Y\rangle\simeq TX'\otimes TE\otimes TX$
\end{itemize}
\end{rem}
Take for example $a=xe_yz'$; there are two possible final sequences of reductions: $r_3\circ r_1\circ r_2$ and $r_2\circ r_1\circ r_3$.
The results are $A'e_B C$ and $D'e_E F$ respectively, where
\[
A=\sigma^{(1)}\!\left(\sigma^{(1)}(x,y),\sigma^{(1)}(\sigma^{(2)}(x,y),z)\right),\quad
B=\sigma^{(2)}\!\left(\sigma^{(1)}(x,y),\sigma^{(1)}(\sigma^{(2)}(x,y),z)\right),
\]
\[
C=\sigma^{(2)}\!\left(\sigma^{(2)}(x,y),z\right),\quad
D=\sigma^{(1)}\!\left(x,\sigma^{(1)}(y,z)\right),
\]
\[
E=\sigma^{(1)}\!\left(\sigma^{(2)}(x,\sigma^{(1)}(y,z)),\sigma^{(2)}(y,z)\right),\quad
F=\sigma^{(2)}\!\left(\sigma^{(2)}(x,\sigma^{(1)}(y,z)),\sigma^{(2)}(y,z)\right)
\]
We have $A=D$, $B=E$ and $C=F$ as $\sigma$ is a solution of YBeq, hence \\
$r_3\circ r_1\circ r_2(xe_yz')=r_2\circ r_1\circ r_3(xe_yz')$.
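This resolution of the ambiguity can also be tested mechanically. The sketch below (ours; it assumes the dihedral rack on $\mathbb{Z}/3$ as the YB solution and uses our own encoding of the three kinds of letters) applies the reductions $r_1,r_2,r_3$ with two different strategies, leftmost-first (which realizes $r_3\circ r_1\circ r_2$ on $xe_yz'$) and rightmost-first (which realizes $r_2\circ r_1\circ r_3$), and checks that every word $xe_yz'$ reaches the same normal form:

```python
from itertools import product

X = range(3)
def tri(x, y): return (2 * y - x) % 3       # dihedral rack on Z/3
def sigma(x, y): return (y, tri(x, y))      # induced YB solution

# Letters: ('R', a) encodes a, ('L', a) encodes a', ('E', a) encodes e_a.
RULES = {('R', 'L'): ('L', 'R'),            # r1: x y'  -> z' t
         ('R', 'E'): ('E', 'R'),            # r2: x e_y -> e_z t
         ('E', 'L'): ('L', 'E')}            # r3: e_x y' -> z' e_t

def step(word, positions):
    """Apply one reduction at the first applicable position in `positions`."""
    for i in positions:
        (k1, a), (k2, b) = word[i], word[i + 1]
        if (k1, k2) in RULES:
            z, t = sigma(a, b)
            m1, m2 = RULES[(k1, k2)]
            return word[:i] + ((m1, z), (m2, t)) + word[i + 2:]
    return None                              # word is irreducible

def normal_form(word, left_first=True):
    while True:
        pos = range(len(word) - 1)
        nxt = step(word, pos if left_first else reversed(pos))
        if nxt is None:
            return word
        word = nxt

# The YBeq guarantees both strategies yield the same normal form A' e_B C.
ok = all(normal_form((('R', x), ('E', y), ('L', z)), True)
         == normal_form((('R', x), ('E', y), ('L', z)), False)
         for x, y, z in product(X, repeat=3))
print(ok)
```

Termination is guaranteed because every reduction lowers the disorder degree.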
A monomial $a$ in $k\langle Y\rangle$ is said to have an {\em overlap ambiguity} of $S$ if $a=ABCDE$
such that $w_i=BC$ and $w_j=CD$. We shall say the
overlap ambiguity is {\em resolvable} if there exist compositions of
reductions, $r,r'$ such that $r(Ar_i(BC)DE)=r'(ABr_j(CD)E)$.
Notice that it is enough to take $r=r_s$ and $r'=r_s$.
\begin{rem}
In our case, there is only one type of overlap ambiguity, and it is the one we resolved previously.
\end{rem}
\begin{proof}
There is no rule with $x'$ on the left nor rule with $x$ on the right, so there will be no overlap ambiguity including the family $r_1$.
There is only one type of ambiguity involving reductions $r_2$ and $r_3$.
\end{proof}
Notice that $r_s$ is a projector and $I=\langle xy'-z't,xe_y-e_zt,e_xy'-z'e_t\rangle$ is trivially included in the kernel.
We claim that they are actually equal:
\begin{proof}
As $r_s$ is a projector, an element $a\in \ker r_s$ must be of the form $a=b-r_s(b)$ where $b\in k\langle Y\rangle$. It is enough to prove it for monomials $b$.
\begin{itemize}
\item if $a=0$ the result follows trivially.
\item if not, then take a monomial $b$ where at least one of the products $xy'$, $xe_y$ or $e_xy'$ appears. Let us suppose $b$ has a factor $xy'$
(the rest of the cases are analogous):
$b=Axy'B$ where $A$ or $B$ may be empty words. $r_1(b)=Ar_1(xy')B=Az'tB$. Now we can rewrite:
$b-r_s(b)=\underbrace{Axy'B-Az'tB}_{\in I}+Az'tB-r_s(b)$.
As $r_1$ lowers $\hbox{disdeg}$ by two, we have $\hbox{disdeg}(Az'tB-r_s(b))<\hbox{disdeg}(b-r_s(b))$, so in a finite number of steps we get $a=\sum^{N}_{k=1} i_k$
where $i_k\in I$. It follows that $a\in I$.
\end{itemize}
\end{proof}
\begin{coro} $r_s$ induces a $k$-linear isomorphism:
\[k\langle Y\rangle/\langle xy'-z't,xe_y-e_zt,e_xy'-z'e_t\rangle\rightarrow TX'\otimes TE\otimes TX \]
\end{coro}
Returning to our bialgebra, taking quotients we obtain the following proposition:
\begin{prop}\label{fnormal}
$B\simeq \left(TX'/(x'y'=z't')\right)\otimes TE\otimes \left(TX/(xy=zt)\right)$
\end{prop}
Notice that $\overline{x_1\dots x_n}=\overline{\prod \beta_m\circ \dots \circ\beta_1(x_1,\dots, x_n)}$ where $\beta_i=\sigma^{\pm 1}_{j_i}$,
analogously with $\overline{x'_1\dots x'_n}$.
This ends the proof of Theorem \ref{teocomplejo}.
\begin{ex}
\begin{itemize}
\item If the coefficients are trivial, $f\in C^1(X,k)$ and we identify
$C^1(X,k)=k^X$, then
\[
(\partial f)(x,y)=f(d(e_xe_y))=-f(x)-f(y)+f(z)+f(t)
\]
where as usual $\sigma(x,y)=(z,t)$.
(If instead of considering $\mathrm{Hom}_{A'-A}$ we consider $\mathrm{Hom}_{A-A'}$, then
$(\partial f)(x,y)=f(d(e_xe_y))=f(x)+f(y)-f(z)-f(t)$, but with $\sigma(z,t)=(x,y)$.)
\item Again with trivial coefficients, if $\Phi\in C^2(X,k)\cong k^{X^2}$, then
\[
(\partial \Phi)(x,y,z)=\Phi (d(e_xe_ye_z))=\Phi\left(\overbrace{xe_ye_z}^{I}-\overbrace{x'e_ye_z}^{II}-\overbrace{e_xye_z}^{III}+
\overbrace{e_xy'e_z}^{IV}+\overbrace{e_xe_yz}^{V}-\overbrace{e_xe_yz'}^{VI} \right) \]
Considering $\mathrm{Hom}_{A'-A}$ and using the relations defining $B$, the terms $I,III,IV$ and $VI$ change, leaving
\[
\partial \Phi(x,y,z)=\Phi (\sigma^1\!(x,y),\sigma^1\!(\sigma^2(x,y),z))-\Phi(y,z)-\Phi(x,\sigma^1\!(y,z))+\]
\[
\Phi(\sigma^2(x,y),z)+\Phi(x,y)-\Phi(\sigma^2(x,\sigma^1\!(y,z)),\sigma^2(y,z))
\]
\item If $M$ is a $k[T]$-module (notice that $T$ need not be invertible as in \cite{tCES}) then
$M$ can be viewed as an $A'$-$A$-bimodule via
\[
x' \cdot m=m
,\ \ m\cdot x=Tm
\]
The actions are compatible with the relations defining $B$:
\[
(m\cdot x)\cdot y =T^2 m \ ,\ \ (m\cdot z)\cdot t =T^2 m \]
and
\[
x'\cdot (y'\cdot m)=m \ , \ \ z'\cdot (t'\cdot m)= m
\]
Using these coefficients we get twisted cohomology as in \cite{tCES}, but for general YB solutions.
If one takes the special case of $(X,\sigma)$ being a rack, namely $\sigma(x,y)=(y,x\triangleleft y)$, then the general formula
gives
\[
\partial f(x_1,\dots,x_n)=f(d(e_{x_1}\dots e_{x_n}))=\]
\[
\sum_{i=1}^{n} (-1)^{i+1} \left(Tf(x_1,\dots, x_{i-1},x_{i+1},\dots, x_n)-f({x_1}^{x_i},\dots, x_{i-1}^{x_i},x_{i+1},\dots, x_n)\right)
\]
which agrees with the differential of the twisted cohomology defined in \cite{tCES}.
\end{itemize}
\end{ex}
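The twisted rack differential can likewise be checked numerically. The sketch below (ours, not part of the paper) assumes the dihedral rack on $\mathbb{Z}/3$ and the integer specialization $T=3$, and verifies $\partial\circ\partial=0$ starting from an arbitrary 1-cochain:

```python
from itertools import product
import random

X = [0, 1, 2]
def tri(x, y):
    return (2 * y - x) % 3                  # dihedral rack on Z/3
T = 3                                        # a sample scalar value of the variable T

def partial_T(f):
    """Twisted rack differential induced by x'·m = m, m·x = T·m."""
    def df(xs):
        total = 0
        for i in range(len(xs)):
            rest = xs[:i] + xs[i + 1:]                          # drop the i-th entry
            acted = tuple(tri(a, xs[i]) for a in xs[:i]) + xs[i + 1:]
            total += (-1) ** i * (T * f(rest) - f(acted))
        return total
    return df

random.seed(1)
vals = {(x,): random.randint(-4, 4) for x in X}
f1 = lambda xs: vals[xs]
df2 = partial_T(f1)        # C^1 -> C^2
df3 = partial_T(df2)       # C^2 -> C^3
print(all(df3(t) == 0 for t in product(X, repeat=3)))   # True: ∂∘∂ = 0
```

The coefficients of each evaluation of $f$ cancel pairwise in $T$, as in the untwisted case.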
\begin{rem}
If $c(x\otimes y)=f(x,y)\sigma^1\!(x,y)\otimes \sigma^2(x,y)$, then $c$ is a solution of YBeq if and only if $f$ is a 2-cocycle.
\end{rem}
\[
c_1\circ c_2\circ c_1(x\otimes y\otimes z)
\]
\[
=a\overbrace{\sigma^1\!\left(\sigma^1\!(x,y),\sigma^1\!(\sigma^2(x,y),z)\right)\otimes \sigma^2\!\left(\sigma^1\!(x,y),\sigma^1\!(\sigma^2(x,y),z)\right)\otimes \sigma^2\!\left(\sigma^2(x,y),z\right)}^{I}
\]
where
\[
a=f(x,y)f\left(\sigma^2(x,y),z\right)f\left(\sigma^1\!(x,y),\sigma^1\!(\sigma^2(x,y),z)\right)
\]
\[
c_2\circ c_1 \circ c_2 (x\otimes y\otimes z)
\]
\[
=b\overbrace{\sigma^1\!(x,\sigma^1\!(y,z))\otimes \sigma^1\!\left(\sigma^2(x,\sigma^1\!(y,z)),\sigma^2(y,z)\right)\otimes \sigma^2\!\left(\sigma^2(x,\sigma^1\!(y,z)),\sigma^2(y,z)\right)}^{II}
\]
where
\[b= f(y,z)f\left(x,\sigma^1\!(y,z)\right)f\left(\sigma^2(x,\sigma^1\!(y,z)),\sigma^2(y,z)\right)
\]
Writing the YBeq with this notation gives:
\begin{equation}\label{trenza-trenza}
\sigma \hbox{ is a braiding} \Leftrightarrow I=II
\end{equation}
Take $f$ a two-cocycle, then
\[
0=\partial f(x,y,z)=f(d(e_xe_ye_z))=f((x-x')e_ye_z-e_x(y-y')e_z+e_xe_y(z-z'))
\]
is equivalent to the following equality
\[
f(xe_ye_z)+f(e_xy'e_z)+f(e_xe_yz)=f(x'e_ye_z)+f(e_xye_z)+f(e_xe_yz')
\]
using the relations defining $B$ we obtain
\[
f\left(e_{\sigma^1\!(x,y)}e_{\sigma^1\!(\sigma^2(x,y),z)}\sigma^2(\sigma^2(x,y),z)\right)+f\left(\sigma^1\!(x,y)'e_{\sigma^2(x,y)}e_z\right)
+f\left(e_xe_yz\right)
\]
\[
=f\left(x'e_ye_z\right)+f\left(e_xe_{\sigma^1\!(y,z)}\sigma^2(y,z)\right)+
f\left(\sigma^1\!(x,\sigma^1\!(y,z))'e_{\sigma^2(x,\sigma^1\!(y,z))}e_{\sigma^2(y,z)}\right)
\]
If $G$ is an abelian multiplicative group and $f:X\times X\rightarrow (G, \cdotp)$ then the previous formula says
\[
f\left(e_{\sigma^1\!(x,y)}e_{\sigma^1\!(\sigma^2(x,y),z)}\sigma^2(\sigma^2(x,y),z)\right)f\left(\sigma^1\!(x,y)'e_{\sigma^2(x,y)}e_z\right)
f\left(e_xe_yz\right)
\]
\[
=f\left(x'e_ye_z\right)f\left(e_xe_{\sigma^1\!(y,z)}\sigma^2(y,z)\right)
f\left(\sigma^1\!(x,\sigma^1\!(y,z))'e_{\sigma^2(x,\sigma^1\!(y,z))}e_{\sigma^2(y,z)}\right)
\]
which is exactly the condition $a=b$.
Notice that if the action is trivial, then the equation above simplifies, giving
\begin{equation}\label{2-cocycle}
f\!\left(e_{\sigma^1\!(x,y)}e_{\sigma^1\! (\sigma^2(x,y),z)} \right)\!
f \!\left(e_{\sigma^2(x,y)}e_z\right)\!
f\!\left(e_xe_y\right)\!
=f\!\left(e_ye_z\right)\!f\!\left(e_xe_{\sigma^1\!(y,z)}\right)\!
f\!\left(e_{\sigma^2(x,\sigma^1\!(y,z))}e_{\sigma^2(y,z)}\right)
\end{equation}
which is precisely the formula on \cite{CES} for Yang-Baxter 2-cocycles
(with $R_1$ and $R_2$ instead of $\sigma^1$ and $\sigma^2$).
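In additive form (coefficients in $(\mathbb{Z},+)$ with trivial actions), condition (\ref{2-cocycle}) reads $f(\sigma^1\!(x,y),\sigma^1\!(\sigma^2(x,y),z))+f(\sigma^2(x,y),z)+f(x,y)=f(y,z)+f(x,\sigma^1\!(y,z))+f(\sigma^2(x,\sigma^1\!(y,z)),\sigma^2(y,z))$. The sketch below (our code; the dihedral-rack braiding on $\mathbb{Z}/3$ is assumed) checks that every coboundary $\partial g$, computed with the degree-one formula above, satisfies it:

```python
from itertools import product
import random

X = [0, 1, 2]
def tri(x, y): return (2 * y - x) % 3           # dihedral rack on Z/3
def s1(x, y): return y                           # σ¹ of the induced braiding
def s2(x, y): return tri(x, y)                   # σ²

def coboundary(g):
    """(∂g)(x,y) = −g(x) − g(y) + g(z) + g(t), where σ(x,y) = (z,t)."""
    return lambda x, y: -g(x) - g(y) + g(s1(x, y)) + g(s2(x, y))

def is_2cocycle(f):
    """Additive Yang-Baxter 2-cocycle condition (trivial coefficients)."""
    return all(
        f(s1(x, y), s1(s2(x, y), z)) + f(s2(x, y), z) + f(x, y)
        == f(y, z) + f(x, s1(y, z)) + f(s2(x, s1(y, z)), s2(y, z))
        for x, y, z in product(X, repeat=3))

random.seed(2)
g_vals = {x: random.randint(-9, 9) for x in X}   # an arbitrary 1-cochain g
f = coboundary(g_vals.get)
print(is_2cocycle(f))   # True: every coboundary is a 2-cocycle
```

The cancellation uses exactly the triple equality $I=II$ of \eqref{trenza-trenza}.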
\section{First application: a multiplicative structure
on cohomology}
\begin{prop} $\Delta$ induces an associative product in $\mathrm{Hom}_{A'-A}(B,k)$ (the graded Hom).
\end{prop}
\begin{proof} It is clear that $\Delta$ induces an associative product on
$\mathrm{Hom}_{k}(B,k)$ (the graded Hom), and
$\mathrm{Hom}_{A'-A}(B,k)\subset\mathrm{Hom}_k(B,k)$ is a $k$-submodule. We will show that it is in fact a subalgebra.
Consider the $A'$-$A$ diagonal structure on $B\otimes B$
(i.e. $x_1'.(b\otimes b').x_2=x_1'bx_2\otimes x_1'b'x_2$)
and denote $B\otimes^DB$ the $k$-module $B\otimes B$ considered as $A'-A$-bimodule in this diagonal way.
We claim that
$\Delta: B\rightarrow B\otimes^D B$ is a morphism of $A'-A$-modules:
\[
\Delta(x_1'yx_2)=x_1'yx_2\otimes x_1'yx_2=x_1'(y\otimes y)x_2
\]
same with $y'$, and with $e_x$:
\[
\Delta(x_1'e_yx_2)=(x_1'\otimes x_1')(y'\otimes e_y+e_y\otimes y)(x_2\otimes x_2)=x'_1\Delta(e_y)x_2
\]
Dualizing $\Delta$ one gets:
\[
\Delta^*:\mathrm{Hom}_{A'-A}(B\otimes^DB, k)\rightarrow \mathrm{Hom}_{A'-A}(B,k)
\]
Consider the natural map
$$\iota:\mathrm{Hom}_k(B,k)\otimes \mathrm{Hom}_k(B,k)\rightarrow \mathrm{Hom}_k(B\otimes B, k)$$
$$\iota(f\otimes g)(b_1\otimes b_2)=f(b_1)g(b_2)$$
and denote $\iota|$ by
\[
\iota|=\iota|_{\mathrm{Hom}_{A'-A}(B,k)\otimes \mathrm{Hom}_{A'-A}(B,k)}
\]
Let us see that
\[
Im (\iota|)\subset \mathrm{Hom}_{A'-A}(B\otimes B, k)\subset \mathrm{Hom}_k(B\otimes B, k)
\]
If $f,g: B\rightarrow k$ are two $A'$-$A$-module morphisms (recall $k$ has trivial actions, i.e. $x'\lambda=\lambda$ and $\lambda x=\lambda$),
then
\[
\iota(f\otimes g)(x'(b_1\otimes b_2))=f(x'b_1)g(x'b_2)=(x'f(b_1))(x'g(b_2))\]
\[=f(b_1)g(b_2)=x'\iota (f\otimes g)(b_1\otimes b_2)
\]
\[
\iota(f\otimes g)((b_1\otimes b_2)x)=f(b_1x)g(b_2x)=(f(b_1)x)(g(b_2)x)
\]
\[=(f(b_1)g(b_2))x=\iota(f\otimes g)(b_1\otimes b_2)x
\]
So, it is possible to compose $\iota|$ and $\Delta$, and obtain in this way an associative multiplication in $\mathrm{Hom}_{A'-A}(B,k)$.
\end{proof}
Now we will describe several natural quotients of $B$, each of which
gives rise to a subcomplex of the cohomological complex of $X$
with trivial coefficients; these are not only subcomplexes but also subalgebras,
in particular they are associative algebras.
\subsection{Square free case}
A solution $(X,\sigma)$ of the YBeq satisfying $\sigma(x,x)=(x,x)$ for all $x\in X$ is called {\em square free}.
For instance, if $X$ is a rack, then this condition is equivalent to $X$ being a quandle.
In the square free situation,
namely when $X$ is such that $\sigma (x,x)=(x, x)$ for all $x$, we add the condition $e_xe_x\sim 0$.
If $(X,\sigma)$ is a square-free solution of the YBeq, let us denote
$sf$ the two sided ideal of $B$ generated by $\{e_xe_{x}\}_{x\in X}$.
\begin{prop}
\label{propsf}
$sf$ is a differential Hopf ideal. More precisely,
\[
d(e_xe_{x})=0
\hbox{ and }
\Delta(e_xe_{x})=x'x'\otimes e_xe_{x}+
e_xe_{x}\otimes xx.\]
\end{prop}
In particular $B/sf$ is a differential graded bialgebra.
We may identify
$\mathrm{Hom}_{A'A}(B/sf,k)\subset
\mathrm{Hom}_{A'A}(B,k)$ with the set of elements $f$ such that $f(\dots,x,x,\dots)=0$. If $X$ is a quandle, this
construction leads to the quandle complex.
Moreover, $\mathrm{Hom}_{A'A}(B/sf,k)\subset
\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.
\subsection{Biquandles}
In \cite{KR}, a generalization of quandles is proposed (we recall it with different notation):
a solution $(X,\sigma)$ is called
non-degenerate, or a {\em birack}, if in addition
\begin{enumerate}
\item for any $x,z\in X$ there exists a unique $y$ such that $\sigma^1\!(x,y)=z$, (if this is the case, $\sigma^1\!$ is called {\em left invertible}),
\item for any $y,t\in X$ there exists a unique $x$ such that $\sigma^2(x,y)=t$, (if this is the case, $\sigma^2$ is called {\em right invertible}),
\end{enumerate}
A birack is called a {\em biquandle} if, given $x_0\in X$, there exists a unique $y_0\in X$ such that
$\sigma(x_0,y_0)=(x_0,y_0)$. In other words, if there exists a bijective map $s:X\to X$ such
that
\[
\{(x,y):\sigma(x,y)=(x,y)\}=
\{(x,s(x)): x\in X\}
\]
\begin{rem}
Every quandle solution is a biquandle; moreover, given a rack $(X,\triangleleft)$,
$\sigma(x,y)=(y,x\triangleleft y)$ is a biquandle if and only if $(X,\triangleleft)$ is a quandle.
\end{rem}
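The remark can be confirmed on a small example: for the braiding of the dihedral quandle on $\mathbb{Z}/3$ (an assumption of this sketch, names ours), the fixed points of $\sigma$ are exactly the diagonal, so the map $s$ is the identity:

```python
from itertools import product

X = [0, 1, 2]
def tri(x, y): return (2 * y - x) % 3        # dihedral quandle on Z/3
def sigma(x, y): return (y, tri(x, y))       # quandle braiding

# σ(x,y) = (x,y) forces y = x and x ◁ x = x, so the fixed points
# form the diagonal and s = id: (X, σ) is a biquandle.
fixed = {(x, y) for x, y in product(X, repeat=2) if sigma(x, y) == (x, y)}
print(fixed == {(x, x) for x in X})   # True
```

For a rack that is not a quandle some $x\triangleleft x\neq x$, and the pair $(x,x)$ then has no fixed partner, so the biquandle condition fails.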
If $(X,\sigma)$ is a biquandle, for all $x\in X$ we add in $B$ the relation $e_{x}e_{s(x)}\sim 0$.
Let us denote $bQ$ the two sided ideal of $B$ generated by $\{e_xe_{sx}\}_{x\in X}$.
\begin{prop}
\label{propbQ}
$bQ$ is a differential Hopf ideal. More precisely,
$d(e_xe_{sx})=0$ and
$\Delta(e_xe_{sx})=x's(x)'\otimes e_xe_{sx}+
e_xe_{sx}\otimes xs(x)$.
\end{prop}
In particular $B/bQ$ is a differential graded bialgebra.
We may identify
\[
\mathrm{Hom}_{A'A}(B/bQ,k)\cong
\{f\in \mathrm{Hom}_{A'A}(B,k) : f(\dots,x,s(x),\dots)=0\}
\subset
\mathrm{Hom}_{A'A}(B,k)\]
In \cite{CES}, the condition $f(\dots,x_0,s(x_0),\dots)=0$ is
called the {\em type 1 condition}. A consequence of the above proposition is that
$\mathrm{Hom}_{A'A}(B/bQ,k)\subset
\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.
Before proving this proposition we will review some other similar constructions.
\subsection{Identity case}
The two cases above may be generalized in the following way:
Consider $S\subseteq X\times X$ a subset of elements verifying
$\sigma(x,y)=(x,y)$ for all $(x,y)\in S$.
Define $idS$ the two sided ideal of $B$ given by $idS=\langle e_xe_y : (x,y)\in S\rangle$.
\begin{prop}
\label{propidS}
$idS$ is a differential Hopf ideal. More precisely,
$d(e_xe_{y})=0$ for all $(x,y)\in S$ and
$\Delta(e_xe_{y})=x'y'\otimes e_xe_{y}+
e_xe_{y}\otimes xy$.
\end{prop}
In particular $B/idS$ is a differential graded bialgebra.\\
If one identifies $\mathrm{Hom}_{A'A}(B/idS,k)\subset
\mathrm{Hom}_{A'A}(B,k)$ with the set of elements $f$ such that
\[ f(\dots,x,y,\dots)=0 \ \forall (x,y)\in S,\]
then $\mathrm{Hom}_{A'A}(B/idS,k)\subset
\mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.
\subsection{Flip case}
Consider the condition $e_xe_y+e_ye_x\sim 0$ for all pairs such that $\sigma(x,y)=(y,x)$. For such a pair $(x,y)$ we have
the equations $xy=yx$, $xy'=y'x$, $x'y'=y'x'$ and $xe_y=e_yx$. Note that there is no equation for $e_xe_y$.
The two sided ideal $D=\langle e_xe_y+e_ye_x:\sigma(x,y)=(y,x)\rangle$ is a differential and Hopf ideal.
Moreover, the following generalization is still valid:
\subsection{Involutive case}
Assume $\sigma^2(x,y)=(x, y)$ for all $x,y\in X$. This case is called {\em involutive} in \cite{ETS}.
Define $Invo$ the two sided ideal of $B$ given by $Invo=\langle e_xe_y+e_ze_t : x,y\in X,\ \sigma(x,y)=(z,t)\rangle$.
\begin{prop}
\label{propInvo}
$Invo$ is a differential Hopf ideal. More precisely,
$d(e_xe_{y}+e_ze_t)=0$ for all $x,y\in X$ (with $(z,t)=\sigma(x,y)$) and
if $\omega=e_xe_y+e_ze_t$ then
$\Delta(\omega)=x'y'\otimes \omega+\omega \otimes xy$.
\end{prop}
In particular $B/Invo$ is a differential graded bialgebra.
If one identifies
$\mathrm{Hom}_{A'A}(B/Invo,k)$ with a subset of
$\mathrm{Hom}_{A'A}(B,k)$, then
$\mathrm{Hom}_{A'A}(B/Invo,k)\subset \mathrm{Hom}_{A'A}(B,k)$ is not only a subcomplex, but also a subalgebra.
\begin{conj}
$B/Invo$ is acyclic in positive degrees.
\end{conj}
\begin{ex} If $\sigma=\mathrm{flip}$ and $X=\{x_1,\dots,x_n\}$ then $A=k[x_1,\dots,x_n]=SV$, the symmetric algebra on
$V=\oplus _{x\in X}kx$. In this case
$(B/Invo,d)\cong (S(V)\otimes\Lambda V\otimes S(V),d)$ gives the Koszul resolution of $S(V)$ as
$S(V)$-bimodule.
\end{ex}
\begin{ex} If $\sigma=\mathrm{Id}$, $X=\{x_1,\dots,x_n\}$
and $V=\oplus _{x\in X}kx$, then $A=TV$ the tensor algebra.
If $\frac12\in k$, then
$(B/Invo,d)\cong TV\otimes (k\oplus V)\otimes TV$ gives the Koszul resolution
of $TV$ as a $TV$-bimodule. Notice that we don't really need $\frac12\in k$:
one could replace $Invo=\langle e_xe_y+e_xe_y:(x,y)\in X\times X\rangle$ by
$idXX=\langle e_xe_y:(x,y)\in X\times X\rangle$.
\end{ex}
The conjecture above, besides these examples, is supported by the
next result:
\begin{prop}\label{propinvo}
If $\mathbb{Q}\subseteq k$, then $B/Invo$ is acyclic in positive degrees.
\end{prop}
\begin{proof}
In $B/Invo$ it can be defined $h$ as the unique (super)derivation such that:
\[
h(e_x)=0,\quad h(x)=e_x,\quad h(x')=-e_x
\]
Let us see that $h$ is well defined:
\[
h(xy-zt)=e_xy+xe_y-e_zt-ze_t=0
\]
\[
h(xy'-z't)=e_xy'-xe_y+e_zt-z'e_t=0
\]
\[
h(x'y'-z't')=-e_xy'-x'e_y+e_zt'+z'e_t=0
\]
\[
h(xe_y-e_zt)=e_xe_y+e_ze_t=0
\]
Notice that, in particular, the next equation shows
that $h$ is not well defined on $B$ itself.
\[
h(e_xy'-z'e_t)=e_xe_y+e_ze_t=0
\]
\[
h(zt'-x'y)=e_zt'-ze_t+e_xy-x'e_y=0
\]
\[
h(ze_t-e_xy)= e_ze_t+e_xe_y=0
\]
\[
h(e_zt'-x'e_y)=e_ze_t+e_xe_y=0
\]
\[
h(e_xe_y+e_ze_t)=0
\]
Since (super) commutator of (super)derivations is again a derivation,
we have that $[h, d]=hd+dh$ is also a derivation.
Computing $[h,d]$ on generators:
\[
(hd+dh)(e_x)=2e_x,\quad (hd+dh)(x)=x-x', \quad (hd+dh)(x')=x'-x
\]
or equivalently
\[
(hd+dh)(e_x)=2e_x,\quad (hd+dh)(x+x')=0, \quad (hd+dh)(x-x')=2(x-x')
\]
One can also easily see that $B/Invo$ is generated by
$e_x$ and $x_{\pm}$, where $x_\pm=x\pm x'$, and that their
relations are homogeneous. We see that $hd+dh$ is nothing but the Euler
derivation with respect to the grading defined by
\[
\deg e_x=2,\quad
\deg x_+=0,\quad
\deg x_-=2.\]
Since $\mathbb{Q}\subseteq k$, we conclude that the homology vanishes in positive degrees of the $e_x$'s
(and similarly of the $x_-$'s).
\end{proof}
Next, we generalize Propositions
\ref{propsf}, \ref{propbQ},
\ref{propidS} and \ref{propInvo}.
\subsection{Braids of order $N$}
Let $(x_0,y_0)\in X\times X$ such that
$\sigma^N(x_0,y_0)=(x_0,y_0)$ for some $N\geq 1$.
If $N=1$ we have the ``identity case'' and all its subcases; if $N=2$ we have the
``involutive case''.
Denote
\[
(x_i,y_i):=\sigma^i(x_0,y_0),\qquad 1\leq i \leq N-1
\]
Notice that the following relations hold in $B$:
\begin{itemize}
\item[$\star$] $x_{N-1}y_{N-1}\sim x_0y_0$, \ $x_{N-1}y'_{N-1}\sim x'_0y_0$, \ $x'_{N-1}y'_{N-1}=x'_0y'_0$
\item[$\star$] $ x_{N-1}e_{y_{N-1}}\sim e_{x_0}y_0$, \ $ e_{x_{N-1}}y'_{N-1}\sim x'_0e_{y_0}$
\end{itemize}
and for $1\leq i \leq N-1$:
\begin{itemize}
\item[$\star$] $x_{i-1}y_{i-1}\sim x_iy_i$, \ $x_{i-1}y'_{i-1}\sim x'_iy_i$, \ $x'_{i-1}y'_{i-1}=x'_iy'_i$
\item[$\star$] $ x_{i-1}e_{y_{i-1}}\sim e_{x_i}y_i$, \ $ e_{x_{i-1}}y'_{i-1}\sim x'_ie_{y_i}$
\end{itemize}
Take $\omega=\sum_{i=0}^{N-1}e_{x_i}e_{y_i}$; then we claim that
\[
d\omega=0\]
and
\[\Delta\omega
=x'_0y'_0\otimes\omega+\omega\otimes x_0y_0\]
For that, we compute
\[
d(\omega)=\sum_{i=0}^{N-1}(x_i-x'_i)e_{y_i}-e_{x_i}(y_i-y'_i)=
\]
\[
\sum_{i=0}^{N-1}(x_ie_{y_i}-e_{x_i}y_i)- \sum_{i=0}^{N-1} (x'_ie_{y_i}-e_{x_i}y'_i)=0
\]
For the comultiplication,
we recall that
\[\Delta(ab)=\Delta(a)\Delta(b)\]
where the product on the right hand side is defined using the Koszul sign rule:
\[
(a_1\otimes a_2)(b_1\otimes b_2)=(-1)^{|a_2||b_1|}a_1b_1\otimes a_2b_2
\]
So, in this case we have
\[
\Delta(\omega)=\sum_{i=0}^{N-1}\Delta (e_{x_i}e_{y_i})=
\]
\[
\sum_{i=0}^{N-1}(x'_iy'_i\otimes e_{x_i}e_{y_i}-x'_ie_{y_i}\otimes e_{x_i}y_i+ e_{x_i}y'_i\otimes x_ie_{y_i}+e_{x_i}e_{y_i}\otimes x_iy_i)
\]
the middle terms cancel telescopically, giving
\[
=\sum_{i=0}^{N-1}(x'_iy'_i\otimes e_{x_i}e_{y_i}+e_{x_i}e_{y_i}\otimes x_iy_i)
\]
and the relation $x_iy_i\sim x_{i+1}y_{i+1}$ gives
\[
=x'_0y'_0\otimes( \sum_{i=0}^{N-1}e_{x_i}e_{y_i})+
(\sum_{i=0}^{N-1}e_{x_i}e_{y_i})\otimes x_0y_0
\]
\[
=x'_0y'_0\otimes \omega+
\omega\otimes x_0y_0
\]
Hence the two-sided ideal of $B$ generated by $\omega$ is a differential Hopf ideal.
If instead of a single $\omega$ we have several $\omega_1,\dots,\omega_n$, we simply remark that
the sum of differential Hopf ideals is again a differential Hopf ideal.
\begin{rem} If $X$ is finite, then for every $(x_0,y_0)$ there exists $N>0$ such that
$\sigma^N(x_0,y_0)=(x_0,y_0)$.
\end{rem}
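As a numerical illustration of this remark, one can verify the braid equation and compute the period $N$ of every pair for a small solution. The following Python sketch uses a hypothetical Lyubashenko-type cyclic solution $\sigma(x,y)=(y+1,x+1)$ on $X=\mathbb{Z}/5$ (this example is not taken from the text; any finite solution would do):

```python
from itertools import product

n = 5
X = range(n)

# Hypothetical Lyubashenko-type solution on X = Z/n: sigma(x, y) = (y+1, x+1) mod n.
# It satisfies the Yang-Baxter equation because the two shift maps commute.
def sigma(x, y):
    return ((y + 1) % n, (x + 1) % n)

def s12(t):  # sigma acting on the first two strands of a triple
    a, b = sigma(t[0], t[1])
    return (a, b, t[2])

def s23(t):  # sigma acting on the last two strands of a triple
    b, c = sigma(t[1], t[2])
    return (t[0], b, c)

# Verify the braid (Yang-Baxter) equation on all triples.
assert all(s12(s23(s12(t))) == s23(s12(s23(t))) for t in product(X, X, X))

def period(x, y):
    # Smallest N >= 1 with sigma^N(x, y) = (x, y); it exists because X is
    # finite and sigma is bijective, as in the remark above.
    p, N = sigma(x, y), 1
    while p != (x, y):
        p, N = sigma(*p), N + 1
    return N

print(sorted({period(x, y) for x, y in product(X, X)}))  # -> [5, 10]
```

Here diagonal pairs $(x,x)$ have period $5$, while all other pairs have period $10$.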
\begin{rem}
Let us suppose $(x_0,y_0)\in X\times X$ is such that $\sigma^N(x_0,y_0)=(x_0,y_0)$, and let $u\in X$ be an arbitrary element.
Consider the element
\[
(\mathrm{Id} \times \sigma)( \sigma \times \mathrm{Id}) (u,x_0,y_0)=(\widetilde x_0,\widetilde y_0,u'')
\]
graphically
\[
\xymatrix{
u\ar[rd]&x\ar[ld]&y\ar[d]\\
\widetilde x\ar[d]&u'\ar[rd]&y\ar[ld]\\
\widetilde x&\widetilde y&u''
}
\]
then $\sigma^N(\widetilde x_0,\widetilde y_0)=(\widetilde x_0,\widetilde y_0)$.
\end{rem}
\begin{proof}\[
(\sigma^N \times id)(\widetilde x_0,\widetilde y_0,u'')=(\sigma^N\times id)(id\times \sigma)(\sigma \times id)(u,x_0,y_0)=
\]
\[
(\sigma^{N-1}\times id)(\sigma\times id)(id\times \sigma)(\sigma \times id)(u,x_0,y_0)=
\]
using the Yang--Baxter equation (YBeq)
\[
(\sigma^{N-1}\times id)(id\times \sigma)(\sigma \times id)(id\times \sigma)(u,x_0,y_0)=
\]
repeating the procedure $N-1$ times leaves
\[
(id\times \sigma)(\sigma \times id)(id\times \sigma^N)(u,x_0,y_0)=(id\times \sigma)(\sigma \times id )(u,x_0,y_0)=(\widetilde x_0,\widetilde y_0,u'')
\]
\end{proof}
\section{$2^{nd}$ application: Comparison with Hochschild cohomology}
$B$ is a differential graded algebra and, in each degree $n$,
it is isomorphic to $A\otimes (TV)_n\otimes A$, where $V=\oplus_{x\in X}ke_x$.
In particular $B_n$
is free as an $A^e$-module. We
have {\em for free} the existence of a comparison map
\[
\xymatrix@-2ex{
\cdots\ar[r]&B_n\ar[r]\ar@{=}[d]&\cdots\ar[r]&B_2\ar[r]^d\ar@{=}[d]&B_1\ar[r]^d\ar@{=}[d]&\ar@{=}[d]B_0\\
\cdots\ar[r]&A'(TX)_n A\ar@{=}[d] ^{\cong}\ar[r]&\cdots\ar[r] &\oplus_{x,y\in X}A'e_xe_y A\ar@{=} ^{\cong}[d]\ar[r]^d&\oplus_{x\in X} A'e_x A\ar[r]^d\ar@{=}[d]&A' A\ar@{=}[d] ^{\cong}\\
\cdots\ar[r]&A\otimes V^{\otimes n}\otimes A\ar[d]^{\widetilde{\mathrm{Id}}}\ar[r]&\cdots\ar[r] &A\otimes V^{\otimes 2}\otimes A\ar[d]^{\widetilde{\mathrm{Id}}}\ar[r]^{d_2}& A\otimes V\otimes A\ar[d]^{\widetilde{\mathrm{Id}}}\ar[r]^{d_1}&A\otimes A\ar[d]^{\mathrm{Id}}\ar[r]^m& A\ar[d]^{\mathrm{Id}}\ar[r]&0\\
\cdots\ar[r]&A\otimes A^{\otimes n}\otimes A\ar[r]& \cdots\ar[r] &A\otimes A^{\otimes 2}\otimes A\ar[r]^{b'}& A\otimes A\otimes A\ar[r]^{b'}&A\otimes A\ar[r]^m& A\ar[r]&0\\
}\]
\begin{coro}
For every $A$-bimodule $M$, there exist natural maps
\[
\widetilde{\mathrm{Id}}_*: H^{YB}_\bullet(X,M)\to H_\bullet(A,M)
\]
\[
\widetilde{\mathrm{Id}}^*: H^\bullet(A,M)\to H_{YB}^\bullet(X,M)
\]
that are the identity in degrees zero and one.
\end{coro}
Moreover, one can choose an explicit map with extra properties. For that we recall some definitions: there is a set-theoretical section of the canonical projection from the
braid group to the symmetric group
\[
\xymatrix{
\mathbb{B}_n\ar@{->>}[r]& \mathbb{S}_n\ar@/_/@{..>}[l]
}
\]
\[
\xymatrix{
T_s:=\sigma_{i_1}\dots\sigma_{i_k}
&
s=\tau_{i_1}\dots \tau_{i_k} \ar@{|->}[l]
}
\]
where
\begin{itemize}
\item $\tau_i\in \mathbb{S}_n$ are the transpositions of neighboring elements $i$ and $i+1$,
the so-called simple transpositions,
\item $\sigma_i$ are the corresponding generators of $\mathbb{B}_n$,
\item $\tau_{i_{1}}\dots \tau_{i_{k}}$ is one of the shortest words representing $s$.
\end{itemize}
This inclusion factors through
\[
\mathbb{S}_n\hookrightarrow \mathbb{B}_n^+\hookrightarrow \mathbb{B}_n
\]
It is a set inclusion not preserving the monoid structure.
\begin{defi}
The permutation sets
\[
\mathrm{Sh}_{p_1,\dots,p_k}:=\left\{ s\in \mathbb{S}_{p_1+\dots+p_k}\mid s(1)<\dots<s(p_1),\ \cdots,\ s(p+1)<\dots < s(p+p_k) \right\},
\]
where $p=p_1+\dots+p_{k-1}$,
are called {\em shuffle sets}.
\end{defi}
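For concreteness, the two-block shuffle sets $\mathrm{Sh}_{p,q}$ can be enumerated directly; the following Python sketch (with hypothetical helper names, for illustration only) also checks the standard count $|\mathrm{Sh}_{p,q}|=\binom{p+q}{p}$:

```python
from itertools import permutations
from math import comb

def shuffle_set(p, q):
    """(p,q)-shuffles of {1,...,p+q}: permutations s with
    s(1)<...<s(p) and s(p+1)<...<s(p+q); s is stored as the
    tuple (s(1),...,s(p+q))."""
    n = p + q
    return [s for s in permutations(range(1, n + 1))
            if all(s[i] < s[i + 1] for i in range(p - 1))
            and all(s[i] < s[i + 1] for i in range(p, n - 1))]

# |Sh_{p,q}| = binom(p+q, p): a shuffle is determined by the image
# set {s(1),...,s(p)} of the first block.
for p in range(4):
    for q in range(4):
        assert len(shuffle_set(p, q)) == comb(p + q, p)

print(shuffle_set(2, 1))  # -> [(1, 2, 3), (1, 3, 2), (2, 3, 1)]
```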
\begin{rem}
It is well known that a braiding $\sigma$ gives an action of the positive braid monoid $\mathbb{B}_n^+$ on $V^{\otimes n}$, i.e. a monoid morphism
\[
\rho: \mathbb{B}_n^+\rightarrow \mathrm{End}_k(V^{\otimes n})
\]
defined on the generators $\sigma_i$ of $\mathbb{B}_n^+$ by
\[
\sigma_i\mapsto \mathrm{Id}_V^{\otimes(i-1)}\otimes \sigma\otimes \mathrm{Id}_V^{\otimes(n-i-1)}
\]
There is then a natural extension of the braiding on $V$ to a braiding on $T(V)$:
\[
\sigma(v\otimes w)=(\sigma_k \dots \sigma_1)\circ\dots\circ(\sigma_{n+k-2}\dots\sigma_{n-1})\circ(\sigma_{n+k-1}\dots\sigma_n)(vw)\in V^{\otimes k}\otimes V^{\otimes n}
\]
for $v\in V^{\otimes n}$, $w\in V^{\otimes k}$, and $vw$ their concatenation.
Graphically
\[
\xymatrix{
\ar[rrrrrdd]&\ar[rrrrrdd]&\dots&\ar[rrrrrdd]&\otimes&\ar[llllldd]&\ar[llllldd]&\dots&\ar[llllldd]\\
&&&&&&&&\\
&&\dots&&\otimes&&&\dots&
}\]
\end{rem}
\begin{defi}
The quantum shuffle multiplication on the tensor space $T(V)$ of a braided vector space $(V,\sigma)$ is the $k$-linear extension
of the map
\[
\shuffle_\sigma
=
\shuffle_\sigma^{p,q}:V^{\otimes p}\otimes V^{\otimes q}\rightarrow V^{\otimes(p+q)}
\]
\[
\overline v\otimes \overline w \mapsto
\overline v \shuffle_\sigma\overline w :=\sum_{s\in \mathrm{Sh}_{p,q}}T_s^\sigma(\overline{vw})
\]
Notation: $T_s^{\sigma}$ stands for the lift $T_s\in \mathbb{B}_n^+$
acting on $V^{\otimes n}$ via the braiding $\sigma$.
The algebra $Sh_\sigma(V):=(TV,\shuffle_\sigma)$
is called the quantum shuffle algebra on $(V,\sigma)$.
It is well known that
$\shuffle_\sigma$ is an associative product on $TV$ (see for example
\cite{Le} for details) that makes it a Hopf algebra with the deconcatenation
coproduct.
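In the simplest case $\sigma=\mathrm{flip}$ the lifts $T_s^\sigma$ act by place permutations and $\shuffle_\sigma$ reduces to the classical (unsigned) shuffle product. The following Python sketch, an illustration only with hypothetical names, verifies associativity on one-letter words:

```python
from collections import Counter
from itertools import permutations

def shuffles(p, q):
    # 0-indexed (p,q)-shuffles: tuples s with s[0]<...<s[p-1] and s[p]<...<s[p+q-1]
    n = p + q
    return [s for s in permutations(range(n))
            if all(s[i] < s[i + 1] for i in range(p - 1))
            and all(s[i] < s[i + 1] for i in range(p, n - 1))]

def shuffle_word(v, w):
    """Classical shuffle (sigma = flip, no signs) of two words; the result is
    a formal linear combination stored as Counter{word: coefficient}."""
    p, q, out = len(v), len(w), Counter()
    vw = v + w
    for s in shuffles(p, q):
        r = [None] * (p + q)
        for i, si in enumerate(s):
            r[si] = vw[i]   # letter i of vw goes to position s(i)
        out[tuple(r)] += 1
    return out

def shuffle_lin(a, b):  # bilinear extension to linear combinations
    out = Counter()
    for v, cv in a.items():
        for w, cw in b.items():
            for r, c in shuffle_word(v, w).items():
                out[r] += cv * cw * c
    return out

x, y, z = Counter({('x',): 1}), Counter({('y',): 1}), Counter({('z',): 1})
assert shuffle_lin(x, y) == Counter({('x', 'y'): 1, ('y', 'x'): 1})  # x sh y = xy + yx
# associativity: (x sh y) sh z == x sh (y sh z)  (all six 3-letter words, once each)
assert shuffle_lin(shuffle_lin(x, y), z) == shuffle_lin(x, shuffle_lin(y, z))
```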
\begin{defi}
Let $V$ be a braided vector space. The quantum symmetrizer
map $QS_{\sigma}:V^{\otimes n}\to V^{\otimes n}$ is defined by
\[
QS_{\sigma}(v_1\otimes\cdots \otimes v_n)=
\sum_{\tau\in \mathbb{S}_n}T^\sigma_\tau(v_1\otimes\cdots\otimes v_n)
\]
where $T_\tau^\sigma$ is the lift $T^\sigma_\tau\in \mathbb{B}_{n}^+$
of $\tau$, acting on $V^{\otimes n}$ via the braiding $\sigma$.
In terms of shuffle products the quantum symmetrizer can be computed as
\[
\omega \shuffle_{\sigma} \eta
:=\sum_{\tau\in\mathrm{Sh}_{p,q}}T^\sigma_\tau(\omega\otimes\eta)
\]
\end{defi}
The quantum symmetrizer map can also be defined as
\[
QS_\sigma(v_1\otimes\cdots\otimes v_n)=
v_1\shuffle_\sigma \cdots \shuffle_\sigma v_n
\]
With this notation, the next result reads as follows:
\begin{teo}\label{teoid}
The $A'$-$A$-linear quantum symmetrizer map
\[
\xymatrix{
A'V^{\otimes n}A\ar[r]^{\widetilde{\mathrm{Id}}}& A\otimes A^{\otimes n}\otimes A\\
a_1'e_{x_1}\cdots e_{x_n}a_2\ar@{|->}[r]&
a_1\otimes(x_1\shuffle_{-\sigma} \cdots \shuffle_{-\sigma} x_n)\otimes a_2}
\]
is a chain map lifting the identity. Moreover,
$\widetilde{\mathrm{Id}}:B\to (A\otimes TA\otimes A,b')$ is a differential graded algebra
map, where in $TA$ the product is $\shuffle_{-\sigma}$, and
in $A\otimes TA\otimes A$ the multiplicative structure
is not the usual tensor product algebra, but the braided one.
In particular, this map factors through
$A\otimes \mathfrak{B}\otimes A$, where $\mathfrak{B}$ is the Nichols algebra
associated to the braiding $\sigma'(x\otimes y)= - z\otimes t$, where
$x,y\in X$ and $\sigma(x,y)=(z,t)$.
\end{teo}
\begin{rem}The Nichols algebra
$\mathfrak{B}$ is the quotient of $TV$ by the ideal generated by (skew)primitives that
are not in $V$, so the result above explains the good behavior
of the ideals $invo$, $idS$,
or in general the ideal generated by
elements
of the form
$\omega=\sum_{i=0}^{N-1}e_{x_i}e_{y_i}$ where $\sigma(x_i,y_i)=(x_{i+1},y_{i+1})$ and $\sigma^N(x_0,y_0)=(x_0,y_0)$.
It would be interesting to know the properties of $A\otimes\mathfrak{B}\otimes A$
as a differential object, since it appears to be a candidate for a
Koszul-type resolution of the semigroup algebra $A$
(or similarly the group algebra $k[G_X]$).
\end{rem}
The rest of the paper is devoted to the proof of Theorem \ref{teoid}. Most of the lemmas
are ``folklore'', but we include them for completeness. The interested reader can
look at \cite{Lebed2} and the references therein.
\begin{lem}\label{AC monoid}
Let $\sigma$ be a braiding in a braided (sub)category that contains two associative algebras $A$ and $C$, meaning there
exist
bijective functions
\[
\sigma_A:A\otimes A\to A\otimes A,\
\sigma_C:C\otimes C\to C\otimes C,\
\sigma_{C,A}:C\otimes A\to A\otimes C\]
such that
\[
\sigma_*(1,-)=(-,1)\hbox{ and } \sigma_*(-,1)=(1,-) \ \hbox{ for } *\in \{A,C;C,A\}
\]
\[
\sigma_{C,A}\circ (1\otimes m_A)=(m_A\otimes 1)(1\otimes \sigma_{C,A})(\sigma_{C,A}\otimes 1)
\]
and
\[\sigma_{C,A}\circ ( m_C\otimes 1)=(1\otimes m_C)(\sigma_{C,A}\otimes 1)(1\otimes \sigma_{C,A})\]
Diagrammatically
\
\xymatrix{
C\ar[d]&A\ar[rd]^{\!\!\!m_A}&&A\ar[ld]\\
\ar[rrd]^{\!\!\!\!\!\!\! \sigma_{C,A}}&&A\ar[lld]&\\
A&&C&
}
\xymatrix{
\\
\\
&=^{[*]}&\\}
\xymatrix{
C\ar[rrd]^{\!\!\!\!\sigma_{C,A}}&&A\ar[lld]&A\ar[d]\\
A\ar[d]&&C\ar[rd]&A\ar[ld]\\
A\ar[rd]&&A\ar[ld]&C\ar[d]\\
&A&&C
}
\]
and
\[
\xymatrix{
C\ar[rd]^{\ \ m_C}&&C\ar[ld]&A\ar[d]\\
&C\ar[rrd]^{\!\!\!\!\!\!\!\sigma_{C,A}}&&A\ar[lld]\\
&A&&C
}
\xymatrix{
\\
\\
&=^{[**]}&\\}
\xymatrix{
C\ar[d]&C\ar[rrd]&&A\ar[lld]\\
C\ar[rd]&A\ar[ld]&&C\ar[d]\\
A\ar[d]&C\ar[rd]&&C\ar[ld]\\
A&&C&
}
\]
Assume that they
satisfy the braid equation for any combination of $\sigma_A$, $\sigma_C$ and $\sigma_{C,A}$.
Then $A\otimes_\sigma C=A\otimes C$ with the product defined by
\[
(m_A\otimes m_C)\circ(\mathrm{Id}_A\otimes \sigma_{C,A}\otimes\mathrm{Id}_C)\colon
(A\otimes C)\otimes (A\otimes C)\to A\otimes C
\]
is an associative algebra. In diagrams:
\[
\xymatrix{
A\ar[d]&&C\ar[rd]^{\!\!\!\sigma}&A\ar[ld]&&C\ar[d]\\
A\ar[rd]^{\ \ m_A}&&A\ar[ld]&C\ar[rd]^{\ \ m_C}&&C\ar[ld]\\
&A&&&C&
}\]
\end{lem}
\begin{proof}
Take $m\circ (1\otimes m)\big((a_1\otimes c_1)\otimes (a_2\otimes c_2)\otimes (a_3\otimes c_3)\big)$,
use $[*]$, associativity in $A$, associativity in $C$, then $[**]$, and the result follows.
\end{proof}
\begin{lem}
Let $M$ be the monoid freely generated by $X$
modulo the relation $xy=zt$ where $\sigma(x,y)=(z,t)$. Then
$\sigma:X\times X\to X\times X$ naturally extends to a braiding on $M$ that satisfies
\[
\xymatrix{
M\ar[rd]^{\ \ \ m}& &M\ar[ld]&M\ar[d]^\mathrm{Id}\\
&M\ar[rrd]^\sigma&&M\ar[lld]\\
&M&&M
}
\xymatrix{
\\
\\
&=&\\
}
\xymatrix{
M\ar[d]^{\mathrm{Id}} &M\ar[rrd]^\sigma&&M\ar[lld]\\
M\ar[rd]^{\!\!\sigma}&M\ar[ld]&&M\ar[d]\\
M\ar[d]&M\ar[rd]^{\ \ \ m}&&M\ar[ld]\\
M&&M}
\]
\[
\xymatrix{
M\ar[d]&M\ar[rd] &&M\ar[ld]_{m}\\
M \ar[rrd]^\sigma&&M\ar[lld]\\
M&&M
}
\xymatrix{
\\
\\
&=&\\
}
\xymatrix{
M\ar[rd]^{\!\!\sigma} &M\ar[ld]&&M\ar[d]\\
M\ar[d]&M\ar[rrd]^\sigma&&M\ar[ld]\\
M\ar[rd]^m&&M\ar[ld]&M\ar[d]^\mathrm{Id}\\
&M&&M}
\]
\end{lem}
\begin{proof}
It is enough to prove that the extension defined above is well defined in the quotient. Inductively, it suffices to
see that
$\sigma(axyb,c)=\sigma(aztb,c)$ and $\sigma(c,axyb)=\sigma(c,aztb)$ where
$ \sigma(x,y)=(z,t)$, and this follows
immediately from the braid equation.
A diagram for the first equation is the following:
\[
\xymatrix{
a\ar[d]&x\ar[rd]&y\ar[ld]&b\ar[rd]&c\ar[ld]\\
\ar[d]&z\ar[d]&t\ar[rd]&\ar[ld]&\ar[d]\\
\ar[d]&\ar[rd]&\ar[ld]&\ar[d]&\ar[d]\\
\ar[rd]&\ar[ld]&\ar[d]&\ar[d]&\ar[d]\\
&&\alpha&\beta&}
\xymatrix{
\\
\\
&=&\\
}
\xymatrix{
a\ar[d]&x\ar[d]&y\ar[d]&b\ar[rd]&c\ar[ld]\\
\ar[d]&\ar[d]&\ar[rd]&\ar[ld]&\ar[d]\\
\ar[d]&\ar[rd]&\ar[ld]&\ar[d]&\ar[d]\\
\ar[rd]&\ar[ld]&\ar[rd]&\ar[ld]&\ar[d]\\
&&\alpha^*&\beta^*&
}
\]
As $\alpha\beta=\alpha^*\beta^*$ the result follows.
\end{proof}
\begin{lem}
$m\circ\sigma=m$, diagrammatically:
\[
\xymatrix{
M\ar[rrd]&&M\ar[lld]\\
M \ar[rd]^{\ \ \ m} &&M\ar[ld]\\
& M }
\xymatrix{
\\
\\
&=&\\
}
\xymatrix{
M\ar[rd]^{\ \ \ m}&&M\ar[ld]\\
& M\ar[d]^\mathrm{Id}\\
& M
}
\]
\end{lem}
\begin{proof} Using successively that $m\circ \sigma_i=m$, we have:
\[m\circ \sigma(x_1\dots x_n, y_1\dots y_k)=m\left((\sigma_k\dots \sigma_1)\dots(\sigma_{n+k-1}\dots \sigma_n)_{(x_1\dots x_ny_1\dots y_k)}\right)\]
\[=m\left((\sigma_{k-1}\dots \sigma_1)\dots(\sigma_{n+k-1}\dots \sigma_n)_{(x_1\dots x_ny_1\dots y_k)}\right)=\dots\newline\]
\[
=m(x_1\dots x_n,y_1\dots y_k)
\]
\end{proof}
\begin{coro}
If one considers $A=k[M]$,
then the algebra $A$ satisfies all the diagrams in the previous lemmas.
\end{coro}
\begin{lem}
If $T=(TA, \shuffle_\sigma)$, there are bijective functions
\[
\sigma_{T,A}:=\sigma|_{T\otimes A}: T\otimes A\rightarrow A\otimes T
\]
\[
\sigma_{A,T}:=\sigma|_{A\otimes T}: A\otimes T\rightarrow T\otimes A
\]
that verify the hypotheses of Lemma \ref{AC monoid}, and the same holds for
$(TA, \shuffle_{-\sigma})$.
\end{lem}
\begin{coro}
$A\otimes (TA, \shuffle_{-\sigma})\otimes A$ is an algebra.
\end{coro}
\begin{proof}
Use \ref{AC monoid} twice and the result follows.
\end{proof}
\begin{coro}\label{btp}
For $A=k[M]$, the standard resolution of $A$ as an $A$-bimodule has a natural algebra structure,
given by the braided tensor product:
\[
A\otimes TA\otimes A=
A\otimes_\sigma(T^cA,\shuffle_{-\sigma})\otimes_\sigma A\]
\end{coro}
Recall that the differential of the standard resolution,
$b':A^{\otimes n+1}\to A^{\otimes n}$, is defined by
\[ b'(a_0\otimes\dots\otimes a_n)= \sum_{i=0}^{n-1}(-1)^{i}a_0\otimes \dots\otimes a_ia_{i+1}\otimes\dots\otimes a_n\]
for all $n\geq 2$.
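As a sanity check of this formula, the following Python sketch models $A$ as a free algebra on letters (an illustrative assumption, not the paper's $A=k[M]$ with its relations) and verifies $b'\circ b'=0$ on a small tensor:

```python
from collections import Counter

# Model A as a free algebra on letters: an element of A is a word (a tuple
# of letters) and multiplication is concatenation.  A pure tensor in
# A^{(n+1)} is a tuple of words; linear combinations are Counter{tensor: coeff}.
def bprime(t):
    """b'(a_0 x ... x a_n) = sum_{i=0}^{n-1} (-1)^i a_0 x ... x a_i a_{i+1} x ... x a_n,
    extended linearly."""
    out = Counter()
    for tensor, c in t.items():
        n = len(tensor) - 1
        for i in range(n):
            merged = tensor[:i] + (tensor[i] + tensor[i + 1],) + tensor[i + 2:]
            out[merged] += c * (-1) ** i
    # drop terms whose coefficient cancelled to zero
    return Counter({k: v for k, v in out.items() if v})

t = Counter({(('a',), ('b',), ('c',), ('d',)): 1})  # a x b x c x d
assert bprime(bprime(t)) == Counter()               # b' o b' = 0
print(bprime(t))   # three terms: ab x c x d, -a x bc x d, a x b x cd
```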
If $A$ is a commutative algebra then the Hochschild resolution
is an algebra viewed as $\oplus_{n\geq 2}A^{\otimes n}=A\otimes TA\otimes A$, with right and left
$A$-bilinear extension of the shuffle product on $TA$, and $b'$ is a (super) derivation with
respect to that product (see for instance Prop. 4.2.2 \cite{L}).
In the braided-commutative case we have the analogous result:
\begin{lem}
$b'$ is a derivation with respect to the product mentioned in Corollary \ref{btp}.
\end{lem}
\begin{proof}
Recall the commutative proof
as in Prop. 4.2.2 \cite{L}.
Denote $*$ the product
\[
(a_0\otimes\dots\otimes a_{p+1} )*(b_0\otimes\dots\otimes b_{q+1})=
a_0b_0\otimes((a_1\dots\otimes a_{p} )\shuffle (b_1\otimes\dots\otimes b_{q}))\otimes a_{p+1}b_{q+1}
\]
Since $\oplus_{n\geq 2}A^{\otimes n}=A\otimes TA\otimes A$
is generated by $A\otimes A$ and $1\otimes TA\otimes 1$, we check on generators.
For $a\otimes b\in A\otimes A$, $b'(a\otimes b)=0$; in particular, it satisfies the Leibniz
rule for elements of $A\otimes A$.
Also, $b'$ is $A$-linear on the left and on the right, so
\[
b'\big((a_0\otimes a_{n+1})*(1\otimes a_1\otimes \cdots \otimes a_n\otimes 1)\big)=
b'(a_0\otimes a_1\otimes\cdots\otimes a_n\otimes a_{n+1})
\]
\[=
a_0b'(1\otimes a_1\otimes\cdots\otimes a_n\otimes 1)a_{n+1}
=(a_0\otimes a_{n+1})*b'(1\otimes a_1\otimes\cdots\otimes a_n\otimes 1)
\]
\[
=0+(a_0\otimes a_{n+1})*b'(1\otimes a_1\otimes\cdots\otimes a_n\otimes 1)
\]\[
=b'(a_0\otimes a_{n+1})*(1\otimes a_1\otimes\cdots\otimes a_n\otimes 1)
+(a_0\otimes a_{n+1})*b'(1\otimes a_1\otimes\cdots\otimes a_n\otimes 1)
\]
Now consider $(1\otimes a_1\otimes \dots\otimes a_{p}\otimes 1 )*(1\otimes b_1\otimes\dots\otimes b_{q}\otimes 1)$:
it is a sum
of terms in which two consecutive tensor factors can be of the form
$(a_i,a_{i+1})$, $(b_j,b_{j+1})$, $(a_i,b_j)$ or $(b_j,a_i)$.
When one computes $b'$, multiplication of two consecutive tensor factors will give,
respectively, terms of the form
\[ \cdots\otimes a_ia_{i+1}\otimes \cdots, \ \cdots\otimes b_jb_{j+1}\otimes\cdots,\ \cdots\otimes a_ib_j\otimes\cdots,
\cdots \otimes b_ja_i\otimes\cdots
\]
The first type of terms will recover $b'((1\otimes a_1\otimes\cdots\otimes a_n\otimes 1))*
(1\otimes b_1\otimes\cdots\otimes b_q\otimes 1)$ and the second type of terms will recover
$\pm (1\otimes a_1\otimes\cdots\otimes a_n\otimes 1)*b'((1\otimes b_1\otimes\cdots\otimes b_q\otimes 1))$.
On the other hand, the difference between the third and fourth types of terms is just
a single transposition, so they have opposite signs, while $a_ib_j=b_ja_i$ because the
algebra is commutative; hence, if one takes the {\em signed} shuffle, they cancel each other.
In the {\em braided} shuffle product the summands are indexed by the same set
of shuffles, so we have the same types of terms; that is, when computing $b'$ of
a (signed) shuffle product, one may multiply two consecutive elements coming from the
first factor, two elements coming from the second factor,
or a mixed pair. The mixed terms will have the form
\[
\cdots\otimes A_iB_j \otimes\cdots \hbox{, or }
\cdots\otimes \sigma^1(A_i,B_j)\sigma^2(A_i,B_j)\otimes\cdots
\]
Since in the algebra $A$ we have $A_iB_j=\sigma^1(A_i,B_j)\sigma^2(A_i,B_j)$, these
terms cancel, leaving only the terms corresponding to
$b'(1\otimes a_1\otimes\cdots\otimes a_p \otimes 1)\shuffle_{-\sigma} (1\otimes b_1\otimes\cdots\otimes b_q\otimes 1)$
and $\pm(1\otimes a_1\otimes\cdots\otimes a_p\otimes 1 )\shuffle_{-\sigma} b'(1\otimes b_1\otimes\cdots\otimes b_q\otimes 1)$
respectively.
\end{proof}
\begin{coro} There exists a comparison morphism
$f:(B,d)\to (A\otimes TA\otimes A,b')$ which is a differential graded algebra morphism, $f\circ d=b'\circ f$,
obtained by simply defining it on the $e_x$ ($x\in X$)
and verifying that $f(x'-x)=b'(f(e_x))$.
\end{coro}
\begin{proof}
Define $f$ on $e_x$, extend $k$-linearly to $V$, multiplicatively to $TV$, and $A'$-$A$ linearly to
$A'\otimes TV\otimes A=B$. In order to see that $f$ commutes with the differential, by $A'$-$A$-linearity
it suffices to check on $TV$, but since $f$ is multiplicative on $TV$ it is enough to check on $V$, and by $k$-linearity we check on basis, that is, we only need $f(de_x)=b'f(e_x)$.
\end{proof}
\begin{coro}
$f|_{TX}$ is the quantum symmetrizer map, and therefore
$\mathrm{Ker}(f)\cap TX\subset B$ defines the Nichols ideal
associated to $-\sigma$.
\end{coro}
\begin{proof}
\[
f(e_{x_1}\cdots e_{x_n})=f(e_{x_1})*\cdots *f(e_{x_n})=(1\otimes x_1\otimes 1)*\cdots *(1\otimes x_n \otimes 1)=1\otimes(x_1\shuffle \cdots \shuffle x_n)\otimes 1
\]
\end{proof}
The previous corollary explains why $\mathrm{Ker}(\mathrm{Id}-\sigma)\subset B_2$
gives a Hopf ideal and also ends the proof of Theorem \ref{teoid}.
\begin{question}
Is $\mathrm{Im}(f)=A\otimes \mathfrak{B}\otimes A$ a resolution of $A$ as an $A$-bimodule? Namely,
is $(A\otimes\mathfrak{B}\otimes A,d)$ acyclic?
\end{question}
This is the case for involutive solutions in characteristic zero, but
also for $\sigma=\mathrm{flip}$ in any characteristic, and for $\sigma=\mathrm{Id}$ (notice this $\mathrm{Id}$-case
gives the Koszul resolution for the tensor algebra). If the answer to that question is yes
and $\mathfrak{B}$ is finite dimensional, then $A$ necessarily has finite global dimension.
Another interesting question is how to relate
generators for the relations defining $\mathfrak{B}$ and cohomology classes for $X$.
\section{Introduction}
The competition between thermal fluctuations, pinning and
interactions between vortices leads to many novel physical
phenomena in type-II high-temperature
superconductors~\cite{blatter}. Examples include the melting of the
Abrikosov flux-lattice into an entangled vortex-liquid~\cite{ns} and
the proposed existence of low temperature Bose-glass~\cite{nv},
vortex glass~\cite{fisher_glass} and Bragg glass~\cite{nat_sch}
phases.
Many experimental probes have been used to study these phenomena.
They include decoration, transport and magnetization measurements,
neutron scattering, electron microscopy, electron holography and
Hall probe microscopes. More recently it has become possible to
manipulate single vortices, for example using magnetic force
microscopy (MFM)~\cite{Wadas92}. These can, in principle, measure
directly many microscopic properties which have been up to now under
debate or assumed. The possibility of performing such experiments is
similar in spirit to single molecule experiments on motor proteins,
DNA, and RNA which have opened a window on phenomena inaccessible
via traditional bulk biochemistry experiments~\cite{singlemol}.
In this spirit Olson-Reichhardt and Hastings~\cite{ORH04} have
proposed using MFM to wind two vortices around each other. Such an
experiment allows direct probing of the energetic barrier for two
vortices to cut through each other. A high barrier for flux lines
crossing has important consequences for the dynamics of the
entangled vortex phase.
In this paper we introduce and study several experiments in which a
single vortex is depinned from extended defects using, for example,
MFM. A brief account of the results can be found in
Ref.~[\onlinecite{knp}]. First we consider a setup where MFM is used
to pull an isolated vortex bound to common extended defects such as
a columnar pin, screw dislocation, or a twin plane in the presence
of point disorder. Using a scaling argument, supported by numerical
and rigorous analytical results, we derive the displacement of the
vortex as a function of the force exerted by the tip of a magnetic
force microscope. We focus on the behavior near the depinning
transition and consider an arbitrary dimension $d$. We argue that
the transition can be characterized by a universal critical
exponent, which depends {\it only on the dimensionality of the
defect}. We show that unzipping experiments from a twin plane
directly measures the free-energy fluctuations of a vortex in the
presence of point disorder in $d=1+1$ dimensions. To the best of our
knowledge, there is only one, indirect, measurement of this
important quantity in Ref.~[\onlinecite{Bolle}]. The form of the
phase diagram in the force temperature plane is also analyzed in
different dimensions. Related results apply when a tilted magnetic
field is used to tear away vortex lines in the presence of point
disorder, which was not considered in earlier work on clean
systems~\cite{hatano}. Furthermore, we show that a possible
experimental application of the scaling argument is a direct measurement of the vortex line tension in an unzipping
experiment. As we will show in this paper, in a system of finite size, the displacement of the flux line at the transition
depends only on the critical force exerted on the flux line by the
MFM tip, the flux line tension and the sample thickness. Thus
unzipping experiments can provide valuable information on the
microscopic properties of flux lines.
Next we consider a setup where a single vortex is pulled out of a
plane with many vortices. It is known that the large-scale behavior
of vortices in a plane is characterized by a single dimensionless
number, often referred to as the Luttinger liquid parameter due to
an analogy with bosons in $d=1+1$ dimensions. We show that
experiments which unzip a single vortex out of the plane can be used
to directly probe the Luttinger liquid parameter. We also discuss
the effects of disorder both within the defect and in the bulk with the same setup.
\section{Unzipping a vortex from a defect}
\label{Sec2}
\subsection{Review of clean case}
\label{sectioncleancase}
We begin by considering the unzipping of a vortex from an extended
defect in a clean sample. For a columnar defect the system is
depicted in Fig.~\ref{fig:clean_unzip}. At the top of the sample the
MFM applies a constant force ${\bf f}$ which pulls the vortex away
from the defect. We assume that at the bottom of the sample the
vortex is always bound to the defect at a specific location. This
assumption will not influence the results since below the unzipping
transition the flux line is unaffected by the boundary conditions at
the far end of the sample.
\begin{figure}[ht]
\center
\includegraphics[width=8cm]{Clean_unzip.eps}
\caption{A MFM tip applies a constant force ${\bf f}$ which pulls the
vortex away from the defect. The configuration of the vortex is
represented by ${\bf r}(\tau)$. We assume throughout that the vortex
is always bound to the defect at the bottom of the sample so that
${\bf r}(\tau=0)=0$. }\label{fig:clean_unzip}
\end{figure}
In the absence of external force, and for an external field aligned
with the defect, the appropriate energy for a given configuration
${\bf r}(\tau)$ of the vortex is given by \cite{blatter}:
\begin{equation}
F_0\!\!=\!\!\!\int_0^L \!\!\!\!d \tau \left[ \frac{\gamma}{2}
(\partial_\tau {\bf r}(\tau))^2+V({\bf r}(\tau)) \right] .
\label{f0}
\end{equation}
Here $\gamma$ is the line tension and $L$ is the length of the
sample along the $\tau$ direction. The vector ${\bf r}(\tau)$
represents the configuration of the vortex in the $d$ dimensional
space and $V({\bf r})$ is a short-ranged attractive
potential describing the $d'$-dimensional extended defect (in
Fig.~\ref{fig:clean_unzip} $d=3$ and $d'=1$). The effect of the
external force, exerted by the MFM, can be incorporated by adding to
the free energy the contribution
\begin{equation}
F_1=-{\bf f}\cdot {\bf r(L)}=-\int_0^L {\bf f}\cdot
\partial_\tau \bf r(\tau)\,d\tau
\label{eq:unzipfe}
\end{equation}
where we have used ${\bf r}(\tau=0)={\bf 0}$. Here ${\bf f}$
stands for the local force exerted by the MFM in the transverse
direction. The free energy of a given configuration of the vortex
is given by
\begin{equation}
F({\bf r})=F_0({\bf r})+F_1({\bf r}) \;.
\end{equation}
The problem, as stated, has been studied first in the context of
vortices in the presence of a tilted magnetic field~\cite{hatano}
and the results have been applied to the related problem of DNA
unzipping~\cite{Lubensky}.
We note that a similar setup can be
achieved by using a transverse magnetic field instead of the
external force. See Fig.~\ref{fig:clean_unzip_mag}.
\begin{figure}[ht]
\center
\includegraphics[width=8cm]{Clean_unzip_mag.eps}
\caption{Same as in Fig.~\ref{fig:clean_unzip} but with a transverse
magnetic field instead of the MFM force tearing the flux line away from a
defect.}\label{fig:clean_unzip_mag}
\end{figure}
Indeed in the free energy (\ref{eq:unzipfe}) the external force
couples to the slope of the flux line $\partial_\tau {\bf r}$ in the
same way as the external magnetic field does~\cite{hatano}. The only
difference between the two setups is that there are now equal and
opposite forces acting on the top and bottom ends of the sample.
However this difference is important only in short samples, where
the two ends of the flux line are not independent from each other.
In this paper we focus on the thermal average of the distance of the tip
of the vortex from the extended defect, $\langle x_m (\tau=L)\rangle$. This quantity
is related to the thermal average of the length of the vortex that is unzipped from the
defect, $\langle \tau_m \rangle$, through $\langle x_m \rangle = f
\langle \tau_m \rangle / \gamma$. Here and throughout the paper
$\langle \ldots \rangle$ denotes a thermal average while an overbar
denotes an average over realizations of the disorder.
As stated above, the universal behavior of $\langle \tau_m \rangle$
(or equivalently $\langle x_m \rangle$) within this disorder-free model has been
derived previously. Here we sketch two approaches which will be
generalized to samples with quenched disorder in the rest of the paper.
In the first approach, instead of directly summing over
configurations of the vortex we perform the sum in two parts by
dividing the vortex into bound and unbound segments. The unbound
segment starts at the point where the vortex departs the defect
without ever returning to hit it again up to the top of the sample.
Using Eq.~(\ref{eq:unzipfe}) it is straightforward to integrate over
vortex configurations to obtain for the partition function of the
unzipped segment
\begin{eqnarray}
Z_u(\tau_m)&=&\int {\cal D}{\bf r}(\tau)\,\mathrm
e^{-\beta\int_0^{\tau_m}\! d \tau \left[ \frac{\gamma}{2}
(\partial_\tau {\bf r}(\tau))^2-{\bf
f}\cdot\partial_\tau{\bf r(\tau)}\right]} \nonumber \\
&\propto& \mathrm e^{\tau_m \beta f^2 /2\gamma} \;,
\label{free_unzip1}
\end{eqnarray}
so that the free energy associated with this conditional partition function is
\begin{equation}
{\cal F}_u(\tau_m)=-\beta^{-1}\ln Z_u(\tau_m)= - f^2 \tau_m/
2\gamma \;,
\label{free_unzip}
\end{equation}
where $\beta$ is the inverse temperature. Henceforth in this paper we
set $\beta=1$, which can be always achieved by appropriate rescaling
of the energy units. Even though the above sum also runs over
configurations which return to the defect, it is easy to verify that
these configurations give rise to exponentially small corrections in
$\tau_m$. Equation~(\ref{free_unzip}) implies that as the force,
${\bf f}$, increases the free energy density of the unzipped portion
of the vortex decreases. In contrast, the free energy density of the
bound part is, clearly, independent of the force and given by ${\cal
F}_b(\tau_m)=V_0(L-\tau_m)$, where $V_0$ is the free energy per unit
length of a bound vortex and $L$ is the length of the sample along
the defect. The vortex will be unzipped when $f=f_c=\sqrt{2 \gamma
|V_0|}$ such that the free-energy densities of the bound and
unzipped states are equal.
In this representation the total free energy of the vortex is
given by
\begin{equation}
{\cal F}(\tau_m)={\cal F}_u(\tau_m)+{\cal F}_b(\tau_m)\;.
\end{equation}
The unconstrained partition function of the model is given by
\begin{equation}
Z=\int_0^L d\tau_m e^{-(f_c^2-f^2)\tau_m/2 \gamma} \;.
\end{equation}
Since both results are independent of the dimensionality of the
defect (columnar or planar) near the transition one always finds
in the $L \to \infty$ limit
\begin{equation}
\langle \tau_m \rangle \sim \frac{1}{(f_c-f)^\nu} \;,
\label{eq:clean}
\end{equation}
with $\nu=1$. Note that, approaching the transition from above, the
average length of the vortex which is bound to the defect, $\langle
(L-\tau_m) \rangle$, diverges in the same manner~\cite{hatano}.
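As a quick numerical illustration (not part of the derivation above), the clean-case scaling of Eq.~(\ref{eq:clean}) can be checked by evaluating $\langle \tau_m\rangle$ directly from the partition function; the parameter values below are arbitrary:

```python
import numpy as np

gamma_, f_c, L = 1.0, 1.0, 1e6    # arbitrary units: line tension, critical force, sample length

def mean_tau(f):
    """<tau_m> from Z = int_0^L dtau exp[-(f_c^2 - f^2) tau / 2 gamma], in closed form."""
    a = (f_c**2 - f**2) / (2.0 * gamma_)
    e = np.exp(-a * L)
    return (1.0 - e * (1.0 + a * L)) / (a * (1.0 - e))

# the log-log slope of <tau_m> against (f_c - f) approaches -nu with nu = 1
df = np.array([1e-2, 1e-3])
tau = np.array([mean_tau(f_c - d) for d in df])
slope = np.log(tau[1] / tau[0]) / np.log(df[1] / df[0])
print(round(slope, 2))   # close to -1
```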
An alternative approach, which will also be useful in this paper,
uses the mapping of the problem to the physics of a fictitious
quantum particle \cite{NelsonBook}. The contribution of the external
field ${\bf f}$ to the free energy now manifests itself as an
imaginary vector potential acting on the particle in $d-1$
dimensions (with the $\tau$ axis acting as a time direction).
Explicitly, using the standard conversion from path-integrals (see
Ref.~[\onlinecite{hatano}] for details) one finds that the problem
can be described in terms of a non-Hermitian Hamiltonian:
\begin{equation}
{\cal H}={1\over 2\gamma} {\bf p}^2 -{i\over \gamma} {\bf f} \cdot
{\bf p} +V({\bf r}) \;,
\label{H}
\end{equation}
where ${\bf p}=\frac{1}{i} \vec{\nabla}$ is the momentum operator.
In this language the vortex is bound to the defect as long as there
is a bound state in the Hamiltonian. As mentioned above $i {\bf f}$
is equivalent to a constant imaginary vector potential. This analogy
makes it apparent that solutions of the non-Hermitian problem can be
related to those of the Hermitian Hamiltonian (where one sets ${\bf
f}=0$) by an imaginary gauge-transformation~\cite{hatano}. In
particular the left $\psi^L_n({\bf r},{\bf f})$ and the right
$\psi^R_n({\bf r},{\bf f})$ eigenfunctions of the non-Hermitian
problem can be obtained from those of the Hermitian problem,
$\psi_n({\bf r},{\bf f}={\bf 0})$, using
\begin{eqnarray}
\psi^R_n({\bf r},{\bf f}) &=& {\cal U} \psi_n({\bf r},{\bf
f}={\bf 0}) \nonumber \\
\psi^L_n({\bf r},{\bf f}) &=& \psi_n({\bf r},{\bf f}={\bf 0})
{\cal U}^{-1} \;, \label{eq:gauge1}
\end{eqnarray}
where
\begin{equation}
{\cal U}=\mathrm e^{{\bf f} \cdot {\bf r}}= \mathrm e^{f x}\; ;
\;\;\;\;\; {\cal U}^{-1}=\mathrm e^{-{\bf f} \cdot {\bf r}}=\mathrm
e^{-f x} \;.
\label{eq:gauge2}
\end{equation}
The universal behavior of $\tau_m$ at the transition,
Eq.~(\ref{eq:clean}), was obtained in Ref.~[\onlinecite{hatano}] by noting
that
\begin{equation}
\langle\tau_m\rangle\propto \langle x_m\rangle ={\int x\psi^R_n({\bf
r}) d{\bf r}\over \int \psi^R_n({\bf r}) d{\bf r}}\propto{1\over
{f_c-f}}, \label{tau_m}
\end{equation}
where $\psi^R_n({\bf r})\propto \mathrm e^{-f_c r}$ at large $r$. We
note that the imaginary gauge transformation is justified only at
$f<f_c$. Otherwise the integrals in Eq.~(\ref{tau_m}) formally
diverge. In principle this divergence can be fixed by proper
treatment of the boundary conditions~\cite{AHNS}.
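The imaginary gauge transformation of Eqs.~(\ref{eq:gauge1}) and (\ref{eq:gauge2}) can be verified on a lattice discretization of the Hamiltonian (\ref{H}); the sketch below uses an arbitrary system size, hopping, and disorder, and checks that with open boundaries the non-Hermitian spectrum is purely real and coincides with the Hermitian (${\bf f}=0$) one:

```python
import numpy as np

# 1D lattice version of Eq. (H): the imaginary vector potential turns the
# hopping t into the asymmetric pair t*exp(+h), t*exp(-h) (h plays the role of f).
N, t, h = 20, 1.0, 0.2                      # arbitrary illustration values
rng = np.random.default_rng(1)
V = rng.uniform(-0.5, 0.0, N)               # attractive on-site potential

H_nh = np.diag(V)                           # non-Hermitian (f != 0)
H_h = np.diag(V)                            # Hermitian (f = 0)
for n in range(N - 1):
    H_nh[n, n + 1], H_nh[n + 1, n] = -t * np.exp(h), -t * np.exp(-h)
    H_h[n, n + 1], H_h[n + 1, n] = -t, -t

# diag(e^{-h n}) is the lattice analogue of the gauge factor U = e^{f x}:
# with open boundaries it is a similarity transformation, so both matrices
# share the same (real) spectrum.
ev_nh = np.linalg.eigvals(H_nh)
ev_h = np.linalg.eigvalsh(H_h)
print(np.allclose(ev_nh.imag, 0.0, atol=1e-9),
      np.allclose(np.sort(ev_nh.real), ev_h, atol=1e-9))   # both True
```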
Finally we discuss the phase diagram of the model. It is natural
that as the temperature rises the value of the critical force
needed for the unzipping decreases (at very low temperatures a
possible reentrance has been discussed in the
literature~\cite{Reentrant,Reentrant2}). Here we focus on the
existence of a critical force for any strength of the attractive
potential. The question is completely equivalent to that of the
existence of a localized vortex line in the absence of the external force.
Using the analogy to the quantum mechanical problem it is well known
that in two dimensions (corresponding to a three dimensional sample)
and below, as long as $\int_{-\infty}^\infty d{\bf r} V({\bf r}) <0$
there exists a bound state. Therefore, in real samples, as long as
the potential satisfies this condition there is always a minimum nonzero
critical force required to unbind the flux line from the defect.
\subsection{Scaling arguments for the disordered case.}
\label{sectionscaling}
\subsubsection{Critical Behavior}
We now consider the effect of point
disorder. In this case the free energy of a given vortex
configuration without the contribution from the force exerted by
the MFM is given by
\begin{equation}
F_0\!\!=\!\!\!\int_0^L \!\!\!\!d \tau \left[ \frac{\gamma}{2}
(\partial_\tau {\bf r}(\tau))^2+V({\bf r}(\tau))+\mu({\bf
r}(\tau),\tau) \right] . \label{f0dis}
\end{equation}
The frozen point disorder $\mu({\bf r},\tau)$ is assumed to be uncorrelated
and Gaussian distributed with $\overline{\mu}=0$ and
$\overline{\mu({\bf r},\tau)\mu({\bf r'},\tau')}=\sigma \delta({\bf
r}-{\bf r'})\delta(\tau-\tau')$. As mentioned above, the overbar denotes an average over realizations of the
disorder and $\langle \dots\rangle$ denotes averaging over
thermal fluctuations. The contribution from the force exerted by the
MFM retains the same form as in Eq.~(\ref{eq:unzipfe}).
The free energy, Eq.~(\ref{f0dis}), without the contribution from
the external force has been studied previously with and without the
presence of an extended defect. Without the defect it is well known
that if one fixes one end of the vortex at the bottom of the sample
the mean-square displacement, $\langle x^2(\tau) \rangle$ of the
vortex after traveling a distance $\tau$ into the sample behaves as
$\langle x^2(\tau) \rangle = B\tau^{2\zeta(d)}$, where
$\zeta(d=2)=2/3$ and $\zeta(d=3) \approx 0.61$ is the wandering
exponent~\cite{Karreview}. Moreover, for a given realization of disorder there is
typically a single dominant trajectory of the vortex and
realizations with two competing minima are very rare. Finally, it is
known that the free energy fluctuations scale as
\begin{equation}
\delta {\cal F} \propto \tau^{\omega(d)} \label{eq:fefluct}
\end{equation}
with $\omega(d=2)=1/3$, $\omega(d=3) \approx 0.22$. The exponents
$\omega$ and $\zeta$ are not independent. They satisfy the
scaling relation: $\omega=2 \zeta -1$. Note that as long as there is
no defect the free energy fluctuations are unaltered even in the
presence of a force acting on the vortex because this force can be
simply gauged away.
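The scaling relation can be verified directly against the quoted exponent values:

```python
# The scaling relation omega = 2*zeta - 1 reproduces the quoted exponent pairs
pairs = [(2, 2.0 / 3.0, 1.0 / 3.0), (3, 0.61, 0.22)]   # (d, zeta, omega)
check = all(abs((2 * zeta - 1) - omega) < 1e-9 for _, zeta, omega in pairs)
print(check)   # True
```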
The universal properties of the unzipping transition with point
disorder can be analyzed using a simple scaling argument adapted
from~[\onlinecite{Lubensky}] for the unzipping of DNA (for an
abbreviated account, see Ref.~[\onlinecite{knp}]).
Following the discussion in the previous section, which analyzed
the unzipping problem with no disorder, we sum over configurations
of the unzipped part of the vortex and the zipped part of the
vortex separately. As before we denote the free energy of the
unzipped segment by ${\cal F}_u$ and the free energy of the bound
segment by ${\cal F}_b$. The total free energy is then given by
\begin{equation}
{\cal F}(\tau_m)={\cal F}_u (\tau_m)+{\cal F}_b(L-\tau_m).
\label{fm}
\end{equation}
For large $L$ and $\tau_m$ we can rewrite
\begin{equation}
\mathcal F_b(L-\tau_m)=\mathcal F_b(L)-\mathcal F_b(\tau_m).
\label{fb}
\end{equation}
For a one dimensional defect, like a columnar pin, $\mathcal F_b(L)$
is a constant equal to the free energy of a flux line completely
localized on the pin. For higher dimensional defects like a twin
plane, there is a subtlety. To see this, consider a zero temperature
case first. Then the partition function is determined by the optimal
trajectory of a flux line (in the presence of point disorder)
starting at $\tau=0$ and ending at $\tau=L-\tau_m$. However, this
trajectory might not be a part of the optimal trajectory going all
the way from $\tau=0$ to $\tau=L$. In this case clearly the relation
(\ref{fb}) does not hold. Since this relation does not hold at zero
temperature, it should not hold at finite temperatures either.
However, in Ref.~[\onlinecite{fisher}] it was argued that such
situations (where the optimal trajectories do not coincide) are very
rare and can be neglected \cite{footnote}. Thus ignoring the
constant term $\mathcal F_b(L)$ we can rewrite Eq.~(\ref{fm}) as
\begin{equation}
{\cal F}(\tau_m)={\cal F}_u(\tau_m)-\mathcal F_b(\tau_m).
\end{equation}
We can identify three contributions to the free energy above. The
first is due to the average free energy difference between a vortex
on the defect and in the bulk of the sample. Close to the
transition, similarly to the analysis in the clean case, it is
linear in $\tau_m$ and behaves as $a(f_c-f)\tau_m$, where $a\approx
f_c/\gamma$ is a positive constant. The second contribution $\delta
{\cal F}_{b}$, comes from the free-energy fluctuations arising from
that part of the point disorder which is localized on or near the
defect. For a columnar defect of dimensionality $d'=1$ this
contribution is due to the sum of the fluctuations in the free
energy about the mean. The central limit theorem implies that it
behaves as $\tau_m^{1/2}$ at large $\tau_m$. For $d'>1$ this result
is modified and one can use known results for the free energy of a
direct path in a random media (see Eq.~(\ref{eq:fefluct})), which
leads to $\delta {\cal F}_{b}\propto \tau_m^{\omega(d')}$, where
$d'$ is the dimensionality of the defect \cite{Karreview}. Finally,
there is a contribution to the free energy fluctuations from the
interaction of the unzipped part of the vortex with the bulk point
disorder, $\delta {\cal F}_{u}$. This contribution behaves similarly
to $\delta \mathcal F_b$ with a different bulk exponent: $\delta
{\cal F}_{u}\propto \tau_m^{\omega(d)}$, where $d>d'$ is the
dimensionality of the sample. Collecting all three terms gives:
\begin{equation}
{\cal F}(\tau_m)=a(f_c-f)\tau_m +\delta\mathcal F_{b}(\tau_m)
+\delta\mathcal F_u(\tau_m)\;.
\label{f}
\end{equation}
As discussed above, $\omega(d)$ has been studied extensively in the
past and it is well known that $\omega(d')>\omega(d)$ for any
$d'<d$~\cite{Karreview}. Therefore, disorder on or close to the
defect controls the unbinding transition for {\it any dimension}
when $\tau_m$ is large and the problem is equivalent to unzipping
from a sample with point disorder localized on the defect. In
particular, this result implies that unzipping from a columnar
defect with point disorder in the bulk is in the same universality
class as unzipping of a DNA molecule with a random sequence of base
pairing energies. In fact, point disorder is likely to be
concentrated within real twin planes and near columnar damage tracks
created by heavy ion irradiation and near screw dislocations,
strengthening even more the conclusions of this simple argument.
The disorder averaged partition function is dominated by the minimum
of the free energy and thus by configurations with $\delta \mathcal
F_b(\tau_m)<0$. Using ${\cal F}(\tau_m)\approx a(f_c-f)\tau_m -b\,
\tau_m^{\omega(d')}$, where $b$ is a positive constant, the partition
function
\begin{equation}
Z=\int_0^L d\tau_m e^{-{\cal F}(\tau_m)}
\end{equation}
can be evaluated using a saddle-point approximation. We then find the
value of $\tau_m$ at the saddle point satisfies $a(f_c-f) =
\omega(d') b \tau_m^{\omega(d')-1}$. Therefore
\begin{equation}
\overline{\langle \tau_m \rangle} \sim \frac{1}{(f_c-f)^{\nu}}, \;\;
\nu=[1-\omega(d')]^{-1} \;,
\label{nu}
\end{equation}
with
\begin{eqnarray}
\nu&=&2 \;\;\;\;\;\;\;\, {\rm for} \;\;\; d'=1 \nonumber \\
\nu&=&3/2 \;\;\;\; {\rm for} \;\;\; d'=2\;.
\label{valuenu}
\end{eqnarray}
The result for the columnar defect ($d'=1$) agrees with known results from
DNA unzipping~\cite{Lubensky}.
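The saddle-point estimate leading to Eq.~(\ref{nu}) can be reproduced numerically by minimizing ${\cal F}(\tau_m)\approx a(f_c-f)\tau_m - b\,\tau_m^{\omega}$ for a sequence of reduced forces and fitting the divergence of the minimizer; the constants $a$, $b$ below are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 1.0, 1.0                   # positive constants (arbitrary illustration values)

def tau_star(eps, omega):
    """Location of the minimum of F(tau) = a*eps*tau - b*tau**omega, 0 < omega < 1."""
    res = minimize_scalar(lambda t: a * eps * t - b * t**omega,
                          bounds=(1e-6, 1e12), method='bounded')
    return res.x

nus = {}
for omega in (0.5, 1.0 / 3.0):    # d' = 1 and d' = 2 values of omega(d')
    eps = np.array([1e-3, 1e-4])
    ts = np.array([tau_star(e, omega) for e in eps])
    nus[omega] = -np.log(ts[1] / ts[0]) / np.log(eps[1] / eps[0])

print({round(k, 2): round(v, 2) for k, v in nus.items()})  # nu = 1/(1-omega): 2 and 3/2
```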
To check the scaling argument we have performed numerical
simulations in $d=1+1$ dimensions and have been able to solve analytically the
closely related problem of vortex unzipping from a wall in $d=1+1$ dimensions. We
have also studied the simplified problems of unzipping from a $d'=1$
and a $d'=2$ dimensional defect with disorder localized on the defect.
While the $d'=1$ problem was solved previously~\cite{oper,Lubensky},
below (in Sec.~\ref{replica}) we describe a new approach based on the replica method. Using
some approximations, this approach can be generalized to a $d'=2$
dimensional defect. In the next sections we describe all these
results, which support the simple scaling argument presented above.
\subsubsection{Finite size scaling and determination of the flux
line tension.}
Another consequence of the result (\ref{nu}) is the possibility of
performing a finite size scaling analysis~\cite{privman}. In
particular, near the unzipping transition the unzipping length
$\overline{\langle \tau_m\rangle}$ should be a function of the
reduced force $\epsilon \propto (f_c-f)$ and the system size $L$. If
the scaling ansatz (\ref{nu}) is correct then we must have
\begin{equation}
\overline{\langle \tau_m\rangle}/L= g(\epsilon L^{1/\nu}),
\label{scaling1}
\end{equation}
where $g(x)$ is some scaling function. Quite generally we expect
that when $x\gg 1$ finite size effects are unimportant and $g(x)
\sim 1/x^\nu$ so that we recover the scaling (\ref{nu}) and at $x\ll
1$ we have $g(x)\to g_0$, where $g_0$ is a constant. Note that the
constant $g_0$ does not depend on the system size. This constant is
a universal number of the order of one, which depends on the
dimensionality of the defect. As we find below for the unzipping
from a columnar pin $g_0=0.5$ and for the unzipping from a twin
plane $g_0\approx 0.7$. Relation (\ref{scaling1}) can be used to
extract the line tension $\gamma$ through:
\begin{equation}
\gamma=f_c g_0 {L\over \langle x_m\rangle}.
\label{gamma}
\end{equation}
Here $f_c$ is the critical force and $\langle x_m\rangle$ is the
displacement of the MFM tip at the transition. To derive
Eq.~(\ref{gamma}) we used the fact that for the unbound
segment $\langle x_m\rangle=\overline{\langle\tau_m\rangle}
f/\gamma$.
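As a consistency check of Eq.~(\ref{gamma}), with made-up numbers used purely for illustration: if the tip displacement at the transition is generated from an assumed line tension via $\langle x_m\rangle=\overline{\langle\tau_m\rangle} f_c/\gamma$ with $\overline{\langle\tau_m\rangle}=g_0 L$, then Eq.~(\ref{gamma}) recovers that tension:

```python
# hypothetical illustration values (not measured data)
f_c, g0, L = 2.0, 0.5, 100.0     # critical force, universal constant, sample length
gamma_true = 4.0                 # assumed line tension

x_m = g0 * L * f_c / gamma_true  # tip displacement at the transition
gamma_est = f_c * g0 * L / x_m   # Eq. (gamma)
print(gamma_est)                 # recovers gamma_true
```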
We emphasize that the unzipping transition is first order both in
the clean and disordered cases. Indeed, the unzipping occurs only at
the boundary and does not affect the total free energy of the flux line
in the thermodynamic limit. Nevertheless, similar to wetting
phenomena near first order transitions, this transition possesses
scaling properties characteristic of second order transitions like
diverging correlation lengths, finite size scaling etc.
\subsubsection{Phase diagram}
\label{scalingphasediagram}
Next, we use scaling arguments to consider the behavior of the
critical force as the strength of the disorder is varied. In
particular we focus on the existence of a critical strength of the
disorder beyond which the flux line spontaneously unzips without any
external force. The disorder induced unbinding transition of the
vortex from the defect in the absence of the force has been
considered previously~\cite{Kar85,Zap91,Tang93,Bal93,Kol92,Bal94}.
Below we extend a scaling argument presented first by Hwa and
Nattermann in Ref.~[\onlinecite{HwaNatter}] for a columnar pin in arbitrary
dimensions to include also planar defects.
Assume that the vortex is localized within a distance $l_\perp$ from
a columnar pin or a twin plane. Then, it consists of uncorrelated
segments of length $l_\parallel$ related to $l_\perp$ via
\begin{equation}
l_\parallel \propto l_\perp^{1/\zeta} \;,
\end{equation}
where $\zeta$ is the wandering exponent defined above. Each of these
segments has a free energy excess of order $l_\parallel^\omega$
higher than the energy of the delocalized vortex. The free energy cost per
length of localization therefore scales as $l_\parallel^{\omega-1}
\sim l_\perp^{(\omega-1)/\zeta}$. Clearly, a strong enough pinning
potential gives rise to a constant energy gain per unit length,
which suppresses the random energy cost of localization (note that
$\omega<1$ in any dimension).
So far we have established that a localized phase can exist. Now let us
consider perturbative effects of a weak attractive potential and ask
whether the vortex immediately becomes bound to the defect. The free
energy gained in the presence of the defect, $\delta F$, can be
inferred by perturbations in the strength of the defect pinning energy $V$.
Assume that one end of the vortex is held on the defect. Then the
energy gain due to the attractive potential of the defect is
associated with the return probability of the flux line back to
the defect. Since the root-mean-square transverse displacement behaves as
$l_\parallel^\zeta$, the number of returns to the defect over a length
$l_\parallel$ scales as $l_\parallel^{1-(d-d^\prime)\zeta}$. The free
energy gained by hitting the defect, $\delta F$, therefore scales as
$l_\parallel^{1-(d-d^\prime)\zeta}$.
To determine if the pin is relevant one has to compare this energy
to the intrinsic variations in the free energy, $\Delta F$, which
scale as $l_\parallel^{\omega(d)}$. This yields
\begin{equation}
g= \frac{\delta F}{\Delta F} \propto l_\parallel^{\varepsilon}
\end{equation}
with
\begin{equation}
\varepsilon=1-\zeta(d-d^\prime)-\omega=2-(d+2-d^\prime)\zeta.
\end{equation}
When $\varepsilon<0$ the defect potential is irrelevant, i.e., the
system gains more energy by minimizing returns to the defect, while
if $\varepsilon>0$ it is relevant, i.e., long excursions are
energetically costly.
As mentioned above in $d=3$ numerical simulations indicate that
$\zeta \approx 0.6$ which gives for the planar defect ($d^\prime=2$)
$\varepsilon \approx 1/8$ and for the columnar pin ($d^\prime=1$)
$\varepsilon \approx -1/2$. Therefore, a weak twin plane is always relevant and
the vortex is always bound to it. However, a weak columnar pin is
irrelevant and then one expects an unbinding transition. In $d=2$,
where there can be no twin plane, $\zeta=2/3$ and the columnar
defect is found to be marginal. As argued in Ref.~[\onlinecite{HwaNatter}]
it is in fact marginally {\it relevant}.
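The sign analysis above is summarized by the short sketch below; the value $\zeta=5/8$ is an assumption within the quoted numerical range for $d=3$, chosen because it reproduces the $1/8$ and $-1/2$ estimates exactly:

```python
def relevance_exponent(d, d_prime, zeta):
    """epsilon = 2 - (d + 2 - d') * zeta from the scaling argument."""
    return 2.0 - (d + 2 - d_prime) * zeta

print(relevance_exponent(3, 2, 0.625))    # twin plane in d=3: 0.125 > 0, relevant
print(relevance_exponent(3, 1, 0.625))    # columnar pin in d=3: -0.5 < 0, irrelevant
print(abs(relevance_exponent(2, 1, 2.0 / 3.0)) < 1e-12)   # columnar pin in d=2: marginal
```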
To summarize this discussion: for columnar defects in three dimensional samples we
expect there is a critical strength of the bulk disorder beyond which the
flux line spontaneously unzips even at zero force. In contrast, for a planar defect in
three dimensions and for columnar defects in two dimensional (planar) superconductors we expect
that for any strength of the disorder there is a finite non-zero
value of the force needed to unzip the vortex.
Next, we will check the scaling (\ref{nu}) and the anticipated
localization / delocalization behavior for a number of different
situations using both analytical methods based on the replica trick
and numerical simulations.
\subsection{Unzipping from a disordered columnar pin without excursions.}
\label{replica}
We start our quantitative analysis from the simplest situation,
where one can get exact analytical results. Namely, we consider
unzipping from a 1D pin with disorder localized only on the pin.
Additionally we neglect all excursions of the vortex line from the
pin except for the unzipped region. This problem then becomes
identical to DNA unzipping. In Ref.~[\onlinecite{Lubensky}] the
authors analyzed this problem using a Fokker-Planck approach and
indeed derived $\nu=2$ near the unzipping transition. Here we show
how the same problem can be solved using the replica trick. The
solution was sketched in Ref.~[\onlinecite{kp}]; below we review the
derivation for completeness and provide additional details.
Ignoring excursions of the bound part of the flux line into the bulk
gives the free energy a particularly simple form. We again write it
as a sum over the contribution from the bound and unbound segments.
The bound segment contribution is given by ${\cal
F}_b(\tau_m)=V_0(L-\tau_m)+\int_{\tau_m}^L d \tau_m' U(\tau_m')$,
where $V_0<0$ is the mean value of the attractive potential, $L$ is
the length of the columnar defect which is assumed to be very large,
and $U(\tau_m)$ is a random Gaussian uncorrelated potential with
zero mean satisfying
$\overline{U(\tau_{m_1})U(\tau_{m_2})}=\Delta\delta(\tau_{m_1}-\tau_{m_2})$.
The contribution from the unzipped part takes the same form as in
the clean case (see Eq. (\ref{free_unzip})). Collecting the two
terms gives:
\begin{equation}
\mathcal F(\tau_m)=\epsilon \tau_m+\int_{\tau_m}^L d \tau_m'
U(\tau_m').
\label{fz}
\end{equation}
As before we work in units where $k_B T=1$. In the equation
above the deviation from the unzipping transition is measured by
$\epsilon=(f_c^2-f^2)/2\gamma$, where $f$ is the force applied to
the end of the flux line and $f_c=\sqrt{2\gamma |V_0|}$ is the
critical force. In Eq.~(\ref{fz}) we dropped an unimportant constant
additive term $V_0 L$.
The statistical properties of the unzipping transition can be
obtained by considering $n$ replicas of the partition function $Z(\tau)=\exp(-\mathcal
F(\tau))$~\cite{edwards anderson}:
\begin{equation}
\overline{Z^n}=\int_0^L d\tau_1\ldots\int_0^L
d\tau_n\,\overline{\exp\left(-\sum_{\alpha=1}^n \mathcal
F(\tau_\alpha)\right)},
\label{Z_n}
\end{equation}
where the overbar denotes averaging over point disorder. The
averaging procedure can be easily done for a positive integer $n$.
We eventually wish to take the limit $n \to 0$. First we order the
coordinates $\tau_j$, where the $j^{th}$ replica unbinds from the
pin according to: $0\leq \tau_1\leq \tau_{2}\leq\dots\leq \tau_n$.
Then for $\tau\in[0,\tau_1)$ there are no replicas bound to the
columnar pin, for $\tau \in[\tau_1,\tau_2)$ there is one replica on
the pin until finally for $L \geq \tau\geq \tau_n$ all $n$ replicas
are bound to the pin. Using this observation and explicitly
averaging over the point disorder in Eq.~(\ref{Z_n}) we arrive at:
\begin{equation}
\overline{Z^n}=n!\int\limits_0^L d\tau_1\ldots\int\limits_{\tau_{n-1}}^L
\!\!d\tau_n\exp\!\left[\sum\limits_{j=1}^n\left(-\epsilon \tau_j+
{\Delta\over 2}\, j^2(\tau_{j+1}-\tau_{j})\right)\right],
\label{tuam2}
\end{equation}
where we use the convention $\tau_{n+1}=L$. The integral above is
straightforward to evaluate in the $L \to \infty$ limit so that
\begin{eqnarray}
&&\overline{Z^n}=\mathrm e^{n^2L\Delta/2}{1\over \epsilon_n^n}\prod_{j=1}^n
{1\over 1-\kappa_n j} \nonumber
\\
&&=\mathrm
e^{n^2L\Delta/2}\left({2\over\Delta}\right)^n
{\Gamma(1+1/\kappa_n-n)\over\Gamma(1+1/\kappa_n)}
\;, \phantom{XXX} \label{eq:partunzip}
\end{eqnarray}
where $\epsilon_n=\epsilon+\Delta n$ and
$\kappa_n=\Delta/2\epsilon_n$. The exponential prefactor is an
unimportant overall contribution of the whole columnar pin while the
rest of the expression is the ($L$ independent) contribution from
the unzipped region. Interestingly the restricted partition
functions for the unbinding problem from a hard wall (with no
external force) and for the unzipping from a one-dimensional pin are identical
and thus the two problems are equivalent (see
Ref.~[\onlinecite{kp}] for more details).
The disorder-averaged free energy is given by the limit
$\overline{\mathcal F}=-\lim_{n \to 0}
(\overline{Z^n}-1)/n$~[\onlinecite{edwards anderson}]. With the help
of Eq.~(\ref{eq:partunzip}) one obtains
\begin{equation}
\overline{\mathcal F}=\ln (\epsilon \kappa) + \Psi(1/\kappa),
\label{free_en}
\end{equation}
where $\Psi(x)$ is the digamma function and
$\kappa=\Delta/2\epsilon$. The unzipping transition occurs at
$\epsilon=0$ or equivalently at $\kappa \to \infty$. The expression
(\ref{free_en}) is identical to the one found in
Ref.~[\onlinecite{oper}] using a Fokker-Planck equation approach,
supporting the validity of the analytic continuation in $n$ for this
particular application of the replica calculation.
It is easy to see that this free energy yields
\begin{equation}
\overline{\langle \tau_m\rangle}={\partial \overline{\mathcal F}\over
\partial\epsilon}={1 \over \kappa\epsilon}\Psi^{(1)}(1/\kappa),
\label{zav}
\end{equation}
where $\Psi^{(n)}(x)$ stands for the $n$-th derivative of the
digamma function. The expression above predicts a crossover from
$\overline{\langle \tau_m\rangle}\approx 1/\epsilon$ for $\kappa\ll
1$ (far from the transition) to $\overline{\langle
\tau_m\rangle}\approx\kappa/\epsilon=\Delta/\epsilon^2$ for
$\kappa\gg 1$ (close to the transition) similarly to the unzipping
from the wall problem analyzed above. Also, it is easy to check that
\begin{equation}
w=\overline{\langle \tau_m^2 \rangle - \langle \tau_m
\rangle^2}={\partial^2 \overline{\mathcal F}\over \partial\epsilon^2}=-{1 \over
(\kappa\epsilon)^2}\Psi^{(2)}(1/\kappa). \label{fav}
\end{equation}
Here there is a crossover from $w \approx 1/\epsilon^2$ for $\kappa
\ll 1$ to $w \approx 2 \kappa/\epsilon^2=\Delta/\epsilon^3$ for
$\kappa\gg 1$. As has been noted in the context of DNA unzipping
\cite{Lubensky} $\sqrt{w}/\overline{\langle \tau_m\rangle}$ changes
from being of order unity for the weakly disordered $\kappa \ll 1$ case to
$\sim \epsilon^{1/2}$ for $\kappa \gg 1$. Thus for $\kappa \gg 1$,
close to the unzipping transition, thermal fluctuations become
negligible and one can work in the zero temperature limit.
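The two limits quoted above follow from the asymptotics of the trigamma function, $\Psi^{(1)}(x)\simeq 1/x$ for $x\gg 1$ and $\Psi^{(1)}(x)\simeq 1/x^2$ for $x\ll 1$, and can be checked numerically (parameter values arbitrary):

```python
from scipy.special import polygamma

def mean_tau(eps, kappa):
    """Eq. (zav): <tau_m> = Psi^(1)(1/kappa) / (kappa * eps)."""
    return float(polygamma(1, 1.0 / kappa)) / (kappa * eps)

eps = 0.01
weak = mean_tau(eps, 1e-3)    # kappa << 1: expect <tau_m> ~ 1/eps
strong = mean_tau(eps, 1e3)   # kappa >> 1: expect <tau_m> ~ kappa/eps
print(f"{weak * eps:.2f}  {strong * eps / 1e3:.2f}")   # both ratios close to 1
```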
The simplicity of the problem also allows finding the higher moments
of the distribution. Here we evaluate the second moment, which gives
the width of the distribution of $\overline{\langle \tau_m\rangle}$
due to different disorder realizations. Note that since the order of
averaging over thermal fluctuations and disorder is important this
quantity cannot be extracted directly from Eq.~(\ref{fav}). To
proceed we consider the generating function, ${\cal W}_n(\epsilon_j)$ defined by
\begin{equation}
{\cal W}_n(\epsilon_j)=\int\limits_{0}^L
d\tau_1\ldots\int\limits_{\tau_{n-1}}^L d\tau_n\,\mathrm
e^{\sum\limits_{j=1}^n\left[-\epsilon_j\tau_j+{\Delta\over 2}\,
j^2(\tau_{j+1}-\tau_j)\right]}\!\!. \nonumber \label{zm1}
\end{equation}
The second (and similarly the higher) moments can be found by
differentiating ${\cal W}_n$ with respect to $\epsilon_j$:
\begin{equation}
\overline{\langle \tau_m^2\rangle}=\lim_{n\to 0} \left. {1\over
{\cal W}_n(\epsilon_j)}\,{1\over n}\sum_{j=1}^n {\partial^2 {\cal
W}_n(\epsilon_j)\over\partial
\epsilon_j^2}\right|_{\epsilon_j=\epsilon}. \label{zm3}
\end{equation}
Upon evaluating the integral, we find
\begin{equation}
{\cal W}_n(\epsilon_j)=\prod_{j=1}^n {1\over
\sum_{k=1}^j\epsilon_k\,-\,\Delta j^2/2} \label{zm2}
\end{equation}
and correspondingly
\begin{equation}
\overline {\langle \tau_m^2\rangle}={1\over \epsilon^2}\lim_{n\to
0}{1\over n}\sum_{j=1}^n {2\over 1-\kappa j}\sum_{k=j}^n {1\over k
(1-\kappa k)}.
\end{equation}
This double sum can be calculated using a trick similar to the one
described in Ref.~[\onlinecite{Kardar}]:
\begin{eqnarray}
\overline{\langle \tau_m^2\rangle}&=&{2\kappa^2\over \epsilon
^2}\int\!\!\!\!\!\!\int\limits_{\!\!\!\!x>y>0}\!\!\!\! dx dy
{1\over \mathrm e^{\kappa x}-1}{y\,\mathrm e^{-y}\over \mathrm
e^{\kappa y }-1}\left[ \mathrm e^{\kappa y}+\mathrm e^{2y}\mathrm
e^{\kappa
x-x}\right]\nonumber\\
&-&{4\over \kappa
\epsilon^2}\Psi^{(1)}(1/\kappa)\left(C+\Psi(1/\kappa)\right),
\label{z2}
\end{eqnarray}
where $C\approx 0.577$ is Euler's constant. In the limit of weak
disorder or high temperature $\kappa\ll 1$, not surprisingly, we get
$\overline{\langle \tau_m^2\rangle }\approx 2/ \epsilon^2$, which
agrees with the Poissonian statistics of $\tau_m$ with an average
given by $\overline{\langle \tau_m \rangle}=1/\epsilon$. In the
opposite limit $\kappa\gg 1$ one finds $\overline{\langle
\tau_m^2\rangle }=4\kappa^2/ \epsilon^2$. Note that
$\overline{\langle \tau_m\rangle}=\kappa/\epsilon$, thus the
relative width of the distribution ($\delta \tau_m/\overline{\langle
\tau_m\rangle}$), defined as the ratio of the standard deviation of the
unzipping length $\tau_m$ to its mean, is larger by a factor of
$\sqrt{3}$ than that in the high temperature regime. The
distribution thus becomes superpoissonian at large $\kappa$. In
fact, in the limit $\kappa\to\infty$ one can derive the full
distribution function $P_{\kappa\to\infty}(\tau_m)$ using extreme
value statistics~\cite{Lubensky, ledoussal}:
\begin{equation}
{\cal P}_{\kappa\to\infty}(\tau_m)\approx {\epsilon/ \kappa}\,
G(\tau_m\,\epsilon/\kappa)
\end{equation}
with
\begin{equation}
G(x)={1\over\sqrt{\pi x}}\,\mathrm e^{-x/4}-{1\over 2}{\rm
erfc}(\sqrt{x}/2),
\end{equation}
where ${\rm erfc}(x)$ is the complementary error function. It is
easy to check that this distribution indeed reproduces correct
expressions for the mean and the variance. We emphasize that while
the thermal fluctuations of the unzipping length become negligible
near the transition, the fluctuations due to different realizations
of point disorder are enhanced and lead to a wider-than-Poissonian
distribution of $\tau_m$.
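The normalization and first two moments of the limiting distribution can be obtained by numerical integration; consistent with the results above, one expects $\int G=1$, $\int x G=1$ (mean $\kappa/\epsilon$ for $\tau_m$), and $\int x^2 G=4$, giving the relative width $\sqrt{3}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def G(x):
    """Limiting (kappa -> infinity) distribution of x = tau_m * eps / kappa."""
    return np.exp(-x / 4.0) / np.sqrt(np.pi * x) - 0.5 * erfc(np.sqrt(x) / 2.0)

def moment(k):
    # split at x = 1 to keep the integrable x^{-1/2} endpoint singularity isolated
    f = lambda x: x**k * G(x)
    return quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]

norm, mean, second = moment(0), moment(1), moment(2)
width = np.sqrt(second - mean**2) / mean    # relative width -> sqrt(3)
print(round(norm, 4), round(mean, 4), round(second, 4))   # 1.0 1.0 4.0
```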
To check these results and uncover subtleties that might
arise in experiments, we performed direct numerical simulations of
the partition function corresponding to the free energy (\ref{fz}). For this
purpose we considered a discrete version of the problem where the
partition function is
\begin{equation}
Z=\sum_l \mathrm e^{-\epsilon m_l+\sum_{l^\prime=1}^l U(m_{l^\prime})}.
\end{equation}
Here $U(m_l)$ is the random potential uniformly distributed in the
interval $[-U_0,U_0]$ so that the disorder variance is
$\Delta=\overline{U^2(m_l)}=U_0^2/3$. For the simulations we choose
$\epsilon=\ln(1.2)-0.18\approx 0.00232$ and $U_0=0.3$, which gives
$\Delta=0.03$, $\kappa\approx 6.46$ and according to both
Eq.~(\ref{zav}) and numerical simulations $\overline{\langle
\tau_m\rangle}\approx 2860$. Then we computed $\delta
\tau_m/\overline{\langle \tau_m\rangle}$ using both Eq.~(\ref{z2})
and performing numerical simulations. For the chosen parameters the
equation~(\ref{z2}) gives $\delta \tau_m/\overline{\langle
\tau_m\rangle}\approx 1.68$, while the numerical simulations yield
$\delta \tau_m/\overline{\langle \tau_m\rangle}\approx 1.67$.
Clearly the results are very close to each other and the small
discrepancy can be attributed to the discretization error. In
Fig.~\ref{fig_var} we plot the dependence of
$\delta\tau_m/\overline{\langle \tau_m\rangle}$ on the system size.
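A minimal sketch of the simulation just described (the seed and discretization are arbitrary; the clean $U_0=0$ limit, where $\overline{\langle\tau_m\rangle}=1/(\mathrm e^{\epsilon}-1)$, serves as a sanity check):

```python
import numpy as np

def mean_tau_m(eps, U0, L, rng):
    """Thermal average <tau_m> for one realization of the discrete model."""
    U = rng.uniform(-U0, U0, size=L)
    m = np.arange(L + 1)
    # F(m) = eps*m - sum_{l' <= m} U(l'), up to an m-independent constant
    F = eps * m - np.concatenate(([0.0], np.cumsum(U)))
    w = np.exp(-(F - F.min()))       # shift by the minimum for numerical stability
    return float((m * w).sum() / w.sum())

rng = np.random.default_rng(7)
clean = mean_tau_m(0.01, 0.0, 5000, rng)       # geometric sum: 1/(e^0.01 - 1) ~ 99.5
tau = mean_tau_m(np.log(1.2) - 0.18, 0.3, 200_000, rng)   # parameters from the text
print(round(clean, 1), 0.0 < tau < 200_000)
```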
\begin{figure}[h]
\center
\includegraphics[width=9cm]{distrib1d6.eps}
\caption{Dependence of the relative width of the distribution
$\delta \tau_m/\overline{\langle \tau_m\rangle}$ on the system size.
Symbols correspond to the actual data, the solid line is the guide
to the eye, and the dashed line corresponds to the replica result in
the thermodynamic limit.}\label{fig_var}
\end{figure}
It is obvious from the figure that in the thermodynamic limit
$L\to\infty$ the replica result is in excellent agreement with
numerical simulations. We mention that numerical simulations of
$\delta \tau_m$ show very strong finite size effects. Therefore one
has to go to very large $L\gtrsim 50 \overline{\langle
\tau_m\rangle}$ in order to approach the thermodynamic limit for the
width of the distribution.
Depending on the system, the quantity $\overline{\langle
\tau_m^2\rangle}$ is not always experimentally accessible. For
example, in unzipping experiments it is easier to measure the
thermal average, $\langle \tau_m\rangle$, in each experimental run.
We note that this quantity has sample to sample fluctuations only
due to the presence of disorder. Then the variance of the
distribution will be characterized by $\overline{\langle
\tau_m\rangle^2}$. The difference between the two expectation values
is given by $w$ found in Eq.~(\ref{fav}). Defining $(\delta
\tau_m^{T})^2=\overline{\langle \tau_m\rangle^2}-\overline{\langle
\tau_m\rangle}^{\,2}$ and using Eqs.~(\ref{z2}) and (\ref{fav}) we
find that $\delta \tau_m^T/\overline{\langle \tau_m\rangle}\approx
\sqrt{\kappa/2}$ in the weak disorder limit ($\kappa\ll 1$) and
$\delta \tau_m^T/\overline{\langle \tau_m\rangle}\approx
\sqrt{3}-1/(\sqrt{3}\kappa)$ in the opposite limit $\kappa\gg 1$. We
plot both $\delta \tau_m^T$ and $\delta \tau_m$ versus the disorder
parameter $\kappa$ in Fig.~\ref{fig_dz}.
\begin{figure}[h]
\center
\includegraphics[width=9cm]{width.eps}
\caption{Dependence of the relative width of the
distribution on the disorder parameter $\kappa$. The two curves
correspond to different averaging over temperature and disorder (see
text for details). The horizontal line at $\sqrt{3}$ denotes the
asymptotic value of both $\delta \tau_m$ and $\delta \tau_m^T$ at
$\kappa\to\infty$.}\label{fig_dz}
\end{figure}
The same issue of the importance of the order of thermal and disorder
averaging appears in the calculation of the higher moments of
$\tau_m$; it becomes irrelevant only in the limit $\kappa\to\infty$,
which effectively corresponds to the zero temperature case.
Before concluding this section let us make a few remarks about the
rebinding transition, i.e., the rezipping that occurs with decreasing force. One can consider a similar setup with the lower
end of the flux line fixed at the bottom of the columnar pin and the
top end pulled away from the pin with a force $f$. However, now
we will be interested in $f>f_c$. Then clearly most of the flux line
will be unzipped from the pin except for a portion near the bottom
end. If $f$ is very large, the length of the bound segment $\tilde
\tau_m$ near the sample boundary is small. However as $f$ decreases and approaches $f_c$ from
above, the length of this segment increases and finally diverges at
the transition. This rebinding transition can be described in a
similar spirit to the unbinding. For example instead of the free
energy (\ref{fz}) one has to deal with
\begin{equation}
\mathcal F(\tilde \tau_m)=|\epsilon|
\tilde\tau_m+\int_{0}^{\tilde\tau_m} d \tau_m' U(\tau_m').
\label{fzr}
\end{equation}
As we already noted the free energies (\ref{fz}) and (\ref{fzr}) are
equivalent up to an unimportant constant equal to the total disorder
potential of the pin: $\int_0^L d\tau_m' U(\tau_m')$. We conclude that the unbinding and rebinding
transitions for a single flux line on a disordered columnar pin are
identical. In other words, statistical properties of $\tau_m$ for a
given $f=f_c-\delta f$ are identical to those of $\tilde\tau_m$ for
$f=f_c+\delta f$.
\subsection{Unzipping from a planar defect without excursions.}
\label{replica_2D}
We now generalize the ideas of the previous section to the more
complicated problem of unzipping of a single flux line from a
disordered twin plane. As before we ignore excursions out of the
plane for the bound part of the flux line. Let us consider the
rebinding transition first. That is we assume that $f$ is slightly
greater than $f_c$ and we study the statistics of the
bound part of the flux line. We again assume that the flux line is
pinned at the bottom of the plane ($\tau=0$) and unbinds for $\tau$ larger than some
$\tilde\tau_m$.
The point disorder potential now depends on the two coordinates
$\tau$ and $z$ spanning the twin plane. Using Eq. (\ref{free_unzip}) the
partition function reads:
\begin{eqnarray}
&&Z=\int_0^L d\tilde\tau_m \int Dz(\tau^\prime)
\exp\Biggl[-{f^2\over
2\gamma}\tilde\tau_m-V\tilde\tau_m\nonumber\\
&&~~~-\beta\int_0^{\tilde\tau_m} d\tau^\prime \left({\gamma\over
2}\left({dz\over d\tau^\prime}\right)^2+
\mu(\tau^\prime,z(\tau^\prime))\right)\Biggr],
\end{eqnarray}
where $V<0$ is the mean attractive potential of the twin plane and
we have dropped the unimportant $L$-dependent factors. As before, we
assume a Gaussian random noise with zero mean and
\begin{equation}
\overline{\mu(\tau_1,z_1)\mu(\tau_2,z_2)}=\sigma
\delta(\tau_1-\tau_2)\delta(z_1-z_2).
\end{equation}
We also introduce $\epsilon=-f^2/(2\gamma)-V$. Note that for the
rebinding transition $\epsilon<0$. After replicating the partition
function and averaging over point disorder we find
\begin{eqnarray}
&&\overline{Z^n}=n!\int\limits_{0}^L
d\tilde\tau_n\int\limits_{\tilde\tau_n}^{L}d\tilde\tau_{n-1}\ldots
\int\limits_{\tilde\tau_2}^L d\tilde\tau_1\int
Dz_1(\tau_1^\prime)\dots Dz_n(\tau_n^\prime)\nonumber\\
&&~~~~~~~\exp\left[\sum_{\alpha=1}^n
\epsilon\tilde\tau_\alpha+\!\!\!\int\limits_{\tilde\tau_{\alpha+1}}^{\tilde\tau_{\alpha}}
\!\!\!d\tau_\alpha^\prime \mathcal
L_{\alpha}[z_1(\tau_1^\prime),\ldots,
z_\alpha(\tau_\alpha^\prime)]\right],
\end{eqnarray}
where we define $\tilde\tau_{n+1}\equiv 0$ and $\mathcal L_\alpha$
is the Euclidean Lagrangian corresponding to the Hamiltonian
($\mathcal H_\alpha$) of $\alpha$ interacting particles~\cite{Kardar}:
\begin{equation}
\mathcal H_\alpha=-{\sigma\over 2}\alpha-{1\over
2\gamma}\sum_{\beta=1}^{\alpha} {\partial^2\over\partial
z_\beta^2}-\sigma\sum_{1\leq\beta<\gamma\leq\alpha}
\delta(z_\beta-z_\gamma).
\end{equation}
Close to the rebinding transition, we anticipate
$\tilde\tau_m\to\infty$ and thus the mean separation between the
rebinding times of different replicas $\tilde\tau_\alpha$ and
$\tilde\tau_{\alpha-1}$ diverges. Therefore the contribution to the
partition function coming from integration over $\tilde\tau_\alpha$ will
be dominated by the ground state of configurations with $\alpha$
replicas. In this case we can significantly simplify the partition
function and evaluate it analytically:
\begin{eqnarray}
&&\overline{Z^n}=n!\int\limits_{0}^L
d\tilde\tau_n\int\limits_{\tilde\tau_n}^{L}d\tilde\tau_{n-1}\ldots
\int\limits_{\tilde\tau_2}^L d\tilde\tau_1\\
&&\exp\left[\sum_{\alpha=1}^n \left(\epsilon-\mathcal
E_{\alpha}+\mathcal E_{\alpha-1}\right)
\tilde\tau_{\alpha}\right].\nonumber
\end{eqnarray}
Here $\mathcal E_\alpha$ is the ground state energy of $\mathcal
H_{\alpha}$ with a subtracted term linear in $\alpha$, that just
renormalizes $f_c$. Close to the transition $\epsilon$ is linear
in the difference $f-f_c$. The energy, $\mathcal E_\alpha$, was
computed in Ref.~[\onlinecite{Kardar}]:
\begin{equation}
\mathcal E_\alpha=-{\sigma^2\gamma\over
12}\alpha^3=-\xi\alpha^3.
\end{equation}
Upon integrating over $\tilde\tau_\alpha$ one obtains
\begin{equation}
\overline{Z^n}=n!\prod_{\alpha=1}^n {1\over
|\epsilon|\alpha-\xi\alpha^3}\to\prod_{\alpha=1}^n {1\over
|\epsilon|-\xi\alpha^2}.
\label{Z_n0}
\end{equation}
The product above can be reexpressed in terms of $\Gamma$-functions,
which in turn allows for a straightforward analytic continuation to
$n\to 0$:
\begin{equation}
\overline{Z^n} ={1\over
\xi^n}{1\over1+n{\sqrt{\xi}\over\sqrt{|\epsilon|}}}
{\Gamma\left({\sqrt{|\epsilon|}\over\sqrt{\xi}}-n\right)\over
\Gamma\left({\sqrt{|\epsilon|}\over\sqrt{\xi}}+n\right)}.
\label{Z_n1}
\end{equation}
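Both the simplification of the product in Eq.~(\ref{Z_n0}) and the
$\Gamma$-function representation (\ref{Z_n1}) can be checked
numerically at integer $n$; below is a standard-library sketch (the
values of $|\epsilon|$ and $\xi$ are arbitrary, chosen only so that no
factor vanishes and $\sqrt{|\epsilon|/\xi}-n$ stays positive):

```python
from math import factorial, gamma, prod, sqrt

def zn_raw(eps, xi, n):          # n! prod_a 1/(|eps| a - xi a^3)
    return factorial(n) * prod(1.0 / (eps * a - xi * a**3)
                               for a in range(1, n + 1))

def zn_product(eps, xi, n):      # prod_a 1/(|eps| - xi a^2)
    return prod(1.0 / (eps - xi * a**2) for a in range(1, n + 1))

def zn_gamma(eps, xi, n):        # Gamma-function form, Eq. (Z_n1)
    s = sqrt(eps / xi)           # s = sqrt(|eps|/xi)
    return xi**-n / (1.0 + n / s) * gamma(s - n) / gamma(s + n)

# all three expressions agree at integer n
for n in range(1, 6):
    assert abs(zn_raw(36.0, 1.0, n) / zn_product(36.0, 1.0, n) - 1) < 1e-9
    assert abs(zn_gamma(36.0, 1.0, n) / zn_product(36.0, 1.0, n) - 1) < 1e-9
```

The agreement at all integer $n$ is what licenses the continuation
$n\to 0$ used next, up to the ambiguity discussed below.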
Using this expression we obtain the free energy and the mean
length of the localized segment:
\begin{equation}
\mathcal F=-\lim_{n\to 0}{\overline{Z^n}-1\over n}=\ln
\xi+{\sqrt{\xi}\over\sqrt{|\epsilon|}}+2\Psi\left({\sqrt{|\epsilon|}\over
\sqrt{\xi}}\right),
\label{f_2d}
\end{equation}
\begin{equation}
\overline{\langle \tilde\tau_m \rangle}={\partial \mathcal
F\over\partial |\epsilon|}=-{\sqrt{\xi}\over
2|\epsilon|^{3/2}}+{1\over
\sqrt{|\epsilon|\xi}}\Psi^{(1)}\left({\sqrt{|\epsilon|}\over
\sqrt{\xi}}\right)
\label{tau_2d}
\end{equation}
where as before $\Psi^{(n)}(x)$ stands for the $n$th derivative of
the digamma function. This expression has the asymptotic behaviors:
\begin{eqnarray}
&&\overline{\langle \tilde\tau_m \rangle}\to
{1\over|\epsilon|}\qquad\quad~ \xi\ll
|\epsilon|\nonumber\\
&&\overline{\langle \tilde\tau_m \rangle}\to {\sqrt{\xi}\over
2|\epsilon|^{3/2}} \quad \xi\gg|\epsilon|.
\end{eqnarray}
This scaling confirms the crossover between exponents $\nu=1$ and
$\nu=3/2$ for the rebinding transition to a two-dimensional
disordered plane predicted by the simple scaling argument leading to
Eq.~(\ref{valuenu}).
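The derivative relation between Eqs.~(\ref{f_2d}) and (\ref{tau_2d})
can also be verified numerically. The sketch below approximates the
digamma and trigamma functions by finite differences of
\texttt{lgamma}, an approximation we introduce only to stay within
the Python standard library (a special-function library would supply
these directly); the test point $|\epsilon|=4$, $\xi=1$ is arbitrary:

```python
from math import lgamma, log, sqrt

def digamma(x, h=1e-5):      # finite-difference digamma (our approximation)
    return (lgamma(x + h) - lgamma(x - h)) / (2.0 * h)

def trigamma(x, h=1e-4):     # finite-difference trigamma (our approximation)
    return (lgamma(x + h) - 2.0 * lgamma(x) + lgamma(x - h)) / h**2

def free_energy(eps, xi):    # Eq. (f_2d); eps stands for |epsilon|
    return log(xi) + sqrt(xi / eps) + 2.0 * digamma(sqrt(eps / xi))

def tau_mean(eps, xi):       # Eq. (tau_2d)
    return (-sqrt(xi) / (2.0 * eps**1.5)
            + trigamma(sqrt(eps / xi)) / sqrt(eps * xi))

# central difference of the free energy reproduces Eq. (tau_2d)
eps, xi, d = 4.0, 1.0, 1e-4
numeric = (free_energy(eps + d, xi) - free_energy(eps - d, xi)) / (2.0 * d)
assert abs(numeric - tau_mean(eps, xi)) < 1e-4
```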
In a similar way one can also consider an unzipping transition with
$f \leq f_c$. One finds an expression for the partition function
which is identical to (\ref{Z_n0}) with the substitution
$\xi\to-\xi$. Note, however, that the analytic continuation
(\ref{Z_n1}) of this product results in a complex partition function and
hence a complex free energy. It thus appears that the analytic
continuation of the product (\ref{Z_n0}) to noninteger values of $n$
is not unique. One can always multiply it by any periodic function
of $n$, which is equal to unity when the argument is integer. While
we were able to find some real-valued analytic continuations of
$\overline{Z^n}$ to negative values of $\xi$, these continuations
did not lead to physically sensible results.
Because of the ambiguity of the analytic continuation and some
approximations used to derive Eqs.~(\ref{f_2d}) and (\ref{tau_2d})
we also performed numerical simulations for the vortex unzipping
from a disordered twin plane.
For numerical simulations we are using the lattice version of the
model, where in each step along the $\tau$ direction the vortex can
either move to the left or the right one lattice spacing. Note that
because we neglect excursions the vortex motion occurs strictly
within the plane until the vortex is unbound. Then the restricted
partition function for the bound part of the flux line, $Z(x,\tau)$,
which sums over the weights of all paths leading to $(x,\tau)$
starting at $(0,0)$, satisfies the recursion
relation~\cite{Kardar}
\begin{eqnarray}
&& Z(x,\tau+1)=e^{\mu(x,\tau+1)}\big[J Z(x-1,\tau) +J
Z(x+1,\tau)\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(1-2J) Z(x,\tau)\big].
\label{eqz1}
\end{eqnarray}
We assume that $\mu(x,\tau)$ is uniformly distributed in the
interval $[-U_0,U_0]$ implying as before the variance $\sigma=U_0^2/3$. The
variable $J$ controls the line tension. In the continuum limit $J\ll
1$ and $U_0\ll 1$, Eq.~(\ref{eqz1}) reduces to the
Schr\"odinger equation:
\begin{equation}
{\partial Z\over \partial\tau}=-\mathcal H Z(x,\tau)
\label{Z_tauu}
\end{equation}
with the Hamiltonian given by Eq.~(\ref{H}) with $\gamma=2J$ and
$f=0$ (there is no force acting on the flux line within the
plane). We note that even if the parameters of the discrete model are not
small we still expect that Eq.~(\ref{Z_tauu}) remains valid at long
length and time scales. However, the relation between the
microscopic parameters of the discrete model and the parameters of
the effective coarse-grained Hamiltonian (\ref{H}) is more
complicated.
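The recursion (\ref{eqz1}) can be iterated directly as a transfer
matrix. Below is a minimal sketch (not the authors' code; the
parameter values, chain length, and closed-end boundary conditions
are illustrative assumptions), which also accumulates $-\ln Z$ of the
in-plane segment through a running normalization:

```python
import math
import random

random.seed(0)
J, U0, Lx, Ltau = 0.2, 2.0, 50, 200   # illustrative values

Z = [0.0] * Lx
Z[0] = 1.0                  # the walk starts at x = 0, tau = 0
log_norm = 0.0              # accumulated log of the normalizations

for tau in range(Ltau):
    mu = [random.uniform(-U0, U0) for _ in range(Lx)]
    Znew = []
    for x in range(Lx):
        left = Z[x - 1] if x > 0 else 0.0
        right = Z[x + 1] if x < Lx - 1 else 0.0
        Znew.append(math.exp(mu[x])
                    * (J * left + J * right + (1 - 2 * J) * Z[x]))
    s = sum(Znew)           # rescale each step to avoid overflow
    Z = [z / s for z in Znew]
    log_norm += math.log(s)

free_energy_bound = -log_norm   # -ln Z of the in-plane (bound) segment
```

Rescaling every step keeps the weights in floating-point range while
the free energy is recovered exactly from the accumulated logarithms.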
In our simulations we evaluated numerically the free energy of the
bound part of the vortex line for each realization of point disorder
and used the analytical expression for the free energy of the
unbound part, for which point disorder can be neglected. The latter
is given by Eq.~(\ref{free_unzip}). This free energy is controlled
by a single parameter $f^2/(2\gamma)$. Use of the analytic result
(\ref{free_unzip}) significantly simplifies calculations of
$\overline{\langle \tau_m\rangle}$ and allows us to perform large
scale simulations.
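Given such a numerically obtained bound free energy, the average
$\langle\tau_m\rangle$ for one disorder realization follows by
weighting every unzipping point $\tau$ with the total free energy,
the analytic unbound part entering only through $f^2/(2\gamma)$. A
schematic of this bookkeeping (the array F_bound below is a synthetic
placeholder, not real simulation data):

```python
import math

L = 100                 # illustrative system length
g0 = 0.7                # f^2/(2 gamma): the only parameter of the unbound part

# placeholder for the numerically computed bound free energy F_bound(tau);
# in practice one such array comes from the transfer-matrix recursion
# for each disorder realization
F_bound = [-0.65 * t for t in range(L + 1)]

# total free energy when the line is bound on [0, tau] and free on [tau, L]
F_tot = [F_bound[t] - g0 * (L - t) for t in range(L + 1)]
F_min = min(F_tot)      # subtract the minimum for numerical stability
w = [math.exp(-(F - F_min)) for F in F_tot]
tau_mean = sum(t * w[t] for t in range(L + 1)) / sum(w)
```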
First we verify the scaling (\ref{nu}) with $\nu=3/2$ at the
unzipping transition. To do this we perform a standard finite size
scaling procedure.
\begin{figure}[ht]
\center
\includegraphics[width=9cm]{scaling_2d_6.eps}
\caption{Ratio of the unzipping length
$\overline{\langle\tau_m\rangle}$ to the system size $L$ as a
function of $f^2/2\gamma$ for different system sizes. Here $f$ is
the external force and $\gamma$ is the line tension of the vortex
(see Eqs.~(\ref{free_unzip1}) and (\ref{free_unzip})). According to
the scaling relation (\ref{scaling1}) the crossing point corresponds
to the unzipping transition. In simulations the parameters of the
microscopic model (\ref{eqz1}) are chosen to be $J=0.2$, $U_0=2$.}
\label{fig3}
\end{figure}
In Fig.~\ref{fig3} we show dependence of the ratio
$\overline{\langle\tau_m\rangle}/L$ on the parameter $f^2/(2\gamma)$
for four different sizes. As we expect from the scaling relation
(\ref{scaling1}) the curves intersect at a single point
corresponding to the unzipping transition ($g_0\approx 0.7$). Once
we determine the crossing point corresponding to the critical force
$f_c$ we can verify the scaling relation (\ref{scaling1}) with
$\nu=3/2$.
\begin{figure}[ht]
\center
\includegraphics[width=9cm]{scaling_2d_1.eps}
\caption{Data collapse of $\overline{\langle\tau_m\rangle}/L$ as a
function of $\epsilon L^{1/\nu}$ with the exponent $\nu=3/2$ for two
different system sizes (see Eq.~(\ref{scaling1})). The parameters of
the model are the same as in Fig.~\ref{fig3}. The inset shows the
derivative of $\overline{\langle\tau_m\rangle}$ with respect to
$\epsilon$ for $L=12800$. Clearly the scaling function is asymmetric
with respect to $\epsilon\to -\epsilon$. Thus the unbinding and
rebinding transitions are not equivalent.}
\label{fig:collapse2D}
\end{figure}
In Fig.~\ref{fig:collapse2D} we plot
$\overline{\langle\tau_m\rangle}/L$ versus the scaling parameter
$\epsilon L^{1/\nu}$ (see Eq.~(\ref{scaling1})) with $\nu=3/2$ for
two different system sizes. Clearly the data collapse is nearly
perfect, which confirms the validity of the scaling (\ref{scaling})
with $\nu=3/2$ for the unzipping of a flux line from a twin plane.
The inset shows the derivative of $\overline{\langle\tau_m\rangle}$
with respect to $\epsilon$. Clearly this derivative is asymmetric
with respect to $\epsilon\to -\epsilon$, implying that there is no
symmetry between the unbinding and rebinding transitions. This is
contrary to the unzipping from a columnar pin with no excursions,
where such a symmetry does exist.
Next we turn to verifying the analytic prediction for
$\overline{\langle\tau_m\rangle}$, Eq.~(\ref{tau_2d}). As we argued
above the parameter $\xi$ describing the disorder strength can be
easily extracted from microscopic parameters of the model only in
the continuum limit $U_0\ll 1$, $J\ll 1$. Unfortunately, it is not
possible to do simulations directly in the continuum limit ($J\ll 1$
and $U_0\ll 1$). Indeed as Eq.~(\ref{tau_2d}) suggests in order to
see the scaling exponent $\nu=3/2$ one needs to go to length scales much
larger than $1/\xi$, where $\xi=\sigma^2 J/12=U_0^4 J/36$. If
$J\ll 1$ and especially $U_0\ll 1$ then one has to simulate
extremely large system sizes where $L$ is larger than $10^7$ for
$U_0=0.1$ and $J=0.1$. Therefore we perform simulations in the regime
where $J$ and especially $U_0$ are appreciable. We then
regard $\xi$ as a fitting parameter of the model which should be
equal roughly to $U_0^4 J/36$.
\begin{figure}[ht]
\center
\includegraphics[width=9cm]{scaling_2d_5.eps}
\caption{Dependence of the length $L-\overline{\langle\tau_m\rangle}$
of the part of the flux line bound to the twin plane on $\epsilon$
for the rebinding transition. Different curves correspond to
different system sizes. The solid black line is the best single
parameter fit using Eq.~(\ref{tau_2d}) with $\xi$ being the
fitting parameter.}
\label{fig:replica1}
\end{figure}
In Fig.~\ref{fig:replica1} we show the dependence of the rezipping
length $L-\overline{\langle\tau_m\rangle}$, obtained from numerical
simulations, on the detuning parameter $\epsilon$ for different
system sizes. The solid
black line is the best single-parameter fit to the data using the
analytic expression (\ref{tau_2d}). The fitting parameter $\xi$
found from simulations is $\xi \approx 0.036$, while a continuum
estimate $U_0^4 J/36$ gives $\xi \approx 0.089$, which is very
reasonable given that this estimate is valid only at $U_0\ll 1$. We
also performed similar simulations for $U_0=1.5$ and got a very good
fit with (\ref{tau_2d}) for $\xi=0.018$, while the continuum
estimate gives $\xi \approx 0.028$. We thus see that indeed as
$U_0$ decreases the fitting parameter $\xi$ becomes closer to the
continuum expression.
While we were not able to derive a closed analytic expression for
$\overline{\langle\tau_m\rangle}$ for the unbinding transition, we
performed numerical simulations. As the inset in
Fig.~\ref{fig:collapse2D} suggests the transition is highly
asymmetric. In fact this asymmetry persists in the thermodynamic
limit.
\begin{figure}[ht]
\center
\includegraphics[width=9cm]{scaling_2d_7.eps}
\caption{Comparison of dependences of
$L-\overline{\langle\tau_m\rangle}$ for the rebinding transition and
$\overline{\langle\tau_m\rangle}$ for the unbinding transition on
$|\epsilon|$. We used the parameters of Fig.~\ref{fig3} with
$L=51200$. The finite size effects are negligible on the scale of
the graph. Both curves interpolate between $1/|\epsilon|$ dependence
at $|\epsilon|\gg \xi$ and $C/|\epsilon|^{3/2}$ at $|\epsilon|\ll
\xi$. However, the prefactor $C$ for the unbinding transition is
about three times larger than for the rebinding.}
\label{fig:unbind_rebind}
\end{figure}
In Fig.~\ref{fig:unbind_rebind} we plot
$L-\overline{\langle\tau_m\rangle}$ for the rebinding transition and
$\overline{\langle\tau_m\rangle}$ for the unbinding versus
$|\epsilon|$. Both curves interpolate between $1/|\epsilon|$
dependence at weak disorder ($|\epsilon|\gg \xi$) and
$C/|\epsilon|^{3/2}$ dependence at strong disorder ($|\epsilon|\ll
\xi$). However, the prefactor $C$ in front of $1/|\epsilon|^{3/2}$
is larger for the unzipping transition.
\subsection{Unzipping from a hard wall}
\label{Bethe_ansatz}
As the next step we consider unzipping from an attractive hard wall
in $d=1+1$ dimensions with point disorder in the bulk. Our method
is a straightforward generalization of the Bethe ansatz solution
found by Kardar in the absence of the external force~\cite{Kardar}.
The system is illustrated in Fig. \ref{Bethe}. Here the potential
experienced by the flux line, $V(x)$, has a short ranged attractive
part and an impenetrable core at $x=0$. While the scaling argument
is unchanged in this case, this problem has the merit of being
exactly solvable within the replica approach. Since most details of
the calculation are identical to those presented in
Ref.~[\onlinecite{Kardar}], here we only outline the solution.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{unzipwall.eps}
\caption{\label{Bethe} An illustration of the setup considered in
the Bethe Ansatz calculation. The flux line is restricted to the
half plane and a MFM tip is acting on it at the top of the sample.}
\end{figure}
After replicating the free energy Eq.~(\ref{f0dis}) along with the
contribution from the external field Eq.~(\ref{eq:unzipfe}) and
averaging over the disorder, the replicated sum over all path weights connecting points $(0,0)$ and $(x,\tau)$,
$\overline{Z^n(x,\tau)}$, can be calculated from
\begin{equation}
\partial_\tau \overline{Z^n(x,\tau)} = -{\cal H} \overline{Z^n(x,\tau)} \;,
\end{equation}
with the initial condition $\overline{Z^n(x,0)}=\delta(x)$. The
replicated system describes $n$ {\it attractively interacting
bosons} with a non-Hermitian Hamiltonian ${\cal H}$ given by
\begin{eqnarray}
{\cal H}&=& \sum_{\alpha=1}^n \left[ -\frac{1}{2\gamma}
\partial^2_{x_\alpha}-f
\partial_{x_\alpha}+ V(x_\alpha) \right]\nonumber
\\&-&\sigma \sum_{\alpha < \beta}
\delta(x_\alpha-x_\beta) -\frac{1}{2} \sigma n \;.
\end{eqnarray}
In Ref.~[\onlinecite{Kardar}] the problem was solved for $f=0$ using
the Bethe Ansatz. The boundary conditions were that the ground state
wave function should vanish at large $x$ and decay as
$\exp(-\lambda x)$ for the particle closest to the wall. One then
finds that for the permutation ${\bf P}$ of particles such that
$0<x_{P1}<x_{P2}< \ldots <x_{Pn}$ the wave function for $f=0$ is
\begin{equation}
\Psi_{f=0}\sim\exp \left(-\sum_{\alpha=1}^n \kappa_\alpha x_{P
\alpha}\right) \;.
\end{equation}
Here $\kappa_\alpha = \lambda+2(\alpha-1)\kappa$, with $\kappa=
\sigma\gamma/2$. Taking the zero replica limit it was
found~\cite{Kardar} that for weak disorder ($\sigma\gamma/2 <
\lambda$) the vortex is bound to the wall while for strong
disorder ($\sigma\gamma/2 > \lambda$) it is unbound.
The ground state wave function for the {\it non-zero} value of the force
can be obtained by noting that the non-Hermitian term acts like an
imaginary vector potential. In particular, it can be gauged away when
the vortices are bound to the wall as discussed in Sec.
\ref{sectioncleancase} (see Eqs.~(\ref{eq:gauge1}) and
(\ref{eq:gauge2})). This imaginary gauge transformation gives
\begin{equation}
\Psi_{f}=\Psi_{f=0}\exp\left( \sum_{\alpha=1}^n fx_\alpha \right)
\;,
\end{equation}
which implies that the solution is
\begin{equation}
\Psi_{f}=\exp \left(-\sum_{\alpha=1}^n \tilde{\kappa}_\alpha x_{P
\alpha}\right) \;,
\end{equation}
with $\tilde{\kappa}_\alpha = \lambda+2(\alpha-1)\kappa-f$. The
effect of the force is simply to shift all the $\kappa_\alpha$'s by a
constant. The average localization length (which satisfies near the
transition $\langle x _m\rangle \simeq f_c \langle \tau_m \rangle /
\gamma$) is then given by
\begin{equation}
\langle x_m \rangle={1\over \tilde Z_n n}\int_0^\infty
\prod_{j=1}^n dx_j \left[\sum_{j=1}^n x_j\right]\,\Psi_f(x_j),
\label{eq:Kardarresult}
\end{equation}
where $\tilde Z_n=\int_0^\infty \prod_{j=1}^n dx_j \Psi_f(x_j)$.
Note that the normalization factor $\tilde Z_n$ in the equation
above is formally equivalent to the partition function (\ref{tuam2})
for the unzipping from a columnar pin without excursions if we
identify $\lambda-\kappa-f$ with $\epsilon$ and $\kappa$ with
$\Delta/2$. This equivalence implies that $\langle x_m\rangle$ for
the unzipping from a hard wall has the same statistical properties
as $\overline{\langle\tau_m\rangle}$ for the unbinding from a
columnar pin (for more details see Ref.~[\onlinecite{kp}]). In particular, the
unzipping problem has a crossover from $\langle x_m\rangle \sim
1/(f_c-f)$ for $\lambda-f\gg\kappa$ to $\langle x_m\rangle \sim 1/
(f_c-f)^{3/2}$ in the opposite limit.
This example confirms another prediction of
the simple scaling argument: the critical exponents for the
unbinding transition are determined only by the dimensionality of
the defect even if the disorder is also present in the bulk of the system.
\subsection{Unzipping from a columnar pin with excursions into the bulk.}
\label{sec:numerics}
In this section we consider a setup similar to that of Sec.~\ref{replica},
namely unzipping from a columnar defect in $d=1+1$ dimensions, but
allowing excursions of the flux line to the bulk (see
Fig.~\ref{fig:unzip_1D}).
\begin{figure}[ht]
\centering
\includegraphics[width=9cm]{unzip.eps}
\caption{A setup illustrating unzipping from a
columnar pin in $d=1+1$ dimensions with excursions into the bulk.}
\label{fig:unzip_1D}
\end{figure}
Unfortunately there is no analytic solution available for this
problem. Therefore we present only numerical results. As in
Sec.~\ref{replica_2D} we consider a lattice version of the model
where in each step along the $\tau$ direction the vortex can either
move to the left or the right one lattice spacing. The attractive
potential was placed at $x=0$. The restricted partition function of
this model, $Z(x,\tau)$, which sums over the weights of all paths
leading to $(x,\tau)$ starting at $(0,0)$, satisfies the
recursion relation~\cite{Kardar}:
\begin{eqnarray}
&&Z(x,\tau+1)=\delta_{x,0}(e^{V}-1)Z(0,\tau) \nonumber\\
&&+e^{\mu(x,\tau+1)}\left[J e^f Z(x-1,\tau) +J
e^{-f}Z(x+1,\tau)\right]. \label{eqz}
\end{eqnarray}
Similarly to Eq.~(\ref{eqz1}) we assume that $\mu(x,\tau)$ is
uniformly distributed in the interval $[-U_0,U_0]$ implying the
variance $\sigma=U_0^2/3$. The variable $J$ controls the line
tension, $V$ is the attractive pinning potential, and $f$ is proportional
to the external force. In the continuum limit $J\ll 1$, $f\ll 1$,
and $U_0\ll 1$, equation (\ref{eqz}) reduces to the Schr\"odinger
equation:
\begin{equation}
{\partial Z\over \partial\tau}=-\mathcal H Z(x,\tau)
\label{Z_tau}
\end{equation}
with the Hamiltonian given by Eq.~(\ref{H}) with $\gamma=2J$.
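One step of the recursion (\ref{eqz}) differs from the planar case
only through the pin term and the tilted hopping weights
$e^{\pm f}$. A sketch (the parameters $f$, $U_0$, the chain length,
the placement of the pin mid-chain, and the closed-end boundaries
are our illustrative assumptions, not the simulation settings):

```python
import math
import random

random.seed(1)
J, V, f, U0, Lx, Ltau = 0.1, 0.1, 0.05, 0.5, 41, 200
x0 = Lx // 2                # the column x = 0 of Eq. (eqz) sits at index x0

Z = [0.0] * Lx
Z[x0] = 1.0                 # the flux line starts on the pin
log_norm = 0.0

for tau in range(Ltau):
    mu = [random.uniform(-U0, U0) for _ in range(Lx)]
    Znew = [0.0] * Lx
    for x in range(Lx):
        left = Z[x - 1] if x > 0 else 0.0
        right = Z[x + 1] if x < Lx - 1 else 0.0
        Znew[x] = math.exp(mu[x]) * (J * math.exp(f) * left
                                     + J * math.exp(-f) * right)
    Znew[x0] += (math.exp(V) - 1.0) * Z[x0]   # attractive columnar pin term
    s = sum(Znew)           # rescale each step to avoid underflow
    Z = [z / s for z in Znew]
    log_norm += math.log(s)
```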
For the simulations we have chosen particular values of $J=0.1$ and
$V=0.1$. As before we work in units such that $k_B T=1$. In the
results described below the partition function was evaluated for
each variance of the disorder for several systems of finite width
$w=2L_x$, averaging over the time-like direction (typically $\tau
\simeq 10^6$ ``time'' steps), with the initial condition $Z(0,0)=1$
and $Z(x,0)=0$ for $x \neq 0$.
To analyze the numerics we performed a finite size scaling analysis. In the spirit of Eq.~(\ref{nu}), in the vicinity of the transition we
expect the scaling form (compare Eq.~(\ref{scaling1})):
\begin{equation}
\overline{\langle\tau_m\rangle}=L_x \Phi\left[L_x(f_c-f)^\nu\right],
\label{scaling}
\end{equation}
where $\Phi$ is some scaling function.
Based on the results of previous sections we anticipate a smooth
interpolation between scaling exponents $\nu=1$ and $\nu=2$ with either
increasing $L_x$ or increasing strength of disorder at fixed $L_x$.
To perform the finite size scaling we obtain for each value of $L_x$
a value for the exponent $\nu$ from the best collapse of the
numerical data of two systems sizes $L_x$ and $L_x/2$. In
Fig.~\ref{fig1} we plot $1/\nu$ as a function of the system size
$L_x$. As can be seen the data is consistent with $\nu$ saturating
at $\nu=2$ for large systems. The crossover to $\nu=2$ is much more
rapid if the point disorder is enhanced near the columnar pin (see
the inset in Fig.~\ref{fig1}), as might be expected for damage
tracks created by heavy ion radiation.
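The two-size collapse procedure described above can be sketched as
follows (synthetic data generated from an assumed scaling function
$\Phi$; the sizes, grids, and the true exponent $\nu=2$ are
illustrative choices, not taken from the simulations):

```python
def phi(u):                          # assumed scaling function (illustrative)
    return 1.0 / (1.0 + u)

def synth(Lx, nu=2.0):
    # synthetic tau_m/L_x data obeying the scaling form exactly
    return [(0.01 * i, phi(Lx * (0.01 * i) ** nu)) for i in range(1, 40)]

def mismatch(nu, d1, L1, d2, L2):
    # rescale both curves and compare d2 against a linear
    # interpolation of the rescaled d1 over the overlap region
    c1 = sorted((L1 * e ** nu, y) for e, y in d1)
    total, count = 0.0, 0
    for e, y in d2:
        u = L2 * e ** nu
        for (ua, ya), (ub, yb) in zip(c1, c1[1:]):
            if ua <= u <= ub:
                yi = ya + (yb - ya) * (u - ua) / (ub - ua)
                total += (y - yi) ** 2
                count += 1
                break
    return total / max(count, 1)

d1, d2 = synth(100.0), synth(50.0)
best_nu = min((mismatch(1.0 + 0.05 * k, d1, 100.0, d2, 50.0), 1.0 + 0.05 * k)
              for k in range(41))[1]      # grid search over trial exponents
```

The grid search recovers the exponent used to generate the data,
since only the correct $\nu$ puts both rescaled curves on one line.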
\begin{figure}
\center
\includegraphics[width=8.5cm]{scaling_1dv2.eps}
\caption{Effective exponent $1/\nu$ versus $L_x$ for a fixed
strength of point disorder $\sigma=0.03$. The results are
consistent with the general argument that this exponent should
saturate at $\nu=2$ as $L_x\to\infty$. The inset shows the same
exponent vs $\sigma_c$, the variance of additional point disorder
placed directly on the columnar pin, extracted from two system
sizes $L_x=600$ and $L_x=1200$. It appears that $\nu\to 2$ as
$\sigma_c$ increases.} \label{fig1}
\end{figure}
Next, we test the behavior of the critical force as the
disorder strength is increased. According to our discussion in Sec.
\ref{scalingphasediagram}, we anticipate that in the absence of an
external force the flux line is always bound to the pin in $1+1$
dimensions. This is in contrast with the problem of unzipping from
the wall discussed in the previous section, where there is a
critical strength of the disorder, $\sigma_c$, which leads to an
unbinding transition for $f=0$. Note that the existence of a critical value
of the disorder is a direct consequence (see discussion in Sec.
\ref{scalingphasediagram}) of the excursions of the vortex from the
defect which, as argued above, do not modify the critical behavior
of the unzipping transition. The existence of a critical value of
the disorder is therefore strongly dependent on the dimensionality
of the problem.
In numerical simulations, for each strength of disorder we determine
the critical force by plotting the ratio
$\overline{\langle\tau_m\rangle}/L_x$ for two different sizes $L_x$
and using the scaling relation~(\ref{scaling}). Note that this ratio
does not depend on $L_x$ at $f=f_c$ (see also the discussion in
Sec.~\ref{replica_2D}). We checked that this is indeed the case.
Upon repeating this procedure for different disorder strengths we
obtain the dependence $f_c(U_0)$ which is plotted in
Fig.~\ref{fig8}.
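Locating the crossing point of the two curves can be done, for
example, by linear interpolation of their difference; a sketch with
synthetic straight-line curves (the slopes and $f_c=0.7$ are
illustrative values, not simulation output):

```python
def ratio(f, Lx, fc=0.7):
    # toy tau_m/L_x curves: steeper for larger L_x, crossing at f = fc
    return 0.5 - 0.02 * Lx * (f - fc)

fs = [0.5 + 0.001 * i for i in range(401)]      # scan f in [0.5, 0.9]
diff = [ratio(f, 600) - ratio(f, 1200) for f in fs]

fc_est = None
for (f1, d1), (f2, d2) in zip(zip(fs, diff), zip(fs[1:], diff[1:])):
    if d1 == 0.0 or d1 * d2 < 0.0:    # a sign change brackets the crossing
        fc_est = f1 - d1 * (f2 - f1) / (d2 - d1)
        break
```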
\begin{figure}[ht]
\hspace{0.5cm}
\includegraphics[bb=1cm 1cm 20cm 25cm, scale=0.38, angle=90]{crit_f.eps}
\caption{Critical force for unzipping from a columnar defect in
$1+1$ dimensions as a function of the disorder
strength.}\label{fig8}
\end{figure}
The graph suggests that there is no unbinding transition at zero
tilt at any strength of disorder, consistent with the scaling
argument presented in Sec.~\ref{scalingphasediagram} and those of
Ref.~[\onlinecite{HwaNatter}]. We point out that the strongest disorder
shown in the graph $U_0=0.9$ required samples quite extended in the
time-like direction, $L_\tau\approx 10^8$.
\section{Unzipping a Luttinger liquid}
\label{sec:Lutunzip}
We now turn to consider the effect of interactions on the unzipping
of single vortices. To do this we study a system where the vortices
are preferentially bound to a thin two-dimensional slab which is
embedded in a three-dimensional sample so that the density of
vortices in the slab is much higher than in the bulk.
Experimentally, this setup could be achieved using, for example, a
twin plane in YBCO or by inserting a thin plane with a reduced lower
critical field $H_{c1}$ (with, for example, molecular beam epitaxy)
into a bulk superconductor. The scenario we analyze is one where a
MFM is used to pull a single vortex out of the two-dimensional slab
(see Fig. \ref{fig9}). The physics of the vortices confined to two
dimensions is well understood and is analogous to a spinless Luttinger
liquid of bosons (see, e.g. Ref.~[\onlinecite{AHNS}]).
As we show below the dependence of the displacement of the vortex
from the two-dimensional slab on the force exerted by the MFM
depends on the physics of the two-dimensional vortex liquid which
resides in the slab. Specifically, the critical properties of the
unbinding transition depend on the ``Luttinger liquid parameter''
which controls the large-distance behavior of the vortex liquid. The
experimental setup can thus be used to probe the two-dimensional
physics of the vortices in the slab.
\begin{figure}[ht]
\includegraphics[scale=0.6]{UnzipLuttinger.eps}
\caption{Possible experimental setup for studying unzipping from a
Luttinger liquid. A MFM is used to pull a single vortex out of a
plane where the vortices are confined. The measured quantity is the
distance of the pulled vortex from the confining plane as a function
of the force $f$. }\label{fig9}
\end{figure}
\subsection{Two-dimensional vortex liquids}
The physics of vortices in two dimensions is very well understood.
The vortices form a one-dimensional array located at positions
$x_j(\tau)$. The density profile of the vortices is then given by
\begin{equation}
n(x,\tau)=\sum_j \delta \left[ x-x_j(\tau)\right] \;,
\end{equation}
where $x$ and $\tau$ denote transverse and longitudinal coordinates
with respect to the vortices and $j$ is an index labeling the
vortices. By changing variables to the phonon displacement field
$u_j$ through $x_j(\tau)=a\left[j+u_j(\tau)\right]$, where $a$ is
the mean distance between vortex lines, the free energy of a
particular configuration can be written as:
\begin{equation}
{\cal F}_0=\frac{a^2}{2} \int dx d\tau \left[ c_{11}
(\partial_x u)^2 + c_{44} (\partial_\tau u)^2\right] \;.
\end{equation}
Here $c_{11}$ and $c_{44}$ are the compressional and the tilt moduli
respectively. After rescaling the variables $x$ and $\tau$ according
to
\begin{equation}
x \to x \left(\frac{c_{11}}{c_{44}}\right)^{1/4} \;\; ,
\; \tau \to \tau \left(\frac{c_{44}}{c_{11}}\right)^{1/4} \;,
\end{equation}
the free energy takes the isotropic form
\begin{equation}
{\cal F}_0=\frac{A}{2}\int dx d\tau
\left[ (\partial_x u)^2 + (\partial_\tau u)^2\right]
\end{equation}
with $A=a^2\sqrt{c_{11}c_{44}}$. The partition function is then
given by the functional integral
\begin{equation}
Z=\int D u(x,\tau) e^{-S} \;,
\end{equation}
with $S=S_0={\cal F}_0/T$. In the limit of large sample sizes in the
``timelike'' direction one can regard $Z$ as the zero temperature
partition function of interacting bosons~\cite{AHNS}. In this
language the imaginary time action can be written as
\begin{equation}
S_0=\frac{\pi}{2g}\int dx d\tau
\left[ (\partial_x u)^2 + (\partial_\tau u)^2\right] \;.
\label{freeaction}
\end{equation}
Here we set $\hbar=1$ and identified the Luttinger-liquid parameter,
$g$, as
\begin{equation}
g=\frac{\pi T}{A} \;.
\label{Lutpara}
\end{equation}
The Luttinger-liquid parameter controls the long-distance properties
of the model. For vortices, $g$ is a
dimensionless combination of the compressional and tilt moduli, the
density of vortices, and temperature.
Various properties of Luttinger liquids are well understood. For
example, the correlation function for the density fluctuations
$\delta n(x,\tau)=n(x,\tau)-n_0$, where $n_0=1/a$ is the mean
density, obeys
\begin{equation}
\langle \delta n(x,\tau) \delta n(0,0) \rangle \simeq
\frac{\cos\left( 2 \pi n_0 x \right)}{(x^2+\tau^2)^g} \;.
\end{equation}
There is quasi long-range order in the system and the envelope of
the density correlation function decays as a power law with the exponent
depending only on $g$. As we show below, $g$ can be probed by
unzipping a single vortex out of a plane which contains a $(1+1)$-dimensional vortex liquid.
In what follows we also consider the case where there is point
disorder present in the sample. The behavior will be strongly influenced by the behavior of the vortices
in two dimensions in the presence of disorder. This problem has been
studied in some detail in the past (see e.g. Ref.~[\onlinecite{pkn}] and
references therein). Here we briefly review features which will be
important in analyzing the unzipping problem. The most relevant (in
the renormalization group sense) contribution to the action from
the point disorder is
\begin{equation}
S_{PD}=2\int dx d\tau R(x,\tau)
\cos \left[2 \pi u(x,\tau) +\beta(x,\tau) \right] \;,
\end{equation}
where positive (negative) $R$ implies a repulsive (attractive)
potential between the vortices and the quenched random disorder. We assume, for simplicity, that $\beta(x,\tau)$ is
distributed uniformly between $0$ and $2 \pi$ and that $R(x,\tau)$ has
an uncorrelated Gaussian distribution with variance $\Delta_0$:
\begin{equation}
\overline{R(x_1,\tau_1)R(x_2,\tau_2)}=
\Delta_0 \delta(x_1-x_2)\delta(\tau_1-\tau_2) \;,
\end{equation}
where the overbar, as before, represents averaging over disorder.
To analyze the disordered problem, similar to the single vortex case,
we use the replica trick. Then the replicated noninteracting part of
the action becomes
\begin{equation}
S_0=\frac{\pi}{2g} \sum_{\alpha,\beta} \int
\int dx d\tau \left[ \frac{\partial u_\alpha}{\partial \tau}
\frac{\partial u_\beta}{\partial \tau} +\frac{\partial u_\alpha}{\partial x}
\frac{\partial u_\beta}{\partial x} \right] \left[ \delta_{\alpha,\beta}
- \frac{\kappa}{g} \right] \;.
\end{equation}
Here $u_\alpha(x,\tau)$ is the replicated phonon field and $\kappa$
is an off-diagonal coupling which is zero in the bare model but is
generated by the disorder. It plays the role of a quenched random
``chemical potential'' which is coupled to the first derivative of
the phonon field $u$. The replica indices, $\alpha$ and $\beta$ run
from $1$ to $n$ and at the end of the calculation one takes the
limit $n \to 0$. After replication the contribution from the point
disorder becomes
\begin{equation}
S_{PD}=-\Delta_0 \sum_{\alpha,\beta} \int \int dx d\tau
\cos 2 \pi \left[ u_\alpha (x,\tau) - u_\beta (x,\tau) \right] \;.
\end{equation}
The combined action can be treated within the renormalization group
using a perturbation series near $g=1$ where a phase transition
between a vortex liquid and a vortex glass
occurs~\cite{fisher_v_glass}. By continuously eliminating degrees of
freedom depending on frequency and momentum within the shell
$\Lambda - \delta \Lambda < \sqrt{\omega^2+q^2} < \Lambda$, one
obtains the following renormalization group equations~\cite{Cardy,
pkn}
\begin{eqnarray}
\frac{dg}{dl}&=&0 \\
\frac{d \Delta}{dl}&=&2(1-g) \Delta - 2 C \Delta^2 \\
\frac{d \kappa}{dl}&=&C^2 \Delta^2
\end{eqnarray}
Here $l$ is the flow parameter, defined through $\Lambda(l)=\Lambda e^{-l}$. $C$ is
a non-universal constant which depends on the cutoff $\Lambda$. The
equations are subject to the initial conditions $\kappa(l=0)=0$ and
$\Delta(l=0)=\Delta_0$. Note that the Luttinger liquid parameter is
not renormalized. Analysis of the flow equations has shown
that in the vortex liquid phase ($g>1$) the correlations of the
density fluctuations behave as
\begin{equation}
\langle \delta n(x,\tau) \delta n(0,0) \rangle
\simeq \frac{1}{(x^2+\tau^2)^{g+\tilde{\kappa}/2}} \;,
\end{equation}
where $\tilde{\kappa}$ is a nonuniversal exponent. In the glass
phase ($g<1$) correlations decay faster than a power law, with
\begin{equation}
\langle \delta n(x,\tau) \delta n(0,0) \rangle
\simeq \exp \left( -(1-g)^2 \ln^2 \sqrt{x^2+\tau^2}\right) \;.
\end{equation}
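These flow equations are easy to integrate numerically. The sketch below (a simple Euler integration; the non-universal constant $C=1$ and bare disorder $\Delta_0=0.01$ are assumed values) reproduces the qualitative behavior: $\Delta$ is irrelevant for $g>1$ and grows for $g<1$, while a finite $\kappa$ is generated in both cases:

```python
def rg_flow(g, delta0=0.01, C=1.0, l_max=5.0, dl=1e-3):
    """Euler integration of the one-loop flow equations:
       dDelta/dl = 2(1-g)*Delta - 2*C*Delta**2,
       dkappa/dl = C**2 * Delta**2,
       with Delta(0) = delta0, kappa(0) = 0; g itself does not flow."""
    delta, kappa = delta0, 0.0
    for _ in range(int(l_max / dl)):
        delta, kappa = (delta + (2*(1 - g)*delta - 2*C*delta**2) * dl,
                        kappa + (C**2 * delta**2) * dl)
    return delta, kappa
```

Running `rg_flow(1.2)` shows $\Delta(l)$ shrinking below $\Delta_0$, while `rg_flow(0.8)` shows it growing toward strong coupling, consistent with the liquid-glass transition at $g=1$.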
In what follows we consider a setup in which a two dimensional array
of vortices, whose properties have been described above, is embedded
in a three dimensional bulk sample. As shown below, when a single vortex
is unzipped into the bulk of a clean sample, the critical properties
of the unzipping transition yield information on the properties of
the two dimensional vortex liquid. In particular, they provide a
direct measure of the Luttinger-liquid parameter. In the same setup
in a disordered sample we will show that the critical properties of
the unzipping transition will be modified. In particular, they can
yield information on the three-dimensional wandering exponent
of a single vortex in a disordered sample.
\subsection{Unzipping a Luttinger liquid: The clean case}
Consider first an experiment in which an attractive two-dimensional
potential confines vortices to a plane. An MFM then pulls a {\it
single} vortex out of the plane (see Fig. \ref{fig9}). We assume
throughout that the density of vortices in the three dimensional bulk
is so small that we can neglect interactions between the vortex that
is pulled out of the sample and vortices in the three dimensional
bulk. In this subsection only the clean case (no point disorder) will be studied.
We assume the MFM exerts a force ${\bf f}=f \hat{x}$. As in the
unzipping experiments discussed above we expect that for large
forces $f>f_c$ the vortex will be completely pulled out of the two
dimensional slab. Similar to the case of the unzipping of a single
vortex we write the free energy of the vortex as a sum of two
contributions. The first, ${\cal F}_u(\tau_m)$, arises from the part
of the vortex that is outside the two dimensional slab. The second
${\cal F}_b(\tau_m)$ is the change in the free-energy of the
vortices that remain inside the two dimension slab. As before
$\tau_m$ is the length along the $\tau$ direction which is unbound
from the two-dimensional slab. The free-energy of the unzipped part
is clearly identical to that calculated in Eq.~(\ref{free_unzip}), or
explicitly
\begin{equation}
{\cal F}_u(\tau_m)= - f^2 \tau_m/ 2\gamma \;.
\label{eq:unzupfeagain}
\end{equation}
The calculation of the free-energy, ${\cal F}_b(\tau_m)$, is
somewhat more involved. Clearly there is a linear contribution due
to the length $\tau_m$ removed from the attractive potential of the
slab. However, in addition there is an extra contribution from the
energy of the dislocation, ${\cal F}_d(\tau_m)$, (see Fig.
\ref{fig9}) created in the two dimensional vortex array. This
contribution to the free-energy, as we show below, is {\it
non-linear} and controlled by the Luttinger liquid parameter $g$.
This non-linearity results, near the unzipping transition, in a
sensitivity of the critical properties to the value of $g$.
We leave the details of the calculation of the dislocation energy to
Appendix~\ref{App:dislocation} and present here only the key steps
of the derivation.
In order to satisfy the boundary conditions near the interface one can
use the method of images (see Fig.~\ref{fig11}). The free energy
of this dislocation pair can be calculated by standard methods (see
details in Appendix~\ref{App:dislocation}). In particular, at large
$\tau_m$ it behaves logarithmically (see e.g. Ref.~[\onlinecite{chakin}]):
\begin{equation}
{\cal F}_d=\frac{T}{4g} \ln(\tau_m/a_0),
\label{free_en_dis}
\end{equation}
where $a_0$ is the short range cutoff of the order of the distance
between flux lines. We note that the free energy of the dislocation
near the interface (\ref{free_en_dis}) is one half of the free
energy of a dislocation pair.
With the energy of the dislocation in hand we can now analyze the
properties of the unzipped length near the transition using the
methods used for analyzing the single vortex unzipping experiments.
The contributions to the free energy are from the unzipped part of
the vortex and the energy of the dislocation. Collecting all the
relevant terms, near the transition the free energy is given by
\begin{equation}
{\cal F}(\tau_m)={\cal F}_u(\tau_m)+{\cal F}_b(\tau_m)=
\epsilon\tau_m+\frac{T}{4g}\ln(\tau_m/a_0)\;.
\end{equation}
The probability of finding a certain value of $\tau_m$ is then given
by
\begin{equation}
P(\tau_m) \propto e^{-{\cal F}(\tau_m)/T}=\frac{C}{\tau_m^{1/(4g)}}e^{-\epsilon\tau_m}\;,
\end{equation}
where $C$ is the normalization constant. At the transition
$\epsilon=0$ the distribution becomes a pure power law in $\tau_m$.
Therefore, the average value of $\tau_m$ is very sensitive to the
value of $g$. In particular, for $g>1/4$ (i.e. for weakly
interacting flux lines) the behavior of $\langle\tau_m\rangle$ near
the transition is identical to that of a single vortex in the
absence of interactions with other vortices
\begin{equation}
\langle \tau_m \rangle \sim {1\over\epsilon} \;.
\end{equation}
In contrast, for $1/8 < g < 1/4$ (stronger interactions) there is a
continuously varying exponent governing the transition
\begin{equation}
\langle\tau_m\rangle\sim {1\over \epsilon^{2-1/4g}} \;.
\end{equation}
And finally, for $g<1/8$ (strongly interacting flux lines) we find
that $\langle \tau_m\rangle$ does not diverge near the transition.
Note that even though the mean displacement remains constant at the
transition in this regime, the higher moments of $\tau_m$ diverge
and are thus sensitive to $\epsilon$. The reason is that at the
transition the distribution of $\tau_m$ is a power law.
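The sensitivity of $\langle\tau_m\rangle$ to $g$ can also be checked numerically from $P(\tau_m)\propto \tau_m^{-1/(4g)}e^{-\epsilon\tau_m}$. The sketch below is illustrative only: the cutoff $a_0=1$, the integration grid, and the values of $\epsilon$ (in units where $T$ is absorbed into $\epsilon$, as in the text) are arbitrary choices.

```python
import numpy as np

def mean_tau(g, eps, a0=1.0, tau_max=1e7, n=400_000):
    """<tau_m> for P(tau) ∝ tau^(-1/(4g)) exp(-eps*tau), tau >= a0,
    evaluated by trapezoidal integration on a logarithmic grid."""
    tau = np.logspace(np.log10(a0), np.log10(tau_max), n)
    w = tau**(-1.0/(4.0*g)) * np.exp(-eps*tau)
    trapz = lambda f: np.sum(0.5*(f[1:] + f[:-1]) * np.diff(tau))
    return trapz(tau*w) / trapz(w)

def effective_exponent(g, eps=1e-4):
    """Local slope -d ln<tau_m> / d ln eps, estimated between eps and eps/2."""
    return np.log(mean_tau(g, eps/2) / mean_tau(g, eps)) / np.log(2.0)
```

The estimated slope comes out close to $1$ for $g=1/2$, close to $2-1/(4g)$ for $g=0.15$, and close to $0$ for $g=0.05$, matching the three regimes above.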
\subsection{Unzipping from a twin plane with point disorder}
\label{3c}
We now consider the problem of unzipping a vortex from a
plane with many vortices in the presence of disorder. In the spirit
of the treatments presented in this paper, one needs to calculate the
free-energy of the unzipped part of the vortex ${\cal F}_u(\tau_m)$,
the free-energy of the bound part of the vortex ${\cal F}_b(\tau_m)$
and the {\it fluctuations} in both quantities averaged over
realizations of disorder. This can be done perturbatively near $g=1$. We again relegate details of the derivation of
the dislocation energy to
Appendix~\ref{App:dislocation1}. One conclusion from our
calculations is that the mean free energy of the dislocation near
the boundary is not affected by the disorder and is given by
Eq.~(\ref{free_en_dis}). Another important conclusion is that the
fluctuations of the free energy also depend logarithmically on
$\tau_m$:
\begin{equation}
\overline{\delta {\cal F}^2_d(\tau_m)}= T^2\frac{\kappa(\infty)}{8g^2} \ln(\tau_m/a_0)
\end{equation}
for $g>1$ and
\begin{equation}
\overline{\delta {\cal F}^2_d(\tau_m)}=T^2\frac{(1-g)^2}{4} \ln^2(\tau_m/a_0)
\end{equation}
for $g<1$.
\begin{figure}[ht]
\includegraphics[scale=0.6]{UnzipLuttingerdis.eps}
\caption{Possible experimental setup for studying unzipping from a
Luttinger liquid in the presence of disorder.}\label{fig10}
\end{figure}
We note that in the case of many flux lines there is a weak
logarithmic dependence of free energy fluctuations on $\tau_m$ as
opposed to strong power law dependence in the case of a single flux
line (compare Eq.~(\ref{eq:fefluct})). This somewhat surprising
result is a consequence of the screening of strong power-law
fluctuations by other flux lines. We note that if the pinning of
flux lines by disorder is extremely strong, so that tearing a single
flux line does not affect the positions of other lines for the duration
of the experiment, we are back to the single flux line physics and
$\overline{\delta \mathcal F_d^2}\propto \tau_m$.
To complete the analysis, we need to consider the free-energy
contribution from the unzipped part. Of particular importance are
the free-energy fluctuations due to the disorder in the bulk of the
sample. As discussed in Sec. \ref{Sec2}, in a three dimensional
sample these grow as $\delta {\cal F}_u \propto \tau_m^{\omega(3)}$ with
$\omega(3) \simeq 0.22$. This contribution grows much more quickly than
the contribution from the fluctuations in the free-energy of the
dislocation. Therefore following the ideas of Sec. \ref{Sec2} the
total free-energy is given by
\begin{equation}
{\cal F}(\tau_m)=a(f_c-f)\tau_m -b\tau_m^{\omega(3)}\;,
\label{fplane}
\end{equation}
where $a$ and $b$ are positive constants. Minimizing Eq. (\ref{fplane}) gives for the critical properties in
this case
\begin{equation}
\langle \tau_m \rangle \sim \frac{1}{(f_c-f)^{1.28}} \;.
\end{equation}
Thus the screening of disorder fluctuations in the plane by other flux
lines effectively enhances the role of disorder in the bulk. As a
result, these unzipping experiments can serve as a probe of the
three-dimensional anomalous wandering exponent.
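The minimization of ${\cal F}(\tau_m)=a(f_c-f)\tau_m-b\tau_m^{\omega(3)}$ can be done in closed form, and a quick numerical check recovers the quoted exponent $1/(1-\omega(3))\approx 1.28$. The constants $a=b=1$ below are placeholders; only the exponent matters:

```python
import numpy as np

def tau_star(eps, a=1.0, b=1.0, omega=0.22):
    """Minimizer of F(tau) = a*eps*tau - b*tau**omega over tau > 0:
       dF/dtau = 0  =>  tau* = (b*omega/(a*eps))**(1/(1-omega)),
       where eps stands for f_c - f."""
    return (b*omega/(a*eps))**(1.0/(1.0 - omega))

# effective exponent of tau* ~ (f_c - f)^(-x), estimated from two eps values
slope = np.log(tau_star(1e-4)/tau_star(2e-4)) / np.log(2.0)
```

For $\omega(3)=0.22$ the slope is exactly $1/0.78\approx 1.28$, as quoted above.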
\acknowledgements
YK was supported by the Israel Science Foundation and thanks the
Boston University visitors program for hospitality. YK and DRN were
supported by the Israel-US Binational Science Foundation. Research
by DRN was also supported by the National Science Foundation,
through grant DMR 0231631 and through the Harvard Materials Research
Science and Engineering center through grant DMR0213805. AP was
supported by AFOSR YIP.
\section{Learning with Adversarial Contexts}
\label{sec:adversarial}
In this section, we focus on the adversarial contexts case, where at time $t\in [T]$, the contexts $x_{t,1},\cdots,x_{t,K}$ can be arbitrarily chosen by an adversary who observes all past contexts and rewards, with $\|x_{t,a}\|_2\le 1$ for any $a\in [K]$.
We first state the main results in Section~\ref{subsec:adversarial_main}, which
characterize the upper and lower bounds of the regret.
We then give a UCB-based algorithm in the sequential batch setting in Section \ref{subsec:adversarial_UCB} and describe several important aspects of the algorithm, including a variant that is used for theoretical bound purposes.
Next, in Section \ref{subsec.UCB_upperbound}, we show that the proposed sequential batching algorithm achieves the regret upper bound in Theorem \ref{thm.adversarial}.
Finally, we prove the regret lower bound in Section \ref{subsec.adversarial}, thereby establishing that the preceding upper bound is nearly tight.
\subsection{Main Results}\label{subsec:adversarial_main}
\begin{theorem}\label{thm.adversarial}
Let $T$, $M$ and $d$ be the learning horizon, number of batches and each context's dimension, respectively. Denote by $\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$.
\begin{enumerate}
\item Under Assumption~\ref{aspn.TKd}, there exists a sequential batch learning algorithm \textbf{Alg}= $({\mathcal{T}}, \pi)$, where ${\mathcal{T}}$ is a uniform grid defined by $t_m = \lfloor \frac{mT}{M}\rfloor$ and $\pi$ is explicitly defined in Section \ref{subsec:adversarial_UCB},
such that:
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2\le 1} \mathbb{E}_{\theta^\star}[R_T\left(\textbf{Alg}\right)] \le \mathsf{polylog}(T)\cdot \left(\sqrt{dT} + \frac{dT}{M}\right).
\end{align*}
\item Conversely, for $K=2$ and any sequential batch learning algorithm, we have:
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2\le 1} \mathbb{E}_{\theta^\star}[R_T\left(\textbf{Alg}\right)] \ge c\cdot \left(\sqrt{dT} + \left(\frac{T\sqrt{d}}{M}\wedge \frac{T}{\sqrt{M}}\right)\right),
\end{align*}
where $c>0$ is a universal constant independent of $(T,M,d)$.
\end{enumerate}
\end{theorem}
Our subsequent analysis easily gives high-probability regret upper bounds. However, for simplicity and to highlight more clearly the matching between the upper and lower bounds,
we stick with presenting results on expected regret.
Theorem \ref{thm.adversarial} shows a polynomial dependence of the regret on the number of batches $M$ under adversarial contexts, and the following corollary is immediate.
\begin{corollary}\label{cor.adversarial}
Under adversarial contexts, $\Theta(\sqrt{dT})$ batches achieve the fully online regret $\tilde{\Theta}(\sqrt{dT})$.
\end{corollary}
According to Corollary \ref{cor.adversarial}, $T$ batches are not necessary to achieve the fully online performance under adversarial contexts: $\Theta(\sqrt{Td})$ batches suffice. Since we are \emph{not} in the high-dimensional regime (per Assumption~\ref{aspn.TKd}, $d \le \sqrt{T}$), the number of batches needed without any performance degradation is at most $O(T^{0.75})$, a sizable reduction from $O(T)$. Further, in the low-dimensional regime (i.e., when $d$ is a constant), only $O(\sqrt{T})$ batches are needed to achieve fully online performance.
Nevertheless, $O(\sqrt{dT})$ can still be a fairly large number.
In particular, if only a constant number of batches are available, then the regret is linear. The lower bound indicates that not much better can be done under adversarial contexts.
This is because the power of the adversary is too strong when the learner only has a few batches: the adversary may simply pick any batch and choose all contexts prior to this batch to be orthogonal to the contexts within this batch, so that the learner can learn nothing about the rewards in any given batch.
\subsection{A Sequential Batch UCB Algorithm}\label{subsec:adversarial_UCB}
The overall idea of the algorithm is that, at the end of every batch, the learner computes an estimate $\hat{\theta}$ of the unknown parameter $\theta^\star$ via ridge regression as well as a confidence set that contains $\theta^\star$ with high probability. Then, whenever the learner enters a new batch, at each time $t$ he simply picks the action with the largest upper confidence bound. Finally, we choose the uniform grid, i.e., $t_m = \lfloor \frac{mT}{M}\rfloor$ for each $m\in [M]$. The algorithm is formally illustrated in Algorithm \ref{algo.ucb}.
\begin{algorithm}[h!]
\DontPrintSemicolon
\SetAlgoLined
\BlankLine
\caption{Sequential Batch UCB (SBUCB) \label{algo.ucb}}
\textbf{Input:} time horizon $T$; context dimension $d$; number of batches $M$; tuning parameter $\gamma>0$.
\textbf{Grid choice:} ${\mathcal{T}} = \{t_1,\cdots,t_M\}$ with $t_m = \lfloor \frac{mT}{M}\rfloor$.
\textbf{Initialization:} $A_0 = I_d\in \mathbb{R}^{d\times d}$, $\hat{\theta}_0={\bf 0}\in \mathbb{R}^d$, $t_0 = 0$.
\For{$m \gets 1$ \KwTo $M$}{
\For{$t\gets t_{m-1}+1$ \KwTo $t_m$}{
Choose $a_t = \arg\max_{a\in [K]} x_{t,a}^\top \hat{\theta}_{m-1} + \gamma\sqrt{x_{t,a}^\top A^{-1}_{m-1} x_{t,a}}$ (break ties arbitrarily). \\
}
Receive rewards in the $m$-th batch: $\{r_{t,a_t}\}_{t_{m-1}+1 \le t \le t_m}$.
$A_m = A_{m-1} + \sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top$. \\
$\hat{\theta}_m = A^{-1}_m\sum_{t=t_{m-1}+1}^{t_m} r_{t,a_t}x_{t,a_t}$.
}
\end{algorithm}
\begin{remark}
Note that when $M=T$ (i.e. the fully online setting), Algorithm~\ref{algo.ucb} degenerates to the standard LinUCB algorithm in~\cite{chu2011contextual}.
\end{remark}
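To make the batch structure concrete, here is a minimal simulation sketch of Algorithm~\ref{algo.ucb}. The i.i.d.\ Gaussian contexts, the noise level, and the problem sizes are illustrative assumptions, not part of the algorithm; the point is that $A_m$ and $\hat{\theta}_m$ are refreshed only at the $M$ batch boundaries:

```python
import numpy as np

def sbucb(T, M, d, K, theta_star, gamma=1.0, noise=0.1, seed=0):
    """Sequential Batch UCB sketch: the ridge estimate theta_hat and the
    matrix A are updated only at the grid points t_m = floor(m*T/M)."""
    rng = np.random.default_rng(seed)
    A, b = np.eye(d), np.zeros(d)              # A_0 = I_d, running sum of r*x
    A_inv, theta_hat = np.eye(d), np.zeros(d)  # hat{theta}_0 = 0
    grid = [T*m // M for m in range(M + 1)]
    regret = 0.0
    for m in range(1, M + 1):
        batch = []
        for _ in range(grid[m-1], grid[m]):
            X = rng.normal(size=(K, d))        # synthetic contexts
            X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
            width = np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
            a = int(np.argmax(X @ theta_hat + gamma*width))   # UCB action
            r = X[a] @ theta_star + rng.normal(scale=noise)
            regret += np.max(X @ theta_star) - X[a] @ theta_star
            batch.append((X[a], r))
        for x, r in batch:                     # end-of-batch update only
            A += np.outer(x, x); b += r*x
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b
    return regret
```

With, e.g., $T=2000$, $M=10$, $d=2$, $K=5$ and a unit-norm `theta_star`, the realized regret stays far below the linear worst case, even though the estimate is refreshed only ten times.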
To analyze the sequential batch UCB algorithm, we first need to show that the constructed confidence bound is valid. By applying \cite[Lemma 1]{chu2011contextual} to our setting, we immediately obtain the following concentration result, showing that the estimate $\hat{\theta}_{m-1}$ is close to the true $\theta^\star$:
\begin{lemma}\label{lemma.concentration}
Fix any $\delta > 0$.
For each $m\in [M]$, if for a fixed sequence of selected contexts $\{x_{t,a_t}\}_{t\in [t_m]}$ up to time $t_m$, the (random) rewards $\{r_{t,a_t}\}_{t\in [t_m]}$ are independent, then for each $t \in [t_{m-1}+1, t_m]$,
with probability at least $1-\frac{\delta}{T}$, the following holds for all $a\in [K]$:
\begin{align*}
|x_{t,a}^\top (\hat{\theta}_{m-1} - \theta^\star)| \le \left(1+\sqrt{\frac{1}{2}\log\left(\frac{2KT}{\delta}\right)}\right)\sqrt{x_{t,a}^\top A_{m-1}^{-1}x_{t,a}}.
\end{align*}
\end{lemma}
\begin{remark}
Lemma~\ref{lemma.concentration} rests on an important conditional independence assumption of the rewards $\{r_{t,a_t}\}_{t\in [t_m]}$. However, this assumption
does not hold in the vanilla version of the algorithm as given in Algorithm~\ref{algo.ucb}.
This is because a future selected action $a_t$, and hence the chosen context $x_{t,a_t}$, depends on
the previous rewards. Consequently, by conditioning on $x_{t,a_t}$, previous rewards, say $r_{\tau_1}, r_{\tau_2}$ ($\tau_1, \tau_2 < t$), can become dependent. Note the somewhat subtle issue here on
the dependence of the rewards: when conditioning on $x_{t,a_t}$, the corresponding reward $r_t$ becomes independent of all the past rewards $\{r_\tau\}_{\tau < t}$. Despite this, when a future $x_{t^\prime, a_{t^\prime}}$ is revealed ($t^\prime > t$), these rewards (i.e. $r_t$ and all the rewards prior to $r_t$) become coupled again: what is known about $r_t$ now reveals information about the previous rewards $\{r_\tau\}_{\tau < t}$, because $r_t$ alone does not determine the selection of $x_{t^\prime, a_{t^\prime}}$:
all of those rewards influence $x_{t^\prime, a_{t^\prime}}$. Consequently, a complicated dependence structure is created when conditioning on $\{x_{t,a_t}\}_{t\in [t_m]}$.
This lack of independence issue will be handled with a master algorithm variant of Algorithm~\ref{algo.ucb} discussed in the next subsection.
Using the master algorithm to decouple dependencies is a standard technique in contextual bandits that was first developed in~\cite{auer2002using}. Subsequently, it has been used for the same purpose in~\cite{chu2011contextual, li2017provably}, among others. We will describe how to adapt the master algorithm in our current sequential batch learning setting next. We end this subsection by pointing out that, strictly speaking, our regret upper bound is achieved only by this master algorithm, rather than Algorithm~\ref{algo.ucb}. However, we take the conventional view that the master algorithm is purely used as a theoretical construct (to resolve the dependence issue) rather than a practical algorithm that should actually be deployed in practice. In practice, Algorithm~\ref{algo.ucb} should be used instead. For that reason, we discuss the master algorithm only in the proof.
\end{remark}
\subsection{Regret Analysis for Upper bound}\label{subsec.UCB_upperbound}
We start with a simple fact from linear algebra that will be useful later.
\begin{lemma}\cite[Lemma 11]{auer2002using}\label{lemma.eigenvalue}
Let $A$ be a symmetric matrix such that $I_d\preceq A$, and $x\in \mathbb{R}^d$ be a vector satisfying $\|x\|_2\le 1$. Then the eigenvalues $\lambda_1,\cdots,\lambda_d$ of $A$ and the eigenvalues $\nu_1,\cdots,\nu_d$ of $A+xx^\top$ can be rearranged in a way such that $\lambda_i\le \nu_i$ for all $i\in [d]$, and
\begin{align*}
\mathsf{Tr}(A^{-1}xx^\top) \le 10\sum_{j=1}^d \frac{\nu_j - \lambda_j}{\lambda_j}.
\end{align*}
\end{lemma}
We next establish a key technical lemma that will be used in establishing our regret upper bound.
\begin{lemma}\label{lemma.trace_sum}
Define $X_m = \sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top$. We have:
\begin{align*}
\sum_{m=1}^M \sqrt{ \mathsf{Tr}(A_{m-1}^{-1} X_m)} \le \sqrt{10}\log(T+1)\cdot \left(\sqrt{Md} + d\sqrt{\frac{T}{M}} \right).
\end{align*}
\end{lemma}
\begin{proof}
We start by noting that with the above notation, we have $A_m=A_{m-1}+X_m$ for any $m\in [M]$ with $A_0=I_d$.
Applying Lemma \ref{lemma.eigenvalue} repeatedly, we may rearrange the eigenvalues $\lambda_{m,1},\cdots,\lambda_{m,d}$ of $A_m$ in such a way that $\lambda_{m-1,j}\le \lambda_{m,j}$ for all $m\in [M], j\in [d]$, and
\begin{align}\label{eq.reduction_eigenvalue}
\sum_{m=1}^M \sqrt{ \mathsf{Tr}(A_{m-1}^{-1} X_m)} \le \sqrt{10}\cdot \sum_{m=1}^M \sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m-1,j}}}.
\end{align}
Note that $\lambda_{0,j}=1$ for all $j\in [d]$.
Note further that $\lambda_{M,j}\le 1+T$ for all $j \in [d]$, which follows from the
fact that $z^\top A_M z = z^\top (I_d + \sum_{t=1}^T x_{t,a_t}x_{t,a_t}^\top ) z = \|z\|_2^2 + \sum_{t=1}^T |z^\top x_{t,a_t}|^2 \le (T+1) \|z\|_2^2$, since $\|x_{t,a_t}\|_2 \le 1$.
Consequently, every eigenvalue of $A_M$ must be bounded by $T+1$.
Utilizing the above two pieces of information on $\lambda_{0,j}$ and $ \lambda_{M,j}$, we then have the following:
\begin{align}
\sum_{m=1}^M \sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m,j}}} &\le \sqrt{M\sum_{m=1}^M\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m,j}} }
=\sqrt{M \sum_{j=1}^d \sum_{m=0}^{M-1} \frac{\lambda_{m+1,j} - \lambda_{m,j}}{\lambda_{m+1,j}} } \nonumber \\
&\le \sqrt{M\sum_{j=1}^d \int_{\lambda_{0,j}}^{\lambda_{M,j}} \frac{dx}{x} }
= \sqrt{M\sum_{j=1}^d \log \lambda_{M,j}}
\le \sqrt{Md\log(T+1)}, \label{eq.inequality_1}
\end{align}
where the first inequality follows from $(\sum_{i=1}^n x_i)^2 \le n \sum_{i=1}^n x_i^2$, for any real numbers
$x_1, \dots, x_n$.
We now look at the difference between Equation~\eqref{eq.reduction_eigenvalue} and Equation~\eqref{eq.inequality_1} and have:
\begin{align*}
\sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m-1,j}}} - \sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m,j}}} &\stepa{\le} \frac{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})^2}{\lambda_{m,j}\lambda_{m-1,j} }}{\sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m-1,j}}}}\\
& = \frac{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})^{1/2}}{\lambda_{m-1,j}^{1/2} } \cdot
\frac{(\lambda_{m,j}-\lambda_{m-1,j})^{3/2}}{\lambda_{m,j}\lambda_{m-1,j}^{1/2}}}{\sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m-1,j}}}}\\
&\stepb{\le}
\frac{\sqrt{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})}{\lambda_{m-1,j} }} \cdot
\sqrt{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})^{3}}{\lambda_{m,j}^2\lambda_{m-1,j}}}}{\sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m-1,j}}}}\\
&= \sqrt{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})^3}{\lambda_{m-1,j}\lambda_{m,j}^2} },
\end{align*}
where step (a) follows from the basic inequality $\sqrt{a}-\sqrt{b}\le (a-b)/\sqrt{a}$ for $a\ge b\ge 0$, and step (b) is due to Cauchy--Schwarz.
Noting further that $\lambda_{m,j}-\lambda_{m-1,j}\le \mathsf{Tr}(X_m) = \sum_{t=t_{m-1}+1}^{t_m} \|x_{t,a_t}\|_2^2\le t_m - t_{m-1} = \frac{T}{M}$, we therefore have:
\begin{align}
&\sum_{m=1}^M \left(\sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m-1,j}}} - \sqrt{\sum_{j=1}^d \frac{\lambda_{m,j} - \lambda_{m-1,j}}{\lambda_{m,j}}} \right) \le \sum_{m=1}^M \sqrt{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})^3}{\lambda_{m-1,j}\lambda_{m,j}^2} } \nonumber \\
& \le \sum_{m=1}^M \sqrt{\sum_{j=1}^d \frac{(\lambda_{m,j}-\lambda_{m-1,j})^2}{\lambda_{m,j}^2 }}\cdot \sqrt{\frac{T}{M}}
\le \sum_{m=1}^M \sum_{j=1}^d \frac{\lambda_{m,j}-\lambda_{m-1,j}}{\lambda_{m,j} }\cdot \sqrt{\frac{T}{M}} \nonumber \\
&\le \sqrt{\frac{T}{M}}\sum_{j=1}^d \int_{\lambda_{0,j}}^{\lambda_{M,j}} \frac{dx}{x} \nonumber \\
&\le d\sqrt{\frac{T}{M}}\log(T+1) \label{eq.inequality_2},
\end{align}
where the second inequality follows from the fact that
$\lambda_{m-1,j}\ge \lambda_{0,j}=1$ for any $m\in [M]$.
Now combining \eqref{eq.reduction_eigenvalue}, \eqref{eq.inequality_1} and \eqref{eq.inequality_2} completes the proof.
\end{proof}
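As a quick sanity check, the bound of Lemma~\ref{lemma.trace_sum} can be verified numerically for random unit-norm contexts accumulated on the uniform grid (the dimensions below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, T = 4, 20, 400                      # T/M contexts per batch
A = np.eye(d)                             # A_0 = I_d
lhs = 0.0
for m in range(M):
    X = np.zeros((d, d))                  # X_m = sum of x x^T over the batch
    for _ in range(T // M):
        x = rng.normal(size=d)
        x /= max(np.linalg.norm(x), 1.0)  # enforce ||x||_2 <= 1
        X += np.outer(x, x)
    lhs += np.sqrt(np.trace(np.linalg.inv(A) @ X))
    A += X                                # A_m = A_{m-1} + X_m
rhs = np.sqrt(10) * np.log(T + 1) * (np.sqrt(M*d) + d*np.sqrt(T/M))
```

For these parameters the left-hand side is an order of magnitude below the bound, as expected since the lemma covers worst-case (adversarial) context sequences.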
We are now ready to prove the regret upper bound stated in Theorem~\ref{thm.adversarial}.
\begin{proof}[Proof of Statement 1 in Theorem~\ref{thm.adversarial}]
\begin{enumerate}
\item[]
\item \textbf{Regret bound under conditional independence assumption.}
For a given $\delta > 0$, set the hyper-parameter $\gamma$ in Algorithm~\ref{algo.ucb} to be $1+\sqrt{\frac{1}{2}\log(\frac{2KT}{\delta})}$ for the entire proof.
Under the conditional independence assumption in Lemma~\ref{lemma.concentration},
by a simple union bound over all $t \in [T]$, we have with probability at least $1 - \delta$, the following event holds:
\begin{align*}
\forall m \in [M], \forall t \in [t_{m-1}+1, t_m], \forall a\in [K], \quad |x_{t,a}^\top (\hat{\theta}_{m-1} - \theta^\star)| \le \gamma\sqrt{x_{t,a}^\top A_{m-1}^{-1}x_{t,a}}.
\end{align*}
On this high probability event (with probability $1 - \delta$), we can bound the regret as follows:
\begin{align}
R_T(\textbf{Alg}) &= \sum_{t=1}^T \left( \max_{a\in [K]}x_{t,a}^\top \theta^\star - x_{t,a_t}^\top \theta^\star \right)
= \sum_{m=1}^M \sum_{t=t_{m-1}+1}^{t_m} \left( \max_{a\in [K]}x_{t,a}^\top \theta^\star - x_{t,a_t}^\top \theta^\star \right) \nonumber \\
& \le \sum_{m=1}^M \sum_{t=t_{m-1}+1}^{t_m} \left( \max_{a\in [K]} \Big(x_{t,a}^\top \hat{\theta}_{m-1} + \gamma\sqrt{x_{t,a}^\top A_{m-1}^{-1}x_{t,a}}\Big) - x_{t,a_t}^\top \theta^\star \right) \nonumber \\
& = \sum_{m=1}^M \sum_{t=t_{m-1}+1}^{t_m} \left( x_{t,a_t}^\top \hat{\theta}_{m-1} + \gamma\sqrt{x_{t,a_t}^\top A_{m-1}^{-1}x_{t,a_t}} - x_{t,a_t}^\top \theta^\star \right) \nonumber \\
& = \sum_{m=1}^M \sum_{t=t_{m-1}+1}^{t_m} \left( x_{t,a_t}^\top (\hat{\theta}_{m-1} - \theta^\star) + \gamma\sqrt{x_{t,a_t}^\top A_{m-1}^{-1}x_{t,a_t}} \right) \nonumber\\
& \le
\sum_{m=1}^M \sum_{t=t_{m-1}+1}^{t_m} 2\gamma\sqrt{x_{t,a_t}^\top A_{m-1}^{-1}x_{t,a_t}} = 2\gamma\cdot \sum_{m=1}^M \sum_{t=t_{m-1}+1}^{t_m} 1\cdot \sqrt{x_{t,a_t}^\top A_{m-1}^{-1} x_{t,a_t}} \nonumber \\ \label{eq.regret_ucb}
&\le 2\gamma\sqrt{\frac{T}{M}}\cdot \sum_{m=1}^M \sqrt{\sum_{t=t_{m-1} +1}^{t_m} x_{t,a_t}^\top A_{m-1}^{-1} x_{t,a_t}} =
2\gamma\sqrt{\frac{T}{M}}\cdot \sum_{m=1}^M \sqrt{ \mathsf{Tr}(A_{m-1}^{-1} X_m)},
\end{align}
where the inequality in~\eqref{eq.regret_ucb} follows from Cauchy--Schwarz and the choice of a uniform grid (without loss of generality we assume that $T/M$ is an integer).
Next, setting $\delta = \frac{1}{T}$ (and hence resulting in $\gamma = 1+\sqrt{\frac{1}{2}\log\left(2KT^2\right)}$) and applying Lemma~\ref{lemma.trace_sum} to the upper bound in~\eqref{eq.regret_ucb}, we immediately obtain that again on this high-probability event:
\begin{align}
R_T(\textbf{Alg}) &\le 2\sqrt{10}\left(\sqrt{\frac{1}{2}\log\left(2KT^2\right)}+1\right)\log(T+1)\sqrt{\frac{T}{M}}\left(\sqrt{Md} + d\sqrt{\frac{T}{M}} \right) \nonumber\\
&\label{eq.x}= \mathsf{polylog}(T)\cdot (\sqrt{dT} + \frac{dT}{M}).
\end{align}
Consequently, taking the expectation of $R_T(\textbf{Alg})$ yields the same bound as in Equation~\eqref{eq.x}, since with probability at most $\frac{1}{T}$, the total regret over the entire horizon is at most $T$ (each time accumulates at most a regret of $1$ by the normalization assumption).
Since the regret bound is independent of $\theta^\star$, it
immediately follows that
$\sup_{\theta^\star: \|\theta^\star\|_2\le 1} \mathbb{E}_{\theta^\star}[R_T(\textbf{Alg})]
\le \mathsf{polylog}(T)\cdot (\sqrt{dT} + \frac{dT}{M}).$
\item \textbf{Building a master algorithm that satisfies conditional independence.}
To complete the proof, we need to validate the conditional independence assumption in Lemma~\ref{lemma.concentration}. Since the length of the confidence intervals does not depend on the random rewards, this can be done using a master algorithm SupSBUCB (Algorithm~\ref{algo:SupSBUCB}), which runs in $O(\log T)$ stages at each time step $t$, similar to \cite{auer2002using}; this technique was subsequently adopted in the linear contextual bandit setting~\cite{chu2011contextual} and then in the generalized linear contextual bandit setting~\cite{li2017provably} for the same purpose of meeting the conditional independence assumption.
Note that SupSBUCB is responsible for selecting the actions $a_t$ and it does so by calling BaseSBUCB (Algorithm~\ref{algo:BaseSBUCB}), which merely performs regression.
This master-base algorithm pair has by now become a standard trick to get around the conditional dependency in the vanilla UCB algorithm for a variety of contextual bandits problems (by sacrificing at most $O(\log T)$ regret).
\begin{algorithm}[!h]
\DontPrintSemicolon
\SetAlgoLined
\BlankLine
\caption{SupSBUCB \label{algo:SupSBUCB}}
\textbf{Inputs}: $T, M \in \mathbb{Z}_{++}$, Grid ${\mathcal{T}}=\{t_1,t_2,\cdots,t_M\}$.
$S\leftarrow \log(T), \Psi_1^s \leftarrow \emptyset$ for all $s\in[S]$
\For{$m=1,2,\cdots,M$}{
Initialize $\Psi_{m+1}^{s^{\prime}}\leftarrow \Psi_{m}^{s^{\prime}}$ for all $s^{\prime} \in [S]$.
\For{$t= t_{m-1}+1, \dots, t_m$}{
$s \leftarrow 1$ and $\hat{A}_1 \leftarrow [K]$
\textbf{Repeat:}
Use BaseSBUCB with $\Psi_{m}^s$ to compute $\theta_m^s$ and $A_m^s$
For all $a \in \hat{A}_s$, compute $w^s_{t,a} =\gamma \sqrt{x_{t,a}^T (A_{m}^s)^{-1}x_{t,a}}$, $\hat{r}_{t,a}^s = \langle \theta_m^s, x_{t,a} \rangle$
\textbf{(a)} If $w^s_{t,a}\leq 1/\sqrt{T}$ for all $a\in \hat{A}_s$,
choose $a_t = \arg\max_{a\in \hat{A}_s}\left( \hat{r}_{t,a}^s+w^s_{t,a}\right)$.
\textbf{(b)} Else if $w^s_{t,a}\leq 2^{-s}$ for all $a \in \hat{A}_s$,
$\hat{A}_{s+1}\leftarrow \{a\in \hat{A}_s \,\,\vert\,\,\hat{r}_{t,a}^s+w^s_{t,a}\geq \max_{a^{\prime}\in \hat{A}_s}(\hat{r}_{t,a^{\prime}}^s+w^s_{t,a^{\prime}})-2^{1-s}\}$,
\quad $s \leftarrow s+1$.
\textbf{(c)} Else choose any $a_t \in \hat{A}_{s}$ such that $w_{t,a_t}^s >2^{-s}$, and update
$\Psi_{m+1}^{s} \leftarrow
\Psi_{m+1}^{s} \cup \{t\}.$
\textbf{Until}{\quad an action $a_t$ is found.}
}
}
\end{algorithm}
\begin{algorithm}[!h]
\DontPrintSemicolon
\SetAlgoLined
\BlankLine
\caption{BaseSBUCB \label{algo:BaseSBUCB}}
\textbf{Input}: $\Psi_m$.
$A_m = I_d+\sum_{\tau \in \Psi_m}x_{\tau,a_{\tau}}x_{\tau,a_{\tau}}^\top$
$c_m = \sum_{\tau \in \Psi_m} r_{\tau,a_{\tau}}x_{\tau,a_{\tau}}$
$\theta_m = A_m^{-1} c_m$
\textbf{Return} $(\theta_m, A_m)$.
\end{algorithm}
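For concreteness, the regression step in BaseSBUCB is a ridge-style least squares solve. The following Python sketch (illustrative only, with names of our choosing; \texttt{contexts} stacks the vectors $x_{\tau,a_\tau}$ for $\tau\in\Psi_m$) mirrors Algorithm~\ref{algo:BaseSBUCB}, together with the width computation used in SupSBUCB:

```python
import numpy as np

def base_sbucb(contexts, rewards):
    """BaseSBUCB sketch: A_m = I_d + sum_tau x x^T, c_m = sum_tau r x,
    theta_m = A_m^{-1} c_m, over the index set Psi_m."""
    _, d = contexts.shape
    A = np.eye(d) + contexts.T @ contexts
    c = contexts.T @ rewards
    theta = np.linalg.solve(A, c)    # avoids forming A^{-1} explicitly
    return theta, A

def width(A, x, gamma=1.0):
    """Confidence width w_{t,a} = gamma * sqrt(x^T A^{-1} x)."""
    return gamma * np.sqrt(x @ np.linalg.solve(A, x))
```

Here \texttt{np.linalg.solve} replaces the explicit inverse $A_m^{-1}$ for numerical stability; the behavior is otherwise as in the pseudocode.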
More specifically, the master algorithm developed by~\cite{auer2002using} has the following structure: each time step is divided into at most $\log T$ stages. At the beginning of each stage $s$, the learner computes the confidence intervals using only the previous contexts designated as belonging to that stage, and selects any action whose confidence interval is long (exceeding some threshold). If all actions have short confidence intervals, then we end this stage, observe the rewards of the given contexts, and move on to the next stage with a smaller threshold on the length of the confidence interval. In other words, conditional independence is obtained by successively masking and revealing certain information by hand. One can intuitively think of each stage $s$ as a color: each time step $t$ is colored using one of the $\log T$ colors (if colored at all), and when computing confidence intervals and performing regression, only previous contexts with the same color are used, instead of all previous contexts.
Adapting this algorithm to the sequential batch setting is not difficult: we merely keep track
of the sets $\Psi_{m}^s$ ($s \in [\log T]$) for each batch $m$ (rather than for each time step $t$ as in the fully online learning case). Note that we still color each time step $t$; the difference lies in the frequency at which we run BaseSBUCB to compute the confidence bounds and rewards.
Due to the great similarity, we omit the details here and refer to \cite[Section 4.3]{auer2002using}. In particular, by establishing results similar to \cite[Lemmas 15 and 16]{auer2002using}, it is straightforward to show that the regret of the master algorithm SupSBUCB is enlarged by at most a multiplicative factor of $O(\log T)$, which leads to the upper bound in Theorem \ref{thm.adversarial}.
\end{enumerate}
\end{proof}
\subsection{Regret Analysis for Lower bound}\label{subsec.adversarial}
In this section, we establish the regret lower bound and show that for any fixed grid ${\mathcal{T}}=\{t_1,\cdots,t_M\}$ and any learner's policy on this grid, there exists an adversary who can make the learner's regret at least $\Omega(\sqrt{Td}+(T\sqrt{d}/M \wedge T/\sqrt{M}))$ even if $K=2$. Since the lower bound $\Omega(\sqrt{Td})$ has been proved in \cite{chu2011contextual} even in the fully online case, it remains to show the lower bound $\Omega(T\sqrt{d}/M \wedge T/\sqrt{M})$. Note that in the fully online case, the lower bound $\Omega(\sqrt{Td})$ given in \cite{chu2011contextual} is obtained under the same assumption $d^2 \le T$ as in Assumption~\ref{aspn.TKd}.
\begin{proof}[Proof of Statement 2 in Theorem~\ref{thm.adversarial}]
First we consider the case where $M\ge d/2$, and without loss of generality we may assume that $d'=d/2$ is an integer (if $d$ is odd, then we can take $d^\prime = \frac{d-1}{2}$ and modify the subsequent procedure only slightly). By an averaging argument, there must be $d'$ batches $\{i_1,i_2,\cdots,i_{d'}\}\subset [M]$ such that
\begin{align}\label{eq.large_batch}
\sum_{k=1}^{d'} \left(t_{i_k} - t_{i_k-1} \right) \ge \frac{d'T}{M}.
\end{align}
Now $\theta^\star$ is chosen as follows: flip $d'$ independent fair coins to obtain $U_1,\cdots,U_{d'}\in \{1,2\}$, and set $\theta^\star = (\theta_1,\cdots,\theta_d)$ with
$\theta_{2k-1} = \frac{1}{\sqrt{d'}}\mathbbm{1}(U_k = 1), \theta_{2k} = \frac{1}{\sqrt{d'}}\mathbbm{1}(U_k = 2), \forall k\in [d']$.
(If $d$ is odd, then the last component $\theta_d$ is set to $0$.)
Note that $\theta^\star$ is a random variable and clearly $\|\theta^\star\|_2=1$ (surely). Next the contexts are generated in the following manner: for $t\in (t_{m-1},t_{m}]$, if $m=i_k$ for some $k\in [d']$, set $x_{t,1}=e_{2k-1}, x_{t,2}=e_{2k}$, where $e_j$ is the $j$-th basis vector in $\mathbb{R}^d$; otherwise, set $x_{t,1}=x_{t,2}={\bf 0}$.
Now we analyze the regret of the learner in this environment. Clearly, for any $k\in [d']$, the learner has no information about whether $(\theta_{2k-1}, \theta_{2k}) = (1/\sqrt{d'},0)$ or $(0,1/\sqrt{d'})$ before entering the $i_k$-th batch, while an incorrect action incurs an instantaneous regret of $1/\sqrt{d'}$. Consequently, averaged over all possible coin flips $(U_1,\cdots,U_{d'})\in \{1,2\}^{d'}$, the expected regret is at least
\begin{align*}
\frac{1}{2}\sum_{k=1}^{d'} \frac{t_{i_k} - t_{i_k-1}}{\sqrt{d'}} \ge \frac{1}{2\sqrt{2}}\cdot \frac{T\sqrt{d}}{M}
\end{align*}
due to \eqref{eq.large_batch}, establishing the lower bound $\Omega\left(\frac{T\sqrt{d}}{M}\right)$ when $M\ge d/2$.
Next, in the case where $M < d/2$, choose $d^\prime = M$.
Here, we obviously have $\sum_{k=1}^{d'} \left(t_{i_k} - t_{i_k-1} \right) = T.$
In this case, again flip $d'$ independent fair coins to obtain $U_1,\cdots,U_{d'}\in \{1,2\}$, and set $\theta^\star = (\theta_1,\cdots,\theta_d)$ with
$\theta_{2k-1} = \frac{1}{\sqrt{d'}}\mathbbm{1}(U_k = 1), \theta_{2k} = \frac{1}{\sqrt{d'}}\mathbbm{1}(U_k = 2), \forall k\in [d']$.
Set all remaining components of $\theta$ to $0$.
The contexts are generated as follows: for $t\in (t_{m-1},t_{m}], 1\le m \le M$, set $x_{t,1}=e_{2m-1}, x_{t,2}=e_{2m}$.
In this case, we again average over all possible coin flips $(U_1,\cdots,U_{d'})\in \{1,2\}^{d'}$, and the expected regret is at least:
\begin{align*}
\frac{1}{2}\sum_{m=1}^{M} \frac{t_{m} - t_{m-1}}{\sqrt{d'}} = \frac{1}{2}\cdot \frac{T}{\sqrt{M}}.
\end{align*}
Combining the above two cases yields a lower bound of $ \Omega\left(\frac{T\sqrt{d}}{M}\wedge \frac{T}{\sqrt{M}}\right)$.
\end{proof}
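The randomized construction in the proof above (the case $M\ge d/2$) can be sketched in a few lines of Python. This is an illustration under the stated assumptions, with the even-$d$ simplification and all names being ours:

```python
import numpy as np

def adversarial_instance(d, T, grid, rng):
    """Sample the hard instance from the case M >= d/2 (d assumed even here):
    theta* is built from d' = d/2 fair coin flips, and the d' longest batches
    of the grid (whose total length is >= d'T/M by the averaging argument)
    carry the informative contexts (e_{2k-1}, e_{2k})."""
    dp = d // 2                                   # d'
    lengths = np.diff([0] + list(grid))           # batch lengths t_m - t_{m-1}
    informative = np.argsort(lengths)[-dp:]       # indices i_1, ..., i_{d'}
    U = rng.integers(1, 3, size=dp)               # fair coins U_k in {1, 2}
    theta = np.zeros(d)
    for k in range(dp):                           # theta_{2k-1} or theta_{2k} is set
        theta[2 * k + (U[k] == 2)] = 1 / np.sqrt(dp)
    return theta, set(informative.tolist())
```

By construction $\|\theta^\star\|_2=1$, and exactly one coordinate per pair $(2k-1,2k)$ is nonzero, so the learner must guess each coin before the corresponding batch.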
\section{Definitions and Auxiliary Results}\label{appendix.auxiliary}
\begin{definition}
Let $(\mathcal{X}, \mathcal{F})$ be a measurable space and $P$, $Q$
be two probability measures on $(\mathcal{X}, \mathcal{F})$.
\begin{enumerate}
\item The total-variation distance between $P$ and $Q$ is defined as:
$$ \mathsf{TV}(P,Q) = \sup_{A \in \mathcal{F}} |P(A) - Q(A)|.$$
\item The KL-divergence between $P$ and $Q$ is:
\begin{equation*}
D_{\text{\rm KL}}(P\|Q) = \begin{cases}
\int \log \frac{dP}{dQ}\, dP & \text{if } P \ll Q \\
+\infty & \text{otherwise}
\end{cases}
\end{equation*}
\end{enumerate}
\end{definition}
\begin{lemma}\cite[Lemma 2.6]{Tsybakov2008}\label{lemma.TV_KL}
Let $P$ and $Q$ be any two probability measures on the same measurable space. Then
\begin{align*}
1- \mathsf{TV}(P,Q) \ge \frac{1}{2}\exp\left(-D_{\text{\rm KL}}(P\|Q)\right).
\end{align*}
\end{lemma}
\begin{lemma}\cite[Theorem 6.1]{wainwright2019high}
\label{lemma.wishart}
Let $x_1,x_2,\cdots,x_n\sim {\mathcal{N}}(0,I_d)$ be i.i.d. random vectors. Then for any $\delta>0$,
\begin{align*}
\mathbb{P}\left(\sigma_{\max}\left(\frac{1}{n}\sum_{i=1}^n x_ix_i^\top\right) \ge \left(1+\sqrt{\frac{d}{n}}+\delta\right)^2 \right) \le \exp\left(-\frac{n\delta^2}{2}\right),
\end{align*}
where $\sigma_{\max}(A)$ denotes the largest singular value of $A$.
\end{lemma}
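As a quick numerical illustration (not part of the proof), one can check by Monte Carlo that $\sigma_{\max}(X)/\sqrt{n}$, the square root of the largest eigenvalue of the sample covariance of $n$ i.i.d. ${\mathcal{N}}(0,I_d)$ rows, rarely exceeds $1+\sqrt{d/n}+\delta$; the parameter choices below are ours:

```python
import numpy as np

# Monte Carlo sanity check (illustrative only): the rescaled top singular value
# of a standard Gaussian data matrix should rarely exceed 1 + sqrt(d/n) + delta,
# with exceedance probability controlled by exp(-n * delta^2 / 2).
rng = np.random.default_rng(1)
n, d, delta, trials = 200, 10, 0.3, 200
exceed = 0
for _ in range(trials):
    X = rng.standard_normal((n, d))
    smax = np.linalg.svd(X, compute_uv=False)[0] / np.sqrt(n)
    exceed += smax >= 1 + np.sqrt(d / n) + delta
freq = exceed / trials   # compare against exp(-n * delta^2 / 2)
```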
\section{Proof of Main Lemmas}
\subsection{Proof of Lemma \ref{lemma.equator}}
Let $y_{t,a} = \Sigma^{-1/2}x_{t,a}$; then each $y_{t,a}$ is marginally distributed as ${\mathcal{N}}(0,I_d)$. Define
\begin{align*}
B \triangleq \frac{1}{t_m-t_{m-1}}\sum_{t=t_{m-1}+1}^{t_m} y_{t,a_t}y_{t,a_t}^\top.
\end{align*}
Recall that $a_t = \arg\max_{a\in [K]} x_{t,a}^\top \hat{\theta} = \arg\max_{a\in [K]} y_{t,a}^\top (\Sigma^{1/2}\hat{\theta})$ for any $t\in [t_{m-1}+1,t_m]$, and $\hat{\theta}$ is an estimate of $\theta^\star$ that is independent of all contexts in the current batch $ [t_{m-1}+1,t_m]$. By rotational invariance of ${\mathcal{N}}(0,I_d)$, we can without loss of generality assume $\Sigma^{1/2}\hat{\theta}=ce_d$ for some $c>0$. Consequently, each $y_{t,a_t}$ follows the distribution
$\mu_t = {\mathcal{N}}(0,1) \otimes \cdots \otimes {\mathcal{N}}(0,1) \otimes \nu_t,$
where $\nu_t$ is the probability distribution of $\max_{a\in [K]} Z_{t,a}$, with each $Z_{t,a}$ a standard Gaussian (the $Z_{t,a}$'s may be correlated across different $a$'s).
Now for $y=(y_1,y_2,\cdots,y_d)\sim \mu_t$ and any unit vector $u\in \mathbb{R}^d$, we show that there exist numerical constants $c_1,c_2>0$ independent of $(d,K)$ such that
\begin{align}\label{eq.large_prob_fixed_u}
\mathbb{P}\left(|y^\top u| \ge c_1\right) \ge c_2.
\end{align}
To establish \eqref{eq.large_prob_fixed_u}, we distinguish two cases. If $|u_d|<\frac{1}{2}$, using the fact that $\mathbb{P}(|{\mathcal{N}}(0,1)+t|\ge c)$ is minimized at $t=0$ for any fixed $c>0$, we conclude that
\begin{align*}
\mathbb{P}\left(|y^\top u| \ge c_1 \right) \ge \mathbb{P}\left( \left| \sum_{i=1}^{d-1}y_iu_i \right| \ge c_1 \right) = \mathbb{P}(|{\mathcal{N}}(0,1-u_d^2)|\ge c_1) \ge \mathbb{P}\left(\left|{\mathcal{N}}(0,\frac{3}{4})\right|\ge c_1\right)
\end{align*}
is lower bounded by some positive constant. If $|u_d|\ge \frac{1}{2}$, we have
\begin{align*}
\mathbb{P}\left(|y^\top u| \ge c_1 \right)\ge \frac{1}{2}\mathbb{P}\left(|u_dy_d| \ge c_1 \right) \ge \frac{1}{2}\mathbb{P}\left(|y_d| \ge 2c_1 \right) \ge \frac{1}{2}\mathbb{P}\left(Z_{t,1}\ge 2c_1\right) = \frac{1}{2}\mathbb{P}({\mathcal{N}}(0,1)\ge 2c_1),
\end{align*}
which is again lower bounded by a numerical constant. Hence the proof of \eqref{eq.large_prob_fixed_u} is completed.
Based on \eqref{eq.large_prob_fixed_u} and the deterministic inequality
\begin{align*}
u^\top\cdot \left(\frac{1}{t_m-t_{m-1}}\sum_{t=t_{m-1}+1}^{t_m} y_{t,a_t}y_{t,a_t}^\top\right)\cdot u \ge \frac{c_1^2}{t_m-t_{m-1}}\sum_{t=t_{m-1}+1}^{t_m} \mathbbm{1}\left(|y_{t,a_t}^\top u| \ge c_1 \right),
\end{align*}
the Chernoff inequality yields that for any unit vector $u\in \mathbb{R}^d$, we have
\begin{align}\label{eq.concentration_fixed_u}
\mathbb{P}\left( u^\top B u \ge \frac{c_1^2c_2}{2}\right) \ge 1 - e^{-c_3(t_m-t_{m-1})},
\end{align}
where $c_3>0$ is some numerical constant.
Next we prove an upper bound on $\lambda_{\max}(B)$, i.e., the largest eigenvalue of $B$. Since $(a+b)(a+b)^\top \preceq 2(aa^\top + bb^\top)$ for any vectors $a,b\in\mathbb{R}^d$, for $y_t\sim \mu_t$ we have
\begin{align*}
y_ty_t^\top \preceq 2(v_tv_t^\top + w_tw_t^\top),
\end{align*}
where $v_t=(v_{t,1},\cdots,v_{t,d-1},0)$ with $v_{t,i}\sim{\mathcal{N}}(0,1)$, and $w_t=(0,\cdots,0,w_{t,d})$ with $w_{t,d}\sim \nu_t$. By concentration of Wishart matrices (cf. Lemma \ref{lemma.wishart}), with probability at least $1-e^{-\Omega(t_m-t_{m-1})}$,
\begin{align*}
\lambda_{\max}\left(\frac{1}{t_m-t_{m-1}}\sum_{t=t_{m-1}+1}^{t_m} v_tv_t^\top \right) \le c_4
\end{align*}
holds for some numerical constant $c_4>0$. For the second term, since $w_{t,d}\sim \nu_t$ is the maximum of $K$ (possibly correlated) ${\mathcal{N}}(0,1)$ random variables, the Gaussian tail bound and the union bound imply that $|w_{t,d}|\le \sqrt{c_5\log(KT)}$ with probability at least $1-O(T^{-5})$. Hence, with probability at least $1 - O(T^{-4})$, we have
\begin{align*}
\lambda_{\max}\left(\frac{1}{t_m-t_{m-1}}\sum_{t=t_{m-1}+1}^{t_m} w_tw_t^\top \right) = \frac{1}{t_m-t_{m-1}}\sum_{t=t_{m-1}+1}^{t_m} w_{t,d}^2 \le c_5\log (KT).
\end{align*}
Combining all the previous results, and using $\lambda_{\max}(A+B)\le \lambda_{\max}(A)+\lambda_{\max}(B)$ for symmetric matrices $A,B$, we conclude that with probability at least $1-e^{-\Omega(t_m-t_{m-1})} - O(T^{-4})$,
\begin{align}\label{eq.lambda_max}
\lambda_{\max}(B) \le c_6\log (KT)
\end{align}
for some numerical constant $c_6>0$.
Finally, we are ready to prove a lower bound on $\lambda_{\min}(B)$ via an $\varepsilon$-net argument. Let ${\mathcal{N}}_d(\varepsilon)$ be an $\varepsilon$-net of the unit ball in $\mathbb{R}^d$ (both in $\ell_2$ norm) with cardinality at most $(1+\frac{2}{\varepsilon})^d$. Standard $\varepsilon$-net techniques (cf. \cite[Section 2.3.1]{tao2012topics}) give
\begin{align*}
\min_{u: \|u\|_2=1} u^\top Bu \ge \min_{u\in {\mathcal{N}}_d(\varepsilon)} u^\top Bu - 2\varepsilon\lambda_{\max}(B).
\end{align*}
Hence, choosing $\varepsilon = \frac{c_1^2c_2}{8c_6\log (KT)}$ and combining \eqref{eq.concentration_fixed_u}, \eqref{eq.lambda_max} and the union bound over ${\mathcal{N}}_d(\varepsilon)$ gives
\begin{align*}
\mathbb{P}\left(\lambda_{\min}(B) \ge \frac{c_1^2c_2}{4}\right) \ge 1 - e^{O(d\log\log (KT)) - \Omega(t_m-t_{m-1})} - O(T^{-4}).
\end{align*}
By noting that $t_m - t_{m-1} = \Omega(d\sqrt{T})$ due to the choice of the grid in \eqref{eq.minimax_grid}, the parameter $a$ in \eqref{eq.a}, and the assumption $M=O(\log\log T)$, we conclude that $\lambda_{\min}(B) \ge c_7$ for some numerical constant $c_7>0$ with probability at least $1 - O(T^{-4})$. The proof is completed by noting that
$$
\frac{1}{t_m - t_{m-1}}\sum_{t = t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top = \Sigma^{1/2} B \Sigma^{1/2} \succeq \Sigma^{1/2} (c_7I_d) \Sigma^{1/2}= c_7\Sigma
$$
whenever $\lambda_{\min}(B) \ge c_7$, combined with the assumption $\lambda_{\min}(\Sigma)\ge \kappa/d$.
\qed
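As a sanity check of the argument above (purely illustrative, not part of the proof), one can simulate the distribution $\mu_t$ with the last coordinate taken to be the maximum of $K$ independent standard Gaussians, a special case of $\nu_t$, and observe that the smallest eigenvalue of $B$ indeed stays bounded away from zero; the constants below are our choices:

```python
import numpy as np

# Illustrative simulation: y has first d-1 coordinates i.i.d. N(0,1) and last
# coordinate the max of K independent N(0,1)'s (a special case of nu_t); the
# empirical second-moment matrix B should have lambda_min bounded away from 0.
rng = np.random.default_rng(2)
n, d, K = 2000, 8, 5
Y = rng.standard_normal((n, d))
Y[:, -1] = rng.standard_normal((n, K)).max(axis=1)   # y_d ~ max of K N(0,1)
B = Y.T @ Y / n
lam_min = np.linalg.eigvalsh(B)[0]                   # smallest eigenvalue of B
```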
\subsection{Proof of Lemma \ref{lemma.Q}}
Let $v_1,\cdots,v_d$ be an orthonormal basis of $\mathbb{R}^d$ with $v_1=u_t$. By rotational invariance of the uniform distribution on spheres, we have $(v_1^\top \theta, v_2^\top \theta, \cdots, v_d^\top \theta)\sim \mathsf{Unif}(\Delta\mathbb{S}^{d-1})$ under $Q_0$. Now recall that
\begin{align*}
\frac{dQ_1}{dQ_0}(\theta) = \frac{r_t\jiao{v_1,\theta}_+}{Z_0}, \qquad \frac{dQ_2}{dQ_0}(\theta) = \frac{r_t\jiao{v_1,\theta}_-}{Z_0},
\end{align*}
it follows that for $\theta' = \theta - 2(v_1^\top \theta)v_1$, we have
\begin{align*}
\frac{dQ_1}{dQ_0}(\theta) = \frac{dQ_2}{dQ_0}(\theta').
\end{align*}
As a result, $\theta\sim Q_1$ if and only if $\theta' = \theta - 2(v_1^\top \theta)v_1\sim Q_2$.
For the identity \eqref{eq.Z_0}, recall that the density of $\theta=(\theta_1,\cdots,\theta_d)\sim \mathsf{Unif}(\Delta\mathbb{S}^{d-1})$ is
\begin{align}\label{eq.uniform_density}
f(\theta) = f(\theta_2,\cdots,\theta_d) = \left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1}\frac{2\Delta}{\sqrt{\Delta^2 - \theta_2^2 - \cdots - \theta_d^2}}\cdot \mathbbm{1}\left(\sum_{i=2}^d \theta_i^2 \le \Delta^2\right),
\end{align}
where $\Gamma(t)=\int_0^\infty x^{t-1}e^{-x}dx$ is the Gamma function. Hence, by rotational invariance, we have
\begin{align*}
Z_0 &= \frac{r_t}{2}\mathbb{E}_{Q_0}[|\theta_1|] = r_t\Delta\cdot \int_{\sum_{i=2}^d \theta_i^2 \le \Delta^2} \left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1} d\theta_2\cdots d\theta_d \\
&= r_t\Delta\left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1}\cdot \frac{\Delta^{d-1} \pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2}+1)} = r_t\Delta\cdot\begin{cases}
\frac{2^d}{\pi d}\binom{d}{d/2}^{-1}, & \text{if }d\text{ is even} \\
\frac{1}{2^d}\binom{d-1}{(d-1)/2}, & \text{if }d\text{ is odd}
\end{cases}.
\end{align*}
Using Stirling's approximation $\sqrt{2\pi n}(\frac{n}{e})^n\le n!\le e\sqrt{n}(\frac{n}{e})^{n}$ for any $n\ge 1$, we have
\begin{align}\label{eq.combinatorics}
\frac{2}{e^2}\sqrt{\frac{\pi}{n}}\le \frac{1}{2^{2n}}\binom{2n}{n} \le \frac{e}{\pi\sqrt{2n}}
\end{align}
for all $n\ge 1$, and the rest of \eqref{eq.Z_0} follows from \eqref{eq.combinatorics}.
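The two-sided estimate \eqref{eq.combinatorics} is easy to check numerically (a purely illustrative sketch; function names are ours):

```python
import math

def central_ratio(n):
    """2^{-2n} * binom(2n, n), the normalized central binomial coefficient."""
    return math.comb(2 * n, n) / 4**n

def sandwich(n):
    """Lower and upper bounds from the Stirling estimate in the text:
    (2/e^2) sqrt(pi/n) and e / (pi sqrt(2n))."""
    lo = (2 / math.e**2) * math.sqrt(math.pi / n)
    hi = math.e / (math.pi * math.sqrt(2 * n))
    return lo, hi
```

Since $2^{-2n}\binom{2n}{n}\approx 1/\sqrt{\pi n}$, both bounds are tight up to constants.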
As for the second moment in \eqref{eq.second_moment}, we use the spherical coordinates
\begin{align*}
\left\{
\begin{array}{lr}
\theta_2 = r\cos\varphi_1, \\
\theta_3 = r\sin\varphi_1\cos\varphi_2, \\
\vdots \\
\theta_{d-1} = r\sin\varphi_1\sin\varphi_2\cdots\sin\varphi_{d-3}\cos\varphi_{d-2}, \\
\theta_d = r\sin\varphi_1\sin\varphi_2\cdots\sin\varphi_{d-3}\sin\varphi_{d-2}.
\end{array}
\right.
\end{align*}
to obtain
\begin{align*}
\mathbb{E}_{Q_1}[(v_1^\top \theta)^2] &= \frac{r_t}{2Z_0}\cdot \mathbb{E}_{Q_0}[|\theta_1|^3] \\
&= \frac{r_t\Delta}{Z_0}\cdot \int_{\sum_{i=2}^d \theta_i^2 \le \Delta^2} \left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1} (\Delta^2-\theta_2^2-\cdots-\theta_d^2) d\theta_2\cdots d\theta_d \\
&= \frac{r_t\Delta}{Z_0}\cdot \int_0^\Delta \int_0^\pi \cdots \int_0^\pi \int_0^{2\pi} \left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1} (\Delta^2-r^2) \\
&\qquad \cdot r^{d-2}\sin^{d-3}(\varphi_1) \sin^{d-4}(\varphi_2)\cdots \sin(\varphi_{d-3}) drd\varphi_1\cdots d\varphi_{d-2} \\
&= \frac{r_t\Delta}{Z_0}\left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1}\cdot \frac{2\Delta^{d+1}}{d^2-1}\cdot \frac{\Gamma(\frac{d-2}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{d-1}{2})}\cdot \frac{\Gamma(\frac{d-3}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{d-2}{2})}\cdot \cdots \cdot \frac{\Gamma(1)\Gamma(\frac{1}{2})}{\Gamma(\frac{3}{2})}\cdot 2\pi \\
&= \frac{r_t\Delta}{Z_0}\left(\frac{d\pi^{d/2}\Delta^{d-1}}{\Gamma(\frac{d}{2}+1)}\right)^{-1}\cdot \frac{2\Delta^{d+1}}{d^2-1}\cdot\frac{2\pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2})} \\
&= \frac{2\Delta^2}{d+1}.
\end{align*}\qed
\section{Problem Formulation}
\label{sec:problem_formulation}
We introduce the problem of sequential batch learning on finite-action linear contextual bandits.
\subsection{Notation}
We start by fixing some notation that will be used throughout the paper. For a positive integer $n$, let $[n]\triangleq\{1,\cdots,n\}$. For real numbers $a,b$, let $a\wedge b\triangleq \min\{a,b\}$. For a vector $v$, let $v^\top$ and $\|v\|_2$ be the transpose and $\ell_2$ norm of $v$, respectively. For square matrices $A,B$, let $\mathsf{Tr}(A)$ be the trace of $A$, and let $A\preceq B$ denote that the difference $B-A$ is symmetric and positive semi-definite. We adopt the standard asymptotic notations: for two non-negative sequences $\{a_n\}$ and $\{b_n\}$, let $a_n=O(b_n)$ iff $\limsup_{n\to\infty} a_n/b_n<\infty$, $a_n=\Omega(b_n)$ iff $b_n=O(a_n)$, and $a_n=\Theta(b_n)$ iff $a_n=O(b_n)$ and $b_n=O(a_n)$. We also write $\tilde{O}(\cdot), \tilde{\Omega}(\cdot)$ and $\tilde{\Theta}(\cdot)$ to denote the respective meanings within multiplicative logarithmic factors in $n$. For probability measures $P$ and $Q$, let $P\otimes Q$ be the product measure with marginals $P$ and $Q$. If measures $P$ and $Q$ are defined on the same probability space, we denote by $\mathsf{TV}(P,Q) = \frac{1}{2}\int |dP-dQ| $ and $ D_{\text{KL}}(P\|Q) = \int dP\log\frac{dP}{dQ}$ the total variation distance and Kullback--Leibler (KL) divergences between $P$ and $Q$, respectively.
\subsection{Decision Procedure and Reward Structures}
Let $T$ be the time horizon of the problem. At the beginning of each time $t\in [T]$, the decision maker observes a set of $K$ $d$-dimensional feature vectors (i.e. contexts) $\{x_{t,a} \mid a \in [K]\} \subseteq \mathbb{R}^d$ corresponding to the $t$-th unit.
If the decision maker selects action $a \in [K]$, then a reward $r_{t,a} \in \mathbb{R}$ corresponding to time $t$ is incurred (although not necessarily immediately observed).
We assume the mean reward is linear: that is, there exists an underlying (but unknown) parameter
$\theta^\star$ such that
$$r_{t,a} = x_{t,a}^\top \theta^\star + \xi_t,$$
where $\{\xi_t\}_{t=0}^{\infty}$ is a sequence of zero-mean independent sub-Gaussian random variables with a uniform upper bound on the sub-Gaussian constants. Without loss of generality and for notational simplicity,
we assume each $\xi_t$ is $1$-sub-Gaussian: $\mathbf{E}[e^{\lambda \xi_t}] \le e^{\lambda^2/2}, \forall t, \forall \lambda \in \mathbb{R}$.
Further, without loss of generality (via normalization), we assume $\|\theta^\star\|_2\le 1$.
We denote by $a_t$ and $r_{t, a_t}$ the action chosen and the reward obtained at time $t$, respectively.
Note that both are random variables; in particular, $a_t$ is random either because the action is randomly selected based on the contexts $\{x_{t,a} \mid a \in [K]\}$ or because the contexts $\{x_{t,a} \mid a \in [K]\}$
are random, or both.
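A minimal simulation of this reward model, assuming standard normal noise (which is $1$-sub-Gaussian) and randomly drawn unit-norm contexts, might look as follows; all names and the uniform action choice are illustrative stand-ins:

```python
import numpy as np

def sample_round(theta_star, K, rng):
    """One round of the model: K unit-norm contexts, an (arbitrary) action, and
    a reward r_{t,a} = x_{t,a}^T theta* + xi_t with standard normal noise.
    The random contexts and uniform action are illustrative choices only."""
    d = theta_star.shape[0]
    X = rng.standard_normal((K, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # normalize each context
    a = rng.integers(K)                             # stand-in for a policy's choice
    r = X[a] @ theta_star + rng.standard_normal()   # linear mean reward plus noise
    return X, a, r
```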
As there are different (but equivalent) formulations of contextual bandits, we briefly discuss the meaning of the above abstract quantities and how they arise in practice. In general, at each round $t$, an individual characterized by $v_t$ (a list of characteristics associated with that individual) becomes available.
When the decision maker decides to apply action $a_t$ to this individual,
a reward $y_t(v_t, a_t)$, which depends (stochastically) on both $v_t$ and $a_t$, is obtained. In practice, for both modelling and computational reasons, one often first featurizes the individual characteristics and the actions.
In particular, with sufficient generality, one assumes $\mathbf{E}[y_t(v_t, a_t) \mid v_t, a_t] = g_{\theta} (\phi(v_t, a_t))$,
where $g_{\theta}(\cdot)$ is the parametrized mean reward function and $\phi(v_t, a_t)$ extracts the features from the given raw individual characteristics $v_t$ and action $a_t$. In the above formulation,
as is standard in the literature, we assume the feature map $\phi(\cdot)$ is known and given and $x_{t,a} = \phi(v_t, a)$. Consequently, we directly assume access to contexts $\{x_{t,a} \mid a \in [K]\}$.
Note that the linear contextual bandits setting then corresponds to $g_{\theta}(\cdot)$ being linear.
\subsection{Sequential Batch Learning}
In the standard online learning setting, the decision maker immediately observes the reward $r_{t, a_t}$ after selecting action $a_t$ at time $t$. Consequently, in selecting $a_t$, the decision maker can base his decision on all the past contexts $\{x_{\tau,a} \mid a \in [K], \tau\le t\}$ and all the past rewards $\{r_{\tau,a_\tau} \mid \tau\le t-1\}$.
In contrast, we consider a \textit{sequential batch learning} setting, where the decision maker is only allowed to partition the $T$ units into (at most) $M$ batches, and the reward corresponding to each unit in a batch can only be observed at the end of the batch. More specifically, given the maximum number of batches $M$, the decision maker needs to choose a sequential batch learning algorithm \textbf{Alg} with the following two components:
\begin{enumerate}
\item A \emph{grid} ${\mathcal{T}}=\{t_1,t_2,\cdots,t_M\}$, with $0 = t_0 < t_1<t_2<\cdots<t_M=T$.
Intuitively, this grid partitions the $T$ units into $M$ batches: the $k$-th batch contains units $t_{k-1}+1$ to $t_k$. Note that without loss of generality, the decision maker will always choose a grid of $M$ batches (despite being allowed to choose fewer), since using fewer batches only decreases the amount of information available and hence yields worse performance. More formally, for any sequential batch learning algorithm that uses a grid of size less than $M$, there exists another sequential batch learning algorithm that uses a grid of size $M$ whose performance is no worse.
\item A sequential batch policy $\pi = (\pi_1, \pi_2, \dots, \pi_T)$ such that each $\pi_t$ can only use reward information from all the prior batches (as well as all the contexts that can be observed up to $t$). More specifically, for any $t \in [T]$, define the batched history $\mathcal{H}^t = \{x_{\tau,a} \mid a \in [K]\}_{\tau=1}^t \cup \{r_{\tau, a_\tau}\}_{\tau=1}^{j(t) - 1}$, where $j(t)$ is the unique integer satisfying
$t_{j(t)-1}<t\le t_{j(t)}$. Intuitively, there are $j(t) - 1$ batches prior to unit $t$: only the rewards of those batches have been observed. With this definition, a sequential batch policy is any policy $\pi$ such that
each $\pi_t$ is adapted to $\mathcal{H}^t$ for each $t \in [T]$.
\end{enumerate}
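The information structure described above, with contexts visible immediately but rewards revealed only at batch boundaries, can be sketched as a skeleton loop; \texttt{policy} and \texttt{env} below are placeholders of our choosing, not part of the formal model:

```python
def run_sequential_batches(grid, policy, env, T):
    """Skeleton of the sequential batch protocol: within a batch, the policy
    sees all contexts so far but only the rewards of *completed* batches.
    `policy` and `env` are illustrative placeholders."""
    assert grid[-1] == T                  # the grid must end at t_M = T
    observed = {}                         # tau -> r_{tau, a_tau}, revealed batchwise
    contexts = {}
    t_prev = 0
    for t_m in grid:                      # grid: 0 < t_1 < ... < t_M = T
        pending = {}
        for t in range(t_prev + 1, t_m + 1):
            contexts[t] = env.contexts(t)
            a = policy(contexts, observed, t)
            pending[t] = env.reward(t, a)  # accrued but not yet visible
        observed.update(pending)           # batch ends: rewards revealed
        t_prev = t_m
    return observed
```

Note how \texttt{pending} holds the current batch's rewards so the policy cannot condition on them until the batch closes, matching the definition of $\mathcal{H}^t$.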
\begin{remark}
$M = T$ yields the standard online learning setting, where the decision maker need not select a grid. Consequently, the sequential batch learning setting has a more complex decision space: one that entails selecting both the grid and the policy.
\end{remark}
\subsection{Performance Metric: Regret}
To assess the performance of a given sequential batch learning algorithm \textbf{Alg}, we compare the cumulative reward obtained by \textbf{Alg} to the cumulative reward obtained by an \textbf{optimal} policy, if the decision maker had the prescient knowledge of the optimal action for each unit (i.e. an oracle that knows $\theta^\star$). This is formalized by the following definition of regret.
\begin{definition}\label{def:regret}
Let \textbf{Alg} $= ({\mathcal{T}}, \pi)$ be a sequential batch learning algorithm.
The regret of \textbf{Alg} is:
\begin{align}\label{eq.regret}
R_T(\textbf{Alg}) \triangleq \sum_{t=1}^T \left( \max_{a\in [K]}x_{t,a}^\top \theta^\star - x_{t,a_t}^\top \theta^\star \right),
\end{align}
where $a_1, a_2, \dots, a_T$ are actions generated by \textbf{Alg} in the online decision making process.
\end{definition}
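Given realized contexts and actions, the regret \eqref{eq.regret} is straightforward to compute; the following sketch (with illustrative names; \texttt{contexts[t]} is the $K\times d$ matrix of round-$t$ contexts) mirrors the definition:

```python
import numpy as np

def regret(contexts, actions, theta_star):
    """Realized regret: sum over t of max_a x_{t,a}^T theta* minus
    x_{t,a_t}^T theta*. Names are illustrative."""
    total = 0.0
    for X_t, a_t in zip(contexts, actions):
        means = X_t @ theta_star          # x_{t,a}^T theta* for each action a
        total += means.max() - means[a_t]
    return total
```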
\begin{remark}
Although the form of regret defined here is the same as that in standard online learning, the goal here is much more ambitious, because batches induce delays in obtaining reward feedback, and hence the decision maker cannot immediately incorporate the feedback into his subsequent decision making process.
Nevertheless, we still make the performance comparison to the oracle that is used in the standard online learning setting.
Further, note that $R_T(\textbf{Alg})$ is a random variable since as mentioned earlier, $a_t$ is random.
For simplicity, we will mostly focus on bounding the expected regret, where the expectation is taken with respect to all sources of randomness (to be made precise later in various settings). However, high-probability regret bounds can also be obtained in our setting, and we will discuss them in relevant places throughout the paper.
\end{remark}
\subsection{Adversarial Contexts vs. Stochastic Contexts}
The regret defined in Definition~\ref{def:regret} is also a function of all the contexts that arrive over a horizon of $T$.
In the bandits literature, depending on how these contexts are generated, there are two main categories.
The first category is adversarial contexts, where at each $t$, an adversary can choose the contexts $\{x_{t,a} \mid a \in [K]\}$ that will be revealed to the decision maker. The second category is stochastic contexts, where at each $t$, $\{x_{t,a} \mid a \in [K]\}$ is drawn i.i.d. from some fixed underlying distribution.
In the standard online learning case (i.e., $M=T$), the optimal regret bounds under adversarial or stochastic contexts are both $\tilde{\Theta}(\sqrt{dT})$ (see~\cite{chu2011contextual} and Theorem \ref{thm.stochastic} below). However, it turns out that in the sequential batch setting, there is a sharp difference between the adversarial contexts case and the stochastic contexts case.
In particular, it turns out that the power of the adversary in choosing the contexts has a profound impact on what regret bounds can be obtained. Consequently, we divide our presentation into these two distinct regimes: we study the adversarial contexts case in Section~\ref{sec:adversarial} and the stochastic contexts case in Section~\ref{sec:stochastic}.
In each of these two settings, we give upper and lower bounds for regret.
Finally, throughout the paper, we consider the standard low-dimensional contextual bandits regime where the action set is not too large, as made precise by the following assumption.
\begin{assumption}\label{aspn.TKd}
$K=O(\mathsf{poly}(d))$ and $T\ge d^2$.
\end{assumption}
That is, we are assuming that the number of actions cannot be too large (at most polynomial in the dimension of the context), for there is a phase transition on the regret when $K$ can be exponential in $d$ (see~\cite{abe2003reinforcement}). Further, the time horizon must be large enough, otherwise one cannot even estimate the true parameter $\theta^\star$ within a constant $\ell_2$ risk at the end of the time horizon $T$.
Equivalently, we are in the low-dimensional contextual bandits setting, where the number of covariates is small compared to the number of samples we receive over the entire horizon.
\section{Introduction}
\label{sec:intro}
With the rapid advances of digitization of the economy, massive amounts of user-specific data have become increasingly available. Among its varied implications, one that holds center-stage importance is the advent of the new era of data-driven personalized decision making: equipped with such user-specific data, decision makers across a wide variety of domains are now able to personalize their service decisions based on individuals' characteristics, thereby improving outcomes. Fundamentally, such improved outcomes are achieved by intelligently exploring and exploiting the heterogeneity in a given population, which manifests itself as different individuals responding differently to the same treatments/recommendations/actions.
Such heterogeneity is ubiquitous across a variety of applications, including
medical treatment selection in clinical trials, product recommendation in marketing,
order provisioning in inventory management,
hospital staffing in operating rooms, and ads selection in online advertising~\cite{bertsimas2007learning, LCLS2010, kim2011battle, he2012timing, chapelle2014, chen2015statistical, bastani2015online, SBF2017, ferreira2018online, ban2019big}.
Situated in this broader context and rising to materialize the value from personalization, contextual bandits have emerged to be the predominant mathematical framework that is at once rich and elegant. Its three modelling cores, contexts, actions, and rewards (representing individual features, recommendations and outcomes respectively), capture the salient aspects of personalized decision making and provide fertile ground for developing algorithms that contribute to making quality decisions at the fine-grained individual level.
As such, in the past decade, dedicated efforts have been devoted to this area, yielding a flourishing line of literature. Broadly speaking, the existing contextual bandits literature falls into two main categories: offline contextual bandits and online contextual bandits.
In offline contextual bandits \cite{langford2011doubly, zhang2012estimating,zhao2012estimating, zhao2014doubly, JMLR:v16:swaminathan15a, rakhlin2016bistro, athey2017efficient, kitagawa2018should, kallus2018confounding, zhou2018offline, deep-learning-logged-bandit-feedback}, the decision maker is given a full batch dataset that has been collected from historical observations.
The decision maker aims to learn from this dataset an effective policy (i.e. a function that maps contexts to decisions) that will be deployed in the future and that (hopefully) yields good performance.
As such, in offline contextual bandits, the decision maker is not allowed to perform active learning: selecting a policy in this setting is one-shot and the decision maker is solely concerned with identifying the best policy using available data.
On the other hand, in online contextual bandits, the decision maker actively interacts with the data-collection process: as data arrive sequentially, the decision maker can adapt his decisions based on what has been observed in the past, thereby deciding what data is collected. Typically, in such settings, a decision is made on the current individual based on all the past feedback, yielding an outcome that is immediately observed and incorporated to make the next decision. A rich literature has studied this setting (see \cite{LCLS2010,rusmevichientong2010linearly,FCGS2010, rigollet2010nonparametric, chu2011contextual, goldenshluger2013linear, AG2013a, AG2013b, RV2014, russo2016information, JBNW2017, LLZ2017,abeille2017linear,dimakopoulou2017estimation,li2017provably} for a highly incomplete list) and has developed online contextual bandits algorithms
(most notably UCB-based algorithms and Thompson sampling based algorithms)
that effectively balance the exploration-exploitation trade-off, a key challenge therein.
See further \cite{BCN2012, lattimore2018bandit,slivkins2019introduction} for three articulate surveys that more systematically describe the field.
However, in practice, it is often the case that neither setting provides a close approximation of the underlying reality. Specifically, that the decision maker can only perform one-shot policy learning--the key setting in offline contextual bandits--is simply too restrictive and pessimistic for almost all applications. On the other hand, assuming that the decision maker can constantly observe and incorporate feedback at a per-individual scale--the key setting in online contextual bandits--is an over-simplification and too optimistic for many applications. In fact, reality often stands somewhere in between: decision makers across many applications are typically able to perform active learning and incorporate feedback from the past to adapt their decisions in the future; however, due to physical and cost constraints, such adaptation is often limited to a fixed number of rounds of interaction.
For instance, in clinical trials~\cite{robbins1952some,thompson1933likelihood,kim2011battle}, each trial involves applying medical treatments to a group of patients, where the medical outcomes are observed and collected for the entire group at the end of the trial.
The data collected from previous trials are then analyzed to design the medical treatment selection schemes for the next trial.
Note that as the medical information from previous trials is incorporated to inform
treatment selection in future trials, medical decision makers do have
flexibility in adaptive learning. However, such flexibility is limited since, in practice, only a limited number of trials (e.g. $3$) can be conducted, far fewer than the number of patients in the trials, hence rendering the standard online learning models inapplicable here.
Another example where adaptive learning is possible but with batch constraints is product recommendation in marketing~\cite{bertsimas2007learning, schwartz2017customer}.
In this case, the marketer sends out product offers to a batch of customers at once when running a promotions campaign. Customers' feedback will then be batch collected at the end and analyzed to design the next round of promotions in the targeted population. Other examples include crowdsourcing~\cite{kittur2008crowdsourcing} and simulations~\cite{chick2009economic}, where both cases exhibit the characteristic that a single-run experiment consists of a batch of individuals.
\subsection{Related Work}
Motivated by these considerations, we study the problem of sequential batch learning in linear contextual bandits in this paper, where a decision maker can adaptively learn to make decisions subject to the batch constraints that, as described above, commonly occur in practice.
This batch-constrained bandit problem has recently been studied in the simple multi-armed bandits (MAB) case. In particular, \cite{perchet2016batched} studied sequential batch learning in the 2-armed MAB problem,
where they gave a successive elimination algorithm within each batch and established that
$O(\log \log T)$ batches are needed in order to achieve the same regret bound as in the standard online learning setting. Very recently, \cite{gao2019batched} generalized the results to the $K$-armed bandit setting (despite the seeming simplicity, the generalization is not easy) and obtained a tight $\Theta(\log \log T)$ result therein even when the batch sizes can be chosen adaptively.
However, these initial efforts on MAB settings, despite providing interesting insights, cannot capture
individuals' characteristics: in MABs, decisions can only be made at a population level (i.e. the decision maker aims to select an action that is best for the entire population), rather than personalized at an individual level, which severely limits their practical applicability. In this paper, we fill this gap by providing
a comprehensive inquiry into sequential batch learning in linear contextual bandits with a finite number of actions, a canonical setting where decisions are provisioned based on individual features.
Our goal is then to delineate, in this more general setting, how the batch constraints impact the performance of adaptive decision making and characterize in depth the fundamental limits therein, thereby shedding light on how practical adaptive decision making can be most efficiently done in practice.
Our work is also related to but distinct from learning on bandits (either contextual or MABs) with delayed feedback~\cite{NAGS2010, DHKKLRZ2011, JGS2013, QK2015, GST2016, bis2019, zhou2019learning}: since the decision maker is not able to observe rewards in the interim of a batch (as they only come at the batch's end), feedback is delayed from the decision maker's perspective.
However, a key difference exists between learning in bandits with delays and our sequential batch learning model: the former setting works with exogenous delays--drawn either from some stochastic process or from some arbitrary sequence--that, completely contrary to the latter setting, are neither influenced nor known by the decision maker. The literature on learning in bandits with delayed feedback then develops algorithms adapted to the presence of delays and studies how regret bounds scale as a function of the underlying delays. On the other hand, feedback delays in sequential batch learning are endogenous and arise as a consequence of the batch constraints; in particular, the decision maker in this setting has full discretion in choosing the batch sizes, which in turn determine how the feedback for each individual is delayed. Consequently, the frameworks provided by the learning-in-bandits-with-delays literature--both the algorithms and the analyses--are not applicable here. Of course, it should also be pointed out that
when viewed through the lens of learning with delayed feedback, the delays in our setting exhibit a particular structure: if a batch has size $B$, then the reward for the first item in the batch is delayed by $B-1$ time units, the reward for the second item in the batch is delayed by $B-2$ time units, and so on, with the reward for the last item in the batch incurring no delay.
Consequently, our results here also do not directly imply regret bounds in that literature.
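To make this delay structure concrete, here is a small illustrative sketch (our own, not part of the paper) that computes the implied feedback delay of every time step under a given batch grid, where the grid lists the end times of the batches:

```python
def feedback_delays(grid):
    """Given batch end times t_1 < ... < t_M (with t_0 = 0), return the
    feedback delay of each time step: rewards in a batch are only revealed
    at the batch's end, so the j-th unit of a size-B batch waits B - j steps."""
    delays = []
    prev = 0
    for t_end in grid:
        batch_size = t_end - prev
        # first unit waits batch_size - 1 steps, last unit waits 0 steps
        delays.extend(range(batch_size - 1, -1, -1))
        prev = t_end
    return delays

# three batches of sizes 3, 2 and 4: delays [2,1,0], [1,0], [3,2,1,0]
delays = feedback_delays([3, 5, 9])
```

Unlike the exogenous-delay models, the entire delay sequence here is determined by the decision maker's choice of the grid.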
\subsection{Our Contributions}
We study the sequential batch learning problem in linear contextual bandits when the number of actions is finite (and not too large, to be made precise later). Our main contributions are twofold.
First, we consider the adversarial contexts setting, where the contexts can be generated arbitrarily or even adversarially. In this setting, we provide a UCB-style algorithm that is adapted to the sequential batch setting and achieves $\mathsf{polylog}(T)\cdot (\sqrt{dT} + dT/M)$ expected regret,
where $d$ is the dimension of the context, $M$ is the number of allowed batches and $T$ is the learning horizon. This regret bound highlights that in the adversarial context setting, $\Theta(\sqrt{dT})$ batches--rather than $T$ batches--are sufficient to achieve the $\tilde{\Theta}(\sqrt{dT})$ regret bound that is minimax optimal in the standard online learning setting. Further, we characterize the fundamental limits of learning in this setting by establishing a $c\cdot (\sqrt{dT} + \min \{ T\sqrt{d}/M, T/\sqrt{M}\})$ lower bound for the expected regret, where $c$ is some universal constant.
This regret lower bound highlights that at least $\Omega(\sqrt{T})$ batches are needed for any algorithm to achieve the optimal performance of standard online learning (when no batch constraints are present). Consequently, our regret bound is tight up to at most a factor of $O(\sqrt{d})$ (and other polylog factors). In the common low-dimensional regime (i.e. $d$ considered a constant when compared to $T$), our regret bounds provide a complete characterization and indicate that the proposed algorithm is optimal.
Second, we consider the stochastic contexts setting, where the contexts are generated \textit{iid} from a Gaussian distribution. In this case, we reveal an interesting phenomenon that stands in sharp contrast to the adversarial contexts setting: a simple pure-exploitation algorithm alone can achieve the $\tilde{\Theta}(\sqrt{dT})$ regret bound (which, as mentioned above, is minimax optimal in standard online learning) using only $O(\log\log(T/d^2))$ batches, far less than what is required in the adversarial contexts setting. More specifically, we establish a $\mathsf{polylog}(T)\cdot \sqrt{dT}(T/d^2)^{{1}/{[2(2^M-1)]}}$ upper bound
and a $c\cdot \sqrt{dT}({T}/{d^2})^{{1}/{[2(2^M-1)]}}$ lower bound for the expected regret, respectively. Consequently, up to polylog factors, our regret bound is minimax optimal, indicating that
$O(\log\log ({T}/{d^2}))$ batches are also necessary to achieve the optimal performance of the standard online learning setting.
To further appreciate this result, and to get some intuition into why such bounds are possible, consider the special case where $M = 3$ (not uncommon in a typical clinical trial) and the context distribution follows $\mathcal{N}(0, {\mathbf{I}_d}/{d})$ (dividing the identity matrix by $d$ simply ensures that the norm of each context is bounded by $1$ in mean square).
In this case, our bound indicates that the optimal performance is $\tilde{O}(d^{{5}/{14}} T^{{4}/{7}})$, which is already quite close to $O(\sqrt{dT})$; further,
pure exploitation--selecting the best action given our current estimate--on each batch
is able to achieve this regret bound.
Why?
To get a rough sense, let's allocate the $T$ units into three batches in the following way: the first batch contains $O(d^{{6}/{7}} T^{{4}/{7}})$ units, the second batch contains $O(d^{{2}/{7}} T^{{6}/{7}})$ units and the third batch contains the rest.
Now, for the moment, let's assume that the contexts selected over time are \textit{iid}: of course they are not \textit{iid}, because how one context is chosen at time $t$ depends critically on how contexts were chosen in the previous times (otherwise, there is no learning that occurs); but for simplicity, let's assume they are.
Then the regret incurred on the first batch--since we haven't observed anything and hence know nothing--is $\tilde{O}({d^{{6}/{7}} T^{{4}/{7}}}/{\sqrt{d}}) = \tilde{O}(d^{{5}/{14}} T^{{4}/{7}})$, since each unit incurs $\tilde{O}(1/\sqrt{d})$ regret (this is a consequence of the normalization as well as a separate but simple analysis of instantaneous regret).
Now after observing the results from the first batch and running least squares on them, we have an estimate of the underlying parameters that achieves a certain level of accuracy.
How accurate is our estimate now?
From standard theory on linear regression (which says that if each covariate sample is \textit{iid} drawn from a standard multivariate Gaussian and there are $n$ samples, then with high probability $\|\hat{\theta} -\theta^*\|_2 = O(\sqrt{d/n})$), we can deduce that after observing $O(d^{{6}/{7}} T^{{4}/{7}})$ samples in the first batch, we would be able to achieve the following estimation accuracy:
$\|\hat{\theta} -\theta^*\|_2 = O_p(\sqrt{d} \sqrt{{d}/{(d^{{6}/{7}} T^{{4}/{7}})}} ) = O_p(d^{4/7}T^{-2/7})$, where the extra $\sqrt{d}$ factor is again a result of our normalization on the covariance matrix.
Now, a more accurate estimate of $\theta^*$ yields smaller regret for each individual unit,
and if each individual's regret is proportional to $\|\hat{\theta} -\theta^*\|_2$--as it turns out to be--then the total regret for the second batch is the number of units in that batch times each individual regret, yielding $\tilde{O}(1/\sqrt{d}) \times O( d^{2/7}T^{6/7}\cdot d^{4/7}T^{-2/7}) = \tilde{O}(d^{{5}/{14}} T^{{4}/{7}})$, where the $\tilde{O}(1/\sqrt{d})$ factor is the proportionality constant that, as before, comes from the normalization factor on the covariance matrix.
Finally, after observing all the $O(d^{2/7} T^{6/7})$ units in the second batch,
our estimation accuracy would further improve to
$\|\hat{\theta} -\theta^*\|_2 = O_p(\sqrt{d} \sqrt{d/(d^{2/7}T^{6/7})} ) = O_p(d^{5/7}T^{-3/7})$.
Consequently, since there are $O(T)$ units in the third batch, the total regret
in this batch would be--by applying the same line of reasoning as in the second batch--$\tilde{O}(1/\sqrt{d}) \times O(T\cdot d^{6/7}T^{-3/7}) = \tilde{O}(d^{5/14}T^{4/7})$.
As such, aggregating over all three batches, we obtain $\tilde{O}(d^{5/14}T^{4/7})$ total regret.
It then remains to complete the reasoning by showing that the selected contexts form a well-conditioned matrix--at least well-conditioned enough for the standard linear regression estimation rate to apply--despite being selected in a non-\textit{iid} way to maximize rewards; this turns out to be true.
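The exponent bookkeeping above can be verified mechanically. The following sketch (our own illustration, using exact fraction arithmetic) reproduces the per-batch regret orders under the same assumptions: per-unit regret is $1/\sqrt{d}$ times the current estimation error, and the OLS error after $n = d^p T^q$ samples is $\sqrt{d}\cdot\sqrt{d/n}$:

```python
from fractions import Fraction as F

# Batch sizes from the sketch above, as exponent pairs (p, q) meaning d^p T^q.
sizes = [(F(6, 7), F(4, 7)),   # first batch:  O(d^{6/7} T^{4/7}) units
         (F(2, 7), F(6, 7)),   # second batch: O(d^{2/7} T^{6/7}) units
         (F(0), F(1))]         # third batch:  O(T) units

def ols_error(p, q):
    # With n = d^p T^q samples of N(0, I_d/d) covariates:
    # ||theta_hat - theta*||_2 ~ sqrt(d) * sqrt(d/n) = d^{1 - p/2} T^{-q/2}.
    return (F(1) - p / 2, -q / 2)

per_unit = (F(-1, 2), F(0))    # before any data: per-unit regret ~ 1/sqrt(d)
batch_regrets = []
for p, q in sizes:
    batch_regrets.append((p + per_unit[0], q + per_unit[1]))
    e = ols_error(p, q)        # refit at the batch's end before the next one
    per_unit = (e[0] - F(1, 2), e[1])

# every batch contributes regret of order d^{5/14} T^{4/7}
```

Each of the three batches contributes regret of order exactly $d^{5/14} T^{4/7}$, confirming that the grid equalizes the regret across batches.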
Additionally, we also give gap-dependent regret bounds--both upper and lower bounds--in the stochastic contexts setting (note that there is no notion of gap when the contexts can be arbitrary). These bounds are typically sharper than their gap-independent counterparts (to which the above-mentioned regret bounds belong) when the gap is large; in particular, the dependence on $T$ becomes
logarithmic rather than $\sqrt{T}$. Section~\ref{sec:gap} provides a more detailed discussion on this, including how the pure-exploitation algorithm should be (slightly) modified in this case to achieve the minimax optimal gap-dependent regret bound.
Finally, we mention that our analyses easily yield high-probability regret bounds as well for all settings, although for simplicity, we have chosen only to present bounds on expected regret.
\section{Learning with Stochastic Contexts}
\label{sec:stochastic}
In this section, we focus on the stochastic contexts case,
where at each time $t\in [T]$, each context $x_{t,a}$ is drawn
from ${\mathcal{N}}(0,\Sigma)$, with a possibly unknown covariance matrix $\Sigma$.
Note that for each $t$, $x_{t,a}$'s can be arbitrarily correlated across different $a$'s.
This is a simple setting that presents
an interesting case for study: at a population level, each one of
the $K$ actions is equally good; in particular, if the decision maker
is not allowed to personalize the action based on the context and hence restricted to choose a single-action policy (i.e. always choose action $1$ or action $2$ no matter what the contexts are), then all the actions perform equally well. However, as we shall see, being able
to select different actions based on the realized contexts allows
the decision maker to do much more. We start by making an assumption
on the covariance matrix.
\begin{assumption}\label{assumption:cov}
The covariance matrix $\Sigma$ satisfies
$
\frac{\kappa}{d} \le \lambda_{\min}(\Sigma) \le \lambda_{\max}(\Sigma) \le \frac{1}{d}
$
for some numerical constant $\kappa>0$, where $\lambda_{\min}(\Sigma), \lambda_{\max}(\Sigma)$ denote the smallest and the largest eigenvalues of $\Sigma$, respectively.
\end{assumption}
The upper bound $ \lambda_{\max}(\Sigma) \le 1/d$ in Assumption \ref{assumption:cov} ensures that $\mathbb{E}\|x_{t,a}\|_2^2\le 1$, and therefore the stochastic contexts satisfy a constraint similar to that on the adversarial contexts considered previously. The lower bound $\lambda_{\min}(\Sigma) \ge \kappa/d$ ensures that each stochastic context is approximately distributed as an isotropic Gaussian random vector, with condition number at most $\kappa^{-1}$. We assume that $\kappa>0$ is a fixed constant (say $0.1$) and will not optimize the dependence on $\kappa$.
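As a quick illustration (our own, not part of the analysis), one can generate a covariance matrix satisfying Assumption \ref{assumption:cov} and verify the two consequences just mentioned numerically:

```python
import numpy as np

d, kappa = 16, 0.1
rng = np.random.default_rng(0)

# A covariance satisfying the assumption: eigenvalues in [kappa/d, 1/d],
# conjugated by a random orthogonal basis.
eigvals = rng.uniform(kappa / d, 1 / d, size=d)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = Q @ np.diag(eigvals) @ Q.T

# E||x||^2 = trace(Sigma) <= d * (1/d) = 1
assert np.trace(Sigma) <= 1.0 + 1e-9
# condition number bounded by 1/kappa
w = np.linalg.eigvalsh(Sigma)
assert w.max() / w.min() <= 1 / kappa + 1e-9
```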
The next theorem presents tight regret bounds for the stochastic contexts case.
\begin{theorem}\label{thm.stochastic}
Let $T$, $M=O(\log\log T)$ and $d$ be the learning horizon, number of batches and each context's dimension, respectively. Denote by $\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$.
\begin{enumerate}
\item
Under Assumptions \ref{aspn.TKd} and \ref{assumption:cov}, there exists a sequential batch learning algorithm \textbf{Alg}= $({\mathcal{T}}, \pi)$ (explicitly defined in Section \ref{subsec.pure-exp}) such that:
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2\le 1} \mathbb{E}_{\theta^\star}[R_T(\pi)] \le \mathsf{polylog}(T)\cdot \sqrt{\frac{dT}{\kappa}}\left(\frac{T}{d^2}\right)^{\frac{1}{2(2^M-1)}}.
\end{align*}
\item
Conversely, even when $K=2$ and contexts $x_{t,a}\sim {\mathcal{N}}(0,I_d/d)$ are independent over all $a\in [K], t\in [T]$, for any $M\le T$ and any sequential batch learning algorithm, we have:
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2\le 1} \mathbb{E}_{\theta^\star}[R_T(\pi)] \ge c\cdot \sqrt{dT}\left(\frac{T}{d^2}\right)^{\frac{1}{2(2^M-1)}},
\end{align*}
where $c>0$ is a numerical constant independent of $(T,M,d)$.
\end{enumerate}
\end{theorem}
Theorem \ref{thm.stochastic} completely characterizes the minimax regret for the sequential batch learning problem in linear contextual bandits with stochastic contexts, and shows a doubly exponential dependence of the optimal regret on the number of batches $M$. The following corollary is immediate.
\begin{corollary}\label{cor.stochastic}
Under stochastic contexts, it is necessary and sufficient to have $\Theta(\log\log (T/d^2))$ batches to achieve the fully online regret $\tilde{\Theta}(\sqrt{dT})$.
\end{corollary}
In contrast to Corollary \ref{cor.adversarial}, the above corollary shows that a much smaller number of batches is capable of achieving the fully online performance, which better suits many practical scenarios. Note that for a smaller number of batches, Theorem \ref{thm.stochastic} also gives tight regret bounds within logarithmic factors, e.g., the optimal regret is $\tilde{\Theta}(Td^{-1/2})$ when $M=1$, $\tilde{\Theta}(T^{2/3}d^{1/6})$ when $M=2$, $\tilde{\Theta}(T^{4/7}d^{5/14})$ when $M=3$, and so on.
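The exponents quoted above follow directly from the bound $\sqrt{dT}\,(T/d^2)^{1/[2(2^M-1)]}$ in Theorem \ref{thm.stochastic}; the following sketch (our own illustration) computes them with exact fractions:

```python
from fractions import Fraction as F

def regret_exponents(M):
    # sqrt(dT) * (T/d^2)^{1/(2(2^M - 1))}  ->  exponent pair (of d, of T)
    e = F(1, 2 * (2**M - 1))
    return (F(1, 2) - 2 * e, F(1, 2) + e)

assert regret_exponents(1) == (F(-1, 2), F(1))      # ~ T * d^{-1/2}
assert regret_exponents(2) == (F(1, 6), F(2, 3))    # ~ d^{1/6} T^{2/3}
assert regret_exponents(3) == (F(5, 14), F(4, 7))   # ~ d^{5/14} T^{4/7}
```

As $M$ grows, the $T$-exponent decays to the fully online value $1/2$ doubly exponentially fast, which is exactly the $\Theta(\log\log(T/d^2))$ phenomenon of Corollary \ref{cor.stochastic}.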
\subsection{A Sequential Batch Pure-Exploitation Algorithm}\label{subsec.pure-exp}
In contrast to the adversarial contexts, under stochastic contexts the decision maker enjoys the advantage that he can choose to learn the unknown parameter $\theta^\star$ from any desired direction. In other words, the exploration of the learner is no longer subject to the adversary's restrictions, and strikingly, making decisions based on the best possible inference of $\theta^\star$ is already sufficient.
\begin{algorithm}[h!]
\DontPrintSemicolon
\SetAlgoLined
\BlankLine
\caption{Sequential Batch Pure-exploitation \label{algo.pure-exp}}
\textbf{Input:} Time horizon $T$; context dimension $d$; number of batches $M$. \\
\textbf{Set} $a = \Theta\left( \sqrt{T}\cdot \left(\frac{T}{d^2}\right)^{\frac{1}{2(2^M-1)}} \right)$\\
\textbf{Grid choice}: ${\mathcal{T}} = \{t_1,\cdots,t_M\}$, with $t_0=0$, $t_1 = ad$, and $t_m = \lfloor a\sqrt{t_{m-1}} \rfloor$ for $m=2,3,\cdots,M$.\\
\textbf{Initialization:} $A = {\bf 0}\in \mathbb{R}^{d\times d}$, $b = {\bf 0}\in \mathbb{R}^{d}$, $\hat{\theta}={\bf 0}\in \mathbb{R}^d$\;
\For{$m \gets 1$ \KwTo $M$}{
	\For{$t\gets t_{m-1}+1$ \KwTo $t_m$}{
	choose $a_t = \arg\max_{a\in [K]} x_{t,a}^\top \hat{\theta}$ (break ties arbitrarily). \\
	receive reward $r_{t,a_t}$.
	}
	$A\gets A + \sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top$. \\
	$b\gets b + \sum_{t=t_{m-1}+1}^{t_m} r_{t,a_t}x_{t,a_t}$. \\
	$\hat{\theta} \gets A^{-1}b$.
}
\end{algorithm}
The algorithm we use in this setting is quite simple (see Algorithm~\ref{algo.pure-exp}). Specifically, under a particularly chosen grid ${\mathcal{T}}=\{t_1,t_2,\cdots,t_M\}$, the learner, at the beginning of each batch, uses the least squares estimate $\hat{\theta}$ of $\theta^\star$ based on the data in the previous batches, and then simply selects the action $a\in [K]$ which maximizes the estimated reward $x_{t,a}^\top \hat{\theta}$ for any time $t$ in this batch. Then at the end of each batch, the learner updates his estimate $\hat{\theta}$ of $\theta^\star$ based on the new observations from the current batch.
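For concreteness, here is a minimal simulation sketch of this scheme (our own illustration; the particular grid, unit Gaussian noise, and $\mathcal{N}(0,\mathbf{I}_d/d)$ context distribution are stand-ins chosen for the demo, not the paper's prescription):

```python
import numpy as np

def batch_pure_exploitation(theta_star, grid, K, rng):
    """Sketch of sequential batch pure-exploitation: play greedily w.r.t. the
    current least-squares estimate within each batch, then refit on all data
    collected so far at the batch's end. Returns the cumulative regret."""
    d = theta_star.shape[0]
    theta_hat = np.zeros(d)
    X, y = [], []                     # all observed (context, reward) pairs
    regret, t_prev = 0.0, 0
    for t_end in grid:
        for _ in range(t_prev, t_end):
            contexts = rng.normal(0, 1 / np.sqrt(d), size=(K, d))
            a = int(np.argmax(contexts @ theta_hat))           # pure exploitation
            reward = float(contexts[a] @ theta_star + rng.normal())
            regret += float((contexts @ theta_star).max() - contexts[a] @ theta_star)
            X.append(contexts[a]); y.append(reward)
        # batch ends: refit least squares on everything observed so far
        theta_hat = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
        t_prev = t_end
    return regret

rng = np.random.default_rng(1)
theta = rng.normal(size=8)
theta /= np.linalg.norm(theta)
r = batch_pure_exploitation(theta, grid=[200, 1500, 5000], K=5, rng=rng)
```

Note that no exploration bonus appears anywhere: within each batch the algorithm is entirely greedy with respect to the frozen estimate.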
How do we select the grid ${\mathcal{T}}$? Intuitively, in order to minimize the overall regret, we must ensure that the regret incurred on each batch is not too large, because the overall regret is dominated by the batch with the largest regret. Guided by this observation, we see that an optimal grid must equalize the regret across batches (at least orderwise in terms of the dependence on $T$ and $d$): otherwise, one could reduce the regret order of the dominating batch at the cost of increasing the regret order of another batch, and the sum of the two would still have a smaller regret order than before. As we shall see later, the following grid choice satisfies this equal-regret-across-batches requirement:
\begin{align}\label{eq.minimax_grid}
t_1 = ad, \quad t_m = \lfloor a\sqrt{t_{m-1}} \rfloor, \qquad m=2,3,\cdots,M,
\end{align}
where the parameter $a = \Theta\left( \sqrt{T}\cdot \left(\frac{T}{d^2}\right)^{\frac{1}{2(2^M-1)}} \right)$ is chosen so that $t_M=T$.
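Since $a$ enters the recursion nonlinearly, one convenient way to realize the condition $t_M=T$ in an implementation is to solve for $a$ numerically; the sketch below (our own illustration, not the paper's prescription) does so by bisection, using the monotonicity of $t_M$ in $a$:

```python
import math

def batch_grid(T, d, M):
    """Grid t_1 = floor(a*d), t_m = floor(a * sqrt(t_{m-1})), with the
    scalar a found by bisection so that the grid ends at t_M = T."""
    def grid_for(a):
        ts = [max(1, math.floor(a * d))]
        for _ in range(M - 1):
            ts.append(math.floor(a * math.sqrt(ts[-1])))
        return ts

    lo, hi = 1.0, 2.0 * T            # t_M is increasing in a
    for _ in range(200):             # bisection on the scalar a
        mid = (lo + hi) / 2
        if grid_for(mid)[-1] < T:
            lo = mid
        else:
            hi = mid
    ts = grid_for(hi)
    ts[-1] = T                       # clamp so the grid ends exactly at T
    return ts

grid = batch_grid(T=1000, d=4, M=3)
```

For $T=1000$, $d=4$, $M=3$ the resulting $a$ is close to the closed-form order $\sqrt{T}(T/d^2)^{1/14} \approx 42.5$, giving $t_1 \approx 170$ and $t_2 \approx 554$.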
\subsection{Regret Analysis for Upper bound}\label{subsec.stochastic_upperbound}
We now turn to establishing the upper bound in Theorem \ref{thm.stochastic}.
We again execute a two-step program. First, we prove that Algorithm \ref{algo.pure-exp} with the grid ${\mathcal{T}}=\{t_1,\cdots,t_M\}$ in \eqref{eq.minimax_grid} attains the regret upper bound in Theorem \ref{thm.stochastic}, assuming the conditional independence assumption (cf. Lemma \ref{lemma.difference}) holds. Second, similar to the master algorithm in the previous section, we then modify Algorithm \ref{algo.pure-exp} slightly to validate this condition. One thing to note here is that, unlike in the adversarial contexts case, here the modification is much simpler, as we shall see later.
We start by establishing that the least squares estimator $\hat{\theta}$ is close to the true parameter $\theta^\star$ at the beginning of every batch with high probability. By the theory of least squares, this would be obvious if the chosen contexts $x_{t,a_t}$ were i.i.d. Gaussian. However, since the action $a_t$ depends on all contexts $(x_{t,a})_{a\in [K]}$ available at time $t$, the probability distribution of $x_{t,a_t}$ may be far from isotropic. Consequently, a priori, there might be one or more directions in the context space that are never chosen, hence yielding inaccurate estimation of $\theta^\star$ along that (or those) direction(s). However, as we shall see next, this is not a concern: we establish that the matrix formed by the selected contexts is reasonably well-conditioned, despite the contexts being selected in a greedy fashion.
\begin{lemma}\label{lemma.equator}
For each $m\in [M]$, with probability at least $1-O(T^{-4})$ we have
\begin{align*}
\lambda_{\min}\left(\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top \right) \ge c\cdot \frac{\kappa(t_m-t_{m-1})}{d},
\end{align*}
where $c>0$ is a numerical constant independent of $(K,T,d,m,\kappa)$.
\end{lemma}
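Although the formal proof is deferred to the appendix, the phenomenon behind the lemma is easy to observe numerically. In the sketch below (our own illustration, with $\Sigma = \mathbf{I}_d/d$ so that $\kappa=1$), contexts chosen greedily with respect to a fixed estimate still yield a design matrix whose smallest eigenvalue grows linearly in the number of samples: intuitively, greedy selection only biases the component along $\hat{\theta}$, while the orthogonal components remain isotropic.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 10, 5, 4000
theta_hat = rng.normal(size=d)           # a fixed (arbitrary) estimate

A = np.zeros((d, d))
for _ in range(n):
    xs = rng.normal(0, 1 / np.sqrt(d), size=(K, d))   # Sigma = I_d / d
    x = xs[np.argmax(xs @ theta_hat)]                 # greedy selection
    A += np.outer(x, x)

lam_min = np.linalg.eigvalsh(A)[0]
# Lemma: lam_min >= c * kappa * n / d for a numerical constant c
assert lam_min >= 0.1 * n / d
```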
The proof of the above lemma is a bit long and hence deferred to the appendix. Based on Lemma \ref{lemma.equator}, we are ready to show that the least squares estimator $\hat{\theta}$ is close to the true parameter $\theta^\star$ with high probability. For $m\in [M]$, let $\hat{\theta}_m$ be the estimate at the end of $m$-th batch, and $A_m = \sum_{t=1}^{t_m} x_{t,a_t}x_{t,a_t}^\top$ be the regression matrix.
\begin{lemma}\label{lemma.difference}
For each $m\in [M]$, if the rewards $\{r_{t,a_t}\}_{t\in [t_m]}$ up to time $t_m$ are mutually independent given the selected contexts $\{x_{t,a_t}\}_{t\in [t_m]}$, then with probability at least $1-O(T^{-3})$,
\begin{align*}
\|\hat{\theta}_m - \theta^\star\|_2 \le Cd\cdot \sqrt{\frac{\log T}{\kappa t_m}}
\end{align*}
for a numerical constant $C>0$ independent of $(K,T,d,m,\kappa)$.
\end{lemma}
\begin{proof}{Proof.}
By the standard algebra of linear regression, we have:
\begin{align*}
\hat{\theta}_m - \theta^\star = A_m^{-1}\sum_{t=1}^{t_m}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\top \theta^\star).
\end{align*}
Hence, conditioned on the contexts $\{x_{t,a_t}\}_{t\in [t_m]}$, the noise terms $r_{t,a_t} - x_{t,a_t}^\top \theta^\star$ are independent by the assumption, and each noise term $r_{t,a_t} - x_{t,a_t}^\top \theta^\star$ is $1$-sub-Gaussian.
Next, we show that the random vector $\hat{\theta}_m - \theta^\star$ is $\sigma^2$-sub-Gaussian conditioned on the contexts with $\sigma^2 = \lambda_{\min}(A_m)^{-1}$. To see this, we start by recalling that a centered (i.e. zero-mean) random vector $V$ is $v$-sub-Gaussian if the scalar random variable $\langle V, u\rangle$ is $v$-sub-Gaussian for any unit vector $u$.
Consequently, taking any unit vector $u \in \mathbb{R}^d$, we have:
$$\langle \hat{\theta}_m - \theta^\star , u \rangle = \langle A_m^{-1}\sum_{t=1}^{t_m}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\top \theta^\star), u\rangle = \sum_{t=1}^{t_m} u^T A_m^{-1}x_{t,a_t} (r_{t,a_t} - x_{t,a_t}^\top \theta^\star).$$
Since each term in the sum is $(u^T A_m^{-1}x_{t,a_t})^2$-sub-Gaussian, and since all the terms are independent (after conditioning on $\{x_{t,a_t}\}_{t\in [t_m]}$), their sum is also sub-Gaussian with sub-Gaussian constant equal to the sum of the individual sub-Gaussian constants:
\begin{align*}
&\sum_{t=1}^{t_m} (u^T A_m^{-1}x_{t,a_t})^2= \sum_{t=1}^{t_m} u^T A_m^{-1}x_{t,a_t} x_{t,a_t}^T A_m^{-1} u
= u^T A_m^{-1}\big( \sum_{t=1}^{t_m} x_{t,a_t} x_{t,a_t}^T \big) A_m^{-1} u \\
&= u^T A_m^{-1}A_m A_m^{-1} u = u^T A_m^{-1} u \le \lambda_{\max}(A_m^{-1}) = \lambda_{\min}(A_m)^{-1}.
\end{align*}
Since the above inequality holds for any unit vector $u$, choosing $\sigma^2 = \lambda_{\min}(A_m)^{-1}$
establishes the claim.
Proceeding further, by Lemma \ref{lemma.equator}, we have for each $m\in [M]$, with probability at least $1-O(T^{-4})$
$\lambda_{\min}\left(\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top \right) \ge c\cdot \frac{\kappa(t_m-t_{m-1})}{d}$. Consequently, by a union bound over all $M$ (which is at most $T$),
we have with probability at least $1-O(T^{-3})$, $\lambda_{\min}\left(\sum_{t=t_{m-1}+1}^{t_m} x_{t,a_t}x_{t,a_t}^\top \right) \ge c\cdot \frac{\kappa(t_m-t_{m-1})}{d}$ for all $m \in [M]$.
Since $\lambda_{\min}(X+Y)\ge \lambda_{\min}(X)+\lambda_{\min}(Y)$ for any symmetric matrices $X,Y$,
it then follows that with probability at least $1-O(T^{-3})$:
\begin{align*}
\lambda_{\min}(A_m) = \lambda_{\min}\left(\sum_{l=1}^m \sum_{t=t_{l-1}+1}^{t_l} x_{t,a_t}x_{t,a_t}^\top \right) \ge \sum_{l=1}^m \lambda_{\min}\left(\sum_{t=t_{l-1}+1}^{t_l} x_{t,a_t}x_{t,a_t}^\top \right) \ge \frac{c\kappa t_m}{d}.
\end{align*}
Finally, since $\hat{\theta}_m - \theta^\star$ is a $\frac{d}{c\kappa t_m}$-sub-Gaussian random vector, $\|\hat{\theta}_m - \theta^\star\|_2^2 $
is a sub-exponential random variable.
Therefore, conditioned on the above event for the stochastic contexts, the sub-exponential concentration gives the claimed upper bound on $\|\hat{\theta}_m - \theta^\star\|_2$ with a further probability at least $1 - O(T^{-3})$ over the random noises. Finally, a union bound completes the proof.
\end{proof}
Lemma~\ref{lemma.difference} shows that given the conditional independence assumption, the estimator $\hat{\theta}$ given by pure exploitation essentially achieves the rate-optimal estimation of $\theta^\star$, even though the algorithm never explicitly explores. This now positions us well to prove the upper bound of Theorem \ref{thm.stochastic}. Of course, bear in mind that when using Algorithm~\ref{algo.pure-exp}, the conditional independence assumption does not hold, for the contexts selected in future batches depend on the rewards in the previous batches. Therefore, we will use sample splitting to build another master algorithm to gain independence, at the cost of reducing the sample size by a multiplicative factor of $M$ (recall that $M = O(\log\log T)$). The following proof implements these two steps; note that in this setting, the master algorithm is entirely different from and much simpler than the one given in the adversarial case.
\begin{proof}[Proof of Statement 1 in Theorem~\ref{thm.stochastic}]
\begin{enumerate}
\item[]
\item \textbf{Regret bound under conditional independence assumption.}
Consider the $m$-th batch with any $m\ge 2$, and any time point $t$ inside this batch. By the definition of $a_t$, we have $x_{t,a_t}^\top \hat{\theta}_{m-1}\ge x_{t,a}^\top \hat{\theta}_{m-1}$ for any $a\in [K]$. Consequently,
\begin{align*}
\max_{a\in [K]} (x_{t,a} - x_{t,a_t})^\top \theta^\star &\le \max_{a\in [K]} (x_{t,a} - x_{t,a_t})^\top (\theta^\star - \hat{\theta}_{m-1}) \\
&\le \max_{a,a'\in [K]} (x_{t,a} - x_{t,a'})^\top (\theta^\star - \hat{\theta}_{m-1}) \\
&\le 2\max_{a\in [K]} |x_{t,a}^\top (\theta^\star - \hat{\theta}_{m-1})|.
\end{align*}
For fixed $a\in [K]$, marginally we have $x_{t,a}\sim {\mathcal{N}}(0,\Sigma)$ independent of $\hat{\theta}_{m-1}$. Therefore, conditioning on the previous contexts and rewards, we have $x_{t,a}^\top (\theta^\star - \hat{\theta}_{m-1})\sim {\mathcal{N}}(0,\sigma^2)$ with
$$
\sigma^2 = (\theta^\star - \hat{\theta}_{m-1})^\top \Sigma (\theta^\star - \hat{\theta}_{m-1}) \le \frac{\|\theta^\star - \hat{\theta}_{m-1}\|_2^2}{d}
$$
by Assumption \ref{assumption:cov}. By a union bound over $a\in [K]$, with probability at least $1-O(T^{-3})$ over the randomness in the current batch we have
\begin{align*}
\max_{a\in [K]} (x_{t,a} - x_{t,a_t})^\top \theta^\star \le 2\max_{a\in [K]} |x_{t,a}^\top (\theta^\star - \hat{\theta}_{m-1})| = O\left(\|\theta^\star - \hat{\theta}_{m-1} \|_2 \cdot \sqrt{\frac{\log(KT)}{d}}\right).
\end{align*}
Applying Lemma \ref{lemma.difference} and another union bound, there exists some numerical constant $C'>0$ such that with probability at least $1-O(T^{-3})$, the instantaneous regret at time $t$ is at most
\begin{align*}
\max_{a\in [K]} (x_{t,a} - x_{t,a_t})^\top \theta^\star \le C'\sqrt{\log(KT)\log T}\cdot \sqrt{\frac{d}{\kappa t_{m-1}}}.
\end{align*}
Now taking the union bound over $t\in [T]$, the total regret incurred after the first batch is at most
\begin{align}\label{eq.later_batch}
\sum_{m=2}^M C'\sqrt{\log(KT)\log T}\cdot t_m\sqrt{\frac{d}{\kappa t_{m-1}}} \le C'\sqrt{\frac{\log(KT)\log T}{\kappa}}M\cdot a\sqrt{d}
\end{align}
with probability at least $1-O(T^{-2})$, where the inequality is due to the choice of the grid in \eqref{eq.minimax_grid}.
As for the first batch, the instantaneous regret at any time point $t$ is at most the maximum of $K$ Gaussian random variables ${\mathcal{N}}(0,(\theta^\star)^\top \Sigma \theta^\star)$. Since $\|\theta^\star\|_2\le 1$ and $\lambda_{\max}(\Sigma)\le 1/d$, we conclude that the instantaneous regret is at most $C''\sqrt{\log(KT)/d}$ for some constant $C''>0$ with probability at least $1-O(T^{-3})$. Now by a union bound over $t\in [t_1]$, with probability at least $1-O(T^{-2})$ the total regret in the first batch is at most
\begin{align}\label{eq.first_batch}
C''\sqrt{\log(KT)/d}\cdot t_1 = C''\sqrt{\log(KT)}\cdot a\sqrt{d}.
\end{align}
Now combining \eqref{eq.later_batch}, \eqref{eq.first_batch} and the choice of $a$ in Algorithm~\ref{algo.pure-exp} gives the desired regret bound in Theorem \ref{thm.stochastic} with high probability (note that $M=O(\log\log T)$), and consequently in expectation.
\item \textbf{Building a Master algorithm that satisfies conditional independence}
\begin{algorithm}[h!]
\DontPrintSemicolon
\SetAlgoLined
\BlankLine
\caption{Batched Pure-exploitation (with sample splitting) \label{algo.sample_splitting}}
\textbf{Input:} Time horizon $T$; context dimension $d$; number of batches $M$; grid ${\mathcal{T}} = \{t_1,\cdots,t_M\}$ same as in Algorithm~\ref{algo.pure-exp}.\;
\textbf{Initialization:} Partition each batch into $M$ intervals evenly, i.e., $(t_{m-1},t_m]=\cup_{j=1}^M T_m^{(j)}$. \;
\For{$m \gets 1$ \KwTo $M$}{
\If{$m=1$}{
choose $a_t = 1$ and receive reward $r_{t,a_t}$ for any $t\in [1,t_1]$.
}
\Else{
\For{$t\gets t_{m-1}+1$ \KwTo $t_m$}{
choose $a_t = \arg\max_{a\in [K]} x_{t,a}^\top \hat{\theta}_{m-1}$ (break ties arbitrarily). \\
receive reward $r_{t,a_t}$.
}
}
$T^{(m)} \gets \cup_{m'=1}^m T_{m'}^{(m)}.$\\
$A_m\gets \sum_{t\in T^{(m)}} x_{t,a_t}x_{t,a_t}^\top$. \\
$\hat{\theta}_m \gets A_m^{-1}\sum_{t\in T^{(m)}} r_{t,a_t}x_{t,a_t}$.
}
\textbf{Output:} resulting policy $\pi=(a_1,\cdots,a_T)$.
\end{algorithm}
We start by proposing a sample splitting based master algorithm (see Algorithm~\ref{algo.sample_splitting}) that ensures that when restricting to the subset of observations used for constructing $\hat{\theta}$, the rewards are conditionally independent given the contexts.
The key modification in Algorithm \ref{algo.sample_splitting} lies in the computation of the estimator $\hat{\theta}_{m}$ after the first $m$ batches. Specifically, instead of using all past contexts and rewards before $t_m$, we only use the past observations inside the time frame $T^{(m)}\subsetneq [t_m]$ to construct the estimator. The key property of the time frames is the disjointness, i.e., $T^{(1)},\cdots,T^{(M)}$ are pairwise disjoint. Then the following lemma shows that the conditional independence condition holds within each time frame $T^{(m)}$.
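As an illustration, the sample-splitting scheme can be sketched in a short simulation. This is our own code, not the paper's: we assume contexts $x_{t,a}\sim{\mathcal{N}}(0,I_d/d)$, unit-variance Gaussian reward noise, and we add a tiny ridge term to $A_m$ for numerical invertibility; the function name and interface are ours.

```python
import numpy as np

def batched_pure_exploitation_split(T, d, K, M, grid, theta_star, rng):
    """Simulation sketch of batched pure-exploitation with sample splitting."""
    theta_hat = np.zeros(d)
    # frames[j] collects the j-th interval of every batch, i.e. the frame T^{(j)}
    frames = [[] for _ in range(M)]
    regret, t_prev = 0.0, 0
    for m, t_m in enumerate(grid):
        batch = np.arange(t_prev, t_m)
        for j, chunk in enumerate(np.array_split(batch, M)):
            for _ in chunk:
                X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(K, d))  # contexts ~ N(0, I_d/d)
                # first batch: always pull arm 1; later batches: pure exploitation
                a = 0 if m == 0 else int(np.argmax(X @ theta_hat))
                r = X[a] @ theta_star + rng.normal()
                frames[j].append((X[a], r))
                regret += np.max(X @ theta_star) - X[a] @ theta_star
        # the estimator after batch m only uses the disjoint frame T^{(m)}
        xs = np.array([x for x, _ in frames[m]])
        rs = np.array([r for _, r in frames[m]])
        A = xs.T @ xs + 1e-8 * np.eye(d)  # small ridge for invertibility
        theta_hat = np.linalg.solve(A, xs.T @ rs)
        t_prev = t_m
    return regret
```

Note how the estimator after batch $m$ only touches `frames[m]`, the union of the $m$-th interval of every batch so far, mirroring the disjoint time frames $T^{(m)}$ that drive Lemma~\ref{lemma.cond_indep}.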
\begin{lemma}\label{lemma.cond_indep}
For each $m\in [M]$, the rewards $\{r_{t,a_t}\}_{t\in T^{(m)}}$ are mutually independent conditioning on the selected contexts $\{x_{t,a_t}\}_{t\in T^{(m)}}$.
\end{lemma}
\begin{proof}{Proof.}
For $t\in T^{(m)}$, the action $a_t$ only depends on the contexts $\{x_{t,a}\}_{a\in [K]}$ at time $t$ and the past estimators $\hat{\theta}_1, \cdots, \hat{\theta}_{m-1}$. However, for any $m'\in [m-1]$, the estimator $\hat{\theta}_{m'}$ only depends on the contexts $x_{\tau,a_\tau}$ and rewards $r_{\tau,a_\tau}$ with $\tau\in T^{(m')}$. Repeating the same arguments for the action $a_\tau$ with $\tau\in T^{(m')}$, we conclude that $a_t$ only depends on the contexts $\{x_{\tau,a}\}_{a\in [K],\tau\in \cup_{m'\le m-1} T^{(m')}\cup \{t\}}$ and rewards $\{r_{\tau,a_\tau} \}_{\tau\in \cup_{m'\le m-1} T^{(m')}}$. Consequently, by the disjointness of $T^{(m)}$ and $\cup_{m'\le m-1} T^{(m')}$, the desired conditional independence holds.
\end{proof}
By Lemma \ref{lemma.cond_indep}, the conditional independence condition of Lemma \ref{lemma.difference} holds for Algorithm \ref{algo.sample_splitting}. Moreover, the sample splitting in Algorithm \ref{algo.sample_splitting} reduces the sample size by a multiplicative factor at most $M$ at each round, and $M=O(\log\log T)$, therefore all proofs in Section \ref{subsec.pure-exp} continue to hold with a multiplicative penalty at most doubly logarithmic in $T$. As a result, Algorithm \ref{algo.sample_splitting} achieves the regret upper bound in Theorem \ref{thm.stochastic}.
\end{enumerate}
\end{proof}
\subsection{Lower bound}\label{subsec.stochastic_lower}
In this section we prove the minimax lower bound of the regret under stochastic contexts for $K=2$.
The lower bound argument for the stochastic context case is quite involved and we start by establishing the following key lemma.
\begin{lemma}\label{lemma.lower_bound}
For any fixed grid $0=t_0<t_1<\cdots<t_M=T$ and any $\Delta \in [0,1]$, the following minimax lower bound holds for any policy $\pi$ under this grid:
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2 \le 1} \mathbb{E}[R_T(\pi)] \ge \Delta \cdot\sum_{m=1}^M \frac{t_m-t_{m-1}}{10\sqrt{d}}\exp\left(-\frac{16t_{m-1}\Delta^2}{d^2}\right).
\end{align*}
\end{lemma}
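Before turning to the proof, the bound is easy to evaluate numerically for a given grid. The following helper is our own, included only to make the lemma concrete:

```python
import math

def regret_lower_bound(grid, d, Delta):
    """Evaluate Delta * sum_m (t_m - t_{m-1}) / (10 sqrt(d))
    * exp(-16 t_{m-1} Delta^2 / d^2), for grid = [t_1, ..., t_M], t_0 = 0."""
    total, t_prev = 0.0, 0
    for t_m in grid:
        total += (t_m - t_prev) / (10.0 * math.sqrt(d)) * math.exp(
            -16.0 * t_prev * Delta**2 / d**2)
        t_prev = t_m
    return Delta * total
```

For a single batch ($M=1$) the bound reduces to $\Delta T/(10\sqrt{d})$, and refining the grid only decreases it, reflecting the gain from added adaptivity.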
\begin{proof}{Proof.}
Let $\theta^\star =\theta\sim \mathsf{Unif}(\Delta\mathbb{S}^{d-1})$ be uniformly distributed on the $d$-dimensional sphere centered at the origin with radius $\Delta$. Clearly $\|\theta^\star\|_2\le 1$ surely since $\Delta\le 1$. Hence,
\begin{align}\label{eq.bayesian}
\sup_{\theta^\star: \|\theta^\star\|_2 \le 1} \mathbb{E}[R_T(\pi)] \ge \mathbb{E}_\theta \mathbb{E}[R_T(\pi)] = \sum_{t=1}^T \mathbb{E}_\theta\left(\mathbb{E}\left[\max_{i\in \{1,2\}}(x_{t,i}-x_{t,a_t})^\top \theta \right] \right).
\end{align}
We will lower bound each term in the RHS of \eqref{eq.bayesian} separately. Note that there are multiple sources of randomness involved in the expectation: the randomness in the parameter $\theta$, in the contexts $x_{t,i}$, and in all the past rewards which determine the random action $a_t$. Throughout the proof, $\mathbb{E}_\theta$ denotes taking expectation with respect to $\theta$, $\mathbb{E}_x$ denotes taking expectation with respect to all (past and current) random contexts, and $P_{\theta,x}^t$ denotes the distribution of all random rewards observable before time $t$ conditioned on the parameter $\theta$ and contexts $x$, with $\mathbb{E}_{P_{\theta,x}^t}$ being the corresponding expectation.
Note that for each $t\in [T]$, we have
$\max_{i\in \{1,2\}} (x_{t,i} - x_{t,a_t})^\top \theta = \jiao{x_{t,1}-x_{t,2},\theta}_+\cdot \mathbbm{1}(a_t=2) + \jiao{x_{t,1}-x_{t,2},\theta}_-\cdot \mathbbm{1}(a_t=1),$
where we define $\jiao{u,v}_+ = \max\{0,u^\top v\}$ and $\jiao{u,v}_- = \max\{0,-u^\top v\}$. Taking expectations on both sides gives
\begin{align}
& \mathbb{E}_\theta \mathbb{E}_{P_{\theta,x}^t} \left[ \max_{i\in \{1,2\}} (x_{t,i} - x_{t,a_t})^\top \theta\right] \nonumber \\
&= \mathbb{E}_\theta \left[ \jiao{x_{t,1}-x_{t,2},\theta}_+\cdot \mathbb{P}_{P_{\theta,x}^t}(a_t=2) + \jiao{x_{t,1}-x_{t,2},\theta}_-\cdot \mathbb{P}_{P_{\theta,x}^t}(a_t=1) \right] \nonumber \\
&= Z_0\cdot \left( \mathbb{E}_{\mathbb{E}_{Q_1}P_{\theta,x}^t}(a_t= 2) + \mathbb{E}_{\mathbb{E}_{Q_2}P_{\theta,x}^t}(a_t= 1) \right), \label{eq.change_of_measure}
\end{align}
where in the last identity \eqref{eq.change_of_measure} we define two new probability distributions of $\theta$ via
\begin{align*}
\frac{dQ_1}{dQ_0}(\theta) = \frac{\jiao{x_{t,1}-x_{t,2}, \theta}_+}{Z_0}, \qquad \frac{dQ_2}{dQ_0}(\theta) = \frac{\jiao{x_{t,1}-x_{t,2}, \theta}_-}{Z_0},
\end{align*}
where $Q_0=\mathsf{Unif}(\Delta\mathbb{S}^{d-1})$ is the original probability measure of $\theta$, $Z_0$ is the common normalization factor, and $\mathbb{E}_{Q_i}P_{\theta,x}^t$ denotes the mixture distribution of $z\sim P_{\theta,x}^t$ where $\theta\sim Q_i$, for $i\in \{1,2\}$. The following lemma investigates some properties of $Q_1$ and $Q_2$.
\begin{lemma}\label{lemma.Q}
Let $x_{t,1} - x_{t,2} = r_tu_t$ with $r_t\ge 0, \|u_t\|_2=1$. Then $\theta\sim Q_1$ if and only if $\theta-2(u_t^\top \theta)u_t\sim Q_2$. Moreover, we have
\begin{align}\label{eq.Z_0}
Z_0 &= r_t\Delta\cdot\begin{cases}
\frac{2^d}{\pi d}\binom{d}{d/2}^{-1}, & \text{if }d\text{ is even} \\
\frac{1}{2^d}\binom{d-1}{(d-1)/2}, & \text{if }d\text{ is odd}
\end{cases} \ge \frac{r_t\Delta}{5\sqrt{d}}, \\ \label{eq.second_moment}
\mathbb{E}_{Q_1}(u_t^\top \theta)^2 &= \mathbb{E}_{Q_2}(u_t^\top \theta)^2 = \frac{2\Delta^2}{d+1}.
\end{align}
\end{lemma}
The proof of Lemma \ref{lemma.Q} is postponed to the appendix. Continuing from \eqref{eq.change_of_measure}, we have
\begin{align}
\mathbb{E}_{\mathbb{E}_{Q_1}P_{\theta,x}^t}(a_t= 2) + \mathbb{E}_{\mathbb{E}_{Q_2}P_{\theta,x}^t}(a_t= 1) &\stepa{\ge} 1-\mathsf{TV}(\mathbb{E}_{Q_1}P_{\theta,x}^t, \mathbb{E}_{Q_2}P_{\theta,x}^t) \nonumber \\
&\stepb{\ge} \frac{1}{2}\exp\left( - D_{\text{KL}}(\mathbb{E}_{Q_1}P_{\theta,x}^t \| \mathbb{E}_{Q_2}P_{\theta,x}^t ) \right) \nonumber \\
&\stepc{=} \frac{1}{2}\exp\left( - D_{\text{KL}}(\mathbb{E}_{Q_1}P_{\theta,x}^t \| \mathbb{E}_{Q_1}P_{\theta - 2(u_t^\top \theta)u_t,x}^t ) \right) \nonumber \\
&\stepd{\ge} \frac{1}{2}\exp\left( - \mathbb{E}_{Q_1}D_{\text{KL}}(P_{\theta,x}^t \| P_{\theta - 2(u_t^\top \theta)u_t,x}^t ) \right) \label{eq.divergence},
\end{align}
where step (a) follows from Le Cam's first lemma (cf. e.g., \cite{Tsybakov2008}), step (b) is due to Lemma \ref{lemma.TV_KL} in Appendix \ref{appendix.auxiliary}, step (c) follows from Lemma \ref{lemma.Q}, and step (d) is due to the joint convexity of the KL divergence. For $t\in (t_{m-1},t_m]$, the learner can only observe rewards up to time $t_{m-1}$ at time $t$, and therefore
\begin{align}
D_{\text{KL}}(P_{\theta,x}^t \| P_{\theta - 2(u_t^\top \theta)u_t,x}^t ) &= \frac{1}{2}\sum_{\tau=1}^{t_{m-1}} \left[x_{\tau,a_\tau}^\top[\theta - (\theta - 2(u_t^\top \theta)u_t)] \right]^2 \nonumber\\
&= 2\sum_{\tau=1}^{t_{m-1}} (u_t^\top \theta)^2(u_t^\top x_{\tau,a_\tau})^2. \label{eq.KL_divergence}
\end{align}
Now combining \eqref{eq.change_of_measure} to \eqref{eq.KL_divergence}, we arrive at
\begin{align*}
\mathbb{E}_\theta \mathbb{E}_{P_{\theta,x}^t} \left[ \max_{i\in \{1,2\}} (x_{t,i} - x_{t,a_t})^\top \theta\right] &\ge \frac{r_t\Delta}{10\sqrt{d}}\exp\left(- \frac{4\Delta^2}{d+1} u_t^\top \left(\sum_{\tau=1}^{t_{m-1}} x_{\tau,a_\tau}x_{\tau,a_\tau}^\top\right) u_t\right) \\
&\ge \frac{r_t\Delta}{10\sqrt{d}}\exp\left(- \frac{4\Delta^2}{d+1} u_t^\top \left(\sum_{\tau=1}^{t_{m-1}} (x_{\tau,1}x_{\tau,1}^\top+x_{\tau,2}x_{\tau,2}^\top)\right) u_t\right).
\end{align*}
Finally, we take the expectation with respect to the contexts $x$. Using the independence of $\{x_{\tau,i}\}_{\tau<t}$ and $(r_t,u_t)$, the convexity of $x\mapsto \exp(-x)$ and $\mathbb{E}[x_{\tau,i}x_{\tau,i}^\top] = I_d/d$, we arrive at
\begin{align}\label{eq.target}
\mathbb{E}_x \mathbb{E}_\theta \mathbb{E}_{P_{\theta,x}^t} \left[ \max_{i\in \{1,2\}} (x_{t,i} - x_{t,a_t})^\top \theta\right] &\ge \frac{\mathbb{E}[r_t]\Delta}{10\sqrt{d}}\exp\left(-\frac{8\Delta^2t_{m-1}}{d(d+1)} \right) \nonumber\\
&\ge \frac{\Delta}{10\sqrt{d}}\exp\left(-\frac{16\Delta^2t_{m-1}}{d^2} \right),
\end{align}
where in the last inequality we have used
\begin{align*}
\mathbb{E}[r_t] = \mathbb{E}\|x_{t,1}-x_{t,2}\|_2 \ge \frac{\mathbb{E}\|x_{t,1}-x_{t,2}\|_1}{\sqrt{d}} = \frac{2}{\sqrt{\pi}} > 1.
\end{align*}
Combining \eqref{eq.bayesian} and \eqref{eq.target} completes the proof of Lemma \ref{lemma.lower_bound}.
\end{proof}
We are now ready to put everything together and complete the proof of the lower bound.
\begin{proof}[Proof of Statement 2 in Theorem~\ref{thm.stochastic}]
For any fixed grid ${\mathcal{T}}=\{t_1,\cdots,t_M\}$, define $s=\min\{m\in [M]: t_m\ge d^{2} \}$, which always exists due to our assumption that $T\ge d^2$. Now choosing some candidates of $\Delta \in \{1, \frac{d}{\sqrt{t_s}}, \frac{d}{\sqrt{t_{s+1}}}, \cdots, \frac{d}{\sqrt{T}}\} \subset [0,1]$ in Lemma \ref{lemma.lower_bound} gives
\begin{align}\label{eq.minimax}
\sup_{\theta^\star: \|\theta^\star\|_2 \le 1} \mathbb{E}[R_T(\pi)] \ge c\cdot \max\left\{\frac{t_s}{\sqrt{d}}, t_{s+1}\sqrt{\frac{d}{t_s}}, t_{s+2}\sqrt{\frac{d}{t_{s+1}}},\cdots, T\sqrt{\frac{d}{t_{M-1}}} \right\}
\end{align}
for some numerical constant $c>0$. After some algebra, the right-hand side of \eqref{eq.minimax} may be further lower bounded by
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2 \le 1} \mathbb{E}[R_T(\pi)] \ge c\sqrt{dT}\cdot \left(\frac{T}{d^2}\right)^{\frac{1}{2(2^{M-s+1}-1)}} \ge c\sqrt{dT}\cdot \left(\frac{T}{d^2}\right)^{\frac{1}{2(2^{M}-1)}}.
\end{align*}
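For completeness, one way to carry out the omitted algebra (a sketch via the standard batched-bandit recursion; not necessarily the authors' exact steps): write $r$ for the left-hand side of \eqref{eq.minimax} divided by $c$, so that \eqref{eq.minimax} gives $t_s \le r\sqrt{d}$ and $t_{m+1} \le r\sqrt{t_m/d}$ for $s\le m\le M-1$. Writing $t_{s+k}\le r^{a_k}d^{b_k}$ yields the recursions $a_{k+1}=1+a_k/2$ and $b_{k+1}=(b_k-1)/2$ with $a_0=1$, $b_0=1/2$, whence $a_k=2-2^{-k}$ and $b_k=3\cdot 2^{-k-1}-1$. Taking $k=M-s$, $t_M=T$, and $N=2^{M-s+1}$:

```latex
\begin{align*}
T \le r^{2-\frac{2}{N}}\, d^{\frac{3}{N}-1}
\quad\Longrightarrow\quad
r \ge T^{\frac{N}{2(N-1)}}\, d^{\frac{N-3}{2(N-1)}}
= \sqrt{dT}\left(\frac{T}{d^2}\right)^{\frac{1}{2(N-1)}},
\end{align*}
```

which matches the first inequality above; the second follows since $N\le 2^M$ and $T\ge d^2$.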
\end{proof}
\section{Problem-Dependent Regret Bounds}\label{sec:gap}
The regret bounds given in the previous two sections are problem-independent regret bounds (also known as gap-independent regret bounds in the bandits literature): they do not depend on the underlying parameters of the probability distribution. When the contexts are stochastic, under certain ``margin'' conditions, we can also consider problem-dependent regret bounds that can result in sharper bounds than those problem-independent ones. When the number of contexts is small (e.g., $K=2$), there could be a large margin between the performance of the optimal context and any sub-optimal contexts if $\|\theta^\star\|_2$ is bounded away from zero, raising the possibility that a problem-dependent regret bound sometimes better than the worst-case regret $\Theta(\sqrt{dT})$ could be obtained in sequential batch learning. The next theorem characterizes this.
\begin{theorem}\label{thm.problem-dependent}
Assume $K=2$, and let $T$, $M=O(\log T)$, and $d$ be the learning horizon, the number of batches, and the dimension of each context, respectively. Denote by $\mathsf{polylog}(T)$ all the poly-logarithmic factors in $T$. Assume without loss of generality that $\|\theta^\star\|_2 > 0$.
\begin{enumerate}
\item
Under Assumptions \ref{aspn.TKd} and \ref{assumption:cov}, there exists a sequential batch learning algorithm \textbf{Alg} $= ({\mathcal{T}}, \pi)$ (explicitly defined below) that achieves the following regret:
\begin{align*}
\mathbb{E}_{\theta^\star}[R_T(\pi)] \le \mathsf{polylog}(T)\cdot \frac{(d/\kappa)^{3/2}}{\|\theta^\star\|_2} \left(\frac{T}{d^2}\right)^{\frac{1}{M}}.
\end{align*}
\item
Conversely, when the contexts $x_{t,a}\sim {\mathcal{N}}(0,I_d/d)$ are independent over all $a\in [K], t\in [T]$, for any $M\le T$ and any sequential batch learning algorithm, we have:
\begin{align*}
\sup_{\theta^\star: \|\theta^\star\|_2\le 1} \|\theta^\star\|_2\cdot \mathbb{E}_{\theta^\star}[R_T(\pi)] \ge c\cdot d^{3/2} \left(\frac{T}{d^2}\right)^{\frac{1}{M}},
\end{align*}
where $c>0$ is a numerical constant independent of $(T,M,d)$.
\end{enumerate}
\end{theorem}
\begin{corollary}
In this setting, it is necessary and sufficient to have $\Theta(\log(T/d^2))$ batches to achieve the optimal problem-dependent regret $\tilde{\Theta}(d^{3/2} / \|\theta^\star\|_2)$.
Here we are not aiming to get the tightest dependence on $\log T$ (note that $\tilde{\Theta}(\cdot)$ hides polylog factors).
\end{corollary}
Note that the dependence on $T$ is significantly better than $\sqrt{T}$ in the problem-dependent bound, showing that a large $\|\theta^\star\|_2$ makes learning simpler. We remark that although the problem-dependent regret in Theorem \ref{thm.problem-dependent} only holds for $K=2$, the generalization to a generic $K$ is straightforward. Moreover, the margin between the optimal context and the sub-optimal context shrinks quickly as $K$ gets larger, and therefore the margin-based problem-dependent bound is not that useful compared with the worst-case regret bound in Theorem \ref{thm.stochastic} for large $K$.
\subsection{Proof of the Upper Bound in Theorem \ref{thm.problem-dependent}}
The sequential batch learning algorithm which achieves the claimed upper bound is exactly the batched pure-exploitation algorithm with sample splitting shown in Algorithm \ref{algo.sample_splitting}, with a different choice of the grid: we consider a geometric grid ${\mathcal{T}}' = \{t_1', t_2', \cdots, t_M'\}$ with
\begin{align*}
t_1' = bd^2, \qquad t_m' = \lfloor bt_{m-1}' \rfloor, \quad m=2,3,\cdots,M,
\end{align*}
where $b = \Theta((T/d^2)^{1/M})$ so that $t_M' = T$. Next we show that with the above choice of the grid, Algorithm \ref{algo.sample_splitting} attains the regret upper bound in Theorem \ref{thm.problem-dependent}.
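A minimal sketch of this grid construction (our own helper; we take the concrete choice $b=(T/d^2)^{1/M}$ and pin $t_M'=T$ after integer rounding, which is one way to realize $b=\Theta((T/d^2)^{1/M})$):

```python
def geometric_grid(T, d, M):
    """Geometric grid t'_1 = b*d^2, t'_m = floor(b * t'_{m-1}),
    with b = (T/d^2)^(1/M) so that t'_M is approximately T."""
    b = (T / d**2) ** (1.0 / M)
    grid = [int(b * d**2)]              # t'_1 = b d^2 (rounded down)
    for _ in range(2, M + 1):
        grid.append(int(b * grid[-1]))  # t'_m = floor(b t'_{m-1})
    grid[-1] = T                        # pin the final point to the horizon
    return grid
```

Each batch is roughly a factor $b$ longer than the previous one, so the regret incurred in batch $m$ (which scales like $t_m'/t_{m-1}' \approx b = (T/d^2)^{1/M}$) is balanced across batches.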
Consider the $m$-th batch for any $m\ge 2$, and any time point $t$ inside this batch. Define $v_t = x_{t,1} - x_{t,2}$; then our algorithm chooses the wrong arm if and only if $v_t^\top \theta^\star$ and $v_t^\top \hat{\theta}_{m-1}$ have different signs. Hence, the instantaneous regret at time $t$ is
\begin{align*}
v_t^\top \theta^\star \cdot \mathbbm{1}(v_t^\top \theta^\star \ge 0, v_t^\top \hat{\theta}_{m-1} \le 0) - v_t^\top \theta^\star \cdot \mathbbm{1}(v_t^\top \theta^\star \le 0, v_t^\top \hat{\theta}_{m-1} \ge 0),
\end{align*}
and by the symmetry of $v_t\sim {\mathcal{N}}(0,2\Sigma)$, it holds that
\begin{align*}
\mathbb{E}\left[\max_{a\in \{1,2\}} (x_{t,a} - x_{t,a_t})^\top \theta^\star \right] = 2\mathbb{E}\left[v_t^\top \theta^\star\cdot \mathbbm{1}(v_t^\top \theta^\star \ge 0, v_t^\top \hat{\theta}_{m-1} \le 0) \right].
\end{align*}
Set $\delta = \sqrt{d\log T/(\kappa t_{m-1}')}$, and partition the non-negative axis $\mathbb{R}_+$ into $\bigcup_{i=0}^\infty [i\delta, (i+1)\delta)$. Using this partition gives
\begin{align}
&\mathbb{E}\left[v_t^\top \theta^\star\cdot \mathbbm{1}(v_t^\top \theta^\star \ge 0, v_t^\top \hat{\theta}_{m-1} \le 0)\cdot \mathbbm{1}(\|v_t\|_2 \le \sqrt{10\log T}) \right] \nonumber\\
&= \sum_{i=0}^\infty \mathbb{E}\left[v_t^\top \theta^\star\cdot \mathbbm{1}(v_t^\top \theta^\star \in [i\delta, (i+1)\delta), v_t^\top \hat{\theta}_{m-1} \le 0)\cdot \mathbbm{1}(\|v_t\|_2 \le \sqrt{10\log T}) \right] \nonumber\\
&\le \sum_{i=0}^\infty (i+1)\delta\cdot \mathbb{P}\left(v_t^\top \theta^\star \in [i\delta, (i+1)\delta), v_t^\top \hat{\theta}_{m-1} \le 0,\|v_t\|_2 \le \sqrt{10\log T} \right) \nonumber\\
&\le \sum_{i=0}^\infty (i+1)\delta\cdot \mathbb{P}\left(v_t^\top \theta^\star \in [i\delta, (i+1)\delta), v_t^\top (\theta^\star - \hat{\theta}_{m-1}) \ge i\delta,\|v_t\|_2 \le \sqrt{10\log T} \right) \nonumber\\
&\le \sum_{i=0}^\infty (i+1)\delta\cdot \mathbb{P}\left(v_t^\top \theta^\star \in [i\delta, (i+1)\delta) \right)\cdot \mathbb{P}\left( v_t^\top (\theta^\star - \hat{\theta}_{m-1}) \ge i\delta \big| v_t^\top \theta^\star \in [i\delta, (i+1)\delta), \|v_t\|_2 \le \sqrt{10\log T}\right). \label{eq.partition}
\end{align}
We deal with each term in \eqref{eq.partition} separately. For $\mathbb{P}\left(v_t^\top \theta^\star \in [i\delta, (i+1)\delta) \right)$, note that $v_t^\top\theta^\star$ is a normal random variable with variance $2(\theta^\star)^\top \Sigma \theta^\star \ge \lambda_{\min}(\Sigma)\|\theta^\star\|_2^2\ge \kappa\|\theta^\star\|_2^2/d$, thus the probability density of this random variable is upper bounded by $\sqrt{d/(2\pi \kappa)}/\|\theta^\star\|_2$ everywhere. Therefore,
\begin{align}\label{eq.anticoncentration}
\mathbb{P}\left(v_t^\top \theta^\star \in [i\delta, (i+1)\delta) \right) \le \delta\cdot \frac{\sqrt{d}}{\sqrt{2\pi\kappa}\|\theta^\star\|_2}.
\end{align}
For the second term of \eqref{eq.partition}, the proof of Lemma \ref{lemma.difference} shows that the random vector $\theta^\star - \hat{\theta}_{m-1}\in \mathbb{R}^d$ is $d/(c\kappa t_{m-1}')$-subGaussian for some absolute constant $c>0$, and is also independent of $v_t$. Hence, conditioning on $\|v_t\|_2\le \sqrt{10\log T}$, the random variable $v_t^\top(\theta^\star - \hat{\theta}_{m-1})$ is also subGaussian with parameter $\|v_t\|_2^2d/(c\kappa t_{m-1}')\le 10d\log T/(c\kappa t_{m-1}')$. Consequently, subGaussian concentration gives
\begin{align}\label{eq.concentration}
\mathbb{P}\left( v_t^\top (\theta^\star - \hat{\theta}_{m-1}) \ge i\delta \big| v_t^\top \theta^\star \in [i\delta, (i+1)\delta), \|v_t\|_2 \le \sqrt{10\log T}\right) \le \exp\left(-\frac{c\kappa i^2\delta^2t_{m-1}'}{20d\log T}\right).
\end{align}
Combining \eqref{eq.partition}, \eqref{eq.anticoncentration}, \eqref{eq.concentration} and the choice of $\delta$, we conclude that
\begin{align*}
\mathbb{E}\left[v_t^\top \theta^\star\cdot \mathbbm{1}(v_t^\top \theta^\star \ge 0, v_t^\top \hat{\theta}_{m-1} \le 0)\cdot \mathbbm{1}(\|v_t\|_2 \le \sqrt{10\log T}) \right] &\le \frac{d^{3/2}\log T}{\sqrt{2\pi \kappa^3}t_{m-1}'\|\theta^\star\|_2} \sum_{i=0}^\infty (i+1)e^{-ci^2/20} \\
&\le C\cdot \frac{d^{3/2}\log T}{\kappa^{3/2}t_{m-1}'\|\theta^\star\|_2}.
\end{align*}
Moreover, since $v_t^\top \theta^\star \le 2$ almost surely and $\mathbb{P}(\|v_t\|_2\ge \sqrt{10\log T})\le T^{-5}$, we also have
\begin{align*}
\mathbb{E}\left[v_t^\top \theta^\star\cdot \mathbbm{1}(v_t^\top \theta^\star \ge 0, v_t^\top \hat{\theta}_{m-1} \le 0)\cdot \mathbbm{1}(\|v_t\|_2 > \sqrt{10\log T}) \right] \le 2T^{-5}.
\end{align*}
Therefore, by the choice of the grid, the expected total regret in the $m$-th batch is at most
\begin{align*}
\left(C\cdot \frac{d^{3/2}\log T}{\kappa^{3/2}t_{m-1}'\|\theta^\star\|_2} + 2T^{-5}\right)\cdot t_m' = O\left(\frac{d^{3/2}\log T}{\kappa^{3/2}\|\theta^\star\|_2}\cdot \left(\frac{T}{d^2}\right)^{1/M} \right).
\end{align*}
The first batch is handled in the same way as the upper bound proof of Theorem \ref{thm.stochastic}. Specifically, the expected total regret in the first batch is
\begin{align*}
O\left( t_1'\cdot \sqrt{\frac{\log T}{d}} \right) = O\left(\sqrt{d^3\log T}\left(\frac{T}{d^2}\right)^{\frac{1}{M}}\right) = O\left(\frac{\sqrt{d^3\log T}}{\kappa^{3/2}\|\theta^\star\|_2}\left(\frac{T}{d^2}\right)^{\frac{1}{M}} \right).
\end{align*}
Finally summing up all batches $m=1,2,\cdots,M$ completes the proof.
\subsection{Proof of the Lower Bound in Theorem \ref{thm.problem-dependent}} The proof is entirely analogous to the lower bound proof of Theorem \ref{thm.stochastic}. First we observe that by Lemma \ref{lemma.lower_bound}, for any $\Delta \in [0,1]$ and fixed grid ${\mathcal{T}} = \{t_1, t_2, \cdots, t_M\}$ we have
\begin{align*}
\inf_\pi \sup_{\theta^\star: \|\theta^\star\|_2\le 1} \|\theta^\star\|_2\cdot \mathbb{E}_{\theta^\star}[R_T(\pi)] &\ge \Delta\cdot \inf_\pi \sup_{\theta^\star: \Delta\le \|\theta^\star\|_2\le 1} \mathbb{E}_{\theta^\star}[R_T(\pi)] \\
&\ge \Delta^2 \cdot\sum_{m=1}^M \frac{t_m-t_{m-1}}{10\sqrt{d}}\exp\left(-\frac{16t_{m-1}\Delta^2}{d^2}\right).
\end{align*}
Now define $s = \min\{m\in [M]: t_m \ge d^2 \}$, which always exists due to the assumption $T\ge d^2$. Choosing $\Delta \in \{1,d/\sqrt{t_s},d/\sqrt{t_{s+1}},\cdots,d/\sqrt{t_{M-1}}\} \subseteq [0,1]$ in the above inequality gives
\begin{align*}
\inf_\pi \sup_{\theta^\star: \|\theta^\star\|_2\le 1} \|\theta^\star\|_2\cdot \mathbb{E}_{\theta^\star}[R_T(\pi)] \ge c\cdot \max\left\{\frac{t_s}{\sqrt{d}}, \frac{d^{3/2}t_{s+1}}{t_s}, \frac{d^{3/2}t_{s+2}}{t_{s+1}},\cdots, \frac{d^{3/2}T}{t_{M-1}} \right\}
\end{align*}
for some absolute constant $c>0$. Finally, applying $\max\{a_1,\cdots,a_n\} \ge \sqrt[n]{a_1a_2\cdots a_n}$ gives
\begin{align*}
\inf_{\pi} \sup_{\theta^\star: \|\theta^\star\|_2\le 1} \|\theta^\star\|_2\cdot \mathbb{E}_{\theta^\star}[R_T(\pi)] \ge c\cdot d^{3/2}\left(\frac{T}{d^2}\right)^{\frac{1}{M-s+1}} \ge c\cdot d^{3/2}\left(\frac{T}{d^2}\right)^{\frac{1}{M}},
\end{align*}
as claimed.
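For the reader's convenience, the telescoping product behind the last step can be checked directly (our own verification):

```latex
\begin{align*}
\frac{t_s}{\sqrt{d}}\cdot \prod_{j=s}^{M-1} \frac{d^{3/2}\,t_{j+1}}{t_j}
= \frac{t_s}{\sqrt{d}}\cdot d^{\frac{3(M-s)}{2}}\cdot \frac{T}{t_s}
= d^{\frac{3(M-s)-1}{2}}\, T.
\end{align*}
```

Taking the $(M-s+1)$-th root of this product of $M-s+1$ terms gives $d^{\frac{3(M-s)-1}{2(M-s+1)}}T^{\frac{1}{M-s+1}} = d^{3/2}(T/d^2)^{\frac{1}{M-s+1}}$, as used above.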
\section{Conclusion}
As we have shown in this paper, sequential batch learning provides an interesting and nontrivial departure from the traditional online learning setting, where feedback is immediately observed and incorporated into making the next decision. We studied sequential batch learning in the linear contextual bandits setting and provided an in-depth inquiry into the algorithms and their theoretical performance. An important insight is that the nature of the contexts (adversarial or stochastic) has a significant impact on the optimal achievable performance, as well as on the algorithms that achieve the minimax optimal regret bounds.
Several questions immediately suggest themselves.
First, in the stochastic context setting, our current regret upper bound depends heavily on the Gaussian assumption on the contexts. It would be interesting to see how far we can move beyond the Gaussian family. It is unlikely that the same result holds for any distribution; hence, characterizing a (hopefully large) class of distributions under which the same tight bounds are achievable would be interesting.
Another direction would be to look at more complex reward structures that go beyond linear bandits and see to what extent the current set of results can be generalized. We leave these questions for future work.
\section{Datasets and their Templates}
\label{sec:appendix:analysis}
\subsection{Division of Crowdsourcing Instructions into Minimal Tasks}
\label{sec:appendix:division:screenshots}
Fig.~\ref{fig:subtsakdivision} shows an example of how a task is divided into multiple subtasks for the MC-TACO dataset. MC-TACO has five categories (Event Duration, Event Frequency, etc.). Each category contributes two subtasks: one for question generation and one for answer generation.
\paragraph{Number of tasks in each dataset.}
Fig.~\ref{fig:no. of subtasks} illustrates how the number of steps in the data creation process varies across the 6 datasets. QASC and MC-TACO contain a relatively higher number of steps in the data creation process in comparison to DROP, Quoref, CosmosQA, and Winogrande.
\begin{figure}[H]
\centering
\includegraphics[scale=0.28,trim=2cm 4cm 0cm 3cm]{figures/Number_of_subtasks.pdf}
\caption{Variations in the number of subtasks}
\label{fig:no. of subtasks}
\end{figure}
\subsection{Analysis of Crowdsourcing Templates}
We analyzed crowdsourcing templates of 6 datasets: CosmosQA~\cite{huang2019cosmos},
DROP~\cite{dua2019drop},
MC-TACO~\cite{zhou2019going},
QASC~\cite{khot2020qasc},
Quoref~\cite{dasigi2019quoref}, and
Winogrande~\cite{sakaguchi2020winogrande}. Our intention behind the analysis is to identify similarities and differences across templates and subsequently decide regarding the collection of more templates.
\label{appendix:analysis:templates}
\paragraph{Size of the instructions.} We observe significant variation in size across the 6 datasets (Fig.~\ref{fig:size inst}). In the case of QASC, the instruction size associated with each step of the data creation process is very high, whereas for Winogrande it is exactly the opposite: the instruction size associated with each step is very low. Instead, the size of the common instruction (i.e., the instruction preceding the first step of the data creation process) is high in Winogrande; this is also seen for DROP. The major mode of instruction varies across datasets. Examples and instructions associated with each step of data creation, respectively, take up the majority of space in Quoref and CosmosQA. MC-TACO relies on examples to explain the crowdsourcing task, while Winogrande and QASC depend mostly on common instructions and on instructions associated with each step of the data creation process, respectively, to explain the task to the crowdworker.
\paragraph{The number of positive/negative examples.}
Variation in the occurrence of \textsc{Positive} and \textsc{Negative} Examples across datasets has been illustrated in Fig.~\ref{fig:no. of examples}. Only Winogrande provides an equal number of \textsc{Positive} and \textsc{Negative} Examples.
QASC instructions do not contain any \textsc{Negative} Examples.
Overall, DROP instructions consist of a relatively higher number of examples than other datasets.
\begin{figure}[H]
\centering
\includegraphics[width=0.96\columnwidth ]{figures/example_num.png}
\caption{Variation in the number of positive and negative examples}
\label{fig:no. of examples}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.96\columnwidth]{figures/instruction_size.pdf}
\caption{Variation in the number of sentences in the crowdsourcing instructions across datasets}
\label{fig:size inst}
\end{figure}
\paragraph{Presence of reasons/suggestions in examples.} All datasets except QASC contain both \textsc{Positive} and \textsc{Negative} Examples.
However, Quoref is the only dataset to provide \textsc{Reasons} for all the \textsc{Positive} and \textsc{Negative} Examples. There are explanations associated with each of the \textsc{Negative} Examples, but the presence of explanations associated with \textsc{Positive} Examples varies across datasets. Finally, Quoref is the only dataset to provide \textsc{Suggestions} along with the \textsc{Reasons} associated with the \textsc{Negative} Examples.
\begin{comment}
\paragraph{Dimensions of Input and Output:}The input dimension of a step is defined as the number of previous step outputs that are fed as input. Parallely, the output dimension of a step is the number of distinct outputs the model needs to produce in that step-- for example, if a model has to generate both a question and an answer in a step, the output dimension will be 2. CosmosQA and QASC have relatively high dimensional instances, whereas Quoref and MC-TACO have relatively low dimensional instances.
\end{comment}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,trim=0cm 0cm 0cm 0cm]{figures/sub-task.pdf}
\caption{
Dividing a data creation task into multiple subtasks for the MC-TACO dataset.
}
\label{fig:subtsakdivision}
\end{figure}
\subsection{Qualitative Analysis}
\paragraph{Writing Style.} There is significant variation in writing style across the datasets, even among those datasets that have a common objective (e.g., DROP, Quoref, and QASC).
DROP instructions say \textit{"There is an AI running in the background which will also try to answer the question. You won't be able to submit the question if the AI gives the same response."} The writing style in Quoref however is different: \textit{"We also want you to avoid questions that can be answered correctly by someone without actually understanding the paragraph. ..."}
\paragraph{Information.} We observe that sometimes instructions of a dataset contain information that is relevant to several other datasets, which do not contain similar instruction information. For example, Quoref, DROP and CosmosQA are datasets that are all based on reading comprehension tasks. CosmosQA contains a step in the data creation process asking users to skip passages containing inappropriate or offensive content. This information is also relevant to Quoref and DROP, but is not mentioned in their respective instructions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.36,trim=0.1cm 0.1cm 0.1cm 0.1cm]{figures/Task_specification.pdf}
\caption{Variation in Task Specification: Quoref contains a single-line instruction, whereas CosmosQA contains a detailed instruction. QASC, on the other hand, contains examples along with instructions.}
\label{fig:task_specification}
\end{figure}
\paragraph{Hardness.} In a typical crowdsourcing setup, certain tasks are harder than others; these are often the core tasks, e.g., question generation and adversarial data creation. Additional information, especially in the form of tips, is always helpful in solving these hard tasks. Figure~\ref{fig:task_specification} illustrates that the task of question generation is stated differently in Quoref, CosmosQA, and QASC. QASC mentions an easy and detailed way to create questions, whereas CosmosQA mentions several different attributes of a good quality question. Knowing about the CosmosQA and QASC question generation processes may help with data creation for Quoref and other such question generation tasks, where less additional information is provided regarding question creation.
\subsection{Data Curation Effort}
\label{appendix:subsect:curation}
Table \ref{tab:datacuration} shows the effort distribution in the data curation process of \textsc{Natural Instructions}{}. Step 6, which involves parsing instances, is the main bottleneck in the data curation process. Table \ref{tab:structure} shows the detailed structure of tasks in \textsc{Natural Instructions}{}. Fig.~\ref{fig:examplesfull} shows examples of four different tasks in \textsc{Natural Instructions}{}.
\begin{table}[h]
\centering
\footnotesize
\begin{tabular}{m{0.5cm}p{4.5cm}p{1.5cm}}
\toprule
step & task & time per task \\
\midrule
1 & Identify crowdsourced dataset and engage with their authors. & 20-30 mins \\
2 & Go through the template and understand the task. & 10-15 mins \\
3 & Manually fill fields in the schema with content from the template. & 30-45 mins \\
4 & Iterate over the instructions to ensure their clarity while eliminating repeated content. Fix writing issues in examples, typos, etc.
& 2-3 hrs\\
5 & Create negative examples if not present. Add the missing explanations to the examples. & 1-2 hrs \\
6 & Extract the input/output instances from raw crowdsourcing annotations. & 0.5-24 hrs \\
7 & Final inspections of the data to verify the data quality
& 0.25-2 hrs \\
\midrule
& Overall & 6-34 hrs\\
\bottomrule
\end{tabular}
\caption{Steps taken to curate each task in \textsc{Natural Instructions}{} and their estimated times.
}
\label{tab:datacuration}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.75,trim=0.7cm 0.5cm 0.5cm 1.5cm]{figures/examples_detailed.pdf}
\caption{
Examples from \textsc{Natural Instructions}{}.
Each task follows the schema provided in Fig.~\ref{fig:schema_plate}.
}
\label{fig:examplesfull}
\end{figure*}
\begin{table*}
\centering
\small
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{llcc}
\toprule
task id & title& source dataset & task category\\
\midrule
1 & task001\_quoref\_question\_generation & Quoref & Question Generation \\
2 & task002\_quoref\_answer\_generation & Quoref & Answer Generation \\
\midrule
3 & task003\_mctaco\_question\_generation\_event\_duration & MC-TACO & Question Generation \\
4 & task004\_mctaco\_answer\_generation\_event\_duration & MC-TACO & Answer Generation \\
5 & task005\_mctaco\_wrong\_answer\_generation\_event\_duration & MC-TACO & Incorrect Answer Generation \\
6 & task006\_mctaco\_question\_generation\_transient\_stationary & MC-TACO & Question Generation \\
7 & task007\_mctaco\_answer\_generation\_transient\_stationary & MC-TACO & Answer Generation \\
8 & task008\_mctaco\_wrong\_answer\_generation\_transient\_stationary & MC-TACO & Incorrect Answer Generation \\
9 & task009\_mctaco\_question\_generation\_event\_ordering & MC-TACO & Question Generation \\
10 & task010\_mctaco\_answer\_generation\_event\_ordering & MC-TACO & Answer Generation \\
11 & task011\_mctaco\_wrong\_answer\_generation\_event\_ordering & MC-TACO & Incorrect Answer Generation \\
12 & task012\_mctaco\_question\_generation\_absolute\_timepoint & MC-TACO & Question Generation \\
13 & task013\_mctaco\_answer\_generation\_absolute\_timepoint & MC-TACO & Answer Generation \\
14 & task014\_mctaco\_wrong\_answer\_generation\_absolute\_timepoint & MC-TACO & Incorrect Answer Generation \\
15 & task015\_mctaco\_question\_generation\_frequency & MC-TACO & Question Generation \\
16 & task016\_mctaco\_answer\_generation\_frequency & MC-TACO & Answer Generation \\
17 & task017\_mctaco\_wrong\_answer\_generation\_frequency & MC-TACO & Incorrect Answer Generation \\
18 & task018\_mctaco\_temporal\_reasoning\_presence & MC-TACO & Classification \\
19 & task019\_mctaco\_temporal\_reasoning\_category & MC-TACO & Classification \\
20 & task020\_mctaco\_span\_based\_question & MC-TACO & Classification \\
21 & task021\_mctaco\_grammatical\_logical & MC-TACO & Classification \\
\midrule
22 & task022\_cosmosqa\_passage\_inappropriate\_binary & Cosmosqa & Classification \\
23 & task023\_cosmosqa\_question\_generation & Cosmosqa & Question Generation \\
24 & task024\_cosmosqa\_answer\_generation & Cosmosqa & Answer Generation \\
25 & task025\_cosmosqa\_incorrect\_answer\_generation & Cosmosqa & Incorrect Answer Generation \\
\midrule
26 & task026\_drop\_question\_generation & DROP & Question Generation \\
27 & task027\_drop\_answer\_type\_generation & DROP & Classification \\
28 & task028\_drop\_answer\_generation & DROP & Answer Generation \\
\midrule
29 & task029\_winogrande\_full\_object & Winogrande & Minimal Text Modification \\
30 & task030\_winogrande\_full\_person & Winogrande & Minimal Text Modification \\
31 & task031\_winogrande\_question\_generation\_object & Winogrande & Question Generation \\
32 & task032\_winogrande\_question\_generation\_person & Winogrande & Question Generation \\
33 & task033\_winogrande\_answer\_generation & Winogrande & Answer Generation \\
34 & task034\_winogrande\_question\_modification\_object & Winogrande & Minimal Text Modification \\
35 & task035\_winogrande\_question\_modification\_person & Winogrande & Minimal Text Modification \\
\midrule
36 & task036\_qasc\_topic\_word\_to\_generate\_related\_fact & QASC & Minimal Text Modification \\
37 & task037\_qasc\_generate\_related\_fact & QASC & Minimal Text Modification \\
38 & task038\_qasc\_combined\_fact & QASC & Minimal Text Modification \\
39 & task039\_qasc\_find\_overlapping\_words & QASC & Verification \\
40 & task040\_qasc\_question\_generation & QASC & Question Generation \\
41 & task041\_qasc\_answer\_generation & QASC & Answer Generation \\
42 & task042\_qasc\_incorrect\_option\_generation & QASC & Incorrect Answer Generation \\
\midrule
43 & task043\_essential\_terms\_answering\_incomplete\_questions & Essential Terms & Answer Generation \\
44 & task044\_essential\_terms\_identifying\_essential\_words & Essential Terms & Verification \\
\midrule
45 & task045\_miscellaneous\_sentence\_paraphrasing & Miscellaneous & Minimal Text Modification \\
46 & task046\_miscellaenous\_question\_typing & Miscellaneous & Classification \\
47 & task047\_miscellaenous\_answering\_science\_questions & Miscellaneous & Answer Generation \\
\midrule
48 & task048\_multirc\_question\_generation & MultiRC & Question Generation \\
49 & task049\_multirc\_questions\_needed\_to\_answer & MultiRC & Classification \\
50 & task050\_multirc\_answerability & MultiRC & Classification \\
51 & task051\_multirc\_correct\_answer\_single\_sentence & MultiRC & Answer Generation \\
52 & task052\_multirc\_identify\_bad\_question & MultiRC & Classification \\
53 & task053\_multirc\_correct\_bad\_question & MultiRC & Minimal Text Modification \\
54 & task054\_multirc\_write\_correct\_answer & MultiRC & Answer Generation \\
55 & task055\_multirc\_write\_incorrect\_answer & MultiRC & Incorrect Answer Generation \\
56 & task056\_multirc\_classify\_correct\_answer & MultiRC & Classification \\
57 & task057\_multirc\_classify\_incorrect\_answer & MultiRC & Classification \\
58 & task058\_multirc\_question\_answering & MultiRC & Answer Generation \\
\midrule
59 & task059\_ropes\_story\_generation & ROPES & Minimal Text Modification \\
60 & task060\_ropes\_question\_generation & ROPES & Question Generation \\
61 & task061\_ropes\_answer\_generation & ROPES & Answer Generation \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Detailed set of tasks included in \textsc{Natural Instructions}{}}
\label{tab:structure}
\end{table*}
\clearpage
\onecolumn
\changed{
\subsection{Qualitative Comparison to PromptSource}
\label{subsec:promptsource}
We provide a comparison between our proposed dataset and PromptSource~\cite{sanh2021multitask}.
PromptSource tasks mainly focus on common NLP downstream tasks (such as question answering, coreference resolution, and NLI).
However, since we create tasks from various steps (including the intermediate steps) in a data creation process, our instructions contain a broader variety of tasks. For example, tasks for chaining facts (task 38; Table~\ref{tab:structure}), question typing (task 27; Table~\ref{tab:structure}) or detecting inappropriate content (task 22; Table~\ref{tab:structure}) are unique additions in \textsc{Natural Instructions}{}.
Additionally, since our instructions were originally written by various researchers and targeted at crowdworkers, they are elaborate and contain the complete definition of each task.
This is somewhat evident from the observation that GPT3 achieves higher performance on our instructions (Table~\ref{tab:prompt:source:gpt3:eval}).
Last but not least, since we represent the instructions in a structured format, we are able to ablate various elements of the instructions (definition, negative/positive examples, etc.) and empirically quantify their contributions (\S\ref{sec:experiments}).
}
\begin{table}[h]
\centering
\small
\begin{tabular}{clcc}
\toprule
Task & Model & PromptSource & \textsc{Natural Instructions}{} \\
\midrule
\multirow{2}{*}{ Quoref QA (002) } & GPT3-Instruct & 43 & {\bf 47} \\
& GPT3 & 2 & {\bf 13} \\
\multirow{2}{*}{ DROP QA (028) } & GPT3-Instruct & 6 & {\bf 10} \\
& GPT3 & 2 & {\bf 3} \\
\bottomrule
\end{tabular}
\caption{
Comparing zero-shot performance of GPT3 on our instructions vs. PromptSource.
The instructions curated in this work, despite being lengthier, lead to higher performance.
}
\label{tab:prompt:source:gpt3:eval}
\end{table}
\begin{table*}[h]
\centering
\includegraphics[scale=0.88,trim=1.4cm 13.4cm 1.2cm 1.85cm,clip=true]{figures/comparisonWithPromptSource-3.pdf}
\caption{Qualitative comparison of the task instructions for several shared tasks among \textsc{Natural Instructions}{} and PromptSource~\cite{sanh2021multitask}.}
\label{tab:prompt:source}
\end{table*}
\twocolumn
\clearpage
\section{Building Baselines for \textsc{Natural Instructions}{}}
In this section, we provide several details on the baselines included in our work.
\subsection{Encoding of the instructions}
\label{appendix:subsect:encoding}
According to our schema (\S\ref{subsec:schema}), each instruction $I_t$ for the $t$-th task is a set that contains the following fields:
$$
I_t = \setOf{
\I{t}{title},
\I{t}{def.},
\I{t}{avoid},
\I{t}{emph.},
\I{t}{prompt},
\I{t}{pos. ex.},
\I{t}{neg. ex.}
}
$$
To feed the instances to LMs, we first encode them into plain text.
Let $enc(I, x)$ be a function that maps a given instruction $I$ and input instance $x$ to plain text.
There are many possible choices for this function.
In our study, we consider the following encodings:
\paragraph{\textsc{No-instructions} encoding.}
This encoding is the conventional paradigm where no instructions exist:
\begin{equation} \label{eq1}
\small
\begin{split}
enc(I_t, x) := & \textnormal{``}\mathtt{\small input:} \; x \\
& \mathtt{\small output:} \textnormal{''}
\end{split}
\end{equation}
\paragraph{\textsc{Prompt} encoding.}
In this encoding, we append the prompt message before the input:
\begin{equation} \label{eq2}
\small
\begin{split}
enc(I_t, x) := & \textnormal{``}\mathtt{\small Prompt:} \; \I{t}{prompt} \\
& \mathtt{\small input:} \; x \\
& \mathtt{\small output:} \textnormal{''}
\end{split}
\end{equation}
\paragraph{\textsc{Prompt + Definition} encoding.}
In this encoding, the prompt message and the task definition appear before the input:
\begin{equation} \label{eq3}
\small
\begin{split}
enc(I_t, x) := & \textnormal{``}\mathtt{\small Definition:} \; \I{t}{def.} \\
& \mathtt{\small Prompt:} \; \I{t}{prompt} \\
& \mathtt{\small input:} \; x \\
& \mathtt{\small output:} \textnormal{''}
\end{split}
\end{equation}
Intuitively, this encoding is more informative and more complex than ``prompt'' encoding.
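As a concrete illustration, the three encodings above can be sketched as a single function. This is a minimal sketch, not the exact implementation; the dictionary keys and the precise formatting (newlines, capitalization) are illustrative assumptions.

```python
# Hypothetical sketch of enc(I_t, x) for the first three encodings.
# The instruction field names ("definition", "prompt") are assumptions,
# not necessarily the exact schema keys of the dataset release.

def encode(instruction: dict, x: str, mode: str = "no-instructions") -> str:
    """Map an instruction dict and an input instance to plain text."""
    parts = []
    if mode in ("prompt", "prompt+definition"):
        if mode == "prompt+definition":
            parts.append(f"Definition: {instruction['definition']}")
        parts.append(f"Prompt: {instruction['prompt']}")
    parts.append(f"input: {x}")
    parts.append("output:")  # the LM continues the text from here
    return "\n".join(parts)
```

The model is then expected to generate the output as a continuation of the text after ``output:''.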
\paragraph{\textsc{Full Instructions} encoding.}
This encoding contains all the instruction content:
\begin{equation}
\label{eq4}
\small
\begin{split}
enc(I_t, x) :=
& \textnormal{``}\mathtt{\small Definition:} \; \I{t}{def.} \\
& \mathtt{\small Prompt:} \; \I{t}{prompt} \\
& \mathtt{\small Things \; to \; Avoid:} \; \I{t}{avoid} \\
& \mathtt{\small Emphasis \& Caution:} \; \I{t}{emph.} \\
& \mathtt{\small Negative Example1-} \\
& \hspace{0.7cm} \mathtt{\small input:} \; \I{t}{neg. ex.}\mathtt{(input)} \\
& \hspace{0.7cm} \mathtt{\small output:} \; \I{t}{neg. ex.}\mathtt{(output)} \\
& \hspace{0.7cm} \mathtt{\small reason:} \; \I{t}{neg. ex.}\mathtt{(reason)} \\
& \mathtt{\small Negative Example2-} \\
& \hspace{0.7cm} \mathtt{\small \hdots } \\
& \mathtt{\small Positive Example1-} \\
& \hspace{0.7cm} \mathtt{\small input:} \; \I{t}{pos. ex.}\mathtt{(input)} \\
& \hspace{0.7cm} \mathtt{\small output:} \; \I{t}{pos. ex.}\mathtt{(output)} \\
& \hspace{0.7cm} \mathtt{\small reason:} \; \I{t}{pos. ex.}\mathtt{(reason)} \\
& \mathtt{\small Positive Example2-} \\
& \hspace{0.7cm} \mathtt{\small \hdots } \\
& \mathtt{\small input:} \; x \\
& \mathtt{\small output:} \textnormal{''}
\end{split}
\end{equation}
The positive and negative examples are encoded in an alternating fashion, and we include as many examples as possible before exceeding the input length limit.
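The example-packing step (``as many examples as possible, before exceeding the input limit'') can be sketched as follows. This is a simplified sketch: the whitespace token count stands in for a real tokenizer limit, and the example field names are assumptions.

```python
# Hypothetical sketch of packing alternating negative/positive examples
# under a length budget. `budget` is measured in whitespace-separated
# tokens, a crude stand-in for the actual LM tokenizer limit.

def pack_examples(pos: list, neg: list, budget: int) -> str:
    """Append alternating example blocks while the token count fits."""
    out, used = [], 0
    for i, (p, n) in enumerate(zip(pos, neg), start=1):
        block = (
            f"Negative Example{i}-\n"
            f"  input: {n['input']}\n  output: {n['output']}\n  reason: {n['reason']}\n"
            f"Positive Example{i}-\n"
            f"  input: {p['input']}\n  output: {p['output']}\n  reason: {p['reason']}\n"
        )
        cost = len(block.split())
        if used + cost > budget:  # stop before exceeding the input limit
            break
        out.append(block)
        used += cost
    return "".join(out)
```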
\begin{comment}
\newcommand{\mathrel{+}=}{\mathrel{+}=}
\begin{equation*}
\begin{split}
& \mathtt{for \; (p, n) \; in \; zip(} \I{t}{pos. ex.}, \I{t}{neg. ex.} \mathtt{):} \\
& \hspace{0.5cm} enc_{\textnormal{ex}} (I_t) \mathrel{+}= \\
& \hspace{0.99cm} \textnormal{``}\mathtt{\small Positive Example-} \\
& \hspace{0.99cm} \mathtt{\small input:} \; \mathtt{p}_{\textnormal{\tiny input}} \; \mathtt{\small output:} \; \mathtt{p}_{\textnormal{\tiny output}} \\
& \hspace{0.99cm} \mathtt{\small reason:} \; \mathtt{p}_{\textnormal{\tiny reason}} \\
& \hspace{0.99cm} \mathtt{\small Negative Example-} \\
& \hspace{0.99cm} \mathtt{\small input:} \; \mathtt{n}_{\textnormal{\tiny input}} \; \mathtt{\small output:} \; \mathtt{n}_{\textnormal{\tiny output}} \\
& \hspace{0.99cm} \mathtt{\small reason:} \; \mathtt{n}_{\textnormal{\tiny reason}} \;
\mathtt{\small suggestion:} \; \mathtt{n}_{\textnormal{\tiny sugg.}}
\textnormal{''}
\end{split}
\end{equation*}
\end{comment}
\paragraph{\textsc{Positive Examples} encoding.}
This encoding contains only the positive examples of the task (no definition, prompt, etc.):
\begin{equation}
\label{eq5}
\small
\begin{split}
enc(I_t, x) :=
& \hspace{0.7cm} \mathtt{\small input:} \; \I{t}{pos. ex.}\mathtt{(input)} \\
& \hspace{0.7cm} \mathtt{\small output:} \; \I{t}{pos. ex.}\mathtt{(output)} \\
& \hspace{0.7cm} \mathtt{\small \hdots } \\
& \mathtt{\small input:} \; x \\
& \mathtt{\small output:} \textnormal{''}
\end{split}
\end{equation}
Such example-only encodings have been used in several recent studies in the field~\cite{zhao2021calibrate}.
\clearpage
\section{Analysis on Baseline Results}
\label{sec:appendix:banalysis}
\changed{
\subsection{Comparison to Raw Instructions}
\label{subsec:efratlevycomparison}
We seek to understand the value of breaking the tasks into sub-tasks and mapping them into our proposed schema (\S\ref{sec:mapping}).
We compute the performance of GPT3 on the raw instructions (the first sub-task of four datasets),
in the same vein as the setup of
\citet{efrat2020turking}.
We compare this to our \textsc{Full Instruction - neg examples} encoding.
The results in Table~\ref{tab:comparison:raw:instructions} indicate that GPT3 leads to higher performance with our encoding (2nd row) compared to raw instructions (first row).
The weak performance of LMs on raw instructions aligns with the finding of \citet{efrat2020turking} that the ``language model performs poorly''.
\newcolumntype{R}[2]{%
>{\adjustbox{angle=#1,lap=\width-(#2)}\bgroup}%
l%
<{\egroup}%
}
\newcommand*\rot{\multicolumn{1}{R{30}{1em}}}
\begin{table}[h]
\small
\centering
\begin{tabular}{ccccc}
\toprule
& \rot{Quoref} & \rot{MCTaco} & \rot{CosmosQA} & \rot{QASC} \\
\midrule
\makecell{raw instructions} & 12.5 & 5.0 & 6.9 & 3.7 \\
\makecell{our schema} & 25.8 & 42.6 & 17.7 & 51.3 \\
\bottomrule
\end{tabular}
\caption{Comparing GPT3 performance on raw crowdsourcing instructions vs. our encoding. All numbers are ROUGE-L.}
\label{tab:comparison:raw:instructions}
\end{table}
This might be partly due to the verbose language of the raw instructions:
the average length of the raw instructions is $2.5k$ tokens, compared to $950$ tokens for our encoding.
While repetition often helps human understanding, concise instructions seem to be more effective for computers.
}
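The ROUGE-L numbers in Table~\ref{tab:comparison:raw:instructions} are based on the longest common subsequence (LCS) between a prediction and a reference. A minimal sketch of the F-measure variant follows, assuming whitespace tokenization and $\beta=1$; the exact evaluation script used may differ in tokenization and aggregation.

```python
# Hypothetical ROUGE-L (F1) sketch: LCS length normalized by the
# prediction and reference lengths. Whitespace tokenization is an
# assumption; real ROUGE implementations normalize text further.

def rouge_l(prediction: str, reference: str) -> float:
    p, r = prediction.split(), reference.split()
    # Classic dynamic-programming LCS table.
    dp = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i in range(1, len(p) + 1):
        for j in range(1, len(r) + 1):
            if p[i - 1] == r[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(p), lcs / len(r)
    return 2 * prec * rec / (prec + rec)  # F1 (beta = 1)
```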
\begin{comment}
\subsection{An Ablation Study of Instructional Elements}
\label{sec:ablation:study}
We conduct an ablation study with GPT3 on 3 distinct tasks (answer generation from Winogrande; question generation from QASC; verifying temporal reasoning category of a given question from MC-TACO).
Table~\ref{tab:ablation:subset} (top) shows the effect of eliminating various fields in the encoding while Table~\ref{tab:ablation:subset} (bottom) indicates the gains from adding each field.
The overall observation is that GPT3 benefits the most from \emph{positive examples}, mildly from \emph{definition}, and deteriorates with \emph{negative examples}.
We hypothesize it is easier for GPT3 to mimic the patterns in positive examples while utilizing \emph{negative examples} requires deeper understanding.
\begin{table}[h]
\centering
\includegraphics[scale=0.73,trim=8.7cm 5.7cm 2cm 1.9cm]{figures/ablation-subset.pdf}
\caption{An ablation study of the different fields included in \textsc{Natural Instructions}{} based on GPT3. This model benefits the most from \textsc{positive} examples and the least from \textsc{negative} examples.
}
\label{tab:ablation:subset}
\end{table}
\newpage
\end{comment}
\begin{comment}
\begin{table}[ht]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcc}
\toprule
error type & GPT3 & BART \\
\midrule
does not follow instruction and generate an invalid question & 14 & 8\\
generates a nonsensical/vague question & 4 & 47\\
copies the given fact or a subset of it & 8 & 3 \\
explains the question after generating it & 6 & 0\\
generates a yes/no question & 12 & 4\\
generates candidate answers as output &4 & 0\\
generates questions whose answer does not exist &4 &3\\
\makecell[l]{generates generic questions independent\\ of the given context} &6 &0\\
\bottomrule
\end{tabular}
}
\caption{
Percentage of errors on QASC QG task (\S\ref{sec:error:analysis}).
The numbers do not sum to 100 since the error types are not mutually exclusive.
}
\label{Tab: Error Analysis}
\end{table}
\end{comment}
\begin{comment}
\begin{table*}[t]
\small
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccccc|l||cccccc|l}
\toprule
& \multicolumn{7}{c}{BART} & \multicolumn{7}{c}{GPT3} \\
\cmidrule(r){2-8} \cmidrule(r){9-15}
task category → & QG & AG & CF & IAG & MM & VF & avg & QG & AG & CF & IAG & MM & VF & avg \\
\midrule
\textsc{No Instruction} & 26 & 6 & 0 & 21 & 33 & 7 & 13 & - & - & - & - & - & - & - \\
\midrule
\textsc{prompt} & 27 & 22 & 7 & 22 & 34 & \textbf{9} & 20 & 33 & 32 & 14 & 13 & \textbf{73} & 16 & 30 \\
{\ \ \ +\textsc{definition}} & 35 & 24 & 50 & \textbf{25} & 36 & 7 & 30$\uparrow$ (+50) & 36 & 35 & 40 & 14 & 70 & 16 & 35$\uparrow$ (+17)\\
{ \ \ \ +\textsc{things to avoid}} & 33 & 24 & 4 & 24 & \textbf{58} & \textbf{9} & 25$\uparrow$ (+25) & 28 & 33 & 11 & 16 & 68 & 14 & 28$\downarrow$ (-7) \\
{\ \ \ +\textsc{emphasis}} & 38 & 23 & 16 & \textbf{26} & 49 & 3 & 26$\uparrow$ (+30) & 29 & 28 & 18 & 16 & 72 & 16 & 30 \\
{\ \ \ +\textsc{pos. examp.}} & 53 & 22 & 14 & \textbf{25} & 17 & 7 & 23$\uparrow$ (+15) & \textbf{43} & 49 & 29 & 21 & 70 & \textbf{36} & 41$\uparrow$ (+37) \\
{\ \ \ +\textsc{definition+pos. examp.}} & 51 & 23 & \textbf{56} & \textbf{25} & 37 & 6 & 33$\uparrow$ (+65) & \textbf{43} & 50 & \textbf{45} & \textbf{23} & 70 & 32 & \textbf{44}$\uparrow$(+47) \\
{\ \ \ +\textsc{pos, neg ex+ explan.}} & 50 & 21 & 27 & 25 & 50 & 7 & 30 $\uparrow$ (+50) & 32 & 19 & 8 & 12 & 61 & 13 & 24$\downarrow$(-20) \\
\textsc{pos. examp.} & \textbf{55} & 6 & 18 & \textbf{25} & 8 & 6 & 20 & 30 & 32 & 15 & 16 & 68 & 23 & 31$\uparrow$(+3) \\
\midrule
\textsc{Full Instruction} & 46 & 25 & 52 & 25 & 35 & 7 & 32$\uparrow$ (+60) & 33 & 18 & 8 & 12 & 60 & 11 & 24$\downarrow$(-20) \\
{\ \ \ -\textsc{ examples }} & 40 & 24 & 36 & 25 & 55 & 8 & 31$\uparrow$ (+55) & 31 & 34 & 39 & 14 & 69 & 13 & 33$\uparrow$(+10) \\
{\ \ \ - \textsc{neg. examp.}} & 52 & \textbf{30} & 50 & \textbf{25} & 47 & 8 & \textbf{35}$\uparrow$ (+75) & \textbf{43} & \textbf{54} & 44 & 21 & 70 & 32 & \textbf{44}$\uparrow$(+47) \\
\bottomrule
\end{tabular}
}
\caption{
Full BART and GPT3 results with various input encodings for different task categories, under random split (\S\ref{subsec:split}).
Both models show improved results when encoded with instructions, comparing relative gains indicated in the `avg' columns (in percentage compared to \textsc{prompt} encoding.)
Category names: QG: Question Generation, AG: Answer Generation, CF: Classification, IAG: Incorrect Answer Generation, MM: Minimal Text Modification, VF: Verification.
}
\label{tab:random:splitfull2}
\end{table*}
\end{comment}
\begin{comment}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\caption{GPT3}
\includegraphics[scale=0.62,trim=0cm 0cm 0cm 0.1cm ]{figures/gains-categories-gpt.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\caption{BART}
\includegraphics[scale=0.62,trim=0cm 0cm 0cm 0.1cm ]{figures/gains-categories-bart.pdf}
\end{subfigure}
\caption{
GPT3 and BART were evaluated with various encoding and various categories of tasks.
The benefit of instructions to the models depends on the semantics of the task. For instance, for GPT3 (left) \emph{minimal text modification} category benefits a lot, while the benefits to \emph{verification} tasks are minimal.
}
\label{fig:gains:per:categories}
\end{figure*}
\end{comment}
\begin{comment}
\subsection{Generalization vs. number of positive examples}
\label{subsection:numberofpositiveexamples}
Fig.~\ref{fig:GPTexample} and \ref{fig:BARTexample} illustrates the performance variation of models with respect to the number of examples. Clearly, addition of examples is not helping GPT3 and BART. Note that model performance with just the prompt+definition encoding is 35 in case of GPT3 and 30 in case of BART. This may suggest that the effort in creating many examples can be utilized to improve other aspects of Instruction. Another demerit of larger number of examples is that they increases input token size which increases the API usage cost in case of GPT3 and training time and higher memory usage in case of BART.
\end{comment}
\clearpage
\begin{comment}
\onecolumn
\subsection{User Study to Find Important Task-Specific Instruction Fields}
\label{subsec:appendix:user:study}
We ask our quality assessment annotators to also specify which instruction fields help them understand the task and answer prompts. For each of the 12 tasks in our evaluation set, we ask: \textit{Which instruction field helps you the most to understand the task and answer questions and why? Remember, on removing this field significant major information should get lost.} We compile these results category-wise, and present them in Table \ref{Tab: User Study}. In particular, there are two tasks Classification (CF) and Minimal Text Modification (MM) for which humans find only a single instruction field to be important. We find that models also find the same fields to be most important, as evinced in Table \S\ref{tab:random:splitfull}), where the performance of models with these fields is higher than the rest. Interestingly, this is similar to the patterns observed in the model performance (Table \S\ref{tab:random:splitfull}).
\begin{figure*}[]
\centering
\includegraphics[scale=0.55,trim=0.7cm 1cm 0.1cm 0cm]{figures/example_variation_GPT.pdf}
\caption{GPT3 performance as a function of the number of examples in its encoding. The number of examples is limited by three upperbounds: 3, 10 and 70. This shows that addition of examples is not helping GPT3.
}
\label{fig:GPTexample}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.55,trim=0.7cm 1cm 0.1cm 0cm]{figures/example_variation_BART.pdf}
\caption{BART performance as a function of the number of examples in its encoding. The number of examples is limited by two upperbounds: 3 and 10. This shows that addition of examples is not helping BART. Since BART's maximum token size is 1024, it can not fit a lot examples unlike GPT3, so we did not experiment further with larger number of examples.
}
\label{fig:BARTexample}
\end{figure*}
\end{comment}
\begin{comment}
\begin{table*}
\centering
\includegraphics[scale=0.65,trim=1.8cm 6.5cm 0cm 3cm]{figures/ablation-polished.pdf}
\caption{Detailed results of the encoding ablation performed on three distinct subtasks.}
\label{tab:ablation:all}
\end{table*}
\end{comment}
\begin{comment}
\begin{table*}
\centering
\includegraphics[scale=0.65,trim=1.7cm 8.9cm 0cm 2cm]{figures/results_table_det.pdf}
\caption{
Empirical results \textsc{Natural Instructions}{}.
The best numbers among the four encodings are indicated with \textbf{bold}.
The first row is {\color{gray} grayed out} since it is our oracle upperbound.
}
\label{tab:results:detail}
\end{table*}
\end{comment}
\subsection{Evaluating Generalization Across Tasks}
\begin{comment}
\paragraph{\textsc{Natural Instructions}{} tasks splits.}
To evaluate generalization across subtasks, we divide \textsc{Natural Instructions}{} into two collections:
(i) \emph{evaluation} tasks \task{eval} (35 subtasks) for test and
(ii) \emph{non-evaluation} tasks \task{non-eval} (26 subtasks) for training.\footnote{The tasks are enumerated in the appendix.}
In making this collection, we ensure that the tasks included in \task{eval} accept a relatively reliable automatic evaluation. For example, those tasks that have restricted answer space (like classification tasks) or those that have several gold output references. The end tasks of the source datasets which are typically the answer generation tasks are often included in the evaluation set.
Additionally, we ensure to have at least one representative subtask from each of the semantic categories (\S\ref{sec:mapping}) in the \emph{non-evaluation} collection
However, tasks within categories are very different from each other. For instance, creation of DROP questions requires understanding of numerical reasoning and reading comprehension, whereas creation of Winogrande questions requires understanding of co-reference resolution and the requirement of the task to create twin question and answer pairs.
\daniel{
emphasize that not any two tasks are exactly the same. Every two QG tasks are different (e.g., MC-TACO vs CosmosQA).
}
\paragraph{Evaluation.}
We formulate three evaluation settings with different supervision types available to a model (Table~\ref{tab:supervision:types}).
In `task-specific' setting, a model is supervised with the training instances of the evaluation task -- similar to the conventional setup.
In `few-shot' setting, a model only observes a few examples of the evaluation task.\footnote{
We use ``few-shot'' to refer to \emph{any setup with a small number of labeled examples},
regardless of whether these examples are used for fine-tuning or inference-time conditioning (no gradient updates).
}
In `generalization' setting, a model does not observe any instances from the evaluation task.
\end{comment}
\begin{comment}
\paragraph {``no-instructions''} encoding.
This encoding is the conventional paradigm where no instructions exist, except the input instance (Eq.~\ref{}).
\paragraph{\emph{``prompt''} encoding.}
In this encoding, we append the prompt message before the input instance.
\paragraph{\emph{``prompt + definition''} encoding.}
In this encoding, the prompt message and the task \emph{definition} appear before the input instance.
Intuitively, this encoding is more informative and more complex than \emph{``prompt''} only encoding.
\paragraph{\emph{``all instructions''} encoding.}
This encoding contains all the instruction content.
We include as many examples as possible, before exceeding the token limit of LMs.
\paragraph{\emph{``positive examples''} encoding.}
This encoding contains only positive examples from the task instructions.
Such example-only encodings have been used in several recent studies in prompting LMs~\cite{zhao2021calibrate}.
\end{comment}
\begin{comment}
\begin{table}[h]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{L{1.9cm}cL{3.9cm}}
\toprule
Setup & Evaluation & Supervision \\
\midrule
task-specific & $T \in$ \task{eval} & all the instances of $T$ \\
\cmidrule(r){1-3}
few-shot & $T \in$ \task{eval} &
instructions of $T$ \\
\cmidrule(r){1-3}
generalization & $T \in$ \task{eval} & instructions+ instances of \task{non-eval} tasks + instructions of $T$ \\
\bottomrule
\end{tabular}
}
\caption{
Different modes of supervision considered in this work, when evaluating a model on the instances of a fixed task $T \in$ \task{eval}.
}
\label{tab:supervision:types}
\end{table}
\end{comment}
\begin{comment}
\section{Evaluating Language Models to Address \textsc{Natural Instructions}{}}
We use generative language models BART~\cite{lewis2019bart} and GPT-3~\cite{brown2020language} to address tasks in \textsc{Natural Instructions}{}. Here, we describe how we encode instructions and instances into plain text and feed them into generative language models (\S \ref{subsect:encoding}). We then describe the model details (\S \ref{subsec:models}).
We then explain how we use language models to encode instruction (\S \ref{subsect:encoding}). \daniel{to be updated}
\end{comment}
\begin{comment}
\paragraph{The benefit from instructions heavily depends on the task at hand.}
Figure~\ref{fig:gains:per:categories} shows the performance of our models on our task categories, broken down into several coarse input encodings.
Similar to our previous observations, \emph{all instructions} encoding \emph{typically} performs better than other encodings.
However, these gains are not uniform across task categories.
\end{comment}
\begin{comment}
\paragraph{Task-specific BART (oracle upper-bound estimate).}
We train BART on input/output instances of each task (no instructions) and evaluate on the same task.
This is the conventional setup where the model is fine-tuned to solve the task only, without any instructions involved.
Such a model, by design, won't generalize across different tasks since it is specialized to each subtask.
However, the numbers elicited from this can be viewed as the upper-bounds for each task (i.e., how well can BART perform, if it were to be trained on many instances of this particular task).
\end{comment}
\begin{comment}
\subsection{Task-specific Calibration of GPT-3}
\label{subsec:calibration}
We transform inputs (Instructions) to make it more understandable for GPT-3. We have a human-in-the loop setup to perform calibration. We do calibration in two steps (i) we develop various calibration procedure by experimenting with various type of prompts and identifying the types of prompt which help model follow instructions better. This is done on the non-eval split. (ii) we employ one of the calibration procedures by looking at few samples of the end-task in the eval split.
\end{comment}
\begin{comment}
\paragraph{A Data Creation Toolbox:}
\textsc{Natural Instructions}{} covers various skills beyond question generation and answer generation, such as sentence paraphrasing, verification of whether a question is in a specified category, etc that are frequently used during NLP dataset creation. A successful model trained on \textit{Natural Instructions} will be a toolbox for dataset creation. Our intuition behind the focus on dataset creation is that any NLP task can be expressed as a step in the data creation
\end{comment}
\section{Introduction}
We have witnessed great progress in solving many NLP datasets through fine-tuning pre-trained language models (LMs)~\cite{peters2018deep,brown2020language}.
More recent studies show tremendous promise in generalization \emph{within} the set of observed tasks through multi-task training and unified encoding~\cite{khashabi2020unifiedqa,aghajanyan2021muppet}.
However, cross-task generalization -- \emph{generalization} to \emph{unseen} tasks -- has generally remained under-explored.
For example, can we supervise a model with instances of grammar checking or question answering tasks, yet expect it to solve a different task like question typing (Fig.~\ref{fig:teaster})?
Evidently, humans are capable of such generalizations; an average human can follow natural language \emph{instructions} to solve a variety of problems, as evidenced by the success of crowdsourcing platforms (also argued in~\citet{efrat2020turking}). In this paper, we study whether models can generalize to \emph{unseen} tasks given their
crowdsourcing instructions (Fig.~\ref{fig:teaster}).
\begin{figure}[t]
\centering
\includegraphics[scale=0.9, trim=0.75cm 0.8cm 0cm 1.0cm,clip=false]{figures/teaser1-2.pdf}
\caption{
We construct the \textsc{Natural Instructions}{} dataset from crowdsourcing instructions and instances of different NLP datasets. We study if models can learn from {\emph{\color{blue} seen}} tasks and generalize to {\emph{\color{red} unseen}} tasks given their natural crowdsourcing instructions.
}
\label{fig:teaster}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\small
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc}
\toprule
Task & \makecell{Instance-Level\\Generalization} & \makecell{Task-Level\\Generalization} \\
\midrule
\makecell{Training\\data} & $X^{\color{darkgreen} \text{train}}, Y^{\color{darkgreen} \text{train}}$ & \makecell{$(I_t, X_t^{{\color{darkgreen} \text{train}}}, Y_t^{{\color{darkgreen} \text{train}}})$ \\ $t \in \text{\task{\color{blue} seen}} $ \\ } \\
\midrule
Evaluation & \makecell{ $x \rightarrow y$ \vspace{0.2cm} \\ where: \\ $(x, y) \in (X^{ \color{purple} \text{test}}, Y^{ \color{purple} \text{test}})$ \vspace{0.3cm} } & \makecell{$(x, I_t) \rightarrow y$ \vspace{0.2cm} \\ where: \\ $(x, y) \in (X_t^{ {\color{purple} \text{test}}}, Y_t^{{\color{purple} \text{test}}})$ \\ $t \in$ \task{\color{red} unseen} } \\
\bottomrule
\end{tabular}
}
\caption{
A comparison of \emph{task}- vs.\ \emph{instance}-level generalization.
$I_t$, $X_t$ and $Y_t$ indicate the natural language instructions, input set, and output set, respectively, for task $t$.
In the conventional setup, training and evaluation are done on the instances of the same task.
However, in task-level generalization, a model is expected to generalize to {\color{red} unseen} tasks, where \task{\color{red} unseen} $\cap$ \task{\color{blue} seen}$ = \emptyset $.
}
\label{tab:comparison}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[scale=0.64,trim=0.2cm 0cm 0cm 1cm,clip=false]{figures/fig5scaleup-4.pdf}
\caption{BART evaluation on {\emph{unseen}} tasks ($y$-axis is perf. on \task{unseen}) when supervised with {\emph{seen}} tasks ($x$-axis is $|$\task{seen}$|$).
\changed{
A model using {\color{purple}instructions} ($I_t$) consistently improves with more observed tasks. In contrast, models with {\color{orange} no access to the instructions} show no sign of improved generalization.
}
\changed{Details in \S\ref{subsec:supervision:size:experiment}.}
}
\label{fig:scaling:tasks}
\end{subfigure}
\caption{The formal definition of generalization to unseen tasks (a) and a summary of its empirical outcome (b). }
\end{figure*}
We build \textsc{Natural Instructions}, a dataset consisting of {\it natural} crowdsourcing instructions for various tasks and their instances.
Training on {\it seen} tasks $\text{\task{\color{blue} seen}}$ in our dataset, we build a model that learns to follow the natural instructions that define a task and to perform it (i.e., map input to output).
Testing on \emph{unseen} tasks \text{\task{\color{red} unseen}}, we evaluate if the model can perform {\it unseen} tasks solely from their instructions and without any task-specific labeled data (Table~\ref{tab:comparison}; right).
In contrast to the instance-level generalization (Table~\ref{tab:comparison}; left), our model uses instruction as additional input, and evaluations are done on tasks that were not observed in the training stage.
\changed{
We compile \textsc{Natural Instructions}{} from task instructions written by researchers for crowdsourcing existing NLP datasets.
Such crowdsourcing instructions often elaborate a variety of details about how a task should (and should not) be done.
To provide a systematic study of various elements of crowdsourcing instructions, we map them
}
to a unified {\it schema} to cover the most important elements of task descriptions --- such as definition, constraints, positive and negative examples.
We collect tasks in \textsc{Natural Instructions}{} as minimal stand-alone steps provided to crowdworkers to complete a downstream NLP task.
For example, tasks collected from
\changed{QASC~\cite{khot2020qasc} include sub-tasks about generating topic words or combining facts, as well as answering multi-hop questions.
Therefore, our dataset contains not only typical downstream NLP tasks, but also intermediate subtasks that are not well-represented in common benchmarks.
}
The unified schema and the collection of minimal subtasks enable training LMs that can generalize across different tasks by learning from instructions.
In total, our dataset consists of 61 distinct NLP tasks and $193k$ instances.
Our experimental results indicate that LMs learn to leverage natural language instructions as they show improved generalization to new
tasks.
For example, a BART model~\cite{lewis2019bart} achieves a 19\% gain in terms of cross-task generalization compared to a model not using instructions
(\S\ref{sec:experiments}).
Importantly, LMs can generalize better to unseen tasks if they observe more tasks in training (Fig.\ref{fig:scaling:tasks}).
This upward trajectory suggests the potential for stronger cross-task generalizable models upon scaling up the diversity of tasks represented in a meta-dataset of task instructions.
Despite the benefits of instructions, we observe a sizable gap between models' generalization and their estimated upperbounds (\S\ref{subsec:task-specific}), encouraging the community to work on this challenging problem.
\vspace{.1cm}
\noindent\textbf{Contributions:} In summary, the contributions of this work are as follows:
(a) we introduce \textsc{Natural Instructions}{}, a dataset of human-authored instructions curated from existing well-known datasets mapped to a unified schema, providing training and evaluation data for learning from instructions;
(b) we build models that can encode instructions and show:
(b.1) the benefit of cross-task generalization by leveraging instructions;
(b.2) the importance of different elements of instructions in the performance;
(b.3) noteworthy headroom for improvement on our benchmark, which hopefully will motivate further work in this direction.
\input{related}
\changed{
\section{Defining Cross-Task Generalization}
\label{subsec:input:output}
Here we formally define the problem setup for generalization across tasks.
Each task $t$ consists of input/output instances $(X_t, Y_t)$ and is described in terms of its natural language instructions $I_t$.
\vspace{-.2cm}
\paragraph{Task-specific models.}
Standard supervised learning algorithms use task-specific labeled instances to learn a mapping from input $x$ to output $y$: $M(x)=y$ for $(x,y)\in (X_t^{\text{train}}, Y_t^{\text{train}})$ and is evaluated on the test instances of the same (or similar) task $(X_t^{\text{test}}, Y_t^{\text{test}})$. We refer to this as the \emph{instance-level} generalization (Table~\ref{tab:comparison}; left).
\vspace{-.2cm}
\paragraph{Cross-task models.}
In this setup, the goal is to learn a model $M$ that at inference obtains the output $y$ given the input $x$ and the task instruction $I_t$: $M(I_t, x) = y, \; \mbox{for} \ (x,y)\in (X_t, Y_t)$.
In contrast to the task-specific models, no task-specific training data is used to learn the mapping $M$. We collect \textsc{Natural Instructions}\ (\S\ref{sec:construction:natural:instructions}) to study this question: can a model be trained to follow instructions via training tasks \task{seen} and be generalized to follow instructions for a task $t' \in$ \task{unseen}.
We refer to this as a \emph{task}-level generalization (Table~\ref{tab:comparison}; right).
}
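The task-level evaluation protocol above can be sketched as follows; the \texttt{Task} container and function names are illustrative (not from the released code), and any scoring function (e.g., ROUGE-L) can be plugged in:

```python
# A minimal sketch of task-level generalization: evaluate M(I_t, x) -> y
# on unseen tasks, with no task-specific training data for those tasks.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Task:
    name: str
    instruction: str                   # I_t, the natural language instructions
    instances: List[Tuple[str, str]]   # (x, y) input/output pairs

def evaluate_cross_task(model: Callable[[str, str], str],
                        unseen: List[Task],
                        score: Callable[[str, str], float]) -> float:
    """Average score of model(I_t, x) against gold y over all unseen tasks."""
    scores = []
    for t in unseen:
        for x, y in t.instances:
            scores.append(score(model(t.instruction, x), y))
    return sum(scores) / len(scores)
```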
\section{\textsc{Natural Instructions}{}}
\label{sec:construction:natural:instructions}
\textsc{Natural Instructions}{} consists of instructions that describe a task (e.g., question answering) and instances of that task (e.g., answers extracted for a given question).
Fig.\ref{fig:examples} shows an example instruction for the task of `generating questions that require an understanding of event duration' accompanied with positive and negative examples
that contextualize the task.
Here we introduce a schema for representing instructions (\S\ref{subsec:schema}) and then describe how existing datasets (their crowdsourcing templates) are mapped into our schema (\S\ref{sec:mapping}).
\begin{figure}[t]
\centering
\includegraphics[scale=0.70, trim=0.45cm 0.8cm 0cm 0.99cm]{figures/examples_detailed_two.pdf}
\caption{
An example from our dataset.
Note that it follows the schema provided in Fig.\ref{fig:schema_plate}. See Fig.\ref{fig:examplesfull} for more examples.
}
\label{fig:examples}
\end{figure}
\subsection{Instruction Schema}
\label{subsec:schema}
Instructions used in crowdsourcing various datasets are written by distinct authors for different purposes, and they differ in a variety of ways (see Appendix~\ref{appendix:analysis:templates} for their differences). We introduce a unified schema (Fig.\ref{fig:schema_plate}) to consistently represent these diverse forms of instructions.
Our instruction schema is the result of our pilot study conducted on a subset of datasets. Below we describe the ingredients of this schema:
\begin{figure}[t]
\centering
\includegraphics[width=0.97\columnwidth,trim=0.35cm 0.8cm 0.5cm 1cm]{figures/schema-2.pdf}
\caption{The schema used for representing instruction in \textsc{Natural Instructions}{} (\S\ref{subsec:schema}), shown in plate notation.
}
\label{fig:schema_plate}
\end{figure}
\begin{itemize}[noitemsep,topsep=0pt,parsep=3pt,leftmargin=0.3cm]
\item \underline{\textsc{Title}} provides a high-level description of a task and its associated skill (such as question generation, answer generation).
\item \underline{\textsc{Prompt}} is a single sentence command that often appears before the input instance and connects it to the instructions.
\item \underline{\textsc{Definition}} provides the core detailed instructions for a task.
\item \underline{\textsc{Things to Avoid}} contains instructions regarding undesirable annotations that must be avoided. These help define the scope of a task and the space of acceptable responses.
\item \underline{\textsc{Emphasis and Caution}} are short, but important statements highlighted in the crowdsourcing templates which were intended to be emphasized or warned against.
\item \underline{\textsc{Positive Examples}} contain inputs/outputs similar to the input given to a worker/system and its expected output, helping crowdworkers better understand a task~\cite{ali1981use}.
\item \underline{\textsc{Negative Examples}} contain inputs/outputs to emphasize \textsc{Things to Avoid} by providing examples that must not be produced.
\item \underline{\textsc{Reason}} provides explanations behind why an example is positive or negative.
\item \underline{\textsc{Suggestion}} contains suggestions on how a negative example could be modified to turn it into a positive example.
\end{itemize}
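The schema elements above can be summarized as a simple container; this Python sketch is illustrative (field and class names are our own, not the dataset's actual storage format):

```python
# A sketch of the instruction schema as dataclasses. Each Instruction
# holds the task-level fields plus positive/negative examples, which in
# turn carry a reason (and, for negative examples, a suggested fix).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Example:
    input: str
    output: str
    reason: str = ""
    suggestion: str = ""   # how a negative example could be turned positive

@dataclass
class Instruction:
    title: str
    prompt: str
    definition: str
    things_to_avoid: str
    emphasis_and_caution: str
    positive_examples: List[Example] = field(default_factory=list)
    negative_examples: List[Example] = field(default_factory=list)
```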
The next section describes the process of mapping the raw instructions (designed for crowdworkers) to our instruction schema.
\subsection{Constructing \textsc{Natural Instructions}}
\label{sec:mapping}
\subsubsection{Collecting Data}
\label{sec:datacollection}
\paragraph{Collecting raw instructions and instances.}
We use existing, widely adopted NLP benchmarks that are collected via crowdsourcing platforms and hence, come with crowdsourcing templates.
In the first step, we identified several datasets and engaged with their authors to get their crowdsourcing templates and raw data.
This yields the following datasets:
CosmosQA~\cite{huang2019cosmos},
DROP~\cite{dua2019drop},
Essential-Terms~\cite{khashabi2017learning},
MCTACO~\cite{zhou2019going},
MultiRC~\cite{khashabi2018looking},
QASC~\cite{khot2020qasc},
Quoref~\cite{dasigi2019quoref}, ROPES~\cite{lin2019reasoning} and
Winogrande~\cite{sakaguchi2020winogrande}.\footnote{
We only focus on textual instructions and avoid datasets that involve visual or auditory steps, mostly focusing on QA datasets that were available to the authors.
}
\vspace{-.2cm}
\paragraph{Splitting crowdsourcing instructions into minimal tasks.}
Almost all the crowdworking instructions include sequences of steps to guide crowdworkers in creating task instances.
For example, QASC and MCTACO include 7 and 19 steps in the data creation process, respectively.
We divide crowdsourcing instructions into their underlying steps and generate multiple subtasks that are minimal and standalone.\footnote{
We eliminate tasks that involve model-in-the-loop.
} Table~\ref{tab:sample:tasks} shows subtasks extracted for Quoref and QASC. For example, the main task in Quoref is to answer a question given a context paragraph, but the crowdsourcing template consists of two sub-tasks of {\it question generation} and {\it answer generation} with their separate instructions. This process results in a more consistent definition of tasks, enabling a successful mapping of instructions into our schema, in contrast to the work of \citet{efrat2020turking} that uses crowdsourcing instructions as-is.
\begin{table}
\centering
\footnotesize
\begin{tabular}{ll}
\toprule
source dataset & task \\
\midrule
\multirow{2}{*}{\makecell{Quoref\\ \cite{dasigi2019quoref} }} & question generation \\
& answer generation \\
\midrule
\multirow{6}{*}{\makecell{QASC\\ \cite{khot2020qasc}} } & topic word generation \\
& fact generation \\
& combining facts \\
& question generation \\
& answer generation \\
& incorrect answer generation \\
\bottomrule
\end{tabular}
\caption{
Examples of the datasets and the tasks formed from them.
The extracted tasks are independent annotation assignments in the crowdsourcing templates of the datasets.
The complete list is in Table~\ref{tab:structure} in Appendix.
}
\label{tab:sample:tasks}
\end{table}
\begin{table}
\footnotesize
\begin{tabular}{lcc}
\toprule
category & \# of tasks & \# of instances \\
\midrule
{question generation} & 13 & 38$k$ \\
{answer generation} & 16 & 53$k$ \\
{classification} & 12 & 36$k$ \\
{incorrect answer generation} & 8 & 18$k$ \\
{minimal modification} & 10 & 39$k$ \\
{verification} & 2 & 9$k$ \\
\midrule
Total & 61 & 193$k$ \\
\bottomrule
\end{tabular}
\caption{Task categories and their statistics.
}
\label{tab:taskcategories}
\end{table}
In total, there are 61 tasks, which are categorized into 6 semantic categories (Table~\ref{tab:taskcategories}).
We assigned these broad categories to the tasks to understand their collective behavior in the experiments.
It is noteworthy that, despite the apparent resemblance of tasks within the same category,
any two tasks are distinct.
For example, while \emph{question generation} is part of Quoref, CosmosQA, and QASC, each dataset has its own separate variant of the question generation task (see Fig.\ref{fig:task_specification} in Appendix).
\subsubsection{Mapping Raw Instructions to Schema }
\label{subsec:maptoschema}
We manually fill in the fields of our instruction schema with the content from the crowdsourcing instructions.
For instance, parts of the raw instructions that are highlighted for emphasis are incorporated as part of our \emph{emphasis/caution} field.
The modifications suggested in this step were applied by one author and were verified by another author.\footnote{On average, the process of data curation for each task takes around 5 hrs-34 hrs (details in Appendix; Table~\ref{tab:datacuration}).}
\vspace{-.2cm}
\paragraph{Improving description quality and consistency.}
We edit raw instructions to ensure their quality. Particularly, we fix writing issues (typos, ambiguities, etc.) and redact repetitions.
While repetition often helps in augmenting human understanding, short and concise instructions are often more effective for computers due to their limited attention span~\cite{beltagy2020longformer}.
\vspace{-.2cm}
\paragraph{Augmenting examples and reasons.}
There is a large variance in the number of examples provided in the raw instructions: instructions often contain more positive examples than negative ones, and some (e.g., QASC) include no negative examples at all.
Whenever possible, we add negative examples such that each task has at least two negative examples.
Furthermore, not all raw instructions contain \textsc{reasons} or \textsc{suggestions} for each of their examples. For example, positive examples are usually not accompanied by explanations, and most datasets do not include suggestions.
We add them, wherever such information is missing in the instructions.
\vspace{-.2cm}
\paragraph{Collecting input/output instances for subtasks.}
Most of our tasks are the intermediate steps in the crowdsourcing process.
Therefore, to extract input/output instances for each task, we need to parse the raw annotations of crowdworkers for every step. Since each dataset stores its annotations in a slightly different format, extracting and unifying such intermediate annotations can be non-trivial.
\vspace{-.2cm}
\paragraph{Verification.}
\changed{
An annotator verified the quality of the resulting data in consultation with dataset authors.
The annotator iterated on the authors' feedback (an average of 3 iterations) until the authors were
satisfied.
}
\vspace{-.2cm}
\paragraph{Quality assessment.}
We ask independent human annotators to answer 240 random instances (20 instances from 12 random tasks, used later for our evaluation~\S\ref{subsec:split}).
The subsequent evaluation of the human-generated responses results in more than 96\% accuracy, which indicates that humans can effortlessly understand and execute our instructions.
\subsubsection{\textsc{Natural Instructions}\ Statistics}
\label{subsec:dataset:statistics}
In summary, \textsc{Natural Instructions}\ consists of subtasks each with a set of instructions and input/output instances (Fig.\ref{fig:examples} and \ref{fig:schema_plate}). The complete list of instructions is included in the appendix. In total, the dataset includes 61 tasks and 193$k$ instances.
Table~\ref{tab:taskcategories} shows data statistics for each task category.\footnote{We limit the number of instances in each task to $6.5k$ to avoid massive instance imbalance.} On average, instructions contain 4.9 positive examples and 2.2 negative examples.
The longest element of instructions is usually \textsc{Definitions} with 65.5 tokens and the shortest is \textsc{title} with 8.3 tokens (more statistics in Table~\ref{tab:schemastat}).
\begin{table}[ht]
\centering
\small
\begin{tabular}{lc}
\toprule
statistic & value \\
\midrule
``title'' length & 8.3 tokens \\
``prompt'' length & 12.6 tokens \\
``definition'' length & 65.5 tokens \\
``things to avoid'' length & 24.1 tokens\\
``emphasis/caution'' length & 45.0 tokens\\
``reason'' length & 24.9 tokens\\
``suggestion'' length & 19.6 tokens\\
num of positive examples & 4.9 \\
num of negative examples & 2.2 \\
\bottomrule
\end{tabular}
\caption{
Statistics of \textsc{Natural Instructions}{}
}
\label{tab:schemastat}
\end{table}
\section{Problem Setup and Models }
\label{subsec:setup}
\changed{
Here we define different cross-task generalization settings (\S \ref{subsec:split}) and the models (\S\ref{subsec:models}).
}
\begin{comment}
\subsection{Learning Tasks From Instructions}
\label{subsec:input:output}
Every task $t$ in \textsc{Natural Instructions}{} consists of an instruction $I_t$ and a set of input and output instances $D_t=\{(x,y)|x \in X_t,y \in Y_t\}.$
\vspace{-.2cm}
\paragraph{Task-specific models.} Standard supervised learning uses task-specific training instances to train a model that learns a mapping between input and output: $M(x)=y$ for $(x,y)\in D_t$.
\vspace{-.2cm}
\paragraph{Learning from instructions.} In this setup, the goal is to learn a model $M$ that at inference obtains the output $y$ given the input $x$ and the task instruction $I_t$: $M(I_t, x) = y, \; \mbox{for} \ (x,y)\in D_t$.
In contrast to the task-specific models, no task-specific training data is used to learn the mapping $M$. Instead, we use \textsc{Natural Instructions}\ to study this question: can a model be trained to follow instructions via training tasks \task{seen} and be generalized to follow instructions for a task $t' \in$ \task{unseen}.
We study various generalization settings with different splits of the tasks.
\end{comment}
\subsection{Task Splits and Generalizations Types}
\label{subsec:split}
\paragraph{Random split.}
This setup follows the common practice in benchmarking NLP models with random data splits. Here, two tasks from each task category (Table~\ref{tab:taskcategories}) in \textsc{Natural Instructions}{} are randomly selected for evaluation, and the rest of the tasks are used for training. This leads to 12 tasks in \task{unseen} and 49 tasks in \task{seen}.\footnote{Those tasks that do not accept a relatively reliable automatic evaluation are excluded from \task{unseen}. }
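The random split described above (two held-out tasks per category) can be sketched as follows; the function is illustrative, and the paper's actual split additionally excludes tasks lacking reliable automatic evaluation:

```python
# A sketch of the random task split: per_category tasks from each category
# are held out as unseen; the rest form the seen (training) set. With 61
# tasks across 6 categories, this yields 12 unseen and 49 seen tasks.
import random
from typing import Dict, List, Tuple

def random_split(tasks_by_category: Dict[str, List[str]],
                 per_category: int = 2,
                 seed: int = 0) -> Tuple[List[str], List[str]]:
    rng = random.Random(seed)
    seen, unseen = [], []
    for category, tasks in tasks_by_category.items():
        held_out = set(rng.sample(tasks, per_category))
        unseen.extend(sorted(held_out))
        seen.extend(t for t in tasks if t not in held_out)
    return seen, unseen
```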
\paragraph{Leave-one-out generalization.}
To better understand the nature of cross-task generalization, we study more restrictive settings of dividing training and evaluation tasks.
\noindent \ul{leave-one-category}: evaluates how well a model generalizes to a task category if it is trained on others -- no task of that category is in \task{seen}.
\noindent \ul{leave-one-dataset}: evaluates how well a model can generalize to all tasks in a particular dataset if it is trained on all other tasks -- no task of that dataset is in \task{seen}.
This split prevents any leakage across tasks that belong to the same source datasets.
\noindent \underline{leave-one-task}: evaluates how well a model can learn a single task by training on all other tasks.
\subsection{Models}
\label{subsec:models}
We build models using pre-trained LMs with encoder-decoder architectures BART~\cite{lewis2019bart} for fine-tuning and GPT3~\cite{brown2020language} for few-shot experiments.
\paragraph{Encoding instructions and instances.}
For every problem setup, we map a given instruction $I_t$ and an input instance $x$ into a textual format, obtaining $enc(I_t, x)$.
This encoding is then fed to an encoder-decoder model to predict $y$: $M:enc(I_t, x) \rightarrow y$.
\begin{figure}
\centering
\begin{boxedminipage}{\columnwidth}
\begin{equation*}
\small
\begin{split}
& \mathtt{\small Prompt:} \; \I{t}{prompt} \\
& \mathtt{\small Definition:} \; \I{t}{Definition} \\
& \mathtt{\small Things \; to \; Avoid:} \; \I{t}{avoid.} \\
& \mathtt{\small Emphasis \& Caution:} \; \I{t}{emph.} \\
& \mathtt{\small Negative Example1-} \\
& \hspace{0.7cm} \mathtt{\small input:} \; \I{t}{neg. ex.}, \mathtt{\small output:} \; \I{t}{neg. ex.},
\mathtt{\small reason:} \; \I{t}{neg. ex.} \\
& \mathtt{\small Positive Example1-} \\
& \hspace{0.7cm} \mathtt{\small input:} \; \I{t}{pos. ex.},
\mathtt{\small output:} \; \I{t}{pos. ex.} \mathtt{\small reason:} \; \I{t}{pos. ex. } \\
& \mathtt{\small input:} \; x, \mathtt{\small output:} \textnormal{''}
\end{split}
\end{equation*}
\end{boxedminipage}
\caption{Encoding instruction $I_t$, where $I_t^c$ refers to the text of a component $c$ in the instruction schema.}
\label{fig:encoding}
\end{figure}
Encoding instances follows a standard NLP paradigm of mapping an input instance to text.
Each instruction $I_t$ consists of multiple elements as described in our instruction schema (\S\ref{subsec:schema}). Here, we map each element of the instruction to a textual format and append it before the input instance. Fig.\ref{fig:encoding} shows how we encode the full instruction.
To study the impact of each instruction element for cross-task generalization, we compare these encodings: (1) \textsc{prompt}, (2) \textsc{pos. examples}, (3) \textsc{prompt + definition}, (4) \textsc{prompt + things to avoid}, (5) \textsc{prompt + emphasis}, (6) \textsc{prompt + pos. examples}, (7) \textsc{prompt + definition + pos. examples}, and (8) \textsc{Full instruction}.
\changed{
Each of these encodings (e.g., \textsc{prompt} and \textsc{pos. examples}) corresponds to a prompting setup in the recent literature~\cite{scao2021many,lu2021fantastically}.
}
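The full-instruction encoding of Fig.\ref{fig:encoding} can be sketched as plain string templating; field names follow our schema, but the exact labels and delimiters here are illustrative rather than the released implementation:

```python
# A sketch of enc(I_t, x): each instruction element is rendered as a
# labeled text segment and prepended to the input instance, ending with
# "output:" so the model continues with the prediction y.
def encode_full_instruction(instr: dict, x: str) -> str:
    parts = [
        f"Prompt: {instr['prompt']}",
        f"Definition: {instr['definition']}",
        f"Things to Avoid: {instr['things_to_avoid']}",
        f"Emphasis & Caution: {instr['emphasis_and_caution']}",
    ]
    for i, ex in enumerate(instr.get("negative_examples", []), 1):
        parts.append(f"Negative Example{i}- input: {ex['input']}, "
                     f"output: {ex['output']}, reason: {ex['reason']}")
    for i, ex in enumerate(instr.get("positive_examples", []), 1):
        parts.append(f"Positive Example{i}- input: {ex['input']}, "
                     f"output: {ex['output']}, reason: {ex['reason']}")
    parts.append(f"input: {x}, output:")
    return "\n".join(parts)
```

Ablated encodings (e.g., \textsc{prompt + definition}) would simply drop the corresponding segments.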
\begin{table*}[t]
\small
\centering
\begin{tabular}{clcccc}
\toprule
\makecell{model ↓} & \makecell{evaluation set \task{unseen} →} & \makecell{random split\\of tasks} &\makecell{leave-one-\\category (QG)} & \makecell{leave-one-\\dataset (QASC)} & \makecell{leave-one-\\task (QASC QG)} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6}
\multirow{2}{*}{\makecell{BART (fine-Tuned)} } & \textsc{No instructions} & 13 & 6 & 37 & 20 \\
& \textsc{Full instructions} & \textbf{32} & \textbf{17} & \textbf{51} & \textbf{56} \\
\midrule
GPT3 (not fine-tuned)& \textsc{Full instructions} & 24 & 33 & 22 & 33 \\
\bottomrule
\end{tabular}
\caption{Cross-task generalization of BART under various splits (\S\ref{subsec:split}).
Fine-tuned BART shows improved performance when provided with instructions.
It also achieves better performance than GPT3, despite being over $1k$ times smaller.
\changed{All numbers are ROUGE-L. }
}
\label{tab:bart:generalization:all:splits}
\end{table*}
\vspace{-.2cm}
\paragraph{BART.}
\label{sec:bart}
We use BART (base)~\cite{lewis2019bart} which allows us to fine-tune its model parameters.
This is an encoder-decoder architecture with $140m$ parameters.
For each setup, the input is encoded using different instruction elements, trained on all \task{seen} tasks, and evaluated on \task{unseen} (\S\ref{subsec:split}).
\vspace{-.2cm}
\paragraph{GPT3.}
As a comparison, we evaluate
GPT3~\cite{brown2020language} which is a $175B$ parameter autoregressive LM ($\times1.2k$ larger than BART) and has shown promising results in mimicking demonstrations provided in its prompt.
We cannot fine-tune the parameters of this massive model; instead, we use it as-is
under its default setting on the evaluation tasks in \task{unseen} (\S\ref{subsec:split}) using the encoding introduced earlier.
\begin{comment}
\begin{table*}[t]
\small
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{cl|c|cc|cc|cc}
\toprule
model ↓ & & random split & \multicolumn{6}{c}{leave-one-$x$ split} \\
\cmidrule(lr){3-3} \cmidrule(lr){4-9}
& & &\multicolumn{2}{c}{$x =$ category} & \multicolumn{2}{c}{$x =$ dataset} & \multicolumn{2}{c}{$x =$ task} \\
\cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
& evaluation set \task{unseen} → & \makecell{ALL} &\makecell{AG} & \makecell{QG} & \makecell{QASC} & \makecell{Quoref} & \makecell{Winogrande AG } & \makecell{QASC QG } \\
\midrule
\multirow{4}{*}{\makecell{\cha{BART-Fine-Tuned}} } & \textsc{No instructions} & 13 & 11 & 6 & 37 & 10 & 11 & 20 \\
\cmidrule(lr){2-9}
& \textsc{prompt+definition} & 30 &18 & 10 & 43 & \textbf{39} & 11 & 22 \\
& \textsc{prompt+pos. examp.} & 23 &18 & \textbf{20} & 47 & 33 & 16 & 55 \\
& \textsc{Full instructions} & \textbf{32} &\textbf{19} & 17 & \textbf{51} & 37 & \textbf{19} & \textbf{56} \\
\midrule
\cha{GPT3-Few-Shot}& \textsc{Full instructions} & 24 & 18 & 33 & 22 & 24 & 10 & 33 \\
\bottomrule
\end{tabular}
}
\caption{BART generalization under various leave-one-out splits (\S\ref{subsec:split}). Encoding instructions improve cross-task generalization across all settings.
\changed{All numbers are ROUGE-L. }
}
\label{tab:bart:generalization}
\end{table*}
\end{comment}
\input{maintable}
\section{Experiments}
\label{sec:experiments}
\vspace{-.1cm}
\paragraph{Evaluation metrics.}
We treat all of our tasks as text generation problems and evaluate them with
automated evaluation metrics for text generation.
In particular, we use
ROUGE-L~\cite{lin2004rouge} to automatically evaluate the generated outputs.\footnote{
Our experiments show that other metrics, e.g. BLEURT~\cite{sellam2020bleurt} are also correlated with ROUGE-L, which has also been used in generative QA tasks.
}
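For clarity, ROUGE-L can be computed as an LCS-based F-measure over tokens; the sketch below is a minimal illustration (standard ROUGE packages additionally apply stemming and other normalization, so scores may differ slightly):

```python
# A minimal ROUGE-L F1 sketch: longest common subsequence (LCS) over
# whitespace tokens, combined into an F-measure (beta=1.2 is the
# conventional default weighting recall over precision).
def rouge_l(prediction: str, reference: str, beta: float = 1.2) -> float:
    p, r = prediction.split(), reference.split()
    if not p or not r:
        return 0.0
    # dynamic-programming LCS table
    dp = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i, tp in enumerate(p):
        for j, tr in enumerate(r):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if tp == tr
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[len(p)][len(r)]
    prec, rec = lcs / len(p), lcs / len(r)
    if prec == 0 or rec == 0:
        return 0.0
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)
```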
\vspace{-.2cm}
\paragraph{Implementation details.}
For BART, our models are trained for 3 epochs with a learning rate of 5e-5 for a given training split and input encoding. For GPT3, we use the {\texttt{davinci-instruct}} engine and produce outputs with greedy decoding,
generating at most 16 tokens (the default value). We use the default stop condition of two newline tokens.\footnote{The relevant code is available at:
\url{https://github.com/allenai/natural-instructions-v1}
}
\subsection{Generalization Under Various Task Splits}
\label{sec:gen:various:splits}
\changed{
Table~\ref{tab:bart:generalization:all:splits} reports the results of the BART model trained and evaluated with various task splits (\S\ref{subsec:split})}.
For comparison, we evaluate GPT3 which uses no fine-tuning, unlike BART that is fine-tuned with the \task{seen} tasks.
The first column corresponds to random split of tasks, while the remaining columns report cross-task generalization results of the BART model under
\changed{
leave-one-$x$
}
splits (\S\ref{subsec:split}).
For
\changed{
$x =$ \ul{category},}
the tasks in \emph{question-generation} category are held out during training.
For
\changed{
$x =$ \ul{dataset},}
the tasks that were extracted from the \emph{QASC} dataset were excluded from training.
For
\changed{
$x =$ \ul{task},}
we train a model on all tasks, except \emph{QASC question generation} task which is used for evaluation.
\vspace{-.2cm}
\paragraph{Instructions benefit cross-task generalization.}
The results indicate that BART benefits from instructions in generalizing to new tasks, regardless of task splits.
For example, under random split, the model using \textsc{Full Instructions} results in +19\% gains over a model that is not using instructions.
This is particularly interesting for
leave-one-\ul{category}-out split
since the trained model can generalize to the tasks of a particular semantic category, without being exposed to it.
In comparison to GPT3, the fine-tuned BART model that utilizes instructions achieves a stronger performance despite being $\times 1k$ smaller than GPT3.
For example, a BART model using \textsc{Full Instructions} achieves 8\% higher performance than GPT3 under the random split of tasks.
Note that the absolute values in leave-one-category are lower due to the difficulty of this setup compared to, for example, the random split setup.
While all settings involve evaluating on tasks not seen during training, the leave-one-category setting enforces more dissimilarity among training and evaluation tasks.
\subsection{Generalization Under Instruction Encoding and Task Categories}
Table~\ref{tab:random:splitfull2} reports the results of the BART model
per encodings of different instruction elements (\S\ref{subsec:models}) and for different task categories.
The table shows that encoding more elements of the instructions generally achieves better results than just using \textsc{prompt} or \textsc{positive examples}.
It additionally shows that the benefit of the instruction elements seems to depend on the target task category.
We observe that the \emph{question-generation} (QG) tasks benefit the most from \textsc{positive examples}, whereas in \emph{classification} (CF),
\textsc{positive examples} are of little help. We hypothesize that this is because it is easier to mimic question generation from a few examples, whereas classes are difficult to define via a few examples, making \textsc{definition} more helpful there.
The models show little improvement in \emph{verification} (VF).
We hypothesize these tasks are inherently more difficult, partially because of their distinctness from the rest of the tasks in the dataset.
We hope future work on this line will study a wider variety of tasks and will improve our understanding of such failure cases.
\subsection{Generalization vs. Number of Seen Tasks}
\label{subsec:supervision:size:experiment}
Fig.\ref{fig:scaling:tasks} compares the impact of the number of seen tasks for cross-task generalization.
For supervision, we randomly sample a few tasks as \task{seen} and evaluate on 6 tasks (one from each category).
(Each point in the figure is an average over 5 random subsamples.)
The results
show that with \textsc{no-instruction} encoding there is no tangible value in observing more tasks.
In contrast, the generalization of the models that encode instructions improves with observing more tasks.
This is an exciting observation since it suggests that scaling up our dataset to more tasks may lead to stronger instruction-following systems.
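The subsampling procedure behind Fig.\ref{fig:scaling:tasks} can be sketched as follows; \texttt{eval\_fn} stands in for the full train-then-evaluate pipeline, and all names here are illustrative:

```python
# A sketch of the generalization-vs-|seen| curve: for each supervision
# size k, sample k seen tasks at random, run the pipeline, and average
# the evaluation score over n_repeats random subsamples.
import random
from typing import Callable, List, Sequence

def generalization_curve(all_seen: List[str],
                         eval_fn: Callable[[List[str]], float],
                         sizes: Sequence[int],
                         n_repeats: int = 5,
                         seed: int = 0) -> List[float]:
    rng = random.Random(seed)
    curve = []
    for k in sizes:
        scores = [eval_fn(rng.sample(all_seen, k)) for _ in range(n_repeats)]
        curve.append(sum(scores) / n_repeats)
    return curve
```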
\subsection{Analyses} \label{subsec:task-specific}
\paragraph{Upperbound: Task-specific Models.}
For each task, we obtain a task-specific model (\S~\ref{subsec:input:output}) by training BART separately on each task's annotated training data. We evaluate these task-specific models to obtain a loose estimate of \emph{upperbounds} for each task.
On average, task-specific models score 66\% which is considerably higher than our models' best generalization (32\%; Table~\ref{tab:bart:generalization:all:splits}).
This indicates that { there is considerable room for improving generalization-based models} that use instructions.
\begin{comment}
\begin{table}
\centering
\footnotesize
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcc}
\toprule
error type & GPT3 & BART \\
\midrule
generates a nonsensical/vague question & 4 & 47\\
explains the question after generating it & 6 & 0\\
generates a yes/no question & 12 & 4\\
\makecell[l]{generates generic questions independent\\ of the given context} &6 &0\\
\bottomrule
\end{tabular}
}
\caption{
Percentage of errors on QASC QG task.
The numbers do not sum to 100 since the error types are not mutually exclusive.
}
\label{tab_error_analysis_main_text}
\end{table}
\subsection{Error Analysis}
Table~\ref{tab_error_analysis_main_text} shows the breakdown of most common error types for the QASC question generation task by analyzing 30 errors (more error analyses can be found in Appendix~\ref{sec:error:analysis}; Table~\ref{Tab: Error Analysis}).
\end{comment}
\begin{table}
\centering
\small
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{llcc}
\toprule
Model ↓ & Split ↓ & \makecell{w/ neg.\\examples} & \makecell{w/o neg.\\examples} \\
\midrule
\multirow{5}{*}{BART} & random & 32 & {\bf 35} \\
& leave-one-$x$ \\
& \ $\drsh x=$ category (AG) & 19 & {\bf 21} \\
& \ $\drsh x=$ dataset (Quoref) & 37 & 37 \\
& \ $\drsh x=$ task (QASC QG) & 56 & {\bf 57} \\
\midrule
GPT3 & - & 24 & {\bf 44} \\
\bottomrule
\end{tabular}
}
\caption{
Effect of excluding negative examples from \textsc{Full Instruction} encoding. Negative instructions are surprisingly difficult for the models to learn from.
}
\label{tab:negative:examples}
\end{table}
\paragraph{Impact of Negative Examples.}
Crowdsourcing instructions often include negative examples to exemplify undesirable responses.
We study how negative examples in instructions affect cross-task generalization.
Our case study (Table~\ref{tab:negative:examples}) indicates that the models work better \emph{without} (w/o) negative examples,
contrary to the previously-observed benefits of other instructional elements (e.g., definition, positive examples).
This is aligned with previous studies~\cite{xuan2020hard,lin2003bootstrapped} that discuss the challenges of learning from negative examples.
Interestingly, GPT3's drop (44 vs.\ 24) is larger than BART's (35 vs.\ 32), suggesting that BART can partly recover through its training step.
\begin{table*}[ht]
\centering
\resizebox{0.78\textwidth}{!}{
\footnotesize
\begin{tabular}{p{4.5cm}p{3.5cm}p{6.5cm}}
\toprule
Category & Helpful Fields & Explanation \\
\midrule
Question Generation (QG) & 1. \textsc{Definition} & - Provides a holistic picture of the task.\\
& 2. \textsc{Emphasis \& Caution} & - Provides key information for solving the task.\\
& 3. \textsc{Positive Examples} & - This gives an idea of what is expected in the output.\\
& 4. \textsc{Negative Examples} & - Good to know the common mistakes people make.\\
\midrule
Answer Generation (AG) & \textsc{1. Prompt} & - It limits the exploration space to question spans.\\
& \textsc{2. Definition} & - Provides a general understanding of the task. \\
& \textsc{3. Positive Examples} & - Reason field is very helpful.\\
\midrule
Classification (CF) & \textsc{1. Definition} & - The task is unclear without this field.\\
\midrule
Incorrect Answer Generation (IAG) & \textsc{1. Definition} & - Helps understand the utility of such a task.\\
& \textsc{2. Emphasis \& Caution} & - Source of some useful shortcuts.\\
& \textsc{3. Positive Examples} & - Helps in understanding the type of questions asked.\\
\midrule
Minimal Text Modification (MM) & \textsc{1. Things to Avoid} & - Provides critical information.\\
\midrule
Verification (VF) & \textsc{1. Definition} & - Makes the task easy to understand.\\
& \textsc{2. Things to avoid} & - Contains useful tips required for this task.\\
& \textsc{3. Positive Examples} & - Exemplifies task understanding.\\
& \textsc{4. Negative examples} & - Helps avoid potential mistakes.\\
\bottomrule
\end{tabular}
}
\caption{Results of humans' perceived importance of instruction elements. Our annotators, for example, find \textsc{Definition} and \textsc{Thing to Avoid} to be helpful for \textit{Classification} and \textit{Minimal Text Modification} tasks, respectively.}
\label{Tab:User:Study}
\end{table*}
\paragraph{Error Analysis.}
We randomly sample 30 erroneous predictions of our fine-tuned BART on 3 distinct tasks (Winogrande answer generation; QASC question generation; MC-TACO incorrect answer generation). We categorize the errors into common patterns (Table~\ref{Tab: Error Analysis}).
\begin{table}[ht]
\centering
\small
\begin{tabular}{lc}
\toprule
error type & BART \\
\midrule
{\color{brown} \textit{Generates a nonsensical/vague question}} & 47\\
{\color{brown}\textit{Generate an invalid question}} & 8\\
{\color{brown}\textit{Generates a yes/no question}} & 4\\
{\color{brown}\textit{Copies the given fact or a subset of it}} & 3 \\
{\color{brown}\textit{Generates unanswerable questions}} &3\\
\bottomrule
\end{tabular}
\caption{
Percentage of errors on QASC QG task.
The numbers do not sum to 100 since the error types are not mutually exclusive.
}
\label{Tab: Error Analysis}
\end{table}
Unlike GPT3, which generally suffers from generating redundant content irrelevant to the instructions, our BART model provides more control; however, it can still fail to generate proper output.
Here are several erroneous predictions from our model:
\egbox{
\fontsize{8pt}{10pt}\selectfont
{\color{brown}\textit{Generates a nonsensical/vague question}}\\
\textbf{BART input:} [...] input: Fact: graduations are used for measuring the volume of substances. output: ?\\
\textbf{BART output:} How many substances are there in volume?
}
\egbox{
\fontsize{8pt}{10pt}\selectfont
{\color{brown}\textit{Copies the given fact or a subset of it}}\\
\textbf{BART input:} [...] input: Fact: If someone is not breathing, begin with ventilation. output: ?\\
\textbf{BART output:} If someone is breathing, begin with ventilation.
}
\egbox{
\fontsize{8pt}{10pt}\selectfont
{\color{brown}\textit{Generates a yes/no question }}\\
\textbf{BART input:} [...] input: Fact: Lack of saliva hinders the break down of food into components the body can absorb. output: ?\\
\textbf{BART output:} Does saliva hinders the break down of food into components the body can absorb?
}
\paragraph{Perceived Impact of Instruction Elements.}
We survey human annotators to assess the value of instruction elements to humans.
Except for negative examples, which were shown to be difficult for models, we observe similar trends between humans' perceived value of instruction elements (Table~\ref{Tab:User:Study}) and their contributions to model performance (Table~\ref{tab:random:splitfull2}).
For example, humans viewed \textsc{Definition} and \textsc{Things to Avoid} as necessary fields for the \emph{classification} and \emph{minimal text modification} categories, respectively, which is compatible with our empirical observations (e.g., \textsc{prompt + definition} has the highest score on the CF category in Table~\ref{tab:random:splitfull2}).
\section{Conclusion}
\label{sec:discussion}
In this paper, we studied the goal of building models that generalize to new tasks by encoding and understanding crowdsourcing instructions.
We introduced \textsc{Natural Instructions}{}, a dataset built from existing crowdsourced datasets that enables building such models and evaluating them systematically.
To the best of our knowledge, this is the first work to show the benefit of instructions towards improved cross-task generalization.
Additionally, we observe that our proposed task leaves large room for improvement, which we believe
will bring more attention to building stronger models that can generalize to a wider range of tasks.
\begin{comment}
\vspace{-.2cm}
\paragraph{Future extensions.}
The observations made in \S\ref{subsec:supervision:size:experiment} indicate that there are likely benefits to repeating our study with a larger set of datasets.
We hope the future work expands our work with a larger and broader range of tasks.
We use automatic evaluation, in order to facilitate the replicability of the follow-up work on \textsc{Natural Instructions}{}. Admitting limitations of automatic evaluations, we hope future work will provide an easy-to-reproduce human evaluation for the tasks studied here, based on the recent proposals for streamlining human evaluation of text generation models~\cite{khashabi2021genie}.
\end{comment}
\section*{Acknowledgements}
We thank OpenAI for providing access to the GPT3 API,
authors who generously shared their dataset templates with us, Matt Peters and Nicholas Lourie for helpful input, the Beaker team for their support with experiments, and the anonymous reviewers for their helpful feedback.
The support of DARPA SAIL-ON, DARPA CHESS program, NSF IIS-2044660, ONR N00014-18-1-2826,
and Paul G. Allen Foundation is gratefully acknowledged.
\section{Related Works}
\label{sec:related:work}
\begin{comment}
\paragraph{Instructions in NLP applications.} \hanna{if no space, you can cut this paragraph}
Prior work has studied ``instructions'' in various niches, such as
robotic instructions~\cite{shridhar2020alfred, stepputtis2020language},
databases~\cite{kim2020natural},
programming~\cite{lin2018nl2bash,shao2020chartdialogs}, \emph{inter alia}.
Such {instructions} are inherently different from ours, as they are intended to be mapped to pre-defined symbolic forms (e.g., SQL commands).
Conversely, our instructions describe general NLP tasks (no underlying grammar) for measuring task-level generalization.
\end{comment}
\changed{
\vspace{-.2cm} \paragraph{Learning from instructions.}
There is recent literature on the extent to which models follow language instructions~\cite{hase2021can,ye2021zero,Gupta2021TowardsGP,Zhong2021AdaptingLM}.
For example, \citet{efrat2020turking} examine if language models can follow crowdsourcing instructions with no further training. On the contrary, our work is pursuing a fundamentally different goal: creating a dataset of crowdsourcing instructions and task instances and formulating cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones.
\citet{weller-etal-2020-learning} construct a crowdsourced dataset with short question-like task descriptions.
Compared to this work, our instructions are longer, more complex, and natural, since they were used to collect datasets through crowdsourcing.
PromptSource and FLAN~\cite{wei2021finetuned,sanh2021multitask} are two concurrent works that pursue a similar goal as ours.
A key difference between our work and these works is the data collection strategy.
Our work uses natural instructions created by NLP researchers before the dataset instances were created by crowd workers, and hence it contains the complete definition of each task (definition, things to avoid, negative examples, etc.).
On the other hand, instructions in the concurrent work are collected retroactively based on the already-available task instances.
Our {\it natural} instructions enable evaluating models on how they learn tasks given different elements of task descriptions. (See \S\ref{subsec:promptsource} for further comparisons.)
Nevertheless, we believe that all these approaches to constructing instructions and task categories are complementary and the community will benefit from considering both towards solving the challenging problem of cross-task generalization.
\vspace{-.2cm}\paragraph{Prompt engineering.}
Constructing effective discrete prompts for language models to perform NLP tasks is an active area of research~\cite{schick2020few,reynolds2021prompt,liu2021pre}.
Such prompts are often extremely short and may not include a complete definition of complex tasks.
In contrast, our instructions encode detailed instructions as they were used to collect the datasets.
Moreover, the goals are different:
Most prompt-engineering approaches seek prompts with higher performance on a particular task,
typically through assumptions about the target task that make them non-trivial to generalize to other tasks.
In contrast, our introduced meta-dataset enables the measurement of generalization to unseen tasks.
\vspace{-.2cm}\paragraph{Beyond standard multi-task learning.}
Multi-task learning is a long-standing goal for AI~\cite{caruana1997multitask} and has led to successful models that can support a wider range of tasks
~\cite{mccann2018natural,raffel2020exploring,khashabi2020unifiedqa,mishra2020towards,aghajanyan2021muppet,ye2021crossfit}.
Most of the conventional setups in the multi-tasking literature evaluate on instances that belong to the tasks that are seen, i.e., their labeled instances were observed during training (1st column of Table~\ref{tab:comparison}).
We augment this setup by
introducing natural language instructions which enable our models to bridge to tasks that were not seen during training.
}
\section{Lambda Function and Motivations}
The existence of the core decomposition with Peano quotient for planar compacta \cite{LLY-2019} enables us to associate to each compact set $K\subset\hat{\bbC}$ a map $\lambda_K:\hat{\bbC}\rightarrow\bbN\cup\{\infty\}$, called the {\em lambda function of $K$}. This function sends all points $x\notin K$ to zero and may take a positive value at some $x\in K$. It ``quantifies'' certain aspects of the topological structure of $K$ that are more or less related to the property of being locally connected. In particular, a continuum $K\subset\hat{\bbC}$ is locally connected if and only if $\lambda_K(x)=0$ for all $x\in\hat{\mathbb{C}}$. On the other hand, if a continuum $K\subset\hat{\bbC}$ is not locally connected at $x\in K$ then $\lambda_K(x)\ge1$; but the converse is not necessarily true.
The quantification in terms of the lambda function allows us to carry out a new analysis of the topology of $K$, by computing or estimating $\lambda_K(x)$ for specific choices of $x\in K$. In the current paper, we will investigate an interesting phenomenon that was first revealed in a fundamental result by Marie Torhorst, as one of the three highlights of \cite{Torhorst}. This result is often referred to as the Torhorst Theorem \cite[p.106, (2.2)]{Whyburn42} and reads as follows.
\begin{theorem*}[{\bf Torhorst Theorem}]
The boundary $F$ of every complementary domain $R$ of a locally connected continuum $M\subset\hat{\mathbb{C}}$ is itself a locally connected continuum.
\end{theorem*}
We will obtain an inequality that includes the Torhorst Theorem as a simple case.
The inequality is about the lambda function $\lambda_K$. The function $\lambda_K$ is based on the core decomposition of $K$ with Peano quotient \cite{LLY-2019}, which is motivated by some open questions in \cite{Curry10} and extends two earlier models of polynomial Julia sets developed in \cite{BCO11,BCO13}. Those models, briefly called BCO models, provide efficient ways (1) to describe the topology of unshielded compacta, like polynomial Julia sets, and (2) to obtain specific factor systems for polynomials restricted to the Julia set. The BCO models are special cases of a more general model, working well for all planar compacta, that associates natural factor systems to the dynamics of rational functions \cite{LLY-2019,LYY-2020}.
Recall that a {\bf Peano continuum} means the image of $[0,1]$ under a continuous map. By Hahn-Mazurkiewicz-Sierpi\'nski Theorem
\cite[p.256, \S 50, II, Theorem 2]{Kuratowski68}, a continuum is locally connected if and only if it is a Peano continuum. On the other hand,
a {\bf Peano compactum} is defined to be a compactum having locally connected components such that for any constant $C>0$ at most finitely many of its components are of diameter greater than $C$. Therefore, the Cantor ternary set is a Peano compactum and a Peano continuum is just a Peano compactum that is connected. Concerning how such a definition arises from the discussions of BCO models, we refer to \cite[Theorems 1-3]{LLY-2019}.
Given a compactum $K\subset\hat{\mathbb{C}}$, there exists an upper semi-continuous decomposition of $K$ into sub-continua, denoted as $\Dc_K^{PC}$, such that (1) the quotient space is a Peano compactum and (2) $\Dc_K^{PC}$ refines every other such decomposition of $K$ \cite[Theorem 7]{LLY-2019}.
We call $\Dc_K^{PC}$ the core decomposition of $K$ with Peano quotient. The hyperspace $\Dc_K^{PC}$ under quotient topology is called the Peano model of $K$. Every $d\in\Dc_K^{PC}$ is called an {\bf atom} of $K$, or an {\bf order-one atom}, or an atom of order $1$. Every atom of an order-one atom is called an {\bf order-two atom}, and so on. Note that a compactum such as the pseudo-arc or Cantor's Teepee may have a non-degenerate atom of order $\infty$.
Considering the atoms of a compactum $K\subset\hat{\mathbb{C}}$ as its structural units, we summarize the results obtained in \cite[Theorem 7]{LLY-2019} and \cite[Theorem 1.1]{LYY-2020} in the following way.
\begin{theorem*}[{\bf Theory of Atoms}]
Every compactum $K\subset\hat{\mathbb{C}}$ is made up of atoms; all its atoms are sub-continua of $K$ and they form an upper semi-continuous decomposition, with its quotient space being a Peano compactum, that refines every other such decomposition; moreover, for any finite-to-one open map $f:\hat{\bbC}\rightarrow\hat{\bbC}$ and any atom $d$ of $K$, each component of $f^{-1}(d)$ is an atom of $f^{-1}(K)$.
\end{theorem*}
Using the hierarchy formed by {\bf atoms of atoms}, we introduce the lambda function.
\begin{definition*}[{\bf Lambda Function}]
Given a compactum $K\subset\hat{\bbC}$. Let $\lambda_K(x)=0$ for $x\notin K$. Let $\lambda_K(x)=m-1$ for any $x\in K$, if there is a smallest integer $m\ge1$ such that $\{x\}$ is an order-$m$ atom of $K$. If such an integer $m$ does not exist, we put $\lambda_K(x)=\infty$.
\end{definition*}
When little is known about the topology of $K$, it is difficult to completely determine the values of $\lambda_K$. On the other hand, the level sets $\lambda_K^{-1}(n)(n\ge0)$ are ``computable'' for typical choices of $K$. In such circumstances, the lambda function $\lambda_K$ is useful in describing certain aspects of the topology of $K$. For instance, one may check the following observations: (1) a compact set $K\subset\hat{\bbC}$ is a Peano compactum if and only if $\lambda_K(x)=0$ everywhere; (2) if $K=\left\{t+\left(\sin\frac1t\right){\bf i}: 0<t\le1\right\}\cup\{s{\bf i}: |s|\le 1\}$ then $\lambda_K^{-1}(1)=\{s{\bf i}: |s|\le 1\}$ and $\lambda_K^{-1}(0)=\hat{\bbC}\setminus \lambda_K^{-1}(1)$; and (3) if $K=\left\{t+s{\bf i}: t\in[0,1], s\in\mathcal{C}\right\}$, where $\mathcal{C}$ denotes Cantor's ternary set, then $\lambda_K^{-1}(1)=K$ and $\lambda_K^{-1}(0)=\hat{\bbC}\setminus K$.
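Observation (2) can be made concrete. The following sketch (in the paper's notation, with details as in \cite{LLY-2019}) records the core decomposition underlying it; it is an illustrative outline rather than a full proof.

```latex
% Sketch for observation (2): K is the topologist's sine curve
% together with its limit segment d_0. One can check that
\begin{align*}
  d_0 &= \{s{\bf i}: |s|\le 1\}, &
  \Dc_K^{PC} &= \{d_0\}\cup\bigl\{\{x\}: x\in K\setminus d_0\bigr\}.
\end{align*}
% Collapsing d_0 to a point yields an arc, hence a Peano continuum,
% while no upper semi-continuous decomposition with Peano quotient
% may split d_0. Thus every x outside d_0 is an order-one atom,
% giving \lambda_K(x)=0 there; d_0 is an arc, so its atoms (the
% order-two atoms of K) are singletons, giving \lambda_K(x)=1 on d_0.
```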
The lambda function $\lambda_K$ helps us to {\bf measure}, locally and globally, how far a compact set $K\subset\hat{\mathbb{C}}$ is from being a Peano compactum. When $K=\partial\Omega$ for a planar domain $\Omega\subset\hat{\bbC}$, which is the image of a conformal map $\varphi:\mathbb{D}\rightarrow\Omega$ of the unit disk $\mathbb{D}=\{z\in\mathbb{C}: |z|<1\}$, the study of $\lambda_K$ is particularly interesting.
In order to illustrate how $\lambda_K$ is related to the boundary behavior of $\varphi$, we recall a theorem by Fatou \cite[p.17, Theorem 2.1]{CL66}, which says that the radial limits $\lim\limits_{r\rightarrow1}\varphi\left(re^{{\bf i}\theta}\right)$ exist for all $\theta\in[0,2\pi)$, except possibly for a set of linear measure zero.
One may call the map $e^{{\bf i}\theta}\mapsto \lim\limits_{r\rightarrow1}\varphi\left(re^{{\bf i}\theta}\right)$ the {\bf boundary function} of $\varphi$, denoted as $\varphi^b$. Sometimes we also call $\varphi^b$ the {\bf boundary of $\varphi$}. Its domain consists of all points $e^{{\bf i}\theta}\in\partial\mathbb{D}$ such that the limit $\lim\limits_{r\rightarrow1}\varphi\left(re^{{\bf i}\theta}\right)$ exists.
Recall that the prime end at $e^{{\bf i}\theta}$ is of the first or the second type, if the limit $\lim\limits_{r\rightarrow1}\varphi\left(re^{{\bf i}\theta}\right)$ exists \cite[p.177, Theorem 9.7]{CL66}.
By Carath\'eodory's {\bf Continuity Theorem} \cite[p.18]{Pom92}, the boundary function $\varphi^b$ is well defined and continuous on the whole unit circle $\partial\bbD$ if and only if the boundary $\partial\Omega$ is a Peano continuum. By \cite[p.17, Theorem 2.1]{CL66}, if $\partial\Omega$ is not a Peano continuum then $\varphi^b$ is a member of $L^\infty(\partial\bbD)\setminus C(\partial\bbD)$. Therefore, we are curious about two types of quantities: (1) those describing how far $\varphi^b$ is from being continuous and (2) those measuring how far $\partial\Omega$ is from being locally connected.
The first type concerns the asymptotic of $\varphi(z)$ for $z\in\bbD$ as $|z|\rightarrow 1$. The second type concerns the topology of $\partial\Omega$.
The lambda function $\lambda_{\partial\Omega}$ gives rise to a quantity of the second type. This function vanishes everywhere if and only if $\varphi^b$ belongs to $C(\partial\bbD)$. Among others, the questions below are of some interest: {\em What does it mean if $\lambda_{\partial\Omega}(x)\le 1$ for all $x$? Can we say something about $\lambda_{\partial\Omega}$ when $\varphi^b$ has finitely many discontinuities?}
In particular, one may consider the case that $\varphi(\mathbb{D})$ coincides with the complement of the Mandelbrot set $\M$. The set $\mathcal{M}$ consists of all the complex parameters $c$ such that the Julia set of $f_c(z)=z^2+c$ is connected. Its local connectedness is still an open question.
Due to the deep works on quadratic polynomials, we know that $\M$ is locally connected at many of its points, such as the Misiurewicz points and those lying on the boundary of a hyperbolic component. From these known results arises a natural question: {\bf Is it true that $\lambda_\M(x)=0$ for all $x$ at which $\M$ is known to be locally connected?} Another question of some interest is: {\bf Can we find some upper bound for the lambda function $\lambda_\M$?} In particular, {\bf Can we show that $\lambda_\M(x)\le1$ for all $x$?}
Very recently, Yang and Yao \cite{YY-2020} showed that $\lambda_{\mathcal{M}}(x)=0$ for all points $x$ lying on the boundary of a hyperbolic component. More studies on questions of a similar nature can be expected.
\section{Main Results}
In the current paper, we want to analyze $\lambda_K, \lambda_L$ for compacta $L\subset K\subset\hat{\mathbb{C}}$, that satisfy specific properties. Our analysis is connected to very basic results of topology, such as the Torhorst Theorem and the gluing lemma for continuous functions.
We will extend and quantify the Torhorst Theorem by an inequality, stating that the lambda function $\lambda_K$ of any compactum $K\subset\hat{\mathbb{C}}$ is an {\bf upper bound} of $\lambda_{\partial U}$ for any complementary component $U$ of $K$.
To this end, let us examine how the atoms of $K$ are related to those of any compactum $L$ lying on the boundary of a component $U$ of $\hat{\bbC}\setminus K$.
\begin{main-theorem}\label{compare_atoms}
Given a compactum $K\subset\hat{\mathbb{C}}$ and a component $U$ of \ $\hat{\bbC}\setminus K$. If $L\subset\partial U$ is a compactum then every atom of $L$ lies in a single atom of $K$. Consequently, every atom of $L$ lies in a single atom of $\partial U$. \end{main-theorem}
With Theorem \ref{compare_atoms}, we compare the lambda functions $\lambda_K,\lambda_L$ and obtain the following.
\begin{main-theorem}
\label{lambda_inequality}
Given a compactum $K\subset\hat{\mathbb{C}}$ and a component $U$ of \ $\hat{\bbC}\setminus K$. If $L\subset\partial U$ is a compactum then $\lambda_L(x)\le \lambda_K(x)$ for all $x\in\hat{\bbC}$.
\end{main-theorem}
Let $\mathcal{A}$ consist of the components of $\hat{\mathbb{C}}\setminus K$. The {\bf envelope function} of $K$ is defined as
$\displaystyle \tilde{\lambda}_K(x)=\sup\limits_{L\subset\partial U, U\in\mathcal{A}}\lambda_L(x)\ \left(x\in\hat{\bbC}\right).$
By Theorems \ref{compare_atoms} and \ref{lambda_inequality}, we have $\tilde{\lambda}_K(x)=\sup\limits_{U\in\mathcal{A}}\lambda_{\partial U}(x)$ and $\tilde{\lambda}_K(x)\le\lambda_K(x)$ for all $x\in\hat{\bbC}$. From now on we call $\tilde{\lambda}_K(x)\le\lambda_K(x)\left(x\in\hat{\mathbb{C}}\right)$ the {\bf\em lambda inequality}, which is also written as $\tilde{\lambda}_K\le\lambda_K$. If $K$ is a Peano compactum then $\lambda_K$ and hence $\lambda_L$ vanish everywhere, for any compactum $L$ lying on the boundary of any component $U$ of $\hat{\mathbb{C}}\setminus K$. That is to say, each compactum $L\subset\partial U$ is a Peano compactum. This includes as a special sub-case the Torhorst Theorem, in which $K$ is assumed to be a Peano continuum and $L=\partial U$.
\begin{rem}
In the above setting, the boundary $\partial K$ may not be locally connected, even if it is a continuum.
See \cite[Example 3.2]{Luo07} or Example \ref{bd_larger} of this paper. In Example \ref{bd_smaller}, we will construct a continuum $L$ such that the range of $\lambda_L-\lambda_{\partial L}$ is $\{-1,0,1\}$.
\end{rem}
Now we consider compacta $K\subset\hat{\mathbb{C}}$ for which the {\bf Lambda Equality} $\tilde{\lambda}_K=\lambda_K$ holds, so that $\tilde{\lambda}_K(x)=\lambda_K(x)$ for all $x\in\hat{\bbC}$.
Let us start from a simple example.
\begin{exam}\label{why_E_compactum}
Let $U$ be the complement of the unit square $\{a+b{\bf i}: 0\le a,b\le 1\}$. Let $K$ be the compactum consisting of $\partial U$ and an infinite sequence of squares (only the boundary) of side length $<1$. These squares are centered at $0.5+0.5{\bf i}$ and converge to $\partial U$ under Hausdorff distance. Then $\lambda_K(x)-\tilde{\lambda}_K(x)=\left\{\begin{array}{ll}1& x\in\partial U\\ 0&\text{otherwise}.\end{array}\right.$
\begin{figure}[ht] \vskip -0.5cm \begin{center} \begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.45] \draw[black,thick] (0,0) -- (0,32) -- (32,32) -- (32,0) -- (0,0); \foreach \j in {3,...,6} { \draw[gray, thick] (2*\j,32-2*\j) -- (32-2*\j,32-2*\j) -- (32-2*\j,2*\j) -- (2*\j,2*\j) --(2*\j,32-2*\j); } \foreach \j in {2,...,5} {\fill[gray] (\j,\j) circle(0.5ex); \fill[gray] (32-\j,32-\j) circle(0.5ex); \fill[gray] (\j,32-\j) circle(0.5ex); \fill[gray] (32-\j,\j) circle(0.5ex); } \draw(0,1) node[left]{$0$}; \draw(32,1) node[right]{$1$}; \draw(0,31) node[left]{${\bf i}$}; \end{tikzpicture} \end{center} \vskip -1.0cm \caption{A simplified depiction of $K$ and the small squares.}\label{seq-squares}\vskip -0.25cm \end{figure}
\end{exam}
If $K$ is given in Example \ref{why_E_compactum} then $\hat{\mathbb{C}}\setminus K$ has infinitely many components of diameter greater than $1-\epsilon$ for some constant $\epsilon\in(0,1)$. Borrowing the idea of {\bf $E$-continuum} \cite[p.113]{Whyburn42}, we define an {\bf $E$-compactum} to be a planar compactum such that for any constant $\varepsilon>0$ its complement has at most finitely many components of diameter $>\varepsilon$. Unfortunately, the condition of $E$-compactum alone is still not sufficient for the {\bf Lambda Equality} $\tilde{\lambda}_K=\lambda_K$. See Examples \ref{E-compactum} and \ref{finite-comp}.
The theorem below gives three conditions under which the Lambda Equality holds.
\begin{main-theorem}\label{equality-case}
Given a compactum $K\subset\hat{\mathbb{C}}$, the Lambda Equality $\tilde{\lambda}_K=\lambda_K$ holds if one of the following conditions is satisfied:
(i) $K$ is an $E$-compactum such that the envelope function $\tilde{\lambda}_K(x)$ vanishes everywhere;
(ii) $K$ is an $E$-compactum whose complementary components have disjoint closures;
(iii) $K$ is a partially unshielded compactum.
\end{main-theorem}
\begin{rem}
In (i) and (ii) of Theorem \ref{equality-case}, the assumption that $K$ is an $E$-compactum cannot be removed. We may set $K=[0,1]\!\times\![0,{\bf i}]\setminus\left(\bigcup\limits_1^\infty R_n\right)$, with $R_n=\left(\frac{1}{3n},\frac{2}{3n}\right)\times\left(\frac13{\bf i},\frac23{\bf i}\right)$. If $W=\hat{\mathbb{C}}\setminus[0,1]\!\times\![0,{\bf i}]$ then $\partial W\cap\partial R_n=\emptyset$ for all $n\ge1$ and $\partial R_n\cap \partial R_m=\emptyset$ for $n\ne m$. See Figure \ref{non-E} for a simple depiction of $K$.
The continuum $K$ is not an $E$-continuum but it satisfies the other assumptions in (i) and (ii) of Theorem \ref{equality-case}. It has exactly one non-degenerate atom, the segment $\left[\frac13{\bf i},\frac23{\bf i}\right]$. Thus $\lambda_K(x)-\tilde{\lambda}_K(x)=\left\{\begin{array}{ll}1& x\in\left[\frac13{\bf i},\frac23{\bf i}\right]\\ 0&\text{otherwise}.\end{array}\right.$
\begin{figure}[ht]
\vskip -0.75cm
\begin{center}
\begin{tikzpicture}[x=5cm,y=5cm,scale=0.618]
\fill[gray!20,thick] (0,0) -- (0,1) -- (1,1) -- (1,0) -- (0,0);
\draw[gray,thick] (0,0) -- (0,1) -- (1,1) -- (1,0) -- (0,0);
\draw[black, ultra thick] (0,1/3) -- (0,2/3);
\foreach \j in {1,...,3}
{
\fill[white] (1/3^\j,1/3) -- (2/3^\j,1/3) -- (2/3^\j,2/3) -- (1/3^\j,2/3) --(1/3^\j,1/3);
\draw[gray, thick] (1/3^\j,1/3) -- (2/3^\j,1/3) -- (2/3^\j,2/3) -- (1/3^\j,2/3) --(1/3^\j,1/3);
}
\foreach \j in {1,...,6}
{\fill[gray] (1/60,0.32+0.05*\j) circle(0.3ex);
}
\draw(0,0.02) node[left]{$0$};
\draw(1,0.02) node[right]{$1$};
\draw(0,0.98) node[left]{${\bf i}$};
\end{tikzpicture}
\end{center}
\vskip -0.95cm
\caption{A depiction for $K$ and some of the rectangles.}\label{non-E}
\vskip -0.25cm
\end{figure}
\end{rem}
Note that the Lambda Equality may not hold for an $E$-compactum $K\subset\hat{\mathbb{C}}$, even if it has finitely many complementary components. See Example \ref{finite-comp}. Also notice that the Lambda Equality under condition (i) implies the theorem below. This extends Whyburn's Theorem \cite[p.113, (4.4)]{Whyburn42}, which says that {\em an $E$-continuum is a Peano continuum if and only if the boundary of any of its complementary components is a Peano continuum}.
\begin{theorem*}[{Extended Whyburn's Theorem}] An $E$-compactum is a Peano compactum if and only if the boundary of any of its complementary components is a Peano compactum.
\end{theorem*}
Theorem \ref{lambda_inequality} addresses how $\lambda_K$ and $\lambda_L$ are related when $L$ lies on the boundary of a component of $\hat{\bbC}\setminus K$. There are other choices of planar compacta $K\supset L$ for which $\lambda_K$ and $\lambda_L$ are intrinsically related. A typical situation happens if the common part of $\overline{K\setminus L}$ and $L$ is a finite set.
\begin{main-theorem}\label{gluing_lemma}
If $K\supset L$ are planar compacta such that $\overline{K\setminus L}$ intersects $L$ at finitely many points then $\lambda_K(x)=\max\left\{\lambda_{\overline{K\setminus L}}(x),\lambda_L(x)\right\}$ for all $x$. \end{main-theorem}
Setting $A=\overline{K\setminus L}$, we can infer that $\lambda_K(x)$ coincides with $\lambda_A(x)$ for $x\in A\setminus L$ and with $\lambda_L(x)$ for $x\in L\setminus A$, equals $\max\left\{\lambda_{A}(x),\lambda_L(x)\right\}$ for $x\in A\cap L$, and vanishes for every $x\notin(A\cup L)$. Therefore, we have the following.
\begin{theorem*}[{Gluing Lemma for Lambda Functions}]\label{gluing_lemma_1}
If in addition $\lambda_A(x)=\lambda_L(x)$ for all $x\in A\cap L$ then $\lambda_K(x)=\lambda_{A\cup L}$ may be obtained by gluing $\lambda_A$ and $\lambda_L$, in the sense that
\begin{equation}\label{form-1}
\lambda_{A\cup L}(x)=\left\{\begin{array}{ll}\lambda_A(x)& x\in A\\ \lambda_L(x)& x\in L\\ 0& {otherwise.}\end{array}\right.
\end{equation}
\end{theorem*}
\begin{rem}
The formulation in Equation (\ref{form-1}) is similar to the one given in the well-known gluing lemma for continuous maps. See for instance \cite[p.69, Theorem (4.6)]{Armstrong}.
In certain situations, Theorem \ref{gluing_lemma} helps us to analyze questions concerning local connectedness of polynomial Julia sets. See Question \ref{small_julia}. However,
the case that $A\cap L$ is an infinite set is more involved. In Theorem \ref{baby_M}, we will extend Theorem \ref{gluing_lemma} to such a case under additional assumptions. This extension allows one to choose $K$ to be the Mandelbrot set and $L$ the closure of a hyperbolic component. For concrete choices of $A$ and $L$ for which Equation (\ref{form-1}) does not hold, we refer to Examples \ref{cantor_combs}, \ref{brooms} and \ref{cup-fs}.
\end{rem}
The rest of this paper is organized as follows. Section \ref{proof-c} is devoted to the proofs of Theorems \ref{compare_atoms} and \ref{lambda_inequality}. Section \ref{equality} gives a proof of Theorem \ref{equality-case}. In Section \ref{glue} we first prove Theorem \ref{gluing_lemma} and then continue to establish Theorem \ref{baby_M}.
Section \ref{examples} gives examples.
\section{The Lambda Inequality}\label{proof-c}
In this section we prove Theorems \ref{compare_atoms} and \ref{lambda_inequality}.
We will study relations on compacta $K\subset\hat{\mathbb{C}}$. Such a relation is regarded as a subset of the product space $K\times K$ and is said to be {\bf closed} if it is closed in $K\times K$. Given a relation $\Rc$ on $K$, we call $\Rc[x]=\{y\in K: \ (x,y)\in\Rc\}$ the fiber of $\Rc$ at $x$. We mostly consider closed relations $\Rc$ that are reflexive and symmetric, so that for all $x,y\in K$ we have (1) $x\in \Rc[x]$ and (2) $x\in\Rc[y]$ if and only if $y\in\Rc[x]$. For such a relation, the {\em iterated relation} $\Rc^2$ is defined naturally, so that
$\displaystyle y\in\Rc^2[x]$ if and only if there exists $z\in K$ with $(x,z), (z,y)\in \Rc$.
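In terms of fibers, the iterated relation may equivalently be described as
\[
\Rc^2[x]=\bigcup_{z\in\Rc[x]}\Rc[z],
\]
so that $\Rc[x]\subset\Rc^2[x]$ whenever $\Rc$ is reflexive.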
Recall that the {\bf Sch\"onflies relation} on a planar compactum $K$ is a reflexive symmetric relation. Under this relation, two points $x_1, x_2$ are related provided that either $x_1=x_2$ or there exist two disjoint Jordan curves $J_i\ni x_i$ such that $\overline{U}\cap K$ has infinitely many components $P_n$, each intersecting both $J_1$ and $J_2$, whose limit under Hausdorff distance contains $\{x_1,x_2\}$. Here $U$ is the component of \ $\hat{\bbC}\setminus(J_1\cup J_2)$ with $\partial U=J_1 \cup J_2$.
Given a compactum $K\subset\hat{\bbC}$, denote by $R_K$ the Sch\"onflies relation on $K$ and by $\overline{R_K}$ the closure of $R_K$. We also call $\overline{R_K}$ the {\bf closed Sch\"onflies relation}.
Let $\Dc_K$ be the finest upper semi-continuous decomposition of $K$ into sub-continua that splits none of the fibers $R_K[x]$. Then $\Dc_K$ coincides with $\Dc_K^{PC}$, the core decomposition of $K$ with Peano quotient \cite[Theorem 7]{LLY-2019}. Therefore the elements of $\mathcal{D}_K$ are the (order-one) atoms of $K$.
Every fiber $\overline{R_K}[x]$ is a continuum; see \cite[Theorem 1.4]{LYY-2020}. However, the compactness and connectedness of the fibers of $R_K$ remain open.
Moreover, in order that a point $y\ne x$ lie in $\overline{R_K}[x]$ it is necessary and sufficient that for every small enough $r>0$ the difference $K\setminus(B_r(x)\cup B_r(y))$ has infinitely many components that intersect each of the circles $\partial B_r(x)$ and $\partial B_r(y)$. See \cite[Theorem 1.3]{LYY-2020}.
The lemma below relates the fibers of $\overline{R_K}^2$ to those of $\overline{R_L}$, where $L$ is a compact subset of $K$ satisfying certain properties.
\begin{lemma}\label{key-lemma}
Given a compactum $K\subset\hat{\mathbb{C}}$ and a component $U$ of \ $\hat{\bbC}\setminus K$. If $L\subset\partial U$ is compact then $\overline{R_L}[x]\subset \overline{R_K}^2[x]$ for any $x\in L$.
\end{lemma}
\begin{proof}
To obtain the containment $\overline{R_L}[x]\subset \overline{R_K}^2[x]$ for any given $x\in L$, we may fix an arbitrary point $y\in\overline{R_L}[x]\setminus\{x\}$ and consider the annulus $A_n=\hat{\mathbb{C}}\setminus\left(B_{1/n}(x)\cup B_{1/n}(y)\right)$, for any integer $n\ge1$ such that $\overline{B_{1/n}(x)}\bigcap\overline{B_{1/n}(y)}=\emptyset$. Here $B_{1/n}(x)$ and $B_{1/n}(y)$ are open disks with radius $1/n$ under spherical distance, that are respectively centered at $x$ and $y$. By \cite[Theorem 1.3]{LYY-2020}, $A_n\cap L$ has infinitely many components intersecting both $\partial B_{1/n}(x)$ and $\partial B_{1/n}(y)$. So we can find an infinite sequence $\{P_i\}$ of such components that converge to some continuum $P_\infty$ under Hausdorff metric.
Since $x,y\in L\subset\partial U$, we can find an open arc $\alpha\subset U$ that connects a point on $\partial B_{1/n}(x)$ to one on $\partial B_{1/n}(y)$. Passing to an appropriate sub-arc, if necessary, we may assume that $\alpha\subset A_n$. Then, we may slightly thicken the closed arc $\overline{\alpha}$ and obtain a topological disc $\alpha^*\subset A_n$, satisfying $\alpha^*\cap K=\emptyset$. From this we see that $\overline{A_n\setminus\alpha^*}$ is homeomorphic to $[0,1]^2$. We will obtain the following.
{\bf Claim}. $P_\infty$ contains two points $u_n\in \partial B_{1/n}(x), v_n\in \partial B_{1/n}(y)$ with $v_n\in\overline{R_K}^2[u_n]$.
Since the claim holds for all large enough integers $n$, we have $\lim\limits_nu_n=x$ and $\lim\limits_nv_n=y$. Since $\overline{R_K}^2$ is a closed relation, we then obtain $y\in \overline{R_K}^2[x]$, which completes the proof. Thus, the remaining issue is to verify the above claim.
As $\overline{A_n\setminus\alpha^*}$ is a topological disc, we consider it to be the unit square $[0,1]^2$. Moreover, we may represent by $[0,1]\times\{1\}$ the arc $l_1=\overline{A_n\setminus\alpha^*}\cap\partial B_{1/n}(x)$ and by $[0,1]\times\{0\}$ the arc $l_2=\overline{A_n\setminus\alpha^*}\cap\partial B_{1/n}(y)$. Fix any point $z$ in $P_\infty\cap(0,1)^2$. For any small $r>0$, let $W_r$ denote the open rectangle centered at $z$ with diameter $r$.
Since $P_i\rightarrow P_\infty$ under Hausdorff distance we may assume that every $P_i$ intersects $W_r$ and lies in $[0,1]^2$, which from now on represents $\overline{A_n\setminus\alpha^*}$.
See Figure \ref{key}.
\begin{figure}[ht]
\vskip -0.5cm
\center{
\begin{tikzpicture}[scale=0.8,x=1.618cm, y=0.618cm]
\draw(-2,0)--(-2,7);
\draw(-2,0)--(7,0)node[below]{$l_2\subset \partial B_{1/n}(y)$};
\draw(-2,7)--(7,7)node[above]{$l_1\subset \partial B_{1/n}(x)$};
\draw(7,0)--(7,7);
\draw(2,0)--(2,7)node[above]{$P_\infty$};
\fill(2,3.5)circle(2pt);
\draw[blue,thick](-1,2)--(5,2) -- (5,5) -- (-1,5) -- (-1,2);
\draw(2,3.5) node[left]{$W_r\ni z$};
\draw(3,0)--(3,7)node[above]{$P_{2i+1}$};
\draw(4.5,0)--(4.5,7)node[above]{$P_{2i-1}$};
\draw[dashed](3.75,0)--(3.75,7);
\draw (3.75,0)node[below]{$P_{2i}$};
\draw(3.75,3) node[left]{$a_i$};
\fill(3.75,3)circle(2pt);
\draw(4,4) node[right]{$b_i$};
\fill(4,4)circle(2pt);
\draw[red](4.25,5)--(4.25,7); \draw(4.3,6) node[left]{$\beta_i$};
\draw[red](4.25,2)--(4.25,0); \draw(4.3,1) node[left]{$\beta_i$};
\end{tikzpicture}
}\vskip -0.75cm
\caption{Relative locations of $z,l_1,l_2,W_r, P_{2i-1}, P_{2i}, P_{2i+1}$ and $a_i, b_i$.}\label{key}
\vskip -0.25cm
\end{figure}
Recall that $[0,1]^2\setminus P_\infty$ has two components, one containing $\{1\}\times[0,1]$ and the other $\{0\}\times[0,1]$. One of these components contains infinitely many $P_i$. Without loss of generality we may assume that every $P_i$ lies in the one containing $\{1\}\times[0,1]$, denoted $V$. Thus $P_i$ can be connected to $\{1\}\times[0,1]$ by an arc in $[0,1]^2$ that does not intersect $P_\infty$. Moreover, rename $P_i(i\ge1)$ so that every $P_i$ can be connected to $\{1\}\times[0,1]$ by an arc in $[0,1]^2$ that does not intersect $P_j$ for $j\ge i+1$. Therefore, each $P_i$ is ``to the right of'' $P_{i+1}$.
For all $i\ge1$ let $V_i$ be the unique component of $\hat{\bbC}\setminus\left(P_{2i-1}\cup P_{2i+1}\cup l_1\cup l_2\right)$ whose boundary intersects each of $l_1$, $l_2$, $P_{2i-1}$ and $P_{2i+1}$. Then $P_{2i}\subset \overline{V_i}$ for $i\ge1$. For the previously given point $z$ in $P_\infty\cap(0,1)^2$, we can find for each $i\ge1$ a point $a_i\in P_{2i}\cap W_r$ such that $\lim\limits_{i\rightarrow\infty}a_i=z$. Since $P_{2i}\subset L\subset \partial U$, we further find a point $b_i\in (W_r\cap V_i\cap U)$ for every $i\ge1$, such that the distance between $a_i$ and $b_i$ converges to zero as $i\rightarrow\infty$. Check Figure \ref{key} for relative locations of $a_i\in P_{2i}$ and $b_i\in(W_r\cap V_i\cap U)$.
Now, for each $i\ge1$ we may find an arc $\alpha_i\subset U$ that starts from a fixed point $b_0\in U$ and ends at $b_i$. Let $c_i$ be the last point at which $\alpha_i$ leaves $\partial[0,1]^2$. Let $d_i$ be the first point on $\alpha_i$ after $c_i$ at which $\alpha_i$ intersects $\partial W_r$. Clearly, we have $c_i\in(l_1\cup l_2)$. Let $\beta_i$ be the sub-arc of $\alpha_i$ from $c_i$ to $d_i$. Check Figure \ref{key} for a rough depiction of two possible locations for $\beta_i$. Then $\beta_i$ and $\beta_j$ for $i\ne j$ are contained in distinct components of $\mathcal{A}_r\setminus L$, where $\mathcal{A}_r=[0,1]^2\setminus W_r$ is topologically a closed annulus.
Since $L\subset K$ and $K\cap U=\emptyset$, the arcs $\beta_i$ and $\beta_j$ for $i\ne j$ are contained in distinct components of $\mathcal{A}_r\setminus K$.
Let $x_n$ be the only point on $l_1\cap P_\infty$ such that the right piece of $l_1\setminus\{x_n\}$ does not intersect $P_\infty$. Let $y_n$ be the point on $l_2\cap P_\infty$ such that the right piece of $l_2\setminus\{y_n\}$ does not intersect $P_\infty$. The sequence $\{c_i\}$ then has a limit point in $\{x_n,y_n\}$. We may assume that $z_r=\lim\limits_{i\rightarrow\infty}d_i$ for some point $z_r\in\partial W_r$. Since $\partial[0,1]^2$ and $\partial W_r$ are disjoint Jordan curves, from the choices of $x_n, y_n$ and $z_r$ we can infer that either $(x_n,z_r)\in R_K$ or $(y_n,z_r)\in R_K$. The flexibility of $r>0$ then leads to the inclusion $z\in\left(\overline{R_K}[x_n]\cup\overline{R_K}[y_n]\right)$.
Now consider the two closed sets $E_n=P_\infty\cap \left(\overline{R_K}[x_n]\cup l_1\right)$ and $F_n=P_\infty\cap \left(\overline{R_K}[y_n]\cup l_2\right)$, which satisfy $P_\infty=E_n\cup F_n$. From the connectedness of $P_\infty$ we see that $E_n\cap F_n\ne\emptyset$. Clearly, each point $w\in (E_n\cap F_n)$ necessarily falls into one of the following cases:
\begin{itemize}
\item[(1)] $w$ lies in $l_1\subset\partial B_{1/n}(x)$ and belongs to $\overline{R_K}[y_n]$,
\item[(2)] $w$ lies in $l_2\subset\partial B_{1/n}(y)$ and belongs to $\overline{R_K}[x_n]$,
\item[(3)] $w\notin(l_1\cup l_2)$ and it lies in $\overline{R_K}[x_n]\cap\overline{R_K}[y_n]\cap(0,1)^2$.
\end{itemize}
In case (1) we set $u_n=w,v_n=y_n$; in case (2) we set $u_n=x_n, v_n=w$; in case (3) we set $u_n=x_n, v_n=y_n$. Then, in cases (1) and (2) we have $v_n\in\overline{R_K}[u_n]\subset\overline{R_K}^2[u_n]$; and in case (3) we will have $v_n\in\overline{R_K}^2[u_n]$. This verifies the claim and completes our proof.
\end{proof}
With Lemma \ref{key-lemma}, we are well prepared to prove Theorems \ref{compare_atoms} and \ref{lambda_inequality} as follows.
\begin{proof}[{\bf Proof for Theorem \ref{compare_atoms}}]
Since $U$ is also a complementary component of $\partial U$, we only need to verify that every atom of $L$ is contained in a single atom of $K$.
To this end, let $\Dc_L^\#$ consist of all those continua that are each a component of $d^*\cap L$ for some $d^*\in\Dc_K$. By \cite[p.44, Theorem 3.21]{Nadler92} and \cite[p.278, Lemma 13.2]{Nadler92}, we see that $\Dc_L^\#$ is an upper semi-continuous decomposition of $L$. As every fiber of $\overline{R_K}^2$ is entirely contained in a single element of $\Dc_K$, by Lemma \ref{key-lemma} we know that every fiber $\overline{R_L}[z]$ is entirely contained in a single element of $\Dc_L^\#$. This implies that $\Dc_L^\#$ is refined by $\Dc_L$. In other words, every atom of $L$ is entirely contained in a single atom of $K$.
\end{proof}
\begin{proof}[{\bf Proof for Theorem \ref{lambda_inequality}}]
To obtain $\lambda_L(x)\le\lambda_K(x)$ for all $x$, we only need to consider the points $x\in L$. With no loss of generality, we may assume that $\lambda_K(x)=m-1$ for some integer $m\ge1$. That is to say, there exist strictly decreasing continua $d_1^*\supset d_2^*\supset\cdots\supset d_{m}^*=\{x\}$ such that $d_1^*$ is an atom of $K$ and $d_{i+1}^*$ an atom of $d_i^*$ for $1\le i\le m-1$. Here we may have $m=1$. By Theorem \ref{compare_atoms}, the atom of $L$ containing $x$, denoted as $d_1$, is a subset of $d_1^*$. Since $d_1\subset d_1^*$ also satisfies the assumptions of Theorem \ref{compare_atoms}, we can infer that the atom of $d_1$ containing $x$, denoted as $d_2$, is a subset of $d_2^*$. Repeating the same argument for $m$ times, we obtain for $1\le i\le m$ an order-$i$ atom $d_i$ of $L$ with $d_i\subset d_i^*$. Here we have $d_{m}=\{x\}$ and hence $\lambda_L(x)\le m=\lambda_K(x)$.
\end{proof}
\begin{rem}
In the proof for Theorem \ref{lambda_inequality}, we know that $U$ is a component of $\hat{\bbC}\setminus K$ and $L\subset\partial U$. Therefore, in the same way we can show that $\lambda_L(x)\le\lambda_{\partial U}(x)$ for all $x$. From this we can infer that
$\sup\limits_U\lambda_{\partial U}(x)=\tilde{\lambda}_K(x)$ for all $x\in\hat{\mathbb{C}}$.
\end{rem}
\section{On Lambda Equalities}\label{equality}
We prove Theorem \ref{equality-case}, establishing three equalities in terms of the lambda function. Two of these equalities are for $E$-compacta. The other one is for partially unshielded compacta.
Given an $E$-compactum $K\subset\hat{\mathbb{C}}$ with complementary components $U_1,U_2,\ldots$, so that the diameters $\delta(U_i)$ either form a finite sequence or an infinite one converging to zero. The Torhorst Inequality states that $\sup\limits_i\lambda_{\partial U_i}(x)\le\lambda_K(x)$ for all $x\in\hat{\mathbb{C}}$. Since $\lambda_K(x)=\tilde{\lambda}_K(x)=0$ for all $x\in K^o\cup\left(\bigcup\limits_iU_i\right)$, we only need to consider the points on $\partial K$, which may not equal $\bigcup\limits_i\partial U_i$.
Lemma \ref{bridging_lemma} follows from \cite[Lemma 3.3]{LLY-2019} and is useful when we prove Lemma \ref{trivial-fiber}.
\begin{lemma}\label{bridging_lemma}
If $A\subset\hat{\bbC}$ is a closed topological annulus and $K\subset\hat{\bbC}$ a compactum then the following statements are equivalent: (1) $A\cap K$ has infinitely many components intersecting each of the two components of $\partial A$; (2) $A\setminus K$ has infinitely many components intersecting each of the two components of $\partial A$.
\end{lemma}
\begin{lemma}\label{trivial-fiber}
Given an $E$-compactum $K\subset\hat{\mathbb{C}}$ with complementary components $U_1,U_2,\ldots$. If $\overline{R_K}[x]$ contains a point $y\ne x$ then $y\in\overline{R_{\partial U_i}}[x]$ for some $i$.
\end{lemma}
\begin{proof}
Let $\rho(x,y)$ be the spherical distance between $x$ and $y$. For each $n\ge2$ let $B_n(x)$ and $B_n(y)$ be the open disks of radius $2^{-n}\rho(x,y)$ that are centered at $x$ and $y$ respectively. Then $A_n=\hat{\mathbb{C}}\setminus\left(B_n(x)\cup B_n(y)\right)$ is a topological annulus. By \cite[Theorem 1.3]{LYY-2020}, the intersection $A_n\cap K$ has infinitely many components that intersect $\partial B_n(x)$ and $\partial B_n(y)$ both. By Lemma \ref{bridging_lemma}, the difference $A_n\setminus K$ has infinitely many components, say $\{P^n_j: j\ge1\}$, that intersect $\partial B_n(x)$ and $\partial B_n(y)$ both. Since the diameters of those $P^n_j$ are no less than $\rho(x,y)/2$ and since we assume $K$ to be an $E$-compactum, there is an integer $i(n)$ such that $U_{i(n)}$ contains infinitely many of those $P^n_j$. Here all those $P^n_j$ that are contained in $U_{i(n)}$ are each a component of $A_n\cap U_{i(n)}$.
Now, choose a subsequence $\{Q^n_k: k\ge1\}$ of $\{P^n_j: j\ge1\}$, with $Q_k^n\subset U_{i(n)}$, such that $\overline{Q^n_k}$ converges under Hausdorff distance to a continuum $M_n$. Then $M_n$ is a subset of $\partial U_{i(n)}$ and intersects $\partial B_n(x)$ and $\partial B_n(y)$ both. Fixing any $a_n$ in $M_n\cap \partial B_n(x)$ and $b_n$ in $M_n\cap \partial B_n(y)$, we will have $(a_n,b_n)\in R_{\partial U_{i(n)}}$. Since $K$ is an $E$-compactum, there are infinitely many integers $n$ such that $i(n)$ takes the same value, say $i$. Therefore, we have two infinite sequences $\{c_n\}\subset\{a_n\}$ and $\{d_n\}\subset \{b_n\}$, with $c_n,d_n\in\partial U_i$, such that $(c_n,d_n)\in R_{\partial U_i}$ for all $n\ge2$. Since $\lim\limits_{n\rightarrow\infty}c_n=x$ and $\lim\limits_{n\rightarrow\infty}d_n=y$, we readily have $(x,y)\in\overline{R_{\partial U_i}}$, or equivalently $y\in\overline{R_{\partial U_i}}[x] $.
\end{proof}
Now we are well prepared to prove parts (i) and (ii) of Theorem \ref{equality-case}, whose results are respectively included in the next two propositions.
\begin{proposition}\label{equality-case-1}
If $K$ is an $E$-compactum such that $\tilde{\lambda}_K(x)=0$ for all $x\in\hat{\mathbb{C}}$ then $\lambda_K(x)$ vanishes everywhere.
\end{proposition}
\begin{proof}
As $\tilde{\lambda}_K(x)$ vanishes everywhere, all the relations $\overline{R_{\partial U_i}}$ are trivial, in the sense that the fibers $\overline{R_{\partial U_i}}[x]$ are each a singleton for all $i$ and all $x\in\partial U_i$. Combining this with the conclusion of Lemma \ref{trivial-fiber}, we can infer that the fiber $\overline{R_K}[x]=\{x\}$ for all $x\in K$. From this, we see that every atom of $K$ is a singleton and that $\lambda_K(x)=0$ for all $x$.
\end{proof}
\begin{proposition}\label{equality-case-2}
Given an $E$-compactum $K$. If $\partial U_i\cap\partial U_j=\emptyset$ for $i\ne j$ then $\lambda_K=\tilde{\lambda}_K$.
\end{proposition}
\begin{proof}
Let $\mathcal{D}_i$ denote the core decomposition of $\partial U_i$. Since we assume that $\partial U_i\cap\partial U_j=\emptyset$ for $i\ne j$, the collection
$\displaystyle \mathcal{D}_K^*:=\left(\bigcup\limits_i\mathcal{D}_i\right)\cup\left\{\{x\}: x\in K\setminus\left(\bigcup\limits_i\partial U_i\right)\right\}$
is a partition that divides $K$ into sub-continua. It suffices to show that $\Dc_K^*$ is the core decomposition of $K$.
Recall that $\Dc_K$ is the finest monotone decomposition such that every fiber of $\overline{R_K}$ is contained in a single element of $\Dc_K$. By Lemma \ref{key-lemma}, we know that $\Dc_K$ is refined by $\Dc_K^*$. On the other hand, since $K$ is an $E$-compactum and since $\partial U_i\cap\partial U_j=\emptyset$ for $i\ne j$, we can use Lemma \ref{trivial-fiber} to infer that every fiber of $\overline{R_K}$ is contained in a single element of $\Dc^*_K$. Therefore, we only need to verify that $\mathcal{D}^*_K$ is upper semi-continuous, which then indicates that $\Dc_K^*$ is a monotone decomposition hence is refined by $\Dc_K$.
In other words, we need to verify that the equivalence $\sim$ determined by the partition $\Dc_K^*$ is closed as a subset of $K\times K$. To this end, we consider an arbitrary sequence $\{(x_n,y_n): n\ge1\}$ in $K\times K$ with $\lim\limits_{n\rightarrow\infty}(x_n,y_n)=(x,y)$ such that $x_n\sim y_n$ for all $n\ge1$. There are two possibilities: either $x=y$ or $x\ne y$. In the first case, we have $(x,y)=(x,x)$, which is surely an element of $\sim$. In the second, the assumption that $K$ is an $E$-compactum implies that there is some $U_i$ such that $\{x_n,y_n\}\subset\partial U_i$ for infinitely many $n\ge1$. Consequently, the subset $\{x,y\}$ is contained in a single element of $\Dc_{i}$, which is a sub-collection of $\Dc_K^*$. That is to say, we have $x\sim y$. This ends our proof.
\end{proof}
The arguments in the above proof actually imply the following.
\begin{theo}\label{equal-cd}
Given an $E$-compactum $K$. If $\partial U_i\cap\partial U_j=\emptyset$ for $i\ne j$ then every atom of $K$ is either an atom of some $\partial U_i$ or a singleton $\{x\}$ with $x\in K\setminus\left(\bigcup_i\partial U_i\right)$.
\end{theo}
Now we go on to consider partially unshielded compacta and obtain Theorem \ref{equality-case}(iii).
\begin{deff}\label{part-unshielded}
Let $L\subset\hat{\mathbb{C}}$ be an unshielded compactum, which equals the boundary $\partial U$ of one of its complementary components $U$. A compactum $K$ formed by the union of $L$ with some complementary components of $L$ other than $U$ is called a {\bf partially unshielded compactum} determined by $L$.
\end{deff}
In order to find typical examples, one may set $L$ to be a polynomial Julia set, $U$ the unbounded Fatou component, and $K$ the union of $L$ and some bounded Fatou components. The next proposition discusses the relation between the atoms of any given compactum $L\subset\hat{\mathbb{C}}$ and those of a compactum $K$, where $K$ is the union of $L$ with some (not all) components of $\hat{\bbC}\setminus L$.
\begin{proposition}\label{useful}
Given a planar compactum $L\subset \hat{\mathbb{C}}$ and a family $\{U_\alpha:\ \alpha\in I\}$ of components of \ $\hat{\mathbb{C}}\!\setminus\!L$. If $\displaystyle K=L\cup\left(\bigcup_{\alpha\in I}U_\alpha\right)$ then $\overline{R_K}$ is a subset of $\{(z,z):\ z\in K\!\setminus\!L\}\cup\overline{R_L}$. Consequently, every atom of $K$ is either a singleton lying in $K\setminus L$ or a sub-continuum of an atom of $L$.
\end{proposition}
\begin{proof}
Since $\displaystyle K=L\cup\left(\bigcup_{\alpha\in I}U_\alpha\right)$, every point $z\in (K\setminus L)$ lies in some $U_\alpha$. Thus the atom of $K$ containing $z$ is exactly the singleton $\{z\}$. From this it readily follows that every atom $d^*$ of $K$ that intersects $L$ is a sub-continuum of $L$. So we have $\overline{R_K}=\{(z,z):\ z\in K\!\setminus\!L\}\cup\left(\overline{R_K}\cap L^2\right)$. Therefore, we only need to show that $\left(\overline{R_K}\cap L^2\right)\subset \overline{R_L}$.
Indeed, if on the contrary there were some $(x,y)\in \overline{R_K}\cap L^2$ not belonging to $\overline{R_L}$ then, for any small enough number $r>0$, the difference $L\setminus (B_r(x)\cup B_r(y))$ would have finitely many components intersecting $\partial B_r(x)$ and $\partial B_r(y)$ both. Let $A_r=\hat{\mathbb{C}}\setminus (B_r(x)\cup B_r(y))$. By Lemma \ref{bridging_lemma}, $A_r\setminus L$ has at most finitely many components that intersect $\partial B_r(x)$ and $\partial B_r(y)$ both. As we assume that $\displaystyle K=L\cup\left(\bigcup_{\alpha\in I}U_\alpha\right)$, every component of $A_r\setminus K$ is also a component of $A_r\setminus L$. Thus $A_r\setminus K$ has at most finitely many components that intersect both $\partial B_r(x)$ and $\partial B_r(y)$. In other words, we have $(x,y)\notin\overline{R_K}$. This is absurd since we assume that $(x,y)\in \overline{R_K}$.
\end{proof}
There are other basic facts concerning an unshielded compactum $L$ and a partially unshielded compactum $K$ determined by $L$. Firstly, every interior point of $K$ lies in some complementary component of $L$; secondly, every boundary point of $K$ lies in $L$. Thus we always have $\partial K=L$; moreover, every atom of $K$ that intersects the interior $K^o$ is necessarily a singleton. Therefore, in order to determine the atoms of $K$ we only need to consider those of $L$.
\begin{theo}\label{part-2}
Let $L\subset\hat{\mathbb{C}}$ be an unshielded compactum. Let $K$ be a partially unshielded compactum determined by $L$. Then every atom of $L$ is also an atom of $K$ and we have $\Dc_K=\Dc_L\cup\{\{x\}: x\in K\setminus L\}$. Consequently, $\tilde{\lambda}_K(x)=\lambda_K(x)$ for all $x\in\hat{\mathbb{C}}$.
\end{theo}
\begin{proof}
As $L$ is unshielded, there is a component $U$ of $\hat{\mathbb{C}}\setminus L$ with $L=\partial U$. By Lemma \ref{key-lemma}, every atom of $L$ lies in a single atom of $K$. By Proposition \ref{useful}, every atom of $K$ intersecting $L$ is contained in a single atom of $L$. Thus every atom of $L$ is also an atom of $K$. As any singleton $\{x\}$ with $x\in K^o= K\setminus L$ is an atom of $K$, we have $\Dc_K=\Dc_L\cup\{\{x\}: x\in K\setminus L\}$. This indicates the Lambda Equality $\tilde{\lambda}_K=\lambda_K$.
\end{proof}
\begin{rem}\label{why_partially_unshielded}
Theorem \ref{part-2} gives a result that is slightly stronger than Theorem \ref{equality-case}(iii). In particular, for any full compactum $K$ we have $\mathcal{D}_{\partial K}\subset\mathcal{D}_K$. Therefore, a full compactum $K$ is a Peano compactum if and only if the boundary $\partial K$ is. In particular, if $G\subset\hat{\mathbb{C}}$ is a simply connected bounded domain then $\partial G$ is locally connected if and only if $K=\hat{\mathbb{C}}\setminus G$ is locally connected, or equivalently when $K$ is a Peano continuum. This basic fact has been well known, see for instance the items (iii) and (iv) of \cite[p.20, Theorem 2.1]{Pom92}. Now, it is extended to a quantitative version in Theorem \ref{part-2}. This extension applies to an arbitrary full continuum, that may or may not be locally connected.
\end{rem}
\section{The Gluing Lemma for Lambda Functions}\label{glue}
We will follow the philosophy of the well-known gluing lemma for continuous maps.
See for instance \cite[p.69, Theorem (4.6)]{Armstrong} for the simple case and \cite[p.70, Theorem (4.8)]{Armstrong} for the general setting. Our aim is to prove Theorem \ref{gluing_lemma}, which deals with the lambda functions $\lambda_K,\lambda_L$ for planar compacta $K\supset L$ such that $A=\overline{K\setminus L}$ intersects $L$ at finitely many points $x_1,\ldots,x_n$. In Theorem \ref{baby_M}, we further extend Theorem \ref{gluing_lemma} to the case that $A\cap L$ is a countably infinite set, under additional assumptions. Notice that when $A\cap L$ is an infinite set Theorem \ref{gluing_lemma} may not hold. See Examples \ref{cantor_combs}, \ref{brooms} and \ref{cup-fs}.
\begin{proof}[{\bf Proof for Theorem \ref{gluing_lemma}}]
For $1\le i\le n$, denote by $d_i^1$ the order-$1$ atom of $A$ that contains $x_i$. Similarly, denote by $e_i^1$ the atom of $L$ that contains $x_i$.
Let $K_1=A_1\cup L_1$, where $\displaystyle A_1=\bigcup_id_i^1$ and $\displaystyle L_1=\bigcup_ie_i^1$. Then $K_1$ has finitely many components. Let $\Ec_1$ be the collection of these components.
By \cite[Theorem 1.3]{LYY-2020}, a point $y\ne x$ lies in $\overline{R_K}[x]$
if and only if $K\setminus(B_r(x)\cup B_r(y))$ has infinitely many components that intersect both $\partial B_r(x)$ and $\partial B_r(y)$ for small enough $r>0$. Because of this, we can directly check that $\overline{R_K}=\overline{R_A}\cup\overline{R_L}$. Here $\overline{R_K},\overline{R_A},\overline{R_L}$ are respectively the closed Sch\"onflies relations on $K,A$ and $L$. Let \[
\Dc_1=\left(\Dc_L\setminus\left\{e_1^1,\ldots,e_n^1\right\}\right)\cup
\left(\Dc_A\setminus\left\{d_1^1,\ldots,d_n^1\right\}\right)\cup
\Ec_1.\]
Then $\Dc_1$ is an upper semi-continuous decomposition of $K$ into subcontinua. Since $\Dc_1$ does not split the fibers of $\overline{R_K}$, it is refined by $\Dc_K$, the core decomposition of $K$ with Peano quotient. On the other hand, the equality $\overline{R_K}=\overline{R_A}\cup\overline{R_L}$ indicates that $\mathcal{D}_K$ does not split the fibers of $\overline{R_A}$ and those of $\overline{R_L}$. Thus each atom of $A$ lies in an atom of $K$; similarly, every atom of $L$ lies in an atom of $K$. Consequently, we have.
\begin{lemma}\label{gluing_atoms_a}
$\Dc_K=\Dc_1$. Thus $d\cap A$ (or $d\cap L$) either is empty or consists of finitely many atoms of $A$ (resp. $L$) for any atom $d$ of $K$.
\end{lemma}
Lemma \ref{gluing_atoms_a} ensures that $\displaystyle \lambda_K(x)=\max\left\{\lambda_A(x),\lambda_L(x)\right\}$ for all $x\notin K_1$. That is to say, the equation $\lambda_K(x)=\max\left\{\lambda_{\overline{K\setminus L}}(x),\lambda_L(x)\right\}$ in Theorem \ref{gluing_lemma} holds for all points $x\notin K_1$, so that we only need to consider the points $x\in K_1$.
Notice that we have set $A=\overline{K\setminus L}$, $\displaystyle A_1=\bigcup_id_i^1$, and $\displaystyle L_1=\bigcup_ie_i^1$. We will need to verify that $\displaystyle \lambda_{K_1}(x)=\max\left\{\lambda_{A_1}(x),\lambda_{L_1}(x)\right\}$ for all $x\in K_1$, since for $x\in A_1$ and $y\in L_1$ we have
\[
\begin{array}{ccc}
\lambda_A(x)=\left\{\begin{array}{ll} 0& \{x\}\in\Dc_A\\
1+\lambda_{A_1}(x)& otherwise\end{array}\right. &\text{and}&
\lambda_L(y)=\left\{\begin{array}{ll} 0& \{y\}\in\Dc_L\\
1+\lambda_{L_1}(y)& otherwise.\end{array}\right.
\end{array}\]
To do that, we recall that $\mathcal{D}_{A_1}$ consists of all the order-$2$ atoms of $A$ lying in $A_1$. Similarly, $\mathcal{D}_{L_1}$ consists of all the order-$2$ atoms of $L$ lying in $L_1$. Thus we may repeat the above procedure again, replacing $A$ and $L$ by $A_1$ and $L_1$. This then gives rise to two compacta $A_2\subset A_1$ and $L_2\subset L_1$ such that
$\displaystyle \lambda_{K_1}(x)=\max\left\{\lambda_{A_1}(x),\lambda_{L_1}(x)\right\}$ for all $x\notin K_2=A_2\cup L_2$.
We may carry out the same procedure indefinitely and obtain two decreasing sequences of compacta: (1) $A_1\supset A_2\supset\cdots$ and (2) $L_1\supset L_2\supset\cdots$. Setting $K_p=A_p\cup L_p$ for $p\ge1$, we have the following equations:
\begin{equation}
\displaystyle \lambda_{K_{p}}(x)=\left\{\begin{array}{ll}0& \{x\}\in\Dc_{K_p}\\
1+\lambda_{K_{p+1}}(x)& otherwise\end{array}\right. \quad (x\in K_{p+1}).
\end{equation}
\begin{equation}
\displaystyle \lambda_{A_{p}}(x)=\left\{\begin{array}{ll}0& \{x\}\in\Dc_{A_p}\\
1+\lambda_{A_{p+1}}(x)& otherwise\end{array}\right. \quad(x\in A_{p+1})
\end{equation}
\begin{equation}
\displaystyle \lambda_{L_{p}}(x)=\left\{\begin{array}{ll}0& \{x\}\in\Dc_{L_p}\\
1+\lambda_{L_{p+1}}(x)& otherwise\end{array}\right.\quad (x\in L_{p+1})
\end{equation}
\begin{equation}
\lambda_{K_p}(x)=\max\left\{\lambda_{A_p}(x),\lambda_{L_p}(x)\right\}\quad (x\notin K_{p+1})
\end{equation}
There are two possibilities. In the first, we have $K_p=K_{p+1}$ for some $p\ge1$, indicating that $K_m=K_p$ for all $m\ge p$.
In such a case, we have $\lambda_{K_p}(x)=\max\left\{\lambda_{A_p}(x),\lambda_{L_p}(x)\right\}$ and hence $\lambda_{K}(x)=\max\left\{\lambda_{A}(x),\lambda_{L}(x)\right\}$.
In the second, we have $K_p\ne K_{p+1}$ for all $p\ge1$. This implies that $\lambda_K(x)=\max\left\{\lambda_A(x),\lambda_L(x)\right\}=\infty$ holds for all $x\in K_\infty=\bigcap_pK_p$ and that $\lambda_{K}(x)=p+\lambda_{K_p}(x)=p+\max\left\{\lambda_{A_p}(x),\lambda_{L_p}(x)\right\}=
\max\left\{\lambda_{A}(x),\lambda_{L}(x)\right\}$ holds for $p\ge1$ and $x\in K_p\setminus K_{p+1}$. Here $\displaystyle K_1\setminus K_{\infty}=\bigcup_{p=1}^\infty(K_p\setminus K_{p+1})$. This completes our proof.
\end{proof}
Lemma \ref{gluing_atoms_a} and Theorem \ref{gluing_lemma} are useful, when we study $\lambda_K$ for certain choices of planar compacta $K$. For instance, we may choose $K$ to be the Julia set of a renormalizable polynomial $f(z)=z^2+c$ and $L$ the small Julia set. For the sake of convenience, we further assume that the only critical point of $f$ is recurrent and that there is no irrationally neutral cycle. Then it is possible to choose a decreasing sequence of Jordan domains $\{U_n\}$, with $\overline{U_{n+1}}\subset U_n$ and $\displaystyle L=\bigcap_{n=1}^\infty U_n$, such that every $K\cap \partial U_n$ consists of finitely many points that are periodic or pre-periodic. See for instance \cite[section 2.2]{Jiang00}.
For any $n\ge1$ we can use \cite[Theorems 2 and 3]{Kiwi04} to infer that every singleton $\{x\}$ with $x\in (K\cap\partial U_n)$ is an atom of $K$ hence is also an atom of $L_n=K\cap\overline{U_n}$. Combining these with Lemma \ref{gluing_atoms_a} and Theorem \ref{gluing_lemma}, we further see that $\mathcal{D}_{L_{n+1}}\subset\mathcal{D}_{L_n}\subset\mathcal{D}_K$ for all $n\ge1$. However, we are not sure whether $\mathcal{D}_L\subset\mathcal{D}_K$. Similarly, it is not clear whether $\lambda_K(x)=\lambda_L(x)$ holds for $x\in L$. Therefore, we propose the following.
\begin{que}\label{small_julia}
Let $K=L_0\supset L_1\supset L_2\supset\cdots$ be a decreasing sequence of planar compacta such that $L_n\cap\overline{K\setminus L_n}$ is a finite set for all $n\ge1$.
Set $L=\bigcap_{n\ge1}L_n$. Find conditions so that (1) $\mathcal{D}_L\subset\mathcal{D}_K$ or (2) $\lambda_K(x)=\lambda_L(x)$ holds for all $x\in L$.
\end{que}
As a response to Question \ref{small_julia}, we turn to study the lambda functions of two planar compacta $K\supset L$
such that $K\setminus L$ is contained in the union of at most countably many continua $P_n\subset K$ that satisfy the following properties:
\begin{itemize}
\item[(P1)] every $P_n$ intersects $L$ at a single point $x_n$,
\item[(P2)] for any constant $C>0$ at most finitely many $P_n$ are of diameter greater than $C$, and
\item[(P3)] $P_n\cap P_m=\emptyset$ for $n\ne m$.
\end{itemize}
Here $P_n\setminus\{x_n\}$ might be disconnected for some of the integers $n\ge1$. Notice that there is a special situation, when $K$ is the Mandelbrot set $\M$. Then, in order that the above properties (P1)-(P3) be satisfied, we may choose $L$ to be the closure of a hyperbolic component or a {\bf Baby Mandelbrot set}.
As an extension of Theorem \ref{gluing_lemma}, we will obtain the following.
\begin{theo}\label{baby_M}
Given two planar compacta $K\supset L$ that satisfy (P1) to (P3), we have \begin{equation}\label{baby}
\lambda_K(x)=\left\{\begin{array}{lll}
\lambda_{P_n}(x) & x\in P_n\setminus\{x_n\}\ \text{for some}\ n & (\text{case}\ 1)\\
\lambda_L(x) & x\in L\setminus\{x_n: n\in\mathbb{N}\} & (\text{case}\ 2)\\
\max\left\{\lambda_L(x_n),\lambda_{P_n}(x_n)\right\} & x=x_n\ \text{for some}\ n & (\text{case}\ 3)\\
0 & \text{otherwise} & (\text{case}\ 4)
\end{array}\right. \end{equation}
\end{theo}
We just need to consider the above equation for points $x\in K$.
To do that, we may
define an equivalence relation $\sim$ on $\mathbb{N}$ so that $m\sim n$ if and only if $x_m,x_n$ are contained in the same atom of $L$. Let $\{I_j:j\}$ be the equivalence classes of $\sim$.
Denote by $d_n$ the atom of $P_n$ that contains $x_n$, and by $e_j$ the atom of $L$ that contains all $x_n$ with $n\in I_j$. Moreover, set $e'_j=e_j\cup\left(\bigcup_{n\in I_j}d_n\right)$ for every $j$.
Then $\{e'_j: j\}$ is a collection of at most countably many continua that are pairwise disjoint.
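To unwind this notation, here is a minimal hypothetical instance (our illustration, with invented data):
```latex
% Hypothetical instance: suppose x_1 and x_2 lie in one common atom e of L,
% while every other x_n is alone in its atom of L. Then
\[
I_1=\{1,2\},\qquad e_1=e,\qquad e_1'=e\cup d_1\cup d_2,
\]
% and each remaining class is a singleton I_j = {n}, with
% e_j' = e_j \cup d_n, where e_j is the atom of L containing x_n.
```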
Now we consider the following upper semi-continuous decomposition of $K$:
\begin{equation}\label{baby_M_partition}
\Dc_1=\left(\Dc_L\setminus\left\{e_j: j\right\}\right)\cup
\left(\bigcup_n\Dc_{P_n}\setminus\{d_n\}\right)\cup
\{e'_j: j\}.
\end{equation}
All its elements are sub-continua of $K$ that do not split the fibers of $\overline{R_K}$, so $\Dc_1$ is refined by $\Dc_K$. On the other hand, by \cite[Theorem 1.3]{LYY-2020}, we also have $\overline{R_K}=\overline{R_L}\cup\left(\bigcup_{n\ge1}\overline{R_{P_n}}\right)$. Thus $\mathcal{D}_K$ does not split the fibers of $\overline{R_L}$, nor those of $\overline{R_{P_n}}$ for any $n\ge1$. Therefore, every atom of $L$ lies in an atom of $K$, as does every atom of $P_n$. Consequently, we have the following.
\begin{lemma}\label{gluing_atoms_b}
$\Dc_K=\Dc_1$. Therefore, for any atom $d$ of $K$ the intersection $d\cap L$ (or $d\cap P_n$ for any $n\ge1$) is either empty or a single atom of $L$ (respectively, $P_n$).
\end{lemma}
\begin{proof}[{\bf Proof of Theorem \ref{baby_M}}]
Clearly, $\lambda_K(x)=\lambda_L(x)$ for all $x$ in $L\setminus\left(\bigcup_je'_j\right)$. Similarly, $\lambda_K(x)=\lambda_{P_n}(x)$ for all $x$ in $P_n\setminus d_n$. Moreover, $\lambda_K(x_n)=\lambda_L(x_n)=\lambda_{P_n}(x_n)=0$ for all $x_n$ such that $\{x_n\}$ is an atom of $L$ and also an atom of $P_n$. Therefore, we just need to consider those $e_j'$ that are non-degenerate.
Let $\mathcal{N}_1$ be the collection of all the integers $j$ such that $e_j'$ is not a singleton.
Then, for any $n_1\in\mathcal{N}_1$, $e_{n_1}$ is a subcontinuum of $e_{n_1}'$ such that $e_{n_1}'\setminus e_{n_1}$ is covered by the continua $d_n$ with $n\in I_{n_1}$. Thus the properties (P1)-(P3) are satisfied if $K$ and $L$ are replaced by $e_{n_1}'$ and $e_{n_1}$, respectively. It is then routine to check the following:
\begin{equation}\label{inductive_1}
\lambda_K(x)=\left\{\begin{array}{ll}
\lambda_{P_n}(x)&x\in P_n\setminus\left(\bigcup_{n_1}e_{n_1}'\right)\\
\lambda_{L}(x)& x\in L\setminus\left(\bigcup_{n_1}e_{n_1}'\right)\\
1+\lambda_{e_{n_1}'}(x)& x\in e_{n_1}'\ \text{for\ some}\ n_1\in\mathcal{N}_1
\end{array}\right.
\end{equation}
Every atom of $e_{n_1}'$ falls into exactly one of the following possibilities: (1) an order-two atom of $P_n$ for some $n\in I_{n_1}$ that is disjoint from $\{x_n\}$, (2) an order-two atom of $L$ that is disjoint from $\{x_n: n\in I_{n_1}\}$, (3) a singleton $\{x_n\}$ for some $n\in I_{n_1}$, which is an order-two atom of $L$ and is also an order-two atom of $P_n$, (4) a non-singleton continuum that consists of the order-two atom of $L$ containing some $x_n$, with $n\in I_{n_1}$, and the order-two atom of $P_n$ containing $x_n$.
An atom falling in the first three possibilities is called an atom of {\bf pure type}.
We can check that $e_{n_1}'$ has at most countably many atoms that are not of pure type. Such an atom is generally denoted as $e'_{n_1n_2}$. Similarly, we can define continua $e'_{n_1n_2\ldots n_p}$ for $p>2$. On the one hand, such a continuum is an order-$p$ atom of $K$; on the other hand, it is also an atom of $e'_{n_1n_2\ldots n_{p-1}}$ that is not of pure type.
By the same arguments that were used in obtaining Equation (\ref{inductive_1}), we can infer the following equation:
\begin{equation}\label{inductive_2}
\lambda_K(x)=\left\{\begin{array}{ll}
\lambda_{P_n}(x)&x\in P_n\setminus\left(\bigcup_{n_1,n_2}e_{n_1n_2}'\right)\\
\lambda_{L}(x)& x\in L\setminus\left(\bigcup_{n_1,n_2}e_{n_1n_2}'\right) \\
2+\lambda_{e_{n_1n_2}'}(x)& x\in e_{n_1n_2}'\ \text{for\ some}\ n_1,n_2
\end{array}\right.
\end{equation}
This equation may be extended to order-$p$ atoms $e'_{n_1n_2\ldots n_p}$ with $p\ge2$ in the following way.
\begin{equation}\label{inductive_p}
\lambda_K(x)=\left\{\begin{array}{ll}
\lambda_{P_n}(x)&x\in P_n\setminus\left(\bigcup_{n_1,\ldots,n_p}e_{n_1\cdots n_p}'\right)\\
\lambda_{L}(x)& x\in L\setminus\left(\bigcup_{n_1,\ldots,n_p}e_{n_1\cdots n_p}'\right) \\
p+\lambda_{e_{n_1\cdots n_p}'}(x)& x\in e_{n_1\cdots n_p}'\ \text{for\ some}\ n_1,\ldots,n_p
\end{array}\right.
\end{equation}
Notice that Theorem \ref{baby_M} holds for every $x\in K$ lying in an atom of $e'_{n_1n_2\ldots n_p}$ that is of pure type. Such a point $x$ does not lie in $e'_{n_1n_2\ldots n_pn_{p+1}}$ for any choice of $n_{p+1}$ and hence falls into exactly one of the following possibilities:
\begin{itemize}
\item $x\in P_n\setminus\{x_n\}$ for some $n\ge1$ and $\lambda_K(x)=\lambda_{P_n}(x)\ge p$;
\item $x\in L\setminus\{x_n: n\ge1\}$ and $\lambda_K(x)=\lambda_L(x)\ge p$;
\item $x=x_n$ for some $n\ge1$ and $\lambda_K(x)=\max\left\{\lambda_L(x),\lambda_{P_n}(x)\right\}=p$.
\end{itemize}
Every other point $x\in K$ necessarily lies in $e'_{n_1n_2\ldots n_p}$ for infinitely many $p$, and the continua $e'_{n_1n_2\ldots n_p}$ decrease to a continuum $M_x$. There are three possibilities: either $x\in L\setminus\{x_n: n\ge1\}$, or $x\in P_n\setminus\{x_n\}$ for some $n\ge1$, or $x=x_n$ for some $n\ge1$. In the first case, we have $\lambda_K(x)=\lambda_L(x)=\infty$; in the second, $\lambda_K(x)=\lambda_{P_n}(x)=\infty$; in the third, $\lambda_K(x)=\max\left\{\lambda_L(x),\lambda_{P_n}(x)\right\}=\infty$. This completes our proof.
\end{proof}
\begin{rem}
Let $K=L_0\supset L_1\supset L_2\supset\cdots$ be given as in Question \ref{small_julia}. Also let $L=\bigcap_{n\ge1}L_n$. Then $L_n\cap\overline{K\setminus L_n}$ is a finite set for all $n\ge1$.
Assume in addition that (1) every singleton $\{x_n\}$ is an atom of $K$ and (2) $K$ and $L$ satisfy the requirements in Theorem \ref{baby_M}. By Lemma \ref{gluing_atoms_a}, we see that $\mathcal{D}_{L_{n+1}}\subset\mathcal{D}_{L_n}\subset\mathcal{D}_K$ for all $n\ge1$; thus from Theorem \ref{gluing_lemma} we can infer that $\lambda_K(x)=\lambda_{L_n}(x)$ for all $x\in L_n$. Moreover, by Lemma \ref{gluing_atoms_b} we have $\mathcal{D}_L\subset\mathcal{D}_K$. Therefore, by Theorem \ref{baby_M} we further infer that $\lambda_K(x)=\lambda_L(x)$ holds for all $x\in L$.
\end{rem}
\section{Examples}\label{examples}
We shall construct several examples.
The first two provide choices of compacta $A,B\subset\hat{\bbC}$ such that $\lambda_{A\cup B}(x)\ne\max\left\{\lambda_A(x),\lambda_B(x)\right\}$ for some $x$, although $\lambda_A(x)=\lambda_B(x)$ for all $x\in A\cap B$. In the first, $A\cap B$ is an uncountable set; in the second, $A\cap B$ is countably infinite. In both cases the conditions of Theorem \ref{gluing_lemma} are not satisfied.
\begin{exam}\label{cantor_combs}
Let $A=\{t+s{\bf i}: t\in\Kc, 0\le s\le1\}$, where $\Kc$ is the Cantor ternary set. Let
$B=\{t+(1+s){\bf i}: 0\le t\le 1, s\in\Kc\}$. Let $A_1=A\cup B$ and $B_1=(A+1+{\bf i})\cup(B+1-{\bf i})$.
See Figure \ref{not_glued} for a simplified depiction of $A, B, A_1, B_1$.
\begin{figure}[ht]
\vspace{-0.05cm}
\begin{tabular}{ccccc}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3*\i/27,0) -- (3*\i/27,3);
\draw[gray,very thick] (6/9+3*\i/27,0) -- (6/9+3*\i/27,3);
\draw[gray,very thick] (2+3*\i/27,0) -- (2+3*\i/27,3);
\draw[gray,very thick] (2+6/9+3*\i/27,0) -- (2+6/9+3*\i/27,3);
}
\end{tikzpicture} \hspace{0.25cm}
&
\hspace{0.25cm}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (0,3*\i/27+3) -- (3,3*\i/27+3);
\draw[gray,very thick] (0,6/9+3*\i/27+3) -- (3,6/9+3*\i/27+3);
\draw[gray,very thick] (0,2+3*\i/27+3) -- (3,2+3*\i/27+3);
\draw[gray,very thick] (0,2+6/9+3*\i/27+3) -- (3,2+6/9+3*\i/27+3);
}
\draw[gray,dashed] (0,0) -- (3,0)-- (3,3)-- (0,3)-- (0,0);
\end{tikzpicture} \hspace{0.25cm}
&
\hspace{0.25cm}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3*\i/27,0) -- (3*\i/27,3);
\draw[gray,very thick] (6/9+3*\i/27,0) -- (6/9+3*\i/27,3);
\draw[gray,very thick] (2+3*\i/27,0) -- (2+3*\i/27,3);
\draw[gray,very thick] (2+6/9+3*\i/27,0) -- (2+6/9+3*\i/27,3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (0,3*\i/27+3) -- (3,3*\i/27+3);
\draw[gray,very thick] (0,6/9+3*\i/27+3) -- (3,6/9+3*\i/27+3);
\draw[gray,very thick] (0,2+3*\i/27+3) -- (3,2+3*\i/27+3);
\draw[gray,very thick] (0,2+6/9+3*\i/27+3) -- (3,2+6/9+3*\i/27+3);
}
\end{tikzpicture} \hspace{0.25cm}
&
\hspace{0.25cm}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3+3*\i/27,3) -- (3+3*\i/27,6);
\draw[gray,very thick] (3+6/9+3*\i/27,3) -- (3+6/9+3*\i/27,6);
\draw[gray,very thick] (3+2+3*\i/27,3) -- (3+2+3*\i/27,6);
\draw[gray,very thick] (3+2+6/9+3*\i/27,3) -- (3+2+6/9+3*\i/27,6);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3,3*\i/27) -- (3+3,3*\i/27);
\draw[gray,very thick] (3,6/9+3*\i/27) -- (3+3,6/9+3*\i/27);
\draw[gray,very thick] (3,2+3*\i/27) -- (3+3,2+3*\i/27);
\draw[gray,very thick] (3,2+6/9+3*\i/27) -- (3+3,2+6/9+3*\i/27);
}
\end{tikzpicture} \hspace{0.25cm}
&
\hspace{0.25cm}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.5]
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3*\i/27,0) -- (3*\i/27,3);
\draw[gray,very thick] (6/9+3*\i/27,0) -- (6/9+3*\i/27,3);
\draw[gray,very thick] (2+3*\i/27,0) -- (2+3*\i/27,3);
\draw[gray,very thick] (2+6/9+3*\i/27,0) -- (2+6/9+3*\i/27,3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (0,3*\i/27+3) -- (3,3*\i/27+3);
\draw[gray,very thick] (0,6/9+3*\i/27+3) -- (3,6/9+3*\i/27+3);
\draw[gray,very thick] (0,2+3*\i/27+3) -- (3,2+3*\i/27+3);
\draw[gray,very thick] (0,2+6/9+3*\i/27+3) -- (3,2+6/9+3*\i/27+3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3+3*\i/27,3) -- (3+3*\i/27,6);
\draw[gray,very thick] (3+6/9+3*\i/27,3) -- (3+6/9+3*\i/27,6);
\draw[gray,very thick] (3+2+3*\i/27,3) -- (3+2+3*\i/27,6);
\draw[gray,very thick] (3+2+6/9+3*\i/27,3) -- (3+2+6/9+3*\i/27,6);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3,3*\i/27) -- (3+3,3*\i/27);
\draw[gray,very thick] (3,6/9+3*\i/27) -- (3+3,6/9+3*\i/27);
\draw[gray,very thick] (3,2+3*\i/27) -- (3+3,2+3*\i/27);
\draw[gray,very thick] (3,2+6/9+3*\i/27) -- (3+3,2+6/9+3*\i/27);
}
\end{tikzpicture}
\\ $A$& $B$& $A_1=A\cup B$& $B_1$& $A_1\cup B_1$\end{tabular}
\caption{The two compacta $A, B$ and their union.}\label{not_glued}
\end{figure}
Then
$\lambda_A(x)=1$ for all $x\in A$ and vanishes otherwise; similarly, $\lambda_B(x)=1$ for all $x\in B$ and vanishes otherwise.
However, both $A\cap B$ and $A_1\cap B_1$ are uncountable, thus the conditions in Theorem \ref{gluing_lemma} are not satisfied. Moreover, we have
\[\lambda_{A_1}(x)=\lambda_{A\cup B}(x)=\left\{\begin{array}{ll}2&x\in A\\ 1& x\in B\setminus A\\ 0& {otherwise}\end{array}\right.\quad {and}\quad
\lambda_{A_1\cup B_1}(x)=\left\{\begin{array}{ll}\infty & x\in (A_1\cup B_1)\\ 0& {otherwise}.\end{array}\right.\]
\end{exam}
\begin{exam}\label{brooms}
Set $A=\bigcup\limits_{n\ge0}A_n$. Here $A_0=\{s{\bf i}: 0\le s\le1\}$ and $A_1$ is the continuum that consists of the line $\displaystyle\left\{1+t{\bf i}: 0\le t\le1\right\}$ and all those lines connecting $1+{\bf i}$ to $\displaystyle\frac{k}{k+1}$ for $k\ge1$; moreover, for $n\ge2$, $A_n=\displaystyle \left\{2^{-n+1}t+s{\bf i}: t+s{\bf i}\in A_1\right\}$. See Figure \ref{broom_comb}.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\begin{tikzpicture}[x=1.618cm,y=1cm,scale=1]
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (0,3+3*\i/27) -- (3,3+3*\i/27);
\draw[gray,very thick] (0,3+6/9+3*\i/27) -- (3,3+6/9+3*\i/27);
\draw[gray,very thick] (0,3+2+3*\i/27) -- (3,3+2+3*\i/27);
\draw[gray,very thick] (0,3+2+6/9+3*\i/27) -- (3,3+2+6/9+3*\i/27);
}
\draw[gray,very thick] (0,0) --(0,3);
\draw[gray,very thick] (3,0) --(3,3);
\draw[gray,very thick] (3/2,0) --(3/2,3);
\draw[gray,very thick] (3/4,0) --(3/4,3);
\draw[gray,very thick] (3/8,0) --(3/8,3);
\draw[gray,very thick] (3/16,0) --(3/16,3);
\draw[gray,very thick] (3/32,0) --(3/32,3);
\draw[gray,very thick] (3/64,0) --(3/64,3);
\draw[gray,very thick] (3/128,0) --(3/128,3);
\draw[gray,very thick] (3/256,0) --(3/256,3);
\draw[gray,very thick] (3/512,0) --(3/512,3);
\foreach \i in {2,...,7}
{
\draw[gray,very thick] (3,3) -- (3-3/\i,0);
}
\node at (2.8,0.15) {$\ldots$};
\foreach \i in {2,...,7}
{
\draw[gray,very thick] (3/2,3) -- (3/2-1.5/\i,0);
}
\node at (9/16,1.75) {$\vdots$};
\node at (9/16,1.25) {$\vdots$};
\node at (-0.1,0.2){$0$}; \node at (3.1,0.2){$1$};
\node at (0,0){$\cdot$}; \node at (3,0){$\cdot$};
\node at (3,3){$\cdot$}; \node at (3,6){$\cdot$};
\node at (3.35,3.0){$1\!+\!{\bf i}$}; \node at (3.35,6.0){$1\!+\!2{\bf i}$};
\end{tikzpicture}
&
\begin{tikzpicture}[x=1.618cm,y=1cm,scale=1]
\foreach \i in {0,...,3}
{
\draw[gray,dashed] (0,3+3*\i/27) -- (3,3+3*\i/27);
\draw[gray,dashed] (0,3+6/9+3*\i/27) -- (3,3+6/9+3*\i/27);
\draw[gray,dashed] (0,3+2+3*\i/27) -- (3,3+2+3*\i/27);
\draw[gray,dashed] (0,3+2+6/9+3*\i/27) -- (3,3+2+6/9+3*\i/27);
}
\draw[gray,very thick] (0,3) --(3,3);
\draw[gray,very thick] (0,0) --(0,3);
\draw[gray,very thick] (3,0) --(3,3);
\draw[gray,very thick] (3/2,0) --(3/2,3);
\draw[gray,very thick] (3/4,0) --(3/4,3);
\draw[gray,very thick] (3/8,0) --(3/8,3);
\draw[gray,very thick] (3/16,0) --(3/16,3);
\draw[gray,very thick] (3/32,0) --(3/32,3);
\draw[gray,very thick] (3/64,0) --(3/64,3);
\draw[gray,very thick] (3/128,0) --(3/128,3);
\draw[gray,very thick] (3/256,0) --(3/256,3);
\draw[gray,very thick] (3/512,0) --(3/512,3);
\node at (-0.1,0.2){$0$}; \node at (3.1,0.2){$1$}; \node at (3.35,3.0){$1\!+\!{\bf i}$};
\end{tikzpicture}\\ $A\cup B$ & $d$
\end{tabular}
\end{center}
\vskip -0.75cm
\caption{The compactum $A\cup B$ and the atom $d$ of $A\cup B$.}\label{broom_comb}
\end{figure}
Setting $B$ as in Example \ref{cantor_combs}, we have $A\cap B=\{ {\bf i}\}\cup\left\{2^{-n}+{\bf i}: n\ge0\right\}$.
If $x\in A\cap B$ then $\lambda_A(x)=\lambda_B(x)=1$. Let
$\displaystyle L_1=\left\{t+s{\bf i}: 0\le s\le 1,\ t=0\ \text{or}\ t=2^{-n}\ \text{for some}\ n\ge0\right\}.$ Then $d=L_1\cup\{t+{\bf i}: 0\le t\le1\}$ is an atom of $A\cup B$ and is not locally connected at any $x\in A_0$. Moreover, we have
\[\lambda_{A}(x)=\left\{\begin{array}{ll}1&x\in L_1 \\ 0& {otherwise}\end{array}\right.\quad {and}\quad
\lambda_{A\cup B}(x)=\left\{\begin{array}{ll}2& x\in A_0\\ 1&x\in (B\cup d)\setminus A_0\\ 0& {otherwise}.\end{array}\right.\]
\end{exam}
The next two examples are about $E$-continua $K\subset\hat{\mathbb{C}}$ such that the lambda equality given in (i) or (ii) of Theorem \ref{equality-case} does not hold.
\begin{exam}\label{E-compactum}
Let $X$ denote the square $[1,2]\times[0,{\mathbf i}]\subset\hat{\mathbb{C}}$. Let $Y$ be an embedding of $[0,\infty)$ whose closure $\overline{Y}$ equals the union of $Y$ with $\partial X$. See the left part of Figure \ref{negative} for a simplified representation of $\overline{Y}$, which is depicted in \tb{blue}.
\begin{figure}[ht]
\vskip -0.25cm
\begin{tabular}{ll}
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.55]
\draw[gray,thick] (64,0) -- (64,32) -- (0,32) -- (0,0) -- (64,0);
\foreach \j in {0,1}
{ \draw[gray, thick] (32,\j*16) -- (32,16+\j*16) -- (16,16+\j*16) -- (16,\j*16) --(32,\j*16);
}
\foreach \j in {0,...,3}
{
\draw[gray, thick] (16,\j*8) -- (16,8+\j*8) -- (8,8+\j*8) -- (8,\j*8) --(16,\j*8);
}
\foreach \j in {0,...,7}
{
\draw[gray, thick] (8,\j*4) -- (8,4+\j*4) -- (4,4+\j*4) -- (4,\j*4) --(8,\j*4);
}
\foreach \j in {0,...,15}
{
\draw[gray, thick] (4,\j*2) -- (4,2+\j*2) -- (2,2+\j*2) -- (2,\j*2) --(4,\j*2);
}
\foreach \j in {0,...,31}
{
\draw[gray, thick] (2,\j*1) -- (2,1+\j*1) -- (1,1+\j*1) -- (1,\j*1) --(2,\j*1);
}
\foreach \j in {0,...,63}
{
\draw[gray, thick] (1,\j*1/2) -- (1,1/2+\j*1/2) -- (1/2,1/2+\j*1/2) -- (1/2,\j*1/2) --(1,\j*1/2);
}
\foreach \j in {0,...,127}
{
\draw[gray, thick] (1/2,\j*1/4) -- (1/2,1/4+\j*1/4) -- (1/4,1/4+\j*1/4) -- (1/4,\j*1/4) --(1/2,\j*1/4);
}
\foreach \j in {0,...,255}
{
\draw[gray, thick] (1/4,\j*1/8) -- (1/4,1/8+\j*1/8) -- (1/8,1/8+\j*1/8) -- (1/8,\j*1/8) --(1/4,\j*1/8);
}
\foreach \k in {0,1}
{
\draw[gray, dashed, thick] (16+1,16*\k+1) -- (32-1,16*\k+1) -- (32-1, 16*\k+16-1) -- (16+1/2,16*\k+16-1);
}
\foreach \k in {0,1,2,3}
{
\draw[gray, dashed, thick] (8+1/2,8*\k+1/2) -- (16-1/2,8*\k+1/2) -- (16-1/2, 8*\k+8-1/2) -- (8+1/2,8*\k+8-1/2);
}
\foreach \i in {0,1}
{\foreach \j in {2,...,6}
{
\draw[gray, thick] (16+\j,\j+16*\i) -- (32-\j,\j+16*\i) -- (32-\j,16-\j+16*\i) -- (16+\j-1,16-\j+16*\i)--(16+\j-1,\j-1+16*\i);
}
}
\foreach \i in {0,...,3}
{\foreach \j in {2,...,6}
{
\draw[gray] (8+\j/2,\j/2+8*\i) -- (16-\j/2,\j/2+8*\i) -- (16-\j/2,8-\j/2+8*\i) --
(8+\j/2-1/2,8-\j/2+8*\i)--(8+\j/2-1/2,\j/2-1/2+8*\i);
}
}
\foreach \j in {0,...,7}
{
\fill(5.5,2+\j*4)circle(1pt);
\fill(6,2+\j*4)circle(1pt);
\fill(6.5,2+\j*4)circle(1pt);
}
\node at (0.25,-1.5){$0$};
\node at (32,-1.5){$1$};
\node at (64,-1.5){$2$};
\node at (0.25,33.5){${\mathbf i}$};
\draw[blue,thick] (64,0) -- (64,32) -- (32,32) -- (32,0) -- (64,0);
\foreach \j in {2,...,6}
{ \draw[blue, thick] (32+2*\j,2*\j) -- (64-2*\j,2*\j) -- (64-2*\j,32-2*\j) -- (32+2*\j-2,32-2*\j) --(32+2*\j-2,2*\j-2);
}
\draw[blue, dashed, thick] (32+2,2) -- (64-2,2) -- (64-2,32-2) -- (32+1,32-2);
\end{tikzpicture}
&
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.55]
\draw[gray,thick] (64,0) -- (64,32) -- (0,32) -- (0,0) -- (64,0);
\foreach \j in {0,1}
{ \draw[gray, thick] (32,\j*16) -- (32,16+\j*16) -- (16,16+\j*16) -- (16,\j*16) --(32,\j*16);
}
\foreach \j in {0,...,3}
{
\draw[gray, thick] (16,\j*8) -- (16,8+\j*8) -- (8,8+\j*8) -- (8,\j*8) --(16,\j*8);
}
\foreach \j in {0,...,7}
{
\draw[gray, thick] (8,\j*4) -- (8,4+\j*4) -- (4,4+\j*4) -- (4,\j*4) --(8,\j*4);
}
\foreach \j in {0,...,15}
{
\draw[gray, thick] (4,\j*2) -- (4,2+\j*2) -- (2,2+\j*2) -- (2,\j*2) --(4,\j*2);
}
\foreach \j in {0,...,31}
{
\draw[gray, thick] (2,\j*1) -- (2,1+\j*1) -- (1,1+\j*1) -- (1,\j*1) --(2,\j*1);
}
\foreach \j in {0,...,63}
{
\draw[gray, thick] (1,\j*1/2) -- (1,1/2+\j*1/2) -- (1/2,1/2+\j*1/2) -- (1/2,\j*1/2) --(1,\j*1/2);
}
\foreach \j in {0,...,127}
{
\draw[gray, thick] (1/2,\j*1/4) -- (1/2,1/4+\j*1/4) -- (1/4,1/4+\j*1/4) -- (1/4,\j*1/4) --(1/2,\j*1/4);
}
\foreach \j in {0,...,255}
{
\draw[gray, thick] (1/4,\j*1/8) -- (1/4,1/8+\j*1/8) -- (1/8,1/8+\j*1/8) -- (1/8,\j*1/8) --(1/4,\j*1/8);
}
\node at (0.25,-1.5){$0$};
\node at (32,-1.5){$1$};
\node at (64,-1.5){$2$};
\node at (0.25,33.5){${\mathbf i}$};
\end{tikzpicture}
\end{tabular}
\vskip -0.25cm
\caption{(left): the $E$-continuum $K$; (right): the only non-degenerate atom.}\label{negative}
\vskip -0.25cm
\end{figure}
Let $f_1(z)=\frac{z}{2}$ and $f_2(z)=\frac{z+{\mathbf i}}{2}$. Let $K_0=\overline{Y}$. For all $n\ge1$, let $K_n=f_1\left(K_{n-1}\right)\cup f_2\left(K_{n-1}\right)$. Then $K_0,K_1,\ldots$ is an infinite sequence of continua converging to the segment $[0,{\mathbf i}]$ in the Hausdorff distance. Clearly,
\[K=\left(\bigcup\limits_{n\ge0}K_n\right)\cup\{s{\mathbf i}: 0\le s\le1\}\]
is an $E$-continuum. See the left part of Figure \ref{negative}. Let $L_0=\partial X$. For all $n\ge1$, let $L_n=f_1\left(L_{n-1}\right)\cup f_2\left(L_{n-1}\right)$. Then $L_0,L_1,\ldots$ is an infinite sequence of continua converging to the segment $[0,{\mathbf i}]$ in the Hausdorff distance. Similarly, we see that
\[L=\left(\bigcup\limits_{n\ge0}L_n\right)\cup\{s{\mathbf i}: 0\le s\le1\}\]
is also an $E$-continuum. See right part of Figure \ref{negative}. Moreover, the continuum $K$ has exactly one atom of order $1$ that is not a singleton. This atom equals $L$. Thus we have
\[
\lambda_K(x)=\left\{\begin{array}{ll}1&x\in L\\ 0& {otherwise}\end{array}\right.\ \text{and}\
\tilde{\lambda}_K(x)=\left\{\begin{array}{ll}
1& x\in L\setminus[0,{\mathbf i}]\\ 0& {otherwise}.\end{array}\right.
\]
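For concreteness, the following computation (our own, assuming $Y\subset X$ as in the figure) records the first-generation images of the square $X=[1,2]\times[0,{\mathbf i}]$ under $f_1,f_2$ and the resulting geometric shrinking of the copies $K_n$ toward $[0,{\mathbf i}]$:
```latex
% f_1(z)=z/2 and f_2(z)=(z+i)/2 send X = [1,2] x [0,i] to two rectangles
% tiling [1/2,1] x [0,i]:
\[
f_1(X)=[\tfrac12,1]\times[0,\tfrac{{\mathbf i}}{2}],\qquad
f_2(X)=[\tfrac12,1]\times[\tfrac{{\mathbf i}}{2},{\mathbf i}].
\]
% Since K_0 is contained in X, induction gives
\[
K_n\subset[2^{-n},\,2^{1-n}]\times[0,{\mathbf i}]\qquad(n\ge1),
\]
% so the K_n converge to the segment [0,i] in the Hausdorff distance.
```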
\end{exam}
\begin{exam}\label{finite-comp}
Let $\mathcal{C}$ denote Cantor's ternary set. Let $U_1\subset\hat{\mathbb{C}}$ be the domain, not containing $\infty$, whose boundary consists of $[0,1]\times{\bf i}\mathcal{C}=\{t+s{\bf i}: 0\le t\le 1, s\in\mathcal{C}\}$ and $\partial\left([0,\frac43]\times[0,{\bf i}]\right)$.
\begin{figure}[ht]
\vspace{-0.05cm}
\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]
\draw[gray,very thick] (0,0) -- (3,0)-- (3,4)-- (0,4)-- (0,0);
\draw[gray,very thick] (3,0) -- (-1,0)-- (-1,-3)-- (3,-3)-- (3,0);
\draw[gray,very thick] (3,0) -- (3,-4)-- (6,-4)-- (6,0)-- (3,0);
\draw[gray,very thick] (3,0) -- (7,0)-- (7,3)-- (3,3)-- (3,0);
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3*\i/27,0) -- (3*\i/27,3);
\draw[gray,very thick] (6/9+3*\i/27,0) -- (6/9+3*\i/27,3);
\draw[gray,very thick] (2+3*\i/27,0) -- (2+3*\i/27,3);
\draw[gray,very thick] (2+6/9+3*\i/27,0) -- (2+6/9+3*\i/27,3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (0,3*\i/27-3) -- (3,3*\i/27-3);
\draw[gray,very thick] (0,6/9+3*\i/27-3) -- (3,6/9+3*\i/27-3);
\draw[gray,very thick] (0,2+3*\i/27-3) -- (3,2+3*\i/27-3);
\draw[gray,very thick] (0,2+6/9+3*\i/27-3) -- (3,2+6/9+3*\i/27-3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3+3*\i/27,0) -- (3+3*\i/27,-3);
\draw[gray,very thick] (3+6/9+3*\i/27,0) -- (3+6/9+3*\i/27,-3);
\draw[gray,very thick] (3+2+3*\i/27,0) -- (3+2+3*\i/27,-3);
\draw[gray,very thick] (3+2+6/9+3*\i/27,0) -- (3+2+6/9+3*\i/27,-3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3,3*\i/27) -- (3+3,3*\i/27);
\draw[gray,very thick] (3,6/9+3*\i/27) -- (3+3,6/9+3*\i/27);
\draw[gray,very thick] (3,2+3*\i/27) -- (3+3,2+3*\i/27);
\draw[gray,very thick] (3,2+6/9+3*\i/27) -- (3+3,2+6/9+3*\i/27);
}
\draw(1.5,2.75) node[above]{$U_2$};
\draw(0.25,-1.5) node[left]{$U_3$};
\draw(4.5,-2.75) node[below]{$U_4$};
\draw(5.75,1.5) node[right]{$U_1$};
\draw(4.5,3.5) node[right]{$U_5$};
\end{tikzpicture}
&&\hskip 1.0cm
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]
\draw[gray,dashed] (0,0) -- (3,0)-- (3,4)-- (0,4)-- (0,0);
\draw[gray,dashed] (3,0) -- (-1,0)-- (-1,-3)-- (3,-3)-- (3,0);
\draw[gray,dashed] (3,0) -- (3,-4)-- (6,-4)-- (6,0)-- (3,0);
\draw[gray,dashed] (3,0) -- (7,0)-- (7,3)-- (3,3)-- (3,0);
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3*\i/27,0) -- (3*\i/27,3);
\draw[gray,very thick] (6/9+3*\i/27,0) -- (6/9+3*\i/27,3);
\draw[gray,very thick] (2+3*\i/27,0) -- (2+3*\i/27,3);
\draw[gray,very thick] (2+6/9+3*\i/27,0) -- (2+6/9+3*\i/27,3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (0,3*\i/27-3) -- (3,3*\i/27-3);
\draw[gray,very thick] (0,6/9+3*\i/27-3) -- (3,6/9+3*\i/27-3);
\draw[gray,very thick] (0,2+3*\i/27-3) -- (3,2+3*\i/27-3);
\draw[gray,very thick] (0,2+6/9+3*\i/27-3) -- (3,2+6/9+3*\i/27-3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3+3*\i/27,0) -- (3+3*\i/27,-3);
\draw[gray,very thick] (3+6/9+3*\i/27,0) -- (3+6/9+3*\i/27,-3);
\draw[gray,very thick] (3+2+3*\i/27,0) -- (3+2+3*\i/27,-3);
\draw[gray,very thick] (3+2+6/9+3*\i/27,0) -- (3+2+6/9+3*\i/27,-3);
}
\foreach \i in {0,...,3}
{
\draw[gray,very thick] (3,3*\i/27) -- (3+3,3*\i/27);
\draw[gray,very thick] (3,6/9+3*\i/27) -- (3+3,6/9+3*\i/27);
\draw[gray,very thick] (3,2+3*\i/27) -- (3+3,2+3*\i/27);
\draw[gray,very thick] (3,2+6/9+3*\i/27) -- (3+3,2+6/9+3*\i/27);
}
\end{tikzpicture}
\end{tabular}
\end{center}
\vskip -0.5cm
\caption{The continuum $K$ and the only non-degenerate atom $d\in\Dc_K$.}\label{finite-comp-pic}
\end{figure}
For $2\le j\le 4$ let $U_j=f^{j-1}(U_1)$, where $f(z)={\bf i}z$. See the left part of Figure \ref{finite-comp-pic}. Then $K=\bigcup_i\partial U_i$ is a continuum, whose complementary components are $U_1,\ldots, U_4, U_5$. Here $U_5$ is the one containing $\infty$. Moreover, the only non-degenerate atom of $K$ is
$\displaystyle d=\bigcup_{j=0}^3 f^j([0,1]\times{\bf i}\mathcal{C})$.
Since the continuum $d$ has a single atom, which is itself, we have
$\lambda_K(x)=\left\{\begin{array}{ll}\infty& x\in d\\ 0&{otherwise}.\end{array}\right.$
On the other hand, by the construction of $U_1,\ldots, U_4$ and $U_5$, we also have
$\tilde{\lambda}_K(x)=\left\{\begin{array}{ll}1& x\in d\\ 0&{otherwise}.\end{array}\right.$
Consequently, we have $\lambda_K(x)-\tilde{\lambda}_K(x)=\left\{\begin{array}{ll} \infty& x\in d\\ 0& {otherwise}.\end{array}\right.$
\end{exam}
We next construct planar continua $K$ in order to describe possible relations between $\lambda_K$ and $\lambda_{\partial K}$. The first is a Peano continuum $K$ whose boundary is a continuum that is not locally connected. There we have $\lambda_{\partial K}(x)\ge\lambda_K(x)$ for all $x\in\hat{\mathbb{C}}$ and $\lambda_{\partial K}(x)>\lambda_K(x)$ for uncountably many $x$.
\begin{exam}\label{bd_larger}
Consider a spiral made of broken lines, lying in the open square $W=\{t+s{\mathbf i}: 0< t,s<1\}\subset\hat{\mathbb{C}}$, which converges to $\partial W$. See the left part of Figure \ref{spiral}.
\begin{figure}[ht]
\vspace{-0.05cm}
\center{\begin{tabular}{ccc}
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]
\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);
\foreach \j in {1,2}
\foreach \k in {1,2}
{
\draw[blue, ultra thick] (-4,-4*\j+16) -- (4*\j,-4*\j+16) -- (4*\j,4*\j+16) --(-4*\j-4,4*\j+16)-- (-4*\j-4,-4*\j+12) --(0,-4*\j+12);
}
\draw[blue,ultra thick, dashed] (0,4) -- (13.0,4);
\draw[blue,ultra thick, dashed] (12.75,4.0) -- (12.75,27.5);
\end{tikzpicture}
\hspace{0.25cm}
&
\hspace{0.25cm}
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]
\fill[gray!80] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);
\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);
\foreach \j in {1,2}
\foreach \k in {1,2}
{
\draw[gray!16, ultra thick] (-4,-4*\j+16) -- (4*\j,-4*\j+16) -- (4*\j,4*\j+16) --(-4*\j-4,4*\j+16)-- (-4*\j-4,-4*\j+12) --(0,-4*\j+12);
\draw[gray!16, ultra thick] (-4,-4*\j+16-\k*0.45) -- (4*\j+\k*0.45,-4*\j+16-\k*0.45) -- (4*\j+\k*0.45,4*\j+16+\k*0.45) --(-4*\j-4-\k*0.45,4*\j+16+\k*0.45)-- (-4*\j-4-\k*0.45,-4*\j+12-\k*0.45) --(0,-4*\j+12-\k*0.45);
}
\draw[gray!16,ultra thick, dashed] (0,4) -- (13.2,4); \draw[gray!16,ultra thick, dashed] (0,3.55) -- (13.2,3.55); \draw[gray!16,ultra thick, dashed] (0,3.1) -- (13.2,3.1);
\end{tikzpicture}
\hspace{0.25cm}
&
\hspace{0.25cm}
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]
\fill[gray!80] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);
\draw[blue, thick] (-16,0) -- (16,0) -- (16,32) -- (-16,32) --(-16,0);
\foreach \j in {1,2}
\foreach \k in {1,2}
{
\draw[gray!16, ultra thick] (-4,-4*\j+16) -- (4*\j,-4*\j+16) -- (4*\j,4*\j+16) --(-4*\j-4,4*\j+16)-- (-4*\j-4,-4*\j+12) --(0,-4*\j+12);
\draw[gray!16, ultra thick] (-4,-4*\j+16-\k*0.45) -- (4*\j+\k*0.45,-4*\j+16-\k*0.45) -- (4*\j+\k*0.45,4*\j+16+\k*0.45) --(-4*\j-4-\k*0.45,4*\j+16+\k*0.45)-- (-4*\j-4-\k*0.45,-4*\j+12-\k*0.45) --(0,-4*\j+12-\k*0.45);
}
\draw[gray!16,ultra thick, dashed] (0,3.8) -- (13.2,3.8);
\draw[gray!16,ultra thick, dashed] (0,3.55) -- (13.2,3.55);
\draw[gray!16,ultra thick, dashed] (0,3.1) -- (13.2,3.1);
\foreach \j in {1,2}
\foreach \k in {2}
{
\draw[blue] (-4,-4*\j+16+0.2) -- (4*\j-0.2,-4*\j+16+0.2) -- (4*\j-0.2,4*\j+16-0.2) --(-4*\j-4+0.2,4*\j+16-0.2)-- (-4*\j-4+0.2,-4*\j+12+0.2) --(0,-4*\j+12+0.2);
\draw[blue] (-4,-4*\j+16-\k*0.45-0.3) -- (4*\j+\k*0.45+0.3,-4*\j+16-\k*0.45-0.3) -- (4*\j+\k*0.45+0.3,4*\j+16+\k*0.45+0.3) --(-4*\j-4-\k*0.45-0.3,4*\j+16+\k*0.45+0.3)-- (-4*\j-4-\k*0.45-0.3,-4*\j+12-\k*0.45-0.3) --(0,-4*\j+12-\k*0.45-0.3);
}
\foreach \i in {0,1,2}
{
\draw[blue] (-4+3.9*\i,12.25) -- (-4+3.9*\i,12-1.25);
\draw[blue] (3.8,13.8+3*\i) -- (5.2,13.8+3*\i);
\draw[blue] (-7.8,13.8+3*\i) -- (-9.2,13.8+3*\i);
\draw[blue] (-7.8,12-1.9*\i) -- (-9.2,12-1.9*\i);
}
\foreach \i in {0,...,3}
{
\draw[blue] (-5.6+3.6*\i,19.8) -- (-5.6+3.6*\i,21.2);
\draw[blue] (-7.8+1.2*\i,8.2) -- (-7.8+1.2*\i,6.8);
\draw[blue] (-3+1.2*\i,8.2) -- (-3+1.2*\i,6.8);
\draw[blue] (1.8+1.0*\i,8.2) -- (1.8+1.0*\i,6.8);
\draw[blue] (4.8+1.0*\i,8.2) -- (4.8+1.0*\i,6.8);
\draw[blue] (7.8,7.8+1.0*\i) -- (9.2,7.8+1.0*\i);
\draw[blue] (7.8,11.8+1.0*\i) -- (9.2,11.8+1.0*\i);
\draw[blue] (7.8,15.8+0.8*\i) -- (9.2,15.8+0.8*\i);
\draw[blue] (7.8,19.2+0.8*\i) -- (9.2,19.2+0.8*\i);
\draw[blue] (7.8,22.4+0.7*\i) -- (9.2,22.4+0.7*\i);
\draw[blue] (7.8-0.8*\i,23.8) -- (7.8-0.8*\i,25.2);
\draw[blue] (4.6-0.8*\i,23.8) -- (4.6-0.8*\i,25.2);
\draw[blue] (1.4-0.8*\i,23.8) -- (1.4-0.8*\i,25.2);
\draw[blue] (-1.8-0.8*\i,23.8) -- (-1.8-0.8*\i,25.2);
\draw[blue] (-5-0.8*\i,23.8) -- (-5-0.8*\i,25.2);
\fill[blue!62](-9-\i,24.5) circle(1.8pt);
}
\end{tikzpicture}
\end{tabular}
}
\caption{A Peano continuum $K$ whose boundary is not locally connected.}\label{spiral}
\end{figure}
We may thicken the spiral to an embedding $h: [0,\infty)\times[0,1]\rightarrow W$ of the unbounded strip $U=[0,\infty)\times[0,1]$. Such an embedding may be chosen appropriately, so that $h(\partial U)$ consists of countably many segments. Then we obtain a continuum
$K_0=\overline{W}\setminus h(U)$. See the middle part of Figure \ref{spiral}. Clearly, the continuum $K_0$ is not locally connected at any point of $\partial W$, and it is locally connected at all other points. Now, divide the thickened spiral $h(U)$ into smaller and smaller quadrilaterals, which are depicted in the right part of Figure \ref{spiral} as small rectangles. Let $K$ be the union of $K_0$ with the newly added bars used in the above division. Then $K$ is locally connected everywhere, hence a Peano continuum. However, its boundary $\partial K$ is not locally connected at any point of $\partial W$ and is locally connected elsewhere. Therefore, we have
\[\lambda_K(x)\equiv 0\quad{and}\quad
\lambda_{\partial K}(x)=\left\{\begin{array}{ll}1 & x\in\partial W\\
0& x\notin\partial W.\end{array}\right.\]
\end{exam}
\begin{exam}\label{bd_smaller}
Let the continuum $K$ be defined as in Example \ref{bd_larger}. Let $f_j(z)=\frac{z}{2}+\frac{j-1}{2}{\mathbf i}$ for $j=1,2$. For any compact set $X\subset\hat{\bbC}$, put $\Phi(X)=f_1(X)\cup f_2(X)$. We will use the continuum $K$ and the mapping $\Phi$ to construct a continuum $L$. See Figure \ref{spiral-double}.
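As a quick sanity check (our own computation, using only $f_1(z)=\frac{z}{2}$ and $f_2(z)=\frac{z+{\mathbf i}}{2}$ as defined above), iterating $\Phi$ on the rectangle $[1,2]\times[0,{\mathbf i}]$, which contains $K+1$, gives:
```latex
% Phi(X) = f_1(X) u f_2(X); applied to [1,2] x [0,i] this yields
\[
\Phi^{n}\bigl([1,2]\times[0,{\mathbf i}]\bigr)
=[2^{-n},\,2^{1-n}]\times[0,{\mathbf i}],\qquad n\ge1,
\]
% so the pieces Phi^{2n}(K+1) and Phi^{2n-1}(dK+1) accumulate exactly on
% the segment [0,i].
```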
\begin{figure}[ht]
\vskip -0.05cm
\center{
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.8]
\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);
\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);
\foreach \j in {0,1}
{ \draw[gray, thick] (32,16+\j*16) -- (16,16+\j*16) -- (16,\j*16) --(32,\j*16);
}
\foreach \j in {0,...,3}
{
\draw[gray, thick] (16,\j*8) -- (16,8+\j*8) -- (8,8+\j*8) -- (8,\j*8) --(16,\j*8);
}
\foreach \j in {0,...,7}
{
\draw[gray, thick] (8,\j*4) -- (8,4+\j*4) -- (4,4+\j*4) -- (4,\j*4) --(8,\j*4);
}
\foreach \j in {0,...,15}
{
\draw[gray, thick] (4,\j*2) -- (4,2+\j*2) -- (2,2+\j*2) -- (2,\j*2) --(4,\j*2);
}
\foreach \j in {0,...,31}
{
\draw[gray, thick] (2,\j*1) -- (2,1+\j*1) -- (1,1+\j*1) -- (1,\j*1) --(2,\j*1);
}
\foreach \j in {0,...,63}
{
\draw[gray, thick] (1,\j*1/2) -- (1,1/2+\j*1/2) -- (1/2,1/2+\j*1/2) -- (1/2,\j*1/2) --(1,\j*1/2);
}
\foreach \j in {0,...,127}
{
\draw[gray, thick] (1/2,\j*1/4) -- (1/2,1/4+\j*1/4) -- (1/4,1/4+\j*1/4) -- (1/4,\j*1/4) --(1/2,\j*1/4);
}
\foreach \j in {0,...,255}
{
\draw[gray, thick] (1/4,\j*1/8) -- (1/4,1/8+\j*1/8) -- (1/8,1/8+\j*1/8) -- (1/8,\j*1/8) --(1/4,\j*1/8);
}
\draw[gray, thick] (-4,-4) -- (64+4,-4) -- (64+4,32+4) --(-3,32+4)--(-3,-3);
\draw[gray, thick] (-3,-3) -- (64+3,-3) -- (64+3,32+2) --(-2,32+2) -- (-2,-2);
\draw[gray, thick] (-2,-2) -- (64+2,-2) -- (64+2,32+1) --(-1,32+1) -- (-1,-1);
\draw[gray, thick, dashed] (-1,-1)--(64+1,-1)--(64+1,30);
\node at (33,2) {$1$}; \fill(32,0) circle(2pt);
\node at (63,2) {$2$}; \fill(64,0) circle(2pt);
\node at (61,30) {$2\!+\!{\mathbf i}$}; \fill(64,32) circle(2pt);
\node at (48,16) {$\partial K+1$};
\node at (24,8) {$f_1(\partial K\!+\!1)$};
\node at (24,24) {$f_2(\partial K\!+\!1)$};
\draw[gray, very thin] (12,4) -- (-12.5,8);
\node at (-15,10) {$f_1\circ f_1(K\!+\!1)$};
\draw[gray, very thin] (12,12) -- (-12.5,16);
\node at (-15,18) {$f_1\circ f_2(K\!+\!1)$};
\draw[gray, very thin] (12,20) -- (-12.5,24);
\node at (-15,26) {$f_2\circ f_1(K\!+\!1)$};
\end{tikzpicture}
}\vskip -0.5cm
\caption{Relative locations of $\partial K+1$, $\Phi^{1}(\partial K+1)$ and $\Phi^{2}(K+1)$.}\label{spiral-double}
\end{figure}
The continuum $L$ consists of five parts:
\begin{enumerate}
\item the segment $[0,{\mathbf i}]=\{s{\bf i}: 0\le s\le1\}$;
\item a spiral converging to the boundary of $[0,2]\times[0,{\mathbf i}]$;
\item $\partial K+1=\{z+1: z\in \partial K\}$;
\item $\Phi^{2n}(K+1)$ for all integers $n\ge1$; and
\item $\Phi^{2n-1}(\partial K+1)$ for all integers $n\ge1$.
\end{enumerate}
On the one hand, we can directly check that $L$ has a unique non-degenerate atom $d$, which consists of the following four parts: (1) the segment $[0,{\mathbf i}]$; (2) the boundary of $[1,2]\times[0,{\mathbf i}]$, denoted as $A$; (3) $\Phi^{2n-1}(A)$ for all integers $n\ge1$; and (4) the boundary of $[2^{-2n},2^{-2n+1}]\times[0,{\mathbf i}]$ for all integers $n\ge1$.
On the other hand, the boundary $\partial L$ has a unique non-degenerate atom $d^*$, which is the union of $A$, the segment $[0,{\mathbf i}]$, and $\Phi^{n}(A)$ for all integers $n\ge1$. See Figure \ref{atoms} for a depiction of $d$ and $d^*$.
\begin{figure}[ht]
\vspace{-0.25cm}
\center{\begin{tabular}{cc}
\begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]
\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);
\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);
\foreach \j in {0,1}
{ \draw[gray, thick] (32,\j*16) -- (16,\j*16) -- (16,\j*16) --(32,\j*16);
}
\foreach \j in {0}
{
\draw[gray, thick] (16,+\j*8) -- (16,32+\j*8) -- (8,32+\j*8) -- (8,\j*8) --(16,\j*8);
}
\foreach \j in {0,...,7}
{
\draw[gray, thick] (8,\j*4) -- (8,\j*4) -- (4,\j*4) -- (4,\j*4) --(8,\j*4);
}
\foreach \j in {0}
{
\draw[gray, thick] (4,\j*2) -- (4,32+\j*2) -- (2,32+\j*2) -- (2,\j*2) --(4,\j*2);
}
\foreach \j in {0,...,31}
{
\draw[gray, thick] (2,\j*1) -- (2,\j*1) -- (1,\j*1) -- (1,\j*1) --(2,\j*1);
}
\foreach \j in {0}
{
\draw[gray, thick] (1,\j*1/2) -- (1,32+\j*1/2) -- (1/2,32+\j*1/2) -- (1/2,\j*1/2) --(1,\j*1/2);
}
\foreach \j in {0,...,127}
{
\draw[gray, thick] (1/2,\j*1/4) -- (1/2,1/4+\j*1/4) -- (1/4,1/4+\j*1/4) -- (1/4,\j*1/4) --(1/2,\j*1/4);
}
\foreach \j in {0,...,255}
{
\draw[gray, thick] (1/4,\j*1/8) -- (1/4,1/8+\j*1/8) -- (1/8,1/8+\j*1/8) -- (1/8,\j*1/8) --(1/4,\j*1/8);
}
\node at (34,2) {$1$}; \fill(32,0) circle(2pt);
\node at (62,2) {$2$}; \fill(64,0) circle(2pt);
\node at (60,30) {$2\!+\!{\mathbf i}$}; \fill(64,32) circle(2pt);
\end{tikzpicture}
& \begin{tikzpicture}[x=0.2cm,y=0.2cm,scale=0.6]
\draw[gray,thick] (32,0) -- (0,0) -- (0,32) -- (32,32);
\draw[gray,thick] (32,0) -- (64,0) -- (64,32) -- (32,32) -- (32,0);
\foreach \j in {0,1}
{ \draw[gray, thick] (32,16+\j*16) -- (16,16+\j*16) -- (16,\j*16) --(32,\j*16);
}
\foreach \j in {0,...,3}
{
\draw[gray, thick] (16,\j*8) -- (16,8+\j*8) -- (8,8+\j*8) -- (8,\j*8) --(16,\j*8);
}
\foreach \j in {0,...,7}
{
\draw[gray, thick] (8,\j*4) -- (8,4+\j*4) -- (4,4+\j*4) -- (4,\j*4) --(8,\j*4);
}
\foreach \j in {0,...,15}
{
\draw[gray, thick] (4,\j*2) -- (4,2+\j*2) -- (2,2+\j*2) -- (2,\j*2) --(4,\j*2);
}
\foreach \j in {0,...,31}
{
\draw[gray, thick] (2,\j*1) -- (2,1+\j*1) -- (1,1+\j*1) -- (1,\j*1) --(2,\j*1);
}
\foreach \j in {0,...,63}
{
\draw[gray, thick] (1,\j*1/2) -- (1,1/2+\j*1/2) -- (1/2,1/2+\j*1/2) -- (1/2,\j*1/2) --(1,\j*1/2);
}
\foreach \j in {0,...,127}
{
\draw[gray, thick] (1/2,\j*1/4) -- (1/2,1/4+\j*1/4) -- (1/4,1/4+\j*1/4) -- (1/4,\j*1/4) --(1/2,\j*1/4);
}
\foreach \j in {0,...,255}
{
\draw[gray, thick] (1/4,\j*1/8) -- (1/4,1/8+\j*1/8) -- (1/8,1/8+\j*1/8) -- (1/8,\j*1/8) --(1/4,\j*1/8);
}
\node at (34,2) {$1$}; \fill(32,0) circle(2pt);
\node at (62,2) {$2$}; \fill(64,0) circle(2pt);
\node at (60,30) {$2\!+\!{\mathbf i}$}; \fill(64,32) circle(2pt);
\end{tikzpicture}
\end{tabular}
}\vskip -0.0cm
\caption{A depiction of $d$ and $d^*$.}\label{atoms}
\end{figure}
The atom $d^*$ (of $\partial L$) is a Peano continuum and contains $d$. However, the atom $d$ (of $L$) is not locally connected at points $s{\mathbf i}$ with $0<s<1$ and is locally connected elsewhere. Therefore, we can compute their lambda functions as follows:
\[
\lambda_L(x)=\left\{\begin{array}{ll}2& x\in[0,{\mathbf i}]\\
1 & x\in d\setminus[0,{\mathbf i}]\\
0& \text{otherwise}\end{array}\right.\quad\text{and}\quad \lambda_{\partial L}(x)=\left\{\begin{array}{ll}1& x\in[0,{\mathbf i}]\\
1 & x\in d^*\setminus[0,{\mathbf i}]\\
0& \text{otherwise}.\end{array}\right.
\]
From these we further infer that
\[
\lambda_L(x)-\lambda_{\partial L}(x)=\left\{\begin{array}{ll} 1& x\in[0,{\mathbf i}]\\ -1& x\in d^*\setminus d\\ 0& \text{otherwise}.\end{array}\right.
\]
\end{exam}
Note that it is still unknown {\bf whether there is a compactum $K\subset\hat{\mathbb C}$ such that $\lambda_K(x)\ge\lambda_{\partial K}(x)$ for all $x\in\hat{\mathbb{C}}$ and $\lambda_K(x)>\lambda_{\partial K}(x)$ for at least one point $x\in\partial K$}.
To conclude this section, we now consider unions and intersections of specific Peano compacta in the plane. We will find concrete Peano continua in the plane, say $X$ and $Y$, such that $X\cap Y$ is a continuum that is not locally connected. Notice that $X\cup Y$ is always a Peano continuum.
\begin{exam}\label{cap-peano}
Let $M$ be the union of $[0,1]\times\{0\}$ with the vertical segments $\{0\}\times[0,1]$ and $\{2^{-k}\}\times[0,1]$ for integers $k\ge0$. Then $M$ is a continuum and is not locally connected at points on $\{0\}\times(0,1]$; moreover, we have
\[\lambda_M(x)=\left\{\begin{array}{ll}1& x\in \{t{\bf i}: 0\le t\le 1\}\\ 0 & \text{otherwise}.\end{array}\right.
\]
We will construct two Peano continua $X$ and $Y$ satisfying $X\cap Y=M$. To this end, for all integers $k\ge1$ we put
\[A_k=\bigcup_{j=1}^{2^k-1}\left[0,2^{-k+1}\right]\times\left\{j2^{-k}\right\}.\]
Then $X=M\cup\left(\bigcup_kA_k\right)$ is a Peano continuum.
\begin{figure}[ht]
\vskip -0.05cm
\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.8]
\foreach \i in {1,...,3}
{
\draw[red] (0,1.296*\i) -- (2.592,1.296*\i);
}
\foreach \i in {1,...,7}
{
\draw[red] (0,0.648*\i) -- (1.296,0.648*\i);
}
\foreach \i in {1,...,15}
{
\draw[red] (0,0.324*\i) -- (0.648,0.324*\i);
}
\foreach \i in {1,...,31}
{
\draw[red] (0,0.162*\i) -- (0.324,0.162*\i);
}
\foreach \i in {1,...,63}
{
\draw[red] (0,0.081*\i) -- (0.162,0.081*\i);
}
\draw[blue,thick] (0,0) -- (0,5.184);
\draw[blue,thick] (0,0) -- (5.184,0);
\draw[red] (2.592,2.592) -- (5.184,2.592);
\foreach \i in {1,2,4,8,16,32}
{
\draw[blue,thick] (5.184/\i,0) -- (5.184/\i,5.184);
}
\fill[black] (5.184,0) circle (0.35ex); \draw[purple] (5.184,0) node[right]{$1$};
\fill[black] (5.184,5.184) circle (0.35ex); \draw[purple] (5.184,5.184) node[right]{$1+{\bf i}$};
\fill[black] (0,0) circle (0.35ex); \draw[purple] (0,0) node[left]{$0$};
\end{tikzpicture}\hspace{0.25cm}
&
&\hspace{0.25cm}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.8]
\foreach \i in {1,...,2}
{
\draw[red] (0,1.728*\i) -- (5.184,1.728*\i);
}
\foreach \i in {1,...,8}
{
\draw[red] (0,0.576*\i) -- (2.592,0.576*\i);
}
\foreach \i in {1,...,26}
{
\draw[red] (0,0.192*\i) -- (1.296,0.192*\i);
}
\foreach \i in {1,...,80}
{
\draw[red] (0,0.064*\i) -- (0.648,0.064*\i);
}
\draw[blue,thick] (0,0) -- (0,5.184);
\draw[blue,thick] (0,0) -- (5.184,0);
\foreach \i in {1,2,4,8}
{
\draw[blue,thick] (5.184/\i,0) -- (5.184/\i,5.184);
}
\fill[black] (5.184,0) circle (0.35ex); \draw[purple] (5.184,0) node[right]{$1$};
\fill[black] (5.184,5.184) circle (0.35ex); \draw[purple] (5.184,5.184) node[right]{$1+{\bf i}$};
\fill[black] (0,0) circle (0.35ex); \draw[purple] (0,0) node[left]{$0$};
\end{tikzpicture}
\end{tabular}
\end{center}
\vskip -0.5cm
\caption{\small Two Peano continua that intersect at a non-locally connected continuum.}\label{peano-cap}
\end{figure}
See the left part of Figure \ref{peano-cap} for a rough approximation of $X$. Similarly, if for every $k\ge1$ we set
\[B_k=\bigcup_{j=1}^{3^k-1}\left[0,2^{-k+1}\right]\times\left\{j3^{-k}\right\},\]
then $\displaystyle Y=M\cup\left(\bigcup\limits_kB_k\right)$ is also a Peano continuum. See the right part of Figure \ref{peano-cap} for a rough approximation of $Y$. Moreover, we have $X\cap Y=M$.
\end{exam}
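As a sanity check on the equality $X\cap Y=M$, one can verify with exact rational arithmetic that the dyadic bar heights $j2^{-k}$ of the sets $A_k$ never coincide in $(0,1)$ with the triadic heights $j3^{-k}$ of the sets $B_k$, so the added horizontal bars of $X$ and $Y$ are pairwise disjoint and the two continua meet only inside $M$. A minimal sketch (ours, checking finitely many levels):

```python
from fractions import Fraction

# Heights of the bars of A_k (dyadic) and B_k (triadic) in (0,1); a height
# j*2^{-k} can equal j'*3^{-k'} only if both reduce to an integer, which is
# impossible inside (0,1), so the two families of heights are disjoint.
dyadic = {Fraction(j, 2**k) for k in range(1, 12) for j in range(1, 2**k)}
triadic = {Fraction(j, 3**k) for k in range(1, 8) for j in range(1, 3**k)}
assert dyadic.isdisjoint(triadic)
```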
We also find Peano compacta $X$ and $Y$ such that the union $X\cup Y$ is not a Peano compactum, while the intersection $X\cap Y$ is a Peano compactum. We will use {\em fractal squares} to construct two such compacta.
Here a {\bf fractal square of order $n\ge2$} is the attractor of an iterated function system
$\displaystyle \Fc_\Dc:=\left\{f_d(x)=\frac{x+d}{n}: d\in\Dc\right\}$
for some $\Dc\subset\{0,1,\ldots,n-1\}^2$ which contains at least $2$ and at most $n^2-1$ elements.
For general theory on iterated function systems, we refer to \cite{Hutchinson81}.
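For intuition, the attractor of such an iterated function system can be approximated by iterating the maps $f_d$ on a finite point set. The sketch below is ours; the digit set coincides with $\Dc_X$ of Example \ref{cup-fs}, and all iterates stay in the unit square:

```python
import numpy as np

# Approximate the attractor of F_D = {f_d(x) = (x + d)/n : d in D} for a
# fractal square of order n = 3 by iterating the maps on the point {0}.
def ifs_points(D, n, depth):
    pts = np.zeros((1, 2))
    for _ in range(depth):
        pts = np.concatenate([(pts + np.array(d, dtype=float)) / n for d in D])
    return pts

D = [(0, 0), (1, 0), (2, 0), (0, 2)]   # the digit set D_X of Example cup-fs
pts = ifs_points(D, n=3, depth=5)
assert pts.shape == (len(D) ** 5, 2)
assert pts.min() >= 0 and pts.max() < 1   # the attractor lies in [0,1]^2
```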
\begin{exam}\label{cup-fs}
Let $X$ and $Y$ be the fractal squares determined by $\Fc_{\Dc_X}$ and $\Fc_{\Dc_Y}$. Here $\Dc_X=\{(i,0): i=0,1,2\}\cup\{(0,2)\}$ and $\Dc_Y=\{(i,0): i=0,1,2\}\cup\{(1,2),(2,2)\}$. See Figure \ref{fs-cup} for relative locations of the small squares $f_d([0,1]^2)$ with $d\in\Dc_X$ and $d\in\Dc_Y$.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]
\fill[purple!30] (0,0) -- (5.184,0) -- (5.184,1.728) -- (0,1.728) -- (0,0);
\fill[purple!30] (0,3.456) -- (1.728,3.456) -- (1.728,5.184) -- (0,5.184) -- (0,3.456);
\foreach \i in {0,...,3}
{
\draw[gray,thick] (0,1.728*\i) -- (5.184,1.728*\i);
\draw[gray,thick] (1.728*\i,0) -- (1.728*\i,5.184);
}
\end{tikzpicture}
&
\hskip 0.5cm
&
\begin{tikzpicture}[x=1cm,y=1cm,scale=0.618]
\fill[purple!30] (0,0) -- (5.184,0) -- (5.184,1.728) -- (0,1.728) -- (0,0);
\fill[purple!30] (1.728,3.456) -- (5.184,3.456) -- (5.184,5.184) -- (1.728,5.184) -- (1.728,3.456);
\foreach \i in {0,...,3}
{
\draw[gray,thick] (0,1.728*\i) -- (5.184,1.728*\i);
\draw[gray,thick] (1.728*\i,0) -- (1.728*\i,5.184);
}
\end{tikzpicture}
\end{tabular}
\end{center}
\vskip -0.5cm
\caption{\small The small squares $f_d([0,1]^2)$ for $d\in\Dc_X$ (left part) and for $d\in\Dc_Y$ (right part).}\label{fs-cup}
\end{figure}
Then $X$ and $Y$ are Peano compacta, each of which contains the interval $[0,1]$, such that $X\cup Y$ contains all the segments $[0,1]\times\{\frac{2}{3^k}\}$ for $k\ge1$. Moreover, $X\cap Y$ is a Peano compactum having uncountably many components. All but one of these components are single points. The only non-degenerate component is the interval $[0,1]$. On the other hand, for all $k\ge1$ the horizontal strip $\bbR\times\left(\frac{1}{3^k},\frac{2}{3^k}\right)$ is disjoint from $X\cup Y$. This implies that $X\cup Y$ is not a Peano compactum. Consequently, we have
\[
\begin{array}{ccc}\lambda_{X}(x)=0\ (\forall x\in \hat{\mathbb{C}}); & \lambda_{Y}(x)= 0\ (\forall x\in \hat{\mathbb{C}}); & \lambda_{X\cup Y}(x)=\left\{\begin{array}{ll}1& x\in [0,1]\\ 0 & \text{otherwise}.\end{array}\right.
\end{array}
\]
Notice that $Y\cup (X+1)$ is also a Peano compactum, although $Y\cap (X+1)$ has uncountably many components. Thus $\lambda_{X+1}(x)=\lambda_{Y}(x)=\lambda_{Y\cup (X+1)}(x)=0$ for all $x\in \hat{\mathbb{C}}$.
\end{exam}
\noindent
{\bf Acknowledgement}. The authors are grateful to Dr. Yi Yang at Sun Yat-sen University for valuable discussions during his PhD studies.
\bibliographystyle{plain}
\section{Introduction}
High-dimensional data are ubiquitous in modern statistics. Consequently, the fundamental problem of estimating the covariance matrix or its inverse (the precision matrix) has received renewed attention.
Suppose we have $n$ i.i.d.\ observations of a $p$-dimensional variate distributed as $\mathcal{N}_p(\vec{\mu}, {\vec{\Sigma}})$. The Gaussian log-likelihood parameterized in terms of the precision matrix ${\vec{\Omega}} = {\vec{\Sigma}}^{-1}$ is then given by:
\begin{equation}
\label{eq:OLL}
\mathcal{L}({\vec{\Omega}}; \vec{S})
\propto \ln\deter{{\vec{\Omega}}} - \tr(\vec{S}{\vec{\Omega}}),
\end{equation}
where $\vec{S}$ is the sample covariance matrix.
When $n > p$ the maximum of \eqref{eq:OLL} is attained at the maximum likelihood estimate (MLE) ${\hat{\vec{\Omega}}}{}^\text{ML} = \vec{S}^{-1}$.
However, in the high-dimensional case, i.e., when $p > n$, the sample covariance matrix $\vec{S}$ is singular and its inverse ceases to exist.
Furthermore, when $p \approx n$, the sample covariance matrix may be ill-conditioned and the inversion becomes numerically unstable.
Hence, such situations necessitate the use of regularization techniques.
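The singularity problem is easy to demonstrate numerically. The snippet below (illustrative, not from the text) shows that for $p > n$ the sample covariance matrix has rank at most $n-1$ after centering, so its inverse cannot exist:

```python
import numpy as np

# For p > n the sample covariance matrix S is singular: after column
# centering, S = Y^T Y / n has rank at most n - 1 < p.
rng = np.random.default_rng(0)
n, p = 10, 25
Y = rng.standard_normal((n, p))
Y -= Y.mean(axis=0)                        # center the columns
S = (Y.T @ Y) / n
assert np.linalg.matrix_rank(S) <= n - 1   # < p, so S^{-1} does not exist
```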
Here, we study the simultaneous estimation of numerous precision matrices when multiple classes of high-dimensional data are present.
Suppose ${\boldsymbol{\mathrm{y}}}_{ig}$ is a realization of a $p$-dimensional Gaussian random vector for $i = 1, \ldots, n_g$ independent observations nested within $g = 1, \ldots, G$ classes, each with class-dependent covariance ${\vec{\Sigma}}_g$, i.e., ${\boldsymbol{\mathrm{y}}}_{ig} \sim \mathcal{N}_p(\vec{\mu}_g, {\vec{\Sigma}}_g)$ for each designated class $g$.
Hence, for each class a data set consisting of the $n_g \times p$ matrix
$\vec{Y}_g = [{\boldsymbol{\mathrm{y}}}_{1 g}, \ldots, {\boldsymbol{\mathrm{y}}}_{n_g g}]^\top$ is observed.
Without loss of generality $\vec{\mu}_g = \vec{0}$ can be assumed as each data set $\vec{Y}_g$ can be centered around its column means.
The class-specific sample covariance matrix given by
\begin{equation*}
\vec{S}_g
= \frac{1}{n_g} \sum_{i = 1}^{n_g} {\boldsymbol{\mathrm{y}}}_{ig}{\boldsymbol{\mathrm{y}}}_{ig}^\top
= \frac{1}{n_g} \vec{Y}_g^\top\vec{Y}_g,
\end{equation*}
then constitutes the well-known MLE of ${\vec{\Sigma}}_g$ as discussed above.
The closely related \emph{pooled} sample covariance matrix
\begin{equation}
\label{eq:pooledcovar}
\vec{S}_\summed
= \frac{1}{n_\summed} \sum_{g = 1}^G \sum_{i = 1}^{n_g} {\boldsymbol{\mathrm{y}}}_{ig}{\boldsymbol{\mathrm{y}}}_{ig}^\top
= \frac{1}{n_\summed} \sum_{g = 1}^G n_g \vec{S}_g,
\end{equation}
where $n_\summed = \sum_{g = 1}^G n_g$, is an oft-used estimate of the common covariance matrix across classes.
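In code, the class-specific MLEs and the pooled estimate relate as follows (a small numpy sketch; the class sizes are illustrative):

```python
import numpy as np

# Class-specific sample covariances S_g and the pooled estimate, which is
# their sample-size-weighted average, as in eq. (pooledcovar).
rng = np.random.default_rng(1)
p, sizes = 4, [30, 20, 50]
Ys = [rng.standard_normal((n, p)) for n in sizes]
Ys = [Y - Y.mean(axis=0) for Y in Ys]                 # center per class
S_g = [Y.T @ Y / n for Y, n in zip(Ys, sizes)]        # class-specific MLEs
n_tot = sum(sizes)
S_pool = sum(n * S for n, S in zip(sizes, S_g)) / n_tot
S_direct = sum(Y.T @ Y for Y in Ys) / n_tot           # sum over all obs.
assert np.allclose(S_pool, S_direct)
```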
In the high-dimensional case $p > n_\summed$ (implying $p > n_g$) the $\vec{S}_g$ and $\vec{S}_\summed$ are singular and their inverses do not exist.
Our primary interest thus lies in estimating the precision matrices ${\vec{\Omega}}_1 = {\vec{\Sigma}}_1^{-1}, \ldots, {\vec{\Omega}}_G = {\vec{\Sigma}}_G^{-1}$, as well as their commonalities and differences, when $p > n_\summed$.
We will develop a general $\ell_2$-penalized ML framework to this end which we designate \emph{targeted fused ridge estimation}.
The estimation of multiple precision matrices from high-dimensional data classes is of interest in many applications. The field of oncogenomics, for example, often deals with high-dimensional data from high-throughput experiments.
Class membership may have different connotations in such settings.
It may refer to certain sub-classes within a single data set such as cancer subtypes (cancer is a very heterogeneous disease, even when present in a single organ).
It may also designate different data sets or studies.
Likewise, the class indicator may also refer to a conjunction of both subclass and study membership to form a two-way design of factors of interest (e.g., breast cancer subtypes present in a batch of study-specific data sets), as is often the case in oncogenomics.
Our approach is thus motivated by the meta-analytic setting, where we aim for an integrative analysis in terms of simultaneously considering multiple data (sub-)classes, data sets, or both.
The aim is to borrow statistical power across classes by effectively increasing the sample size, in order to improve the sensitivity and specificity of discoveries.
\subsection{Relation to literature and overview}
There have been many proposals for estimating a single precision matrix in high-dimensional data settings.
A popular approach is to amend \eqref{eq:OLL} with an $\ell_1$-penalty \citep{YL2007,Banerjee2008,Friedman2008,Yuan2008b}.
The solution to this penalized problem is generally referred to as the \emph{graphical lasso} and it is popular as it performs automatic model selection, i.e., the resulting estimate is sparse.
It is heavily used in Gaussian graphical modeling (GGM) as the support of a Gaussian precision matrix represents a Markov random field \citep{Lauritz96}.
The $\ell_1$-approach has been extended to deal with more than a single sample-group.
\citet{GLMZ2011} have proposed a parametrization of class-specific precision matrices that expresses the individual elements as a product of shared and class-specific factors.
They include $\ell_1$-penalties on both the shared and class-specific factors in order to jointly estimate the sparse precision matrices (representing graphical models).
The penalty on the shared factors promotes a shared sparsity structure while the penalty on the class-specific factors promotes class-specific deviations from the shared sparsity structure.
\citet{Danaher2013} have generalized these efforts by proposing the \emph{joint graphical lasso} which allows for various penalty structures.
They study two particular choices: the \emph{group graphical lasso} that encourages a shared sparsity structure across the class-specific precision matrices, and the \emph{fused graphical lasso} that promotes a shared sparsity structure as well as shared precision element-values.
A Bayesian approach to inferring multiple sparse precision matrices can be found in \citet{PSV2015}.
While simultaneous estimation and model selection can be deemed elegant, automatic sparsity is not always an asset.
It may be that one is intrinsically interested in more accurate representations of class-specific precision matrices for usage in, say, covariance-regularized regression \citep{Wit09} or discriminant analysis \citep{Price2014}.
In such a situation one is not after sparse representations and one may prefer usage of a regularization method that shrinks the estimated elements of the precision matrices proportionally.
In addition---when indeed considering network representations of data---the true class-specific graphical models need not be (extremely) sparse in terms of containing many zero elements.
The $\ell_1$-penalty is unable to retrieve the sparsity pattern when the number of truly non-null elements exceeds the available sample size \citep{VanWieringen2014}.
In such a situation one may wish to couple a non-sparsity-inducing penalty with a post-hoc selection step allowing for probabilistic control over element selection.
We therefore consider $\ell_2$ or ridge-type penalization.
In Section \ref{GenFused.sec} the \emph{targeted fused ridge estimation} framework will be presented.
The proposed fused $\ell_2$-penalty allows for the simultaneous estimation of multiple precision matrices from high-dimensional data classes that chiefly share the same structure but that may differentiate in locations of interest. The approach is targeted in the sense that it allows for the specification of target matrices that may encode prior information.
The framework is flexible and general, containing the recent work of \citet{Price2014} and \citet{VanWieringen2014} as special cases.
It may be viewed as an $\ell_2$-alternative to the work of \citet{Danaher2013}.
The method is contingent upon the selection of penalty values and target matrices, topics that are treated in Section \ref{PenTar.sec}.
Section \ref{sec:posthoc} then focuses on the graphical interpretation of precision matrices.
It shows how the fused ridge precision estimates may be coupled with post-hoc support determination in order to arrive at multiple graphical models.
We will refer to this coupling as the \emph{fused graphical ridge}.
This then serves as a basis for integrative or meta-analytic network modeling.
Section \ref{Sims.sec} then assesses the performance of the proposed estimator through extensive simulation experiments.
Section \ref{Illustrate.sec} illustrates the techniques by applying them in a large-scale integrative study of gene expression data of diffuse large B-cell lymphoma.
The focus is then on finding common motifs and motif differences in network representations of (deregulated) molecular pathways.
Section \ref{Discuss.sec} concludes with a discussion.
\subsection{Notation}
Some additional notation must be introduced. Throughout the text and supplementary material, we use the following notation for certain matrix properties and sets: We use $\vec{A} \succ \vec{0}$ and $\vec{B} \succeq \vec{0}$ to denote symmetric positive definite (p.d.) and positive semi-definite (p.s.d.) matrices $\vec{A}$ and $\vec{B}$, respectively.
By $\mathbb{R}$, $\mathbb{R}_+$, and $\mathbb{R}_{++}$ we denote the real numbers, the non-negative real numbers, and the strictly positive real numbers, respectively.
In notational analogue, $\mathcal{S}^p$, $\mathcal{S}^p_+$, and $\mathcal{S}^p_{++}$ are used to denote the space of $p\times p$ real symmetric matrices, the real symmetric p.s.d.\ matrices, and real symmetric p.d.\ matrices, respectively. That is, e.g., $\mathcal{S}_{++}^p = \{\vec{X} \in \mathbb{R}^{p \times p} : \vec{X} = \vec{X}^\top \wedge \vec{X} \succ \vec{0}\}$.
Negative subscripts similarly denote negative reals and negative definiteness. By $\vec{A} \geq \vec{B}$ and similar we denote \emph{element-wise} relations, i.e., $(\vec{A})_{jq} \geq (\vec{B})_{jq}$ for all $(j,q)$.
Matrix subscripts will usually denote class membership, e.g., $\vec{A}_g$ denotes (the realization of) matrix $\vec{A}$ in class $g$.
For notational brevity we will often use the shorthand $\{\vec{A}_g\}$ to denote the set $\{\vec{A}_g\}_{g=1}^{G}$.
The following notation is used throughout for operations: We write $\diag(\vec{A})$ for the column vector composed of the diagonal of $\vec{A}$ and
$\vect(\vec{A})$ for the vectorization operator which stacks the columns of $\vec{A}$ on top of each other.
Moreover, $\circ$ will denote the Hadamard product while $\otimes$ refers to the Kronecker product.
We will also repeatedly make use of several special matrices and functions.
We let $\vec{I}_p$ denote the ($p\times p$)-dimensional identity matrix.
Similarly, $\vec{J}_p$ will denote the ($p\times p$)-dimensional all-ones matrix.
In addition, $\vec{0}$ will denote the null-matrix, the dimensions of which should be clear from the context.
Lastly, $\sqfnorm{\missingarg}$ and $\mathds{1}[\missingarg]$ will stand for the squared Frobenius norm and the indicator function, respectively.
\section{Targeted fused ridge estimation}\label{GenFused.sec}
\subsection{A general penalized log-likelihood problem}
Suppose $G$ classes of $(n_g \times p)$-dimensional data exist and that the samples within each class are i.i.d.\ normally distributed.
The log-likelihood for the data takes the following form under the additional assumption that all $n_\summed$ observations are independent:
\begin{equation}
\label{eq:loglik}
\mathcal{L}\left(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\}\right)
\propto \sum_g n_g
\bigl\{ \ln\deter{{\vec{\Omega}}_g} - \tr(\vec{S}_g{\vec{\Omega}}_g) \bigr\}.
\end{equation}
We desire to obtain estimates $\{{\hat{\vec{\Omega}}}{}_g\} \in \mathcal{S}^p_{++}$ of the precision matrices for each class.
Though not a requirement, we primarily consider situations in which $p > n_g$ for all $g$, necessitating regularization.
To this end, amend \eqref{eq:loglik} with the \emph{fused ridge penalty} given by
\begin{equation}
f^\text{FR}\left(\{{\vec{\Omega}}_g\}; \{\lambda_{g_1 g_2}\}, \{\vec{T}_g\}\right)
= \sum_g \frac{\lambda_{gg}}{2} \sqfnorm[\big]{{\vec{\Omega}}_g \!- \vec{T}_g}
+ \sum_{\mathclap{g_1, g_2}} \frac{\lambda_{g_1 g_2}}{4}
\sqfnorm[\big]{ ({\vec{\Omega}}_{g_1} \!- \vec{T}_{g_1}) -
({\vec{\Omega}}_{g_2} \!- \vec{T}_{g_2}) },
\label{eq:FR}
\end{equation}
where the $\vec{T}_g \in \mathcal{S}_+^p$ indicate known class-specific \emph{target matrices} (see also Section \ref{sec:Tselec}), the $\lambda_{gg} \in \mathbb{R}_{++}$ denote class-specific \emph{ridge penalty parameters}, and the $\lambda_{g_1 g_2} \in \mathbb{R}_+$ are pair-specific \emph{fusion penalty parameters} subject to the requirement that $\lambda_{g_1 g_2} = \lambda_{g_2 g_1}$.
All penalties can then be conveniently summarized into a non-negative symmetric matrix ${\vec{\Lambda}} = [\lambda_{g_1 g_2}]$ which we call the \emph{penalty matrix}.
The diagonal of ${\vec{\Lambda}}$ corresponds to the class-specific ridge penalties whereas off-diagonal entries are the pair-specific fusion penalties.
The rationale and use of the penalty matrix is motivated further in Section \ref{sec:penaltygraph}.
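As a concrete illustration (the values are ours), a valid penalty matrix for $G=3$ classes can be assembled as follows; the common-penalty special case used later corresponds to ${\vec{\Lambda}} = \lambda\vec{I}_G + \lambda_f(\vec{J}_G-\vec{I}_G)$:

```python
import numpy as np

# A penalty matrix Lambda for G = 3 classes: the diagonal holds the
# class-specific ridge penalties, the off-diagonal entries the pairwise
# fusion penalties (values are illustrative only).
G = 3
ridge = np.array([1.0, 0.5, 2.0])
lam_f = 0.3
Lam = np.diag(ridge) + lam_f * (np.ones((G, G)) - np.eye(G))
assert np.allclose(Lam, Lam.T)                        # lam_{g1g2} = lam_{g2g1}
assert np.all(Lam >= 0) and np.all(np.diag(Lam) > 0)  # required sign pattern
```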
Combining \eqref{eq:loglik} and \eqref{eq:FR} yields a general targeted fused ridge estimation problem:
\begin{equation}
\label{eq:argmax0}
\argmax_{\{{\vec{\Omega}}_g\} \in \mathcal{S}_{++}^p}
\left\{
\mathcal{L}\left(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\}\right)
- \sum_g \frac{\lambda_{gg}}{2} \sqfnorm[\big]{ {\vec{\Omega}}_g \!- \vec{T}_g }
- \sum_{\mathclap{g_1, g_2}} \frac{\lambda_{g_1 g_2}}{4}
\sqfnorm[\big]{ ({\vec{\Omega}}_{g_1} \!- \vec{T}_{g_1}) -
({\vec{\Omega}}_{g_2} \!- \vec{T}_{g_2}) }
\right\}.
\end{equation}
The problem of \eqref{eq:argmax0} is strictly concave.
Furthermore, it is worth noting that non-zero fusion penalties, $\lambda_{g_1 g_2} > 0$ for all $g_1 \neq g_2$, alone will not guarantee uniqueness when $p > n_\summed$: In high dimensions, all ridge penalties $\lambda_{gg}$ should be strictly positive to ensure identifiability.
These and other properties of the estimation problem are reviewed in Section \ref{sec:properties}.
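For reference, the penalized objective of \eqref{eq:argmax0} is straightforward to evaluate for candidate estimates. The helper below is our own sketch (the double sum runs over ordered pairs, matching the factor $1/4$; terms with $g_1 = g_2$ vanish):

```python
import numpy as np

# Evaluate the targeted fused ridge objective for precision matrices
# Omegas, covariances Ss, sample sizes ns, penalty matrix Lam, targets Ts.
def fused_ridge_objective(Omegas, Ss, ns, Lam, Ts):
    G = len(Omegas)
    val = 0.0
    for g in range(G):
        _, logdet = np.linalg.slogdet(Omegas[g])
        val += ns[g] * (logdet - np.trace(Ss[g] @ Omegas[g]))
        val -= Lam[g, g] / 2 * np.sum((Omegas[g] - Ts[g]) ** 2)
    for g1 in range(G):
        for g2 in range(G):
            D = (Omegas[g1] - Ts[g1]) - (Omegas[g2] - Ts[g2])
            val -= Lam[g1, g2] / 4 * np.sum(D ** 2)
    return val

# With one class, Omega = S = T = I_2 and n = 5, the objective is 5*(0 - 2).
I = np.eye(2)
assert abs(fused_ridge_objective([I], [I], [5], np.array([[1.0]]), [I]) + 10) < 1e-12
```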
The problem stated in \eqref{eq:argmax0} is very general.
We shall sometimes consider a single common ridge penalty $\lambda_{gg} = \lambda$ for all $g$, as well as a common fusion penalty $\lambda_{g_1 g_2} = \lambda_f$ for all class pairs $g_1 \neq g_2$ (cf., however, Section \ref{sec:penaltygraph}) such that ${\vec{\Lambda}} = \lambda\vec{I}_G + \lambda_f(\vec{J}_G-\vec{I}_G)$.
This simplification leads to the first special case:
\begin{equation*}
\argmax_{\{{\vec{\Omega}}_g\} \in \mathcal{S}_{++}^p}
\left\{
\mathcal{L}\left(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\}\right)
- \frac{\lambda}{2}\sum_g \sqfnorm[\big]{ {\vec{\Omega}}_g \!- \vec{T}_g }
- \frac{\lambda_f}{4}\sum_{\mathclap{g_1, g_2}}
\sqfnorm[\big]{ ({\vec{\Omega}}_{g_1} \!- \vec{T}_{g_1}) -
({\vec{\Omega}}_{g_2} \!- \vec{T}_{g_2}) }
\right\}.
\end{equation*}
Here and analogous to \eqref{eq:argmax0}, $\lambda$ controls the rate of shrinkage of each precision ${\vec{\Omega}}_g$ towards the corresponding target $\vec{T}_g$ \citep{VanWieringen2014}, while $\lambda_f$ determines the retention of entry-wise similarities between $({\vec{\Omega}}_{g_1} \!- \vec{T}_{g_1})$ and $({\vec{\Omega}}_{g_2} \!- \vec{T}_{g_2})$ for all class pairs $g_1 \neq g_2$.
When $\vec{T}_g = \vec{T}$ for all $g$, the problem further simplifies to
\begin{equation}
\label{eq:argmax2}
\argmax_{\{{\vec{\Omega}}_g\} \in \mathcal{S}_{++}^p}
\left\{
\mathcal{L}\left(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\}\right)
- \frac{\lambda}{2}\sum_g
\sqfnorm[\big]{ {\vec{\Omega}}_g \!- \vec{T} }
- \frac{\lambda_f}{4}\sum_{\mathclap{g_1, g_2}}
\sqfnorm[\big]{ {\vec{\Omega}}_{g_1} \!- {\vec{\Omega}}_{g_2} }
\right\},
\end{equation}
where the targets are seen to disappear from the fusion term.
Lastly, when $\vec{T} = \vec{0}$ the problem \eqref{eq:argmax2} reduces to its simplest form recently considered by \citet{Price2014}.
To support an intuitive feel for the fused ridge estimation problem, Appendix \ref{app:geometric} studies its geometric interpretation in this latter context.
\subsection{Estimator and properties}\label{sec:properties}
There is no explicit solution to \eqref{eq:argmax0} except for certain special cases and thus an iterative optimization procedure is needed for its general solution.
As described in Section \ref{sec:Algo}, we employ a coordinate ascent procedure which relies on the concavity of the penalized likelihood (see Lemma~\ref{lem:concavity} in Appendix \ref{sec:Support}) and repeated use of the following result, whose proof (as indeed all proofs) has been deferred to Appendix \ref{sec:Proofs}:
\begin{Proposition}
\label{prop:fusedridge}%
Let $\{\vec{T}_g\} \in \mathcal{S}_+^p$ and let ${\vec{\Lambda}} \in \mathcal{S}^G$ be a fixed penalty matrix such that ${\vec{\Lambda}} \geq \vec{0}$ and $\diag({\vec{\Lambda}}) > \vec{0}$.
Furthermore, assume that ${\vec{\Omega}}_g$ is p.d.\ and fixed for all $g\neq g_0$.
The maximizing argument for class $g_0$ of the optimization problem \eqref{eq:argmax0} is then given by
\begin{gather}
\label{eq:update}
{\hat{\vec{\Omega}}}{}_{g_0}\bigl({\vec{\Lambda}}, \{{\vec{\Omega}}_g\}_{g {\neq} g_0} \bigr)
=
\left\{
\left[
\bar{\lambda}_{g_0} \vec{I}_p
+ \frac{1}{4}\big(\bar{\vec{S}}_{g_0} - \bar{\lambda}_{g_0}\bar{\vec{T}}_{g_0}\big)^2
\right]^{1/2}
+ \frac{1}{2}\big(\bar{\vec{S}}_{g_0} - \bar{\lambda}_{g_0} \bar{\vec{T}}_{g_0}\big)
\right\}^{-1},
\intertext{where}
\bar{\vec{S}}_{g_0}
= \vec{S}_{g_0} - \sum_{g \neq g_0}\frac{\lambda_{g g_0}}{n_{g_0}} ({\vec{\Omega}}_g \!- \vec{T}_g),
\label{eq:barupdate}
\quad
\bar{\vec{T}}_{g_0} = \vec{T}_{g_0},
\andwhere
\bar{\lambda}_{g_0} = \frac{\lambda_{g_0\summed}}{n_{g_0}},
\end{gather}
with $\lambda_{g_0\summed} = \sum_{g} \lambda_{g g_0}$ denoting the sum of the \nth{g_0} column (or row) of ${\vec{\Lambda}}$.
\end{Proposition}
\begin{remark}
Defining $\bar{\vec{T}}_{g_0} = \vec{T}_{g_0}$ in Proposition \ref{prop:fusedridge} may be deemed redundant.
However, it allows us to state equivalent alternatives to \eqref{eq:barupdate} without confusing notation.
See Section \ref{sec:Algo} as well as Appendix \ref{sec:Proofs} and Section 1 of the Supplementary Material.
\end{remark}
\begin{remark}
The target matrices from Proposition \ref{prop:fusedridge} may be chosen positive semi-definite.
However, merely p.s.d.\ (rather than p.d.) targets may lead to ill-conditioned estimates in the limit.
From a shrinkage perspective we thus prefer to choose $\{\vec{T}_g\} \in \mathcal{S}_{++}^p$.
See Section \ref{sec:Tselec}.
\end{remark}
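In code, the update \eqref{eq:update} only requires one symmetric matrix square root and one inversion. The sketch below uses our own helper names; $\bar{\vec{S}}_{g_0}$, $\bar{\vec{T}}_{g_0}$ and $\bar{\lambda}_{g_0}$ are assumed precomputed as in \eqref{eq:barupdate}:

```python
import numpy as np

def sqrtm_sym(A):
    """Square root of a symmetric p.s.d. matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def ridge_update(S_bar, T_bar, lam_bar):
    """One targeted fused ridge update, eq. (update), for a single class."""
    p = S_bar.shape[0]
    D = S_bar - lam_bar * T_bar
    inner = sqrtm_sym(lam_bar * np.eye(p) + 0.25 * (D @ D)) + 0.5 * D
    return np.linalg.inv(inner)

# Sanity check: with S_bar = T_bar = I and lam_bar = 1 the update returns I.
assert np.allclose(ridge_update(np.eye(3), np.eye(3), 1.0), np.eye(3))
```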
Proposition~\ref{prop:fusedridge} provides a function for updating the estimate of the \nth{g_0} class while fixing the remaining parameters. As a special case, if all off-diagonal elements of ${\vec{\Lambda}}$ are zero, no `class fusion' of the estimates takes place and the maximization problem decouples into $G$ individual, disjoint ridge estimations; see Corollary \ref{prop:fusedridge2} in Appendix \ref{sec:Proofs}. The next result summarizes some properties of \eqref{eq:update}:
\begin{Proposition}
\label{prop:fusedridge3}%
Consider the estimator of Proposition \ref{prop:fusedridge} and its accompanying assumptions.
Let ${\hat{\vec{\Omega}}}{}_{g} \equiv {\hat{\vec{\Omega}}}{}_{g}\bigl({\vec{\Lambda}}, \{{\vec{\Omega}}_{g'}\}_{g' {\neq} g}\bigr)$ be the precision matrix estimate of the \nth{g} class.
For this estimator, the following properties hold:
\begin{enumerate}[i.]
\item ${\hat{\vec{\Omega}}}{}_g \succ \vec{0}$ for all $\lambda_{gg} \in \mathbb{R}_{++}$;\vspace{+.1cm}
\label{prop:fusedridge3item1}
\item $\lim\limits_{\lambda_{g g} \to 0^+} {\hat{\vec{\Omega}}}{}_g = \vec{S}_g^{-1}$ if $\sum_{g' \neq g} \lambda_{gg'} = 0$ and $p \leq n_g$
\label{prop:fusedridge3item2}
\item $\lim\limits_{\lambda_{g g} \to \infty^-} {\hat{\vec{\Omega}}}{}_g = \vec{T}_g$ if $\lambda_{g g'} < \infty$ for all $g'\neq g$;
\label{prop:fusedridge3item3}
\item $\lim\limits_{\lambda_{g_1 g_2} \to \infty^-}({\hat{\vec{\Omega}}}{}_{g_1} - \vec{T}_{g_1})
= \lim\limits_{\lambda_{g_1 g_2} \to \infty^-} ({\hat{\vec{\Omega}}}{}_{g_2} - \vec{T}_{g_2})$ if $\lambda_{g_1' g_2'} < \infty$ for all $\{g_1',g_2'\} \neq \{g_1,g_2\}$.
\label{prop:fusedridge3item4}
\end{enumerate}
\end{Proposition}
The first item of Proposition \ref{prop:fusedridge3} implies that strictly positive $\lambda_{gg}$ are sufficient to guarantee p.d.\ estimates from the ridge estimator.
The second item then implies that if `class fusion' is absent one obtains as the right-hand limit for group $g$ the standard MLE $\vec{S}_g^{-1}$, whose existence is only guaranteed when $p \leq n_g$.
The third item shows that the fused ridge precision estimator for class $g$ is shrunken exactly to its target matrix when the ridge penalty tends to infinity while the fusion penalties do not.
The last item shows that the precision estimators of any two classes tend to a common estimate when the fusion penalty between them tends to infinity while all remaining penalty parameters remain finite.
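The limit properties ii and iii can be checked numerically. The following Python sketch (illustrative only; all names are our own) assumes the inversion-free form of the single-class ridge update (cf.\ Proposition \ref{prop:InvLess}) with all fusion penalties set to zero, so that $\bar{\vec{S}}_g = \vec{S}_g$, $\bar{\vec{T}}_g = \vec{T}_g$, and $\bar{\lambda}_g = \lambda_{gg}/n_g$:

```python
import numpy as np

def sqrtm_sym(M):
    """Principal square root of a symmetric p.s.d. matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def ridge(S, T, lam):
    """Single-class ridge update in its inversion-free form (no fusion)."""
    A = S - lam * T
    return (sqrtm_sym(lam * np.eye(len(S)) + 0.25 * A @ A) - 0.5 * A) / lam

rng = np.random.default_rng(2)
p, n = 4, 100
X = rng.standard_normal((n, p))
S = X.T @ X / n                       # sample covariance (centered data assumed)
T = 0.5 * np.eye(p)                   # a p.d. scalar target

# Property ii: with no fusion and p <= n, Omega -> S^{-1} as lambda -> 0+
assert np.allclose(ridge(S, T, 1e-9), np.linalg.inv(S), atol=1e-4)
# Property iii: Omega -> T as the ridge penalty grows without bound
assert np.allclose(ridge(S, T, 1e9), T, atol=1e-4)
```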
The attractiveness of the general estimator hinges upon the efficiency by which it can be obtained. We state a result useful in this respect before turning to our computational approach in Section \ref{sec:Algo}:
\begin{Proposition}
\label{prop:InvLess}%
Let ${\hat{\vec{\Omega}}}{}_{g} \equiv {\hat{\vec{\Omega}}}{}_{g}\bigl({\vec{\Lambda}}, \{{\vec{\Omega}}_{g'}\}_{g' {\neq} g}\bigr)$ be the precision matrix estimate \eqref{eq:update} for the \nth{g} class and define $[{\hat{\vec{\Omega}}}{}_{g}]^{-1} \equiv \hvSigma_{g}$.
The estimate ${\hat{\vec{\Omega}}}{}_{g}$ can then be obtained without inversion through:
\begin{equation}\nonumber
{\hat{\vec{\Omega}}}{}_{g}
=
\frac{1}{\bar{\lambda}_{g}}
\left[ \hvSigma_{g} - (\bar{\vec{S}}_{g} - \bar{\lambda}_{g} \bar{\vec{T}}_{g}) \right]
=
\frac{1}{\bar{\lambda}_{g}}
\left\{
\left[
\bar{\lambda}_{g} \vec{I}_p
+ \frac{1}{4}\big(\bar{\vec{S}}_{g} - \bar{\lambda}_{g}\bar{\vec{T}}_{g}\big)^2
\right]^{1/2}
- \frac{1}{2}\big(\bar{\vec{S}}_{g} - \bar{\lambda}_{g} \bar{\vec{T}}_{g}\big)
\right\}.
\end{equation}
\end{Proposition}
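Both equalities in the display can be verified numerically; the sketch below (Python, with stand-in inputs for $\bar{\vec{S}}_g$, $\bar{\vec{T}}_g$, and $\bar{\lambda}_g$) exploits that the two matrix factors commute, so their product telescopes to $\bar{\lambda}_g \vec{I}_p$:

```python
import numpy as np

def sqrtm_sym(M):
    """Principal square root of a symmetric p.s.d. matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(1)
p, lam = 5, 0.7                       # lam plays the role of \bar{lambda}_g > 0
X = rng.standard_normal((20, p))
S = X.T @ X / 20                      # stand-in for \bar{S}_g
T = np.eye(p)                         # stand-in for \bar{T}_g

A = S - lam * T
root = sqrtm_sym(lam * np.eye(p) + 0.25 * A @ A)
Omega = (root - 0.5 * A) / lam        # inversion-free estimate
Sigma = root + 0.5 * A                # its inverse, obtained without np.linalg.inv

assert np.allclose(Omega @ Sigma, np.eye(p), atol=1e-8)
assert np.allclose(Omega, (Sigma - A) / lam)   # first equality in the display
```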
\subsection{Algorithm}\label{sec:Algo}
Equation \eqref{eq:update} allows for updating the precision estimate ${\hat{\vec{\Omega}}}{}_{g}$ of class $g$ by plugging in the remaining ${\hat{\vec{\Omega}}}{}_{g'}$, $g' \neq g$, which are held fixed.
Hence, from initial estimates, all precision estimates may be iteratively updated until some convergence criterion is reached.
We propose a block coordinate ascent procedure to solve \eqref{eq:argmax0} by repeated use of the results in Proposition \ref{prop:fusedridge}.
This procedure is outlined in Algorithm \ref{alg:fusedridge}.
By the strict concavity of the problem in \eqref{eq:argmax0}, the procedure guarantees that, contingent upon convergence, the unique maximizer is attained when considering all ${\hat{\vec{\Omega}}}{}_g$ jointly.
Moreover, we can state the following result:
\begin{Proposition}
\label{prop:PosRealm}%
The block coordinate ascent procedure given in Algorithm \ref{alg:fusedridge} will always stay within the realm of positive definite matrices $\mathcal{S}_{++}^p$.
\end{Proposition}
\begin{algorithm}[htbp]
\caption{Pseudocode for the fused ridge block coordinate ascent procedure.}
\label{alg:fusedridge}
\begin{algorithmic}[1]
\State \algorithmicrequire{
\State \emph{Sufficient data:} $(\vec{S}_1, n_1), \ldots, (\vec{S}_G, n_G)$
\State \emph{Penalty matrix:} ${\vec{\Lambda}}$
\State \emph{Convergence criterion:} $\varepsilon > 0$
}
\State \algorithmicensure{
\State \emph{Estimates:} ${\hat{\vec{\Omega}}}{}_1, \ldots, {\hat{\vec{\Omega}}}{}_G$
}
\Procedure{ridgeP.fused}{$\vec{S}_1, \ldots, \vec{S}_G, n_1, \ldots, n_G, {\vec{\Lambda}}, \varepsilon$}
\State \label{lst:Initial} \emph{Initialize}: ${\hat{\vec{\Omega}}}{}_g^{(0)}$ for all $g$.
\For {$c = 1, 2, 3, \ldots$}
\For {$g = 1, 2, \ldots, G$}
\State \label{lst:UpdateStep} Update ${\hat{\vec{\Omega}}}{}_g^{(c)} :=
{\hat{\vec{\Omega}}}{}_g
\big(
{\vec{\Lambda}},
{\hat{\vec{\Omega}}}{}{}_1^{(c)}, \ldots, {\hat{\vec{\Omega}}}{}_{g-1}^{(c)},
{\hat{\vec{\Omega}}}{}{}_{g+1}^{(c-1)}, \ldots, {\hat{\vec{\Omega}}}{}_G^{(c-1)}
\big)$
by \eqref{eq:update}.
\EndFor
\If {$\max_g\!\Big\{ \frac{\sqfnorm{{\hat{\vec{\Omega}}}{}{}_g^{(c)} - {\hat{\vec{\Omega}}}{}{}_g^{(c-1)}}}
{\sqfnorm{{\hat{\vec{\Omega}}}{}{}_g^{(c)}}} \Big\} <
\varepsilon$}
\State \Return $\big({\hat{\vec{\Omega}}}{}_1^{(c)}, \ldots, {\hat{\vec{\Omega}}}{}_G^{(c)}\big)$
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
The procedure is implemented in the \texttt{rags2ridges} package within the {\textsf{R}} statistical language \citep{R}.
This implementation focuses on \emph{stability} and \emph{efficiency}. With regard to the former: Equivalent (in terms of the obtained estimator) alternatives to \eqref{eq:barupdate} can be derived that are numerically more stable for extreme values of ${\vec{\Lambda}}$.
The most apparent such alternative is:
\begin{equation}
\label{eq:barupdate1}
\bar{\vec{S}}_{g_0} = \vec{S}_{g_0},
\quad
\bar{\vec{T}}_{g_0} = \vec{T}_{g_0} + \sum_{g \neq g_0}
\frac{\lambda_{g g_0}}{\lambda_{g_0\summed}}
({\vec{\Omega}}_g \!- \vec{T}_g),
\andwhere
\bar{\lambda}_{g_0}
= \frac{\lambda_{g_0\summed}}{n_{g_0}}.
\end{equation}
It `updates' the target $\bar{\vec{T}}_g$ instead of the covariance $\bar{\vec{S}}_g$ and has the intuitive interpretation that the target matrix for a given class in the fused case is a combination of the actual class target matrix and the `target corrected' estimates of remaining classes.
The implementation makes use of this alternative where appropriate.
See Section 1 of the Supplementary Material for details on alternative updating schemes.
Efficiency is secured in several ways.
First, in certain special cases closed-form solutions to \eqref{eq:argmax0} exist.
When appropriate, these explicit solutions are used.
Moreover, these solutions may provide warm-starts for the general problem.
See Section 2 of the Supplementary Material for details on estimation in these special cases.
Second, the result from Proposition \ref{prop:InvLess} is used, meaning that the relatively expensive operation of matrix inversion is avoided.
Third, additional computational speed was achieved by implementing core operations in \textsf{C++} via the {\textsf{R}}-packages \texttt{Rcpp} and \texttt{RcppArmadillo} \citep{Rcpp2013, Eddelbuettel2011, RcppArmadillo, Sanderson2010}. These efforts make analyses with large $p$ feasible.
Throughout, we will initialize the algorithm with ${\hat{\vec{\Omega}}}{}_g^{(0)} = p/\tr(\vec{S}_\summed)\cdot\vec{I}_p$ for all $g$.
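Algorithm \ref{alg:fusedridge} can be sketched compactly. The Python code below is illustrative only (the reference implementation is the R/\textsf{C++} code in \texttt{rags2ridges}); it assumes the inversion-free single-class update of Proposition \ref{prop:InvLess} combined with the target-updating scheme \eqref{eq:barupdate1}, and the initialization stated above:

```python
import numpy as np

def sqrtm_sym(M):
    """Principal square root of a symmetric p.s.d. matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def ridge_update(S, T, lam_bar):
    """Inversion-free single-class ridge update."""
    A = S - lam_bar * T
    p = S.shape[0]
    return (sqrtm_sym(lam_bar * np.eye(p) + 0.25 * A @ A) - 0.5 * A) / lam_bar

def fused_ridge(S_list, n_list, T_list, Lam, eps=1e-7, max_iter=200):
    """Block coordinate ascent over the G class precision matrices."""
    G, p = len(S_list), S_list[0].shape[0]
    # Initialization: p / tr(S_pooled) * I_p for every class
    S_pool = sum(n * S for n, S in zip(n_list, S_list)) / sum(n_list)
    Omegas = [p / np.trace(S_pool) * np.eye(p) for _ in range(G)]
    for _ in range(max_iter):
        old = [O.copy() for O in Omegas]
        for g in range(G):
            lam_row = Lam[g].sum()                       # row sum of the penalty matrix
            # 'Target update' variant: fuse through the target matrix
            T_bar = T_list[g] + sum(Lam[g, h] / lam_row * (Omegas[h] - T_list[h])
                                    for h in range(G) if h != g)
            Omegas[g] = ridge_update(S_list[g], T_bar, lam_row / n_list[g])
        # Relative squared-Frobenius convergence criterion
        if max(np.sum((O - Oo) ** 2) / np.sum(O ** 2)
               for O, Oo in zip(Omegas, old)) < eps:
            break
    return Omegas

rng = np.random.default_rng(4)
p, n_list = 4, [30, 25]
Xs = [rng.standard_normal((n, p)) for n in n_list]
S_list = [X.T @ X / n for X, n in zip(Xs, n_list)]
Omegas = fused_ridge(S_list, n_list, [np.eye(p)] * 2,
                     np.array([[1.0, 0.5], [0.5, 1.2]]))
# All iterates remain positive definite (cf. Proposition PosRealm)
assert all(np.linalg.eigvalsh(O).min() > 0 for O in Omegas)
```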
\section{Penalty and target selection}\label{PenTar.sec}
\subsection{The penalty graph and analysis of factorial designs}
\label{sec:penaltygraph}
Equality of all class-specific ridge penalties $\lambda_{gg}$ is deemed restrictive, as is equality of all pair-specific fusion penalties $\lambda_{g_1 g_2}$.
In many settings, such as the analysis of factorial designs, finer control over the individual values of $\lambda_{gg}$ and $\lambda_{g_1 g_2}$ befits the analysis.
This will be motivated by several examples of increasing complexity.
In order to do so, some additional notation is developed:
The penalties of ${\vec{\Lambda}}$ can be summarized by a node- and edge-weighted graph $\mathcal{P} = (W, H)$ where the vertex set $W$ corresponds to the possible classes and the edge set $H$ corresponds to the similarities to be retained.
The weight of node $g \in W$ is given by $\lambda_{g g}$ and
the weight of edge $(g_1, g_2)\in H$ is then given by $\lambda_{g_1 g_2}$.
We refer to $\mathcal{P}$ as the \emph{penalty graph} associated with the penalty matrix ${\vec{\Lambda}}$.
The penalty graph $\mathcal{P}$ is simple and undirected as the penalty matrix is symmetric.
\begin{Example}
Consider $G = 2$ classes or subtypes (ST) of diffuse large B-cell lymphoma (DLBCL) patients with tumors resembling either so-called activated B-cells (${\mathrm{ABC}}$) or germinal centre B-cells (${\mathrm{GCB}}$).
Patients with the latter subtype have superior overall survival \citep{Alizadeh2000}.
As the ${\mathrm{GCB}}$ phenotype is more common than ${\mathrm{ABC}}$, one might imagine a scenario where the two class sample sizes are sufficiently different such that $n_{\mathrm{GCB}} \gg n_{\mathrm{ABC}}$.
Numeric procedures to obtain a common ridge penalty (see, e.g., Section \ref{sec:PenSelec}) would then be dominated by the smaller group.
Hence, choosing non-equal class ridge penalties for each group will allow for a better analysis.
In such a case, the following penalty graph and matrix would be suitable:
\begin{equation}
\label{eq:ex1}
\begin{tikzpicture}[node distance = 2mm, auto,
baseline ={(0,-3.5pt)},
main_node/.style={circle,draw,minimum size=1em,inner sep=1pt}]
\node (P) at (-1, 0) {$\mathcal{P} =$};
\node [main_node, label={[yshift=0.1cm]${\mathrm{ABC}}$} ] (n1) at (0,0) {$\lambda_{11}$};
\node [main_node, label={[yshift=0.1cm]${\mathrm{GCB}}$} ] (n2) at (2,0) {$\lambda_{22}$};
\path
(n1) edge node [above] {$\lambda_f$} (n2);
\end{tikzpicture}
\qquad
{\vec{\Lambda}} =
\begin{bmatrix}
\lambda_{11} & \lambda_f \\
\lambda_f & \lambda_{22}
\end{bmatrix}.
\end{equation}
\QEDE
\end{Example}
\begin{Example}
\label{ex:2}
Consider data from a one-way factorial design where the factor is ordinal with classes A, B, and C.
For simplicity, we choose the same ridge penalty $\lambda$ for each class.
Say we have prior information that A is closer to B and B is closer to C than A is to C.
The fusion penalty on the pairs containing the intermediate level B might then be allowed to be stronger.
The following penalty graph and matrix are thus sensible:
\begin{equation}
\label{eq:ex2}
\begin{tikzpicture}[node distance = 2mm, auto,
baseline={([yshift=-.5ex]current bounding box.center)},
main_node/.style={circle,draw,minimum size=1em,inner sep=2pt}]
\node (P) at (-1, 0) {$\mathcal{P} =$};
\node [main_node, label={[yshift=0.05cm]$\mathrm{A}$}] (n1) at (0,0) {$\lambda$};
\node [main_node, label={[yshift=0.05cm]$\mathrm{C}$}] (n2) at (4,0) {$\lambda$};
\node [main_node, label={[yshift=0.05cm]$\mathrm{B}$}] (n3) at (2,0) {$\lambda$};
\path
(n1) edge node [above, midway] {$\lambda_\mathrm{B}$} (n3)
(n3) edge node [above, midway] {$\lambda_\mathrm{B}$} (n2)
(n1) edge [bend right=15] node [below, midway] {$\lambda_\mathrm{AC}$} (n2);
\end{tikzpicture}
\qquad
{\vec{\Lambda}} =
\begin{bmatrix}
\lambda & \lambda_\mathrm{B} & \lambda_\mathrm{AC}\\
\lambda_\mathrm{B} & \lambda & \lambda_\mathrm{B}\\
\lambda_\mathrm{AC} & \lambda_\mathrm{B} & \lambda
\end{bmatrix}.
\end{equation}
\noindent Depending on the application, one might even omit the direct shrinkage between A and C by fixing $\lambda_\mathrm{AC} = 0$.
A similar penalty scheme might also be relevant if one class of the factor is an unknown mix of the remaining classes and one wishes to borrow statistical power from such a class.
\QEDE
\end{Example}
\begin{Example}
\label{ex:3}
In two-way or $n$-way factorial designs one might wish to retain similarities in the `direction' of each factor along with a factor-specific penalty.
Consider, say, 3 oncogenic data sets (DS$_1$, DS$_2$, DS$_3$) regarding ${\mathrm{ABC}}$ and ${\mathrm{GCB}}$ DLBCL cancer patients.
This yields a total of $G = 6$ classes of data.
One choice of penalization of this $2$ by $3$ design is represented by the penalty graph and matrix below:
\begin{equation}
\begin{tikzpicture}[node distance = 2mm, auto,
baseline ={(0,-3.5pt)},
main_node/.style={circle,draw,minimum size=1em,inner sep=2pt}]
\node (P) at (-1.25, 0) {$\mathcal{P} =$};
\node [main_node] (n1) at (0,0.625) {$\lambda$};
\node [main_node] (n2) at (2,0.625) {$\lambda$};
\node [main_node] (n3) at (4,0.625) {$\lambda$};
\node [main_node] (n4) at (0,-0.625) {$\lambda$};
\node [main_node] (n5) at (2,-0.625) {$\lambda$};
\node [main_node] (n6) at (4,-0.625) {$\lambda$};
\draw (n1) -- (n2) node [below, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n2) -- (n3) node [below, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n4) -- (n5) node [above, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n5) -- (n6) node [above, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n1) -- (n4) node [left, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n2) -- (n5) node [left, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n3) -- (n6) node [left, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\path
(n1) edge [bend left=15] node [above, near start] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n3)
(n4) edge [bend right=15] node [below, near start] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n6);
\node [above=0.1cm of n1] (DS1) {$\mathrm{DS}_1$};
\node [above=0.1cm of n2] (DS2) {$\mathrm{DS}_2$};
\node [above=0.1cm of n3] (DS3) {$\mathrm{DS}_3$};
\node [left=0.05cm of n1] (GCB) {${\mathrm{GCB}}$};
\node [left=0.05cm of n4] (ABC) {${\mathrm{ABC}}$};
\end{tikzpicture}
\qquad
{\vec{\Lambda}} =
\begin{bmatrix}
\lambda & {\lambda_{\mathrm{DS}}} & {\lambda_{\mathrm{DS}}} & {\lambda_{\mathrm{ST}}} & 0 & 0\\
{\lambda_{\mathrm{DS}}} & \lambda & {\lambda_{\mathrm{DS}}} & 0 & {\lambda_{\mathrm{ST}}} & 0\\
{\lambda_{\mathrm{DS}}} & {\lambda_{\mathrm{DS}}} & \lambda & 0 & 0 & {\lambda_{\mathrm{ST}}}\\
{\lambda_{\mathrm{ST}}} & 0 & 0 & \lambda & {\lambda_{\mathrm{DS}}} & {\lambda_{\mathrm{DS}}}\\
0 & {\lambda_{\mathrm{ST}}} & 0 & {\lambda_{\mathrm{DS}}} & \lambda & {\lambda_{\mathrm{DS}}}\\
0 & 0 & {\lambda_{\mathrm{ST}}} & {\lambda_{\mathrm{DS}}} & {\lambda_{\mathrm{DS}}} & \lambda
\end{bmatrix}.
\label{eq:ERpenaltygraph}
\end{equation}
This example would favor similarities (with the same force) only between pairs sharing a common level in each factor.
This finer control allows users, or the employed algorithm, to penalize differences between data sets more (or less) strongly than differences between the ${\mathrm{ABC}}$ and ${\mathrm{GCB}}$ sub-classes.
This corresponds to not applying direct shrinkage of interaction effects which is of interest in some situations.
\QEDE
\end{Example}
While the penalty graph primarily serves as an intuitive overview, it does provide some aid in the construction of the penalty matrix for multifactorial designs.
For example, the construction of the penalty matrix \eqref{eq:ERpenaltygraph} in Example~\ref{ex:3} corresponds to a Cartesian graph product of two complete graphs similar to those given in \eqref{eq:ex1} and \eqref{eq:ex2}.
We stress that $\mathcal{P}$ and ${\vec{\Lambda}}$ should be chosen carefully in conjunction with the choice of target matrices.
Ideally, only strictly necessary penalty parameters (from the perspective of the desired analysis) should be introduced.
Each additional penalty increases the dimension of the search space and thereby the difficulty of finding the optimal penalty values.
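The Cartesian-product construction can be made concrete: with classes ordered as in Example~\ref{ex:3} (subtype blocks, data sets within blocks), the penalty matrix \eqref{eq:ERpenaltygraph} is a sum of Kronecker products. A small Python check (illustrative penalty values):

```python
import numpy as np

lam, lam_DS, lam_ST = 1.0, 0.3, 0.2            # illustrative penalty values
I2, I3 = np.eye(2), np.eye(3)
J2, J3 = np.ones((2, 2)), np.ones((3, 3))

# Rows/columns ordered as (GCB, DS1..3), (ABC, DS1..3):
Lam = (lam * np.eye(6)
       + lam_DS * np.kron(I2, J3 - I3)         # fusion along the data-set factor
       + lam_ST * np.kron(J2 - I2, I3))        # fusion along the subtype factor

expected = np.array([
    [1.0, 0.3, 0.3, 0.2, 0.0, 0.0],
    [0.3, 1.0, 0.3, 0.0, 0.2, 0.0],
    [0.3, 0.3, 1.0, 0.0, 0.0, 0.2],
    [0.2, 0.0, 0.0, 1.0, 0.3, 0.3],
    [0.0, 0.2, 0.0, 0.3, 1.0, 0.3],
    [0.0, 0.0, 0.2, 0.3, 0.3, 1.0],
])
assert np.allclose(Lam, expected)              # matches the displayed matrix
```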
\subsection{Selection of penalty parameters}\label{sec:PenSelec}
As the $\ell_2$-penalty does not automatically induce sparsity in the estimate, it is natural to seek loss efficiency.
We use cross-validation (CV) for penalty parameter selection, owing to its relation to the minimization of the Kullback-Leibler divergence and to the predictive accuracy stemming from its data-driven nature.
Randomly divide the data of each class into $k = 1, \ldots, K$ disjoint subsets of approximately the same size.
Previously, we have defined ${\hat{\vec{\Omega}}}{}_{g} \equiv {\hat{\vec{\Omega}}}{}_{g}\bigl({\vec{\Lambda}}, \{{\vec{\Omega}}_{g'}\}_{g' {\neq} g}\bigr)$ to be the precision matrix estimate of the \nth{g} class.
Let ${\hat{\vec{\Omega}}}{}{}_g^{\neg k}$ be the analogous estimate (with similar notational dependencies) for class $g$ based on all samples not in $k$.
Also, let $\vec{S}_g^{k}$ denote the sample covariance matrix for class $g$ based on the data in subset $k$ and let $n_g^{k}$ denote the size of subset $k$ in class $g$.
The $K$-fold CV score for our fused regularized precision estimate based on the fixed penalty ${\vec{\Lambda}}$ can then be given as:
\begin{equation*}
\mathrm{KCV}({\vec{\Lambda}}) =
\frac{1}{KG} \sum_{g = 1}^G \sum_{k = 1}^{K} n_g^{k} \left[-\ln|{\hat{\vec{\Omega}}}{}{}_g^{\neg k}| + \tr({\hat{\vec{\Omega}}}{}{}_g^{\neg k}\vec{S}_g^{k})\right]
= -\frac{1}{KG} \sum_{g = 1}^G \sum_{k = 1}^{K}
\mathcal{L}_{g}^k\bigl({\hat{\vec{\Omega}}}{}_g^{\neg k}; \vec{S}_{g}^{k} \bigr).
\end{equation*}
One would then choose ${\vec{\Lambda}}^\ast$ such that
\begin{equation}
{\vec{\Lambda}}^\ast
= \argmin_{{\vec{\Lambda}}}
\mathrm{KCV}({\vec{\Lambda}}), \quad\text{subject to:} \quad {\vec{\Lambda}} \geq \vec{0} \wedge \diag({\vec{\Lambda}}) > \vec{0}.
\end{equation}
The least biased predictive accuracy can be obtained by choosing $K = n_g$ such that $n_g^{k} = 1$.
This would give the fused version of leave-one-out CV (LOOCV).
Unfortunately, LOOCV is computationally demanding for large $p$ and/or large $n_g$.
We propose to select the penalties by the computationally expensive LOOCV only if adequate computational power is available.
In cases where it is not, we propose two alternatives.
Our first alternative is a special version of the LOOCV scheme that significantly reduces the computational cost.
The \emph{special} LOOCV ($\SLOOCV$) is computed much like the LOOCV, except that only the estimate for the class of the omitted datum is updated.
More specifically, the $\SLOOCV$ problem is given by:
\begin{equation}
\label{eq:argminSLOOCV}
{\vec{\Lambda}}^\diamond
= \argmin_{{\vec{\Lambda}}}
\mathrm{SLOOCV}({\vec{\Lambda}}), \quad\text{subject to:} \quad {\vec{\Lambda}} \geq \vec{0} \wedge \diag({\vec{\Lambda}}) > \vec{0},
\end{equation}
with
\begin{equation}\nonumber
\SLOOCV({\vec{\Lambda}})
= -\frac{1}{n_\bullet} \sum_{g = 1}^G \sum_{i = 1}^{n_g}
\mathcal{L}_{g}^{i}\bigl(\widetilde{{\vec{\Omega}}}{}_g^{\neg i}; \vec{S}_{g}^{i} \bigr).
\end{equation}
The estimate $\widetilde{{\vec{\Omega}}}{}_g^{\neg i}$ in \eqref{eq:argminSLOOCV} is obtained by updating only ${\hat{\vec{\Omega}}}{}_g$ using Proposition~\ref{prop:fusedridge}.
For all other $g' \neq g$, $\widetilde{{\vec{\Omega}}}{}_{g'}^{\neg i} = {\hat{\vec{\Omega}}}{}_{g'}$.
The motivation for the SLOOCV is that a single observation in a given class $g$ does not exert heavy direct influence on the estimates in the other classes.
This way the number of fused ridge estimations for each given ${\vec{\Lambda}}$ and each given leave-one-out sample is reduced from $n_\summed$ to $G$ estimations.
Our second and fastest alternative is an approximation of the fused LOOCV score.
This approximation can be used as an alternative to (S)LOOCV when the class sample sizes are relatively large (precisely the scenario where LOOCV is infeasible).
See Section 3 of the Supplementary Material for detailed information on this approximation.
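The $K$-fold score above can be sketched as follows. This Python illustration fits each class independently with a simple stand-in estimator $(\vec{S} + \lambda\vec{I}_p)^{-1}$ (an assumption for illustration only; the actual procedure refits the fused estimator jointly on every training split):

```python
import numpy as np

def kcv_score(data_by_class, fit, K=5, seed=0):
    """K-fold CV score (1/(KG)) sum_g sum_k n_g^k [ -ln|Omega| + tr(Omega S_g^k) ]."""
    rng = np.random.default_rng(seed)
    G = len(data_by_class)
    score = 0.0
    for X in data_by_class:                    # data assumed centered per class
        folds = np.array_split(rng.permutation(len(X)), K)
        for k in range(K):
            test = X[folds[k]]
            train = np.delete(X, folds[k], axis=0)
            Omega = fit(train.T @ train / len(train))
            S_test = test.T @ test / len(test)
            _, logdet = np.linalg.slogdet(Omega)
            score += len(test) * (-logdet + np.trace(Omega @ S_test))
    return score / (K * G)

# Stand-in per-class estimator (illustration only; NOT the fused estimator):
fit = lambda S: np.linalg.inv(S + 0.5 * np.eye(S.shape[0]))

rng = np.random.default_rng(3)
data = [rng.standard_normal((40, 6)), rng.standard_normal((35, 6))]
score = kcv_score(data, fit)
assert np.isfinite(score)
```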
\subsection{Choice of target matrices}\label{sec:Tselec}
The target matrices $\{\vec{T}_g\}$ can be used to encode prior information and their choice is highly dependent on the application at hand.
As they influence the efficacy as well as the amount of bias of the estimate, it is of some importance to make a well-informed choice.
Here, we describe several options of increasing level of informativeness.
In the non-fused setting, the consideration of a scalar target matrix $\vec{T} = \alpha\vec{I}_p$ for some $\alpha \in [0, \infty)$ leads to a computational benefit stemming from the property of rotation equivariance \citep{VanWieringen2014}: Under such targets the ridge estimator only operates on the eigenvalues of the sample covariance matrix.
This benefit transfers to the fused setting for the estimator described in Proposition~\ref{prop:fusedridge}.
Hence, one may consider $\vec{T}_g = \alpha_g\vec{I}_p$ with $\alpha_g \in [0, \infty)$ for each $g$.
The limited fused ridge problem in \citet{Price2014} corresponds to choosing $\alpha_g = 0$ for all $g$, such that a common target $\vec{T}_g = \vec{T} = \vec{0}$ is employed.
This can be considered the least informative target possible.
We generally argue against the use of the non p.d.\ target $\vec{T} = \vec{0}$, as it implies shrinking the class precision matrices towards the null matrix and thus towards infinite variance.
Choosing $\alpha_g$ to be strictly positive implies a (slightly) more informative choice.
The rotation equivariance property dictates that it is sensible to choose $\alpha_g$ based on empirical information regarding the eigenvalues of $\vec{S}_g$.
One such choice could be the average of the reciprocals of the non-zero eigenvalues of $\vec{S}_g$.
A straightforward alternative would be to choose $\alpha_g = [\tr(\vec{S}_g)/p]^{-1}$.
In the special case of \eqref{eq:argmax2} where all $\alpha_g = \alpha$ the analogous choice would be $\alpha = [\tr(\vec{S}_\bullet)/p]^{-1}$.
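The two scalar-target choices above are easy to compute and to compare; by the AM-HM inequality the average reciprocal eigenvalue never undershoots $[\tr(\vec{S}_g)/p]^{-1}$. A small Python illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
p = 8
X = rng.standard_normal((50, p))
S = X.T @ X / 50                            # sample covariance of class g

w = np.linalg.eigvalsh(S)
alpha_recip = np.mean(1.0 / w[w > 1e-10])   # average reciprocal non-zero eigenvalue
alpha_trace = p / np.trace(S)               # [tr(S_g)/p]^{-1}
T_g = alpha_trace * np.eye(p)               # scalar target for class g

# AM-HM: mean of reciprocals >= reciprocal of the mean (for positive eigenvalues)
assert alpha_recip >= alpha_trace
```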
More informative targets would move beyond the scalar matrix.
An example would be the consideration of factor-specific targets for factorial designs.
Recalling Example \ref{ex:3}, one might deem the data set factor to be a `nuisance factor'.
Hence, one might choose different targets $\vec{T}_{{\mathrm{GCB}}}$ and $\vec{T}_{{\mathrm{ABC}}}$ based on training data or the pooled estimates of the ${\mathrm{GCB}}$ and ${\mathrm{ABC}}$ samples, respectively.
In general, the usage of pilot training data or (pathway) database information (or both) allows for the construction of target matrices with higher specificity.
We illustrate how to construct targets from database information in the DLBCL application of Section \ref{Illustrate.sec}.
\section{Fused graphical modeling}
\label{sec:posthoc}
\subsection{To fuse or not to fuse}
\label{sec:FuseYN}
As a preliminary step to downstream modeling one might consider testing the hypothesis of no class heterogeneity---and therefore the necessity of fusing---amongst the class-specific precision matrices.
Effectively, one then wishes to test the null-hypothesis $H_0 : {\vec{\Omega}}_1 = \ldots = {\vec{\Omega}}_G$.
Under $H_0$ an explicit estimator is available in which the fused penalty parameters play no role, cf.\ Section 2.2 of the Supplementary Material.
Here we suggest a score test \citep{BeraBil01} for the evaluation of $H_0$ in conjunction with a way to generate its null distribution in order to assess its observational extremity.
A score test is convenient as it only requires estimation under the null hypothesis, allowing us to exploit the availability of an explicit estimator.
The score statistic equals:
\begin{equation*}
U = \left.- \sum_{g=1}^G
\left(\frac{\partial \mathcal{L}(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\})}{\partial {\vec{\Omega}}_g}\right)^\top
\left(\frac{\partial^2 \mathcal{L}(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\})}{\partial {\vec{\Omega}}_g \partial {\vec{\Omega}}_g^\top}\right)^{-1}
\frac{\partial \mathcal{L}(\{{\vec{\Omega}}_g\}; \{\vec{S}_g\})}{\partial {\vec{\Omega}}_g}
\right|_{{\vec{\Omega}}_g = {\hat{\vec{\Omega}}}{}^{H_0}},
\end{equation*}
where ${\hat{\vec{\Omega}}}{}^{H_0}$ denotes the precision estimate under $H_0$ given in \eqref{eq:specialcaseIIc}, which holds for all classes $g$.
The gradient can be considered in vectorized form and is readily available from \eqref{eq:loglikderiv}.
The Hessian of the log-likelihood equals
$\partial^2 \mathcal{L}/(\partial {\vec{\Omega}}_g \partial {\vec{\Omega}}_g^\top) = - {\vec{\Omega}}_g^{-1} \otimes {\vec{\Omega}}_g^{-1}$.
For practical purposes of evaluating the score statistic, we employ the identity
$(\mathbf{A}^\top \otimes \mathbf{B}) \vect(\mathbf{C}) = \vect(\mathbf{B} \mathbf{C} \mathbf{A})$
which avoids the manipulation of $(p^2 \times p^2)$-dimensional matrices.
Hence, the test statistic $U$ is computed by
\begin{equation*}
\hat{U}
= \sum_{g=1}^G \vect({\hat{\vec{X}}}{}_g)^\top\vect({\hat{\vec{\Omega}}}{}^{H_0} {\hat{\vec{X}}}{}_g {\hat{\vec{\Omega}}}{}^{H_0})
= \sum_{g=1}^G \tr\bigl[ {\hat{\vec{X}}}{}_g ({\hat{\vec{\Omega}}}{}^{H_0} {\hat{\vec{X}}}{}_g {\hat{\vec{\Omega}}}{}^{H_0})\bigr],
\end{equation*}
where ${\hat{\vec{X}}}{}_g = n_g \{2[({\hat{\vec{\Omega}}}{}^{H_0})^{-1} - \vec{S}_g] - [({\hat{\vec{\Omega}}}{}^{H_0})^{-1} - \vec{S}_g] \circ \vec{I}_p \}$.
The null distribution of $U$ can be generated by permutation of the class labels: one permutes the class labels, followed by re-estimation of ${\vec{\Omega}}$ under $H_0$ and the re-calculation of the test statistic.
The observed test statistic (under $H_0$) $\hat{U}$ is obtained from the non-permuted class labels and the regular fused estimator.
The $p$-value is readily obtained by comparing the observed test statistic $\hat{U}$ to the null distribution obtained from the test statistic under permuted class labels.
We note that the test is conditional on the choice of $\lambda_{gg}$.
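The computation of $\hat{U}$ and the role of the vectorization identity can be sketched in Python. Here $\vec{\Sigma}_0/\vec{\Omega}_0$ is a simple ridge-type stand-in for the explicit null estimate \eqref{eq:specialcaseIIc} (an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
p, ns = 4, [30, 40]
Xs = [rng.standard_normal((n, p)) for n in ns]
Ss = [X.T @ X / n for X, n in zip(Xs, ns)]

# Stand-in for the null estimate under H0 (illustration only):
Sigma0 = sum(n * S for n, S in zip(ns, Ss)) / sum(ns) + 0.1 * np.eye(p)
Omega0 = np.linalg.inv(Sigma0)

U_tr = U_vec = 0.0
for n, S in zip(ns, Ss):
    D = Sigma0 - S                          # (Omega0^{-1} - S_g)
    Xg = n * (2 * D - D * np.eye(p))        # Hadamard product with I_p keeps the diagonal
    M = Omega0 @ Xg @ Omega0
    # (Omega0 kron Omega0) vec(Xg) = vec(Omega0 Xg Omega0): no p^2 x p^2 work needed
    assert np.allclose(np.kron(Omega0, Omega0) @ Xg.flatten(order='F'),
                       M.flatten(order='F'))
    U_tr += np.trace(Xg @ M)                # trace form of the statistic
    U_vec += Xg.flatten(order='F') @ M.flatten(order='F')   # vec form

assert np.isclose(U_tr, U_vec)
```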
\subsection{Graphical modeling}
A contemporary use for precision matrices is found in the reconstruction and analysis of networks through graphical modeling.
Graphical models merge probability distributions of random vectors with graphs that express the conditional (in)dependencies between the constituent random variables.
In the fusion setting one might expect the class precision matrices to share a (partly) common origin (a conditional independence graph), to which fusion appeals.
We focus on class-specific graphs $\mathcal{G}_g = (V, E_g)$ with a finite set of vertices (or nodes) $V$ and set of edges $E_g$.
The vertices correspond to a collection of random variables and we consider the same set $V = \{Y_{1},\ldots,Y_{p}\}$ of cardinality $p$ for all classes $g$.
That is, we consider the same $p$ variables in all $G$ classes.
The edge set $E_g$ is a collection of pairs of distinct vertices $( Y_j, Y_{j'} )$ that are connected by an undirected edge and this collection may differ between classes.
In case we assume $\{Y_{1},\ldots,Y_{p}\} \sim \mathcal{N}_p(\vec{0}, {\vec{\Sigma}}_g)$ for all classes $g$ we are considering multiple Gaussian graphical models.
Conditional independence between a pair of variables in the Gaussian graphical model corresponds to zero entries in the (class-specific) precision matrix.
Let ${\hat{\vec{\Omega}}}{}_g$ denote a generic estimate of the precision matrix in class $g$.
Then the following relations hold for all $Y_{j}, Y_{j'} \in V$ with $j \neq j'$:
\begin{equation*}
({\hat{\vec{\Omega}}}{}_g)_{jj'} = \omega_{jj'}^{(g)} = 0
\quad \Longleftrightarrow \quad
Y_j \independent Y_{j'} \bigm| V \setminus \bigl\{ Y_j, Y_{j'} \bigr\} ~\mbox{in class}~ g
\quad \Longleftrightarrow \quad
( Y_j, Y_{j'} ) \not\in E_g.
\end{equation*}
Hence, determining the (in)dependence structure of the variables for class $g$---or equivalently the edge set $E_g$ of $\mathcal{G}_g$---amounts to determining the support of ${\hat{\vec{\Omega}}}{}_g$.
\subsection{Edge selection}\label{sec:SelectEdge}
We stress that support determination may be skipped entirely as the estimated precision matrices can be interpreted as complete (weighted) graphs.
For sparser graphical representations we resort to support determination by a local false discovery rate (lFDR) procedure \citep{EfronLocFDR} proposed by \citet{SS05}.
This procedure assumes that the nonredundant off-diagonal entries of the partial correlation matrix
\begin{equation*}
(\hat{\vP}{}_g)_{jj'}
= -\hat{\omega}{}_{jj'}^{(g)}
\left(
\hat{\omega}{}_{jj}^{(g)}\hat{\omega}{}_{j'j'}^{(g)}
\right)^{-\frac{1}{2}}
\end{equation*}
follow a mixture distribution representing null and present edges.
The null-distribution is known to be a scaled beta-distribution which allows for estimating the lFDR:
\begin{align*}
\widehat{\mathrm{lFDR}}{}_{jj'}^{(g)}
= P\!\Big(
( Y_j, Y_{j'}) \not\in E_g
\;\Big|\;
(\hat{\vP}{}_g)_{jj'}
\Big),
\end{align*}
which gives the empirical posterior probability that the edge between $Y_j$ and $Y_{j'}$ is null in class $g$ conditional on the observed corresponding partial correlation.
The analogous probability that an edge is present can be obtained by considering $1 - \widehat{\mathrm{lFDR}}{}_{jj'}^{(g)}$.
See \cite{EfronLocFDR,SS05,VanWieringen2014} for further details on the lFDR procedure.
Our strategy will be to select for each class only those edges for which $1 - \widehat{\mathrm{lFDR}}{}_{jj'}^{(g)}$ surpasses a certain threshold (see Section \ref{Illustrate.sec}).
This two-step procedure of regularization followed by subsequent support determination has the advantage that it enables probabilistic statements about the inclusion (or exclusion) of edges.
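The first step of this procedure, computing the partial correlations, is straightforward. The Python sketch below uses a plain absolute threshold as a crude stand-in for the lFDR-based selection (the actual procedure fits the scaled-beta null mixture instead):

```python
import numpy as np

def partial_correlations(Omega):
    """P_{jj'} = -omega_{jj'} / sqrt(omega_{jj} omega_{j'j'}); unit diagonal by convention."""
    d = 1.0 / np.sqrt(np.diag(Omega))
    P = -Omega * np.outer(d, d)
    np.fill_diagonal(P, 1.0)
    return P

# Tridiagonal precision matrix: a chain graph 0 - 1 - 2 - 3
Omega = np.eye(4)
Omega[[0, 1, 2], [1, 2, 3]] = Omega[[1, 2, 3], [0, 1, 2]] = -0.4

P = partial_correlations(Omega)

# Crude stand-in for the lFDR step: keep edges with |partial correlation| > cutoff
threshold = 0.1
E = (np.abs(P) > threshold) & ~np.eye(4, dtype=bool)   # selected edge set

assert np.isclose(P[0, 1], 0.4)       # adjacent in the chain: edge selected
assert E[0, 1] and not E[0, 2]        # non-adjacent pairs have zero partials
```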
\subsection{Common and differential (sub-)networks}
\label{sec:commonAndDifferentialNetworks}
After estimation and sparsification of the class precision matrices, the identification of commonalities and differences between the graphical estimates is of natural interest.
Here we consider some (summary) measures to aid such identifications.
Assume in the following that multiple graphical models have been identified by the sparsified estimates ${\hat{\vec{\Omega}}}{}_1^{0}, \ldots, {\hat{\vec{\Omega}}}{}_G^{0}$ and that the corresponding graphs are denoted by $\mathcal{G}_1, \ldots, \mathcal{G}_G$.
An obvious method of comparison is by pairwise graph differences or intersections.
We utilize the \emph{differential network} $\mathcal{G}_{g_1\setminus g_2} = (V, E_{g_1} \setminus E_{g_2})$ between class $g_1$ and $g_2$ to provide an overview of edges present in one class but not the other.
The \emph{common network} $\mathcal{G}_{g_1\cap g_2} = (V, E_{g_1} \cap E_{g_2})$ is composed of the edges present in both graphs.
We also define the \emph{edge-weighted total network} of $m \leq G$ graphs $\mathcal{G}_1, \ldots, \mathcal{G}_m$ as the graph formed by the union $\mathcal{G}_{1 \cup \cdots \cup m} = (V, E_1 \cup \cdots \cup E_m)$ where the weight $w_{jj'}$ of the edge $e_{jj'}$ is given by the cardinality of the set
$\{g \in \{1, \ldots, m\} : e_{jj'} \in E_g\}$.
More simply, $\mathcal{G}_{1 \cup \cdots \cup m}$ is determined by summing the adjacency matrices of $\mathcal{G}_1$ to $\mathcal{G}_m$.
Analogously, the \emph{signed edge-weighted total network} takes into account the stability of the sign of an edge over the classes by summing signed adjacency matrices.
Naturally, the classes can also be compared by one or more summary statistics at node-, edge-, and network-level per class \citep[cf.][]{Newman10}.
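Since all of these comparisons reduce to operations on adjacency matrices, they are one-liners in practice. A small Python sketch on two hypothetical class graphs:

```python
import numpy as np

def adj(p, edges):
    """Symmetric 0/1 adjacency matrix from an edge list."""
    A = np.zeros((p, p), dtype=int)
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

# Hypothetical sparsified class graphs on the same three vertices:
A1 = adj(3, [(0, 1), (0, 2)])
A2 = adj(3, [(0, 1), (1, 2)])

common       = A1 & A2          # common network: edges present in both classes
diff_1_not_2 = A1 & (1 - A2)    # differential network G_{1 \ 2}
total        = A1 + A2          # edge-weighted total network (sum of adjacencies)

assert total[0, 1] == 2 and total[0, 2] == 1
assert common[0, 1] == 1 and common[0, 2] == 0
assert diff_1_not_2[0, 2] == 1 and diff_1_not_2[0, 1] == 0
```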
We also propose the idea of `network rewiring'.
Suppose an investigator is interested in the specific interaction between genes $A$ and $B$ for classes $g_1$ and $g_2$.
The desire is to characterize the dependency between genes $A$ and $B$ and determine the differences between the two classes.
To do so, we suggest using the decomposition of the covariance of $A$ and $B$ into the individual contributions of all paths between $A$ and $B$.
A path $z$ between $A$ and $B$ of length $t_z$ in a graph for class $g$ is, following \citet{Lauritz96}, defined to be a sequence $A = v_0, \ldots, v_{t_{z}} = B$ of distinct vertices such that $(v_{d-1}, v_d) \in E_g$ for all $d = 1, \ldots, t_z$.
The possibility of the mentioned decomposition was shown by \citet{Jones2005} and, in terms of ${\hat{\vec{\Omega}}}{}_g^{0} = [\omega_{jj'}]$, can be stated as:
\begin{equation}
\label{eq:covdecomp}
\Cov(A, B)
= \sum_{z \in \mathcal{Z}_{AB}}
(-1)^{t_{z}}
\omega_{Av_1}\omega_{v_1v_2}\omega_{v_2v_3} \cdots
\omega_{v_{t_{z}-2}v_{t_{z}-1}}\omega_{v_{t_{z}-1}B}
\frac{\deter{({\hat{\vec{\Omega}}}{}_g^{0})_{\neg P}}}{\deter{{\hat{\vec{\Omega}}}{}_g^{0}}},
\end{equation}
where $\mathcal{Z}_{AB}$ is the set of all paths between $A$ and $B$ and $({\hat{\vec{\Omega}}}{}_g^{0})_{\neg P}$ denotes the matrix ${\hat{\vec{\Omega}}}{}_g^{0}$ with the rows and columns corresponding to the vertex set $P$ of the path $z$ removed.
Each \emph{term} of the covariance decomposition in \eqref{eq:covdecomp} can be interpreted as the flow of information through a given path $z$ between $A$ and $B$ in $\mathcal{G}_g$.
Imagine performing this decomposition for $A$ and $B$ in both ${\hat{\vec{\Omega}}}{}_{g_1}^{0}$ and ${\hat{\vec{\Omega}}}{}_{g_2}^{0}$.
For each path, we can then identify whether it runs through the common network $\mathcal{G}_{g_1\cap g_2}$, or utilizes the differential networks $\mathcal{G}_{g_2\setminus g_1}, \mathcal{G}_{g_1\setminus g_2}$ unique to the classes.
The paths that pass through the differential networks can be thought of as a `rewiring' between the groups (in particular compared to the common network).
In summary, the covariance between a node pair can be separated into a component that is common and a component that is differential (or rewired).
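To make the decomposition concrete, the following numpy sketch (a toy $3 \times 3$ precision matrix, not taken from the paper) enumerates all simple paths between two nodes and verifies that the per-path terms sum to the corresponding covariance; the sign is written as $(-1)^{\#\text{edges}}$, equivalently $(-1)^{k+1}$ for a path on $k$ vertices:

```python
import numpy as np

def path_contributions(Omega, A, B):
    """Decompose Cov(A, B) = (Omega^{-1})[A, B] into per-path terms.

    A simple path z = (A, v_1, ..., B) through the graph implied by the
    nonzero off-diagonal entries of Omega contributes
        (-1)^(#edges) * prod(omegas along z) * det(Omega_{-z}) / det(Omega),
    where Omega_{-z} drops the rows/columns of the path's vertices
    (the determinant of the empty matrix is taken to be 1).
    """
    p = Omega.shape[0]
    det_full = np.linalg.det(Omega)
    terms = {}

    def minor_det(vertices):
        keep = [i for i in range(p) if i not in vertices]
        return np.linalg.det(Omega[np.ix_(keep, keep)]) if keep else 1.0

    def dfs(path):  # depth-first enumeration of simple A-B paths
        v = path[-1]
        if v == B:
            n_edges = len(path) - 1
            w = np.prod([Omega[path[d - 1], path[d]]
                         for d in range(1, len(path))])
            terms[tuple(path)] = ((-1) ** n_edges * w
                                  * minor_det(set(path)) / det_full)
            return
        for u in range(p):
            if u not in path and Omega[v, u] != 0:
                dfs(path + [u])

    dfs([A])
    return terms

Omega = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])
terms = path_contributions(Omega, 0, 2)
total = sum(terms.values())
# total equals Cov(0, 2) = (Omega^{-1})[0, 2]
```

The brute-force path enumeration is exponential in general and only meant to illustrate the decomposition on small graphs.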
\begin{Example}
Suppose we have the following two graphs for classes $g_1 = 1$ and $g_2 = 2$:
\begin{equation*}
\begin{tikzpicture}[node distance = 2mm, auto,
main_node/.style={circle,draw,minimum size=1.2em,inner sep=0pt}]
\node (G1) at (-1.6, 0.5) {$\mathcal{G}_1 =$};
\node[main_node] (n1) at (-0.866,0.5) {$A$};
\node[main_node] (n2) at (0,1) {$B$};
\node[main_node] (n3) at (1,1) {$3$};
\node[main_node] (n4) at (1,0) {$4$};
\node[main_node] (n5) at (0,0) {$5$};
\draw (n1) -- (n2);
\draw (n1) -- (n5) -- (n2);
\draw (n1) -- (n5) -- (n4) -- (n2);
\draw (n5) -- (n3);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[node distance = 2mm, auto,
main_node/.style={circle,draw,minimum size=1.2em,inner sep=0pt}]
\node (G2) at (-1.6, 0.5) {$\mathcal{G}_2 =$};
\node[main_node] (n1) at (-0.866,0.5) {$A$};
\node[main_node] (n2) at (0,1) {$B$};
\node[main_node] (n3) at (1,1) {$3$};
\node[main_node] (n4) at (1,0) {$4$};
\node[main_node] (n5) at (0,0) {$5$};
\draw (n1) -- (n5) -- (n2);
\draw (n1) -- (n5) -- (n4) -- (n3) -- (n2);
\end{tikzpicture}
\end{equation*}
\noindent and consider the covariance between nodes $A$ and $B$.
In $\mathcal{G}_1$ the covariance $\Cov(Y_A, Y_B)$ is decomposed into contributions by the paths $(A,B)$, $(A,5,B)$, and $(A,5,4,B)$.
Similarly for $\mathcal{G}_2$, the contributions are from paths $(A,5,B)$ and $(A,5,4,3,B)$.
Thus $(A,5,B)$ is the only shared path.
Depending on the size of the contributions, we might conclude that network 1 has some `rewired pathways' compared to network 2.
This method gives a concise overview of the estimated interactions between two given genes, of which genes mediate or moderate these interactions, and of how the interaction patterns differ across the classes.
In turn this might suggest candidate genes for perturbation or knock-down experiments.
\QEDE
\end{Example}
\section{Simulation study}\label{Sims.sec}
In this section we explore and measure the performance of the fused estimator and its behavior in four different scenarios.
Performance is measured primarily by the squared Frobenius loss,
\begin{equation*}
L_F^{(g)}\bigl({\hat{\vec{\Omega}}}{}_g({\vec{\Lambda}}), {\vec{\Omega}}_g\bigr)
= \sqfnorm[\big]{ {\hat{\vec{\Omega}}}{}_g({\vec{\Lambda}}) - {\vec{\Omega}}_g },
\end{equation*}
between the class precision estimate and the true population class precision matrix.
However, the performance is also assessed in terms of the quadratic loss,
\begin{equation*}
L_Q^{(g)}\bigl({\hat{\vec{\Omega}}}{}_g({\vec{\Lambda}}), {\vec{\Omega}}_g\bigr)
= \sqfnorm[\big]{ {\hat{\vec{\Omega}}}{}_g({\vec{\Lambda}}){\vec{\Omega}}_g^{-1} - \vec{I}_p }.
\end{equation*}
The risk, defined as the expected loss associated with an estimator, e.g.,
\begin{equation*}
\mathcal{R}_F\bigl\{ {\hat{\vec{\Omega}}}{}_g({\vec{\Lambda}}) \bigr\}
= \mathbb{E}\Bigl[ L_F^{(g)}\bigl({\hat{\vec{\Omega}}}{}_g({\vec{\Lambda}}), {\vec{\Omega}}_g\bigr) \Bigr],
\end{equation*}
is robustly approximated by the median loss over repeated simulations and corresponding estimations.
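For concreteness, both loss functions are a few lines of numpy (toy diagonal matrices chosen so the losses are easy to verify by hand):

```python
import numpy as np

def frobenius_loss(Omega_hat, Omega):
    # Squared Frobenius loss: || Omega_hat - Omega ||_F^2
    return float(np.sum((Omega_hat - Omega) ** 2))

def quadratic_loss(Omega_hat, Omega):
    # Quadratic loss: || Omega_hat @ Omega^{-1} - I_p ||_F^2
    p = Omega.shape[0]
    return float(np.sum((Omega_hat @ np.linalg.inv(Omega) - np.eye(p)) ** 2))

Omega = np.diag([2.0, 1.0])        # true precision matrix (toy)
Omega_hat = np.diag([1.0, 1.0])    # an estimate of it
print(frobenius_loss(Omega_hat, Omega))   # (1 - 2)^2 = 1.0
print(quadratic_loss(Omega_hat, Omega))   # (1/2 - 1)^2 = 0.25
```

Note that the quadratic loss is scale-relative: it measures the deviation of ${\hat{\vec{\Omega}}}{}_g{\vec{\Omega}}_g^{-1}$ from the identity rather than the raw entrywise error.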
We designed four simulation scenarios to explore the properties and performance of the fused ridge estimator and alternatives.
Scenario (1) evaluates the fused ridge estimator under two choices of the penalty matrix,
the non-fused ridge estimate applied individually to the classes, and
the non-fused ridge estimate using the pooled covariance matrix when
(1a)~${\vec{\Omega}}_1 = {\vec{\Omega}}_2$ and
(1b)~${\vec{\Omega}}_1 \neq {\vec{\Omega}}_2$.
Scenario (2) evaluates the fused ridge estimator under different choices of targets:
(2a)~$\vec{T}_1 = \vec{T}_2 = \vec{0}$,
(2b)~$\vec{T}_1 = \vec{T}_2 = \alpha\vec{I}_p$, and
(2c)~$\vec{T}_1 = \vec{T}_2 = {\vec{\Omega}}$.
Scenario (3) evaluates the fused ridge estimator for varying network topologies and degrees of class homogeneity.
Specifically, for
(3a)~scale-free topology and
(3b)~small-world topology,
each with
(3i)~low class homogeneity and
(3ii)~high class homogeneity.
Scenario (4) investigates the fused estimator under non-equal class sample sizes.
Except for scenario 4, we make no distinction between the loss in different classes.
Except for scenario 1, we use penalty matrices of the form ${\vec{\Lambda}} = \lambda\vec{I}_p + \lambda_f(\vec{J}_p-\vec{I}_p)$.
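A sketch of this penalty structure, assuming ${\vec{\Lambda}}$ is the class-by-class ($G \times G$) penalty matrix with ridge penalties on the diagonal and fusion penalties off it (cf.\ the $3 \times 3$ example in \eqref{eq:DLBCLpenatlygraph1}):

```python
import numpy as np

def penalty_matrix(G, lam, lam_f):
    # Lambda = lam * I + lam_f * (J - I): a common ridge penalty lam on the
    # diagonal and a common fusion penalty lam_f on all off-diagonal entries.
    return lam * np.eye(G) + lam_f * (np.ones((G, G)) - np.eye(G))

Lam = penalty_matrix(3, 2.0, 0.1)
print(Lam)
```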
\subsection{Scenario 1: Fusion versus no fusion}
Scenario 1 explores the loss-efficiency of the fused estimate versus non-fused estimates as a function of the class sample size $n_g$ for fixed $p$ and hence for different $p/n_\summed$ ratios.
Banded population precision matrices are simulated from $G = 2$ classes.
We set $p = 30$ and
\begin{equation}
\label{eq:banded}
({\vec{\Omega}}_g)_{jj'}
= \frac{k + 1}{\abs{j - j'} + 1} \mathds{1}\bigl[\abs{j - j'} \leq k\bigr]
\end{equation}
with $k$ non-zero off-diagonal bands.
The sub-scenario
(1a) ${\vec{\Omega}}_1 = {\vec{\Omega}}_2$ uses $k = 15$ bands whereas
(1b) ${\vec{\Omega}}_1 \neq {\vec{\Omega}}_2$ uses $k = 15$ bands for ${\vec{\Omega}}_1$ and $k = 2$ bands for ${\vec{\Omega}}_2$.
Hence, identical and very different population precision matrices are considered, respectively.
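The banded construction in \eqref{eq:banded} can be sketched as follows (numpy; small $p$ and $k$ for readability):

```python
import numpy as np

def banded_precision(p, k):
    # (Omega)_{jj'} = (k + 1) / (|j - j'| + 1) for |j - j'| <= k, else 0
    idx = np.arange(p)
    dist = np.abs(idx[:, None] - idx[None, :])
    return np.where(dist <= k, (k + 1) / (dist + 1), 0.0)

Omega = banded_precision(5, 2)
# diagonal entries = 3, first off-diagonal band = 1.5, second band = 1
print(Omega)
```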
For $n_g = 10, 25, 70$ the loss over $20$ repeated runs was computed.
In each run, the optimal \emph{unrestricted} penalty matrix ${\vec{\Lambda}}$ was determined by LOOCV.
The losses were computed for
(1i) the fused ridge estimator with an unrestricted penalty matrix,
(1ii) the fused ridge estimator with a restricted penalty matrix such that $\lambda_{11} = \lambda_{22}$,
(1iii) the regular non-fused ridge estimator applied separately to each class, and
(1iv) the regular non-fused ridge estimator using the pooled estimate $\vec{S}_\summed$.
In all cases the targets $\vec{T}_1 = \vec{T}_2 = \alpha_\summed\vec{I}_p$ were used with $\alpha_\summed = p/\tr(\vec{S}_\summed)$.
The risk and quartile losses for scenario 1 are seen in the boxplots of Figure \ref{fig:plot_fig1}A.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{A: Results for Scenario 1. The losses against the class sample size for different ridge estimators under unequal and equal class population matrices. B: Results for Scenario 2. The losses against the class sample size with different target matrices.}
\label{fig:plot_fig1}
\end{figure}
Generally, the \emph{unrestricted} fused estimates are found to perform at least as well as the (superior of the) \emph{non-fused} estimates.
This can be expected as the fused ridge estimate might be regarded as an interpolation between using the non-fused ridge estimator on the pooled data and within each class separately.
The LOOCV procedure is thus able to capture and select the appropriate penalties both when the underlying population matrices are very similar and when they are very dissimilar.
In the case of differing class population precision matrices, the \emph{restricted} fused ridge estimator (that uses the single ridge penalty $\lambda_{11} = \lambda_{22}$) performs somewhat intermediately, indicating again the added value of the flexible penalty setup.
It is unsurprising that the non-fused estimate using the pooled covariance matrix is superior in scenario 1a, where ${\vec{\Omega}}_1 = {\vec{\Omega}}_2$, as it is the natural estimator in this scenario, cf.\ Section 2.2 of the Supplementary Material.
\subsection{Scenario 2: Target versus no target}
Scenario 2 investigates the added value of the targeted approach to fused precision matrix estimation compared to setting $\vec{T}_g = \vec{0}$, which reduces to the special case considered by \citet{Price2014}.
We simulated data sets with $G = 2$ classes from banded precision matrices (as given in \eqref{eq:banded}) with $p = 50$ variables and $k = 25$ bands for varying class sample sizes $n_g$ and target matrices $\vec{T}_1$ and $\vec{T}_2$.
The performance was evaluated using
(2a)~$\vec{T}_1 = \vec{T}_2 = \vec{0}$,
(2b)~$\vec{T}_g = \alpha_\summed\vec{I}_p$, as above, and
(2c)~the spot-on target $\vec{T}_1 = \vec{T}_2 = {\vec{\Omega}}$
for each of $n_g = 25, 50, 125$ class sample sizes.
As above, risks were estimated by the losses for each class over $20$ simulation repetitions.
The optimal penalties were determined by LOOCV with penalty matrices of the form ${\vec{\Lambda}} = \lambda\vec{I}_p + \lambda_f(\vec{J}_p-\vec{I}_p)$.
The results are shown in the boxplots in Panel B of Figure~\ref{fig:plot_fig1}.
As expected, the spot-on target shows superior performance in terms of loss in all cases.
This suggests that well-informed choices of the target can greatly improve the estimation and that the algorithm will put emphasis on the target if it reflects the truth.
Such behavior is also seen analytically in the ridge estimator of \citet{SS05}, as inferred from their closed-form expression for the optimal penalty.
We see that, compared to the no-target situation, the scalar target $\alpha_\summed\vec{I}_p$ results in an equal or lower risk in terms of the quadratic loss, but not the Frobenius loss.
As this scenario corresponds to the case of \citet{Price2014} we performed a secondary timing benchmark of their accompanying \texttt{RidgeFusion} package compared to \texttt{rags2ridges}.
We evaluated estimation time of each package on a single simulated data set with
$p = 50$,
$G = 2$, and
$n_1 = n_2 = 10$ using a banded matrix as before.
The average estimation times over 100 model fits were 8.9 and 26 milliseconds for the packages \texttt{rags2ridges} and \texttt{RidgeFusion}, respectively.
This amounts to an approximate factor 2.94 speed-up for a single model fit.
The timing was done using the package \texttt{microbenchmark} \citep{Mersmann2014} and the estimates from each package were in agreement within expected numerical precision.
\subsection{Scenario 3: Varying topology and class (dis)similarity}
Scenario 3 investigates the fused estimator with $G = 3$ classes for (3i) low and (3ii) high class homogeneity and two different latent random graph topologies on $p = 50$ variables.
The topologies are the (3a) `scale-free' and the (3b) `small-world' topology, generated by the Barab\'{a}si and Watts-Strogatz graph games, respectively.
The former game generates networks with (few) highly connected hubs, while the latter generates topologies where all node degrees are similar.
From the generated topology, we construct a latent precision matrix ${\vec{\Phi}}$ with diagonal elements set to $1$ and the non-zero off-diagonal entries dictated by the network topology set to $0.1$.
These two topologies are of interest as they imitate many real-world phenomena and processes.
Small-world topologies approximate systems such as power grids, the neural network of the worm \emph{C.~elegans}, and the social networks of film actors \citep{Mei2011, Watts1998}.
Conversely, scale-free topologies approximate many social networks, protein-protein interaction networks, airline networks, the world wide web, and the internet \citep{Barabasi1999, Barabasi2009}.
We control the inter-class homogeneity using a latent inverse Wishart distribution for each class covariance matrix as considered by \citet{Bilgrau2015b}.
That is, we let
\begin{align}
\label{eq:invwishart}
{\vec{\Sigma}}_g = {\vec{\Omega}}_g^{-1} \sim \mathcal{W}_p^{-1}\Big((\nu - p - 1){\vec{\Phi}}^{-1}, \nu\Big),
\quad \nu > p + 1
\end{align}
where $\mathcal{W}_p^{-1}({\vec{\Theta}}, \nu)$ denotes an inverse Wishart distribution with scale matrix ${\vec{\Theta}}$ and $\nu$ degrees of freedom.
The parametrization implies the expected value $\mathbb{E}[{\vec{\Sigma}}_g] = \mathbb{E}[{\vec{\Omega}}_g^{-1}] = {\vec{\Phi}}^{-1}$ and thus ${\vec{\Phi}}$ defines the latent expected topology.
We simulate from a multivariate normal distribution as before conditional on the realized covariance ${\vec{\Sigma}}_g$.
In \eqref{eq:invwishart}, the parameter $\nu$ controls the inter-class homogeneity.
Large values of $\nu$ imply that ${\vec{\Omega}}_1 \approx {\vec{\Omega}}_2 \approx {\vec{\Omega}}_3$ and thus a high class homogeneity.
Small values of $\nu \to (p + 1)^+$ imply large heterogeneity.
For the simulations, we chose (3i) $\nu = 100$ and (3ii) $\nu = 1000$.
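A numpy-only sketch of drawing class covariances from \eqref{eq:invwishart}, using the fact that inverting a Wishart draw yields an inverse-Wishart draw (toy ${\vec{\Phi}} = \vec{I}_p$; the helper name is ours):

```python
import numpy as np

def r_inv_wishart(Theta, nu, rng):
    """Draw from W_p^{-1}(Theta, nu): if W ~ W_p(Theta^{-1}, nu),
    then W^{-1} ~ W_p^{-1}(Theta, nu)."""
    p = Theta.shape[0]
    V = np.linalg.inv(Theta)
    X = rng.multivariate_normal(np.zeros(p), V, size=nu)  # nu iid N(0, V) rows
    return np.linalg.inv(X.T @ X)

rng = np.random.default_rng(42)
p, nu = 5, 100                              # larger nu -> higher homogeneity
Phi = np.eye(p)                             # latent expected precision (toy)
Theta = (nu - p - 1) * np.linalg.inv(Phi)   # so that E[Sigma_g] = Phi^{-1}
Sigmas = [r_inv_wishart(Theta, nu, rng) for _ in range(3)]
```

Each draw ${\vec{\Sigma}}_g$ is then used as the covariance of the multivariate normal from which the class data are simulated.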
Again we fitted the model using both the zero target as well as the scalar matrix target described above using the reciprocal value of the mean eigenvalue, i.e., $\vec{T}_1 = \vec{T}_2 = \vec{T}_3 = \alpha \vec{I}_p$ for both $\alpha = 0$ and $\alpha = p/\tr(\vec{S}_\summed)$.
The estimation was repeated 20 times for each combination of high/low class similarity, network topology, choice of target, and class sample size $n_1 = n_2 = n_3 = 25, 50, 125$.
Panels A and B of Figure \ref{fig:plot_sim3_res} show box-plots of the results.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{A: Scenario 3. The Frobenius loss as a function of sample size under different topologies and degree of similarity. B: Scenario 3. The quadratic loss as a function of sample size under different topologies and degree of similarity. C: Scenario 4. The loss as a function of sample size of class 1 with fixed sample size for class 2.}
\label{fig:plot_sim3_res}
\end{figure}
First, the loss is seen to be dependent on the network topology, irrespective of the loss function.
Second, as expected, the loss is strongly influenced by the degree of class (dis)similarity where a higher homogeneity yields a lower loss.
Intuitively, this makes sense as the estimator can borrow strength across the classes and effectively increase the degrees of freedom in each class.
Third, the targeted approach has a superior loss in all cases with a high class homogeneity, where its gain in loss-efficiency is thus greatest.
For low class homogeneity, the targeted approach performs comparably to the zero target with respect to the Frobenius loss, while it is seemingly better in terms of the quadratic loss.
Measured by quadratic loss, the targeted approach nearly always outperforms the zero target.
\subsection{Scenario 4: Unequal class sizes}
Scenario 4 explores the fused estimator under unequal class sample sizes.
We simulated data with $k = 8$ non-zero off-diagonal bands, $G = 2$, and $p = 50$.
The number of samples in class 2 was fixed at $n_2 = 30$ while the number of samples in class 1 was varied: $n_1 = 5, 25, 50, 75$.
The results of the simulation are shown in Panel C of Figure \ref{fig:plot_sim3_res}.
Note that we consider the Frobenius and quadratic loss within each class separately here.
Not surprisingly, the fused estimator performs better (for both classes) when $n_\summed$ increases.
Perhaps more surprisingly, there seems to be no substantial difference in quadratic loss between classes 1 and 2, suggesting that the fusion indeed borrows strength from the larger class.
A loss difference is only visible in the most extreme case where $n_1 = 5$ and $n_2 = 30$.
The relative difference, however, is not large.
\section{Applications}\label{Illustrate.sec}
Lymphoma refers to a group of cancers that originate in specific cells of the immune system such as white blood T- or B-cells.
Approximately $90\%$ of all lymphoma cases are non-Hodgkin's lymphomas---a diverse group of blood cancers excluding Hodgkin's disease---of which the aggressive diffuse large B-cell lymphomas (DLBCL) constitute the largest subgroup~\citep{Project1997}.
We showcase the usage of the fused ridge estimator through two analyses of DLBCL data.
In DLBCL, there exist at least two major genetic subtypes of tumors, named after their similarities in gene expression with activated B-cells (ABC) and germinal centre B-cells (GCB).
A third \emph{umbrella} class, usually designated as Type III, contains tumors that cannot be classified as being either of the ABC or GCB subtype.
Patients with tumors of the GCB class show a favorable clinical prognosis compared to that of ABC.
Even though the genetic subtypes have been known for more than a decade \citep{Alizadeh2000} and despite the appearance of refinements to the DLBCL classification system \citep{DykaerBoegsted2015}, DLBCL is still treated as a singular disease in daily clinical practice and the first differentiated treatment regimens have only recently started to appear in clinical trials \citep{Ruan2011,Nowakowski2015}.
Many known phenotypic differences between ABC and GCB are associative, which might explain the translational inertia.
Hence, the biological underpinnings and \emph{functional differences} between ABC and GCB are of central interest and the motivation for the analyses below.
Incorrect regulation of the NF-$\kappa$B signaling pathway, responsible for, among other things, control of cell survival, has been linked to cancer.
This pathway has certain known drivers of deregulation.
Aberrant interferon $\beta$ production due to recurrent oncogenic mutations in the central MYD88 gene interferes with cell cycle arrest and apoptosis \citep{Yang2012}.
It is also well known that BCL2, another member of the NF-$\kappa$B pathway, is deregulated in DLBCL \citep{Schuetz2012}.
Moreover, a deregulated NF-$\kappa$B pathway is a key hallmark distinguishing the poor prognostic ABC subclass from the good prognostic GCB subclass of DLBCL \citep{Roschewski2014}.
Our illustrative analyses thus focus on the \emph{functional differences} between ABC and GCB in relation to the NF-$\kappa$B pathway.
Section \ref{sec:analysis1} investigates the DLBCL classes in the context of a single data set on the NF-$\kappa$B signaling pathway.
Section \ref{sec:dlbcl2} analyzes multiple DLBCL NF-$\kappa$B data sets with a focus on finding common motifs and motif differences in network representations of pathway-deregulation.
These analyses show the value of a fusion approach to integration.
In all analyses we take the NF-$\kappa$B pathway and its constituent genes to be defined by the Kyoto Encyclopedia of Genes and Genomes (KEGG) database \citep{Kanehisa2000}.
\subsection{Nonintegrative analysis of DLBCL subclasses}
\label{sec:analysis1}
We first analyze the data from \citet{DykaerBoegsted2015}, consisting of $89$ DLBCL tumor samples.
These samples were RMA-normalized using custom brainarray chip definition files (CDF) \citep{Dai2005} and the {\textsf{R}}-package \texttt{affy} \citep{Gautier2004}.
This preprocessing used Entrez gene identifiers (EIDs) from the National Center for Biotechnology Information (NCBI), which are also used by KEGG.
The usage of custom CDFs avoids the mapping problems between Affymetrix probeset IDs and KEGG.
Moreover, the custom CDFs can increase the robustness and precision of the expression estimates \citep{Lu2006, Sandberg2007}.
The RMA-preprocessing yielded 19,764 EIDs.
Subsequently, the features were reduced to the available 82 out of the 91 EIDs present in the KEGG NF-$\kappa$B pathway.
The samples were then partitioned, using the DLBCL automatic classifier (DAC) by \citet{Care2013}, into the three classes ABC $(n_1=31)$, III $(n_2=13)$, and GCB $(n_3=45)$, and gene-wise centered to have zero mean within each class.
The analysis was performed with the following settings.
Target matrices for the groups were chosen to be scalar matrices with the scalar determined by the inverse of the average eigenvalue of the corresponding sample class covariance matrix, i.e.:
\begin{equation*}
\vec{T}_\text{ABC} = \alpha_1\vec{I}_p, \quad
\vec{T}_\text{III} = \alpha_2\vec{I}_p, \quad
\vec{T}_\text{GCB} = \alpha_3\vec{I}_p,
\quad\text{where}\quad
\alpha_g = \frac{p}{\tr(\vec{S}_g)}.
\end{equation*}
These targets translate to a class-scaled `prior' of conditional independence for all genes in NF-$\kappa$B.
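The targets are cheap to construct (hypothetical class covariance matrix; numpy sketch):

```python
import numpy as np

def scalar_target(S):
    # T_g = alpha_g * I with alpha_g = p / tr(S_g): the reciprocal of the
    # average eigenvalue of the sample class covariance matrix S_g.
    p = S.shape[0]
    alpha = p / np.trace(S)
    return alpha * np.eye(p)

S_g = np.diag([4.0, 1.0, 1.0])   # hypothetical class covariance matrix
T_g = scalar_target(S_g)
print(T_g[0, 0])                 # 3 / 6 = 0.5
```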
The optimal penalties were determined by
LOOCV
using the penalty matrix and graph given in \eqref{eq:DLBCLpenatlygraph1}.
Note that the penalty setup bears resemblance to Example~\ref{ex:2}.
Differing class-specific ridge penalties were allowed because of considerable differences in class sample size.
Direct shrinkage between ABC and GCB was disabled by fixing the corresponding pair-fusion penalty to zero.
The remaining fusion penalties were free to be estimated.
Usage of the Nelder-Mead optimization procedure then resulted in the optimal values given on the right-hand side of \eqref{eq:DLBCLpenatlygraph1} below:
\begin{equation}
\begin{tikzpicture}[node distance = 2mm, auto,
baseline={([yshift=-.75ex]current bounding box.center)},
main_node/.style={circle,draw,minimum size=1em,inner sep=1pt}]
\node [main_node, label={[yshift=0.05cm]$\text{ABC}$}] (n1) at (0,0) {$\lambda_{11}$};
\node [main_node, label={[yshift=0.045cm]$\text{Type III}$}] (n2) at (1.5,0) {$\lambda_{22}$};
\node [main_node, label={[yshift=0.05cm]$\text{GCB}$}] (n3) at (3,0) {$\lambda_{33}$};
\path
(n1) edge node [below, midway] {$\lambda_{12}$} (n2)
(n2) edge node [below, midway] {$\lambda_{23}$} (n3);
\end{tikzpicture}
\quad
\label{eq:DLBCLpenatlygraph1}
{\vec{\Lambda}}^\ast
=
\begin{bmatrix}
\lambda_{11} & \lambda_{12} & 0\\
\lambda_{12} & \lambda_{22} & \lambda_{23}\\
0 & \lambda_{23} & \lambda_{33}
\end{bmatrix}
=
\begin{bmatrix} 2 & 1.5\times 10^{-3} & 0 \\ 1.5\times 10^{-3} & 2.7 & 2\times 10^{-3} \\ 0 & 2\times 10^{-3} & 2.3 \end{bmatrix}\begin{array}{l}\text{ABC} \\ \text{III} \\ \text{GCB}\end{array}
\end{equation}
The ridge penalties of classes ABC and GCB are seen to be comparable in size.
The small size of the Type III class leads to a relatively larger penalty to ensure a well-conditioned and stable estimate.
The estimated fusion penalties are all relatively small, implying that heavy fusion is undesirable due to class differences.
The three class-specific precision matrices were estimated under ${\vec{\Lambda}}^\ast$ and subsequently scaled to partial correlation matrices.
Panels A--C of Figure~\ref{fig:pw_analysis1} visualize these partial correlation matrices.
In general, the ABC and GCB classes seem to carry more signal in both the negative and positive range vis-\`{a}-vis the Type III class.
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{pathway_map04064_analysis_1.pdf}
\caption{
\emph{Top}: Heat maps and color key of the partial correlation matrices for the
ABC (panel A), III (panel B), and GCB (panel C) classes in the NF-$\kappa$B signaling pathway on the \citet{DykaerBoegsted2015} data.
\emph{Bottom}: Graphs corresponding to the sparsified precision matrices for the classes above.
Red and blue edges correspond to positive and negative partial correlations, respectively.
\emph{Far right-panel}: EID key and corresponding Human Genome Organization (HUGO) Gene Nomenclature Committee (HGNC) curated gene names of the NF-$\kappa$B signaling pathway genes.
Genes that are connected in panels D--F are shown in bold.
}
\label{fig:pw_analysis1}
\end{figure}
Post-hoc support determination was carried out on the partial correlation matrices using the class-wise $\mathrm{lFDR}$ approach of Section \ref{sec:SelectEdge}.
The lFDR threshold was conservatively set at $0.99$, selecting 39, 85, and 34 edges for the classes ABC, III, and GCB, respectively.
The relatively high number of edges selected for the Type III class is (at least partly) due to the difficulty of determining the mixture distribution mentioned in Section \ref{sec:SelectEdge} when the overall partial correlation signal is relatively flat.
Panels D--F of Figure~\ref{fig:pw_analysis1} then show the conditional independence graphs corresponding to the sparsified partial correlation matrices.
We note that a single connected component is identified in each class, suggesting, at least for the ABC and GCB classes, a genuine biological signal.
A secondary supporting overview is provided in Table~\ref{tab:netstats}, which lists the most central genes in the graphs of Panels D--F by two measures of node centrality: degree and betweenness.
The node degree indicates the number of edges incident upon a particular node.
The betweenness centrality indicates in how many shortest paths between vertex pairs a particular node acts as an intermediate vertex.
Both measures are proxies for the importance of a feature.
See, e.g., \cite{Newman10} for an overview of these and other centrality measures.
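For small graphs the two centrality measures can be computed by brute force (a stdlib-only Python sketch on a hypothetical 5-node graph where node 2 bridges the two sides; dedicated graph libraries and Brandes' algorithm are preferable at scale):

```python
import itertools
from collections import deque

# Hypothetical undirected graph on 5 nodes; node 2 acts as a bridge.
edges = {(0, 2), (1, 2), (2, 3), (3, 4)}
adj = {v: set() for v in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree: number of edges incident upon each node.
degree = {v: len(adj[v]) for v in adj}

def shortest_paths(s, t):
    """All shortest s-t paths via breadth-first enumeration."""
    best, out, queue = None, [], deque([(s, (s,))])
    while queue:
        v, path = queue.popleft()
        if best is not None and len(path) > best:
            break  # only shortest paths are wanted
        if v == t:
            best = len(path)
            out.append(path)
            continue
        for u in adj[v]:
            if u not in path:
                queue.append((u, path + (u,)))
    return out

# Betweenness: fraction of shortest s-t paths through v, summed over all
# unordered pairs {s, t} not containing v.
betweenness = {v: 0.0 for v in adj}
for s, t in itertools.combinations(adj, 2):
    paths = shortest_paths(s, t)
    for v in adj:
        if v not in (s, t):
            betweenness[v] += sum(v in p for p in paths) / len(paths)

print(degree, betweenness)
```

On this toy graph the bridging node 2 dominates both measures, mirroring how central genes stand out in Table~\ref{tab:netstats}.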
It is seen that the CCL, CXCL, and TNF gene families are well-represented as central and connected nodes across all classes.
The gene CCL21 is very central in classes ABC and III, but less so in the GCB class.
From Panels D--F of Figure~\ref{fig:pw_analysis1} it is seen that BCL2 and BCL2A1 are only connected in the non-ABC classes.
Contrary to expectation, MYD88 is disconnected in all graphs.
The genes ZAP70, LAT, and LCK found in Figure \ref{fig:pw_analysis1} and Table~\ref{tab:netstats} are well-known T-cell specific genes involved in the initial T-cell receptor-mediated activation of NF-$\kappa$B in T-cells \citep{Bidere2009}.
From the differences in connectivity of these genes, different abundances of activated T-cells or different NF-$\kappa$B activation programs for ABC/GCB might be hypothesized.
\begin{table}[!tbp]
{\small
\caption{The most central genes, their EID, and their plot index. For each class and node, the degree (with the number of positive and negative edges connected to that node in parentheses) and the betweenness centrality is shown. Only the 15 genes with the highest degrees summed over each class are shown.\label{tab:netstats}}
\begin{center}
\begin{tabular}{llrclrclrclr}
\hline\hline
\multicolumn{1}{l}{\bfseries }&\multicolumn{2}{c}{\bfseries }&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries ABC}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries III}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries GCB}\tabularnewline
\cline{5-6} \cline{8-9} \cline{11-12}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{EID}&\multicolumn{1}{c}{Index}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{Degree}&\multicolumn{1}{c}{Betw.}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{Degree}&\multicolumn{1}{c}{Betw.}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{Degree}&\multicolumn{1}{c}{Betw.}\tabularnewline
\hline
CCL21&6366&$77$&&$9\:(5^+, 4^-)$&$202.0$&&$17\:(9^+, 8^-)$&$297.00$&&$4\:(3^+, 1^-)$&$106$\tabularnewline
CXCL8&3576&$38$&&$5\:(2^+, 3^-)$&$126.0$&&$12\:(4^+, 8^-)$&$234.00$&&$4\:(1^+, 3^-)$&$ 56$\tabularnewline
CCL19&6363&$78$&&$4\:(4^+, 0^-)$&$120.0$&&$10\:(6^+, 4^-)$&$ 91.70$&&$6\:(6^+, 0^-)$&$230$\tabularnewline
LTA&4049&$80$&&$5\:(3^+, 2^-)$&$143.0$&&$10\:(6^+, 4^-)$&$195.00$&&$3\:(3^+, 0^-)$&$ 56$\tabularnewline
CXCL12&6387&$40$&&$3\:(2^+, 1^-)$&$ 84.2$&&$12\:(5^+, 7^-)$&$187.00$&&$2\:(2^+, 0^-)$&$ 27$\tabularnewline
CXCL2&2920&$76$&&$3\:(3^+, 0^-)$&$ 61.0$&&$11\:(5^+, 6^-)$&$196.00$&&$3\:(2^+, 1^-)$&$ 53$\tabularnewline
LTB&4050&$81$&&$4\:(3^+, 1^-)$&$ 85.5$&&$5\:(3^+, 2^-)$&$ 4.24$&&$6\:(3^+, 3^-)$&$ 98$\tabularnewline
CD14&929&$51$&&$3\:(2^+, 1^-)$&$ 20.2$&&$6\:(3^+, 3^-)$&$ 25.90$&&$3\:(2^+, 1^-)$&$ 32$\tabularnewline
CCL4&6351&$74$&&$2\:(1^+, 1^-)$&$ 5.0$&&$8\:(5^+, 3^-)$&$118.00$&&$2\:(1^+, 1^-)$&$ 4$\tabularnewline
ZAP70&7535&$48$&&$3\:(2^+, 1^-)$&$ 60.0$&&$5\:(4^+, 1^-)$&$ 50.70$&&$3\:(2^+, 1^-)$&$ 75$\tabularnewline
CCL13&6357&$39$&&$4\:(3^+, 1^-)$&$119.0$&&$5\:(3^+, 2^-)$&$ 19.70$&&$1\:(1^+, 0^-)$&$ 0$\tabularnewline
TNFSF11&8600&$42$&&$5\:(4^+, 1^-)$&$160.0$&&$2\:(1^+, 1^-)$&$ 0.00$&&$3\:(2^+, 1^-)$&$ 55$\tabularnewline
TNF&7124&$16$&&$1\:(1^+, 0^-)$&$ 0.0$&&$4\:(2^+, 2^-)$&$ 1.68$&&$3\:(3^+, 0^-)$&$ 24$\tabularnewline
LAT&27040&$49$&&$2\:(2^+, 0^-)$&$ 0.0$&&$4\:(4^+, 0^-)$&$ 15.80$&&$2\:(2^+, 0^-)$&$ 0$\tabularnewline
LCK&3932&$62$&&$2\:(0^+, 2^-)$&$ 31.0$&&$3\:(3^+, 0^-)$&$ 10.00$&&$3\:(2^+, 1^-)$&$ 64$\tabularnewline
\hline
\end{tabular}\end{center}}
\end{table}
\begin{table}[!tbp]
\caption{Overview of data sets, the defined classes, and the number of samples. In GSE31312, 28 samples were not classified with the DAC due to technical issues and hence do not appear in this table. In the pilot study GSE11318, 31 samples were primary mediastinal B-cell lymphoma and were left out. Note also that the pilot data set GSE11318 was not classified by the DAC.\label{tab:dlbclinfo}}
\begin{center}
\begin{tabular}{lrrcrrcrrcr}
\hline\hline
\multicolumn{1}{l}{\bfseries }&\multicolumn{2}{c}{\bfseries ABC}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries Type III}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries GCB}&\multicolumn{1}{c}{\bfseries }&\multicolumn{1}{c}{\bfseries }\tabularnewline
\cline{2-3} \cline{5-6} \cline{8-9}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{$g$}&\multicolumn{1}{c}{$n_g$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$g$}&\multicolumn{1}{c}{$n_g$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$g$}&\multicolumn{1}{c}{$n_g$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$\sum n_g$}\tabularnewline
\hline
{\bfseries Pilot data}&&&&&&&&&&\tabularnewline
~~GSE11318&\gray $$& $ 74$& &\gray $$& $ 71$& &\gray $$& $ 27$& & $ 172$\tabularnewline
\hline
{\bfseries Data set}&&&&&&&&&&\tabularnewline
~~GSE56315&\gray $ 1$& $ 31$& &\gray $ 2$& $ 13$& &\gray $ 3$& $ 45$& & $ 89$\tabularnewline
~~GSE19246&\gray $ 4$& $ 51$& &\gray $ 5$& $ 30$& &\gray $ 6$& $ 96$& & $ 177$\tabularnewline
~~GSE12195&\gray $ 7$& $ 40$& &\gray $ 8$& $ 18$& &\gray $ 9$& $ 78$& & $ 136$\tabularnewline
~~GSE22895&\gray $10$& $ 31$& &\gray $11$& $ 21$& &\gray $12$& $ 49$& & $ 101$\tabularnewline
~~GSE31312&\gray $13$& $146$& &\gray $14$& $ 97$& &\gray $15$& $ 224$& & $ 467$\tabularnewline
~~GSE10846.CHOP&\gray $16$& $ 64$& &\gray $17$& $ 28$& &\gray $18$& $ 89$& & $ 181$\tabularnewline
~~GSE10846.RCHOP&\gray $19$& $ 75$& &\gray $20$& $ 42$& &\gray $21$& $ 116$& & $ 233$\tabularnewline
~~GSE34171.hgu133plus2&\gray $22$& $ 23$& &\gray $23$& $ 15$& &\gray $24$& $ 52$& & $ 90$\tabularnewline
~~GSE34171.hgu133AplusB&\gray $25$& $ 18$& &\gray $26$& $ 17$& &\gray $27$& $ 43$& & $ 78$\tabularnewline
~~GSE22470&\gray $28$& $ 86$& &\gray $29$& $ 43$& &\gray $30$& $ 142$& & $ 271$\tabularnewline
~~GSE4475&\gray $31$& $ 73$& &\gray $32$& $ 20$& &\gray $33$& $ 128$& & $ 221$\tabularnewline
~~$\sum n_g$&\gray $$& $638$& &\gray $$& $344$& &\gray $$& $1062$& & $2044$\tabularnewline
\hline
\end{tabular}\end{center}
\end{table}
\subsection{Integrative DLBCL analysis}
\label{sec:dlbcl2}
We now expand the analysis of the previous section to show the advantages of integration by fusion.
A large number of DLBCL gene expression profile (GEP) data sets is freely available at the NCBI Gene Expression Omnibus (GEO) website \citep{Barrett2013}.
We obtained 11 large-scale DLBCL data sets whose GEO-accession numbers (based on various Affymetrix microarray platforms) can be found in the first column of Table~\ref{tab:dlbclinfo}.
One of the sets, with GEO-accession number GSE11318, is treated as a pilot/training data set for the construction of target matrices (see below).
The GSE10846 set is composed of two distinct data sets corresponding to two treatment regimens (R-CHOP and CHOP) as well as different time-periods of study.
Likewise, GSE34171 is composed of three data sets corresponding to the respective microarray platforms used: HG-U133A, HG-U133B, and HG-U133 plus 2.0.
As the samples on HG-U133A and HG-U133B were paired and run on \emph{both} platforms, the (overlapping) features were averaged to form a single virtual microarray comparable to that of HG-U133 plus 2.0.
Note that the \citet{DykaerBoegsted2015} data used in Section \ref{sec:analysis1} is part of the total batch under GEO-accession number GSE56315.
The sample sizes for the individual data sets vary in the range 78--495 and can also be found in Table~\ref{tab:dlbclinfo}.
The data yield a total of 2,276 samples, making this, to our knowledge, the hitherto largest integrative DLBCL study.
Similar to above, all data sets were RMA-normalized using custom brainarray CDFs and the {\textsf{R}}-package \texttt{affy}.
Again, NCBI EIDs were used to avoid non-bijective gene-ID translations between the array-platforms and the KEGG database.
The freely available {\textsf{R}}-package \texttt{DLBCLdata} was created to automate the download and preprocessing of the data sets in a reproducible and convenient manner.
See the \texttt{DLBCLdata} documentation \citep{DLBCLdata} for more information.
Subsequently, the data sets were reduced to the intersecting 11,908 EIDs present on all platforms.
All samples in all data sets, except for the pilot study GSE11318, were classified as either ABC, GCB, or Type III using the DAC mentioned above.
The same classifier was used in all data sets to obtain a uniform classification scheme and thus maximize the comparability of the classes across data sets.
Subsequently, the features were reduced to the EIDs present in the NF-$\kappa$B pathway and gene-wise centered to have zero mean within each combination of DLBCL subtype and data set.
We thus have a two-way study design---DLBCL subtypes and multiple data sets---analogous to Example~\ref{ex:3}.
A concise overview of each of the $11 \times 3 = 33$ classes for the non-pilot data is provided in Table~\ref{tab:dlbclinfo}.
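The within-class centering step above can be sketched in a few lines. The following is a toy illustration with simulated data and invented column names, not the actual \texttt{DLBCLdata} preprocessing pipeline:

```python
import numpy as np
import pandas as pd

# Toy expression matrix: rows = samples, columns = genes (EIDs).
rng = np.random.default_rng(1)
genes = ["g1", "g2", "g3", "g4"]
df = pd.DataFrame(rng.normal(size=(12, 4)), columns=genes)
df["subtype"] = ["ABC", "GCB", "TypeIII"] * 4
df["dataset"] = ["DS1"] * 6 + ["DS2"] * 6

# Gene-wise centering to zero mean within each subtype x data-set class.
df[genes] = df.groupby(["subtype", "dataset"])[genes].transform(lambda x: x - x.mean())

# Every class now has (numerically) zero mean per gene.
assert np.allclose(df.groupby(["subtype", "dataset"])[genes].mean(), 0.0)
```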
The target matrices were constructed from the pilot data in an attempt to use information in the directed representation $\mathcal{G}_\mathrm{pw}$ of the NF-$\kappa$B pathway obtained from KEGG.
The directed graph represents direct and indirect causal interactions between the constituent genes.
It was obtained from the KEGG database via the {\textsf{R}}-package \texttt{KEGGgraph} \citep{Zhang2015}.
A target matrix was constructed for each DLBCL subtype using the pilot data and the information from the directed topology by computing node contributions using multiple linear regression models.
That is, from an initial $\vec{T} = \vec{0}$, we update $\vec{T}$ for each node $\alpha \in V(\mathcal{G}_\mathrm{pw})$ through the following sequence:
\begin{align*}
T_{\alpha, \alpha} &:= T_{\alpha, \alpha} + \tfrac{1}{\sigma^2} \\
\vec{T}_{\pa(\alpha), \alpha} &:= \vec{T}_{\pa(\alpha), \alpha} + \tfrac{1}{\sigma^2} \vec{\beta}_{\pa(\alpha)} \\
\vec{T}_{\alpha, \pa(\alpha)} &:= \vec{T}_{\alpha, \pa(\alpha)} + \tfrac{1}{\sigma^2} \vec{\beta}_{\pa(\alpha)} \\
\vec{T}_{\pa(\alpha), \pa(\alpha)}
&:= \vec{T}_{\pa(\alpha), \pa(\alpha)} + \tfrac{1}{\sigma^2} \vec{\beta}_{\pa(\alpha)}\vec{\beta}_{\pa(\alpha)}^\top,
\end{align*}
where $\pa(\alpha)$ denotes the parents of node $\alpha$ in $\mathcal{G}_\mathrm{pw}$, and where $\sigma$ and $\vec{\beta}$ are the residual standard error and regression coefficients obtained from the linear regression of $\alpha$ on $\pa(\alpha)$.
By this scheme the target matrix represents the conditional independence structure that would result from moralizing the directed graph.
If $\mathcal{G}_\mathrm{pw}$ is acyclic then $\vec{T} \succ 0$ is guaranteed.
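The update scheme can be sketched as follows. This is a minimal numpy illustration, assuming the pilot data are given as an $n \times p$ array whose columns are indexed by the nodes of an acyclic graph; details of the actual analysis (e.g., intercept handling) may differ:

```python
import numpy as np

def build_target(X, parents):
    """Target matrix from node-wise regressions on an acyclic graph.

    X       : (n, p) pilot data array, columns indexed by node.
    parents : dict mapping a node a to the list of its parent indices pa(a).
    Implements the updates T[a,a] += 1/s^2, T[pa,a] += beta/s^2 (and its
    transpose), and T[pa,pa] += beta beta^T / s^2.
    """
    n, p = X.shape
    T = np.zeros((p, p))
    for a in range(p):
        pa = parents.get(a, [])
        if pa:
            # Regress column a on its parents (with intercept).
            Z = np.column_stack([np.ones(n), X[:, pa]])
            coef, *_ = np.linalg.lstsq(Z, X[:, a], rcond=None)
            resid = X[:, a] - Z @ coef
            s2 = resid @ resid / max(n - Z.shape[1], 1)
            beta = coef[1:]
        else:
            s2 = X[:, a].var(ddof=1)
            beta = np.zeros(0)
        T[a, a] += 1.0 / s2
        if len(beta):
            T[np.ix_(pa, [a])] += (beta / s2)[:, None]
            T[np.ix_([a], pa)] += (beta / s2)[None, :]
            T[np.ix_(pa, pa)] += np.outer(beta, beta) / s2
    return T
```

Each node contributes a rank-one term; for an acyclic graph these terms can be arranged triangularly, which is why the resulting $\vec{T}$ is positive definite.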
The penalty setup bears resemblance to Example \ref{ex:3}.
The Type III class is considered closer to the ABC and GCB subtypes than ABC is to GCB.
Thus, the direct shrinkage between the ABC and GCB subtypes was fixed to zero.
Likewise, direct shrinkage between subtype and data set combinations was also disabled.
Hence, a common ridge penalty $\lambda$, a data set--data set shrinkage parameter ${\lambda_{\mathrm{DS}}}$ and a subtype--subtype shrinkage parameter ${\lambda_{\mathrm{ST}}}$ were estimated.
The optimal penalties were determined by SLOOCV using the penalty matrix and graph given in \eqref{eq:DLBCLpenaltygraph} below:
\begin{equation}
\begin{tikzpicture}[node distance = 2mm, auto,
baseline={([yshift=-.5ex]current bounding box.center)},
main_node/.style={circle,draw,minimum size=1em,inner sep=2pt}
]
\node [main_node] (n1) at (0,0) {$\lambda$};
\node [main_node] (n2) at (2,0) {$\lambda$};
\node [main_node] (n3) at (4,0) {$\lambda$};
\node [main_node] (n4) at (0,-1) {$\lambda$};
\node [main_node] (n5) at (2,-1) {$\lambda$};
\node [main_node] (n6) at (4,-1) {$\lambda$};
\node [minimum size=1em,inner sep=3pt] (n7) at (0,-2) {$\raisebox{3pt}{$\scalebox{.75}{$\vdots$}$}$};
\node [minimum size=1em,inner sep=3pt] (n8) at (2,-2) {$\raisebox{3pt}{$\scalebox{.75}{$\vdots$}$}$};
\node [minimum size=1em,inner sep=3pt] (n9) at (4,-2) {$\raisebox{3pt}{$\scalebox{.75}{$\vdots$}$}$};
\node [main_node] (n10) at (0,-3) {$\lambda$};
\node [main_node] (n11) at (2,-3) {$\lambda$};
\node [main_node] (n12) at (4,-3) {$\lambda$};
\draw (n1) -- (n2) node [above, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n2) -- (n3) node [above, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n4) -- (n5) node [below, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n5) -- (n6) node [below, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n10) -- (n11) node [below, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n11) -- (n12) node [below, midway] {\scriptsize${\lambda_{\mathrm{ST}}}$};
\draw (n1) -- (n4) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n2) -- (n5) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n3) -- (n6) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n4) -- (n7) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n5) -- (n8) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n6) -- (n9) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n7) -- (n10) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n8) -- (n11) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\draw (n9) -- (n12) node [left, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$};
\path
(n1) edge [bend left=30] node [right, near start] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n7)
(n1) edge [bend left=30] node [right, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n10)
(n4) edge [bend left=30] node [right, near end] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n10)
(n2) edge [bend left=30] node [right, near start] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n8)
(n2) edge [bend left=30] node [right, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n11)
(n5) edge [bend left=30] node [right, near end] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n11)
(n3) edge [bend left=30] node [right, near start] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n9)
(n3) edge [bend left=30] node [right, midway] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n12)
(n6) edge [bend left=30] node [right, near end] {\scriptsize${\lambda_{\mathrm{DS}}}$} (n12);
\node [above=0.1cm of n1] (ABC) {ABC};
\node [above=0.1cm of n2] (T3) {Type III};
\node [above=0.1cm of n3] (GCB) {GCB};
\node [left=0.05cm of n1] (DS1) {$\mathrm{DS}_1$};
\node [left=0.05cm of n4] (DS2) {$\mathrm{DS}_2$};
\node [left=0.05cm of n10] (DS11) {$\mathrm{DS}_{11}$};
\end{tikzpicture}
\qquad
{\vec{\Lambda}} =
\begin{bsmallmatrix}
\lambda & {\lambda_{\mathrm{ST}}} & 0 & {\lambda_{\mathrm{DS}}} & 0 & 0 & \cdots & {\lambda_{\mathrm{DS}}} & 0 & 0 \\
{\lambda_{\mathrm{ST}}} &\lambda & {\lambda_{\mathrm{ST}}} & 0 & {\lambda_{\mathrm{DS}}} & 0 & \cdots & 0 & {\lambda_{\mathrm{DS}}} & 0 \\
0 & {\lambda_{\mathrm{ST}}} &\lambda & 0 & 0 & {\lambda_{\mathrm{DS}}} & \cdots & 0 & 0 & {\lambda_{\mathrm{DS}}} \\
{\lambda_{\mathrm{DS}}} & 0 & 0 &\lambda & {\lambda_{\mathrm{ST}}} & 0 & \cdots & {\lambda_{\mathrm{DS}}} & 0 & 0 \\
0 & {\lambda_{\mathrm{DS}}} & 0 & {\lambda_{\mathrm{ST}}} &\lambda & {\lambda_{\mathrm{ST}}} & \cdots & 0 & {\lambda_{\mathrm{DS}}} & 0 \\
0 & 0 & {\lambda_{\mathrm{DS}}} & 0 & {\lambda_{\mathrm{ST}}} &\lambda & \cdots & 0 & 0 & {\lambda_{\mathrm{DS}}} \\
\raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\ddots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} & \raisebox{3pt}{$\scalebox{.75}{$\vdots$}$} \\
{\lambda_{\mathrm{DS}}} & 0 & 0 & {\lambda_{\mathrm{DS}}} & 0 & 0 & \cdots &\lambda & {\lambda_{\mathrm{ST}}} & 0 \\
0 & {\lambda_{\mathrm{DS}}} & 0 & 0 & {\lambda_{\mathrm{DS}}} & 0 & \cdots & {\lambda_{\mathrm{ST}}} &\lambda & {\lambda_{\mathrm{ST}}} \\
0 & 0 & {\lambda_{\mathrm{DS}}} & 0 & 0 & {\lambda_{\mathrm{DS}}} & \cdots & 0 & {\lambda_{\mathrm{ST}}} &\lambda
\end{bsmallmatrix}
\label{eq:DLBCLpenaltygraph}
\end{equation}
\noindent The optimal penalties were found to be
$\lambda^\diamond = 2.2$
for the ridge penalty,
$\lambda^\diamond_\mathrm{DS} = 0.0022$
for the data set fusion penalty, and
$\lambda^\diamond_\mathrm{ST} = \ensuremath{6.8\times 10^{-4}}$
for the subtype fusion penalty, respectively.
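For concreteness, the block structure of the penalty matrix in \eqref{eq:DLBCLpenaltygraph} can be generated with two Kronecker products. The following sketch (function and variable names are ours) assumes classes are ordered data set by data set, with the three subtypes ordered ABC, Type III, GCB within each:

```python
import numpy as np

def penalty_matrix(n_ds, lam, lam_st, lam_ds):
    """Fused penalty matrix for n_ds data sets x 3 subtypes.

    Diagonal blocks: ridge penalty lam with subtype fusion lam_st along the
    chain ABC -- Type III -- GCB (direct ABC--GCB fusion fixed to zero).
    Off-diagonal blocks: data-set fusion lam_ds between equal subtypes.
    """
    S = np.array([[lam,    lam_st, 0.0],
                  [lam_st, lam,    lam_st],
                  [0.0,    lam_st, lam]])
    I = np.eye(n_ds)
    J = np.ones((n_ds, n_ds))
    return np.kron(I, S) + np.kron(J - I, lam_ds * np.eye(3))

# The 33 x 33 penalty matrix with the selected optimal penalties.
L = penalty_matrix(11, lam=2.2, lam_st=6.8e-4, lam_ds=0.0022)
print(L.shape)  # (33, 33)
```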
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{pathway_map04064_precision.pdf}
\caption{
Summary of the estimated precision matrices for the NF-$\kappa$B pathway.
\emph{Top row}: Heat maps of the estimated precision matrices pooled across data sets for each genetic subtype.
\emph{Middle row from left to right:} The pooled target matrix for ABC, the difference between the pooled ABC and GCB estimates, and the pooled target matrix for GCB.
\emph{Bottom:} The color key for the heat maps.
}
\label{fig:pw_prec_one}
\end{figure}
To summarize and visualize the 33 class precision estimates, they were pooled within DLBCL subtype.
Panels A--C of Figure~\ref{fig:pw_prec_one} visualize the three pooled estimates as heat maps.
Panels D and F visualize the constructed target matrices for the ABC and GCB subtypes, respectively.
Panel E then gives the difference between the pooled ABC and GCB estimates, indicating that they harbor differential signals to some degree.
We would like to capture the commonalities and differences with a differential network representation.
The estimated class-specific precision matrices were subsequently scaled to partial correlation matrices.
Each precision matrix was then sparsified using the lFDR procedure of Section \ref{sec:SelectEdge}.
Within each class, an edge was selected whenever $1 - \widehat{\mathrm{lFDR}} \geq 0.999$.
To compactly visualize the multiple GGMs we obtained the \emph{signed edge-weighted total networks} mentioned in Section~\ref{sec:commonAndDifferentialNetworks}.
Clearly, for inconsistent connections the weight would vary around zero, while edges that are consistently selected as positive (negative) will have a large positive (negative) weight.
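The construction of such a total network can be sketched as follows (an illustrative numpy sketch, assuming the class-specific partial-correlation matrices and lFDR-selected edge sets for one subtype are already available):

```python
import numpy as np

def signed_total_network(pcors, selected):
    """Signed edge-weighted total network across data sets for one subtype.

    pcors    : list of (p, p) partial-correlation matrices, one per data set.
    selected : list of boolean (p, p) arrays marking lFDR-selected edges.
    Each selected edge contributes its sign; summing across data sets gives
    weights near zero for inconsistently signed edges and large |weight| for
    edges consistently selected with the same sign.
    """
    total = np.zeros_like(pcors[0])
    for P, sel in zip(pcors, selected):
        total += np.sign(P) * sel
    np.fill_diagonal(total, 0.0)
    return total
```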
These meta-graphs are plotted in Figure~\ref{fig:pw_file_one}.
Panels A--C give the signed edge-weighted total networks for each subtype across the data sets.
They show that (within DLBCL subtypes) there are a number of edges that are highly concordant across all data sets.
To evaluate the greatest differences between the ABC and GCB subtypes, the signed edge-weighted total network of the latter was subtracted from the former.
The resulting graph $\mathcal{G}_{\mathrm{ABC}-\mathrm{GCB}}$ can be found in Panel D.
Edges that are more stably present in the ABC subtype are represented in orange and the edges more stably present in the GCB subtype are represented in blue.
Panel F represents the graph from panel D with only those edges retained whose absolute weight exceeds $2$.
In a sense, the graph of panel F then represents the stable differential network.
The strongest connections here should suggest places of regulatory deregulation gained or lost across the two subtypes.
Interestingly, this differential network summary shows relatively large connected subgraphs suggesting differing regulatory mechanisms.
The graph in panel F of Figure~\ref{fig:pw_file_one} then conveys the added value of the integrative fusion approach.
Certain members of the CCL, CXCL, and TNF gene families that were highly central in the analysis of Section \ref{sec:analysis1} are still considered to be central here.
However, it is also seen that certain genes that garnered high centrality measures in the single data set analyzed in Section \ref{sec:analysis1} do not behave stably \emph{across} data sets, such as CXCL2.
In addition, the integrative analysis appoints the BCL2 gene family a central role, especially in relation to the ABC subtype.
This contrasts with Section \ref{sec:analysis1}, where the BCL2 gene family was not considered central and appeared to be connected mostly in the non-ABC classes.
Moreover, whereas the analysis of the single data set could not identify a signal for MYD88, the integrative analysis identifies MYD88 to be stably connected across data sets.
Especially the latter two observations are in line with current knowledge on deregulation in the NF-$\kappa$B pathway in DLBCL patients.
Also in accordance with the literature is the known interaction of LTA with LTB seen in panel F of Figure~\ref{fig:pw_file_one} \citep{WilliamsAbbott1997,Browning1997}, which here appears to be differential between the ABC and GCB subtypes.
Thus, borrowing information across classes enables a meta-analytic approach that can uncover information otherwise unobtainable through the analysis of single data sets.
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{pathway_map04064_consensus.pdf}
\caption{
Summary of estimated GGMs for the NF-$\kappa$B pathway.
\emph{Panels A--C}: Graphs obtained by adding the signed adjacency matrices for each subtype across the data sets.
The edge widths are drawn proportional to the absolute edge weight.
\emph{Panel D}: Graph obtained by subtracting the summarized signed adjacency matrix of GCB (panel A) from that of ABC (panel C).
Edge widths are drawn proportional to the absolute weight and colored according to the sign.
Orange implies edges more present in ABC and blue implies edges more present in GCB.
\emph{Panel E}: As the graph in panel D, but with only edges of absolute weight $> 2$ drawn.
\emph{Panel F}: As the graph in panel E, but with an alternative layout.
\emph{Far right-panel:} EID key and corresponding HGNC curated gene names of the NF-$\kappa$B pathway genes.
Genes that are connected in panel F are shown bold.
}
\label{fig:pw_file_one}
\end{figure}
\section{Discussion and conclusion}\label{Discuss.sec}
We considered the problem of jointly estimating multiple inverse covariance matrices from high\hyp{}dimensional data consisting of distinct classes.
A fused ridge estimator was proposed that generalizes previous contributions in two principal directions.
First, we introduced the use of targets in fused ridge precision estimation.
The targeted approach helps to stabilize the estimation procedure and allows for the incorporation of prior knowledge.
It also juxtaposes itself with various alternative penalized precision matrix estimators that pull the estimates towards the edge of the parameter space, i.e., that shrink towards the non-interpretable null matrix.
Second, instead of using a single ridge penalty and a single fusion penalty parameter for all classes, the approach grants the use of \emph{class-specific} ridge penalties and \emph{class-pair-specific} fusion penalties.
This results in a flexible shrinkage framework that (i) allows for class-specific tuning, (ii) supports analyses when a factorial design underlies the available classes, and (iii) supports the appropriate handling of situations where some classes are high-dimensional whilst others are low-dimensional.
Targeted shrinkage and usage of a flexible penalty matrix might also benefit other procedures for precision matrix estimation such as the fused graphical lasso \citep{Danaher2013}.
The targeted fused ridge estimator was combined with post-hoc support determination, which serves as a basis for integrative or meta-analytic Gaussian graphical modeling.
This combination thus has applications in meta-, integrative-, and differential network analysis of multiple data sets or classes of data.
This meta-approach to network analysis has multiple motivations.
First, by combining data it can effectively increase the sample size in settings where samples are relatively scarce or expensive to produce.
In a sense it refocuses the otherwise declining attention to obtaining a sufficient amount of data---a tendency we perceive to be untenable.
Second, aggregation across multiple data sets decreases the likelihood of capturing idiosyncratic features (of individual data sets), thereby preventing over-fitting of the data.
Insightful summarization of the results is important for the feasibility of our approach to fused graphical modeling.
To this end we have proposed various basic tools to summarize commonalities and differences over multiple graphs.
These tools were subsequently used in a differential network analysis of the NF-$\kappa$B signaling pathway in DLBCL subtypes over multiple GEP data sets.
This application is not without critique, as it experiences a problem present in many GEP studies:
The classification of the DLBCL subtypes (ABC and GCB) is performed on the basis of the same GEP data on which the network analysis is executed.
This may be deemed methodologically undesirable.
However, we justify this double use of data as (a) the pathway of interest involves a selection of genes whereas the classification uses all genes, and (b) the analysis investigates partial correlations and differential networks whereas the classification, in a sense, considers only differential expression.
Furthermore, as in all large-scale genetic screenings, the analyses should be considered `tentative' and findings need to be validated in independent experiments.
Notwithstanding, the analyses show that the fusion approach to network integration has merit in uncovering class-specific information on pathway deregulation.
Moreover, they exemplify the exploratory \emph{hypothesis generating} thrust of the framework we offer.
We see various inroads for further research.
With regard to estimation one could think of extending the framework to incorporate a fused version of the elastic net.
Mixed fusion, in the sense that one could do graphical lasso estimation with ridge fusion or ridge estimation with lasso fusion, might also be of interest.
From an applied perspective the desire is to expand the toolbox for insightful (visual) summarization of commonalities and differences over multiple graphs.
Moreover, it is of interest to explore improved ways for support determination.
The lFDR procedure, for example, could be expanded by considering all classes jointly.
Instead of applying the lFDR procedure to each class-specific precision matrix, one would then be interested in determining the proper mixture of a grand common null-distribution and multiple class-specific non-null distributions.
These inroads were out of the scope of current work, but we hope to explore them elsewhere.
\subsection{Software implementation}
The fused ridge estimator and its accompanying estimation procedure are implemented in the \texttt{rags2ridges} package \citep{rags} for the statistical language {\textsf{R}}.
This package has many supporting functions for penalty parameter selection, graphical modeling, as well as network analysis.
We will report on its full functionality elsewhere.
The package is freely available from the Comprehensive {\textsf{R}} Archive Network: \url{http://cran.r-project.org/}.
\section*{Acknowledgements}
Anders E. Bilgrau was supported by a grant from the Karen Elise Jensen Fonden, a travel grant from the Danish Cancer Society, and a visitor grant by the Dept.\ of Mathematics of the VU University Amsterdam.
Carel F.W. Peeters received funding from the European Community's Seventh Framework Programme (FP7, 2007-2013), Research Infrastructures action, under grant agreement No. FP7-269553 (EpiRadBio project).
The authors would also like to thank Karen Dybk{\ae}r of the Dept.\ of Haematology at Aalborg University Hospital, for her help on the biological interpretations in the DLBCL application.
\section{Notation}
\label{notation}
Let $G$ be a group. For elements $x,y\in G, n\in \mathbb N$, our conventions are $x^{ny}=y^{-1}x^ny, [x,y]=x^{-1}y^{-1}xy$. We use double bracket $\langle \langle \cdot \rangle \rangle_G$ to denote the normal closure of a set in the group $G$. Sometimes we omit the subscript when there is no misunderstanding in the context.
In addition, for a group $G$ and a commutative ring $K$ with $1\neq 0$, we let $KG$ be the group ring of $G$ over $K$. An element $\lambda\in KG$ is usually denoted as $\lambda=\sum_{g\in G} \alpha_g g, \alpha_g\in K$ where all but finitely many $\alpha_g$'s are 0. We also regard $\lambda$ as a function $\lambda: G\to K$ with finite support, where $\lambda(g)=\alpha_g$. We let $|\lambda|=\sum_{g\in G} |\alpha_g|$.
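The view of $\lambda\in KG$ as a finitely supported function suggests a direct computational representation. As an aside (not part of the paper's development), group-ring elements can be modeled as coefficient dictionaries:

```python
from collections import Counter

def add(lam, mu):
    """Sum of two group-ring elements, stored as {group element: coefficient}."""
    out = Counter(lam)
    out.update(mu)
    return {g: c for g, c in out.items() if c != 0}

def mul(lam, mu, op):
    """Product in KG: convolve coefficients using the group operation `op`."""
    out = Counter()
    for g, a in lam.items():
        for h, b in mu.items():
            out[op(g, h)] += a * b
    return {g: c for g, c in out.items() if c != 0}

def norm(lam):
    """|lambda| = sum of absolute values of the coefficients."""
    return sum(abs(c) for c in lam.values())

# In Z[T] with T = (Z, +), writing t^n simply as n:
lam = {0: 2, 1: -1}   # 2 - t
mu = {0: 1, 1: 1}     # 1 + t
prod = mul(lam, mu, op=lambda g, h: g + h)
print(prod)        # {0: 2, 1: 1, 2: -1}, i.e. (2 - t)(1 + t) = 2 + t - t^2
print(norm(prod))  # 4
```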
For groups $A$ and $T$, let $B=\oplus_{t\in T}A^t$ be the direct sum of copies of $A$ indexed by elements in $T$. Then \emph{the wreath product} $A\wr T$ is defined to be the semidirect product $B\rtimes T$ where $T$ acts on $B$ by $t\circ(a_\omega)=(a_{t^{-1}\omega})$. The subgroup $B$ is called the base group of the wreath product.
Suppose $G$ is an extension of $A$ by $T$ where both $A$ and $T$ are abelian. $A$ has a natural module structure over the group ring $\mathbb ZT$, and the action of $T$ on $A$ is given by conjugation. In this case, we also say that $G$ is an extension of a $T$-module $A$ by $T$. In this paper, we will use the following notation for actions of $\mathbb ZT$ on $A$. Let $\lambda=\sum_{t\in T} \alpha_t t \in \mathbb ZT$. Then for $a\in A$, we define
\[a^\lambda:=\prod_{t\in T}a^{\alpha_t t}.\]
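To illustrate the notation: if $T$ is free abelian with basis $\{t_1,t_2\}$ and $\lambda=2t_1-t_2^{-1}$, then, unwinding the conjugation convention fixed above,
\[a^\lambda=a^{2t_1}a^{-t_2^{-1}}=(t_1^{-1}a^2t_1)(t_2a^{-1}t_2^{-1}).\]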
\section{Introduction and Results}
\label{intro}
In this paper, we study two functions associated with finitely generated metabelian groups, the (relative) Dehn function and the subgroup distortion function, both of which were introduced to describe geometric properties of groups. They are naturally related since, for a finitely presented group, the Dehn function can be regarded as the distortion function of a subgroup of a free group over an infinite generating set (the set of all conjugates of relators). Aside from this canonical connection, we investigate the connection between the distortion function of a special type of finitely generated subgroup of a wreath product and the Dehn function relative to the variety of metabelian groups, which is interesting because it provides an estimate of the usual Dehn function of a finitely presented metabelian group \cite{wang2020dehn}.
We first define the (relative) Dehn function. Recall that a \emph{variety} of groups is a class of groups that is closed under taking subgroups, epimorphic images, and unrestricted direct products. Inside a variety, we can talk about relative free groups and relative presentations, just as in the usual sense (the variety of all groups). Let $\mathcal V$ be a variety. The free group relative to the variety $\mathcal V$ on a set $X$, denoted by $\tilde F(X)$, is a group in $\mathcal V$ satisfying the following property: it is equipped with a map $\theta: X\to \tilde F(X)$ such that for every group $G$ in $\mathcal V$ and every set map $\sigma: X\to G$ there exists a unique homomorphism $\varphi: \tilde F(X)\to G$ such that the following diagram commutes,
\[
\begin{tikzcd}
X \arrow[r,"\theta"] \arrow[rd,"\sigma"] &\tilde F(X) \arrow[d, "\varphi"]\\
&G
\end{tikzcd}.
\]
In particular, a group in $\mathcal V$ generated by $|X|$ elements is an epimorphic image of $\tilde F(X)$. A relative finite presentation $\mathcal P=\langle X\mid R\rangle_{\mathcal V}$ of $G$ consists of two finite sets $X$ and $R$, where $R$ is a subset of $\tilde F(X)$, such that there exists an epimorphism $\varphi:\tilde F(X)\to G$ whose kernel is $\langle \langle R\rangle \rangle_{\tilde F(X)}$. Let $w$ be a word over $X$ such that $w=_G 1$. Then $w$ lies in the normal closure of $R$, and thus $w$ can be written as
\[w=_{\tilde F(X)} \prod_{i=1}^l f_i^{-1}r_i{f_i} \text{ where }r_i\in R\cup R^{-1},f_i\in \tilde F(X).\]
The smallest possible $l$ is called the relative area of $w$, denoted by $\widetilde\area_\mathcal P(w)$. Consequently, the Dehn function relative to $\mathcal V$ with respect to the presentation $\mathcal P$ is defined as
\[\tilde\delta_\mathcal P(n)=\sup\{\widetilde\area_\mathcal P(w)\mid |w|_{X}\leqslant n\}.\]
Here $|\cdot|_{X}$ is the word length in alphabet $X$.
Throughout this paper, we focus on two varieties of groups: the variety of all groups and the variety of metabelian groups, where the latter will be denoted by $\mathcal S_2$.
If $\mathcal V$ is the variety of all groups, then the Dehn function relative to $\mathcal V$ is just the usual Dehn function.
Dehn functions are defined up to an asymptotic equivalence $\approx$ taken on functions $\mathbb N \to \mathbb N$ by $f \approx g$ if and only if $f \preccurlyeq g$ and $g \preccurlyeq f$ where $f \preccurlyeq g$ if and only if there exists $C>0$ such that $f(n)\leqslant Cg(Cn)+Cn+C$ for all $n\in \mathbb N$. One can verify that $\approx$ is an equivalence relation. This relation preserves the asymptotic nature of a function. For example, it distinguishes polynomials of different degrees and likewise polynomials and the exponential function. It also distinguishes functions like $n^p$ and $n^p\log n$ for $p>1$. On the other hand, it identifies all polynomials of the same degree, and likewise all single exponentials, i.e., $a^n\approx b^n$ for $a,b>1$.
Despite the dependence of Dehn function on finite presentations of a group, all Dehn functions of the same finitely presented group are equivalent under $\approx$ \cite{gromov1996geometric}, i.e., given a finitely presented group $G$ with finite presentations $\mathcal P$ and $\mathcal P'$, one can show that $\delta_{\mathcal P} \approx \delta_{\mathcal P'}$. Thus, we define the \emph{Dehn function} of a finitely presented group $G$, $\delta_G$, as the Dehn function of any of its finite presentation.
If $\mathcal V$ is the variety of metabelian groups $\mathcal S_2$, it has been shown that the relative Dehn function is likewise independent of the choice of finite presentation up to equivalence \cite{Fuh2000}. Therefore it is valid to denote the relative Dehn function of a finitely generated metabelian group $G$ by $\tilde \delta_G$. One non-trivial property of metabelian groups is that all finitely generated metabelian groups are relatively finitely presentable in $\mathcal S_2$ \cite{Hall1954}.
In what follows, the Dehn function of a finitely presented group $G$ and the area of a word $w$ in $G$ relative to the variety of all groups will be denoted by $\delta_G(n)$ and $\area(w)$ respectively, while the Dehn function of a finitely generated group $G$ and the area of a word $w$ in $G$ relative to the variety of metabelian groups will be denoted by $\tilde \delta_G(n)$ and $\widetilde \area(w)$ respectively.
Next, let us turn to the distortion function. Let $G$ be a finitely generated group with a finite generating set $X$ and let $H$ be a subgroup of $G$ with finite generating set $Y$. The \emph{distortion function} of $H$ in $G$ is
\[\Delta_H^G(n)=\sup\{|w|_Y\mid w\in H, |w|_X\leqslant n\}.\]
We consider a slightly different equivalence relation for distortion functions. For non-decreasing functions $f$ and $g$ on $\mathbb N$, we say that $f\preceq g$ if there exists a constant $C$ such that $f(n)\leqslant Cg(Cn)$. Hence we say that two functions $f$ and $g$ are equivalent, written $f\asymp g$, if $f\preceq g$ and $g\preceq f$. As expected, the distortion function is independent of the choice of the finite generating set under this equivalence relation \cite[Proposition 8.98]{drute2018Geometric}. The reason we consider $\asymp$ rather than $\approx$ is that if the subgroup is infinite then the distortion function is at least linear. We say a subgroup is \emph{undistorted} if the distortion function is equivalent to a linear function.
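A standard example of heavy distortion, included here for illustration: in the Baumslag--Solitar group $G=BS(1,2)=\langle a,t\mid a^t=a^2\rangle$ with $H=\langle a\rangle$, one has $a^{t^n}=a^{2^n}$, so $|a^{2^n}|_X\leqslant 2n+1$ over $X=\{a,t\}$ while $|a^{2^n}|_Y=2^n$ over $Y=\{a\}$. Hence $2^n\preceq \Delta_H^G(n)$, i.e., $H$ is (at least) exponentially distorted in $G$.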
Let $A$ and $T$ be free abelian groups with bases $\{a_1,a_2,\dots,a_m\}$ and $\{t_1,t_2,\dots,t_k\}$ respectively. Consider the wreath product $W:=A\wr T$. The base group $B:=\langle \langle A\rangle \rangle$ is a $T$-module. For a finite subset $\mathcal X=\{f_1,f_2,\dots,f_l\}$ of $B$, let $H$ be the subgroup of $W$ generated by $\mathcal X\cup\{t_1,t_2,\dots,t_k\}$ and $G$ be the group $W/\langle \langle \mathcal X\rangle \rangle$.
Our main result is the following:
\begin{customthm}{A}[\cref{subgroupDistortion}]
\label{subgroupDistortion}
Let $W,H,G$ be groups defined as above, then
\[\Delta_{H}^W(n) \preccurlyeq \tilde \delta_G^k(n)+n^k,\tilde\delta_G(n)\preccurlyeq\max\{n^3, (\Delta_{H}^W(n^2))^3\}.\]
In particular, if $k=1$,
\[\Delta_{H}^W(n) \preccurlyeq \tilde \delta_G(n).\]
\end{customthm}
\cref{subgroupDistortion} leads to some interesting examples.
\begin{corollaries}[\cref{relativeLowerbound}]
For each $l\in \mathbb N$, there exists a finitely generated metabelian group whose relative Dehn function is asymptotically bounded below by $n^l$.
\end{corollaries}
The distortion function of subgroups in $A\wr \mathbb Z$ has been studied extensively by Davis and Olshanskiy \cite{davis2011Subgroup}. Combining their result with \cref{subgroupDistortion}, we immediately have
\begin{customthm}{B}[\cref{overZ}]
Let $G$ be a finitely generated metabelian group such that $G$ is an extension of an abelian group $A$ by a virtually cyclic abelian group $T$. Then the relative Dehn function of $G$ is polynomially bounded. If in addition $G$ is finitely presented, the Dehn function of $G$ is asymptotically bounded above by the exponential function.
\end{customthm}
This theorem gives an exponential upper bound on the Dehn functions of many examples, including the metabelian Baumslag-Solitar groups $BS(1,n)$ and $\mathbb Z^n\rtimes_\phi \mathbb Z$ where $\phi\in GL(n,\mathbb Z)$ (see \cite{bridson1996optimal}). It also improves the main results in \cite{wang2020dehn}.
Moreover, we estimate the relative Dehn function of various examples.
\begin{customthm}{C}
\begin{enumerate}[(1)]
\item (\cref{metaBaum}) The metabelianized Baumslag-Solitar group $\tilde{BS}(n,m)=\langle a,t\mid (a^{n})^{t}=a^m\rangle_{\mathcal S_2}$ has at most cubic relative Dehn function when $n\neq m$ and has at most quartic relative Dehn function when $n=m$.
\item (\cref{metaBaumCorollary}) The metabelianized Baumslag-Solitar group $\tilde{BS}(n,m)=\langle a,t\mid (a^{n})^{t}=a^m\rangle_{\mathcal S_2}, m>2, m=n+1$ has at most quadratic relative Dehn function.
\item (\cref{lamplighter}) The lamplighter groups $L_m$ have at most cubic relative Dehn function.
\item (\cref{relativeL2}) The lamplighter group $L_2$ has linear relative Dehn function.
\end{enumerate}
\end{customthm}
\emph{The structure of this paper.} In \cref{prem} we state some preliminaries on the Dehn function of a module and the relative Dehn function of a finitely generated metabelian group. Next, in \cref{relativeDehn4}, we estimate the relative Dehn function for different examples, including the Baumslag-Solitar groups and lamplighter groups. Finally, in \cref{relativeDehn5}, we prove the main theorem relating the distortion function and the relative Dehn function.
\emph{Acknowledgement.} I would like to thank my advisor Mark Sapir, who pointed out to me the study of distortion functions for subgroups in $A\wr \mathbb Z$ where $A$ is abelian.
\section{Preliminaries}
\label{prem}
\subsection{The Dehn function of a Module}
\label{prem1}
Let $T$ be a free abelian group of rank $k$ and $R=\mathbb ZT$ the group ring of $T$. In what follows, we will only discuss modules over $R$.
Similar to groups, we have free modules and hence we can define presentations of modules. A subset $\{f_1,f_2,\dots,f_l\}$ of an $R$-module $M$ is called a \emph{generating set} if every $f\in M$ is a linear combination of its elements, i.e., there exist $\alpha_1,\alpha_2,\dots,\alpha_l\in R$ such that
\[f=\alpha_1 f_1+\alpha_2 f_2+\dots+\alpha_l f_l.\]
A set of elements $\{f_1,f_2,\dots,f_l\}$ of a module $M$ is called \emph{independent} if no nontrivial linear combination is zero, that is,
\[\text{If } \alpha_1 f_1+\alpha_2 f_2+\dots+\alpha_l f_l=0, \text{ then }\alpha_i=0, \text{ for } i=1,2,\dots,l.\]
A \emph{basis} is an independent generating set.
One immediate example of an $R$-module is $R^m$. The addition and scalar multiplication on $R^m$ are the following, respectively:
\begin{align*}
(a_1,a_2,\dots,a_m)+(b_1,b_2,\dots,b_m)&=(a_1+b_1,a_2+b_2,\dots,a_m+b_m),\\
r(a_1,a_2,\dots,a_m)&=(ra_1,ra_2,\dots,ra_m).
\end{align*}
The module $R^m$ is called the \emph{free $R$-module of rank $m$}. The canonical basis of $R^m$ is $\{e_1,e_2,\dots,e_m\}$ where $e_i=(0,\dots,1,\dots,0)$ has $1$ in the $i$-th entry and $0$ elsewhere.
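To make the free-module arithmetic above concrete, here is a small illustrative sketch (our own encoding, not notation from the text) of elements of $R^m$ for $R=\mathbb Z[t,t^{-1}]$, with each Laurent polynomial stored as a dict mapping exponents to integer coefficients.

```python
# Illustrative sketch (our own encoding): elements of R^m for
# R = Z[t, t^{-1}], with Laurent polynomials as {exponent: coefficient} dicts.

def poly_add(p, q):
    """Add two Laurent polynomials, dropping zero coefficients."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def poly_mul(p, q):
    """Multiply two Laurent polynomials."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def mod_add(f, g):
    """Componentwise addition (a_1,...,a_m) + (b_1,...,b_m) in R^m."""
    return tuple(poly_add(a, b) for a, b in zip(f, g))

def mod_scale(r, f):
    """Scalar multiplication r(a_1,...,a_m) = (ra_1,...,ra_m)."""
    return tuple(poly_mul(r, a) for a in f)

# In R^2: (t - 2) * e_1 + e_2 = (t - 2, 1).
e1, e2 = ({0: 1}, {}), ({}, {0: 1})
f = mod_add(mod_scale({1: 1, 0: -2}, e1), e2)
assert f == ({1: 1, 0: -2}, {0: 1})
```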
A submodule of the free module $R^1$ is an ideal in the ring $R$.
Given a free $R$-module $M$ of finite rank and a submodule $S$ generated by a finite set $\{f_1,f_2,\dots,f_l\}$, the membership problem for the submodule $S$ that we consider in this paper is the following.
\begin{prob}
\label{memberPro}
Given an element $f$ in $M$, decide whether $f$ is in $S$, i.e., whether there exist elements $\alpha_1,\alpha_2,\dots,\alpha_l\in R$ such that
\[f=\alpha_1f_1+\alpha_2f_2+\dots+\alpha_l f_l.\]
\end{prob}
A \emph{homomorphism} $\varphi: M \to N$ of $R$-modules is a map which is compatible with the laws of composition:
\[\varphi(f+f')=\varphi(f)+\varphi(f'), \varphi(rf)=r\varphi(f)\]
for all $f,f'\in M, r\in R$. A bijective homomorphism is called an \emph{isomorphism}.
Last we define the concept of quotient modules. Let $R$ be a ring, and let $S$ be a submodule of an $R$-module $M$. The quotient $M/S$ is the additive group of cosets $\bar f=f+S$, and the scalar multiplication is defined by
\[r\bar f=\overline{rf}.\]
Thus $M/S$ is made an $R$-module.
Conversely, let $A$ be a finitely generated $R$-module. Then there exists a free $R$-module $M$ with basis $\{a_1,a_2,\dots,a_m\}$ and a submodule $S$ of $M$ such that $A\cong M/S$. Since $R$ is a Noetherian ring, $S$ is finitely generated, and hence we may assume its generating set is $\{f_1,f_2,\dots,f_l\}$. Therefore we have a module presentation of $A$ as follows:
\[A=\langle a_1,a_2,\dots,a_m\mid f_1,f_2,\dots, f_l\rangle.\]
For an element $f=\mu_1 a_1+\mu_2 a_2+\dots +\mu_m a_m$ in $M$ we define its \emph{length}, denoted by $\|f\|$, to be the following:
\[\|f\|=\sum_{i=1}^m |\mu_i|+\mathrm{reach}(f),\]
where $\mathrm{reach}(f)$ is the minimal length among the lengths of closed loops that start at $1$ and pass through all points in $\cup_{i=1}^m \supp{\mu_i}$ in the Cayley graph of $T$. Another way to think of this length $\|\cdot\|$ is that it is the minimal length of a group word among all rearrangements of the conjugates of the $a_i$ appearing in $a_1^{\mu_1}a_2^{\mu_2}\dots a_m^{\mu_m}$. For example, suppose $m=k=1$; then we have
\[\|(t_1^n+t_1^{n-1}+\dots+t_1+1)a_1\|=(n+1)+2n=3n+1,\]
because the minimal length of a loop passing through $\{1,t_1,t_1^2,\dots,t_1^n\}$ is $2n$. Note that $a_1^{t_1^n+t_1^{n-1}+\dots+t_1+1}=t_1^{-n}a_1t_1a_1\dots a_1t_1a_1$ is a group word of length $3n+1$ in the alphabet $\{a_1\}\cup\{t_1\}$.
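The length computation in this example can be checked mechanically. The following sketch (our own encoding, not notation from the text) computes $\|\mu a_1\|$ for $m=k=1$, where the Cayley graph of $T=\langle t_1\rangle$ is the line $\mathbb Z$ and a closed loop from the identity through a finite set $S$ of integers has minimal length $2(\max(S\cup\{0\})-\min(S\cup\{0\}))$.

```python
# A sketch (our own encoding) of the length ||mu a_1|| for m = k = 1.

def reach(support):
    """Minimal closed loop from 0 through a set of integer points."""
    pts = set(support) | {0}
    return 2 * (max(pts) - min(pts))

def length(mu):
    """||mu a_1|| = sum of |coefficients| of mu plus reach of its support."""
    return sum(abs(c) for c in mu.values()) + reach(mu.keys())

# mu = t^n + t^{n-1} + ... + t + 1 with n = 5:
mu = {i: 1 for i in range(6)}
assert length(mu) == 16   # (n + 1) + 2n = 3n + 1
```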
Then for every element $f$ in $S$, there exist $\alpha_1,\alpha_2,\dots,\alpha_l\in R$ such that
\[f=\alpha_{1}f_1+\alpha_2 f_2+\dots+\alpha_l f_l.\]
We denote by $\widehat\area_A(f)$ the minimal possible $\sum_{i=1}^l |\alpha_i|$ ($|\cdot|$ is defined in \cref{notation}). Then \emph{the Dehn function} of the $R$-module $A$ is defined to be
\[\hat\delta_A(n)=\max\{\widehat \area_A(f)\mid \|f\|\leqslant n\}.\]
As expected, the Dehn function of a module is also independent of the choice of the finite presentation \cite{Fuh2000}.
\begin{rems}
Now we have three different types of Dehn functions in this paper: the Dehn function, the relative Dehn function, and the Dehn function of a module. They are similar, and we distinguish them by notation. We denote by $\delta_G(n), \area(w)$ the Dehn function of $G$ and the area of a word $w$; by $\tilde \delta_G(n),\widetilde \area(w)$ the relative Dehn function and the relative area of a word $w$; and by $\hat \delta_A(n), \widehat \area(f)$ the Dehn function of the module $A$ and the area of a module element $f$.
\end{rems}
The membership problem \cref{memberPro} can be regarded as the word problem of the quotient $M/S$.
\begin{prob}
\label{wordPro}
Given an element $f$ in $A$, decide whether $f$ represents the trivial element in $A$.
\end{prob}
It turns out that the Dehn function of a module plays an essential role in understanding the relative Dehn function of a finitely generated metabelian group \cite{Fuh2000}.
Last, let us define a well-order $\prec$ on the ring $R=\mathbb ZT$. On $\mathbb Z$, we define an order $\prec_{\mathbb Z}$ as follows:
\[0\prec_{\mathbb Z} 1 \prec_{\mathbb Z} 2 \prec_{\mathbb Z} \dots \prec_{\mathbb Z} -1 \prec_{\mathbb Z} -2 \prec_{\mathbb Z} \dots.\]
For monomials in $R$, we use the degree lexicographical order (also called shortlex or graded lexicographical order) $\prec_R$ which is defined with respect to the convention $t_1 \succ t_1^{-1} \succ \dots \succ t_k\succ t_k^{-1}$, i.e. for $\mu_1=t_1^{n_1}t_1^{-n_2}t_2^{n_3}t_2^{-n_4}\dots t_k^{n_{2k-1}}t_k^{-n_{2k}}, \mu_2=t_1^{m_1}t_1^{-m_2}t_2^{m_3}t_2^{-m_4}\dots t_k^{m_{2k-1}}t_k^{-m_{2k}}$
\[\mu_1\prec_R \mu_2 \text{ if }\sum_{i=1}^{2k}|n_i|<\sum_{i=1}^{2k} |m_i| \text{ or } \sum_{i=1}^{2k}|n_i|=\sum_{i=1}^{2k} |m_i|, \mu_1\prec_{lex} \mu_2,\]
where $\prec_{lex}$ is the usual lexicographical order which is defined in the following way
\[t_1^{n_1}t_1^{-n_2}t_2^{n_3}t_2^{-n_4}\dots t_k^{n_{2k-1}}t_k^{-n_{2k}}\prec_{lex} t_1^{m_1}t_1^{-m_2}t_2^{m_3}t_2^{-m_4}\dots t_k^{m_{2k-1}}t_k^{-m_{2k}}\]
if $n_i<m_i$ at the first $i$ where $n_i$ and $m_i$ differ. Note that $\prec_R$ is in fact a well-order on the set of monomials, while $\prec_{lex}$ might not be (see \cite{baader1999term}).
Finally we set $\prec$ on $R=\mathbb ZT$ to be the lexicographical order based on $T \succ \mathbb Z$. It is not hard to verify that $\prec$ is a well-order on $\mathbb ZT$. The degree $\deg \mu$ of $\mu\in \mathbb ZT$ is defined to be the degree of its leading monomial, and the degree $\deg f$ of $f\in M$ is defined to be the maximal degree of its coefficients with respect to the basis.
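As an illustration of the order $\prec_R$ on monomials, the following sketch (our own encoding, not notation from the text) stores a monomial $t_1^{e_1}\dots t_k^{e_k}$ as its exponent vector and compares by total degree, then lexicographically with the letter precedence $t_1\succ t_1^{-1}\succ\dots\succ t_k\succ t_k^{-1}$; the checks use $k=2$.

```python
# Illustrative sketch (our own encoding) of the deglex order prec_R.

def key(exps):
    """Degree-lexicographic key for a monomial given by integer exponents."""
    split = []
    for e in exps:
        # split each exponent into its positive and negative parts,
        # matching the 2k-tuple (n_1, n_2, ..., n_{2k-1}, n_{2k}) above
        split.extend([max(e, 0), max(-e, 0)])
    return (sum(split), tuple(split))

# Total degree decides first; the lex tie-break puts t_1 above t_1^{-1}:
assert key((0, 2)) > key((1, 0))    # t_2^2 beats t_1 (higher degree)
assert key((1, 0)) > key((-1, 0))   # t_1 beats t_1^{-1} (lex tie-break)
assert sorted([(1, 0), (-1, 0), (0, 1)], key=key) == [(0, 1), (-1, 0), (1, 0)]
```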
\subsection{Relative Dehn Functions of Finitely Generated Metabelian Groups}
\label{prem2}
As we discussed above, the Dehn function of a finitely generated metabelian group relative to $\mathcal S_2$ is always defined, so it provides a convenient tool to study finitely generated metabelian groups.
Let $G$ be a finitely generated metabelian group. Then $G$ sits inside a short exact sequence.
\[1\to A\to G\to T\to 1,\]
where $A$ and $T$ are abelian. Since $G$ is finitely generated, $T$ is a finitely generated abelian group. We now choose $A$, $T$ among all such short exact sequences so that the torsion-free rank of $T$ is minimized. We denote by $\rk(G)$ this minimal torsion-free rank of $T$.
Now let $k=\rk(G)$ and $\pi:G\to T$ be the canonical quotient map. It is not hard to show that there exists a subgroup $G_0$ of finite index in $G$ such that $\pi(G_0)=\mathbb Z^k$. It has been shown that the Dehn function, the relative Dehn function and $\rk(G)$ are all preserved (up to equivalence) under taking finite index subgroups \cite{gromov1996geometric}, \cite{wang2020dehn}. Thus, in what follows, we always assume that $T$ is a free abelian group.
For the finitely generated metabelian group $G$, let $\{t_1,t_2,\dots,t_k\}$ be a subset of $G$ whose image in $T$ forms a basis. Since $A$ is a normal subgroup of $G$, by \cite{Hall1954} it is the normal closure of a finite set. Let $\mathcal A=\{a_1,a_2,\dots,a_m\}$ be a subset of $A$ satisfying: (1) $\langle \langle \mathcal A\rangle \rangle=A$; (2) $\mathcal A$ contains all commutators of $\{t_1,t_2,\dots,t_k\}$, i.e., for any pair $1\leqslant i<j\leqslant k$, there exists $l(i,j)\in \{1,2,\dots,m\}$ such that $a_{l(i,j)}=[t_i,t_j]$, and $l(i,j)=l(i',j')$ if and only if $i=i',j=j'$.
We associate $G$ with an auxiliary group $\tilde G$, which has the following relative presentation:
\begin{align*}
\tilde G=&\langle a_1,a_2,\dots,a_m,t_1,t_2,\dots,t_k\mid a_{l(i,j)}=[t_i,t_j], \\
&[a,b]=1,[a,b^t]=1, 1\leqslant i<j\leqslant k, a,b\in \mathcal A, t\in \mathcal T\rangle_{\mathcal S_2}
\end{align*}
The relations $\{[a,b]=1,[a,b^t]=1, a,b\in \mathcal A, t\in \mathcal T\}$, along with all metabelian relations inherited from the free metabelian group, suffice to generate all commutation relations. Moreover, the area of $[a,b^u]$ is linearly controlled by the length of $u$, that is,
\begin{lemma}[Wang \cite{wang2020dehn}]
\label{relativeCommutative}
$\{[a,b]=1,[a,b^t]=1, a,b\in \mathcal A, t\in \mathcal T\}$ generates all commutative relations $[a,b^u]=1, a,b\in \mathcal A, u\in F(\mathcal T)$ in the presentation relative to the variety of metabelian groups. Moreover, the relative area of $[a,b^u]$ is bounded by $4|u|-3$.
\end{lemma}
All relations in $\tilde G$ also represent the identity in $G$. It follows that the identity map on $\mathcal A\cup \mathcal T$ induces an epimorphism $\varphi: \tilde G\to G$. The kernel $\ker\varphi$ is a normal subgroup contained in $\langle \langle \mathcal A\rangle \rangle_{\tilde G}$, since $\varphi$ induces an isomorphism on $T$. Let $\{f_1,f_2,\dots,f_l\}$ be a subset of $\langle \langle \mathcal A\rangle \rangle_{\tilde G}$ such that $\langle \langle f_1,f_2,\dots,f_l\rangle \rangle_{\tilde G}=\ker \varphi$. Thus we obtain a relative presentation of $G$.
\begin{align*}
G=&\langle \tilde G\mid f_1,f_2,\dots,f_l\rangle_{\mathcal S_2}\\
=&\langle a_1,a_2,\dots,a_m,t_1,t_2,\dots,t_k\mid f_1,f_2,\dots,f_l, a_{l(i,j)}=[t_i,t_j],\\
&[a,b]=1,[a,b^t]=1, 1\leqslant i<j\leqslant k, a,b\in \mathcal A, t\in \mathcal T\rangle_{\mathcal S_2}.
\end{align*}
We focus on the module structure of $\langle \langle \mathcal A\rangle \rangle_{\tilde G}$, which is a free $T$-module with basis $\mathcal A$ \cite{wang2020dehn}. Let us define the ordered form of an element $f$ of $\langle \langle \mathcal A\rangle \rangle_{\tilde G}$, denoted by $\OF(f)$. The element $f$ can be written as $a_1^{\alpha_1}a_2^{\alpha_2}\dots a_m^{\alpha_m}$ as a group element, or as $\alpha_1 a_1+\alpha_2 a_2+\dots+\alpha_m a_m$ as a module element. Let $\prec$ be the well-order we constructed in \cref{prem1}. The ordered form $\OF(f)$ is of the form $a_1^{\mu_1}a_2^{\mu_2}\dots a_m^{\mu_m}$ such that
\begin{enumerate}[(1)]
\item $\mu_i\in \mathbb ZT$ for $1\leqslant i\leqslant m$, and each $\mu_i$ is of the form $\mu_i=\sum_{j=1}^{n_i} c_{ij}u_{ij}$ such that $c_{ij}\in \mathbb Z, u_{ij}\in \bar F$ and $u_{i1}\succ u_{i2}\succ \dots \succ u_{in_i}$;
\item $f=_{\tilde G} \OF(f)$,
\end{enumerate}
where
\[\bar F=\{t_1^{m_1}t_2^{m_2}\dots t_k^{m_k}\mid m_1,\dots,m_k\in \mathbb Z\}.\]
It has been shown that the ordered form is well-defined and for any $f,g\in \langle \langle \mathcal A\rangle \rangle_{\tilde G}$, $f=_{\tilde G}g$ if and only if $\OF(f)=_{F(\mathcal A\cup \mathcal T)} \OF(g)$ \cite{wang2020dehn}. For an element $f$ in $\langle \langle \mathcal A\rangle \rangle_{G}$, we define the ordered form of $f$ by lifting $f$ to $\tilde G$. The ordered form is useful for estimating the relative area of a word.
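The collecting process behind the ordered form can be illustrated for $m=k=1$: the input below is the list of conjugating exponents $e_j$ in $a^{t^{e_1}}a^{t^{e_2}}\dots$, and the output lists the (exponent, coefficient) pairs of $\mu$ sorted decreasingly by $\prec$. The encoding and helper names are our own illustration, not notation from the text.

```python
# A sketch (our own encoding) of the ordered form for m = k = 1.

def key1(e):
    """Deglex key for t^e with the convention t > t^{-1}."""
    return (abs(e), 1 if e >= 0 else 0)

def ordered_form(exponents):
    """Collect equal exponents and sort the monomials of mu decreasingly."""
    mu = {}
    for e in exponents:
        mu[e] = mu.get(e, 0) + 1
    return sorted(((e, c) for e, c in mu.items() if c != 0),
                  key=lambda ec: key1(ec[0]), reverse=True)

# a^{t} a^{t^{-1}} a^{t} a collects to a^{2t + t^{-1} + 1}:
assert ordered_form([1, -1, 1, 0]) == [(1, 2), (-1, 1), (0, 1)]
```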
Let $M$ be the free $T$-module with basis $\{a_1,a_2,\dots,a_m\}$ and $S$ be a submodule of $M$ generated by $\{f_1,f_2,\dots,f_l\}$. Then the $T$-module $A$ is isomorphic to $M/S$. We have the following connection between the Dehn function of the module $A$ and the relative Dehn function of $G$:
\begin{proposition}[Wang \cite{wang2020dehn}]
\label{relativeConnection1}
Let $G$ be a finitely generated metabelian group and let $A$ be defined as above. Then
\[\hat\delta_A(n)\preccurlyeq \tilde \delta_G(n) \preccurlyeq \max\{\hat\delta_A^3(n^3), n^6\}. \]
\end{proposition}
If $G$ is a semidirect product, the result can be slightly improved.
\begin{proposition}
\label{improved}
Let $T$ be a finitely generated abelian group and let $A$ be a finitely generated $T$-module. Form the semidirect product
\[G=A\rtimes T.\]
Then $\tilde\delta_G(n)\preccurlyeq \max\{\hat\delta_A^3(n^2), n^3\}.$
\end{proposition}
If $G$ happens to be finitely presented, we have
\begin{theorem}
\label{relativeConnection2}
Let $G$ be a finitely presented metabelian group. Then
\[\tilde \delta_G(n)\preccurlyeq \delta_G(n)\preccurlyeq \max\{\tilde \delta_G^3(n^3),2^n\}.\]
\end{theorem}
\section{Estimating the Relative Dehn Function}
\label{relativeDehn4}
Computing the relative Dehn function is harder than computing the Dehn function. Many techniques are no longer useful for the relative case. For the variety of metabelian groups, fortunately, the structure of groups in it is not complicated. The key is to understand the natural module structure of a finitely generated metabelian group.
First, let us list some known results for relative Dehn functions; they were computed by Fuh in her thesis \cite{Fuh2000}. Note that most of them give only an upper bound on the relative Dehn function.
\begin{theorem}[Fuh \cite{Fuh2000}]
\label{knownResult}
\begin{enumerate}[(1)]
\item The relative Dehn function of a wreath product of two finitely generated abelian groups is polynomially bounded.
\item The Baumslag-Solitar group $BS(1,2)$ has linear relative Dehn function.
\item Let $G=\tilde{BS}(n,m)=\langle a,t \mid (a^n)^t = a^m \rangle_{\mathcal S_2}$ where $m>2, m=n+1$. Then $\tilde\delta_G(n)\preccurlyeq n^3$.
\end{enumerate}
\end{theorem}
Now let us estimate the relative Dehn function from above for some concrete examples. By the \emph{cost} of converting $w_1$ to $w_2$ (resp. $w_2$ to $w_1$) in $G$ we mean the relative area of $w_2^{-1}w_1$ (resp. $w_1^{-1}w_2$) in $G$. If $w_2$ happens to be the identity, then the cost of converting $w_1$ to $w_2$ coincides with the area of $w_1$. By the definition of the area, it is not hard to see that if $w_1=_G w_2 =_G w_3$, and the costs of converting $w_1$ to $w_2$ and $w_2$ to $w_3$ are $N_1$ and $N_2$ respectively, then the cost of converting $w_1$ to $w_3$ is at most $N_1+N_2$. Essentially, to estimate the relative Dehn function from above we need to estimate the cost of converting a word to the identity.
To begin with, we consider the metabelianized Baumslag-Solitar group
\[\tilde{BS}(n,m)=\langle a,t\mid (a^n)^t=a^m\rangle_{\mathcal S_2}.\]
The normal subgroup generated by $a$ is a $\mathbb Z\langle t\rangle$-module. In this case, i.e., when the module is over the ring of Laurent polynomials in one variable and is generated by one element, the Dehn function of the module is well studied. The following theorem of Davis and Olshanskiy \cite{davis2011Subgroup} shows that the Dehn function of such a $\langle t \rangle$-module is a polynomial.
\begin{theorem}[Davis, Olshanskiy {\cite[Theorem 8.6]{davis2011Subgroup}}]
\label{moduleoverone}
Let $M=\langle a\rangle$ be the free module of rank one over the group ring $\mathbb Z\langle t\rangle$. Let $f=h(t)a$ where $h(x)$ is a polynomial of the form $d_nx^n+d_{n-1}x^{n-1}+\dots+d_0$. Then the Dehn function of the $\langle t\rangle$-module $M/\langle f\rangle$ is a polynomial. Furthermore, the degree of this polynomial is exactly one plus the maximal multiplicity of a (complex) root of $h(x)$ having modulus one.
\end{theorem}
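For the cyclic modules that appear below, $h(x)=nx-m$ is linear, so the theorem reduces to checking whether the single root $m/n$ has modulus one. A sketch of this check (the helper name `predicted_degree` is ours, for illustration only):

```python
from fractions import Fraction

def predicted_degree(n, m):
    """Degree of the Dehn function of Z<t>/<(nt - m)a> per the theorem:
    one plus the multiplicity of a modulus-one root of h(x) = nx - m."""
    return 1 + (1 if abs(Fraction(m, n)) == 1 else 0)

assert predicted_degree(2, 3) == 1   # n != m: linear
assert predicted_degree(3, 3) == 2   # n == m: quadratic
```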
Thus we have
\begin{proposition}
\label{metaBaum}
The metabelianized Baumslag-Solitar group $\tilde{BS}(n,m)=\langle a,t\mid (a^{n})^{t}=a^m\rangle_{\mathcal S_2}$ has at most cubic relative Dehn function when $n\neq m$ and has at most quartic relative Dehn function when $n=m$.
\end{proposition}
\begin{proof}
Note that in this case we have $|\mathcal A|=|\mathcal T|=1$, which simplifies the process a lot. Let $w$ be a word of length $l$ such that $w=_G 1$.
First we claim that converting $w$ to the ordered form $\OF(w)$ (defined in \cref{prem2}) costs at most $(4l-3)l^2$.
Suppose $w=t^{n_1}a^{m_1}t^{n_2}a^{m_2}\dots t^{n_s}a^{m_s}t^{n_{s+1}}$, where $n_i, m_i\in\mathbb Z$ for all $i$ and only $n_1,n_{s+1}$ can be zero. Since $w=_G1$, we have that
\[\sum_{i=1}^{s+1} n_i=0, \sum_{i=1}^{s}(|n_i|+|m_i|)+|n_{s+1}|=l.\]
The first condition comes from the fact that the image of $w$ is trivial in $T$ and the second condition comes from $|w|=l$.
Thus we can rewrite $w$, by inserting trivial words $tt^{-1}$, into the form
\[w=(a^{m_1})^{t^{-n_1}}(a^{m_2})^{t^{-(n_1+n_2)}}\dots (a^{m_s})^{t^{-(n_1+n_2+\dots+n_s)}}=a^\mu,\]
where $\mu=m_1t^{-n_1}+m_2t^{-(n_1+n_2)}+\dots+m_st^{-(n_1+n_2+\dots+n_s)}\in \mathbb ZT.$ We immediately have that $\deg(\mu)\leqslant l$, $|\mu|\leqslant l$, and $\|w\|\leqslant l$ by the definition.
The cost of converting $w$ to $(a^{m_1})^{t^{-n_1}}(a^{m_2})^{t^{-(n_1+n_2)}}\dots (a^{m_s})^{t^{-(n_1+n_2+\dots+n_s)}}$ is zero since we only insert trivial words in the absolute free group.
Next we will convert $(a^{m_1})^{t^{-n_1}}(a^{m_2})^{t^{-(n_1+n_2)}}\dots (a^{m_s})^{t^{-(n_1+n_2+\dots+n_s)}}$ to the ordered form of $w$. To do this, we have to rearrange the conjugates of $a$ so that the exponents are ordered by $\prec$ from high to low. In order to commute two conjugates in $w$, we have to insert commutators of the form
\[[a^{t^{-l_i}}, a^{t^{-l_j}}]=[a,a^{t^{l_i-l_j}}], \text{ where }l_i=n_1+n_2+\dots+n_i.\]
By \cref{relativeCommutative}, the area is bounded by $4l-3$. To rearrange $s$ conjugates of $a$ we need to insert at most $s^2$ many such commutators. Thus the cost of converting $w$ to $\OF(w)$ is bounded by $(4l-3)l^2$. The claim is proved.
Suppose $\OF(w)=a^{\mu}$, where $|\mu|\leqslant l$ and $\deg \mu\leqslant l$. We can conjugate $w$ by $t^l$ so that $\mu$ has only nonnegative powers of $t$. Thus we may assume that $|\mu|\leqslant l$ and $\deg \mu \leqslant 2l$. Further, the length of $\mu$ is bounded by $l$ by definition.
In this case, the module $A$ is isomorphic to $M/S$ where $M$ is a free $T$-module with basis $a$ and $S$ is its submodule generated by $\{(nt-m)a\}$. Consider the polynomial ring $R=\mathbb Z[t,t^{-1}]$ and its ideal $I=\langle nt-m,tt^{-1}-1\rangle$. We have that $A\cong R/I$. The Gr\"{o}bner basis of $I$ is $\{tt^{-1}-1, nt-m, mt^{-1}-n\}$. If we regard $\mu$ as an element of $I$, it can only be reduced by $nt-m$ since it has only positive powers of $t$. It follows that there exists a polynomial $\nu$, consisting only of powers of $t$, such that
\[\mu=(nt-m)\nu.\]
This equality also holds in the polynomial ring $\mathbb Z[t]$. When $n\neq m$, the Dehn function of the $\langle t\rangle$-module $\mathbb ZT/\langle nt-m\rangle$ is linear, by \cref{moduleoverone}. Thus there exists $C$ such that $|\nu|\leqslant C\|\mu\|+C$. We have that
\[a^\mu=_G(a^{nt-m})^{\nu}.\]
The area of the right-hand side is at most $Cl+C$. Converting the right-hand side to its ordered form costs at most $(4l-3)((m+n)(Cl+C))^2$ since the degree is less than $l$ and we have $(m+n)(Cl+C)$ many conjugates to rearrange. Thus the upper bound of $\widetilde \area(w)$ is at most $l^3$ up to equivalence when $n\neq m$.
When $n=m$, the Dehn function of the $\langle t\rangle$-module $\mathbb ZT/\langle nt-m\rangle$ is quadratic. Following the same process, we have that the upper bound of $\widetilde \area(w)$ is at most $l^4$ up to equivalence when $n=m$. This finishes the proof.
\end{proof}
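The reduction step $\mu=(nt-m)\nu$ in the proof above is ordinary polynomial division. A sketch (our own encoding: polynomials as `{power: coefficient}` dicts with nonnegative powers, exact arithmetic via fractions):

```python
from fractions import Fraction

def divide(mu, n, m):
    """Return (nu, remainder) with mu = (n*t - m) * nu + remainder."""
    deg = max(mu)
    rest = {e: Fraction(mu.get(e, 0)) for e in range(deg + 1)}
    nu = {}
    for e in range(deg, 0, -1):
        q = rest[e] / n            # eliminate the current leading term
        if q:
            nu[e - 1] = q
        rest[e] = Fraction(0)
        rest[e - 1] += q * m       # compensate the -m part of q*t^(e-1)*(nt-m)
    return nu, rest[0]

# mu = (2t - 3)(t + 3) = 2t^2 + 3t - 9, so with n = 2, m = 3:
nu, r = divide({2: 2, 1: 3, 0: -9}, 2, 3)
assert r == 0 and nu == {1: 1, 0: 3}   # nu = t + 3
```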
For the case $n=1$, the group $\tilde{BS}(1,m)\cong BS(1,m)$ is finitely presented. By \cref{relativeConnection2}, the Dehn function of $BS(1,m)$ is at most exponential. We will extend this idea of using the relative Dehn function to estimate the Dehn function in the next section.
One special case Fuh \cite[Theorem 6.1]{Fuh2000} considered is $m>2, m=n+1$. In this case, we have that $a=[a^n,t]$. Since $a$ itself is a commutator, the relative area of words like $[a^{t^k},a]$ is at most $4$ instead of depending linearly on $k$. Therefore we can improve the result in \cite[Theorem 6.1]{Fuh2000} by the following corollary of \cref{metaBaum}.
\begin{corollary}
\label{metaBaumCorollary}
The metabelianized Baumslag-Solitar group $\tilde{BS}(n,m)=\langle a,t\mid (a^{n})^{t}=a^m\rangle_{\mathcal S_2}, m>2, m=n+1$ has at most quadratic relative Dehn function.
\end{corollary}
The lamplighter groups are another interesting class of infinitely presented metabelian groups with a simple module structure. We have
\begin{proposition}
\label{lamplighter}
The lamplighter groups $L_m, m\geqslant 2$ have at most cubic relative Dehn function.
\end{proposition}
\begin{proof}
Consider the lamplighter group $L_m$ with the standard presentation.
\[L_m=\langle a,t\mid a^m=1, [a,a^{t^n}]=1, n\in\mathbb N\rangle.\]
By the discussion in \cref{prem2}, we have the following finite relative presentation:
\[L_m=\langle a,t\mid a^m=1, [a,a^t]=1\rangle_{\mathcal S_2}.\]
The rest of the proof is the same as the proof of \cref{metaBaum}. The only difference is that in this case the submodule is generated by $\{ma\}$.
\end{proof}
This slightly improves the estimation in \cite[Theorem B2]{Fuh2000}.
In particular, when $m=2$, i.e., for the lamplighter group $L_2$, we are able to improve the upper bound to linear.
\begin{proposition}
\label{relativeL2}
The lamplighter group $L_2$ has linear relative Dehn function.
\end{proposition}
\begin{proof}
The linear lower bound is given by \cref{moduleoverone}.
We choose the following relative presentation of $L_2$:
\[L_2=\langle a,t\mid a^2=1, [a,a^t]=1\rangle_{\mathcal S_2}.\]
For the upper bound, consider a word $w$ in $L_2$ that represents the identity. Thus $w$ has the form
\[w=t^{n_1}at^{n_2}at^{n_3}\dots t^{n_{2k}}at^{n_{2k+1}}, \text{ where }n_2,n_3,\dots,n_{2k}\neq 0.\]
Suppose the length of $w$ is $n$. Combining this with the fact that $w=1$, we have
\[2k+\sum_{i=1}^{2k+1}|n_i|=n, \sum_{i=1}^{2k+1} n_i=0.\]
Inserting $tt^{-1}$ or $t^{-1}t$, we can rewrite $w$ as the following form:
\[w=a^{t^{-n_1}}a^{t^{-(n_1+n_2)}}\dots a^{t^{-(n_1+n_2+\dots+n_{2k})}}.\]
Thus $w$ represents an element in $\oplus_{i\in \mathbb Z}\mathbb Z_2$, where $a^{t^i}$ is the generator of the $i$-th copy of $\mathbb Z_2$. Since $w=1$, every element in the set $\{-n_1,-(n_1+n_2),\dots,-(n_1+n_2+\dots+n_{2k})\}$ occurs an even number of times in the sequence $-n_1,-(n_1+n_2),\dots,-(n_1+n_2+\dots+n_{2k})$. Our goal is to gather the conjugates of $a$ with the same exponent together at a linear cost with respect to $n$.
Since $a^{-1}=a$, we notice that
\[a^{t^s}a^{t^l}=(aa^{t^{l-s}})^{t^s}=[a,t^{l-s}]^{t^s}, l,s\in\mathbb Z.\]
Thus any pair of two consecutive conjugates of $a$ is a commutator. It follows that any such pair commutes with any other pair of this form without any cost inside the variety of metabelian groups.
For convenience, let $m_i=-\sum_{j=1}^{i} n_{j}$ for $1\leqslant i\leqslant 2k$. We now turn the problem of estimating the relative area of $w$ into a problem of cancelling numbers in a sequence and estimating the cost. Consider the sequence of numbers
\[m_1,m_2,\dots,m_{2k}.\]
The goal is to cancel all the pairs of the same value. We have three operations allowed:
\begin{enumerate}[(i)]
\item Cancel two consecutive numbers of the same value without any cost.
\item Commute a pair of consecutive numbers with another pair of consecutive numbers next to it without any cost.
\item Commute two consecutive numbers $c,d$ with a cost of $|c-d|$.
\end{enumerate}
Applying all three operations to the original sequence many times, the result might seem chaotic. To analyze the process, for a sequence of numbers we define $\iota(m_i)$ to be the position of $m_i$ in the sequence. At the beginning, $\iota(m_i)=i$. Then we define $\sigma(m_i,m_j)=|\iota(m_i)-\iota(m_j)| \bmod 2$. So $\sigma(m_i,m_j)=0$ if $m_i$ and $m_j$ are an even number of positions apart and $\sigma(m_i,m_j)=1$ if they are an odd number of positions apart. We notice that
\begin{enumerate}[(a)]
\item operations from (i) and (ii) do not change $\sigma(m_i,m_j)$;
\item if $m_i$ is next to $m_j$, applying the operation (iii) to commute $m_i$ and $m_j$ will change all values of $\sigma(m_i,m_l), \sigma(m_j,m_l)$ for $l\neq i ,j$ but all other values of $\sigma$ remain the same.
\end{enumerate}
From the above observation, we have that
\begin{enumerate}[(1)]
\item if $\sigma(m_i,m_j)=0$ and $i<j$, $m_j$ can be moved to the position next to $m_i$ just using operations from (ii).
\item if $\sigma(m_i,m_j)=0$, $i<j$ and $m_i=m_j$, then $m_i$ and $m_j$ can be cancelled using just operations from (i) and (ii).
\item for $m_i,m_j,m_l$ such that $m_i=m_j$, $\sigma(m_i,m_j)=1,\sigma(m_i,m_l)=0$, we can cancel $m_i,m_j$ with the cost of $|m_i-m_l|$.
\end{enumerate}
(1) can be achieved by commuting consecutive pairs of numbers. (2) is a direct consequence of (1). Let us show how to achieve (3). By (1), we can move $m_l$ next to $m_i$. Then by using operation (ii), the pair $m_im_l$ (or $m_lm_i$) can be moved to the position next to $m_j$, resulting in the form $m_im_lm_j$ or $m_jm_lm_i$. Finally, we commute $m_l$ and $m_j$ using operation (iii) at a cost of $|m_l-m_j|=|m_i-m_l|$ (recall that $m_i=m_j$), and then cancel $m_im_j$ by operation (i).
Now we are ready to estimate the cost of cancelling the sequence $m_1,m_2,\dots,m_{2k}$ to the empty sequence. By (2), we can assume that we have already cancelled all the pairs $m_i,m_j$ where $\sigma(m_i,m_j)=0, m_i=m_j$ using operations (i) and (ii). This step costs nothing and does not change any $\sigma(m_i,m_j)$ for $m_i,m_j$ remaining in the resulting sequence. Let the remaining elements after these cancellations be $m_{i(1)}, m_{i(2)},\dots, m_{i(4s)}$ for some $2s\leqslant k$ and $i(1)<i(2)<\dots<i(4s)$. The reason an even number of pairs remains is that if the number of remaining pairs were odd, there would be a pair of numbers of the same value an even number of positions apart. The remaining sequence satisfies the following properties:
\begin{enumerate}[(a)]
\item $\sigma(m_{i(s)},m_{i(l)})\equiv i(s)-i(l) \pmod 2$,
\item if $m_{i(s)}=m_{i(l)}$ then $\sigma(m_{i(s)},m_{i(l)})=1$,
\item $\sigma(m_{i(s)},m_{i(l)})=\sigma(m_{i(s')},m_{i(l')})$ for $m_{i(s)}=m_{i(s')},m_{i(l)}=m_{i(l')}$.
\end{enumerate}
Here property (a) is true because in the original sequence $\iota(m_{i(s)})=i(s)$ and we only used operations (i) and (ii), which do not change $\sigma(m_{i(s)},m_{i(l)})$. (b) and (c) follow from the definition of $\sigma$ and the remaining sequence.
\begin{figure}[H]
\centering
\includegraphics[width=3.5cm]{figure9.eps}
\caption{the corresponding graph of the sequence 2,3,5,3,5,8,2,8}
\end{figure}
We define the weighted graph $\Gamma_0$ associated with $(m_{i(1)}, m_{i(2)},\dots, m_{i(4s)})$, where the vertex set is $\{m_{i(1)}, m_{i(2)},\dots, m_{i(4s)}\}$ and there is an edge of weight $|m_{i(s)}-m_{i(l)}|$ connecting $m_{i(s)}$ and $m_{i(l)}$ if $\sigma(m_{i(s)},m_{i(l)})=0$. Note that this graph is invariant under operations (ii) and may have multiple edges.
By (3), we are allowed to cancel $m_{i(s)},m_{i(l)}$ at a cost of $|m_{i(s)}-m_{i(j)}|$ for some $m_{i(j)}$ with $\sigma(m_{i(s)},m_{i(j)})=0$. After the cancellation, since we used operation (iii) once, $\sigma(m_{i(j)},m_{i(j')})$ changes to $0$ for some $m_{i(j')}$ with $m_{i(j)}=m_{i(j')}$. Therefore we can then cancel $m_{i(j)},m_{i(j')}$ without any cost. In summary, we have
\begin{enumerate}[(1)]
\setcounter{enumi}{3}
\item for $m_{i(s)} \neq m_{i(j)}$ with $\sigma(m_{i(s)},m_{i(j)})=0$, we can cancel a pair of numbers equal to $m_{i(s)}$ and a pair of numbers equal to $m_{i(j)}$ at a cost of $|m_{i(s)}-m_{i(j)}|$, where $\sigma(m_i,m_j)$ remains the same for numbers that have not been cancelled.
\end{enumerate}
(4) deletes an edge $(m_{i(s)},m_{i(j)})$ in the graph. If no edge connecting $m_{i(s)}$ and $m_{i(j)}$ remains, we delete the two vertices $m_{i(s)}, m_{i(j)}$. The cost is the weight of that edge. Let $\mathcal C$ be a cancellation of $\Gamma_0$, where $\mathcal C$ consists of an ordered sequence of edges in $\Gamma_0$ and we cancel the edges in the order of the sequence. Thus the total cost of a cancellation $\mathcal C$ to the empty graph is just the sum of the weights of the edges in $\mathcal C$. Every cancellation can be associated with a path $p_\mathcal C$ that passes through the sequence of edges in $\mathcal C$ in the same order.
Now we delete edges in the following way. We first delete the edge $(m_{i(1)},m_{i(2)})$, since $\sigma(m_{i(1)},m_{i(2)})=0$. We let the resulting graph be $\Gamma_1$.
\begin{figure}[H]
\centering
\includegraphics[width=10cm]{figure8.eps}
\caption{two different cancellations $\mathcal C,\mathcal C'$ and their corresponding $p_{\mathcal C},p_{\mathcal C'}$. The total cost of $\mathcal C$ is 3 and the total cost of $\mathcal C'$ is 8.}
\end{figure}
Inductively, $\Gamma_{i+1}$ is obtained by deleting an edge $(m_{i(s)},m_{i(j)})$ where $s,j$ are the smallest indices remaining in $\Gamma_i$. $\Gamma_s$ will be an empty graph since each deletion removes four numbers from the sequence.
Let us estimate the cost from $\Gamma_0$ to $\Gamma_s$. Since each time we cancel pairs of numbers following the order of the original sequence $m_1,m_2,\dots,m_{2k}$ (we always cancel the first two remaining numbers), the cost is bounded by
\[\sum_{i=1}^{2k-1} |m_{i+1}-m_i|=\sum_{i=2}^{2k} |n_i|<n.\]
This inequality can also be realized by the following interpretation: the sequence $m_{i(1)}, m_{i(2)},\dots, m_{i(4s)}$ defines a path $p$ in $\Gamma_0$ (since $\sigma(m_{i(j)},m_{i(j+1)})=0$) with $p(j)=m_{i(j)}$, and the weight of the path $p$ is bounded by $n$ by the definition of the $m_i$. The path $p$ happens to be the path associated with this cancellation. It follows that the cost of the cancellation is bounded by the total weight of $p$. Thus the total cost is bounded by $n$.
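The telescoping bound can be checked on a small instance. In this sketch (our own encoding, not notation from the text) a word is described by its exponent tuple $(n_1,\dots,n_{2k+1})$, the partial sums give $m_i=-(n_1+\dots+n_i)$, and the cancellation cost telescopes to $\sum_{i\geqslant 2}|n_i|<n$.

```python
def cost_bound(ns, k):
    """Compute sum |m_{i+1} - m_i| for the partial sums m_i = -(n_1+...+n_i)."""
    ms, s = [], 0
    for ni in ns[:2 * k]:
        s += ni
        ms.append(-s)
    return sum(abs(ms[i + 1] - ms[i]) for i in range(len(ms) - 1))

# Example with k = 2; the word length is n = 2k + sum |n_i| = 4 + 8 = 12.
ns = (1, -2, 3, -1, -1)
assert sum(ns) == 0                                   # w maps to the identity in T
assert cost_bound(ns, 2) == sum(abs(x) for x in ns[1:4]) == 6
```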
By \cref{relativeCommutative}, the total cost of converting
\[a^{t^{m_1}}a^{t^{m_2}}\dots a^{t^{m_{2k}}}\]
to the identity is bounded by $4n-3$. This finishes the proof.\end{proof}
\section{Relative Dehn Functions and Subgroup Distortions}
\label{relativeDehn5}
So far for all the examples considered in \cite{Fuh2000} and \cref{relativeDehn4}, only the upper bounds of their relative Dehn functions are estimated. Similar to the case of the Dehn function, it is genuinely much harder to estimate the lower bound. In this section, we will connect the relative Dehn function of a finitely generated metabelian group to subgroup distortions in a wreath product of two free abelian groups. This connection provides a new method to estimate the lower bound for the relative Dehn function and yields a sequence of examples of finitely generated metabelian groups with relative Dehn function larger than $n^k$ for arbitrary $k\in \mathbb N$.
Let $G$ be a finitely generated group with a finite generating set $X$ and let $H$ be a subgroup of $G$ with finite generating set $Y$. The \emph{distortion function} of $H$ in $G$ is
\[\Delta_H^G(n)=\sup\{|w|_Y\mid w\in H, |w|_X\leqslant n\}.\]
For example, the subgroup $\langle a \rangle$ in the Baumslag-Solitar group $BS(1,2)=\langle a,t\mid a^t=a^2\rangle$ is exponentially distorted since $a^{t^n}=a^{2^n}$. It is not hard to check that infinite subgroups of a finitely generated abelian group are undistorted.
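The exponential distortion of $\langle a\rangle$ in $BS(1,2)$ can be seen by iterating the defining relation; a sketch (the helper names are ours, for illustration only):

```python
# In BS(1,2) = <a,t | a^t = a^2>, each conjugation by t doubles the exponent
# of a, so t^{-n} a t^n, of length 2n + 1 in {a, t}, equals a^{2^n},
# of length 2^n in the subgroup <a>.

def subgroup_length(n):
    """Length of t^{-n} a t^n in <a>: apply a^t = a^2 n times."""
    e = 1
    for _ in range(n):
        e *= 2
    return e

def ambient_length(n):
    """Length of the word t^{-n} a t^n in the ambient generators {a, t}."""
    return 2 * n + 1

assert subgroup_length(5) == 32 and ambient_length(5) == 11
assert subgroup_length(20) > ambient_length(20) ** 3   # outgrows any fixed power
```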
Let $A$ and $T$ be free abelian groups with bases $\{a_1,a_2,\dots,a_m\}$ and $\{t_1,t_2,\dots,t_k\}$ respectively. Consider the wreath product $W:=A\wr T$. The base group $B:=\langle \langle A\rangle \rangle$ is a $T$-module. For a finite subset $\mathcal X=\{f_1,f_2,\dots,f_l\}$ of $B$, let $H$ be the subgroup of $W$ generated by $\mathcal X\cup\{t_1,t_2,\dots,t_k\}$ and $G$ be the group $W/\langle \langle \mathcal X\rangle \rangle$. We denote by $\pi: W\twoheadrightarrow T$ the canonical quotient map.
\begin{theorem}
\label{subgroupDistortion}
Let $W,H,G$ be groups defined as above, then
\[\Delta_{H}^W(n) \preccurlyeq \tilde \delta_G^k(n)+n^k,\tilde\delta_G(n)\preccurlyeq\max\{n^3, (\Delta_{H}^W(n^2))^3\}.\]
In particular, if $k=1$,
\[\Delta_{H}^W(n) \preccurlyeq \tilde \delta_G(n).\]
\end{theorem}
\begin{proof}
First we show the following lemma.
\begin{lemma}
Let $M$ be the $T$-module $B/\langle \langle \mathcal X \rangle \rangle$. Then $\hat\delta_M(n)\preccurlyeq\Delta_H^W(n)\preccurlyeq \hat\delta_M^k(n)+n^k$.
\end{lemma}
\begin{proof}
Let $g\in H$ with $|g|_W\leqslant n$. Note that $g$ can be written as $g_0t$ where $g_0\in B$ and $t:=\pi(g)\in T$, by appending $t$ at the end. Since $|t|_T=|\pi(g)|_T \leqslant |g|_W\leqslant n$, we have $|g_0|_W\leqslant 2|g|_W$. Thus, we have
\[|g|_H=|g_0t|_H\leqslant |g_0|_H+|t|_H\leqslant |g|_W+|g_0|_H.\]
Let us estimate $|g_0|_H$. Assume that the ordered form $OF(g_0)$ of $g_0$ is $a_1^{\mu_1} a_2^{\mu_2}\dots a_m^{\mu_m}$. First note that $\deg \mu_i\leqslant |g|_W$ for all $i$. Let $\alpha_1,\alpha_2,\dots,\alpha_l$ be elements in $\mathbb Z T$ such that $g_0=f_1^{\alpha_1}f_2^{\alpha_2}\dots f_l^{\alpha_l}$ and $\sum_{i=1}^l |\alpha_i|$ is minimized. By Theorem 3.4 in \cite{davis2011Subgroup},
\[|g_0|_H=\sum_{i=1}^l |\alpha_i|+\mathrm{reach}(g_0),\]
where $\mathrm{reach}(g_0)$ is the length of the shortest loop starting at $0$ in the Cayley graph of $T$ that passes through all points in the set $\cup_{i=1}^l \supp \alpha_i$. By \cite[Lemma 6.7]{wang2020dehn}, for all $i$, $\deg (\alpha_i)\leqslant |g|_W+C\sum_{i=1}^l |\alpha_i|$ for some constant $C$. It follows that $\cup_{i=1}^l \supp \alpha_i$ lies in the ball $B_0(|g|_W+C\sum_{i=1}^l |\alpha_i|)$ of radius $|g|_W+C\sum_{i=1}^l |\alpha_i|$ centered at $0$ in the Cayley graph of $T$. Since there exists a path of length at most $(2(|g|_W+C\sum_{i=1}^l |\alpha_i|)+1)^k$ passing through all the points in $B_0(|g|_W+C\sum_{i=1}^l |\alpha_i|)$,
\[\mathrm{reach}(g_0) \leqslant (2(|g|_W+C\sum_{i=1}^l |\alpha_i|)+1)^k.\]
Therefore, we have
\[\sum_{i=1}^l |\alpha_i|\leqslant |g|_H\leqslant |g|_W+\sum_{i=1}^l |\alpha_i|+2^k(|g|_W+C\sum_{i=1}^l |\alpha_i|)^k.\]
Since $\sum_{i=1}^l |\alpha_i|=\widehat \area_M(g_0)$ by definition and $\|g_0\|\leqslant 2|g|_W$, we have the following estimate:
\[\hat\delta_M(2|g|_W)\leqslant |g|_H \leqslant |g|_W+\hat\delta_M(2|g|_W)+2^k(|g|_W+C\hat\delta_M(2|g|_W))^k.\]
\end{proof}
By \cref{relativeConnection1}, we have
\[\Delta_{H}^W(n) \preccurlyeq \tilde \delta_G^k(n)+n^k.\]
Last, by \cref{improved},
\[\tilde\delta_G(n)\preccurlyeq\max\{n^3, (\Delta_{H}^W(n^2))^3\}.\]
\end{proof}
\cref{subgroupDistortion} connects the subgroup distortion function and the relative Dehn function, as it provides a way to estimate the relative Dehn function from below. One special case is that in which both $A$ and $T$ are free abelian of rank 1. Davis and Olshanskiy \cite{davis2011Subgroup} showed that subgroups of $W=\langle a\rangle \wr \langle t\rangle$ have polynomial distortion functions and, moreover, that for each $l\in \mathbb N$ the subgroup $H_l:=\langle w_l,t \rangle$, where $w_l=[\dots[[a,t],t],\dots,t]$ is the $(l-1)$-fold commutator, is isomorphic to $\mathbb Z\wr \mathbb Z$ and has distortion $n^l$. It follows immediately that
\begin{corollary}
\label{relativeLowerbound}
Let $W=\langle a\rangle \wr \langle t \rangle$ be the wreath product of two infinite cyclic groups. For each $l\in \mathbb N$, let $w_l=[\dots[[a,t],t],\dots,t]$ be the $(l-1)$-fold commutator. Finally, let $G_l=W/\langle \langle w_l\rangle \rangle$. Then we have
\[\tilde \delta_{G_l}(n)\succcurlyeq n^l.\]
\end{corollary}
Let us consider the case when the rank of $T$ is 1, that is, when $k=1$. The following result characterizes the distortion function of subgroups when $k=1$.
\begin{theorem}[Davis, Olshanskiy, {\cite[Theorem 1.2]{davis2011Subgroup}}]
\label{subgroupoverZ}
Let $A$ be a finitely generated abelian group.
\begin{enumerate}[(1)]
\item For any finitely generated infinite subgroup $H\leqslant A\wr \mathbb Z$ there exists $l\in \mathbb N$ such that the distortion of $H$ in $A\wr \mathbb Z$ is
\[\Delta_{H}^{A\wr \mathbb Z}(n)\asymp n^l.\]
\item If $A$ is finite, then $l=1$; that is, all subgroups are undistorted.
\item If $A$ is infinite, then for every $l\in \mathbb N$, there is a 2-generated subnormal subgroup $H$ of $A\wr \mathbb Z$ having distortion function
\[\Delta_H^{A\wr \mathbb Z}(n)\asymp n^l.\]
\end{enumerate}
\end{theorem}
It follows that
\begin{theorem}
\label{overZ}
Let $G$ be a finitely generated metabelian group such that $\mathrm{rk}(G)=1$. Then the relative Dehn function of $G$ is polynomially bounded. If in addition $G$ is finitely presented, the Dehn function of $G$ is asymptotically bounded above by an exponential function.
\end{theorem}
\begin{proof}
By passing to a finite index subgroup, we can assume that there exists a short exact sequence
\[1\to A\to G\to \mathbb Z\to 1,\]
where $A$ is abelian.
We denote by $T=\langle t\rangle$ the copy of $\mathbb Z$ in the short exact sequence. Since every short exact sequence $1\to A\to G\to \mathbb Z\to 1$ splits, $G$ is isomorphic to the semidirect product $A\rtimes T$.
Since $A$ is a normal subgroup of $G$, it is finitely generated as a $T$-module. Thus, there exists a free $T$-module $M$ of rank $m$ and a submodule $S=\langle f_1,f_2,\dots,f_l\rangle$ such that $A\cong M/S$. We have that
\[G\cong (M/S)\rtimes T\cong (M\rtimes T)/\langle \langle f_1,f_2,\dots,f_l\rangle \rangle.\]
Let $\bar A$ be a free abelian group of rank $m$ and $W:=\bar A \wr T$ be the wreath product of $\bar A$ and $T$. Then there is an isomorphism $\varphi: M\rtimes T\to W$. We have
\[G\cong W/\langle \langle \varphi(f_1),\varphi(f_2),\dots,\varphi(f_l)\rangle \rangle.\]
Let $H$ be the subgroup in $W$ generated by $\{\varphi(f_1),\varphi(f_2),\dots,\varphi(f_l),t\}$. By \cref{subgroupDistortion}, we have that
\[\tilde \delta_G(n) \preccurlyeq\max\{n^3, (\Delta_{H}^W(n^2))^3\}.\]
By \cref{subgroupoverZ}, $\Delta_{H}^W(n)$ is bounded by a polynomial. Therefore the relative Dehn function $\tilde\delta_G(n)$ of $G$ is polynomially bounded. This proves the claim for the relative Dehn function.
If $G$ is finitely presented, then by \cref{relativeConnection2}, since the relative Dehn function is polynomially bounded, the Dehn function $\delta_G(n)$ is bounded above by an exponential function.
\end{proof}
\medskip
\section{Introduction}
\label{intro}
Multiple groups are aggregating data on COVID-19 including \cite{d1,d2,d3,d4}. By and large, they rely on a large number of other sources for their data. A comprehensive list of data sources is given by \cite{d1}. While the total number of daily cases and deaths being documented is without precedent, surveillance of other conditions has also produced comparable data. Daily data was produced during the Haitian cholera epidemic \cite{e0} as well as the West African Ebola epidemic \cite{e1}. Similarly, the World Health Organization and various governmental bodies track and published influenza data on a weekly basis.
For COVID-19, many countries and regions show clear oscillations in their daily counts of both new cases and new deaths. Over the course of just a few days, counts can experience dramatic fluctuations. For example, between July 6 and 7, the number of deaths in Arizona jumped from 1 to 117 \cite{d3}. On June 19th, Brazil had 55,209 new cases and then on June 21st, it had 16,851 \cite{d4}.
A number of studies have examined the oscillations \cite{e2,e3,b1,b2,b3,b4,b5,b7}. A couple of these point out that oscillations have been observed in prior epidemics \cite{e2,e3}. However, they also note that this occurred over a considerably longer time scale. It has been shown that the phases of the oscillations in different regions tend to align with one another more often than not \cite{b7}. Cases typically reach a maximum close to Friday before falling to a minimum close to Monday \cite{b2}.
A few different hypotheses have been proposed to explain the oscillations. New York City and Los Angeles county maintain their own data on COVID-19. According to \cite{b3}, in these datasets, dates are back-dated, and the oscillations are not present in their mortality data. It was also shown that oscillations in the number of infections resulted from oscillations in the number of tests being administered on a given day.
Even if the oscillations are caused by data acquisition practices, their presence is problematic for more than just mathematical analysis. They indicate that the data points contain a large amount of error. Identifying the sources of error could give insight into the shortcomings of data acquisition practices, and thereby lead to more effective pandemic response strategies.
There is an absence of any standard procedures for reporting and acquiring COVID-19 data. Estimation of the total number of infections has been greatly hindered by tests not being available. A number of seroprevalence studies \cite{s1} have sought to help assess prevalence. Additionally, analyses of excess deaths have indicated that the true death toll is probably substantially higher than what is reported by data aggregators \cite{c1}.
The seven-day moving average has become an immensely popular method for suppressing oscillations in COVID-19 data. It's in widespread use by both news agencies and researchers. While superficially simple, its precise behavior is complex and unintuitive. Generally speaking, oscillations are attenuated rather than removed. It also flips the phase of oscillations if they lie within specific frequency ranges. Many papers have looked at trend analysis and forecasting. Some of these, such as \cite{a1,a2,a3}, have employed alternative smoothing methods such as exponential smoothing. In contrast, this paper employs methodology from signal and audio processing theory. Spectral techniques are used for improved data smoothing as well as the extraction of shorter-term fluctuations. Additionally, spectral analysis and modeling techniques are used to resynthesize time-series oscillations. As a source of aggregated COVID-19 data, the repository from \cite{d1} is used.
\section{Spectral Analysis And Resynthesis}
Spectral analysis was performed in \cite{e3,b3,b4,b5} to study the 7-day oscillation. Oscillations shorter than 7 days have also been observed by multiple researchers, such as \cite{e3,b3,b5}.
Our own spectral analysis found 7-day oscillations for both cases and deaths in many, but not all, countries. A number of countries exhibit a 3.5-day oscillation, and a few were found to have 2.33-day oscillations. Both of these periods are 7 days divided by an integer; thus, they are potentially harmonics. Harmonics are frequently observed in physical systems, and they are always present in more complex periodic waveforms (e.g., square waves and triangle waves).
\begin{figure*}[t]
\centerline{\includegraphics[scale=0.70]{Fig1.pdf}}
\vspace*{-3mm}
\caption{\label{fig:spectrum}{\it Spectrograms of the derivatives of daily counts for both cases and deaths. Vertical dotted lines mark periods of 7.0, 3.5, and 2.33 days from left to right. Calculations use the most recent 193 days of data and a Hanning window function. Magnitudes are normalized to the largest value between the frequencies 0.1/day and 0.475/day.}}
\vspace*{3mm}
\end{figure*}
Three papers \cite{e3,b1,b4} used first differences in their spectral analyses. Conceptually, a first difference is very similar to a derivative. Both amplify higher frequencies and attenuate lower ones. First differences have a phase response that varies linearly with frequency, while true derivatives have a phase response that is constant. For time-series data, attenuation of low frequencies has the effect of removing long-term trends. For COVID-19 data, it also makes the peaks for the 3.5 and 2.33-day oscillations more prominent relative to the peak for 7-day oscillations. Spectrograms of the derivatives are shown in Fig. \ref{fig:spectrum} for six countries.
As of this writing, there are only around 185 days in the COVID-19 time-series that are mathematically significant to this paper's analysis. From a signal processing perspective, this is an extraordinarily small number of data points. Sections of the daily counts also contain exponential growth and decay. Selecting different time-ranges can cause the locations of spectral peaks to wobble and sometimes disappear altogether. Applying window functions is also problematic as data becomes blurred, leaving spectral features more difficult to discern. It was observed that when Fourier transforms were instead applied to the derivatives of the time-series, spectral variations were reduced and features became more clear. This is illustrated in Fig. \ref{fig:time_spectrum} using a sliding time-window.
In this paper, derivatives are calculated in the frequency domain. Time-domain calculations can also work; however, various spectral, phase, and time-shifting artifacts generally occur. Depending on the particular use case, such artifacts might need to be accounted for. A brief comparison of several differentiation methods is found in Appendix \ref{sec:CompareDeriv}. Methodology for computing frequency domain derivatives is outlined in Appendix \ref{sec:FreqDomainDeriv}.
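The frequency-domain derivative can be sketched in a self-contained way. The snippet below is an illustrative implementation of the standard technique (not the authors' code): scale each DFT bin by $i 2\pi f_k$ with signed frequencies, then invert. A naive $O(N^2)$ DFT is used so that no external libraries are needed.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def spectral_derivative(x):
    """Differentiate a (periodic) series by scaling DFT bin k by i*2*pi*f_k."""
    N = len(x)
    Y = []
    for k, Xk in enumerate(dft(x)):
        f = k / N if k < N / 2 else (k - N) / N  # signed frequency, cycles/sample
        if 2 * k == N:
            f = 0.0                              # Nyquist bin: zero it out
        Y.append(1j * 2 * math.pi * f * Xk)
    return [v.real for v in idft(Y)]

# Sanity check: for x[n] = sin(2*pi*n/N), the derivative with respect to n
# is (2*pi/N)*cos(2*pi*n/N).
N = 32
dx = spectral_derivative([math.sin(2 * math.pi * n / N) for n in range(N)])
```

Unlike a first difference, this operator applies a constant (90-degree) phase shift at every frequency, which is why it avoids the frequency-dependent phase artifacts mentioned above.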
\begin{figure*}[t]
\centerline{\includegraphics[scale=0.70]{Fig2.pdf}}
\vspace*{-3mm}
\caption{\label{fig:time_spectrum}{\it Time-dependent spectra of Brazil's new daily deaths using a 25-day sliding window. For each time-window, the spectra are normalized to the largest value above the frequency of 0.1/day. Below 0.1/day, values exceeding unity are clipped.}}
\vspace*{3mm}
\end{figure*}
\subsection{Resynthesis}
\label{sec:resynth}
Reproducing time-series oscillations can aid in understanding the weekly progression of the epidemic. It can be used to forecast minimums, maximums, and plateaus in the daily counts of infections, deaths, tests, and other data. It can also help identify varying data acquisition practices, and it's potentially useful for detecting irregularities in data.
Given a time-series $x[n]$ and its $N$-point Fourier transform $X[k]$, the magnitude and phase angle of the frequency components are respectively given by,
\begin{equation}
a[k] = \frac{2}{N} \Big| X[k] \Big|,
\label{eq:magnitude}
\end{equation}
\begin{equation}
\theta [k] = \tan^{-1} \left( \frac{ \operatorname{Im} (X[k]) }{ \operatorname{Re} (X[k]) } \right).
\label{eq:angle}
\end{equation}
\noindent Using Eqs. \ref{eq:magnitude} and \ref{eq:angle}, and zero-indexed arrays, the time-series can then be reconstructed as,
\begin{equation}
x[n] = \sum_{k=0}^{N/2-1} a[k] \cos{ \left( \frac{2\pi nk}{N} + \theta[k] \right)}.
\label{eq:reconstruct}
\end{equation}
Using sinusoidal resynthesis, the oscillations were recreated for the three aforementioned frequencies. The derivative of the time-series was taken, followed by the Fourier transform. Then Eq. \ref{eq:reconstruct} was symbolically integrated with respect to $n$, and the summation was taken only for values of $k$ that corresponded to one of the three frequencies. The result is a waveform having only the oscillations in the original time-series.
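The magnitude, phase, and reconstruction formulas above translate directly into code. The following sketch is an illustrative implementation (not the authors'); it uses `atan2` rather than a bare arctangent so the phase lands in the correct quadrant, and it keeps only a chosen set of bins, as in the resynthesis described above.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def resynthesize(x, keep_bins):
    """Rebuild only the components of x at the DFT bins in keep_bins
    (0 < k < N/2) via a[k] * cos(2*pi*n*k/N + theta[k])."""
    N = len(x)
    X = dft(x)
    amp = {k: 2.0 / N * abs(X[k]) for k in keep_bins}                 # magnitude
    pha = {k: math.atan2(X[k].imag, X[k].real) for k in keep_bins}    # phase
    return [sum(amp[k] * math.cos(2 * math.pi * n * k / N + pha[k])
                for k in keep_bins)
            for n in range(N)]

# A 28-day series: an offset plus a 7-day oscillation (bin k = 28/7 = 4).
N = 28
series = [5.0 + 2.0 * math.cos(2 * math.pi * 4 * n / N + 0.5) for n in range(N)]
weekly = resynthesize(series, keep_bins=[4])  # recovers 2*cos(2*pi*4*n/N + 0.5)
```

Selecting the bins corresponding to the 7-, 3.5-, and 2.33-day periods reproduces the oscillatory part of a series while discarding the trend, as done in the paper.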
The input dataset is aggregated on a daily basis. To improve time alignment, each data point was treated as having occurred at 12 noon on its respective day. Resynthesis calculations were then performed using minute-level time-resolution, and the result of this is shown in Fig. \ref{fig:waveforms} for five countries. The waveform shapes can change over time, and revisions are often made to existing data points.
\begin{figure*}[t]
\centerline{\includegraphics[scale=0.70]{Fig3.pdf}}
\vspace*{-3mm}
\caption{\label{fig:waveforms}{\it Resynthesis of the time-series' three oscillations using a 183-day analysis window.}}
\vspace*{3mm}
\end{figure*}
\section{Smoothing and Isolation}
The behavior of COVID-19 time-series data can be better understood and better characterized by altering its spectral properties. In fact, spectral alteration is the precise mechanism by which the seven-day moving average smooths data. In this study, three methods were used for modifying the spectral content of U.S. time-series data: moving average filtering, infinite impulse response (IIR) filtering, and frequency domain processing. As an important first and final step, the data underwent pre- and post-processing.
\subsection{Properties of the Seven-Day Moving Average}
\label{sec:movingAverage}
In the context of digital signal processing, the seven-day moving average is a finite impulse response (FIR) filter. Its time-centered form is non-causal, and it can be denoted by,
\begin{equation}
y[n] = \frac{1}{7} \sum_{k=-3}^{3} x[n-k],
\label{eq:MovAve}
\end{equation}
\noindent where $x$, $y$, and $n$ are the input, the smoothed output, and the time-index, respectively. Taking the Z-transform, rearranging, and simplifying yields the filter's transfer function,
\begin{equation}
H(z) = \frac{1}{7} \left( z^{-3} + z^{-2} + z^{-1} + z^{0} + z^{1} + z^{2} + z^{3} \right).
\label{eq:MovAveTf}
\end{equation}
\noindent Substituting $e^{j 2 \pi f}$ for $z$ in Eq. \eqref{eq:MovAveTf} and taking the absolute value results in the frequency response. It follows that the frequency response in decibels is given by,
\begin{equation}
H_{dB}(f) = 20 \log_{10}\left|{H}\left(e^{j 2 \pi f}\right)\right|,
\label{eq:MovAveTfDb}
\end{equation}
\noindent where $f$ is the frequency. The continuous frequency phase response is obtained by substituting ${H}\left(e^{j 2 \pi f}\right)$ for $X[k]$ in Eq. \ref{eq:angle}.
The frequency response of the seven-day moving average is shown in Fig. \ref{fig:FiltComp}. The three nulls in the plot occur at frequencies with periods of 7 days, 3.5 days, and 2.33 days. The presence of nulls indicates that the filter completely removes the respective frequencies. All other frequencies, including those with periods less than 7 days, are still present. While they are reduced in magnitude, the residue is enough to be the sole cause of the jaggedness seen in media reports. The jaggedness is illustrated in Fig. \ref{fig:MovingAverage}. To further muddle the data, frequencies on certain intervals have inverted phases. This is illustrated in the phase response in Fig. \ref{fig:MaPhase}. A second application of the same moving average filter will correct the phase inversions.
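These nulls and phase inversions are easy to verify numerically. The minimal sketch below (standard DSP arithmetic, not tied to any particular dataset) evaluates the transfer function above on the unit circle:

```python
import cmath
import math

def moving_average_response(f):
    """H(e^{j*2*pi*f}) of the centered seven-day moving average."""
    z = cmath.exp(2j * math.pi * f)
    return sum(z ** k for k in range(-3, 4)) / 7.0

# Nulls at periods of 7, 3.5, and 2.33 days (f = 1/7, 2/7, 3/7):
null_mags = [abs(moving_average_response(k / 7.0)) for k in (1, 2, 3)]

# The response is real because the filter is symmetric; wherever it is
# negative, the phase is flipped by 180 degrees, e.g. for a 5-day oscillation:
h_5day = moving_average_response(1 / 5.0).real
```

At $f=0$ the response is unity (trends pass through unchanged), at the three null frequencies it vanishes, and at a 5-day period it is negative, i.e. phase-inverted, consistent with Figs. \ref{fig:FiltComp} and \ref{fig:MaPhase}.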
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig4.pdf}}
\vspace*{-3mm}
\caption{\label{fig:FiltComp}{\it The frequency response for three filters. Vertical dotted lines mark periods of 7.0, 3.5, and 2.33 days from left to right.}}
\vspace*{3mm}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig5.pdf}}
\vspace*{-3mm}
\caption{\label{fig:MaPhase}{\it The phase response of a seven-day moving average. Vertical dotted lines mark periods of 7.0, 3.5, and 2.33 days from left to right.}}
\vspace*{3mm}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig6.pdf}}
\vspace*{-3mm}
\caption{\label{fig:MovingAverage}{\it New daily deaths in the U.S. before and after processing. The seven-day moving average leaves the plot somewhat jagged.}}
\vspace*{3mm}
\end{figure}
\subsection{Processing Methodology}
\subsubsection{Pre- and Post-Processing}
Padding the input dataset is beneficial to all three spectral processing methods. It allows the moving average to run to the end of the time-series. It facilitates the initialization of the IIR filters, and it can help prevent artifacts that frequency domain processing would otherwise leave at the ends of the modified time-series.
Before processing, 28 days of synthetic data were added to both ends of the original data. Each synthetic data point was calculated by linearly extrapolating from the nearest existing data points located at distances of $7m$ and $7(m+1)$ days where $m$ is an integer. Numbers extrapolating to less than $0$ were set equal to $0$. After processing, sections of the output data corresponding to extrapolated data were removed.
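One way to read this padding scheme is as weekday-aware linear extrapolation: each synthetic point and its two anchor points all fall on the same day of the week. The sketch below is an interpretation under that assumption (the function name `pad_right` is ours; the zero-clamping follows the text, and only right-side padding is shown — the left edge is handled symmetrically):

```python
def pad_right(x, days=28):
    """Append `days` synthetic points. Point i is linearly extrapolated from
    the two nearest ORIGINAL points on the same weekday, located 7*m and
    7*(m+1) days back; negative extrapolations are clamped to zero."""
    n = len(x)
    out = list(x)
    for i in range(n, n + days):
        m = 1
        while i - 7 * m >= n:            # anchors must lie in the real data
            m += 1
        p1, p2 = x[i - 7 * m], x[i - 7 * (m + 1)]
        out.append(max(0.0, p1 + m * (p1 - p2)))
    return out

# On perfectly linear data the weekday extrapolation is exact.
daily = [float(v) for v in range(60)]
padded = pad_right(daily)
```

Extrapolating along same-weekday points preserves the 7-day oscillation into the padded region, so the filters see a plausible continuation rather than an abrupt edge.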
\subsubsection{IIR Filters}
Elliptic filters are a type of IIR filter. Their steep frequency roll-off permits strong frequency isolation. Frequency-dependent phase changes are a common artifact of signal processing filters. However, these artifacts are canceled out if filtering is applied twice in opposite directions.
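The cancellation of phase by two opposite-direction passes holds for any IIR filter and can be illustrated without SciPy. The sketch below substitutes a simple first-order low-pass for an elliptic filter, purely for self-containment; zero phase shows up as a symmetric overall impulse response.

```python
def onepole_lowpass(x, alpha=0.3):
    """One forward pass of y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y, state = [], 0.0
    for v in x:
        state = alpha * v + (1 - alpha) * state
        y.append(state)
    return y

def zero_phase(x, alpha=0.3):
    """Filter backwards, then forwards: the two passes' phase shifts cancel."""
    backward = onepole_lowpass(x[::-1], alpha)[::-1]
    return onepole_lowpass(backward, alpha)

# An impulse far from the edges comes out symmetric about its position,
# i.e. the combined filter shifts nothing in time.
impulse = [0.0] * 201
impulse[100] = 1.0
h = zero_phase(impulse)
```

The combined impulse response is the convolution of the filter's response with its time reverse, which is symmetric, hence zero-phase; the same reasoning underlies the two-pass elliptic filtering used here.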
Four elliptic filters were created using Python's SciPy library. Comparable functionality is found in Matlab and Octave. The dataset was filtered twice: backwards first and then forwards. The filter design parameters are given in Table \ref{tab:params}. Frequency response plots for two passes of two of the listed filters are shown in Fig. \ref{fig:FiltComp}.
\begin{table*}[t]
\caption{\label{tab:params} Parameters For Elliptic Filters}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
filter name & pass-band frequency & stop-band frequency & pass-band ripple$^{\mathrm{a}}$ & stop-band attenuation$^{\mathrm{a}}$ \\
\hline
\textbf{low-pass \#1} & 1/9 & 1/8 & 0.01 dB & 40 dB \\
\textbf{low-pass \#2} & 1/21 & 1/19 & 0.01 dB & 40 dB \\
\textbf{high-pass \#1} & 1/7 & 1/8 & 0.01 dB & 40 dB \\
\textbf{band-pass \#1} &
\begin{tabular}{cc}1/8 & 1/6\end{tabular} &
\begin{tabular}{cc}1/9 & 1/5\end{tabular} &
0.01 dB & 40 dB \\
\textbf{band-pass \#2} &
\begin{tabular}{cc}1/19 & 1/9\end{tabular} &
\begin{tabular}{cc}1/21 & 1/8\end{tabular} &
0.01 dB & 40 dB \\
\hline
\multicolumn{5}{l}{$^{\mathrm{a}}$Two filter passes result in total values of $0.02$ dB and $80$ dB.}
\end{tabular}
\end{center}
\end{table*}
\subsubsection{Frequency Domain Processing}
The Fourier transform and its inverse convert a time-series to and from the frequency domain. Once in the frequency domain, its spectral content is readily modified. The general formula for such operations can be written as,
\begin{equation}
y[n] = \mathcal{F}^{-1}( H_{s}[k] \cdot \mathcal{F}( x[n] ) ),
\label{eq:FFT}
\end{equation}
\noindent where $\mathcal{F}$, $\mathcal{F}^{-1}$, and $H_{s}[k]$ are the Fourier transform, the inverse Fourier transform, and a computed spectrum, respectively.
Five spectra were computed for $H_{s}[k]$. A "brick wall" spectrum could be calculated by setting $H_{s}[k]$ equal to one or more unit step sequences mirrored about Nyquist. For low-pass, high-pass, and band-pass spectra, this can be implemented using the formula,
\begin{equation}
H_{s}[k] =
\begin{cases}
1, & \text{if } N(f_{low}) \leq k \leq N(f_{high}) \\
1, & \text{if } N(1-f_{high}) \leq k \leq N(1-f_{low}) \\
0, & \text{otherwise},
\end{cases}
\label{eq:brick_wall}
\end{equation}
\noindent where $f_{low}$ and $f_{high}$ are the lower and upper bounds of the pass-band. However, for reasons that will become clearer in Sec. \ref{results}, "brick wall" spectra were closely approximated by,
\begin{equation}
H_{s}[k] = \left| {H_{e}[k]}^2 \right|,
\label{eq:spectra}
\end{equation}
\noindent where $H_{e}[k]$ is the discretely sampled transfer function of the single-pass filters described in Table \ref{tab:params}.
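As a concrete sketch of the brick-wall case (illustrative and stdlib-only, using a naive DFT rather than an FFT): zero every bin outside the pass-band and its mirror about Nyquist, then transform back.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def brick_wall_bandpass(x, f_low, f_high):
    """Keep DFT bins with f_low <= k/N <= f_high (and the mirror band
    about Nyquist); zero everything else, then transform back."""
    N = len(x)
    Y = [Xk if (f_low <= k / N <= f_high or
                1 - f_high <= k / N <= 1 - f_low) else 0.0
         for k, Xk in enumerate(dft(x))]
    return [v.real for v in idft(Y)]

# Two tones with periods 7 and 14 samples; keep only the 6-to-8-day band.
N = 56
x = [math.cos(2 * math.pi * n / 7) + math.cos(2 * math.pi * n / 14)
     for n in range(N)]
weekly = brick_wall_bandpass(x, 1 / 8, 1 / 6)  # isolates the period-7 tone
```

Keeping the mirrored band ensures the masked spectrum stays conjugate-symmetric, so the inverse transform of a real input remains real up to rounding.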
\subsection{Results and Discussion}
\label{results}
The moving average was applied to the COVID-19 data. The result is plotted in Fig. \ref{fig:MovingAverage}. Using the parameters from Table \ref{tab:params}, the elliptic filters and the frequency domain method were also used to process the data.
\paragraph{\textbf{Low-pass \#1} (Fig. \ref{fig:LP1})}
\label{sec:LP1}
The line is visibly less jagged than the moving average. However, significant oscillations longer than $8$ days are still present. The elliptic filters and the Fourier method are in almost perfect agreement with one another.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig7.pdf}}
\vspace*{-3mm}
\caption{\label{fig:LP1}{\it New daily deaths in the U.S. before and after processing. Low-passing removes all oscillations shorter than 8 days. However, longer-term fluctuations are still visible.}}
\vspace*{3mm}
\end{figure}
\paragraph{\textbf{Low-pass \#2} (Fig. \ref{fig:LP2})}
The line is again less jagged. Also, there are fewer lower-frequency oscillations. As in Fig. \ref{fig:LP1}, the elliptic filters and the Fourier method are in almost perfect agreement.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig8.pdf}}
\vspace*{-3mm}
\caption{\label{fig:LP2}{\it New daily deaths in the U.S. before and after processing. Low-passing removes short-term oscillations.}}
\vspace*{3mm}
\end{figure}
\paragraph{\textbf{High-pass \#1} (Fig. \ref{fig:HP1})}
The long-term trends are effectively removed. As in the two low-pass methods, the elliptic filters and the Fourier method are once again in near perfect agreement.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig9.pdf}}
\vspace*{-3mm}
\caption{\label{fig:HP1}{\it New daily cases in the U.S. before and after processing. High-passing removes long term trends.}}
\vspace*{3mm}
\end{figure}
\paragraph{\textbf{Band-pass \#1} (Fig. \ref{fig:BP1})}
The 7-day oscillation is fairly well isolated. The resultant waveform is much more sinusoidal in nature than that produced by the high-pass filter. Like the other operations, the two methods agree very well.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig10.pdf}}
\vspace*{-3mm}
\caption{\label{fig:BP1}{\it New daily cases in the U.S. before and after processing. Band-passing isolates the oscillations between 6 and 8 days.}}
\vspace*{3mm}
\end{figure}
\paragraph{\textbf{Band-pass \#2} (Fig. \ref{fig:BP2})}
A bandwidth of oscillations with periods between 8 and 21 days is separated. Aperiodic oscillations remain despite having 80 dB of attenuation on all oscillations with periods shorter than 8 days. The remaining oscillations are the same as those that were present in Fig. \ref{fig:LP1} but not present in Fig. \ref{fig:LP2}. The two methods still produce plot lines that overlay one another, albeit with slightly less precision than in the other trials.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.70]{Fig11.pdf}}
\vspace*{-3mm}
\caption{\label{fig:BP2}{\it New daily deaths in the U.S. before and after processing. Band-passing isolates oscillations with periods between 8 and 21 days.}}
\vspace*{3mm}
\end{figure}
\noindent \newline One thing to note here is that the temporal resolution worsens as the bandwidth is narrowed. A principle of Fourier theory is that an arbitrary time-series can be represented completely by a summation of sinusoids. Thus, if some of the sinusoids are removed from the series, there could be oscillations in locations where the input series was near zero. Those oscillations are in fact present in the input; they are just not necessarily visible when the other frequency components are also present. A practical example of this is resonance in the earlier part of the data where there is a small number of cases, for example, the period prior to day 10 in Figs. \ref{fig:HP1} and \ref{fig:BP1}. On the other hand, the time period prior to day 10 is of the least interest.
\section{Conclusions}
Significant spectral differences were observed among different countries. These differences are often evident in the time-series data. For example, several countries had a large peak for 3.5-day oscillations in at least one of their spectra. Upon reviewing the respective time-series, a double-humped oscillation was observed. Provided that there is no biological explanation, this must result from some kind of tangible difference in how information is being collected, reported, and aggregated.
The oscillations in the time-series were recreated using sinusoidal resynthesis. This effectively recreated the minimums, the maximums, and the general shape of the time-series data including double-humps. This gives us some ability to predict the behavior of the time-series over the course of a week.
IIR filters and frequency domain processing can be used to selectively manipulate the spectral properties of COVID-19 time-series data. When tuned appropriately, both methods produce virtually identical results. The superior suppression of higher-frequency components smooths data far more effectively than the seven-day moving average. It would be difficult to argue that statistically sensitive calculations should be carried out using a statistically erroneous seven-day moving average. Additionally, the isolation of higher-frequency components could be useful for predicting and modeling short-term variations in observed data. Using the described methods, fluctuations longer than 8 days and shorter than 21 days were identified in U.S. mortality data. The cause and significance of these are currently not known.
\section{Introduction}
Photosynthesis, the green engine of life on Earth, produces molecular
oxygen by using the light-driven water-plastoquinone oxidoreductase
enzyme known as photosystem II.\textsuperscript{1--3} The photosystem
II-reaction center (PSII-RC) is one of the smallest photosynthetic
components which can undergo charge separation (CS) and thus is an ideal
model system to investigate the underlying mechanism of the initial
light-energy conversion process of photosynthesis.\textsuperscript{4--6}
The PSII-RC consists of six pigments as central cofactors---two special
pair chlorophylls (P\textsubscript{D1} and P\textsubscript{D2}), two
accessory chlorophylls (Chl\textsubscript{D1} and
Chl\textsubscript{D2}), and two pheophytins (Phe\textsubscript{D1} and
Phe\textsubscript{D2})---arranged in a quasi-symmetric geometry (Figure
1a).\textsuperscript{7,8} These six molecules are generally referred to
as RC pigments. In addition, there are two peripheral antenna Chls which
are denoted as Chlz\textsubscript{D1} and Chlz\textsubscript{D2}.
Despite the similarity of the pigment arrangement in the D1 and D2
branches, electron transfer only proceeds along the D1 pigments. The
specifics of how CS proceeds in the PSII-RC are, however, a matter of
vivid debate. In particular, there remains a long-standing discussion
concerned with whether the initial electron acceptor is
P\textsubscript{D1}\textsuperscript{9,10} or
Phe\textsubscript{D1},\textsuperscript{11--13} i.e. whether the initial
radical pair is
(P\textsubscript{D2}\textsuperscript{+}P\textsubscript{D1}\textsuperscript{-})
or
(Chl\textsubscript{D1}\textsuperscript{+}Phe\textsubscript{D1}\textsuperscript{-}).
The uncertainty here is a consequence of the many closely spaced
excitonic states arising from pigment-pigment interactions in the
PSII-RC such that no observable structure is present even in the
electronic linear absorption spectrum at cryogenic
temperatures.\textsuperscript{14--16}
To this end, the excited state dynamics of the PSII-RC has been the
focus of extensive spectroscopic interest spanning over three decades.
These works have included time-resolved
fluorescence,\textsuperscript{17,18} transient
absorption,\textsuperscript{9,10,13,19--21} optical
photon-echo,\textsuperscript{12} visible pump-mid infrared (IR)
probe,\textsuperscript{11} and two-dimensional electronic spectroscopy
(2DES)\textsuperscript{14,22--24} studies. While electronic
spectroscopies acutely suffer from a lack of spectral resolution in
regards to the PSII-RC, the implementation of mid-IR spectroscopy has
proven to be highly advantageous in addressing issues related to
spectral congestion.\textsuperscript{25--28} In particular, the keto and
ester CO stretching modes of Chl and Phe show unique signatures in the
mid-IR region depending on the local protein environment, electronic
structure, and ionic states.\textsuperscript{11,29--33} Additionally,
the amide I modes of the backbone protein can be used as sensitive
reporters for the electron transfer.\textsuperscript{11,31} These were
notably demonstrated by Groot et al. in a visible pump-mid IR probe
study of the PSII-RC where it was suggested that the initial electron
acceptor was Phe based on its distinguishing vibrational
structure.\textsuperscript{11} However, the spectral resolution along
the detection axis alone was not enough to disentangle the distinct
excitonic contributions and dynamics or definitively assign the initial
electron acceptor.
\begin{figure}
\centering
\includegraphics[width=16.51cm, height=8.814cm]{Figure1.png}
\caption{Structure and 2DEV spectrum of the PSII-RC. (a) Pigment arrangement of the PSII-RC depicted based on the crystal
structure (3WU2) reported by Umena et al.\textsuperscript{8} (b) 2DEV
spectrum of the PSII-RC at 170 fs. Positive contours (red/yellow)
indicate ground state bleach (GSB) features and negative contours (blue)
indicate photoinduced absorption (PIA) features. The vertical dotted
lines show the zero phonon exciton transition energies based on the
model by Novoderezhkin et al.\textsuperscript{35} Contour levels are
drawn in 5\% intervals. Colored squares on the top indicate the dominant
pigments participating in each excitonic state as labeled in (a).}
\label{fig:fig1}
\end{figure}
Many theoretical models have been developed in order to aid in
experimental interpretation and to elucidate the nature of the electronic
states at different absorption wavelengths. In particular, Stark
spectroscopy suggests that the absorption spectrum of PSII is not
characterized by purely excitonic states; rather, it is composed of mixed
exciton-charge transfer (CT) states possibly including contributions
from
(Chl\textsubscript{D1}\textsuperscript{$\delta$+}Phe\textsubscript{D1}\textsuperscript{$\delta$-})*
and
(P\textsubscript{D2}\textsuperscript{$\delta$+}P\textsubscript{D1}\textsuperscript{$\delta$-})*.\textsuperscript{34}
In an attempt to model this, one of the most sophisticated exciton
models of the PSII-RC takes into account eight pigments---the six RC and
two peripheral pigments---and one CS state.\textsuperscript{35} Even in
this model, there was uncertainty as to the character of the initial CS
state because both
P\textsubscript{D2}\textsuperscript{+}P\textsubscript{D1}\textsuperscript{-}
and
Chl\textsubscript{D1}\textsuperscript{+}Phe\textsubscript{D1}\textsuperscript{-}
gave reasonable fits to the data, with the former yielding slightly
better agreement with the experimental data considered. It is important
to note here, however, that the experimental data were entirely from
electronic spectroscopies.
While uncertainty surrounds the involvement and extent of exciton-CT
mixing in the PSII-RC, studies have suggested that the mixed CT states
are responsible for the far-red excitation of
PSII.\textsuperscript{36--38} Although the absorption of the PSII-RC and
the required redox potential of water oxidation were believed to be
located below 690 nm, it was demonstrated that PSII can be driven by
far-red light beyond 690 nm (exhibiting activities including oxygen
evolution).\textsuperscript{36,39} Additionally, recent EPR
experimental\textsuperscript{37} and QM/MM
theoretical\textsuperscript{38} studies suggest that the far-red light
excitation of PSII involves a lower lying CT state with a hole localized
on Chl\textsubscript{D1} rather than P\textsubscript{D2}. However, just
as spectral congestion obscures the assignment of the initial electron
acceptor, the character of these mixed CT states remains undetermined.
Compared to the previously mentioned techniques, the emerging method of
two-dimensional electronic-vibrational (2DEV) spectroscopy, which
correlates electronic excitation and mid-IR
detection,\textsuperscript{40--44} has the potential to overcome the
challenges associated with congested electronic spectra. In particular,
the simultaneous spectral resolution along both the visible excitation
and IR detection axes has been shown to enable the clear assignment of
transient species.\textsuperscript{41--44} In this study, we
investigated the excited state dynamics of the PSII-RC via 2DEV
spectroscopy. Both highly excitation frequency-dependent spectral
structure and dynamics were clearly resolved. This allowed for a broad
analysis of the excitonic composition of the PSII-RC and direct insight
into the involvement of mixed exciton-CT states found to be directly
prepared upon photoexcitation. Further, the spectra facilitated an
assignment of the initial electron acceptor and enabled the excitation
energy transfer (EET) and electron transfer pathways initiated by
peripheral antenna excitation or RC pigments excitation to be
disentangled.
\section{RESULTS AND DISCUSSION}
\label{sec:headings}
\paragraph{General insights from the 2DEV spectra and IR band assignments.}
Figure 1b shows the 2DEV spectrum of the PSII-RC 170 fs after
photoexcitation. Of note is the significant excitation frequency
(\emph{$\omega$}\textsubscript{exc.})-dependence of the vibrationally resolved
structure along the detection axis (\emph{$\omega$}\textsubscript{det.}) which,
as we will demonstrate, allows for an excitonic state-specific analysis
of the spectra with high frequency resolution (i.e. vibrationally
resolved excitonic structure). For example, photoinduced absorptions
(PIA) spanning \emph{$\omega$}\textsubscript{det.} = 1,710-1,760
cm\textsuperscript{-1} were seen to clearly favor the lower-lying
excitonic states. Other strong indications of this
\emph{$\omega$}\textsubscript{exc.}-dependent behavior were observed in the
ground state bleach (GSB) region spanning \emph{$\omega$}\textsubscript{det.} =
1,680-1,710 cm\textsuperscript{-1} and the PIAs at
\emph{$\omega$}\textsubscript{det.} = 1,620-1,670 cm\textsuperscript{-1}. These
three regions are of particular interest because, here, vibrational
modes belonging to both the neutral and ionic forms of Chl and Phe can
be clearly distinguished---thus serving as sensitive markers for the EET
and CT steps leading to CS as well as the character of the excitonic
states.
The vibrational structure of the PSII-RC is not only highly
\emph{$\omega$}\textsubscript{exc.}-dependent, but also shows a significant
time-dependence. Therefore, our assignments will be based on the
vibrational structure at specific \emph{$\omega$}\textsubscript{exc.}
corresponding to the energies of exciton 2 (14,690
cm\textsuperscript{-1}) and exciton 8 (14,940 cm\textsuperscript{-1}) in
the model by Novoderezhkin et al.,\textsuperscript{35} which covers the
relevant pigments along the D1 branch, and at either early or later
waiting times (Figure 2).
\begin{figure}
\centering
\includegraphics[width=9.474cm, height=10.109cm]{Figure2.png}
\caption{Exciton-specific vibrational
structure and IR assignments. Slices of 2DEV spectrum at
\emph{$\omega$}\textsubscript{exc.} = 14,690 cm\textsuperscript{-1} and
\emph{$\omega$}\textsubscript{exc.} = 14,940 cm\textsuperscript{-1},
corresponding to the energies of exciton 2 and 8 at early (pink, 180 fs)
and later (blue, 89 ps) waiting times. The difference absorption spectra
of P\textsuperscript{+}/P (dotted line) and Phe\textsuperscript{-}/Phe
(solid line) are shown above for comparison (where the signs have been
reversed to match the convention of the 2DEV data). Vertical dotted
(solid) lines indicate band assignments corresponding to
P\textsuperscript{+}/P (Phe\textsuperscript{-}/Phe) while dash-dotted
lines distinguish more ambiguous assignments. The black arrow in exciton
2 marks the Chl\textsubscript{D1}\textsuperscript{+} mode at 1,716
cm\textsuperscript{-1} and in exciton 8 marks the Chlz\textsubscript{D1}
ground state bleach. The P\textsuperscript{+}/P and
Phe\textsuperscript{-}/Phe spectra are reproduced from Ref.
\textsuperscript{30} and \textsuperscript{29} with permission.}
\label{fig:fig2}
\end{figure}
Generally, the GSB observed at \emph{$\omega$}\textsubscript{det} = 1,680-1,710
cm\textsuperscript{-1} is assigned to the keto CO stretching mode of
Chl/Phe.\textsuperscript{29,31,32} On the electronic ground state, the
frequency of this keto mode depends on the polarity of the environment
and the presence of hydrogen bonding from surrounding media (the larger
the polarity, or the stronger the hydrogen bond, the lower the frequency
of the keto mode). Thus, the GSB can be used to broadly distinguish
pigment contributions (further discussed in the next section). For
example, in Figure 2, it is apparent at early waiting times that the GSB
band of exciton 8 shows much more signal amplitude at 1,680-1,700
cm\textsuperscript{-1} compared to that of exciton 2. This is in
line with a light-induced FTIR difference spectroscopic study which
reported that Chlz shows a GSB at 1,684
cm\textsuperscript{-1},\textsuperscript{31} whereas P and Phe exhibit
higher and lower frequency GSBs at 1,704 cm\textsuperscript{-1} and
1,677 cm\textsuperscript{-1}, respectively.\textsuperscript{29,31,32}
On the electronically excited state, the keto modes of Chl and Phe
exhibit redshifted absorption.\textsuperscript{11,45} For example, in
THF, the keto stretching mode in the previously measured Chl*/Chl
difference spectrum was seen to shift from 1,695 cm\textsuperscript{-1}
to 1,660 cm\textsuperscript{-1}.\textsuperscript{11} Correspondingly,
the negative signal at \emph{$\omega$}\textsubscript{det} = 1,620-1,670
cm\textsuperscript{-1} in both exciton 2 and 8 is broadly assigned to
the excited state absorption (ESA) of the keto modes of Chl and Phe. At
later waiting times, however, there is a notable evolution in the
vibrational structure of this region (Figure 2). Focusing on exciton 2,
a clear dip at 1,657 cm\textsuperscript{-1} appeared concomitantly with
a new peak emerging at 1,666 cm\textsuperscript{-1}. While both the
P\textsuperscript{+}/P and Phe\textsuperscript{-}/Phe difference spectra
exhibit features in this region at frequencies of 1,653-1,655
cm\textsuperscript{-1} and 1,659
cm\textsuperscript{-1},\textsuperscript{29,31,32} respectively, the
signal for Phe\textsuperscript{-}/Phe agrees more closely with the
observed feature at 1,657 cm\textsuperscript{-1}. Resonance Raman
spectroscopy of the PSII-RC shows no signal at 1,640-1,660
cm\textsuperscript{-1}; thus, Groot et al. and Noguchi et al. suggested
that the band at 1,657 cm\textsuperscript{-1} is assigned to the amide CO
mode reflecting the CS at the RC, rather than the keto stretching mode of
Chl or Phe.\textsuperscript{11,31} The band at 1,666
cm\textsuperscript{-1} is similar to both the Phe\textsuperscript{-}/Phe
and P\textsuperscript{+}/P signals at 1,662 cm\textsuperscript{-1}
and 1,663 cm\textsuperscript{-1},\textsuperscript{29,31,32}
respectively, and has been suggested as a counterpart of the
previously mentioned band.\textsuperscript{31} A more definitive
assignment is reserved for later discussion.
This leaves the remaining PIA region spanning 1,710-1,760
cm\textsuperscript{-1}. While the ester modes of Chl* and Phe* fall in this
region,\textsuperscript{11} they are known to be very weak and would
unlikely account for the full intensity of the observed features.
Further, assuming that this region is only composed of Chl* and Phe*
ester modes would not account for the significant
\emph{$\omega$}\textsubscript{exc.}-dependence clearly present in Figure 1b. If
this were the case, then this region should have a near uniform intensity
across excitons 3 through 7 which have similar pigment contributions and
exciton transition dipole strengths,\textsuperscript{35} but this is
clearly not so (Figure 1b). As a result, contributions from Chl* and
Phe* ester modes are likely small, which should leave this a relatively
clear spectral window; yet, strong features are apparent in the 2DEV
spectra. The Phe\textsuperscript{-}/Phe difference spectrum measured in
PSII, however, shows characteristic signatures in this region, still
related to the ester mode of the chromophore itself or a surrounding
amino acid residue, with strong absorptions at 1,722 cm\textsuperscript{-1},
1,730 cm\textsuperscript{-1}, and 1,739 cm\textsuperscript{-1} (Figure
2).\textsuperscript{29,32} The corresponding peaks in the 2DEV spectrum
(at 1,722 cm\textsuperscript{-1}, 1,730 cm\textsuperscript{-1}, and
1,740 cm\textsuperscript{-1}), apparent at early waiting times for
exciton 2 and emerging later for exciton 8, are therefore assigned to
Phe\textsuperscript{-}. It should be noted that exciton 8 does show a
slight negative signal around 1,730 cm\textsuperscript{-1} immediately
after photoexcitation, despite being nearly fully characterized by
Chlz\textsubscript{D1}. We attribute this signal either to slight
contributions from the ester ESA or to some degree of overlap between
excitonic bands, as these slices only represent the zero phonon
transitions while the actual absorption has finite bandwidth. The ester
mode of the Chl \emph{a} cation (in THF), on the other hand, is known to
blueshift from 1,738 cm\textsuperscript{-1} (neutral) to 1,750
cm\textsuperscript{-1}.\textsuperscript{29} Yet, the
P\textsuperscript{+}/P difference spectrum (Figure 2) does not exhibit
any corresponding characteristic absorptions in this region (the ester
mode of P\textsuperscript{+} appears at 1,743
cm\textsuperscript{-1}).\textsuperscript{30} Thus, the bands in this
region, at 1,750 cm\textsuperscript{-1} and 1,764 cm\textsuperscript{-1},
are related to the intermediate Chl cation
(Chl\textsubscript{D1}\textsuperscript{+}), which are also clearly
present in the structure of exciton 2 at early waiting times.
Further characteristic of the Chl \emph{a} cation is a keto stretch
significantly blueshifted (on the order of 25 cm\textsuperscript{-1}) to
1,718 cm\textsuperscript{-1} versus neutral Chl \emph{a} in
THF.\textsuperscript{33} At early waiting times in exciton 2, for
example, a peak is observed at 1,716 cm\textsuperscript{-1} which we
assign to Chl\textsubscript{D1}\textsuperscript{+}. However, at later
waiting times, this peak noticeably redshifts to 1,713
cm\textsuperscript{-1}, towards agreement with the characteristic
P\textsuperscript{+} absorption at 1,711 cm\textsuperscript{-1}. This
dynamical behavior will be the focus of later discussion.
To summarize, the significant markers tracking CS in this study are as
follows: Phe\textsuperscript{-} (1,722 cm\textsuperscript{-1}, 1,730
cm\textsuperscript{-1}, and 1,740 cm\textsuperscript{-1}),
Chl\textsubscript{D1}\textsuperscript{+} (at early waiting times: 1,716
cm\textsuperscript{-1}, 1,750 cm\textsuperscript{-1},
and 1,764 cm\textsuperscript{-1}), and P\textsuperscript{+} (at later
waiting times: 1,713 cm\textsuperscript{-1}). The GSB of the amide CO
band at 1,657 cm\textsuperscript{-1} and its up-shifted counterpart at
1,666 cm\textsuperscript{-1} reflect the CS at the RC, where the former
likely has predominant contributions from
Phe\textsuperscript{-}, while the latter could
potentially be a mixture of Phe\textsuperscript{-} and
P\textsuperscript{+}.
\paragraph{Excitonic composition and charge transfer character.} Following
the vibrational assignments, we focus on a comparison of the vibrational
structure at specific excitonic energies based on the model by
Novoderezhkin et al.,\textsuperscript{35} in order to understand the
character of the excitonic states and degree of CT mixing. Figure 3a
shows the vibrational structure corresponding to exciton 1, 2, 5, and 8
at an early waiting time. We note again that the exciton energies
discussed thus far are zero phonon lines (shown in Figure 1b). However,
it has been reported that the actual absorption of the CT state shows a
significant blue shift (\textasciitilde5 nm) as a result of coupling to
low-frequency phonons in the environment, compared to other excitonic
bands (1\textasciitilde2 nm).\textsuperscript{35} Thus, to investigate
the CT state specifically, the 2DEV signal corresponding to exciton 1 as
shown in Figure 3a was integrated in the range
\emph{$\omega$}\textsubscript{exc} = 14,500-14,650 cm\textsuperscript{-1}.
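As a simple illustration of how such an excitation-window slice can be obtained, the sketch below integrates a 2DEV array over a chosen excitation range; the data, axis grids, and window limits are synthetic placeholders, not the measured spectra.

```python
import numpy as np

# Synthetic stand-in for a 2DEV spectrum: rows index the excitation
# axis, columns index the detection axis (both in cm^-1).
w_exc = np.linspace(14400, 15100, 141)
w_det = np.linspace(1600, 1780, 181)
spectrum = np.random.default_rng(0).normal(size=(w_exc.size, w_det.size))

# Integrate the signal over the excitation window assigned to the
# (blue-shifted) CT band, here 14,500-14,650 cm^-1 as in the text.
mask = (w_exc >= 14500) & (w_exc <= 14650)
dw = w_exc[1] - w_exc[0]                        # uniform grid spacing
exciton1_slice = spectrum[mask, :].sum(axis=0) * dw
```

The resulting one-dimensional trace along the detection axis plays the role of the exciton 1 slice shown in Figure 3a.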
\begin{figure}
\centering
\includegraphics[width=16.51cm, height=8.805cm]{Figure3.png}
\caption{Assignment of excitonic composition and
charge transfer character. (a) Slice along $\omega$\textsubscript{det.} of the
2DEV spectrum corresponding to exciton 1 (red, integrated at
$\omega$\textsubscript{exc.} = 14,500-14,650 cm\textsuperscript{-1}), exciton 2
(yellow, $\omega$\textsubscript{exc.} = 14,690 cm\textsuperscript{-1}), exciton
5 (green, $\omega$\textsubscript{exc.} = 14,850 cm\textsuperscript{-1}), and
exciton 8 (blue, $\omega$\textsubscript{exc.} = 14,940 cm\textsuperscript{-1})
at a waiting time of 60 fs. The vertical solid, dotted, and dash-dotted
lines, as well as the black arrow follow the same convention as in
Figure 2. (b) Character of initial charge transfer state, exciton 1,
along with the site contributions of excitons 2, 5, and 8 where the area
of the shaded circles is proportional to the population of the
corresponding sites based on the model of Novoderezhkin et
al.\textsuperscript{35} For clarity, the slight, additional
contributions from D1 pigments, nearly identical to the relative
contributions of exciton 2, were omitted from exciton 1. Likewise, the
charge transfer character present in excitons 2 and 5 was precluded for
simplicity.}
\label{fig:fig3}
\end{figure}
At early waiting times, the exciton 1 signal, formed directly upon
photoexcitation, shows clear structure corresponding to
Phe\textsuperscript{-} (1,722 cm\textsuperscript{-1}, 1,730
cm\textsuperscript{-1}, and 1,740 cm\textsuperscript{-1}) and
Chl\textsubscript{D1}\textsuperscript{+} (1,716 cm\textsuperscript{-1},
1,750 cm\textsuperscript{-1}, and 1,764 cm\textsuperscript{-1}). In
addition, the amide CO bands reflecting CS at 1,657
cm\textsuperscript{-1} and 1,666 cm\textsuperscript{-1} show clear
structure compared to the other excitonic states, highlighting the
significant CT character of the exciton 1 state. The characteristic
P\textsuperscript{+} signal (1,713 cm\textsuperscript{-1}) only appears
at later waiting times and is accompanied by evolution at both of the
aforementioned band positions as well as a decay in the 1,750
cm\textsuperscript{-1} region assigned to
Chl\textsubscript{D1}\textsuperscript{+} (Figure S1)---collectively
indicating a conspicuous lack of initial contributions from
P\textsuperscript{+}.
The lack of P\textsuperscript{+} is in contrast to several previous
spectroscopic studies that suggested there are two CS pathways in the
PSII-RC.\textsuperscript{21,22,24,34} However, these experiments
utilized spectroscopic methods solely in the visible region which are
significantly disadvantaged when it comes to untangling the highly
overlapping signals of the relevant states. In this case, the
vibrational characterization of exciton 1 afforded by the application of
2DEV spectroscopy provides direct evidence that the initial CT state in
the PSII-RC is characterized by
Chl\textsubscript{D1}\textsuperscript{+}Phe\textsuperscript{-} rather
than P\textsubscript{D2}\textsuperscript{+}
P\textsubscript{D1}\textsuperscript{-} (Figure 3b). Such a result is
consistent with a recent QM/MM calculation, utilizing range-separated
TD-DFT theory and the coupled cluster theory with single and double
excitations (CCSD), which proposed that the lowest CT state was
Chl\textsubscript{D1}\textsuperscript{+}Phe\textsuperscript{-}.\textsuperscript{38}
A previous transient IR study also suggested that the initial electron
acceptor is Phe,\textsuperscript{11} however, this study relied on an
extrinsic deconvolution of the vibrational spectrum as opposed to the
intrinsic ability of 2DEV spectroscopy to separate excitonic and CT
contributions along the \emph{$\omega$}\textsubscript{exc.} dimension. This
advantage of 2DEV spectroscopy is particularly useful in the
characterization of the CT state which is only weakly optically allowed
and can therefore be easily obscured in other spectroscopic methods.
Considering the other states, an analysis of the GSB features of excitons
2 and 8 characterizes these excitons as predominantly composed of RC
pigments in the active (D1) branch and of the peripheral
Chlz\textsubscript{D1}, respectively, which is consistent with the model
put forth by Novoderezhkin et al. (Figure 3b).\textsuperscript{35} These
assignments also substantiate that Chl and Phe at different binding
positions in the PSII-RC are indeed excited by different excitation
frequencies---offering support for the importance of the protein
environment in tuning the site energies of the embedded
pigments.\textsuperscript{38}
Exciton 2 also notably displays characteristic
Chl\textsubscript{D1}\textsuperscript{+} and Phe\textsuperscript{-}
signals at early waiting times (Figure 3a). In comparison to exciton 5,
which is mainly composed of RC pigments in addition to
Chlz\textsubscript{D2} (Figure 3b), these CT signatures in exciton 2 are
markedly more pronounced. Here, we have chosen exciton 5 as a
representative for the energetically intermediate excitonic states,
where there is congestion even in the 2DEV spectra. However, the
vibrational structure is still telling in that the additional
Chlz\textsubscript{D2} contributions of exciton 5 should be similar to
those of Chlz\textsubscript{D1}, which is indeed reflected in the fact
that exciton 5 resembles a mixture of exciton 2 (mainly RC pigments) and
exciton 8 (mainly composed of a peripheral pigment). This comparison
highlights the enhanced CT character in exciton 2 versus exciton 5 at
early waiting times which confirms the suggestion put forth in the model
by Novoderezhkin et al.\textsuperscript{35} that exciton 2 is
responsible for initiating primary charge separation. Further, in the
model, exciton 1 was taken to be characterized by a CT state which
borrowed intensity from the neighboring state, exciton 2. This is in
agreement with the close resemblance between the GSB and ESA structure
(particularly below 1,650 cm\textsuperscript{-1}, which is outside of the
dominant window for the CS markers) of exciton 1 compared to
that of exciton 2 (Figure 3a) and signifies similar overall pigment
contributions. This point is made even clearer on comparison of exciton
1 versus exciton 5 or 8 where there is little similarity in these
regions. Correspondingly, this indicates that exciton 2 is characterized
by a mixed exciton-CT state, rather than a purely excitonic state that
rapidly evolves to the CT state. The mixed character between exciton 1
and 2 also offers a mechanism through which rapid charge separation can
be initiated in the RC.
\paragraph{Charge separation dynamics.} To elucidate the dynamics, a global
analysis of the data with sequential modeling was performed. We note
that while the time constants represent a convolution of various
processes, this method is able to holistically capture the spectral
evolution along both frequency dimensions. Therefore, the analysis
captures the \emph{$\omega$}\textsubscript{exc.}-dependent spectra and
dynamics, the latter of which can be largely disentangled via vibrational
signatures as we will show. The two-dimensional-evolution associated
difference spectra (2D-EADS) analysis (Figure S1), which can be thought
of as the two-dimensional analogue of EADS,\textsuperscript{46} required
five components for a reasonable fit (170 fs, 660 fs, 8.2 ps, 17 ps, and
a non-decaying offset component beyond 100 ps, the duration of the
experiment).
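A minimal sketch of the sequential kinetic model underlying such a global analysis is given below, using synthetic data together with the time constants quoted in the text; the array sizes, grid, and helper names are illustrative assumptions, not the actual analysis code.

```python
import numpy as np
from scipy.linalg import expm, lstsq

def sequential_populations(times, taus):
    """Populations of a sequential chain A -> B -> ... -> offset.

    Each compartment decays into the next with rate 1/tau; the final
    compartment is non-decaying, modeling the offset component.
    """
    k = 1.0 / np.asarray(taus)
    n = k.size + 1
    K = np.zeros((n, n))
    K[np.arange(k.size), np.arange(k.size)] = -k   # decay out of each state
    K[np.arange(1, n), np.arange(k.size)] = k      # feed into the next state
    c0 = np.zeros(n)
    c0[0] = 1.0                                    # all population starts in A
    return np.array([expm(K * t) @ c0 for t in times])

times = np.linspace(0.0, 100.0, 60)                # waiting times (ps)
taus = [0.17, 0.66, 8.2, 17.0]                     # time constants from the fit
C = sequential_populations(times, taus)            # (60, 5) population matrix

# Synthetic flattened 2DEV data (waiting time x spectral pixels); with the
# time constants fixed, each 2D-EADS is a linear least-squares coefficient map.
data = np.random.default_rng(1).normal(size=(60, 2500))
eads, *_ = lstsq(C, data)                          # five (flattened) 2D-EADS
```

In a full analysis the time constants themselves would also be optimized (e.g. by nesting this linear solve inside a nonlinear fit), but the decomposition of the data into population-weighted difference spectra follows this pattern.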
Figure 4 contains exciton-specific slices through the actual 2DEV
spectra along \emph{$\omega$}\textsubscript{det.} at the earliest resolvable
waiting time and at subsequent waiting times corresponding to each of
the above mentioned time constants. Throughout, we focus our attention
on excitons 2, 5, and 8 as these states have substantially more
oscillator strength than exciton 1 and therefore will have a larger
influence on the obtained time constants. The evolution associated with
these time constants can be interpreted such that each spectrum (or
slice) evolves into the next one with the associated time constant. For
example, in exciton 2 (Figure 4b), spectral evolution on the 170 fs
timescale can be understood through a comparison of the pink and yellow
slices. Noticeably, there is growth at 1,657 cm\textsuperscript{-1}, a
characteristic marker for CS. However, in exciton 5 and 8 (Figure 4c and
d, respectively) there is no such growth indicative of CS, rather there
are only slight changes in the keto GSB and ESA regions. On the 660 fs
timescale (comparison of the yellow and green slices in Figure 4b),
exciton 2 exhibits further growth at 1,657 cm\textsuperscript{-1} and
1,666 cm\textsuperscript{-1} while a slight shoulder begins to emerge in
this region for exciton 5. This evolution is also accompanied by marked
changes in the keto ESA structure. We assign both the 170 fs and 660 fs
timescales to progressive completion of CS, i.e.
(Chl\textsubscript{D1}\textsuperscript{$\delta$+}Phe\textsuperscript{$\delta$-})*
\(\longrightarrow\)
Chl\textsubscript{D1}\textsuperscript{+}Phe\textsuperscript{-} (more
pronounced for exciton 2), convoluted with EET within the excitonic
manifold (more pronounced for exciton 5) and an environmental response.
These timescales also agree with previous works which suggested that
there is a fast component to the EET dynamics (100-200 fs time
scale)\textsuperscript{12} and that initial CS occurs within 600-800
fs,\textsuperscript{11} among others which have reported
multiexponential CS dynamics.\textsuperscript{21,24} The distinction
here is that the vibrational structure allows for a targeted assessment
of the dynamical components for each of the states.
\begin{figure}
\centering
\includegraphics[width=15.977cm, height=4.369cm]{Figure4.png}
\caption{Dynamics of the PSII-RC. The time-dependent
evolution of 2DEV spectra corresponding to excitons 1, 2, 5, and 8.
Inset shows the range of $\omega$\textsubscript{det} = 1,705-1,725
cm\textsuperscript{-1}, highlighting the red-shifting behavior of the
Chl\textsuperscript{+} band.}
\label{fig:fig4}
\end{figure}
On an 8.2 ps timescale, both the 1,657 cm\textsuperscript{-1} and 1,666
cm\textsuperscript{-1} CS markers exhibit further evolution along with a
distinct, progressive redshift in the band at 1,716
cm\textsuperscript{-1} to 1,713 cm\textsuperscript{-1} for excitons 1,
2, and 5. This component is similar to the previously reported timescale
for Chl\textsubscript{D1}\textsuperscript{+}Phe\textsuperscript{-}
\(\longrightarrow\) P\textsuperscript{+}Phe\textsuperscript{-} of 6
ps.\textsuperscript{11} Additionally, in a previous light-induced FTIR
difference spectroscopic study, it was proposed that the blue shift of
the keto stretch of Chl cation is smaller for the charge delocalized
dimeric Chl (\textasciitilde10 cm\textsuperscript{-1} in the case of
P680\textsuperscript{+}) compared to that of monomeric Chl
(\textasciitilde30 cm\textsuperscript{-1}).\textsuperscript{47} Both
experimental\textsuperscript{47,48} and
theoretical\textsuperscript{49,50} efforts further support that the P680
cation is partially delocalized over the P\textsubscript{D1} and
P\textsubscript{D2} pigments. Thus, we assign the slight red shift as
the hole migration towards a more delocalized cationic state, i.e.
Chl\textsubscript{D1}\textsuperscript{+}Phe\textsuperscript{-}
\(\longrightarrow\)
(P\textsubscript{D1}P\textsubscript{D2})\textsuperscript{+}Phe\textsuperscript{-}
(likely in addition to further environmental response to CS).
Considering that the mode at 1,713 cm\textsuperscript{-1}, the
characteristic marker for P\textsuperscript{+}, only appears on an 8.2
ps timescale, it is very unlikely that P\textsuperscript{+} contributes
appreciably to the features at 1,657 cm\textsuperscript{-1} and 1,666
cm\textsuperscript{-1} at earlier waiting times. The evolution observed
around 1,657 cm\textsuperscript{-1} and 1,666 cm\textsuperscript{-1} at
later waiting times can therefore be understood as arising from both
Phe\textsuperscript{-} and P\textsuperscript{+}.
The final 17 ps component can be understood as predominantly reflecting
CS limited by EET from peripheral Chlz to RC pigments as only
significant evolution at the CS markers is observed on this timescale
for exciton 8 (Figure 4d). This timescale is also captured by the zero
node line slope (ZNLS) present at \emph{$\omega$}\textsubscript{det} = 1,710
cm\textsuperscript{-1} (Figure 5a, dotted line) in the spectra which
decays with a time constant of 21 ± 4 ps (Figure 5b) and broadly
indicates equilibration within the excitonic manifold. We note that
while the ZNLS trends toward zero, a non-decaying component beyond the
duration of the experiment (\textgreater100 ps) suggests the presence of
inhomogeneous CS due to different conformational distributions
of the protein in the ground state.\textsuperscript{21} This timescale
also falls within the previously established range (14 ps to 37 ps
determined at temperatures of 77 K and 277 K, respectively) for EET from
peripheral Chlz to RC pigments.\textsuperscript{13,19}
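The ZNLS decay described above amounts to a single-exponential-plus-offset regression of the slope values against waiting time; a sketch with synthetic ZNLS values (the actual extracted values are not reproduced here) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def znls_decay(t, a, tau, offset):
    # Single exponential with a non-decaying offset, as used for the ZNLS fit
    return a * np.exp(-t / tau) + offset

# Synthetic ZNLS values at each waiting time (ps), decaying with ~21 ps
t = np.linspace(0.1, 100.0, 40)
znls = znls_decay(t, a=0.8, tau=21.0, offset=0.05)
znls += np.random.default_rng(2).normal(scale=0.01, size=t.size)

popt, pcov = curve_fit(znls_decay, t, znls, p0=(1.0, 10.0, 0.0))
tau_fit = popt[1]                          # recovered time constant (ps)
tau_err = np.sqrt(np.diag(pcov))[1]        # 1-sigma uncertainty on tau
```

The offset term captures the non-decaying component attributed in the text to inhomogeneous CS.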
\begin{figure}
\centering
\includegraphics[width=16.459cm, height=8.128cm]{Figure5.png}
\caption{2DEV spectral evolution and ZNLS dynamics of the
PSII-RC. (a) 2DEV spectra of the PSII-RC at different waiting times.
Zero node line slope (ZNLS), obtained by a linear fit of the zero signal
intensity distribution along the excitation axis, is depicted in the
spectra as a dotted line. Contour levels are drawn in 5\% intervals. (b)
ZNLS dynamics of the PSII-RC. Red dots indicate the ZNLS value at each
waiting time and the black curve shows the fit result of a single
exponential function (and an offset) with a time constant of 21 ± 4 ps.}
\label{fig:fig5}
\end{figure}
\paragraph{Concluding comments.} Our results demonstrate that a CT state
characterized by
Chl\textsubscript{D1}\textsuperscript{$\delta$'+}Phe\textsubscript{D1}\textsuperscript{$\delta$'-} ($\delta$'
\textgreater{} $\delta$) can be prepared directly upon photoexcitation,
and indicate that CS is facilitated by exciton-CT mixing with a
contribution from
(Chl\textsubscript{D1}\textsuperscript{$\delta$+}Phe\textsubscript{D1}\textsuperscript{$\delta$-})*
throughout the excitonic manifold. The data further establish that the
initial electron acceptor in the PSII-RC is Phe with no appreciable
competition from P\textsubscript{D1}---independent of excitation
wavelength. These results are entirely in agreement with the recent
theoretical work of Sirohiwal et al. where the
Chl\textsubscript{D1}\textsuperscript{+}Phe\textsuperscript{-} CT state
was found to be the lowest energy excitation globally within the
PSII-RC.\textsuperscript{38} Further, no similarly low energy CT states
involving P\textsubscript{D1}P\textsubscript{D2} were
found,\textsuperscript{38} thus theoretically excluding the special pair
as a candidate for initial CS, as our experimental data support. This is
notably distinct from the bacterial RC where CS is largely initiated at
the special pair (P) with the A branch bacteriochlorophyll (BChl) acting
as the primary acceptor. The distinct excitation asymmetry in the
PSII-RC has been rationalized as a direct consequence of the
electrostatic effect of the protein environment which likely arose as an
evolutionary accommodation of water splitting in oxygenic photosynthetic
systems (particularly its operation in the
far-red).\textsuperscript{36--38} However, this remains an open question
as the initial CS step itself in the PSII-RC has long evaded clear
characterization.
\textbf{ACKNOWLEDGMENTS}
We thank Rafael Picorel for advice regarding isolation of the PSII-RC.
This research was supported by the U.S. Department of Energy, Office of
Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and
Biosciences Division. Y.Y. appreciates the support of the Japan Society
for the Promotion of Science (JSPS) Postdoctoral Fellowship for Research
Abroad. E.A.A. acknowledges the support of the National Science
Foundation Graduate Research Fellowship (Grant No. DGE 1752814).
\textbf{REFERENCES}
1. Wydrzynski, T. J., Satoh, K. \& Freeman, J. A. \emph{Photosystem II
The Light-Driven Water:Plastoquinone Oxidoreductase}. vol. 22 (Springer
Netherlands, 2005).
2. Blankenship, R. E. \emph{Molecular Mechanisms of Photosynthesis, 2nd
Edition}. (Wiley, 2014).
3. Shen, J.-R. The Structure of Photosystem II and the Mechanism of
Water Oxidation in Photosynthesis. \emph{Annu. Rev. Plant Biol.}
\textbf{66}, 23--48 (2015).
4. Renger, G. \& Renger, T. Photosystem II: The machinery of
photosynthetic water splitting. \emph{Photosynth. Res.} \textbf{98},
53--80 (2008).
5. Croce, R. \& Van Amerongen, H. Light-harvesting and structural
organization of Photosystem II: From individual complexes to thylakoid
membrane. \emph{J. Photochem. Photobiol. B Biol.} \textbf{104}, 142--153
(2011).
6. Romero, E., Novoderezhkin, V. I. \& Van Grondelle, R. Quantum design
of photosynthesis for bio-inspired solar-energy conversion.
\emph{Nature} \textbf{543}, 355--365 (2017).
7. Loll, B., Kern, J., Saenger, W., Zouni, A. \& Biesiadka, J. Towards
complete cofactor arrangement in the 3.0 Å resolution structure of
photosystem II. \emph{Nature} \textbf{438}, 1040--1044 (2005).
8. Umena, Y., Kawakami, K., Shen, J.-R. R. \& Kamiya, N. Crystal
structure of oxygen-evolving photosystem II at a resolution of 1.9Å.
\emph{Nature} \textbf{473}, 55--60 (2011).
9. Shelaev, I. V. \emph{et al.} Primary light-energy conversion in
tetrameric chlorophyll structure of photosystem II and bacterial
reaction centers: II. Femto- and picosecond charge separation in PSII
D1/D2/Cyt b559 complex. \emph{Photosynth. Res.} \textbf{98}, 95--103
(2008).
10. Nadtochenko, V. A., Semenov, A. Y. \& Shuvalov, V. A. Formation and
decay of P680 (PD1-PD2) +PheoD1- radical ion pair in photosystem II core
complexes. \emph{Biochim. Biophys. Acta - Bioenerg.} \textbf{1837},
1384--1388 (2014).
11. Groot, M. L. \emph{et al.} Initial electron donor and acceptor in
isolated Photosystem II reaction centers identified with femtosecond
mid-IR spectroscopy. \emph{Proc. Natl. Acad. Sci.} \textbf{102},
13087--13092 (2005).
12. Prokhorenko, V. I. \& Holzwarth, A. R. Primary processes and
structure of the photosystem II reaction center: A photon echo study.
\emph{J. Phys. Chem. B} \textbf{104}, 11563--11578 (2000).
13. Holzwarth, A. R. \emph{et al.} Kinetics and mechanism of electron
transfer in intact photosystem II and in the isolated reaction center:
Pheophytin is the primary electron acceptor. \emph{Proc. Natl. Acad.
Sci.} \textbf{103}, 6895--6900 (2006).
14. Myers, J. A. \emph{et al.} Two-Dimensional Electronic Spectroscopy
of the D1-D2-cyt b559 Photosystem II Reaction Center Complex. \emph{J.
Phys. Chem. Lett.} \textbf{1}, 2774--2780 (2010).
15. Durrant, J. R. \emph{et al.} A multimer model for P680, the primary
electron donor of photosystem II. \emph{Proc. Natl. Acad. Sci. U. S. A.}
\textbf{92}, 4798--4802 (1995).
16. Raszewski, G., Diner, B. A., Schlodder, E. \& Renger, T.
Spectroscopic properties of reaction center pigments in photosystem II
core complexes: Revision of the multimer model. \emph{Biophys. J.}
\textbf{95}, 105--119 (2008).
17. Crystall, B. \emph{et al.} Observation of Multiple Radical Pair
States in Photosystem 2 Reaction Centers. \emph{Biochemistry}
\textbf{30}, 7573--7586 (1991).
18. Konermann, L., Gatzen, G. \& Holzwarth, A. R. Primary processes and
structure of the photosystem II reaction center. 5. Modeling of the
fluorescence kinetics of the D1-D2-cyt-b559 complex at 77 K. \emph{J.
Phys. Chem. B} \textbf{101}, 2933--2944 (1997).
19. Visser, H. M. \emph{et al.} Subpicosecond transient absorption
difference spectroscopy on the reaction center of photosystem II:
Radical pair formation at 77 K. \emph{J. Phys. Chem.} \textbf{99},
15304--15309 (1995).
20. Groot, M. L. \emph{et al.} Charge separation in the reaction center
of photosystem II studied as a function of temperature. \emph{Proc.
Natl. Acad. Sci. U. S. A.} \textbf{94}, 4389--4394 (1997).
21. Romero, E., Van Stokkum, I. H. M., Novoderezhkin, V. I., Dekker, J.
P. \& Van Grondelle, R. Two different charge separation pathways in
photosystem II. \emph{Biochemistry} \textbf{49}, 4300--4307 (2010).
22. Romero, E. \emph{et al.} Quantum coherence in photosynthesis for
efficient solar-energy conversion. \emph{Nat. Phys.} \textbf{10},
676--682 (2014).
23. Fuller, F. D. \emph{et al.} Vibronic coherence in oxygenic
photosynthesis. \emph{Nat. Chem.} \textbf{6}, 706--711 (2014).
24. Duan, H.-G. \emph{et al.} Primary Charge Separation in the
Photosystem II Reaction Center Revealed by a Global Analysis of the
Two-dimensional Electronic Spectra. \emph{Sci. Rep.} \textbf{7}, 12347
(2017).
25. Groot, M. L., Van Wilderen, L. J. G. W. \& Di Donato, M.
Time-resolved methods in biophysics. 5. Femtosecond time-resolved and
dispersed infrared spectroscopy on proteins. \emph{Photochem. Photobiol.
Sci.} \textbf{6}, 501--507 (2007).
26. Di Donato, M. \& Groot, M. L. Ultrafast infrared spectroscopy in
photosynthesis. \emph{Biochim. Biophys. Acta - Bioenerg.} \textbf{1847},
2--11 (2015).
27. Breton, J. Fourier transform infrared spectroscopy of primary
electron donors in type I photosynthetic reaction centers.
\emph{Biochim. Biophys. Acta - Bioenerg.} \textbf{1507}, 180--193
(2001).
28. Noguchi, T. \& Berthomieu, C. Molecular Analysis by Vibrational
Spectroscopy. in \emph{Photosystem II: The Light-Driven
Water:Plastoquinone Oxidoreductase} (eds. Wydrzynski, T. J., Satoh, K.
\& Freeman, J. A.) 367--387 (Springer Netherlands, 2005).
doi:10.1007/1-4020-4254-X\_17.
29. Nabedryk, E. \emph{et al.} Characterization of bonding interactions
of the intermediary electron acceptor in the reaction center of
Photosystem II by FTIR spectroscopy. \emph{Biochim. Biophys. Acta -
Bioenerg.} \textbf{1016}, 49--54 (1990).
30. Breton, J., Hienerwadel, R. \& Nabedryk, E. FTIR Difference Spectrum
of the Photooxidation of the Primary Electron Donor of Photosystem II.
in \emph{Spectroscopy of Biological Molecules: Modern Trends} 101--102
(Springer Netherlands, 1997). doi:10.1007/978-94-011-5622-6\_44.
31. Noguchi, T., Tomo, T. \& Inoue, Y. Fourier transform infrared study
of the cation radical of P680 in the photosystem II reaction center:
Evidence for charge delocalization on the chlorophyll dimer.
\emph{Biochemistry} \textbf{37}, 13614--13625 (1998).
32. Noguchi, T., Tomo, T. \& Kato, C. Triplet formation on a monomeric
chlorophyll in the photosystem II reaction center as studied by
time-resolved infrared spectroscopy. \emph{Biochemistry} \textbf{40},
2176--2185 (2001).
33. Nabedryk, E., Leonhard, M., Mäntele, W. \& Breton, J. Fourier
Transform Infrared Difference Spectroscopy Shows No Evidence for an
Enolization of Chlorophyll a upon Cation Formation either in Vitro or
during P700 Photooxidation. \emph{Biochemistry} \textbf{29}, 3242--3247
(1990).
34. Romero, E. \emph{et al.} Mixed exciton-charge-transfer states in
photosystem II: Stark spectroscopy on site-directed mutants.
\emph{Biophys. J.} \textbf{103}, 185--194 (2012).
35. Novoderezhkin, V. I., Dekker, J. P. \& van Grondelle, R. Mixing of
Exciton and Charge-Transfer States in Photosystem II Reaction Centers:
Modeling of Stark Spectra with Modified Redfield Theory. \emph{Biophys.
J.} \textbf{93}, 1293--1311 (2007).
36. Thapper, A., Mamedov, F., Mokvist, F., Hammarström, L. \& Styring,
S. Defining the far-red limit of photosystem II in Spinach. \emph{Plant
Cell} \textbf{21}, 2391--2401 (2009).
37. Pavlou, A., Jacques, J., Ahmadova, N., Mamedov, F. \& Styring, S.
The wavelength of the incident light determines the primary charge
separation pathway in Photosystem II. \emph{Sci. Rep.} \textbf{8}, 1--11
(2018).
38. Sirohiwal, A., Neese, F. \& Pantazis, D. A. Protein Matrix Control
of Reaction Center Excitation in Photosystem II. \emph{J. Am. Chem.
Soc.} \textbf{142}, 18174--18190 (2020).
39. Pettai, H., Oja, V., Freiberg, A. \& Laisk, A. Photosynthetic
activity of far-red light in green plants. \emph{Biochim. Biophys. Acta
- Bioenerg.} \textbf{1708}, 311--321 (2005).
40. Oliver, T. A. A., Lewis, N. H. C. \& Fleming, G. R. Correlating the
motion of electrons and nuclei with two-dimensional
electronic-vibrational spectroscopy. \emph{Proc. Natl. Acad. Sci.}
\textbf{111}, 10061--10066 (2014).
41. Lewis, N. H. C. \emph{et al.} Observation of Electronic Excitation
Transfer Through Light Harvesting Complex II Using Two-Dimensional
Electronic--Vibrational Spectroscopy. \emph{J. Phys. Chem. Lett.}
\textbf{7}, 4197--4206 (2016).
42. Arsenault, E. A. \emph{et al.} Vibronic mixing enables ultrafast
energy flow in light-harvesting complex II. \emph{Nat. Commun.}
\textbf{11}, 1460 (2020).
43. Arsenault, E. A., Yoneda, Y., Iwai, M., Niyogi, K. K. \& Fleming, G.
R. The role of mixed vibronic Qy-Qx states in green light absorption of
light-harvesting complex II. \emph{Nat. Commun.} \textbf{11}, 6011
(2020).
44. Yoneda, Y. \emph{et al.} Electron--Nuclear Dynamics Accompanying
Proton-Coupled Electron Transfer. \emph{J. Am. Chem. Soc.} \textbf{143},
3104--3112 (2021).
45. Groot, M. L., Breton, J., Van Wilderen, L. J. G. W., Dekker, J. P.
\& Van Grondelle, R. Femtosecond visible/visible and visible/mid-IR
pump-probe study of the photosystem II core antenna complex CP47.
\emph{J. Phys. Chem. B} \textbf{108}, 8001--8006 (2004).
46. Van Stokkum, I. H. M., Larsen, D. S. \& Van Grondelle, R. Global and
target analysis of time-resolved spectra. \emph{Biochim. Biophys. Acta -
Bioenerg.} \textbf{1657}, 82--104 (2004).
47. Okubo, T., Tomo, T., Sugiura, M. \& Noguchi, T. Perturbation of the
structure of P680 and the charge distribution on its radical cation in
isolated reaction center complexes of photosystem II as revealed by
fourier transform infrared spectroscopy. \emph{Biochemistry}
\textbf{46}, 4390--4397 (2007).
48. Diner, B. A. \emph{et al.} Site-directed mutations at D1-His198 and
D2-His197 of photosystem II in Synechocystis PCC 6803: Sites of primary
charge separation and cation and triplet stabilization.
\emph{Biochemistry} \textbf{40}, 9265--9281 (2001).
49. Saito, K. \emph{et al.} Distribution of the cationic state over the
chlorophyll pair of the photosystem II reaction center. \emph{J. Am.
Chem. Soc.} \textbf{133}, 14379--14388 (2011).
50. Narzi, D., Bovi, D., De Gaetano, P. \& Guidoni, L. Dynamics of the
Special Pair of Chlorophylls of Photosystem II. \emph{J. Am. Chem. Soc.}
\textbf{138}, 257--264 (2016).
\textbf{Author contributions}
Y.Y. and G.R.F. conceived the research. Y.Y., E.A.A., and K.O. performed
the 2DEV experiments. Y.Y. analyzed the experimental data. M.I. prepared
the sample. Y.Y., E.A.A., and G.R.F. wrote the manuscript. All authors
discussed the results and contributed to the manuscript.
\textbf{Competing financial interests}
The authors declare no competing financial interests.
\includepdf[pages=-]{SI.pdf}
\end{document}
\section{INTRODUCTION}
Our knowledge about $\Lambda$ resonances is much poorer than that of
nucleon resonances~\cite{Beringer:2012}. Even for the
well-established low-lying negative parity states, such as
$\Lambda(1405)S_{01}$, $\Lambda(1520)D_{03}$ and
$\Lambda(1670)S_{01}$, their properties are still
controversial~\cite{Klempt:2009pi}. It remains unclear
whether these states are excited three-quark states, dynamically
generated resonances, three-quark states containing multi-quark
components, or something else, although their nature has been
discussed
extensively~\cite{Chen:2009de,Isgur78,Isgur:1977ky,Capstick:1986bm,Melde:2008,
Bijker:2000, Glozman:1997ag,Loring:2001ky,Schat:2001xr,
Goity:2002pu,Menadue:2011pd,Engel:2012qp,Koniuk:1979vy,Melde:2006yw,Melde:2007,An:2010wb,
Hyodo:2011ur,Oller:2005ig,Roca:2006sz,
Oller:2006hx,Borasoy:2005ie,Hyodo:2003qa,Oset:2001cn,
GarciaRecio:2002td,Jido:2003cb,Oller:2000fj,
Oset:1997it,Oller:2006jw,Borasoy:2006sr,Roca:2008kr,Bouzasa;2008,Martin:1969ud,Manley:2002,
Lutz:2001yb, Buttgen:1985yz,MuellerGroeling:1990cw,Hamaie:1995wy,
Zhong:2009,Martin:1980qe,Gensini:1997fp,Guo:2012vv,Zhang:2013cua,Zhang:2013sva,
Geng:2007vm,Geng:2007hz,Geng:2008er,Liu:2012ge,Liu:2011sw,Xie:2013wfa,GarciaRecio:2003ks,Zou:2008be}.
Recently, we systematically studied the reactions $K^-p\rightarrow
\Sigma^0\pi^0,\Lambda\pi^0,\bar{K}^0n$ in a chiral quark model
approach~\cite{Zhong:2013oqa}. Obvious roles of the low-lying
negative parity states, $\Lambda(1405)$, $\Lambda(1520)$ and
$\Lambda(1670)$, are found in the $K^-p\rightarrow
\Sigma^0\pi^0,\bar{K}^0n$ reactions, where we have extracted their
strong coupling properties. For example, we found that
$\Lambda(1670)$ should have a very weak coupling to $\bar{K}N$,
while $\Lambda(1520)$ needs a strong coupling to $\bar{K}N$, which
cannot be well explained within the symmetric constituent quark model
in the $SU(6)\otimes O(3)$ limit~\cite{Zhong:2013oqa}.
To extract more strong coupling properties and gain a better
understanding of these low-lying $\Lambda$ resonances, in this work
we continue with another important $\bar{K}N$ reaction,
$K^-p\rightarrow \eta\Lambda$. This reaction provides a particularly
clean environment for studying the low-lying $\Lambda$ resonances,
because only $\Lambda$ resonances contribute here due to the isospin
selection rule. In particular, the poorly known strong coupling of
$\Lambda(1670)$ to $\eta\Lambda$ might be reliably extracted from
$K^-p\rightarrow \eta\Lambda$, since this reaction at threshold is
dominated by formation of the
$\Lambda(1670)$~\cite{Starostin:2001zz}. Recently, the new data on
the $K^-p\rightarrow \eta\Lambda$ reaction from the Crystal Ball
Collaboration~\cite{Starostin:2001zz} were analyzed with an
effective Lagrangian model by Liu and
Xie~\cite{Liu:2012ge,Liu:2011sw}. They found possible evidence of an
exotic $D$-wave resonance with mass $M\simeq 1669$ MeV and width
$\Gamma\simeq 1.5$ MeV in this reaction, which will be discussed in
the present work as well.
Furthermore, to understand the nature of the strong coupling
properties extracted from $\bar{K}N$ scattering, we also carry out a
systematic study of the strong decays of the low-lying negative
parity $\Lambda$ resonances in the chiral quark model approach.
Combining the strong coupling properties extracted from $\bar{K}N$
scattering with the strong decay properties from the Particle Data
Group (PDG)~\cite{Beringer:2012}, we expect to obtain a more
reliable understanding of the nature of these low-lying negative
parity $\Lambda$ resonances.
This work is organized as follows. In Sec.~\ref{FRAME}, the model is
reviewed. Then, the numerical results are presented and discussed in
Sec.~\ref{RESULT}. Finally, a summary is given in Sec.~\ref{summ}.
\section{FRAMEWORK}\label{FRAME}
In this work, we study the $K^-p\rightarrow \eta\Lambda$ reaction in
a chiral quark model. This model has been well developed and widely
applied to meson photoproduction
reactions~\cite{qkk,Li:1997gda,zhao-kstar,qk3,qk4,qk5,He:2008ty,Saghai:2001yd,Zhong:2011ht}.
Its recent extension to the $\pi N$~\cite{Zhong:2007fx}
and $\bar{K}N$~\cite{Zhong:2009,Zhong:2013oqa} reactions has also
proven successful and inspiring.
In the calculations, we consider three basic Feynman diagrams, i.e.,
$s$-, $u$- and $t$-channels at the tree level. The reaction
amplitude is expressed as
\begin{equation}
\mathcal{M}=\mathcal{M}_s+\mathcal{M}_u+\mathcal{M}_t,
\end{equation}
where the $s$- and $u$-channel reaction amplitudes $\mathcal{M}_s$
and $\mathcal{M}_u$ are given by
\begin{eqnarray}
\mathcal{M}_s=\sum_j\langle N_f|H^f_m|N_j\rangle\langle
N_j|\frac{1}{E_i+\omega_i-E_j}H^i_m|N_i\rangle,\label{sc}\\
\mathcal{M}_u=\sum_j\langle N_f|H^i_m\frac{1}{E_i-\omega_f-E_j}|N_j\rangle\langle
N_j|H^f_m|N_i\rangle.\label{uc}
\end{eqnarray}
In the above equations, $H_m$ stands for the quark-meson coupling,
which might be described by the effective chiral
Lagrangian~\cite{qkk,Li:1997gda}
\begin{equation}
H_m=\sum_j\frac{1}{f_m}\overline{\psi}_j\gamma^j_{\mu}\gamma^j_5
\psi_j\vec{\tau}\cdot\partial^{\mu}\vec{\phi}_m,
\end{equation}
where $\psi_j$ represents the $j$th quark field in a baryon, and
$f_m$ is the meson's decay constant. The pseudoscalar meson octet
$\phi_m$ is written as
\begin{equation}
\phi_m=\left(\begin{array}{ccc}
\frac{1}{\sqrt{2}}\pi^0+\frac{1}{\sqrt{6}}\eta & \pi^+ & K^+ \cr
\pi^- & -\frac{1}{\sqrt{2}}\pi^0+\frac{1}{\sqrt{6}}\eta & K^0 \cr
K^- & \bar{K}^0 & -\sqrt{\frac{2}{3}}\eta
\end{array}\right).
\end{equation}
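As a quick sanity check on the octet matrix above, the following sketch (plain Python; the numeric field values are arbitrary placeholders, not physical amplitudes) assembles $\phi_m$ and verifies that it is traceless, as any SU(3) octet matrix must be.

```python
import math

def phi_m(pi0, eta, pip, pim, Kp, K0, Km, K0bar):
    """Pseudoscalar meson octet matrix for the given (numeric) field values."""
    s2, s6 = math.sqrt(2), math.sqrt(6)
    return [
        [pi0 / s2 + eta / s6, pip,                  Kp],
        [pim,                 -pi0 / s2 + eta / s6, K0],
        [Km,                  K0bar,                -math.sqrt(2.0 / 3.0) * eta],
    ]

# Arbitrary illustrative field values.
m = phi_m(pi0=0.3, eta=0.7, pip=0.1, pim=0.2, Kp=0.4, K0=0.5, Km=0.6, K0bar=0.8)
trace = sum(m[i][i] for i in range(3))
print(abs(trace) < 1e-12)  # True: the octet matrix is traceless
```

The two $\eta/\sqrt{6}$ diagonal entries cancel against $-\sqrt{2/3}\,\eta$ exactly, since $2/\sqrt{6}=\sqrt{2/3}$.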
In Eqs. (\ref{sc}) and (\ref{uc}), $\omega_i$ and $\omega_f$ are the
energies of the incoming and outgoing mesons, respectively.
$|N_i\rangle$, $|N_j\rangle$ and $|N_f\rangle$ stand for the
initial, intermediate and final states, respectively, and their
corresponding energies are $E_i$, $E_j$ and $E_f$, which are the
eigenvalues of the nonrelativistic Hamiltonian of constituent quark
model $\hat{H}$~\cite{Isgur78, Isgur:1977ky,Capstick:1986bm}.
The resonance transition amplitudes of the $s$-channel can be
generally expressed as~\cite{Zhong:2007fx}
\begin{eqnarray}
\mathcal{M}^s_R=\frac{2M_R}{s-M^2_R+iM_R
\Gamma_R}\mathcal{O}_Re^{-(\textbf{k}^2+\textbf{q}^2)/(6\alpha^2)},\label{stt}
\end{eqnarray}
where $M_R$ and $\Gamma_R$ stand for the mass and width of the
resonance, respectively. The Mandelstam variable $s$ is defined as
$s\equiv(P_i+k)^2$. The single-resonance-excitation amplitude,
$\mathcal{O}_R$, can be obtained through the relation~\cite{Zhong:2013oqa}
\begin{eqnarray}\label{pt}
\mathcal{O}(n,l,J)=\sum_R\mathcal{O}_R(n,l,J)=\sum_Rg_R\mathcal{O}(n,l,J),
\end{eqnarray}
where $g_R$ stands for the relative strength of a single-resonance
in the partial amplitude $\mathcal{O}(n,l,J)$. The $g_R$ factors are
determined by the structure of each resonance and their couplings to
the meson and baryon. The partial amplitudes, $\mathcal{O}(n,l,J)$,
up to $n=2$ shell have been derived in our previous
work~\cite{Zhong:2013oqa}, where the details can be found. For
example, the important partial amplitude for the $S$ waves is given
by~\cite{Zhong:2013oqa}
\begin{eqnarray}
\mathcal{O}_1(S)&=&\left(g_{s1}-\frac{1}{2}g_{s2}\right)\Big(|\mathbf{A}_{out}|\cdot|\mathbf{A}_{in}
|\frac{|\mathbf{k}||\mathbf{q}|}{9\alpha^2}
+\frac{\omega_i}{6\mu_q}\mathbf{A}_{out}\cdot
\mathbf{q}\nonumber\\
&&+\frac{\omega_f}{6\mu_q}\mathbf{A}_{in}\cdot
\mathbf{k}+\frac{\omega_i\omega_f}{4\mu_q^2}\alpha^2\Big),
\end{eqnarray}
where $\mathbf{k}$ and $\mathbf{q}$ stand for the three-momenta of
the incoming and outgoing mesons, respectively, and $\alpha$ is the
harmonic oscillator parameter. The reduced mass $\mu_q$ at the quark
level is defined by $1/\mu_q=1/m_q^i+1/m_q^f$, where $m_q^i$ and
$m_q^f$ correspond to the initial and final quark masses,
respectively. $\mathbf{A}_{in}$ and $\mathbf{A}_{out}$ are the same
variables defined in~\cite{Zhong:2013oqa}. The $g$-factors in the
partial amplitudes, such as $g_{s1}$ and $g_{s2}$, have been defined
in~\cite{Zhong:2013oqa} as well. These $g$-factors can be derived in
the SU(6)$\otimes$O(3) symmetry limit. In Tab.~\ref{abf}, we have
listed the $g$-factors for the reaction
$K^-p\rightarrow\eta\Lambda$.
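The structure of the $s$-channel amplitude in Eq.~(\ref{stt}) can be illustrated numerically. The sketch below (plain Python) uses the $\Lambda(1670)$ mass and width quoted later in Tab.~\ref{BW} and the harmonic oscillator parameter $\alpha=0.4$ GeV; the CM momenta and $\mathcal{O}_R=1$ are illustrative assumptions, not fitted values. It confirms that the magnitude of the Breit-Wigner factor peaks on the pole position $s=M_R^2$.

```python
import math

def s_channel_factor(s, M_R, Gamma_R, k, q, alpha, O_R=1.0):
    """Breit-Wigner resonance factor of Eq. (stt):
    2 M_R / (s - M_R^2 + i M_R Gamma_R) * O_R * exp(-(k^2+q^2)/(6 alpha^2))."""
    bw = 2.0 * M_R / (s - M_R**2 + 1j * M_R * Gamma_R)
    ff = math.exp(-(k**2 + q**2) / (6.0 * alpha**2))
    return bw * O_R * ff

M_R, Gamma_R = 1.676, 0.035   # Lambda(1670) mass/width in GeV (Tab. BW)
alpha = 0.4                    # harmonic oscillator parameter in GeV
k = q = 0.2                    # illustrative CM momenta in GeV

peak = abs(s_channel_factor(M_R**2, M_R, Gamma_R, k, q, alpha))
off  = abs(s_channel_factor((M_R + 0.05)**2, M_R, Gamma_R, k, q, alpha))
print(peak > off)  # True: the magnitude is largest at s = M_R^2
```

On the pole the magnitude reduces to $(2/\Gamma_R)$ times the Gaussian form factor, which is what makes the narrow $\Lambda(1670)$ dominate near its threshold.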
The transition amplitude of the $u$-channel is given
by~\cite{Zhong:2007fx,Zhong:2009}
\begin{eqnarray}
\mathcal{M}^u_n=-\frac{2M_n}{u-M^2_n}\mathcal{O}_n
e^{-(\textbf{k}^2+\textbf{q}^2)/(6\alpha^2)}.\label{utt}
\end{eqnarray}
In Eq.~(\ref{utt}), the amplitude $\mathcal{O}_n$ is determined by
the structure of each intermediate state and its couplings to the
meson and baryon, which has been derived in our previous
work~\cite{Zhong:2013oqa}. The Mandelstam variable $u$ is defined
as $u\equiv(P_i-q)^2$.
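For reference, the Mandelstam variables used above can be evaluated explicitly. The sketch below (plain Python; the meson and baryon masses are rounded PDG values not quoted in the text, and the beam momentum and angle are illustrative) computes $s=(P_i+k)^2$, $t$ and $u=(P_i-q)^2$ for $K^-p\rightarrow\eta\Lambda$ and checks the constraint $s+t+u=m_K^2+m_p^2+m_\eta^2+m_\Lambda^2$.

```python
import math

def mandelstam(p_lab, cos_th, mK=0.4937, mp=0.9383, meta=0.5479, mL=1.1157):
    """Mandelstam variables for K^- p -> eta Lambda: beam momentum p_lab
    (GeV/c) on a proton at rest, CM scattering angle cos_th."""
    EK_lab = math.hypot(p_lab, mK)
    s = mK**2 + mp**2 + 2.0 * mp * EK_lab        # target proton at rest

    def p_cm(m1, m2):  # CM three-momentum of a two-body state of mass sqrt(s)
        return math.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2.0 * math.sqrt(s))

    k, q = p_cm(mK, mp), p_cm(meta, mL)          # |k|, |q| in the CM frame
    EK, Ep = math.hypot(k, mK), math.hypot(k, mp)
    Ee = math.hypot(q, meta)
    t = mK**2 + meta**2 - 2.0 * (EK * Ee - k * q * cos_th)   # (P_K - P_eta)^2
    u = mp**2 + meta**2 - 2.0 * (Ep * Ee + k * q * cos_th)   # (P_p - P_eta)^2
    return s, t, u

s, t, u = mandelstam(p_lab=0.74, cos_th=0.3)     # 740 MeV/c, near eta threshold
total = 0.4937**2 + 0.9383**2 + 0.5479**2 + 1.1157**2
print(abs(s + t + u - total) < 1e-12)            # True: s + t + u = sum of m_i^2
```

The same two-body CM momenta $|\mathbf{k}|$ and $|\mathbf{q}|$ enter the form factors and the differential cross section below.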
In the calculations, we consider the vector- and scalar-exchanges
for the $t$-channel backgrounds. The vector meson-quark and scalar
meson-quark couplings are given by
\begin{eqnarray}\label{coup}
H_V&=& \bar{\psi}_j\left(a\gamma^{\nu}+\frac{
b\sigma^{\nu\lambda}\partial_{\lambda}}{2m_q}\right)V_{\nu} \psi_j,\\
H_S&=&g_{Sqq}\bar{\psi}_j\psi_jS,
\end{eqnarray}
where $V$ and $S$ stand for the vector and scalar fields,
respectively. The constants $a$, $b$ and $g_{Sqq}$ are the vector,
tensor and scalar coupling constants, respectively. They are treated
as free parameters in this work.
On the other hand, the $VPP$ and $SPP$ couplings ($P$ stands for a
pseudoscalar meson) are adopted as
\begin{eqnarray}
H_{VPP}&=&-iG_VTr([\phi_m,\partial_\mu\phi_m]V^{\mu}),\\
H_{SPP}&=&\frac{g_{SPP}}{2m_\pi}\partial_\mu\phi_m\partial^\mu
\phi_m S,
\end{eqnarray}
where $G_V$ and $g_{SPP}$ are the $VPP$ and $SPP$ coupling constants
to be determined by experimental data.
For the vector meson exchange, the $t$-channel amplitude in the
quark model is written as~\cite{Zhong:2013oqa}
\begin{equation}
\mathcal{M}^V_t=\mathcal{O}^t_V\frac{1}{t-M^2_V}e^{-(\mathbf{q}-\mathbf{k})^2/(6\alpha^2)},
\end{equation}
where $e^{-(\mathbf{q}-\mathbf{k})^2/(6\alpha^2)}$ is a form factor
deduced from the quark model, and $M_V$ is the vector-meson mass.
The amplitude $\mathcal{O}^t_V$ is given by~\cite{Zhong:2013oqa}
\begin{eqnarray}
\mathcal{O}^t_V&=&G_Va[g^s_t(\mathcal{H}_0+\mathcal{H}_1\mathbf{q}\cdot\mathbf{k})
+g^v_t\mathcal{H}_2i\mathbf{\sigma}\cdot(\mathbf{q}\times\mathbf{k})]\nonumber\\
&&+\text{tensor term},\label{tvector}
\end{eqnarray}
where the factors $g^s_t$ and $g^v_t$ are defined by $g^s_t\equiv
\langle N_f|\sum^3_{j=1}I^{ex}_j|N_i\rangle$ and $g^v_t\equiv
\langle N_f|\sum^3_{j=1}\sigma_j I^{ex}_j|N_i\rangle$, where
$I^{ex}_j$ is the isospin operator of the exchanged meson. These
factors can be deduced from the quark model.
For the scalar meson exchange, the $t$-channel amplitude in the
quark model is given by~\cite{Zhong:2013oqa}
\begin{equation}
\mathcal{M}^S_t=\mathcal{O}^t_S\frac{1}{t-M^2_S}e^{-(\mathbf{q}-\mathbf{k})^2/(6\alpha^2)},
\end{equation}
where $M_S$ is the scalar-meson mass, and the $\mathcal{O}^t_S$ is
written as~\cite{Zhong:2013oqa}
\begin{eqnarray}
\mathcal{O}^t_S&\simeq&\frac{g_{SPP}g_{Sqq}}{2m_\pi}(\omega_i\omega_f
-\mathbf{q}\cdot\mathbf{k})[g^s_t(\mathcal{A}_0+\mathcal{A}_1\mathbf{q}\cdot\mathbf{k})\nonumber\\
&&+g^v_t\mathcal{A}_2i\mathbf{\sigma}\cdot(\mathbf{q}\times\mathbf{k})].\label{tscalar}
In Eqs.~(\ref{tvector}) and (\ref{tscalar}), the variables
$\mathcal{H}_i$ and $\mathcal{A}_i$ ($i=0,1,2$) are defined as
in~\cite{Zhong:2013oqa}.
In this work, we consider the $K^{*}$- and $\kappa$-exchanges in the
$K^-p\rightarrow\Lambda\eta$ process. The factors $g^s_t$ and
$g^v_t$ derived from the quark model have been listed in
Tab.~\ref{abf} as well.
\begin{table}[ht]
\caption{The $g$-factors appearing in the $s$-, $u$- and $t$-channel
amplitudes of the $K^-p\rightarrow\Lambda\eta$ process, obtained in
the SU(6)$\otimes$O(3) symmetry limit. $\phi_p$ is the
$\eta$-$\eta'$ mixing angle defined
in~\cite{DiDonato:2011kr,Ke:2009mn}.}\label{abf}
\begin{tabular}{|c|c|c|c|c|c|c }\hline
\hline
$g_{s1}=-\frac{\sqrt{6}}{6}\sin{\phi_p}$ &$g_{v1}=-\frac{\sqrt{6}}{4}\sin{\phi_p}$ \\
$g^u_{s1}=\frac{\sqrt{3}}{2}\cos{\phi_p}$ &$g^u_{v1}=\frac{\sqrt{3}}{2}\cos{\phi_p}$\\
$g^s_t=\frac{\sqrt{6}}{2}$ &$g^v_t=\frac{\sqrt{6}}{2}$\\
\hline
\end{tabular}
\end{table}
\section{RESULT AND ANALYSIS}\label{RESULT}
\subsection{Parameters}
With the transition amplitudes derived within the quark model, the
differential cross section can be calculated by~\cite{Zhong:2013oqa}
\begin{eqnarray}
\frac{d\sigma}{d\Omega}&=&\frac{(E_i+M_i)(E_f+M_f)}{64\pi^2s(2M_i)(2M_f)}
\frac{|\mathbf{q}|}{|\mathbf{k}|}\frac{M_N^2}{2}\nonumber\\
&&\times\sum_{\lambda_i,\lambda_f}\left|[\frac{\delta_{m_i}}{f_{m_i}}\frac{\delta_{m_f}}{f_{m_f}}
(\mathcal{M}_s+\mathcal{M}_u)+\mathcal{M}_t]_{\lambda_f,\lambda_i}\right|^2,
\end{eqnarray}
where $\lambda_i=\pm1/2$ and $\lambda_f=\pm1/2$ are the helicities
of the initial and final state baryons, respectively.
$f_{m_i}$ and $f_{m_f}$ are the initial and final meson decay
constants, respectively. $\delta_{m_i}\delta_{m_f}$ is a global
parameter accounting for the flavor symmetry breaking effects
arising from the quark-meson couplings, which will be determined by
experimental data.
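Given $d\sigma/d\Omega$ on a grid of CM angles, the total cross section follows from $\sigma=2\pi\int_{-1}^{1}(d\sigma/d\Omega)\,d\cos\theta$ for an azimuthally symmetric process. A minimal sketch (plain Python; the quadratic angular shape below is purely illustrative, not a model prediction):

```python
import math

def total_cross_section(dcs, n=2000):
    """Integrate a DCS dcs(cos_theta) over the full solid angle
    (azimuthal symmetry assumed) with the trapezoidal rule."""
    h = 2.0 / n
    grid = [-1.0 + i * h for i in range(n + 1)]
    integral = sum((dcs(a) + dcs(b)) * h / 2.0 for a, b in zip(grid, grid[1:]))
    return 2.0 * math.pi * integral

# Illustrative shape: flat S-wave term plus a small quadratic term.
dcs = lambda c: 0.05 + 0.02 * c * c          # mb/sr
sigma = total_cross_section(dcs)
# Analytic value: 2*pi*(0.05*2 + 0.02*(2/3)) ~ 0.7121 mb
print(round(sigma, 4))
```

The same quadrature applied to the model DCS reproduces the total cross section curves discussed below.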
In the calculation, the universal value of harmonic oscillator
parameter $\alpha=0.4$ GeV is adopted. The masses of the $u$, $d$,
and $s$ constituent quarks are set as $m_u=m_d=330$ MeV, and
$m_s=450$ MeV, respectively. The decay constants for $\eta$ and $K$
are adopted as $f_\eta=f_K=160$ MeV.
In the present work, the resonance transition amplitude,
$\mathcal{O}_R$, is derived in the $SU(6)\otimes O(3)$ symmetric
quark model limit. In reality, due to e.g. spin-dependent forces in
the quark-quark interaction, the symmetry of $SU(6)\otimes O(3)$ is
generally broken. As a result, configuration mixing would
occur~\cite{Isgur78,Isgur:1977ky,Capstick:1986bm,Schat:2001xr}. To
take into account the breaking of that symmetry, a set of coupling
strength parameters, $C_R$, should be introduced for each resonance
amplitude,
\begin{equation}
\mathcal{O}_R\rightarrow C_R\mathcal{O}_R,
\end{equation}
where $C_R$ should be determined by fitting the experimental
observation. One finds that $C_R=1$ in the $SU(6)\otimes O(3)$
symmetry limit, while deviation of $C_R$ from unity implies the
$SU(6)\otimes O(3)$ symmetry breaking. The determined values of
$C_R$ for the $K^-p\rightarrow\Lambda\eta$ process have been listed
in Table~\ref{Pra}. These strength parameters $C_R$ for the main
resonances will be further discussed in Sec.~\ref{cxx}.
\begin{table}[ht]
\caption{The determined values for the parameters $C_R$,
$\delta_{m_i}\delta_{m_f}$ and $\phi_P$ in the
$K^-p\rightarrow\Lambda\eta$ scattering process.} \label{Pra}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c }\hline\hline
Parameter & $C_{S_{01}(1405)}$ &$C_{D_{03}(1520)}$&$C_{S_{01}(1670)}$&$C_{D_{03}(1690)}$&$\delta_{m_i}\delta_{m_f}$&$\phi_P$\\
\hline
Value & 1.17 & 1.18 &34.70& 38.58 &1.24&$35^\circ$\\
\hline
\end{tabular}
\end{table}
In the $t$ channel, there are two parameters, $G_{V}a$ and
$g_{SPP}g_{Sqq}$, which come from $K^{*}$- and $\kappa$-exchanges,
respectively. By fitting the data, we obtain $G_Va\simeq4.8$ and
$g_{SPP}g_{Sqq}\simeq105$, which are consistent with our previous
determinations in~\cite{Zhong:2013oqa}.
In the $u$ channel, the intermediate states are the nucleon and
nucleon resonances. One finds that the contributions from the
$n\geq1$ shells are negligibly small and insensitive to the
degenerate masses adopted for these shells. In the present work, we
take $M_1=1650$ MeV and $M_2=1750$ MeV for the degenerate masses of
the $n=1$ and $n=2$ shell nucleon resonances, respectively.
In the $s$-channel of the $K^-p\rightarrow\Lambda\eta$ process,
there are only the contributions from $\Lambda$ resonances. The
low-lying $\Lambda$ resonances classified in the quark model up to
$n=2$ shell are listed in Tab.~\ref{qc}. From the table, we can see
that in the $n=0$ shell only the $\Lambda$ pole contributes to the
process, while in the $n=1$ shell two $S$-waves (i.e.,
$[70,^21]\Lambda(1405)S_{01}$, $[70,^28]\Lambda(1670)S_{01}$) and
two $D$-waves (i.e., $[70,^21]\Lambda(1520)D_{03}$,
$[70,^28]\Lambda(1690)D_{03}$) contribute to the reaction. The
excitations of $[70,^48]$ are forbidden by the $\Lambda$-selection
rule~\cite{Zhao:2006an}. In the calculations, the $n=2$ shell
$\Lambda$ resonances in the $s$ channel are treated as degenerate,
and their degenerate mass and width are taken as $M=1800$ MeV and
$\Gamma=100$ MeV, since in the low-energy region the contributions
from the $n=2$ shell are not significant.
By fitting the experimental data, we obtain their Breit-Wigner
masses and widths, which are listed in Tab.~\ref{BW}. From the
table, it is seen that the extracted resonance parameters are
compatible with the data from the PDG~\cite{Beringer:2012}.
\begin{table}[ht]
\caption{The classification of $\Lambda$ resonances in the quark
model up to the $n=2$ shell. A ``?'' denotes an unestablished
resonance. $l_{I,2J}$ is the PDG notation for baryons. $N_6$ and
$N_3$ denote the SU(6) and SU(3) representations. $L$ and $S$ stand
for the total orbital angular momentum and spin of a baryon, respectively.}
\label{qc}
\begin{tabular}{|c|c||c|c|c|c|c }\hline
\hline
$|N_6,^{2S+1}N_3,n,L\rangle$ &$l_{I,2J}$ &$|N_6,^{2S+1}N_3,n,L\rangle$ &$l_{I,2J}$\\
\hline
$|56,^28,0,0\rangle$ &$P_{01}(1116)$&...&...\\
\hline
$|70,^21,1,1\rangle$ &$S_{01}(1405)$ &$|56,^28,2,2\rangle$ &$P_{03}(?)$\\
&$D_{03}(1520)$ & &$F_{05}(?)$\\
\hline
$|70,^28,1,1\rangle$ &$S_{01}(1670)$ &$|70,^21,2,2\rangle$ &$P_{03}(?)$\\
&$D_{03}(1690)$ & &$F_{05}(?)$\\
\hline
$|70,^48,1,1\rangle$ &$S_{01}(1800)$ &$|70,^28,2,2\rangle$ &$P_{03}(?)$\\
&$D_{03}(?)$ & &$F_{05}(?)$\\
&$D_{05}(1830)$ & & \\
\hline
$|56,^28,2,0\rangle$ &$P_{01}(1600)$ &$|70,^48,2,2\rangle$ &$P_{01}(?)$\\
$|70,^21,2,0\rangle$ &$P_{01}(1810)$ & &$P_{03}(?)$\\
$|70,^28,2,0\rangle$ &$P_{01}(?)$ & &$F_{05}(?)$\\
$|70,^48,2,0\rangle$ &$P_{03}(?)$ & &$F_{07}(?)$\\
\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{Breit-Wigner masses $M_R$ (MeV) and widths $\Gamma_R$ (MeV)
for the resonances in the $s$-channel compared with the experimental
data from PDG.} \label{BW}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c }\hline\hline
Resonance &$M_R$ &$\Gamma_R$ &$M_R$ (PDG) &$\Gamma_R$ (PDG)\\
\hline
$\Lambda(1405)S_{01}$ &$1405.0$ &$53.37$ &$1405.0^{+1.3}_{-1.0}$ &$50\pm2$\\
$\Lambda(1520)D_{03}$ &$1519.5$ &$15.6$ &$1519.5\pm1.0$&$15.6\pm1.0$\\
\hline
$\Lambda(1670)S_{01}$ &$1676.0$&$35.0$&$1670\pm10$&$25\sim50$\\
$\Lambda(1690)D_{03}$ &$1682.4$ &$70.0$ &$1690\pm5$&$50\sim70$\\
\hline
\end{tabular}
\end{table}
\subsection{Total cross section}
The total cross section as a function of the beam momentum is shown
in Fig.~\ref{fig-tcr}, where we find that the observations can be
well described within the chiral quark model.
\begin{figure}[ht]
\centering \epsfxsize=9.0 cm
\epsfbox{total.eps}\caption{$K^-p\rightarrow\Lambda\eta$ total cross
sections compared with the data~\cite{Starostin:2001zz}. The bold
solid curves are for the full model calculations. In the upper
panel, exclusive cross sections for $\Lambda(1405)S_{01}$,
$\Lambda(1670)S_{01}$, $t$-channel and $u$-channel are indicated
explicitly by the legends in the figures. In the lower panel, the
results by switching off the contributions of $\Lambda(1405)S_{01}$,
$\Lambda(1670)S_{01}$, $t$-channel and $u$-channel are indicated
explicitly by the legends in the figures.} \label{fig-tcr}
\end{figure}
It is found that $\Lambda(1670)S_{01}$ should play a dominant role
in the reaction. $\Lambda(1670)S_{01}$ is responsible for the bump
structure in the cross section around its threshold. To describe
the data well, a large amplitude of $\Lambda(1670)S_{01}$ in the
reaction is needed, about 34 times (i.e.,
$C_{S_{01}(1670)}=34$) larger than that derived in the
$SU(6)\otimes O(3)$ limit. In our previous work, we found that
$\Lambda(1670)S_{01}$ should have a weaker coupling to $\bar{K}N$
than that derived in the $SU(6)\otimes O(3)$
limit~\cite{Zhong:2013oqa}; thus, $\Lambda(1670)S_{01}$ should have
a much stronger coupling to $\eta\Lambda$ than that derived from
the symmetric quark model. These phenomena might be explained
by configuration mixing among the $S$-wave states
$\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$ and
$\Lambda(1800)S_{01}$, which will be further studied in
Sec.~\ref{cxx}.
Furthermore, a sizeable contribution from $\Lambda(1405)$ can also
be seen in the cross section: the total cross section around the
peak is slightly underestimated without its contribution.
No obvious contributions from the $D$-wave states,
$\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$, are found in the
reaction.
It should be emphasized that both $u$- and $t$-channel backgrounds
play crucial roles in the reactions. Switching off their
contributions, the cross section changes significantly. The
important roles of $u$- and/or $t$-channel backgrounds are also
found in the other $\bar{K}N$ reactions $K^-p\rightarrow
\Sigma^0\pi^0,\Lambda\pi^0,\bar{K}^0n$~\cite{Zhong:2013oqa}.
\begin{figure*}[ht]
\centering \epsfxsize=16.0 cm \epsfbox{difc.eps}
\caption{Differential cross sections of the $K^-p\rightarrow
\eta\Lambda$ compared with the data from~\cite{Starostin:2001zz}.
The bold solid curves are for the full model calculations. The
results by switching off the contributions from
$\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$ and $u$- and
$t$-channel backgrounds are indicated explicitly by the legend in
the figures. }\label{fig-sd}
\end{figure*}
\subsection{Differential cross section}
The differential cross sections (DCS) compared with the data are
shown in Fig.~\ref{fig-sd}. From the figure, it is seen that the
data in the low energy region from threshold to $P_K=770$ MeV/c can
be reasonably described within our chiral quark model. However, it
should be remarked that our theoretical results seem to slightly
underestimate the DCS at both forward and backward angles in the
beam momentum region of $P_K=730\sim742$ MeV/c. Improved measurements
in this beam momentum region are needed to clarify the discrepancies.
To explore the contribution of individual resonances and the $u$-
and $t$-channel backgrounds to the DCS, we have shown the
predictions by switching off one of their contributions in
Fig.~\ref{fig-sd} as well. From the figure, the dominant roles of
$\Lambda(1670)S_{01}$ and $u$-, $t$-channel backgrounds are
significantly seen in the DCS. Switching off the contribution of
$\Lambda(1670)S_{01}$, we find that the cross sections are
drastically underestimated. Without the $u$-channel contribution, the
DCS are significantly underestimated around the $\eta$ production
threshold, while switching off the $t$-channel contribution strongly
overestimates the DCS at both forward and backward
angles. Furthermore, slight contributions of $\Lambda(1405)S_{01}$
can be seen in the DCS around the $\eta$ production threshold, where
$\Lambda(1405)S_{01}$ interferes constructively with
$\Lambda(1670)S_{01}$. However, $\Lambda(1405)S_{01}$ is not a
crucial contributor in the reaction, which explains why its
contribution can be neglected in
some studies at the hadron
level~\cite{Manley:2002,Liu:2012ge,Liu:2011sw}.
From Fig.~\ref{fig-sd}, it is seen that a bowl structure seems to
appear in the data around the $\eta$ production threshold. As we know,
bowl structures are typical effects of the interference
between $S$- and $D$-wave states. In this energy region, the
bowl structures might be caused by the interference between
$\Lambda(1670)S_{01}$ and $\Lambda(1690)D_{03}$. Considering
$\Lambda(1690)D_{03}$ as the conventional three-quark state
classified in the constituent quark model, we cannot obtain a bowl
structure in the DCS, because the contributions of
$\Lambda(1690)D_{03}$ to the reaction are too small. In
Refs.~\cite{Liu:2012ge,Liu:2011sw}, Liu and Xie carefully
studied these bowl structures appearing in the DCS; they needed an
exotic $D$-wave state $\Lambda(1669)$ with a very narrow width of
$\Gamma\simeq 1.5$ MeV to reproduce the bowl structures. Finally, it
should be pointed out that, given the rather large uncertainties of the
present data, we cannot confirm whether there are obvious bowl
structures in the DCS or not. Thus, more accurate measurements are
needed.
As a whole, $\Lambda(1670)S_{01}$ plays a dominant role in the
reaction. $\Lambda(1670)S_{01}$ should have a much stronger coupling
to $\eta \Lambda$, and a weaker coupling to $\bar{K}N$, than
those derived in the $SU(6)\otimes O(3)$ limit. The $u$- and
$t$-channel backgrounds also play crucial roles in the reaction.
Furthermore, slight contributions of $\Lambda(1405)S_{01}$ can be
seen in the DCS around $\eta$ production threshold. No obvious
evidence from the $D$-wave states, $\Lambda(1520)D_{03}$ and
$\Lambda(1690)D_{03}$, is found in the reaction.
\subsection{Configuration mixing and strong couplings}\label{cxx}
To further understand the strength parameters $C_R$ in the
$K^-p\rightarrow\Lambda\eta$ reaction, and explain the strong
coupling properties of the $\Lambda$ resonances extracted from the
$\bar{K}N$ scattering, e.g., the weak coupling of
$\Lambda(1670)S_{01}$ to $\bar{K}N$ and strong coupling of
$\Lambda(1670)S_{01}$ to $\eta \Lambda$, in this subsection we study
the configuration mixing effects in the low-lying negative-parity $\Lambda$
resonances.
\subsubsection{Configuration mixing and strong decays}
If there is configuration mixing in several resonances with the same
$J^P$ values, their strong coupling properties might be very
different from the pure states classified in the constituent quark
model. Here, we study the strong decays of the low-lying negative-parity
$\Lambda$ resonances and test whether configuration mixing can
explain the strong couplings of these resonances.
In this work, the strong decays of the $\Lambda$ resonances are also
studied with the chiral quark model. This approach has been
successfully used to study the strong decays of charmed baryons,
$\Xi$ baryons, and heavy-light
mesons~\cite{Zhong:2007gp,Zhong:2010vq,Liu:2012sj,Xiao:2013xi}. The
details of how to describe the strong decays of the baryon
resonances in the chiral quark model can be found
in~\cite{Xiao:2013xi}.
As we know, $\Sigma(1385)$ is a well-established strangeness-1 hyperon
state. According to the classification of the quark model, it is
assigned to the pure $|56,^410,0,0,\frac{3}{2}^+\rangle$
representation. In this work, the measured width of $\Sigma^0(1385)$,
$\Gamma=36$ MeV~\cite{Beringer:2012}, is used as an input to
determine the overall parameter $\delta$ ($=0.654$) in the decay
amplitudes. With this determined parameter, we can calculate the
strong decays of the other strangeness-1 hyperon states.
\paragraph{$S$-wave states}
Firstly, we study the strong decay properties of the $S$-wave states
$\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$ and
$\Lambda(1800)S_{01}$. If they are pure states, according to the
classification of the constituent quark model, they should be
assigned to the $|70,^21,1,1,\frac{1}{2}^-\rangle$,
$|70,^28,1,1,\frac{1}{2}^-\rangle$ and
$|70,^48,1,1,\frac{1}{2}^-\rangle$, respectively~\cite{Xiao:2013xi}.
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) and
partial decay width ratios of $\Lambda(1670)S_{01}$ as a pure state
of $|70,^28,1,1,\frac{1}{2}^-\rangle$. $\Gamma^{th}$ denotes our
prediction, while $\Gamma^{exp}$ denotes the data from PDG. }
\label{lambda16}
\begin{tabular}{|c|c|c|c|c|c|}\hline\hline
Channel&$\Gamma^{th}_i$&$\Gamma^{th}_{total}$&$\Gamma^{exp}_{total}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{th}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{exp}$\\
\hline
$\Sigma\pi$&$15.4$&$123.4$&$25$ to $50(\approx35)$&$0.12$&$0.25\sim0.55$\\
$NK$&$103.1$&&&$0.84$&$0.20\sim0.30$\\
$\Lambda\eta$&$0.28$&&&$0.002$&$0.10\sim0.25$\\
$\Sigma(1385)\pi$&$4.7$&&&$0.04$&$\cdot\cdot\cdot$\\
\hline\hline
\end{tabular}
\end{table}
Considering $\Lambda(1670)S_{01}$ as the pure state
$|70,^28,1,1,\frac{1}{2}^-\rangle$, we calculate its strong decay
properties, which are listed in Tab.~\ref{lambda16}. From the table,
we see that the theoretical total decay width
($\Gamma^{th}_{total}=123.4$ MeV) is much larger than the data
($\Gamma^{exp}_{total}\simeq35$ MeV). Meanwhile, according to our
calculation, the branching ratio of the $\Lambda\eta$ channel is too
small, while that of the $N\bar{K}$ channel is too
large, compared with the data. Thus, the strong decays of
$\Lambda(1670)S_{01}$ as a pure state cannot be described at all.
It should be remarked that several different representations with
the same $J^P$ quantum numbers might be coupled together via some kind of
interactions~\cite{Isgur78,Isgur:1977ky,Capstick:1986bm,Schat:2001xr}.
Thus, $\Lambda(1670)S_{01}$ may be a mixture of the three
representations $|70,^21,1,1\rangle$, $|70,^28,1,1\rangle$
and $|70,^48,1,1\rangle$ with $J^P=1/2^-$. Using the standard
Cabibbo-Kobayashi-Maskawa (CKM) matrix parametrization, the physical states
can be expressed as
\begin{equation}\label{mixs}
\left(\begin{array}{c}|\Lambda(1800)\frac{1}{2}^-\rangle\cr
|\Lambda(1670)\frac{1}{2}^-\rangle\cr
|\Lambda(1405)\frac{1}{2}^-\rangle
\end{array}\right)=U \left(\begin{array}{c} |70,^21\rangle \cr |70,^28\rangle \cr |70,^48\rangle
\end{array}\right),
\end{equation}
with
\begin{equation}
U=\left(\begin{array}{ccc} c_{12}c_{13} & s_{12}c_{13} & s_{13} \cr
-s_{12}c_{23}-c_{12}s_{23}s_{13} & c_{12}c_{23}-s_{12}s_{23}s_{13} &
s_{23}c_{13} \cr s_{12}s_{23}-c_{12}c_{23}s_{13} &
-c_{12}s_{23}-s_{12}c_{23}s_{13} & c_{23}c_{13}
\end{array}\right),
\end{equation}
where $c_{ij}\equiv \cos{\theta}_{ij}$ and $s_{ij}\equiv
\sin{\theta}_{ij}$. $\theta_{ij}$ stands for the mixing angles,
which could be determined by fitting the strong decay data of
$\Lambda(1670)S_{01}$.
By fitting the experimental data from PDG~\cite{Beringer:2012}, we
obtain $\theta_{12}\simeq 75^\circ$, $\theta_{13}\simeq
50^\circ$ and $\theta_{23}\simeq 125^\circ$. With these mixing angles, the
strong decay properties of $\Lambda(1670)S_{01}$ can be reasonably
described. The theoretical results compared with the data are listed
in Tab.~\ref{mix16}. From the table, it is seen that with
configuration mixing the $\Lambda\eta$ branching ratio is obviously
enhanced, while the $N\bar{K}$ branching ratio is suppressed, which
naturally explains the weak coupling of $\Lambda(1670)S_{01}$ to
$\bar{K}N$ and its strong coupling to $\eta
\Lambda$ needed in the $\bar{K}N$ reactions.
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) and
partial decay width ratios of $\Lambda(1670)S_{01}$ as a mixed state
compared with the experimental data from PDG.} \label{mix16}
\begin{tabular}{|c|c|c|c|c|c|}\hline\hline
Channel&$\Gamma^{th}_i$&$\Gamma^{th}_{total}$&$\Gamma^{exp}_{total}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{th}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{exp}$\\
\hline
$\Sigma\pi$&$11.8$&$44.7$&$25$ to $50(\approx35)$&$0.26$&$0.25\sim0.55$\\
$NK$&$13.6$&&&$0.30$&$0.20\sim0.30$\\
$\Lambda\eta$&$18.2$&&&$0.41$&$0.10\sim0.25$\\
$\Sigma(1385)\pi$&$1.1$&&&$0.02$&$\cdot\cdot\cdot$\\
\hline\hline
\end{tabular}
\end{table}
According to the determined mixing angles, Eq.(\ref{mixs}) can be
explicitly expressed as
\begin{equation}\label{mix1}
\left(\begin{array}{c}|\Lambda(1800)\frac{1}{2}^-\rangle\cr |\Lambda(1670)\frac{1}{2}^-\rangle\cr
|\Lambda(1405)\frac{1}{2}^-\rangle\cr\end{array}\right)=\left(\begin{array}{ccc}
0.17&0.62&0.77\cr 0.39 &-0.76&0.53\cr 0.90&0.21&-0.37
\end{array}\right)\left(\begin{array}{c}|70,^21\rangle\cr|70,^28\rangle\cr|70,^48\rangle\cr
\end{array}\right),
\end{equation}
where we find that the main component of $\Lambda(1670)S_{01}$ is
$|70,^28\rangle$ ($\sim58\%$). Meanwhile, the $|70,^21\rangle$ and
$|70,^48\rangle$ components also have sizable proportions of
$\sim 15\%$ and $\sim 28\%$, respectively. $\Lambda(1405)S_{01}$
is dominated by $|70,^21\rangle$ ($\sim81\%$), while it contains
octet components of $|70,^28\rangle$ ($\sim4\%$) and
$|70,^48\rangle$ ($\sim14\%$). $\Lambda(1800)S_{01}$ is dominated by
both the $|70,^48\rangle$ ($\sim59\%$) and $|70,^28\rangle$ ($\sim38\%$)
components.
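As an illustrative numerical cross-check (our own sketch, not part of the original analysis), the matrix of Eq.~(\ref{mix1}) follows from the CKM-type parametrization of $U$ evaluated at the fitted angles, and its squared entries reproduce the component fractions quoted above:

```python
import math

def ckm_matrix(t12, t13, t23):
    """CKM-type rotation matrix U (angles in degrees), as parametrized in the text."""
    c12, s12 = math.cos(math.radians(t12)), math.sin(math.radians(t12))
    c13, s13 = math.cos(math.radians(t13)), math.sin(math.radians(t13))
    c23, s23 = math.cos(math.radians(t23)), math.sin(math.radians(t23))
    return [[c12 * c13,                    s12 * c13,                   s13],
            [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13, s23 * c13],
            [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13]]

# Fitted angles from the text: theta_12 ~ 75 deg, theta_13 ~ 50 deg,
# theta_23 ~ 125 deg.
U = ckm_matrix(75, 50, 125)

# Entry-by-entry comparison with the matrix quoted in Eq. (mix1).
quoted = [[0.17, 0.62, 0.77], [0.39, -0.76, 0.53], [0.90, 0.21, -0.37]]
for row, qrow in zip(U, quoted):
    for u, q in zip(row, qrow):
        assert abs(u - q) < 0.01

# U is a rotation, so every row is normalized; squared entries give the
# component probabilities quoted in the text (~58% |70,^2 8> in Lambda(1670)).
for row in U:
    assert abs(sum(x * x for x in row) - 1.0) < 1e-12
assert abs(U[1][1] ** 2 - 0.58) < 0.02
```

The same squared-amplitude reading gives the $\sim81\%$ singlet fraction of $\Lambda(1405)S_{01}$ and the $\sim59\%$/$\sim38\%$ split quoted for $\Lambda(1800)S_{01}$.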
With these mixing schemes, we have calculated the strong decay
properties of $\Lambda(1405)S_{01}$ and $\Lambda(1800)S_{01}$. The
calculated decay width of $\Lambda(1405)S_{01}$ is $\Gamma\simeq53$
MeV, which is in good agreement with the data ($\Gamma=50\pm2$ MeV).
Considering the uncertainties in the mass of $\Lambda(1800)S_{01}$,
we vary its mass from 1700 MeV to 1870 MeV. The predicted strong
decay properties of $\Lambda(1800)S_{01}$ are shown in
Fig.~\ref{fig-s8}. From the figure, we can see that the strong
decays of $\Lambda(1800)S_{01}$ are dominated by the $\bar{K}N$,
$\eta \Lambda$ and $\Sigma \pi$ decay modes, while the
$\Sigma(1385)\pi$ channel also contributes significantly.
It is found that our predicted
strong decay properties of $\Lambda(1800)S_{01}$ are compatible with
the data of ALSTON (see
Tab.~\ref{w180})~\cite{AlstonGarnjost:1977rs}. More experimental
measurements of $\Lambda(1800)S_{01}$ are needed.
As a whole, configuration mixing is needed to understand the
strong decay properties of the $S$-wave resonances
$\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$ and
$\Lambda(1800)S_{01}$.
\begin{figure}[ht]
\centering \epsfxsize=9.0 cm \epsfbox{1800.eps} \caption{The strong
decay properties of $\Lambda(1800)S_{01}$, which is taken as a mixed
state in Eq.(\ref{mixs}).}\label{fig-s8}
\end{figure}
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) of
$\Lambda(1800)S_{01}$ compared with the experimental data from
ALSTON~\cite{AlstonGarnjost:1977rs}. We set the mass of
$\Lambda(1800)S_{01}$ to M=1725 MeV, which is the value observed
by ALSTON.} \label{w180}
\begin{tabular}{|c|c|c|c|c|}\hline\hline
Channel&$N\bar{K}$&$\Sigma\pi$&$\Lambda\eta$&$\Sigma(1385)\pi$\\
\hline
$\Gamma^{th}_i$&$51.1$&$39.5$&$56.1$&$13.2$ \\
\hline
$\Gamma^{exp}_{i}$&$52\pm 9$&...&...&... \\
\hline\hline
\end{tabular}
\end{table}
\paragraph{$D$-wave states}
We then further study whether configuration mixing is
necessary to explain the strong decays of the well-established
$D$-wave resonances $\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$.
If $\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$ are pure
states, they should be classified as the
$|70,^21,1,1,\frac{3}{2}^-\rangle$ and
$|70,^48,1,1,\frac{3}{2}^-\rangle$ configurations, respectively, in
the constituent quark model.
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) and
partial decay width ratios of $\Lambda(1520)D_{03}$ as a pure state
$|70,^21,1,1,\frac{3}{2}^-\rangle$ compared with the experimental data
from PDG.} \label{w150}
\begin{tabular}{|c|c|c|c|c|c|}\hline\hline
Channel&$\Gamma^{th}_i$&$\Gamma^{th}_{total}$&$\Gamma^{exp}_{total}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{th}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{exp}$\\
\hline
$\Sigma\pi$&$10.7$&$14.5$&$15.6\pm1.0$&$0.74$&$0.42\pm0.01$\\
$NK$&$3.81$&&&$0.26$&$0.45\pm0.01$\\
\hline\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) and
partial decay width ratios of $\Lambda(1690)D_{03}$ as a pure state
of $|70,^28,1,1,\frac{3}{2}^-\rangle$ compared with the experiment
data from PRD.} \label{w1690}
\begin{tabular}{|c|c|c|c|c|c|}\hline\hline
Channel&$\Gamma^{th}_i$&$\Gamma^{th}_{total}$&$\Gamma^{exp}_{total}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{th}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{exp}$\\
\hline
$\Sigma\pi$&$9.74$&$117.2$&$50\sim70(\approx60)$&$0.08$&$0.20\sim0.40$\\
$NK$&$58.31$&&&$0.50$&$0.20\sim0.30$\\
$\Lambda\eta$&$0.001$&&&$0.00$&$\cdot\cdot\cdot$\\
$\Sigma(1385)\pi$&$49.1$&&&$0.42$&$\cdot\cdot\cdot$\\
\hline\hline
\end{tabular}
\end{table}
Firstly, we study the decay properties of $\Lambda(1520)D_{03}$ and
$\Lambda(1690)D_{03}$ as pure states. The predictions compared with
the data are listed in Tabs.~\ref{w150} and ~\ref{w1690},
respectively.
From Tab.~\ref{w150}, we find that as a pure state the strong decay
coupling of $\Lambda(1520)D_{03}$ to $\Sigma\pi$ is overestimated.
However, the strong coupling of $\Lambda(1520)D_{03}$ to $N\bar{K}$
is underestimated, which is also found in the $\bar{K}N$
scattering~\cite{Zhong:2013oqa}.
Considering $\Lambda(1690)D_{03}$ as the pure state
$|70,^48,1,1,\frac{3}{2}^-\rangle$, from Tab.~\ref{w1690} we find
that the theoretical predictions are inconsistent with the
experimental observations. The predicted total decay width is much
larger than the data. In addition, the partial decay width ratio for
$\Sigma\pi$ is too small, while that for $N\bar{K}$ is too large,
compared with the data. Thus, as pure states, the strong decay
properties of $\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$ cannot
be reasonably understood.
For these reasons, it is natural for us to take $\Lambda(1520)$
and $\Lambda(1690)$ as two mixed states of
$|70,^21,1,1,\frac{3}{2}^-\rangle$,
$|70,^28,1,1,\frac{3}{2}^-\rangle$ and
$|70,^48,1,1,\frac{3}{2}^-\rangle$. Using the CKM matrix
method again, and fitting the strong decay data of $\Lambda(1690)$,
we obtain
\begin{equation}\label{mix2}
\left(\begin{array}{c}|\Lambda(1520)\frac{3}{2}^-\rangle\cr |\Lambda(1690)\frac{3}{2}^-\rangle\cr
|\Lambda\frac{3}{2}^-\rangle_3\cr\end{array}\right)=\left(\begin{array}{ccc}
0.94&0.34&0.09\cr
0.31&-0.92&0.26\cr
0.17&-0.21&-0.96
\end{array}\right)\left(\begin{array}{c}|70,^21\rangle\cr|70,^28\rangle\cr|70,^48\rangle\cr
\end{array}\right).
\end{equation}
From Eq.(\ref{mix2}), it is seen that $\Lambda(1690)$ has sizable
components of $|70,^21\rangle$ ($\sim 9\%$) and $|70,^48\rangle$
($\sim 7\%$) in addition to the main component $|70,^28\rangle$ ($\sim
85\%$). The predicted strong decay properties of
$\Lambda(1690)D_{03}$ compared with the data are listed in
Tab.~\ref{mix80}, where we find that with configuration mixing
effects the strong decays of $\Lambda(1690)D_{03}$ can be well
described. It should be emphasized that $\Lambda(1690)$ has a very
weak coupling to $\Lambda\eta$, even though this coupling is drastically
enhanced by the configuration mixing effects, which
explains why the contribution of $\Lambda(1690)D_{03}$
to the reaction $K^-p\rightarrow\Lambda\eta$ is tiny even though
$\Lambda(1690)D_{03}$ has a large $C_R$-factor.
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) and
partial decay width ratios of $\Lambda(1520)$ as a mixed state
compared with the experimental data from PDG.} \label{m150}
\begin{tabular}{|c|c|c|c|c|c|}\hline\hline
Channel&$\Gamma^{th}_i$&$\Gamma^{th}_{total}$&$\Gamma^{exp}_{total}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{th}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{exp}$\\
\hline
$\Sigma\pi$&$7.0$&$13.5$&$15.6\pm1.0$&$0.52$&$0.42\pm0.01$\\
$NK$&$6.2$&&&$0.46$&$0.45\pm0.01$\\
$\Sigma(1385)\pi$&$0.3$&&&$0.02$&$\cdot\cdot\cdot$\\
\hline\hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering \epsfxsize=9.0 cm \epsfbox{thethirdstate.eps}
\caption{The strong decay properties of
$|\Lambda\frac{3}{2}^-\rangle_3$ as a counterpart of
$\Lambda(1690)$.}\label{fig-dw}
\end{figure}
Furthermore, with the mixing scheme determined in Eq.(\ref{mix2}),
we study the strong decay of $\Lambda(1520)D_{03}$. The predicted
results compared with the data are listed in Tab.~\ref{m150}, where
we find that both the total decay width and the partial decay width
ratios are in good agreement with the data. The $N\bar{K}$ branching
ratio is about a factor of 2 larger than that derived in the
$SU(6)\otimes O(3)$ limit, which is consistent with our previous
analysis of the $\bar{K}N$ scattering in~\cite{Zhong:2013oqa}. From
Eq.(\ref{mix2}), we can see that the main component of
$\Lambda(1520)D_{03}$ is still the $|70,^21\rangle$ configuration
($\sim 88\%$), while it contains a significant octet component of
$|70,^28\rangle$ ($\sim 12\%$).
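The same squared-amplitude cross-check used for the $S$-wave sector (again our own illustrative sketch) applies to the $D$-wave mixing matrix of Eq.~(\ref{mix2}): squaring the quoted amplitudes reproduces the component fractions given in the text, and each row is normalized up to the two-decimal rounding of the entries:

```python
# Cross-check of the D-wave mixing matrix quoted in Eq. (mix2): squared
# amplitudes should reproduce the component fractions given in the text,
# and each row should be (approximately) normalized.
V = [[0.94,  0.34,  0.09],   # Lambda(1520)
     [0.31, -0.92,  0.26],   # Lambda(1690)
     [0.17, -0.21, -0.96]]   # third (missing) D-wave state

lam1690 = [x * x for x in V[1]]
assert abs(lam1690[0] - 0.09) < 0.01    # |70,^2 1> fraction ~9%
assert abs(lam1690[1] - 0.85) < 0.01    # |70,^2 8> fraction ~85%
assert abs(lam1690[2] - 0.07) < 0.01    # |70,^4 8> fraction ~7%

lam1520 = [x * x for x in V[0]]
assert abs(lam1520[0] - 0.88) < 0.01    # |70,^2 1> fraction ~88%
assert abs(lam1520[1] - 0.12) < 0.01    # |70,^2 8> fraction ~12%

# Rows are normalized only up to the two-decimal rounding of the entries.
for row in V:
    assert abs(sum(x * x for x in row) - 1.0) < 0.02
```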
\begin{table}[ht]
\caption{The predicted total and partial decay widths (MeV) and
partial decay width ratios of $\Lambda(1690)$ as a mixed state
compared with the experimental data from PDG.}
\label{mix80}
\begin{tabular}{|c|c|c|c|c|c|}\hline\hline
Channel&$\Gamma^{th}_i$&$\Gamma^{th}_{total}$&$\Gamma^{exp}_{total}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{th}$&$\frac{\Gamma_i}{\Gamma_{total}}|_{exp}$\\
\hline
$\Sigma\pi$&$27.5$&$70.6$&$50\sim70(\approx60)$&$0.39$&$0.20\sim0.40$\\
$NK$&$21.4$&&&$0.30$&$0.20\sim0.30$\\
$\Lambda\eta$&$0.01$&&&$0.00$&$\cdot\cdot\cdot$\\
$\Sigma(1385)\pi$&$21.6$&&&$0.30$&$\cdot\cdot\cdot$\\
\hline\hline
\end{tabular}
\end{table}
Finally, we give our predictions for the third $D$-wave resonance
$|\Lambda\frac{3}{2}^-\rangle_3$, which is still not established
experimentally. According to the quark model prediction, its mass is
around $1800$
MeV~\cite{Isgur78,Isgur:1977ky,Capstick:1986bm,Schat:2001xr}.
Varying its mass from 1700 MeV to 1900 MeV, we calculate the strong
decays of $|\Lambda\frac{3}{2}^-\rangle_3$. Our predictions are
shown in Fig.~\ref{fig-dw}. It is found that the strong decays
of $|\Lambda\frac{3}{2}^-\rangle_3$ are dominated by
$\Sigma(1385)\pi$ and $\Sigma\pi$, while the $N\bar{K}$ and $\Lambda
\eta$ branching ratios are negligibly small. Thus, we suggest that
our experimental colleagues search for this missing $D$-wave state in the
$\Sigma(1385)\pi$ and $\Sigma\pi$ channels.
In a word, configuration mixing is also needed to understand the
strong decay properties of the $D$-wave resonances
$\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$.
\subsubsection{Interpretation of $C_R$ with configuration mixing}
If the configuration mixing effects are included, the
single-resonance-excitation amplitude given in Eq.~(\ref{pt}) should
be rewritten as
\begin{eqnarray}
\mathcal{O}(n,l,J)=\sum_Rg_R'\mathcal{O}(n,l,J)\equiv\sum_RC_Rg_R\mathcal{O}(n,l,J),
\end{eqnarray}
where $g_R'$ ($g_R$) stands for the relative strength of a
single-resonance with (without) configuration mixing effects in the
partial amplitude $\mathcal{O}(n,l,J)$. The $C_R$ parameters can be
derived by
\begin{equation}\label{cp}
C_R=\frac{g'_R}{g_R}.
\end{equation}
In the following work, we study the $C_R$ parameters of the
important resonances $\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$,
$\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$ for the
$K^-p\rightarrow \eta\Lambda$ process.
Taking $\Lambda(1405)S_{01}$, $\Lambda(1670)S_{01}$,
$\Lambda(1520)D_{03}$ and $\Lambda(1690)D_{03}$ as pure states in
the constituent quark model, we can derive the couplings of the
transition amplitudes for these resonances, which are given by
\begin{eqnarray}
R_{\Lambda(1405)}&=&-\frac{\sqrt{3}}{108}(\sqrt{2}\sin\phi_P+\cos\phi_P),\\
R_{\Lambda(1670)}&=&-\frac{\sqrt{3}}{108}(\sqrt{2}\sin\phi_P-\cos\phi_P),\\
R_{\Lambda(1520)}&=&-\frac{\sqrt{3}}{54}(\sqrt{2}\sin\phi_P+\cos\phi_P),\\
R_{\Lambda(1690)}&=&-\frac{\sqrt{3}}{54}(\sqrt{2}\sin\phi_P-\cos\phi_P),
\end{eqnarray}
where $\phi_P$ is the $\eta$-$\eta'$ mixing angle. Then the
$g_R$ parameters for these states can be obtained by
\begin{eqnarray}
g_{\Lambda(1405)~ \mathrm{or}~
\Lambda(1670)}&=&\frac{R_{\Lambda(1405)}~
\mathrm{or}~R_{\Lambda(1670)}}{R_{\Lambda(1405)}+R_{\Lambda(1670)}},\\
g_{\Lambda(1520)~ \mathrm{or}~
\Lambda(1690)}&=&\frac{R_{\Lambda(1520)}~
\mathrm{or}~R_{\Lambda(1690)}}{R_{\Lambda(1520)}+R_{\Lambda(1690)}}.
\end{eqnarray}
Considering the configuration mixing effects, the wave functions of
the $S$- and $D$-wave states $\Lambda(1405)S_{01}$,
$\Lambda(1670)S_{01}$, $\Lambda(1520)D_{03}$ and
$\Lambda(1690)D_{03}$ can be generally written as
\begin{eqnarray}
|\Lambda(1405)\rangle=a_1 |70,^21\rangle_{S}+b_1
|70,^28\rangle_{S}+c_1|70,^48\rangle_{S},\\
|\Lambda(1670)\rangle=a_2 |70,^21\rangle_S+b_2 |70,^28\rangle_S+c_2
|70,^48\rangle_S,\\
|\Lambda(1520)\rangle=a_3 |70,^21\rangle_{D}+b_3
|70,^28\rangle_{D}+c_3|70,^48\rangle_{D},\\
|\Lambda(1690)\rangle=a_4 |70,^21\rangle_D+b_4 |70,^28\rangle_D+c_4
|70,^48\rangle_D,
\end{eqnarray}
where $a_i$, $b_i$ and $c_i$ ($i=1,2,3,4$) have been determined in
Eqs.~(\ref{mix1}) and (\ref{mix2}). We can then derive the couplings
of the transition amplitudes for these mixed states, which are
\begin{eqnarray}
R'_{\Lambda(1405)}&=&-\frac{\sqrt{3}}{108}(\sqrt{2}\sin\phi_P+\cos\phi_P)(a_1^2+a_1b_1)\nonumber\\
&&-\frac{\sqrt{3}}{108}(\sqrt{2}\sin\phi_P-\cos\phi_P)(b_1^2+a_1b_1),\\
R'_{\Lambda(1670)}&=&-\frac{\sqrt{3}}{108}(\sqrt{2}\sin\phi_P+\cos\phi_P)(a_2^2+a_2b_2)\nonumber\\
&&-\frac{\sqrt{3}}{108}(\sqrt{2}\sin\phi_P-\cos\phi_P)(b_2^2+a_2b_2),\\
R'_{\Lambda(1520)}&=&-\frac{\sqrt{3}}{54}(\sqrt{2}\sin\phi_P+\cos\phi_P)(a_3^2+a_3b_3)\nonumber\\
&&-\frac{\sqrt{3}}{54}(\sqrt{2}\sin\phi_P-\cos\phi_P)(b_3^2+a_3b_3),\\
R'_{\Lambda(1690)}&=&-\frac{\sqrt{3}}{54}(\sqrt{2}\sin\phi_P+\cos\phi_P)(a_4^2+a_4b_4)\nonumber\\
&&-\frac{\sqrt{3}}{54}(\sqrt{2}\sin\phi_P-\cos\phi_P)(b_4^2+a_4b_4).
\end{eqnarray}
Finally, we obtain the relative strength parameters $g_R'$ for these
mixed states:
\begin{eqnarray}
g_{\Lambda(1405)~ \mathrm{or}~
\Lambda(1670)}'&=&\frac{R_{\Lambda(1405)}'~
\mathrm{or}~R_{\Lambda(1670)}'}{R_{\Lambda(1405)}'+R_{\Lambda(1670)}'},\\
g_{\Lambda(1520)~ \mathrm{or}~
\Lambda(1690)}'&=&\frac{R_{\Lambda(1520)}'~
\mathrm{or}~R_{\Lambda(1690)}'}{R_{\Lambda(1520)}'+R_{\Lambda(1690)}'}.
\end{eqnarray}
With these extracted $g_R$ and $g_R'$ parameters, the $C_R$
parameters can be easily worked out according to Eq.~(\ref{cp}). It
is found that $C_R$ is a function of the $\eta$-$\eta'$ mixing angle
$\phi_P$, which might be in the range $\phi_P\simeq (30^\circ,
47^\circ)$~\cite{Ke:2009mn,DiDonato:2011kr}. Considering the
uncertainties of $\phi_P$, we plot $C_R$ as a function of $\phi_P$
in Fig.~\ref{fig-cr}. From the figure, one can find that the $C_R$
parameters for $\Lambda(1670)S_{01}$ and $\Lambda(1690)D_{03}$ are
sensitive to the $\eta$-$\eta'$ mixing angle $\phi_P$. Taking
a small $\eta$-$\eta'$ mixing angle $\phi_P=35^\circ$, we obtain a
large value $C_{\Lambda(1670)}\simeq 34$ for $\Lambda(1670)S_{01}$,
which naturally explains the large contributions of
$\Lambda(1670)S_{01}$ found in the $K^-p\rightarrow\Lambda\eta$
process.
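To make the construction of $C_R$ concrete, the following minimal sketch (our own illustration; the identification of $(a_i,b_i)$ with the rows of Eqs.~(\ref{mix1}) and (\ref{mix2}) and the overall sign conventions are assumptions, not taken from the paper) implements Eq.~(\ref{cp}) for the $S$-wave pair and verifies only the structural limit $C_R\to 1$ when the mixing is switched off:

```python
import math

def coupling(phi_deg, a, b, scale=108.0):
    """Mixed-state coupling R'_R; for (a, b) = (1, 0) or (0, 1) it reduces
    to the pure singlet-like / octet-like couplings R_R given in the text."""
    s = math.sin(math.radians(phi_deg))
    c = math.cos(math.radians(phi_deg))
    plus, minus = math.sqrt(2) * s + c, math.sqrt(2) * s - c
    return -math.sqrt(3) / scale * (plus * (a * a + a * b)
                                    + minus * (b * b + a * b))

def C_1670(phi_deg, mix_1405, mix_1670):
    """C_R = g'_R / g_R of Eq. (cp) for the Lambda(1670) S-wave pair."""
    r1405, r1670 = coupling(phi_deg, 1, 0), coupling(phi_deg, 0, 1)
    g = r1670 / (r1405 + r1670)                  # pure-state g_R
    rp1405 = coupling(phi_deg, *mix_1405)
    rp1670 = coupling(phi_deg, *mix_1670)
    gp = rp1670 / (rp1405 + rp1670)              # mixed-state g'_R
    return gp / g

# Structural check: with mixing switched off, C_R = 1 for any phi_P in the
# quoted range (30 deg, 47 deg).
for phi in (30, 35, 40, 47):
    assert abs(C_1670(phi, (1, 0), (0, 1)) - 1.0) < 1e-12
```

Fed with the actual mixing coefficients, the same routine exhibits the strong $\phi_P$ dependence shown in Fig.~\ref{fig-cr}; since the resulting magnitude depends on the assumed sign conventions, only the mixing-off limit is asserted here.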
Using the determined $\eta$-$\eta'$ mixing angle $\phi_P=35^\circ$,
we also obtain a large value of $C_{\Lambda(1690)}\simeq 39$ for
$\Lambda(1690)D_{03}$. It should be mentioned that although the
configuration mixing effects have largely enhanced the contribution
of $\Lambda(1690)$ in the $K^-p\rightarrow\Lambda\eta$ process
(by about a factor of $39$), the contribution of $\Lambda(1690)D_{03}$
in the reaction is still negligibly small because of its very weak coupling
to $\eta \Lambda$.
As a whole, the configuration mixing effects are crucial to
understand the strong decay properties of the low-lying negative-parity
$\Lambda$ resonances. These resonances are most likely mixed states
of different configurations, which is consistent with the
predictions of large-$N_c$ QCD~\cite{Schat:2001xr}. Considering
configuration mixing effects, we can reasonably explain the weak
coupling of $\Lambda(1670)S_{01}$ to $\bar{K}N$, its strong coupling
to $\eta \Lambda$, and the large strength
parameter $C_{\Lambda(1670)}\simeq 34$. The contribution of
$\Lambda(1690)D_{03}$ to the $K^-p\rightarrow\Lambda\eta$ process is
too small to give a bowl structure in the DCS, even if we consider
the configuration mixing effects in these $D$-wave states.
\begin{figure}[ht]
\centering \epsfxsize=9.0 cm \epsfbox{tetechange.eps} \caption{The
coupling strength parameter, $C_R$, as a function of the
$\eta$-$\eta'$ mixing angle $\phi_P$. } \label{fig-cr}
\end{figure}
\section{Summary}\label{summ}
In this work, we have studied the low-energy reaction
$K^-p\rightarrow\Lambda\eta$ with a chiral quark model approach. A
reasonable description of the measurements has been achieved. It is
found that $\Lambda(1670)S_{01}$ dominates the reaction in
the low-energy region, and the $t$- and $u$-channel backgrounds
also play crucial roles. Slight contributions of
$\Lambda(1405)S_{01}$ are found; however, $\Lambda(1405)S_{01}$ does
not obviously affect the shapes of the differential cross sections.
No obvious roles of the $D$-wave states $\Lambda(1520)D_{03}$ and
$\Lambda(1690)D_{03}$ are found in the reaction.
Furthermore, from the study of the $K^-p\rightarrow\Lambda\eta$
process, we have extracted the strong interaction properties of
$\Lambda(1670)S_{01}$. We find that a much larger amplitude of
$\Lambda(1670)S_{01}$ in the reaction is needed, about 34
times (i.e., $C_{S_{01}(1670)}\simeq 34$) larger than that derived
from the symmetric quark model. Combining this with our previous study
in~\cite{Zhong:2013oqa}, we conclude that $\Lambda(1670)S_{01}$
should have a much weaker coupling to $\bar{K}N$, and a much
stronger coupling to $\eta \Lambda$, than predicted in the
symmetric quark model.
To understand these strong interaction properties of
$\Lambda(1670)S_{01}$, we further study the strong decay properties
of the low-lying negative-parity $\Lambda$ resonances. It is found
that the configuration mixing effects are crucial to understand the
strong decay properties of these resonances, which are most likely
mixed states of different configurations. Considering configuration
mixing effects, we can reasonably explain the strong interaction
properties of $\Lambda(1670)S_{01}$ extracted from the
$K^-p\rightarrow\Lambda\eta$ reaction.
The data of the $K^-p\rightarrow\Lambda\eta$ process show that there
seems to exist a bowl structure in the DCS in a narrow energy
region near the $\eta\Lambda$ threshold, which would indicate a strong
$D$-wave contribution there. However, the contribution of
$\Lambda(1690)D_{03}$ to the $K^-p\rightarrow\Lambda\eta$ process is too
small to give a bowl structure in the DCS. Although with the
configuration mixing effects in these $D$-wave states the amplitude
of $\Lambda(1690)D_{03}$ in the reaction could be enhanced by a factor
of $\sim38$, its contribution is still tiny
because of the very weak coupling of $\Lambda(1690)D_{03}$ to
$\eta\Lambda$. Based on the bowl structures in the DCS, Liu and Xie
believed there might exist an exotic $D$-wave state
$\Lambda(1669)D_{03}$ with a very narrow width of $\Gamma=1.5$ MeV.
To clarify whether there are contributions of a narrow $D$-wave
state or not, more accurate measurements are needed.
As a byproduct, we have also predicted the strong decay properties
of the unestablished $D$-wave state
$|\Lambda\frac{3}{2}^-\rangle_3$. This resonance mainly decays into
the $\Sigma(1385)\pi$ and $\Sigma\pi$ channels. We hope that
experimentalists can search for this missing $D$-wave state in the
$\Sigma(1385)\pi$ and $\Sigma\pi$ channels.
\section*{ Acknowledgements }
This work is supported, in part, by the National Natural Science
Foundation of China (Grants No. 11075051 and No. 11375061), Program
for Changjiang Scholars and Innovative Research Team in University
(PCSIRT, Grant No. IRT0964), the Program Excellent Talent Hunan
Normal University, the Hunan Provincial Natural Science Foundation
(Grants No. 11JJ7001 and No. 13JJ1018), and the Hunan Provincial
Innovation Foundation For Postgraduate.
\section{Introduction}
The (re-)acceleration mechanism of relativistic particles in the intracluster medium (ICM) of galaxy clusters is still poorly understood, as is the origin of large-scale magnetic fields in such environments.\\
One of the spectacular manifestations of these components is represented by diffuse radio sources, known as radio halos and relics, respectively hosted at the centre and in the outskirts of galaxy clusters. These sources are faint (${\rm S_{\nu}\simeq 0.1-1\,\muup Jy/arcsec^{2}}$ at 1.4\,GHz) synchrotron sources, typically with steep power-law radio spectra (S$_{\nu}\sim\nu^{-\alpha}$, with $\alpha\sim$1), which extend over Mpc scales \citep[see][for a recent review]{van19}. Radio relics are usually associated with shock waves propagating in the ICM as a consequence of galaxy cluster merging phenomena. This coincidence \citep{enss98} supports diffusive shock acceleration \citep[DSA;][]{drury,bland} as a mechanism of re-acceleration of cosmic-ray particles up to the $\sim$GeV energies required to explain the observed emission.
According to DSA, particles scatter from magnetic field inhomogeneities and they cross the shock wave back and forth gaining energy at each crossing. The DSA mechanism generates a power-law energy distribution and, if the cooling of the particles is balanced by the injection of relativistic particles, a power-law behaviour of the flux density as a function of frequency. This trend has been observed in a large number of radio relics suggesting that these sources are produced by the DSA mechanism.
Other observed characteristics of radio relics are a gradual spectral index steepening toward the cluster centre, indicating that particles are ageing in the post-shock region, and a high degree of polarisation across the relic, which suggests that the magnetic field has been compressed in a thin layer.
Indeed, shock compression can amplify the magnetic field component perpendicular to the shock direction. According to the models, the different mechanisms of magnetic field amplification can result in differences in the observed emission properties such as the radio spectrum, the surface brightness, and the spectral index profiles \citep[see for example the work of][]{donnert16}. \\
Several observations have challenged the DSA mechanism. These observations have revealed that for some relics the derived X-ray Mach numbers are low, namely weak shocks with M$\leq$3 \citep{brunetti,van19}. For such weak shocks, the DSA mechanism is not efficient enough to accelerate particles up to GeV energies from the thermal pool \citep[see also][]{botteon}. Only recently, a new class of radio relics with low surface brightness and emissivity compatible with the standard DSA scenario may have been discovered at low frequency using LOFAR \citep{locatelli}.
In some cases, the Mach number inferred from X-ray observations and the one obtained from radio spectra are not in agreement \citep[see for example][]{van16}. In addition, some authors have reported the presence of a break in the spectrum, based on interferometric observations, which is inconsistent with DSA \citep{stroe16,trasatti15}. \\
All these findings motivate the search for models complementary or alternative to DSA to explain the observed emission.
In particular, it has been proposed that nearby active galactic nuclei can inject cosmic-ray electrons \citep{enss01}, a scenario confirmed in some cases \citep{bonafede14,van17,b19}, or that the injection is caused by powerful galactic winds \citep{volk}.
Spherically-expanding shocks with fossil particle populations \citep{kang_ciza} re-accelerated through DSA can generate curved spectra, while
a non-uniform magnetic field in the relic region \citep{donnert16} could cause a steepening in the radio spectrum.
Multiple shocks along the line-of-sight \citep[][and references therein]{hong15} can explain the inconsistency between X-ray and radio derived Mach numbers. However, the modelling proposed by \citet{hong15} and \citet{donnert16} in the framework of the DSA does not solve the low-efficiency problem, which makes the acceleration of particles from the thermal pool unrealistic.\\
The inconsistency between the X-ray and radio derived Mach numbers \citep[see for example][for radio and X-ray derived Mach numbers]{van10,aka}, and the steepening in the flux density spectrum \citep{stroe16}, were also observed for the northern relic of the galaxy cluster CIZA J2242.8+5301.
This cluster (redshift z=0.1921) was discovered in the X-rays by \citet{koc07}, while its diffuse radio sources, a central radio halo and a pair of opposite radio relics, were discovered by \citet{van10}.
Its northern relic is one of the most famous and extensively studied radio relics \citep{van10,stroe13,stroe14,stroe16,loi17,hoang17,kier17,digennaro18} and it is often considered a textbook example of a radio relic because of its $\sim$2 Mpc arc-shaped morphology and uniform brightness along its length. \\
Interferometric measurements of the northern relic at 15.85\,GHz and 30\,GHz \citep{stroe16}, obtained with the Arcminute Microkelvin Imager \citep[AMI,][]{ami} and the Combined Array for Research in Millimeter-wave Astronomy \citep[CARMA,][]{carma} telescopes, triggered the search for alternative physics, as they indicated a steepening of the spectrum which is not compatible with the standard DSA model. Nevertheless, it should be noted that, especially at high frequency, interferometric observations can suffer from the missing zero-spacing problem. While single-dish telescopes can retain angular structures as large as the observed area, interferometers can only detect structures up to a maximum angular scale set by their minimum baseline, which, unlike the single-dish case, is never zero. This means that interferometric observations are not able to fully recover the flux density of very extended sources. In particular, the AMI small array (SA) and large array (LA) observations at 15.85\,GHz have largest angular scales of $\theta_{LAS}\sim 11$\,arcmin and $\theta_{LAS}\sim 2$\,arcmin, respectively. CARMA at 30\,GHz was blind to scales above $\theta_{LAS}\sim$3.5\,arcmin. The northern relic covers an angular scale larger than 11\,arcmin. Therefore, even though \citet{stroe16} tried to correct for this effect, obtaining reliable flux density measurements from these telescopes is a hard task.\\
\citet{kier17} excluded a possible steepening between 153\,MHz and 8.35\,GHz and modelled the radio relic with a power law with a spectral index $\alpha\sim$0.9, while DSA predicts $\alpha>1$.
In a previous work \citep{loi17}, we established, using data between $\sim$300\,MHz and $\sim$8\,GHz, that the CIZA J2242.8+5301 northern relic spectral behaviour was consistent with the DSA model in this frequency range, assuming a continuous injection of relativistic particles. From the spectral modelling, we also derived a Mach number consistent within the errors with the X-ray estimate \citep{aka}. However, the uncertainty about the relic behaviour at higher frequencies remained.\\
In this work, we show the results of a large observing program (code: 72-19, P.I. Francesca Loi) conducted at the Sardinia Radio Telescope facility \citep[SRT,][]{bolli,prandoni}, aimed at observing this famous relic at 18.6\,GHz with the 7-feed K-band receiver of this single-dish telescope in full-Stokes mode. We also present 14.25\,GHz data acquired with the Effelsberg single-dish telescope (code: 73-19, P.I. Rainer Beck).\\
In Sect. 2, we describe the observational set up, the data reduction and the imaging procedure for both total intensity and polarised intensity images.
In Sect. 3, we show the resulting total intensity image and the measurement of the relic flux density, and compare these values with literature data. In Sect. 4, we discuss the Sunyaev-Zel'dovich effect \citep[SZ,][]{sz1,sz2} which could affect our measurements and, based on the expected/observed contamination, we give a rough estimate of the magnetic field in the relic region.
In Sect. 5, we show the polarised intensity image and we discuss the polarimetric properties of the detected sources.
In Sect. 6, we discuss our findings and draw the conclusions.\\
Throughout this paper, we assume a $\Lambda$CDM cosmology with ${\rm H_0 = 71\,km\cdot s^{-1}\cdot Mpc^{-1}}$, ${\rm \Omega_m=0.27}$, and ${\rm \Omega_{\Lambda}=0.73}$. At the redshift of CIZA J2242.8+5301 (z=0.1921), 1 arcmin corresponds to 189.9\,kpc.
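The quoted angular scale can be cross-checked from the assumed cosmology. A minimal sketch, using the stated parameters (H$_0$ = 71, $\Omega_m$ = 0.27, $\Omega_\Lambda$ = 0.73); the function name and the trapezoidal integration are illustrative choices, not from the paper:

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def kpc_per_arcmin(z, h0=71.0, om=0.27, ol=0.73, n=2000):
    """Proper transverse scale in kpc per arcmin at redshift z (flat LCDM)."""
    e = lambda zp: math.sqrt(om * (1.0 + zp)**3 + ol)  # dimensionless Hubble rate
    h = z / n
    # comoving distance: (c/H0) * integral of dz'/E(z'), trapezoidal rule
    dc = sum(0.5 * (1.0/e(i*h) + 1.0/e((i+1)*h)) * h for i in range(n)) * C_KM_S / h0
    da = dc / (1.0 + z)                            # angular diameter distance, Mpc
    return da * math.radians(1.0 / 60.0) * 1.0e3   # Mpc per radian -> kpc per arcmin

print(round(kpc_per_arcmin(0.1921), 1))  # ~189.9
```

This reproduces the 189.9 kpc/arcmin figure quoted above.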
\section{Observations and data reduction}
\subsection{SRT observations}
We used the 7-feed K-Band receiver of the SRT to observe at a central frequency of 18.6\,GHz inside a bandwidth of 1200\,MHz. These data were acquired with the SArdinia Roach2-based Digital Architecture for Radio Astronomy back end \citep[Sardara,][]{melis} using 1500 MHz bandwidth and 1024 channels of 1.46 MHz width in full-Stokes mode. \\
The observations were carried out between January and April 2020, for a total of 240 hours divided into 27 slots of about 6-12 hours each. A summary of the observing program is reported in Table \ref{tab:obs}.
\begin{table}
\centering
\caption{Details about the SRT observations.}
\label{tab:obs}
\begin{tabular}{llll}
\hline
Date & Obs. time & Calibrators & Number of maps\\
\hline
12 Jan 2020 & 8 hrs & 3C286, 3C84 & 8 RA + 8 Dec\\
24 Jan 2020 & 7 hrs & 3C286, 3C84 & 11 RA + 10 Dec\\
01 Feb 2020 & 8 hrs & 3C286, 3C84 & 10 RA + 10 Dec \\
04 Feb 2020 & 10 hrs & 3C286, 3C84 & 11 RA + 11 Dec\\
06 Feb 2020 & 8 hrs & 3C147, 3C84, 3C138 & 8 RA + 8 Dec \\
07 Feb 2020 & 8 hrs & 3C286, 3C84 & 11 RA + 11 Dec \\
08 Feb 2020 & 8 hrs & 3C286, 3C84 & 8 RA + 8 Dec \\
09 Feb 2020 & 8 hrs & 3C286, 3C84 & 11 RA + 11 Dec \\
11 Feb 2020 & 11 hrs & 3C147, 3C84, 3C138 & 6 RA + 6 Dec\\
12 Feb 2020 & 8 hrs & 3C286, 3C84 & 10 RA + 10 Dec \\
26 Feb 2020 & 11 hrs & 3C286, 3C84 & 13 RA + 13 Dec \\
27 Feb 2020 & 13 hrs & 3C286, 3C84 & 17 RA + 17 Dec \\
28 Feb 2020 & 13 hrs & 3C286, 3C84 & 17 RA + 17 Dec \\
29 Feb 2020 & 7 hrs & 3C286, NGC7027 & 10 RA + 10 Dec \\
10 Mar 2020 & 6 hrs & 3C286, 3C84 & 9 RA + 9 Dec \\
14 Mar 2020 & 8 hrs & 3C286, 3C84 & 12 RA + 12 Dec \\
15 Mar 2020 & 8 hrs & 3C286, 3C84 & 11 RA + 11 Dec \\
19 Mar 2020 & 10 hrs & 3C147, 3C84, 3C138 & 14 RA + 14 Dec\\
24 Mar 2020 & 9 hrs & 3C147, 3C84, 3C138 & 11 RA + 11 Dec\\
25 Mar 2020 & 6 hrs & 3C48, 3C147 & 8 RA + 8 Dec\\
26 Mar 2020 & 7 hrs & 3C48, 3C84, 3C138 & 9 RA + 9 Dec \\
02 Apr 2020 & 8 hrs & 3C286, 3C84 & 11 RA + 11 Dec \\
03 Apr 2020 & 6 hrs & 3C48, 3C84, 3C138 & 5 RA + 5 Dec\\
04 Apr 2020 & 10 hrs & 3C286, 3C84 & 16 RA + 16 Dec \\
10 Apr 2020 & 8 hrs & 3C286, 3C84 & 12 RA + 14 Dec\\
14 Apr 2020 & 11 hrs & 3C147, 3C84, 3C138 & 11 RA + 11 Dec\\
16 Apr 2020 & 10 hrs & 3C286, 3C84 & 14 RA + 14 Dec \\
\hline
\end{tabular}
\end{table}
During each slot, we observed the primary calibrator 3C286 (or 3C147 if the former was not available) to calibrate the bandpass and the flux density scale. We performed sky dips to derive the trend of the system temperature with elevation during our observations. We then modelled the T$_{\rm sys}$ trend with the airmass model to obtain the zenithal opacity, $\tau$. The values of $\tau$ derived from the sky dips were generally in good agreement with those provided by the radiometer working at the SRT site \citep{buffa}.
We also used the calibrator 3C286 as a reference for the absolute polarisation position angle \citep[we assumed the values from][]{pb2013}. If 3C286 was not available, we used 3C138 instead. The sources 3C84 and NGC7027, which we considered to be completely unpolarised, were used to correct for the on-axis instrumental polarisation. The northern relic of CIZA J2242.8+5301 was observed with the on-the-fly strategy in the equatorial frame, covering an area of 21$\times$15\,arcmin$^2$ centred on the relic centre (R.A. 22h 42m 58s, Dec. +53d 07m 12s). To facilitate the removal of the scan noise, we acquired orthogonal on-the-fly maps along the RA and DEC directions. Individual sub-scans within these maps are separated by 15 arcsec, in order to sample the FWHM of the SRT beam with about 4 pixels along each direction. \\
We performed the data reduction and the imaging with the proprietary software package Single-dish Spectral-polarimetry Software \citep[SCUBE,][]{murgia}. We used a standardised pipeline for the calibration and imaging: we excised all the RFI at well-known frequencies and applied an automatic flagging procedure to remove the remaining RFI. We then determined the opacity, subtracted the calibrator baselines with a linear fit of each scan based on the first and last 10\% of the data, determined the bandpass solution, gridded the calibrators on a regular grid with a pixel size of 15 arcsec, and used the resulting images to fit a 2D Gaussian to determine the flux scale and the leakage terms.\\
Our primary goal is to image the emission of the relic and the point sources in the field-of-view of the target. Since we are not interested in retaining any large-scale foreground emission, we removed the baseline from the target data scan-by-scan by fitting a 2nd-order polynomial to the "cold-sky" regions devoid of both discrete sources and of the galaxy cluster extended emission (relics and halo). These cold-sky regions were identified using a mask created with the 1.4\,GHz SRT+WSRT image presented in \citet{loi17}, convolved with a beam FWHM of 1 arcmin. In this way, we removed the baselevel related to the receiver noise, the atmospheric emission, and the large-scale foreground sky emission.
We then imaged the spectral cubes using a regular grid with a pixel size of 15 arcsec and we averaged all the spectral channels to increase the signal-to-noise ratio. The images from all the observing slots were stacked together to reduce the noise level. We stacked the RA and DEC scans by mixing their stationary wavelet transform (SWT) coefficients \citep[see][]{murgia}. The de-stripping resulting from the SWT stacking is effective in isolating and removing the noise along the scan direction.
After the first stack was completed, we returned to the individual images using the higher signal-to-noise image as a reference model to flag all residual low-level RFI or small-scale atmospheric fluctuations that were not captured at the flagging and baseline removal stages. This refined flagging step significantly improved the quality of the images. In order to verify the consistency of the flux density scale calibration between all the different observing slots, we performed a self-calibration procedure using the cluster central source (R.A. 22h 42m 51.30s, Dec. +53d 04m 41.40s) as reference. We assumed that this source has a stable flux density of 8.8\,mJy and, by means of a 2D Gaussian fit to this source, we calculated a correction factor for each observing slot. By analysing the results of the self-calibration procedure, we deduced that the rms scatter of the flux density calibration scale through our project is about 10\%. This scatter can be taken as an estimate of our systematic uncertainty on the flux density calibration.\\
The polarised image has been obtained after correcting for the leakage term determined with 3C84 (or NGC7027). We calibrated the polarisation angle and fraction with 3C286 (or 3C138). To perform the imaging of the Q and U Stokes parameters, we followed the same steps described above for the total intensity. The images of the polarised intensity P and the polarisation angle $\Psi$ have been derived from the Stokes parameters as:
\begin{eqnarray}
\rm P & = & \sqrt{\rm Q^2+U^2},\\
\Psi & = & 0.5 \cdot \arctan{\rm \frac{U}{Q}} .
\end{eqnarray}
We also corrected the polarisation images for the Rician bias \citep[see][]{murgia}.\\
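The two relations above can be evaluated per pixel as in the following sketch (the actual SCUBE implementation is not reproduced here); `atan2` resolves the quadrant ambiguity of $\arctan(U/Q)$:

```python
import math

def pol_intensity(q, u):
    """Polarised intensity P = sqrt(Q^2 + U^2)."""
    return math.hypot(q, u)

def pol_angle_deg(q, u):
    """Polarisation angle Psi = 0.5 * arctan(U/Q), in degrees.
    atan2 keeps the correct quadrant (identical to arctan(U/Q) for Q > 0)."""
    return 0.5 * math.degrees(math.atan2(u, q))

# example: Q = 3, U = 4 gives P = 5 and Psi ~ 26.6 deg
print(pol_intensity(3.0, 4.0), round(pol_angle_deg(3.0, 4.0), 1))
```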
The maximum Rotation Measure observed for this relic \citep[see][]{loi17,kier17} is RM=-400\,rad\,m$^{-2}$. At 18.6\,GHz this would cause a rotation of the polarisation plane of only $\Delta\Psi\sim$6\,deg. Therefore, our signal is unlikely to undergo significant Faraday depolarisation and should closely reflect the intrinsic polarisation properties of the relic.
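The quoted rotation follows from $\Delta\Psi = {\rm RM}\,\lambda^2$; a quick numerical check with the values above:

```python
import math

C = 299792458.0  # speed of light in m/s

def faraday_rotation_deg(rm, freq_hz):
    """Rotation of the polarisation plane, Delta Psi = RM * lambda^2, in degrees."""
    lam = C / freq_hz  # observing wavelength in metres
    return math.degrees(rm * lam**2)

# RM = -400 rad/m^2 at 18.6 GHz
print(round(faraday_rotation_deg(-400.0, 18.6e9), 1))  # ~ -6 deg
```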
\begin{figure*}
\centering
\includegraphics[width=1.9\columnwidth]{ciza_I.pdf}
\caption{18.6\,GHz SRT total intensity image between 18\, and 19.2\,GHz. Contours start at 3$\sigma$ where $\sigma$=0.13\,mJy\,beam$^{-1}$ and increment with a $\sqrt{2}$ factor. Dashed contours are negative contours drawn at -3$\sigma$. The beam size is shown in the bottom left corner of the image, its FWHM corresponds to $\sim$0.9 arcmin.}
\label{fig:ciza}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{newfig2.png}
\caption{1.4\,GHz WSRT+SRT total intensity image from \citet{loi17} with the positive contours of Fig. \ref{fig:ciza}. The blue contour indicates the region where we estimated the flux density of the relic.}
\label{fig:ciza_cnt_box}
\end{figure}
\subsection{Effelsberg observations}
We also observed the northern relic of CIZA J2242.8+5301 with the new two-horn Ku-band receiver of the Effelsberg 100-m telescope, which provides two channels of 2.5\,GHz bandwidth each, centred on 14.25\,GHz and 16.75\,GHz. The system temperatures are 33\,K and 44\,K, respectively. In December 2019 and January 2020 we obtained 12 coverages of a field of 12$\times$9\,arcmin$^2$ by scanning ("on-the-fly") in alternating RA and DEC directions. The total on-source observation time was 7 hours. Data processing (RFI removal, baselevel corrections, and combination of the coverages in RA and DEC with the "basket weaving" technique) was performed with the NOD3 software package \citep{nod} using 3C147 or 3C48 as flux density calibrators. The final image at 14.25 GHz with a resolution of 49\,arcsec has a rms noise of 1\,mJy\,beam$^{-1}$. To increase the signal-to-noise ratio, we smoothed the image to 72\,arcsec.
In the second channel centred at 16.75\,GHz no significant signal could be
detected. The second horn is separated in azimuthal direction by 3.85\,arcmin
and can be used to reduce weather effects. As this requires scanning larger
areas in azimuthal direction and the weather was excellent, this method was
not used. The linear polarisation signal was also recorded, but was too weak
due to the relatively short observation time.
\begin{figure*}
\centering
\includegraphics[width=1.5\columnwidth]{eff_bar.pdf}
\caption{14.25\,GHz Effelsberg total intensity image between 13.0 and 15.5\,GHz. Positive contours (solid line) start at 3$\sigma$ where $\sigma=0.5$\,mJy\,beam$^{-1}$ and increase with a factor of $\sqrt{2}$ while negative contours at shown at -3$\sigma$ (dashed line). The beam size is shown in the bottom left corner, its FWHM corresponds to 72\,arcsec.}
\label{fig:eff}
\end{figure*}
\section{Total intensity results and analysis}
\subsection{SRT image and analysis}
Fig. \ref{fig:ciza} shows the resulting 18.6\,GHz SRT image obtained by averaging the data between 18\,GHz and 19.2\,GHz. The noise is 0.13\,mJy\,beam$^{-1}$ and the beam size is 0.9 arcmin. Solid contours are drawn from 3$\sigma$ increasing by a factor of $\sqrt{2}$, while dashed contours are the negative -3$\sigma$ contours.\\
We clearly detected the emission associated with the brightest radio sources in the field, including the northern relic, which extends over a length of $\sim$1.8\,Mpc with a deconvolved width ranging from $\sim$40\,kpc up to $\sim$160\,kpc. The relic surface brightness is not homogeneous along the arc, echoing the filamentary structure observed for the first time at 1.5 and 3\,GHz \citep{digennaro18}. \\
In the central part of the image, we can observe the two radio galaxies \citep[labelled D and E in Fig. 7 of][]{loi17} which are unresolved at our beam resolution.
A patch of radio emission is located eastwards of the centre. We note that this structure does not correspond to the candidate relic observed at lower frequencies \citep[see][]{hoang17}, and that it can be seen only in the low-resolution (i.e. 35\,arcsec) LOFAR 145\,MHz image.\\
Fig. \ref{fig:ciza_cnt_box} shows the positive contours of Fig. \ref{fig:ciza} overlaid on the WSRT+SRT 1.4\,GHz total intensity image reported in \citet{loi17}. A gray arrow indicates the eastern structure mentioned above.\\
Fig. \ref{fig:ciza_cnt_box} also shows a blue contour, traced following the 3$\sigma$ contours of the 18.6\,GHz image, which indicates the northern relic area. We measured the flux density of the relic inside this region, which covers 13 beam areas. Similarly to what was done in \citet{kier17}, we evaluated the residual base level of the image by computing the mean surface brightness, over regions of the same size, in 10 different parts of the image with no obvious sources.
The resulting mean surface brightness computed from these 10 regions is $\sim$4.5\,$\muup$Jy\,beam$^{-1}$ and the associated rms is $\sim$75\,$\muup$Jy\,beam$^{-1}$.
The base level correction of the image is given by this mean multiplied by the number of beam areas in the relic region, and corresponds to 58.5$\,\muup$Jy.
We compute the error on the flux density as:
\begin{equation}
{\rm \Delta S_{\nu} = \sqrt{(f \cdot S_{\nu})^2 + \sigma^2 \cdot N_{BEAM} + (\Delta BL)^2}},
\end{equation}
where f is the systematic flux density uncertainty, which we assume to be equal to 10\%, $\sigma$ is the image noise, and N$_{\rm BEAM}$ is the number of beam areas corresponding to the relic region. ${\rm \Delta BL}$ is the error associated with the base level correction, equal to the rms of the regions divided by the square root of ${\rm N_{BEAM}}$. \\
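The error budget above can be evaluated numerically with the values quoted in this section (S$_\nu$ = 7.67 mJy, $\sigma$ = 0.13 mJy/beam, N$_{\rm BEAM}$ = 13, background rms = 0.075 mJy/beam, f = 10\%); a minimal sketch:

```python
import math

def flux_density_error(s_nu, sigma, n_beam, rms_bg, f=0.10):
    """Quadrature sum of calibration, noise, and base-level terms (mJy)."""
    dbl = rms_bg / math.sqrt(n_beam)  # base-level error, Delta BL
    return math.sqrt((f * s_nu)**2 + sigma**2 * n_beam + dbl**2)

print(round(flux_density_error(7.67, 0.13, 13, 0.075), 2))  # ~0.90 mJy
```

The calibration term dominates, consistent with the quoted $\pm$0.90 mJy uncertainty.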
The flux density of the relic at 18.6\,GHz corrected for the base level is:
\begin{equation}
{\rm S_{18.6\,GHz}=(7.67 \pm 0.90)\,mJy.}
\end{equation}
\subsection{Effelsberg image and analysis}
Fig. \ref{fig:eff} shows the 14.25\,GHz Effelsberg image, convolved to a resolution of 72\,arcsec (beam shown in the bottom left corner). The noise in this image is 0.5\,mJy\,beam$^{-1}$. Solid
contours are drawn from 3$\sigma$ increasing by a factor of $\sqrt{2}$, while dashed contours are the negative -3$\sigma$ contours.\\
In this image, we detected the central sources of the cluster, namely the D and E sources, and patches associated with the northern radio relic of CIZA J2242.8+5301. \\
Using the same blue region of Fig. \ref{fig:ciza_cnt_box}, we evaluated the relic flux in the 14.25\,GHz Effelsberg image at its original resolution of 49\,arcsec.
At 14.25\,GHz, the relic flux is:
\begin{equation}
{\rm S_{14.25\,GHz}=(9.5 \pm 3.9)\,mJy.}
\end{equation}
\subsection{Spectral fitting}
Fig. \ref{fig:fit} shows the flux density of the relic as a function of frequency. We added our 14.25\,GHz and 18.6\,GHz measurements, shown as a green and a red dot respectively, to the most up-to-date results in the literature \citep{kier17,loi17,hoang17,digennaro18}, shown as black dots. More details about the measurements are given in Table \ref{tab:meas}.
\begin{figure}
\includegraphics[width=\columnwidth]{fit.pdf}
\caption{The CIZA J2242.8+5301 northern relic flux density as a function of frequency. Black dots are measurements from the literature, while the green and red dots are the 14.25\,GHz and 18.6\,GHz measurements presented in this work. The power-law best fit is shown as a solid blue line. The two empty points are measurements from \citet{stroe16} which we did not include in the fitting procedure.}
\label{fig:fit}
\end{figure}
\begin{table}
\centering
\caption{Flux density measurements of the CIZA J2242.8+5301 northern relic shown in Fig. \ref{fig:fit}. The measurement at 6.6\,GHz has been repeated considering a 3$\sigma$ threshold.}
\label{tab:meas}
\begin{tabular}{ccc}
\hline
frequency [GHz] & $S_{\nu}$ [mJy] & Reference\\
\hline
0.145 & 1637$\pm$168 & \citet{hoang17}\\
0.153 & 1488$\pm$171 & \citet{hoang17}\\
0.323 & 646$\pm$71 & \citet{hoang17}\\
0.608 & 337$\pm$35 & \citet{hoang17}\\
1.221 & 148$\pm$16 & \citet{hoang17}\\
1.382 & 140$\pm$14 & \citet{hoang17}\\
1.4 & 126$\pm$12.6 & \citet{loi17}\\
1.5 & 128.1$\pm$12.81 & \citet{digennaro18}\\
1.714 & 106$\pm$11 & \citet{hoang17}\\
2.272 & 72$\pm$8 & \citet{hoang17}\\
3 & 56.1$\pm$5.61 & \citet{digennaro18}\\
4.85 & 32$\pm$8 & \citet{kier17}\\
6.6 & 19.6$\pm$2.3 & \citet{loi17}*\\
8.35 & 17$\pm$5 & \citet{kier17}\\
14.25 & 9.5$\pm$3.9 & This work\\
16 & 1.2$\pm$0.5 & \citet{stroe16}\\
18.6 & 7.67$\pm$0.90 & This work\\
30 & 0.6$\pm$0.3 & \citet{stroe16}\\
\hline
\end{tabular}
\end{table}
These data have been modelled with a power-law spectrum, which turned out to be the best-fit model, with a reduced chi-square $\chi^2_{\nu}=0.26$. In the fitting procedure, we did not include the interferometric measurements at 16 and 30\,GHz by \citet{stroe16} because we suspected that a significant fraction of the flux density could be missed in these estimates due to the lack of sensitivity at scales larger than those sampled by the minimum baseline. \\
We re-measured the flux density at 6.6\,GHz, now considering a 3$\sigma$ threshold instead of 5$\sigma$ \citep[see][for the details about this measurement]{loi17}.
The resulting integrated spectral index calculated between 145 MHz and 18.6 GHz is:
\begin{equation}
\alpha=1.12\pm0.03,
\end{equation}
and confirms the findings of recent works over smaller frequency ranges \citep[i.e. between 145\,MHz and 2.2\,GHz, and between 1.5 and 3\,GHz, as reported by][respectively]{hoang17,digennaro18}. Our measurements exclude a possible steepening of the relic spectrum up to a frequency of 19\,GHz.
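The fitted spectral index can be roughly reproduced from the flux densities in Table \ref{tab:meas}. A sketch using a simple unweighted log-log least-squares fit (the published fit is weighted by the measurement errors, so the result differs slightly); the 16 and 30\,GHz interferometric points are excluded, as in the text:

```python
import math

data = [  # (frequency in GHz, flux density in mJy), from Table 2
    (0.145, 1637), (0.153, 1488), (0.323, 646), (0.608, 337),
    (1.221, 148), (1.382, 140), (1.4, 126), (1.5, 128.1),
    (1.714, 106), (2.272, 72), (3.0, 56.1), (4.85, 32),
    (6.6, 19.6), (8.35, 17), (14.25, 9.5), (18.6, 7.67),
]
x = [math.log10(nu) for nu, _ in data]
y = [math.log10(s) for _, s in data]
n = len(data)
mx, my = sum(x) / n, sum(y) / n
# ordinary least-squares slope in log-log space
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx)**2 for xi in x)
alpha = -slope  # spectral index, S_nu ~ nu^-alpha
print(round(alpha, 2))  # ~1.12
```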
\section{Sunyaev-Zel'dovich decrement}
Observations at high frequency (i.e. $>$10\,GHz) can be affected by the Sunyaev-Zel'dovich (SZ) effect, which consists in the inverse-Compton scattering of cosmic microwave background (CMB) photons by thermal ICM particles. As a result, the synchrotron emission associated with cluster-embedded sources is reduced with respect to its true value, because the background CMB emission is shifted towards higher frequencies \citep[see][]{birk}, adding a "negative" contamination to the flux density of cluster-embedded sources.\\
We can reasonably assert that the measurements presented in this work are not affected by the large-scale radial SZ decrement, since such an effect would be mitigated in the baseline subtraction procedure. Nevertheless, as described by \citet{basu}, a sharp pressure jump due to a shock is expected to generate a "localized" small-scale SZ decrement at the observed frequencies, and it is reasonable to expect that our images are affected by this contamination (even if we remark that, in the case of the northern relic of CIZA J2242.8+5301, the surface brightness and pressure jumps were not clearly detected \citep[but see][]{ogrean}). \\
Even if we do not have a direct measurement of the SZ effect at the position of the shock, we can try to estimate the expected decrement at 18.6\,GHz from numerical simulations. According to \citet{basu}, the Comptonization parameter y at the relic location exceeds the radial y trend, which we assume to have been absorbed by the baseline subtraction, by $\sim$1.4$\times 10^{-4}$. Using Eq. 3 by \citet{basu} to compute the expected negative surface brightness associated with the CMB inverse-Compton emission, and multiplying this quantity by the number of beam areas covered by the relic to derive the flux density $\rm S_{SZ}$, we can compute the SZ decrement at 18.6\,GHz as follows:
\begin{equation}
\rm \Delta S_{SZ}=-2\,y\,S_{SZ}=-0.9\,mJy.
\end{equation}
This value is comparable to the uncertainty associated with our measurement. At 14.25\,GHz the decrement is equal to -0.3\,mJy. This means that it is very hard to detect an SZ decrement in our images and, in fact, we do not see any evident deviation from the power-law trend in the relic flux density spectrum.
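The fractional contamination at the two observing frequencies follows directly from the decrements and flux densities quoted above:

```python
# fractional SZ contamination, Delta S_SZ / S_nu, in per cent
frac_18 = -0.9 / 7.67 * 100.0  # at 18.6 GHz: -0.9 mJy decrement, 7.67 mJy relic flux
frac_14 = -0.3 / 9.5 * 100.0   # at 14.25 GHz: -0.3 mJy decrement, 9.5 mJy relic flux
print(round(frac_18, 1), round(frac_14, 1))  # ~ -11.7 and ~ -3.2 per cent
```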
We can take -11.7\% (which is $\rm \Delta S_{SZ}$ at 18.6\,GHz divided by the relic flux density) as an upper limit on the contamination due to the SZ effect, assuming that no other effects are contributing to compensate it. Considering Eq. 15 by \citet{basu}, reported below:
\begin{equation}
\begin{split}
\rm \bigg( \frac{S_\nu^{relic}}{\Delta S_{SZ}} \bigg) \simeq -9\cdot10^{4}
\cdot \bigg( \frac{\xi_{e/p}}{0.05} \bigg)
\cdot \bigg( \frac{M}{3} \bigg)
\cdot \bigg( \frac{T_u}{1\,keV} \bigg)^{1/2}
\cdot \bigg( \frac{W}{100\,kpc} \bigg)^{-1} \\
\cdot (1+z)^{-(4+\delta/2)}
\cdot \frac{B_{\rm relic}^{1+\delta/2}}{B_{\rm CMB}^2+B_{\rm relic}^{2}}
\cdot\bigg( \frac{\nu}{\rm 1.4\,GHz} \bigg)^{-(2+\delta/2)},
\end{split}
\label{eq:eq}
\end{equation}
we can then tentatively investigate how the magnetic field in the relic region $\rm B_{relic}$ changes with the Mach number M assuming the previous SZ decrement upper limit.
In the above formula, $\rm S_{\nu}^{relic}/\Delta S_{SZ}$ is the ratio between the relic flux density and the SZ decrement, which we fix through the -11.7\% upper limit, $\xi_{e/p}$ is the ratio between relativistic electrons and protons, which we fix to 0.0026 according to \citet{kang12}, $\rm T_u$ is the upstream temperature, equal to 2.7\,keV \citep{aka}, W is the relic width, equal to $\sim$100\,kpc (see Sect. 3), $\delta$ is the spectral index of the relativistic particle distribution, assumed to be equal to 4.2 \citep{kang12}, and ${\rm B_{CMB}}$=3.24(1+z)$^2$\,$\muup$G.
We stress that many approximations have been made to derive this formula; for instance, it is assumed that the relic is seen perfectly edge-on. However, this simplified formula can give us an alternative way to estimate the magnetic field strength in the relic region.
\begin{figure}
\includegraphics[width=\columnwidth]{B_M.png}
\caption{Magnetic field strength versus Mach number. See Eq. \ref{eq:eq}.}
\label{fig:sz}
\end{figure}
The blue line in Fig. \ref{fig:sz} shows the relic magnetic field values at different Mach numbers at $\rm \nu=18.6\,GHz$, derived by solving Eq. \ref{eq:eq} for the magnetic field at each Mach number. We draw dotted vertical lines at the X-ray and radio derived Mach numbers. These values correspond to a magnetic field strength of $\rm B_{relic}\sim$3-4\,$\muup$G, which is intermediate between the values found by \citet{van10}, B$\rm _{relic}\sim5-7\,\muup G$, and by \citet{kier17}, B$\rm _{relic}\sim2.4\,\muup G$, obtained by fitting the radial profile of the relic and with equipartition arguments, respectively.
\section{Polarisation properties at 18.6 GHz}
Fig. \ref{fig:ciza_pol} shows the polarised intensity associated with the northern and central part of the galaxy cluster CIZA J2242.8+5301 detected at 18.6\,GHz with the SRT. This is the first polarised image of a radio relic at these frequencies and it is fundamental to evaluate its intrinsic polarisation properties which are strongly related to the local magnetic field.
Contours are the total intensity contours already shown in Fig. \ref{fig:ciza}. The length and the orientation of the overlaid vectors represent the intensity and the orientation of the radio wave electric field vectors. We draw them considering all pixels with a total intensity larger than 3$\sigma$, which also have a fractional linear polarisation SNR$\geqslant$2 and errors on the polarisation angle smaller than 15 degrees.\\
\begin{figure*}
\centering
\includegraphics[width=1.9\columnwidth]{ciza_pol_15deg_2s.pdf}
\caption{18.6\,GHz SRT polarised intensity image between 18\, and 19.2\,GHz. Contours are the same as the total intensity contours shown in Fig. \ref{fig:ciza}. Vectors represent the intensity and the orientation of the E-field. They have been traced for pixels with a total intensity larger than 3$\sigma$, a fractional linear polarisation SNR$\geqslant$2, and an error on the polarisation angle smaller than 15 degrees.}
\label{fig:ciza_pol}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.97\columnwidth]{pol_vs_l.pdf}
\caption{Polarisation fraction computed in boxes with size corresponding to the beam size. The dashed line shows the mean value reported in the bottom left corner of the plot.}
\label{fig:ciza_fpol}
\end{figure}
The E-field vectors are aligned perpendicular to the relic filaments. This has already been observed at other frequencies, namely at 4.9\,GHz \citep{van10}, at 6.6\,GHz \citep{loi17}, and at 4.85 and 8.35\,GHz \citep{kier17}, and it is expected in the presence of a shock wave.
Only the magnetic field components at 90 degrees from the line-of-sight contribute to the observed polarised emission, while we are blind to the other components. The "observable" magnetic field lines are aligned with the shock surface and the E-vectors are perpendicular to it.
We note that, if the magnetic field were already aligned on large scales, its non-zero components would be perpendicular to the shock direction, generating E-vectors similar to those observed in the previous case. Therefore, we cannot distinguish between a large-scale ordered magnetic field and a turbulent magnetic field structure in the presence of a shock wave which compresses the magnetic field in a thin layer.\\
Fig. \ref{fig:ciza_fpol} shows the polarisation fraction computed in 9 boxes with size corresponding to the beam size located from east to west across the relic.
The average polarisation fraction at this resolution is equal to (47$\pm$13)\%.
We detected the polarised signal associated with the D and E sources at the centre of the cluster, which show a mean polarisation fraction of $\sim 12\%$.
The eastern structure is strongly polarised, with an average polarisation fraction of $\sim$70\%.
\section{Discussion and conclusions}
The northern relic of CIZA J2242.8+5301 is a privileged site to study the acceleration of relativistic electrons by merger shocks in the ICM. Even if it is one of the most cited radio relics, it has raised many questions since its discovery, and it has challenged the standard model of shock acceleration in cluster outskirts.
With this work, we have shown that the radio relic spectral behaviour is well modelled with a power law from 145\,MHz up to 19\,GHz. The measurements presented in this work constitute clear evidence that there is no steepening at high frequencies, at variance with earlier claims \citep{stroe16}. The inconsistency between this measurement and the 16\,GHz and 30\,GHz measurements taken with the AMI and CARMA interferometers is most likely due to the lack of sensitivity of these interferometers on scales larger than those sampled by their minimum baseline. It is clear that interferometers can miss flux associated with sources extended on large angular scales and therefore might not be sufficient in the study of extended sources, making a combination with single-dish observations essential.\\
In the context of the DSA \citep{hoeft}, we can derive the Mach number from the fitted integrated spectral index as:
\begin{equation}
M=\sqrt{\frac{\alpha+1}{\alpha-1}}.
\end{equation}
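As a quick numerical check of this relation (the published asymmetric error propagation is not reproduced here), the central value follows directly from the fitted $\alpha=1.12$:

```python
import math

def mach(alpha):
    """DSA Mach number from the integrated spectral index (equation above)."""
    return math.sqrt((alpha + 1.0) / (alpha - 1.0))

print(round(mach(1.12), 2))  # ~4.20
```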
According to what we found in this work, the Mach number should be M=4.2$^{+0.4}_{-0.6}$. This value is significantly different from the one derived from X-ray observations \citep[M$_{\rm X}=2.7^{+0.7}_{-0.4}$, ][]{aka}. However, as previous works have also highlighted \citep{hoang17,digennaro18}, deriving the Mach number from the integrated spectral index can lead to significant errors in some cases. Indeed, the previous formula has been derived under the assumption that the properties of the shock and of the downstream gas remain constant during the electron cooling. Recent cosmological simulations of radio relics by \citet{wittor} have shown that both the Mach number and the magnetic field are not uniform across the shock front. Furthermore, they showed that the downstream magnetic field is far from constant. By comparing spectral index measurements from observations and simulations, \citet{rajpurohit20} argued that the spectral index is most likely biased by the high-value tail of the underlying Mach number distribution.
A better way to estimate the Mach number is to evaluate it from the injection spectral index, measuring this quantity in the injection region from images at very high resolution or, in the case of projection effects, from the color-color diagram. \\
Following \citet{enss98}, we can use our polarised image to investigate possible projection effects on the relic in the framework of the DSA.\\
Under the assumption of a weak magnetic field in the relic region, i.e. if the magnetic pressure is lower than the gas pressure, the average fractional linear polarisation $<\Pi>$ is a function of the viewing angle $\theta$ as:
\begin{equation}
\rm <\Pi_{\rm weak}> = \frac{3\gamma+3}{3\gamma+7}\frac{\sin^2{\theta}}{\frac{2R^2}{R^2-1}-\sin^2{\theta}},
\end{equation}
where R is the compression ratio defined as:
\begin{equation}
\rm R=\frac{\alpha+1}{\alpha-\frac{1}{2}},
\end{equation}
and $\gamma$ is the index of the power-law electron energy distribution, with $\gamma=2\alpha+1$.
On the other hand, if the magnetic field pressure is higher than the gas pressure, we have:
\begin{equation}
\rm <\Pi_{\rm strong}> = \frac{3\gamma+3}{3\gamma+7}\frac{\sin^2{\theta}}{\frac{2}{15}\frac{13R-7}{R-1}-\sin^2{\theta}}.
\end{equation}
The scalar mean <$\Pi$> of the fractional polarisation is evaluated from the Q, U, and I Stokes parameters averaged over the entire relic area. If the relic shock is seen edge-on, i.e. at a viewing angle of 90 degrees, we observe its maximum averaged polarisation fraction, since the polarised signal is due to the amplified magnetic field components aligned along the shock front. For a relic seen face-on, the magnetic field components illuminated by the relativistic particles of the relic are not aligned, and therefore we observe a null averaged polarisation fraction. This is shown in
Fig. \ref{fig:fpol_angle}, where we plot the polarisation fraction as a function of the viewing angle in the case of weak (blue) and strong (red) magnetic fields, assuming $\alpha=1.12$. \\
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{last_pang.pdf}
\caption{Polarisation fraction as a function of the viewing angle in the case of weak (blue) and strong (red) magnetic fields. }
\label{fig:fpol_angle}
\end{figure}
We measured the scalar mean of the polarisation fraction, which is equal to <$\Pi$>=(42$\pm$17)\%. This value is slightly different from what was reported in the previous section, since there we measured the mean polarisation fraction from values obtained within beam-sized regions, whereas here we computed the mean from Q, U, and I averaged over the entire relic region.
Using the geometrical argument of \citet{enss98}, we computed a viewing angle of 33 degrees, which means that the shock direction forms an angle of 57 degrees with respect to the plane of the sky. A black dot shows this estimate in Fig. \ref{fig:fpol_angle}. It is clear that the assumption of a spherical geometry of the shock wave is a simplification of the real shock geometry. For this reason it is not surprising that the black dot is far from the plotted curves. Nevertheless, according to the measured value of the polarisation fraction, we can see that the radio relic is not seen perfectly edge-on: we are observing it from a viewing angle between 45 and 80 degrees, i.e. the relic could be inclined with respect to the plane of the sky by an angle ranging from 10 to 45 degrees.
Numerical simulations of this relic \citep{van11} determined that the shock direction is inclined with respect to the plane of the sky by an angle $\lesssim$10 degrees. This estimate is shown as a black triangle in the plot.
The interpretation of the mean polarised fraction in the framework of the DSA confirms that the relic is not seen perfectly edge-on and can explain the inconsistency between the X-ray and radio derived Mach numbers. In addition, it is important to mention that X-ray observations do not clearly confirm the presence of a shock wave, since the shock has been observed only as a jump in temperature but not in surface brightness \citep[<2$\sigma$ detection, see][]{ogrean}. Shock waves with Mach number M$\sim$3 should show a significant jump in the X-ray surface brightness profile. \\
Another question which needs to be answered is the source of the cosmic-ray electrons. As already pointed out in the Introduction, shock waves with weak Mach numbers cannot efficiently accelerate particles from the thermal pool \citep{brunetti,van19}. Therefore, also in this particular case, we expect that a pre-existing relativistic particle population has been re-accelerated through the DSA mechanism at the shock passage. Given the large extent of the relic, it is unlikely that the two nearby radio galaxies can supply such relativistic electrons. Nevertheless, fossil particles, injected by galaxies that are no longer active, may have accumulated in the relic area during the past history of the cluster.
A multiple-shock structure \citep{hong15} along the line of sight could be responsible for the (re-)acceleration of such fossil particles in the northern relic of CIZA J2242.8+5301. This would also explain the inconsistency between the radio and X-ray derived Mach numbers. Interestingly, \citet{ogrean} identified additional inner small density discontinuities, both on and off the merger axis, in Chandra data, which, as suggested by the authors, could be interpreted as shock fronts.\\
To summarise, we presented new measurements of the northern radio relic of CIZA J2242.8+5301 obtained with the Effelsberg and SRT single-dish telescopes. We found flux densities of $\rm S_{14.25\,GHz}=(9.5\pm3.9)\,mJy$ with Effelsberg and $\rm S_{18.6\,GHz}=(7.67\pm0.90)\,mJy$ with the SRT. The best-fit model of the radio relic spectrum between 143\,MHz and 19\,GHz is a power-law with spectral index $\alpha=(1.12\pm0.03)$. Our measurements exclude a possible steepening of the relic spectrum up to a frequency of 19\,GHz. We estimated the possible SZ contamination and determined that the expected decrement at both 14.25\,GHz and 18.6\,GHz would be within the uncertainty associated with our measurements. Assuming the modelling of \citet{basu}, we also inferred a rough estimate of the magnetic field in the relic region from the derived SZ-decrement upper limit at 18.6\,GHz, obtaining $\rm B_{relic}\sim3-4\,\muup G$, a value close to those found in previous works with different approaches \citep{van10,kier17}. For the first time, we also detected the polarised intensity associated with the relic at 18.6\,GHz. The mean polarisation fraction calculated in boxes matching the beam size of the image is equal to (47$\pm$13)\%, while the scalar mean computed over the entire relic area, dividing the mean polarised intensity by the mean total intensity, is equal to (44$\pm$18)\%. In the last paragraphs, we speculated about the origin of this radio relic. We suggested that the relic emission could be due to fossil plasma re-accelerated by a multiple-shock structure propagating in a direction inclined with respect to the plane of the sky. High-resolution and high-frequency polarised radio images, as well as deep X-ray images, could help constrain the viewing angle and the shock structure of the relic, respectively, allowing us to validate the proposed scenario.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\section*{Acknowledgements}
We thank the anonymous Referee for the useful suggestions and comments, which helped to improve our paper.
We thank Sorina Reile for her work on reducing the Effelsberg data. FL and PS acknowledge financial support from the Italian Minister for Research and Education (MIUR), project FARE, project code R16PR59747, project name FORNAX-B. D.W. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 441694982. AB acknowledges financial support from the Italian Minister for Research and Education (MIUR), project FARE. AB and MB acknowledge financial support from the ERC-Stg DRANOEL, no 714245. D.W., K.R. and F.V. acknowledge financial support from the ERC Starting Grant "MAGCOW", no. 714196.
The Sardinia Radio Telescope \citep{bolli,prandoni} is funded by the Ministry of Education, University and Research (MIUR), Italian Space Agency (ASI), the Autonomous Region of Sardinia (RAS) and INAF itself and is operated as National Facility by the National Institute for Astrophysics (INAF). The development of the SARDARA back-end has been funded by the Autonomous Region
of Sardinia (RAS) using resources from the Regional Law 7/2007 "Promotion of the scientific research and technological innovation in Sardinia" in the context of the research project CRP-49231 (year 2011, PI Possenti): "High resolution sampling of the Universe in the radio band: an unprecedented instrument to understand the fundamental laws of the nature". Partly based on observations with the 100-m telescope of the MPIfR
(Max-Planck-Institut für Radioastronomie) at Effelsberg.
\section{Introduction}
Computations using the theory of the Color Glass Condensate can generate even
flow harmonics from initial state
correlations~\cite{Dumitru:2008wn,Dumitru:2010iy,Dusling:2012iga,Dusling:2012wy}.
These correlations are non-vanishing in the limit of an infinite number of color
sources, but suppressed by the number of colors. This is in
distinction from
fluctuations generated by a finite number of scattering centers which are
non-vanishing in the limit of a large number of colors but vanish in the limit of an
infinite number of
color sources~\cite{Alver:2006wh,Alver:2008zza,Bzdak:2013rya,Yan:2014afa,Bzdak:2013raa,Dumitru:2014yza,McLerran:2015sva,McLerran:2016ivs}.
Four- and higher-order particle elliptical anisotropies also demonstrate a
non-trivial behavior as a function of number of colors and number of
sources~\cite{Dumitru:2014yza,Skokov:2014tka,Dumitru:2015cfa}.
The situation for odd harmonics is very interesting.
Unlike the case of even harmonics, obtaining
odd harmonics at small $x$ requires final state interactions,
at least in the classical approximation to the
CGC.\footnote{
As demonstrated in Ref.~\cite{Kovner:2016jfp},
odd azimuthal anisotropy is present
in the CGC wave function beyond the classical approximation.}
This is a
consequence of time reversal invariance. In models of the Glasma, where classical equations are computed
numerically, one
sees odd harmonics, and they indeed develop after the collision
of the nuclei has taken place while final state interactions are in
play~\cite{Gale:2012rq,Lappi:2015vta,Schenke:2015aqa}.
It is the purpose of this paper to elaborate somewhat on the generation of flow in the classical
equations that
describe the evolution of the Glasma~\cite{Kovner:1995ja,Kovner:1995ts}
and to build a bridge between the analytical calculations in the dilute--dense
limit (see e.g.~Ref.~\cite{Kovner:2010xk,Kovner:2012jm}) and the dense--dense numerical
results~\cite{Lappi:2009xa,Schenke:2015aqa}.
We begin by solving the classical Yang--Mills equations around
the free field equations for a distribution
of fluctuating sources.
We consider a proton nucleus collision in a momentum range where the
field of the proton can be treated as weak.
We show explicitly that there are no odd harmonics generated by this lowest order
solution. We then iterate the equations
around the leading order, treating the color field of the
nucleus to all orders, and we find that we generate odd moments of azimuthal anisotropy in the first
such iteration of these equations. The non-zero contribution to odd
harmonics arises from the interference of the leading and next-to-leading
orders. We find remarkable
simplification
for the result for such odd moments when gluons are put on mass shell, and we integrate over
intermediate coordinates associated with iterating the equations.
This exercise is not purely academic: besides providing analytic confirmation of what is already
known from numerical simulation, it is also useful for describing dilute
systems such as those provided by pp and pA collisions, since the analytical form may be somewhat
simpler to use than numerical solutions of the full scattering problem.
\section{Notation and review of known results}
\label{Sec:Notation}
In this section we set up our notation and make use of well-known results from the literature concerning
the classical equations that describe the Color Glass Condensate and the Glasma \cite{Gelis:2010nm}.
We begin by writing down the color field of an isolated nucleon or nucleus as
\begin{equation}
\alpha^i_{m}(x_\perp) = - \frac{1}{ig} U_m(\vp{x}) \partial^i U^\dagger_m(\vp{x}) = \frac{1}{ig} [\partial^i U_m(\vp{x}) ] U^\dagger_m(\vp{x}),
\end{equation}
where the Wilson lines are in the fundamental representation.
The field is generated by valence color charges
\begin{equation}
\partial_i \alpha_i(\vp{x}) = g \rho(\vp{x})\;.
\label{Eq:WWfield}
\end{equation}
The label $m$ is $1$ for the field of a proton and $2$ for the nucleus.
This is the field that describes the nucleon or nucleus before the
collision, and is the same as for an isolated nucleon or nucleus.
We consider the source for the proton $\rho_1$ to be weak and expand the corresponding
Wilson lines into power series to get
\begin{equation}
\alpha^i_1(\vp{x}) = \partial^i \Phi_1 (\vp{x}) - \frac{ig}{2}
\left(
\delta_{ij} - \frac{\partial_i \partial_j}{\partial^2}
\right) \left[ \partial^j \Phi_1 (\vp{x}), \Phi_1 (\vp{x}) \right] + {\cal O} (\Phi_1^3) \;,
\label{Eq:U1_expansion}
\end{equation}
where
\begin{equation}
\Phi_1(\vp{x}) = \frac{g}{\partial^2} \rho(\vp{x})\;.
\label{Eq:Phi_1}
\end{equation}
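Equation~\eqref{Eq:Phi_1} is a two-dimensional Poisson equation for $\Phi_1$, which on a periodic grid can be solved spectrally ($\partial^2 \to -k_\perp^2$ in Fourier space). A minimal sketch on a toy grid with a zero-mean source (an illustration of the inversion, not the actual color-charge model):

```python
import numpy as np

g, N, L = 1.0, 64, 10.0
rng = np.random.default_rng(1)
rho = rng.standard_normal((N, N))
rho -= rho.mean()                          # remove zero mode: total charge = 0

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N) # angular wavenumbers of the grid
KX, KY = np.meshgrid(k, k, indexing='ij')
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                             # avoid 0/0; zero mode fixed below

# Phi_1 = (g / del^2) rho: in Fourier space del^2 -> -k^2, hence the minus sign
phi_hat = -g * np.fft.fft2(rho) / k2
phi_hat[0, 0] = 0.0
phi = np.real(np.fft.ifft2(phi_hat))

# check: the spectral Laplacian of Phi_1 reproduces g * rho
lap = np.real(np.fft.ifft2(-k2 * np.fft.fft2(phi)))
assert np.allclose(lap, g * rho)
```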
The only Wilson lines we will
encounter in the text correspond to the strong nucleus field, $m=2$.
In order to simplify the notation we will omit the redundant subscript, that is
\begin{equation}
U(\vp{x}) \equiv U_2(\vp{x})\,.
\end{equation}
In what follows we will perform
the expansion in the weak field; the notation for the expansion coefficients is
defined by
\begin{equation}
f(k) = \lim_{N\to\infty} \sum_{n=1}^N f^{(n)}(k) \,.
\end{equation}
We warn the reader of potential confusion with the notation for Hankel functions, which also involves bracketed integers in the superscript.
In this paper we will consider only the first two nontrivial corrections,
i.e. we terminate the expansion at $N=2$. When one computes the cross section
for particle production, one evaluates the square of the amplitude associated with the color field.
The first non-trivial correction to particle production that involves a second-order iteration of the proton field is of order
$\rho_1^3$ and originates from the interference of the leading and next-to-leading order expansion coefficients.
We will argue that this is the leading-order correction contributing to the odd
azimuthal anisotropy of double inclusive gluon production.
Following a widely accepted convention, we define
\begin{equation}
\tau = \sqrt{2 x_+ x_-},
\end{equation}
where
\begin{equation}
x_\pm = \frac{x_0\pm x_z}{\sqrt{2}}\,.
\end{equation}
When it is convenient we will use the Milne metric, or, the $\tau-\eta$-coordinates.
In this case the Minkowski coordinates are parametrized by
$$ x = (\tau \cosh \eta, \vec{x}_\perp, \tau \sinh \eta).$$
Here $\vec{x}_\perp$ is a two-dimensional vector.
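A quick numerical sanity check of these coordinate definitions (the point chosen below is arbitrary):

```python
import math

tau, eta = 1.7, 0.9                       # arbitrary point in the upper light cone
x0, xz = tau * math.cosh(eta), tau * math.sinh(eta)

# Light-cone coordinates and the proper-time relation tau = sqrt(2 x+ x-)
xp, xm = (x0 + xz) / math.sqrt(2), (x0 - xz) / math.sqrt(2)
assert abs(math.sqrt(2 * xp * xm) - tau) < 1e-12

# The rapidity is recovered as eta = artanh(x_z / x_0)
assert abs(math.atanh(xz / x0) - eta) < 1e-12
```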
We work in the Fock-Schwinger gauge $$A_\tau = x_- A_+ + x_+ A_- = 0.$$
Therefore the $\eta$ component of the vector potential is
\begin{equation}
A_\eta = x_- A_+ - x_+ A_- = \tau^2 \alpha,
\end{equation}
where $\alpha$ is introduced for convenience.
Since the quantum corrections are explicitly ignored in our classical Yang-Mills
approach,
the field created in collisions is
$\eta$-independent.
For Bessel (Neumann) functions of $n$-th order we use the notation $J_n(x)$ ($Y_n(x)$).
\section{Equations of motion}
In the upper light cone, assuming independence of rapidity,
the Classical Yang--Mills (CYM) equations
$[D_\mu, F^{\mu\nu}] =0$ can be written as
\begin{eqnarray}
&&\frac{1}{\tau} \partial_\tau \tau \partial_\tau A_i -
\partial_j (\partial_j A_i - \partial_i A_j)
\notag \\
&&\quad + ig \left( \partial_j [A_j, A_i] + \frac{1}{\tau^2} [A_\eta, F_{\eta i}] + [A_j , F_{ij}]
\right) = 0, \nonumber \\
&&\partial_\tau \tau^{-1} \partial_\tau A_\eta
- \frac{1}{\tau} \partial_\perp^2 A_\eta
+ ig \left(
\frac{1}{\tau} \partial_j [A_j, A_\eta] + \frac{1}{\tau} [A_j, F_{j\eta}]
\right) =0, \nonumber\\
&& \partial_\tau \partial_i A_i -ig
\left(
\frac{1}{\tau^2} [A_\eta, \partial_\tau A_\eta] + [A_i, \partial_\tau A_i]
\right)
= 0.
\end{eqnarray}
Note that the $\tau$ derivatives in the second equation can be written
in the following equivalent form
\begin{equation}
\partial_\tau \tau^{-1} \partial_\tau A_\eta \equiv
\tau \partial_\tau^2 \alpha + 3 \partial_\tau \alpha \equiv \tau^{-2} \partial_\tau \tau^3 \partial_\tau \alpha.
\end{equation}
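This chain of identities (with $A_\eta = \tau^2\alpha$) is straightforward to verify symbolically; a quick check, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('tau', positive=True)
a = sp.Function('alpha')(t)
A_eta = t**2 * a                                     # A_eta = tau^2 alpha

form1 = sp.diff(sp.diff(A_eta, t) / t, t)            # d_tau (tau^-1 d_tau A_eta)
form2 = t * sp.diff(a, t, 2) + 3 * sp.diff(a, t)     # tau alpha'' + 3 alpha'
form3 = sp.diff(t**3 * sp.diff(a, t), t) / t**2      # tau^-2 d_tau (tau^3 d_tau alpha)

assert sp.simplify(form1 - form2) == 0
assert sp.simplify(form1 - form3) == 0
```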
\section{Solutions of CYM to leading order in weak field}
In this section we review the results of Refs.~\cite{Kovner:1995ja,Kovner:1995ts,Dumitru:2001ux} using the notation defined in Sec.~\ref{Sec:Notation} and present them in a form
that will be useful for what follows.
The gluon field has the following dependence
\begin{align}
A^\pm(x^+,x^-,\vec{x}_\perp) &= \pm x^\pm \alpha(\tau, \vec{x}_\perp) \theta(x^+) \theta(x^-),\\
A^i(x^+,x^-,\vec{x}_\perp) &= \alpha^i(\tau, \vec{x}_\perp) \theta(x^+) \theta(x^-) \notag
\\ &+
\alpha^i_1( \vec{x}_\perp) \theta(- x^+) \theta(x^-) +
\alpha^i_2(\vec{x}_\perp) \theta(x^+) \theta(- x^-)
\end{align}
with the initial conditions obtained by matching the singularities on the light cone
\begin{eqnarray}
\alpha (\tau\to 0, \vp{x}) &=& \frac{ig}{2}
[\alpha_1^{i}(\vp{x}) ,\, \alpha_2^{i}(\vp{x})], \\
\alpha^i (\tau\to 0, \vp{x}) &=&
\alpha_1^{i}(\vp{x})
+
\alpha_2^{i}(\vp{x}).
\end{eqnarray}
The gauge rotation
\begin{eqnarray}
\alpha(\tau,\vec{x}_\perp)
&=&
U(\vec{x}_\perp) \beta (\tau,\vec{x}_\perp) U^\dagger (\vec{x}_\perp)\;, \\
\alpha_i(\tau,\vec{x}_\perp)
&=&
U(\vec{x}_\perp) \left( \beta_i (\tau,\vec{x}_\perp) -
\frac{1}{ig} \partial_i \right) U^\dagger (\vec{x}_\perp)
\end{eqnarray}
enables us to perform a systematic expansion in powers of $\rho_1$.
At the leading order, the CYM equations are
\begin{eqnarray}
&&\left[ \partial_\tau^2
+ \frac{3}{\tau} \partial_\tau
- \partial_\perp^2 \right] \beta^{(1)}(\tau, \vp{x}) = 0 , \\
&&\partial_\tau \partial_i \beta_i^{(1)}(\tau, \vp{x}) =0 , \\
&&\left[
\delta^{i j} \left( \partial_\tau^2 + \frac{1}{\tau} \partial_\tau
- \partial_\perp^2 \right) + \partial_i \partial_j
\right] \beta^{(1)}_j (\tau, \vp{x}) =0
\end{eqnarray}
with solutions
\begin{eqnarray}
\beta^{(1)} (\tau, \vp{k} ) &=& b_1 (\vp{k}) \frac{J_1(k_\perp \tau)}{k_\perp \tau}, \\
\beta^{(1)}_i (\tau, \vp{k} ) &=&
i \frac{\varepsilon^{ij} k_j}{k_\perp^2} b_2 (\vp{k}) J_0( k_\perp \tau)
+ i k_i \Lambda(\vp{k})\,.
\end{eqnarray}
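One can verify numerically that the radial profile $J_1(k_\perp\tau)/(k_\perp\tau)$ indeed solves the first of the leading-order equations, which in momentum space reads $[\partial_\tau^2 + (3/\tau)\partial_\tau + k_\perp^2]\beta^{(1)} = 0$. A finite-difference sketch, assuming scipy is available:

```python
from scipy.special import jv

k = 1.3                                    # transverse momentum k_perp
beta = lambda t: jv(1, k * t) / (k * t)    # radial profile J_1(k tau)/(k tau)

t, h = 0.8, 1e-4
d1 = (beta(t + h) - beta(t - h)) / (2 * h)             # dbeta/dtau
d2 = (beta(t + h) - 2 * beta(t) + beta(t - h)) / h**2  # d^2 beta/dtau^2

# residual of [d^2/dtau^2 + (3/tau) d/dtau + k^2] beta: vanishes up to
# finite-difference error
residual = d2 + 3 / t * d1 + k**2 * beta(t)
assert abs(residual) < 1e-5
```

The same check at other values of $\tau$ and $k_\perp$ confirms the solution everywhere away from $\tau=0$.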
The newly introduced functions are defined by the initial conditions
\begin{eqnarray}
\label{Eq:b1}
b_1(\vp{x}) &=& ig U^\dagger (\vp{x})
[ \alpha_1^{(1)\,i}(\vp{x}), \alpha_2^i(\vp{x})] U(\vp{x}) , \\
\label{Eq:b2}
b_2(\vp{x}) &=& \epsilon^{ij} \partial^j \left( U^\dagger (\vp{x})
\alpha_1^{(1)\,i}(\vp{x}) U(\vp{x}) \right), \\
\Lambda(\vp{x}) &=& \frac{\partial^i}{\partial_\perp^2}
\left(
U^\dagger (\vp{x}) \alpha_1^{(1)\,i}(\vp{x}) U(\vp{x})
\right).
\end{eqnarray}
Note that these functions are manifestly real (to be precise, their color components are); thus the following holds for their Fourier images
\begin{equation}
f(\vp{k}) = f^*(-\vp{k})\;,
\end{equation}
where
$f(\vp{k})$ is either of $b_1(\vp{k})$, $b_2(\vp{k})$ or $\Lambda(\vp{k})$.
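This is the standard Hermitian symmetry of the Fourier image of a real function; it is easily illustrated on a discrete grid:

```python
import numpy as np

rng = np.random.default_rng(0)
f_x = rng.standard_normal(16)        # a real "field" sampled on a grid
f_k = np.fft.fft(f_x)                # its Fourier image

# f(k) = f*(-k): on the discrete grid, index -k corresponds to (N - k) mod N
N = len(f_k)
for k in range(N):
    assert np.isclose(f_k[k], np.conj(f_k[(N - k) % N]))
```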
Equations \eqref{Eq:b1} and \eqref{Eq:b2} can also be rewritten in a similar form.
For this, we use the definition of $\alpha_2^i(\vp{x})$ and simplify the commutator in
Eq.~\eqref{Eq:b1}:
\begin{align}
\label{Eq:b1_sym}
b_1(\vp{x}) &= \delta^{ij} \Omega_{ij}, \\
b_2(\vp{x}) &= \epsilon^{ij}
\Omega_{ij}\;
\label{Eq:b2_sym}
\end{align}
with
\begin{align}
\notag \Omega^{ij} (\vp{x}) &= \left(\alpha_{1 }^{(1)\,i}(\vp{x})\right)_a \partial^j \left(
U^\dagger (\vp{x})
t_a U(\vp{x}) \right) \\ &=
g \left[ \frac{\partial_i}{\partial^2} \rho^a_1(\vp{x}) \right] \partial^j W_{ba} (\vp{x}) t^b,
\label{Eq:A}
\end{align}
where we used the adjoint Wilson line
$$W_{ab} (\vp{x}) = 2\ {\rm tr} \left( U^\dagger (\vp{x}) t_b U(\vp{x}) t_a \right) .$$
To derive these equations we have made explicit use of the form of the solution
for $\alpha_1^i$ when expanded to first order in the strength of the proton
source, see Eq.~\eqref{Eq:U1_expansion}.
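For $U$ special unitary, the adjoint Wilson line defined above is a real orthogonal matrix. A quick SU(2) check with $t_a = \sigma_a/2$ (a numerical sketch, assuming scipy; the group parameters are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices; fundamental SU(2) generators t_a = sigma_a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [s / 2 for s in sigma]

theta = [0.3, -1.1, 0.7]                          # arbitrary group parameters
U = expm(1j * sum(th * ta for th, ta in zip(theta, t)))

# W_ab = 2 tr(U^dagger t_b U t_a)
W = np.array([[2 * np.trace(U.conj().T @ tb @ U @ ta)
               for tb in t] for ta in t])

assert np.allclose(W.imag, 0)                     # the adjoint line is real
assert np.allclose(W.real @ W.real.T, np.eye(3))  # ... and orthogonal
```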
\section{Particle production}
\subsection{LSZ}
We start this section by reviewing the Lehmann--Symanzik--Zimmermann (LSZ)
reduction formula for a scalar field.
The time-dependent creation operator for a one-particle state with momentum $\vec{k}$
is defined by
\begin{equation}
a^+ (\vec{k}, t) =
\frac{1}{i} \int d^3x \exp(-i k \cdot x)
\overset{\leftrightarrow}{\partial_0} \phi(x) \;,
\end{equation}
where $k$ and $x$ are four-dimensional vectors and $k \cdot x = k_\mu x^\mu$.
We can construct the combination
\begin{align}
&a^+(\vec{k}, t\to\infty)
- a^+(\vec{k}, t \to 0) =
\frac{1}{i} \int_0^\infty dt \partial_0 \left(
\int d^3x \exp(-i k \cdot x)
\overset{\leftrightarrow}{\partial_0} \phi(x)
\right) \notag \\ &=
\frac{1}{i}
\int_0^\infty dt d^3x \exp(-i k \cdot x)
\left( \Box + m^2 \right) \phi(x),
\label{Eq:LSZaa}
\end{align}
where usually instead of $t\to 0$ the limit $t\to-\infty$ is used for the second
term. We chose the limit $t\to 0$ to mimic our problem where
the initial conditions are formulated on the light
cone.
From the equality~\eqref{Eq:LSZaa}, we can express the creation operator in the
final state by
\begin{align}
a^+(\vec{k},\infty)
&=
\left[
\frac{1}{i}
\int d^3x \exp(-i k \cdot x)
\overset{\leftrightarrow}{\partial_0} \phi(x)\right]_{t=0}
\notag
\\
&+
\frac{1}{i}
\int_0^\infty dt \int d^3x \exp(-i k \cdot x)
\left( \Box + m^2 \right) \phi(x) \,.
\end{align}
Therefore, in the classical approximation,
we deduce that the number of
produced particles is given by
\begin{align}
E_k \frac{d N}{d^3 k} = \frac{1}{2 (2\pi)^3}
&\left|
\left[ \int d^3x \exp(-i k \cdot x)
\overset{\leftrightarrow}{\partial_0} \phi(x) \right]_{t=0}
\right. \notag \\ & \left.
+
\int_0^\infty dt \int d^3x \exp(-i k \cdot x)
\left( \Box + m^2 \right) \phi(x)
\right|^2\,.
\label{Eq:LSZ}
\end{align}
Here we have two distinct contributions: one from the initial-time $t=0$
``surface'' and the other, involving the time integration, from the ``bulk''.
Anticipating the results, we note that the surface contribution
is manifestly $T$-even and thus is not expected to produce a non-zero
odd azimuthal anisotropy.
\subsection{Milne metric}
A straightforward generalization of Eq.~\eqref{Eq:LSZ} for $\beta_i$ and $\beta$
in the Milne metric reads
\begin{equation}
E_k \frac{d N}{d^3 k} =\frac{1}{8 \pi} \left[
\left|
\mathfrak{S}_\perp (\vp{k})
+
\mathfrak{B}_\perp (\vp{k})
\right|^2
+ \left|
\mathfrak{S}_\eta (\vp{k})
+
\mathfrak{B}_\eta (\vp{k})
\right|^2
\right]\;,
\end{equation}
where the surface contributions at $\tau \to 0+$ are given by
\begin{align}
\mathfrak{S}_\perp (\vp{k}) &= \lim_{\tau \to 0+}
\left(
\tau H_0^{(1)} (k_\perp \tau) \overset{\leftrightarrow}{\partial_\tau} \beta_{\perp} (\tau, \vp{k})
\right)\;,\\
\mathfrak{S}_\eta (\vp{k}) &=
\lim_{\tau \to 0+ }
\left(
\tau^3 \left\{ \frac{H_1^{(1)}(k_\perp \tau)}{\tau} \overset{\leftrightarrow}{\partial_\tau} \beta (\tau, \vp{k})
\right\}
\right)\,.
\label{Eq:Surface3}
\end{align}
The bulk contributions from the upper light cone are
\begin{align}
\label{Eq:Bulk_perp}
\mathfrak{B}_\perp (\vp{k}) &= \int_0^\infty d \tau \tau
H_0^{(1)} (k_\perp \tau)
\left\{
\frac{1}{\tau} \partial_\tau \tau \partial_\tau \beta_{\perp}(\tau, \vp{k}) -
\partial_\perp^2 \beta_{\perp}(\tau, \vp{k})
\right\}\;, \\
\mathfrak{B}_\eta (\vp{k}) &=
\int_0^\infty d \tau \tau^2
H_1^{(1)} (k_\perp \tau)
\left\{
\frac{1}{\tau^3} \partial_\tau \tau^3 \partial_\tau \beta(\tau, \vp{k}) -
\partial_\perp^2 \beta(\tau, \vp{k})
\right\}\; ,
\label{Eq:Bulk}
\end{align}
where the transverse part of the field $\beta_i$ is defined as~\footnote{$\epsilon_{ij}$ stands for the antisymmetric tensor, $\epsilon_{12}=1$.}
\begin{equation}
\beta_\perp (\tau, \vp{k}) = i \frac{\epsilon^{ij}k_j}{k_\perp} \beta_i (\tau, \vp{k}) .
\end{equation}
The imaginary unit is included to guarantee that the function $\beta_\perp (\tau, \vp{x})$ is real.
In order to simplify the notation we introduce the following combinations
\begin{eqnarray}
\label{Eq:j_perp_k}
j_\perp (\tau, \vp{k}) &=& \frac{1}{\tau} \partial_\tau \tau \partial_\tau \beta_\perp(\tau, \vp{k}) - \partial_\perp^2 \beta_\perp(\tau, \vp{k}), \\
j_i (\tau, \vp{k}) &=& \frac{1}{\tau} \partial_\tau \tau \partial_\tau \beta_i(\tau, \vp{k}) - \partial_\perp^2 \beta_i(\tau, \vp{k}), \\
j (\tau, \vp{k}) &=& \frac{1}{\tau^3} \partial_\tau \tau^3 \partial_\tau \beta(\tau, \vp{k}) - \partial_\perp^2 \beta(\tau, \vp{k})\;.
\end{eqnarray}
These combinations will be referred to as
``currents'', because they vanish in the absence of non-trivial
interactions in the bulk.
\subsection{Absence of odd azimuthal anisotropy at leading order}
Let us consider the solutions of the CYM equations at leading order in the weak field.
Owing to the equations of motion we get
\begin{eqnarray}
j^{(1)} &=& 0, \\
j^{(1)}_\perp &=& 0\,.
\end{eqnarray}
Because the currents vanish, there are no non-zero contributions from the upper light cone.
The surface term for the transverse component is defined by the initial conditions
\begin{align}
\mathfrak{S}^{(1)}_\perp(\tau,\vp{k})
&= -
\lim_{\tau \to 0+} \left(
\tau \partial_\tau H_0^{(1)} (k_\perp \tau)
\beta^{(1)}_\perp (\tau, \vp{k}) \right) \notag \\&=
-
\frac{2}{\pi} i
\beta^{(1)}_\perp(\tau=0, \vp{k}) =
\frac{2 i }{\pi k_\perp} b_2(\vp{k})
\,.
\end{align}
Correspondingly, the contribution from the $\eta$ component is given by
\begin{equation}
\mathfrak{S}^{(1)}_\eta(\tau,\vp{k}) =
- \frac{4}{\pi} i \frac{\beta(\tau=0,\vp{k})}{k_\perp} = -
\frac{2 i }{\pi k_\perp} b_1(\vp{k})\,.
\end{equation}
Combining these equations together,
we conclude that the single inclusive gluon distribution to this order is given by
\begin{align}
\label{Eq:leadingorder}
E_k \frac{d N}{d^3 k} =
\frac{1}{2\pi^3}
&\left[
\beta_i(\tau=0, \vp{k})
\mathfrak{t}_{ij} (\vp{k})
\beta_j(\tau=0, -\vp{k})
\right. \notag \\ &\left.
+ \frac{4}{k^2_\perp}
\beta(\tau=0, \vp{k})
\beta(\tau=0, -\vp{k})
\right]\;,
\end{align}
where $ \mathfrak{t}_{ij} (\vp{k})$ is the two-dimensional transverse projector
\begin{equation}
\mathfrak{t}_{ij} (\vp{k}) = \delta_{ij} - \frac{k_i k_j }{k_\perp^2}.
\end{equation}
This expression is manifestly symmetric under $\vp{k}\to-\vp{k}$.
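The defining properties of the projector, $\mathfrak{t}^2=\mathfrak{t}$ and $k_i\,\mathfrak{t}_{ij}=0$, can be confirmed in two lines (a numerical sketch with an arbitrary momentum):

```python
import numpy as np

k = np.array([0.6, -1.9])                 # arbitrary transverse momentum
P = np.eye(2) - np.outer(k, k) / (k @ k)  # t_ij = delta_ij - k_i k_j / k_perp^2

assert np.allclose(P @ P, P)   # idempotent: a projector
assert np.allclose(k @ P, 0)   # annihilates the longitudinal direction
```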
To match this result to the one derived previously~\cite{Dumitru:2001ux}, we rewrite
Eq.~\eqref{Eq:leadingorder}
in the form~\footnote{See Eq.~\eqref{Eq:t_to_e}.}
\begin{eqnarray}
E_k \frac{d N}{d^3 k} &=&
\frac{1}{2\pi^2}
{\rm tr}
\left( |a_1|^2 + |a_2|^2 \right)
\label{Eq:dNdk1}
\end{eqnarray}
where
\begin{eqnarray}
a_1 &=& \frac{g}{\sqrt{\pi} k_\perp}
\int d^2 x_\perp
e^{-i \vp{k} \vp{x}}
U^\dagger(\vp{x})
[
\alpha^{(1) i}_1(\vp{x}) ,
\alpha_2^i(\vp{x})
]
U(\vp{x}),\\
a_2 &=& \frac{1}{\sqrt{\pi} k_\perp}
\int d^2 x_\perp
e^{-i \vp{k} \vp{x}}
\epsilon_{ij} \partial_j
U^\dagger(\vp{x})
\alpha^{(1) i}_1(\vp{x})
U(\vp{x})\,.
\end{eqnarray}
This coincides with the Dumitru--McLerran result~\cite{Dumitru:2001ux} modulo an
irrelevant complex phase in the definition of $a_{1,2}$.
Further simplifications are possible if we explicitly expand
the weak field into a power series in $\rho_1$~\footnote{See Eq.~\eqref{Eq:U1_expansion}.}
\begin{equation}
\alpha^{(1) i}_1 (\vp{x}) = \partial^i \Phi_1(\vp{x}) =
g \frac{\partial^i}{\partial_\perp^2} \rho_1(\vp{x}).
\end{equation}
Substituting into Eq.~\eqref{Eq:dNdk1} we obtain
\begin{align}
E_k \frac{d N}{d^3 k} &=
\frac{1}{4\pi^3 k_\perp^2}
(
\delta_{ij} \delta_{lm} +
\epsilon_{ij} \epsilon_{lm}
)
\Omega_{ij}^b(\vp{k})
\left[\Omega_{lm}^b(\vp{k})\right]^*
\notag \\ &=
\frac{g^2}{4\pi^3 k_\perp^2}
(
\delta_{ij} \delta_{lm} +
\epsilon_{ij} \epsilon_{lm}
) \notag \\
&\times
\int
\frac{d^2 p_\perp}{(2\pi)^2}
\frac{d^2 q_\perp}{(2\pi)^2}
\frac{p_{\perp, i} (k-p)_{\perp, j} }{p_\perp^2}
\frac{q_{\perp, l} (k-q)_{\perp, m} }{q_\perp^2}
\notag \\
&\quad\quad\quad \times
\rho_a^*(\vp{q})
\left[
W^\dagger(\vp{k}-\vp{q})
W(\vp{k}-\vp{p})
\right]_{ab}
\rho_b(\vp{p}) ,
\label{Eq:Gelis_form}
\end{align}
where we introduced the Fourier transforms of
the components of Eq.~\eqref{Eq:A}
\begin{equation}
\Omega_{ij}^b(\vp{k}) =
g \int \frac{d^2 p_\perp}{(2\pi)^2}
\frac{p_{\perp, i} (k-p)_{\perp, j} }{p_\perp^2}
\rho_a(\vp{p}) W_{ba} (\vp{k}-\vp{p})
\label{Eq:OmegaDef}
\end{equation}
to simplify the notation in the coming section.
In Appendix A, we provide yet another alternative form of Eq.~\eqref{Eq:dNdk1}.
\section{Second order}
At second order we expect a non-trivial modification of
particle production owing to the presence of
non-vanishing currents
\begin{eqnarray}
j^{(2)} (\tau, \vp{x}) &=& -ig \left(
\partial_i [\beta^{(1)}_i(\tau,\vp{x}), \beta^{(1)}(\tau, \vp{x}) ]\right.
\notag
\\ &&\left. +
[\beta^{(1)}_i(\tau,\vp{x}), \partial_i \beta^{(1)}(\tau, \vp{x}) ]
\right), \\
\label{Eq:j2}
j^{(2)}_i (\tau, \vp{x}) &=&
- \partial_i \partial_j \beta^{(2)}_j (\tau, \vp{x} ) - ig \left(
\partial_j [\beta^{(1)}_j(\tau,\vp{x}), \beta^{(1)}_i(\tau, \vp{x}) ]
\right. \\ && \left.
+
[\beta^{(1)}_j(\tau,\vp{x}), \partial_j \beta^{(1)}_i(\tau, \vp{x})
-\partial_i \beta^{(1)}_j (\tau, \vp{x})
] \right. \notag \\ \notag
&&\left.
-\tau^2 [\beta^{(1)}(\tau,\vp{x}), \partial_i \beta^{(1)}(\tau, \vp{x})]
\right)\notag .
\end{eqnarray}
Note that to this order, we do not have to solve the equations of motion for $\beta^{(2)}$.
The
contribution of the currents to particle production is solely defined by
combinations of $\beta^{(1)}$ except for the
term proportional to the gradient of the divergence of $\beta^{(2)}$
(the first term in Eq.~\eqref{Eq:j2}).
Fortunately this term does not contribute to the transverse current $j_\perp$
and thus drops out from the particle production equations,
see Eq.~\eqref{Eq:Bulk_perp} and Eq.~\eqref{Eq:j_perp_k}.
This becomes obvious in momentum space~\footnote{The two-dimensional cross-product is defined as $\vp{a} \times \vp{b} = \epsilon_{ij} a_i b_j$.}
\begin{align}
\notag
j^{(2)} (\tau, \vp{k}) &= g \int \frac{d^2q}{(2\pi)^2} [ (2 \vp{k} -\vp{q}) \cdot \vec{\beta}^{(1)} (\tau, \vp{q}) , \beta^{(1)} (\tau, \vp{k} - \vp{q}) ], \\
j^{(2)}_\perp (\tau, \vp{k}) &=
g
\int \frac{d^2q}{(2\pi)^2}
\left(
i \left[ (2 \vp{k} - \vp{q}) \cdot \vec{\beta} ^{(1)} (\tau, \vp{q}) , \frac{\vec{\beta} ^{(1)} (\tau, \vp{k} - \vp{q}) \times \vp{k}}{k_\perp} \right]
+ \right.
\notag
\\&
\quad \quad \quad \quad \left.
i\, \frac{\vp{q}\times\vp{k}}{k_\perp} \left( \tau^2 [ \beta ^{(1)} (\tau, \vp{q}) , \, \beta ^{(1)} (\tau, \vp{k} - \vp{q}) ] \right. \right.
\notag \\
&\quad \quad \quad \quad \left. \left.
+ [ \vec{\beta} ^{(1)} (\tau, \vp{q}),\, \vec{\beta} ^{(1)} (\tau, \vp{k} - \vp{q})]
\right)
\right). \notag
\end{align}
\subsection{ $\eta$-component of bulk contribution }
The goal of this subsection is to compute
\begin{equation}
\mathfrak{B}^{(2)} _\eta(\vp{k}) = \int d \tau \tau^2
H_1^{(1)} (k_\perp \tau)
j^{(2)} (\tau, \vp{k})\,.
\end{equation}
For this we note two useful identities, obtained from the equations in
Appendix B:
\begin{eqnarray}
&&\int d\tau \tau H^{(1)}_1 (k_\perp \tau)
J_0(q_\perp \tau)
J_1(|\vp{k}-\vp{q}|\tau) =
\frac{1}{\pi} \frac{1}{|\vp{k}-\vp{q}| k_\perp} \times \nonumber \\
&& \left( \frac{(\vp{k}-\vp{q})\cdot \vp{k}}{|\vp{q}\times\vp{k}|} - i \right)\,,\\
&& \int d\tau \tau H^{(1)}_1 (k_\perp \tau)
J_1(|\vp{k}-\vp{q}|\tau) =
i \frac{2}{\pi} \frac{|\vp{k}-\vp{q}|}{k_\perp} \frac{1}{k_\perp^2 - |\vp{k}-\vp{q}|^2}\,.
\end{eqnarray}
Thus the bulk contribution for the $\eta$-component at second order reads
\begin{eqnarray}
\mathfrak{B}^{(2)}_\eta(\vp{k}) &=& \frac{2 ig}{\pi k_\perp} \int \frac{d^2q}{(2\pi)^2}
\left(
\frac{\vp{k}\times\vp{q}}{q^2|\vp{k}-\vp{q}|^2} \left(\frac{(\vp{k}-\vp{q})\cdot \vp{k}}{|\vp{q}\times\vp{k}|} - i \right) \right. \times \nonumber \\
& & \left. [b_2(\vp{q}),\, b_1(\vp{k}-\vp{q})]
\notag + i [\Lambda(\vp{q}),\, b_1(\vp{k}-\vp{q})]
\right) .
\end{eqnarray}
\subsection{ Transverse vector-component of bulk contribution}
Analogously, using the integrals from Appendix B we get
\begin{align}
\mathfrak{B}^{(2)} _\perp (\vp{k}) &= \int d \tau \tau
H_0^{(1)} (k_\perp \tau)
j^{(2)}_\perp (\tau, \vp{k}) \notag \\
&= \notag g \frac{2}{\pi k_\perp}
\int \frac{d^2q}{(2\pi)^2}
\left(
\frac{1}{2}
\vp{k} \times \vp{q} [\Lambda(\vp{q}), \Lambda(\vp{k}-\vp{q})]
\right. \notag \\& \notag
\left.
- \frac{\vp{k} \cdot \vp{q}}{q_\perp^2} [b_2(\vp{q}), \Lambda(\vp{k}-\vp{q})]
\right. \\ \notag
&- i \frac{1}{2} \frac{ \vp{k} \times \vp{q}}{ | \vp{k} \times \vp{q}| }
\frac{k_\perp^2 + \vp{q} \cdot (\vp{q} - \vp{k})}{q_\perp^2 |\vp{k}-\vp{q}|^2}
[b_2(\vp{q}), b_2(\vp{k}-\vp{q})]
\\
&\left.
+ \frac{1}{2} \frac{\vp{k}\times\vp{q}}{q_\perp^2 |\vp{k}-\vp{q}|^2}
\left(
1 + i \frac{\vp{q} \cdot (\vp{k}-\vp{q})}{| \vp{k} \times \vp{q}|}
\right)
[b_1(\vp{q}), b_1(\vp{k}-\vp{q})]
\right)\,.
\end{align}
Although the last equation is complicated, we expect significant simplifications
for the asymmetric part, as we will demonstrate below.
\subsection{Surface contributions}
To obtain the final equation for particle production we have to derive the
surface contributions as well.
They are
\begin{equation}
\mathfrak{S}^{(2)}_\eta (\vec{k})
= - i \frac{4}{\pi} \frac{\beta^{(2)} (\tau=0,\vp{k})}{k_\perp}
\end{equation}
and
\begin{equation}
\mathfrak{S}^{(2)}_\perp (\vec{k})
=
- i
\frac{2}{\pi}
\beta^{(2)}_\perp(\tau=0, \vp{k})\,,
\end{equation}
where the functions are given by the second-order coefficients of the
weak-field expansion of the initial conditions
\begin{eqnarray}
\beta^{(2)} (\tau\to 0, \vp{x}) &=& \frac{ig}{2}
U^\dagger(\vp{x})
[\alpha_1^{(2) \, i}(\vp{x}) ,\, \alpha_2^{i}(\vp{x})]
U(\vp{x}), \\
\beta^{(2)}_i (\tau\to 0, \vp{x}) &=&
U^\dagger(\vp{x})
\alpha_1^{(2) \, i}(\vp{x})
U(\vp{x}).
\end{eqnarray}
Here the weak proton field at second order is given by~\footnote{See Eq.~\eqref{Eq:U1_expansion}.}
\begin{equation}
\alpha_1^{(2) \,i} =
\left(
\delta_{ij} - \frac{\partial_i \partial_j}{\partial^2}
\right) \left[ \partial^j \Phi_1 (\vp{x}), \Phi_1 (\vp{x}) \right]\;.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{bulksurf.pdf}
\caption{Illustration of the bulk (left) and the surface (right) contributions to the amplitude at order $g^3$.}
\label{fig:bulksurf}
\end{figure}
\subsection{Odd azimuthal anisotropy on event-by-event basis}
The goal of this section is to show that the single inclusive
particle production configuration-by-configuration (before performing the average
with respect to $\rho_1$ and $\rho_2$)
has odd azimuthal harmonics at
second order. This can be straightforwardly shown using the
results obtained in the previous sections. We, however, prefer to
use the following line of argument. Let us consider the
single inclusive cross-section
\begin{align}
E_k \frac{d N}{d^3 k} &=
\sum_{\gamma = \eta, \perp}
(a^{(1)}_\gamma(\vp{k}) + a^{(2)}_\gamma(\vp{k}))
(a^{(1)}_\gamma(\vp{k}) + a^{(2)}_\gamma(\vp{k}))^*.
\end{align}
As we discussed previously the first order is entirely defined by the
surface contribution, i.e. $ a^{(1)}_\gamma(\vp{k}) = \mathfrak{S}^{(1)}_\gamma(\vp{k}) $
with the following property for $ \mathfrak{S}^{(1)}_\gamma (\vp{k})$
\begin{align}
\mathfrak{S}^{(1)}_\gamma (\vp{k}) = - ( \mathfrak{S}^{(1)}_\gamma (-\vp{k}) )^* \;.
\end{align}
An analogous relation holds also for second order
\begin{align}
\mathfrak{S}^{(2)}_\gamma (\vp{k}) = - ( \mathfrak{S}^{(2)}_\gamma (-\vp{k}) )^* \;.
\end{align}
The asymmetric part of the single inclusive production is
\begin{align}
&\frac{E_k}{ 2} \left( \frac{d N (\vp{k})}{d^3 k}
- \frac{d N (-\vp{k})}{d^3 k} \right) \\ & =
\frac12 \sum_{\gamma = \eta, \perp}
(a^{(1)}_\gamma(\vp{k}) + a^{(2)}_\gamma(\vp{k}))
(a^{(1)}_\gamma(\vp{k}) + a^{(2)}_\gamma(\vp{k}))^* \notag \\ &
- \frac12 \sum_{\gamma = \eta, \perp}
(a^{(1)}_\gamma(-\vp{k}) + a^{(2)}_\gamma(-\vp{k}))
(a^{(1)}_\gamma(-\vp{k}) + a^{(2)}_\gamma(-\vp{k}))^*
\notag \\
& =
{\frac{1}{8\pi} }
\Re \left( (\mathfrak{S}^{(1)}_\gamma (\vp{k}))^* \left[
\mathfrak{B}^{(2)}_\gamma (\vp{k}) +
(\mathfrak{B}^{(2)}_\gamma (-\vp{k}))^*
+ \mathfrak{S}^{(2)}_\gamma (\vp{k})
+ (\mathfrak{S}^{(2)}_\gamma (-\vp{k}))^*
\right]
\right) + {\cal O} (\rho_1^4) \notag \\
& =
{\frac{1}{8\pi} }
\Re \left( (\mathfrak{S}^{(1)}_\gamma (\vp{k}))^* \left[
\mathfrak{B}^{(2)}_\gamma (\vp{k}) +
(\mathfrak{B}^{(2)}_\gamma (-\vp{k}))^*
\right]
\right) + {\cal O} (\rho_1^4) .
\label{Eq:Odd_ass}
\end{align}
The surface contribution in the square bracket cancels, as we alluded to before.
In order to compute the bulk contribution in the square brackets,
let us go back and consider Eq.~\eqref{Eq:Bulk}.
Since the functions $\beta(\vp{x})$ and $\beta_\perp(\vp{x})$ are real we have
\begin{align}
\mathfrak{B}^{(2)}_\eta (\vp{k}) +
(\mathfrak{B}^{(2)}_\eta (-\vp{k}))^*
& =
2 \int_0^\infty d \tau \tau^2
J_1 (k_\perp \tau)
j^{(2)} (\tau, \vp{k})
\end{align}
and
\begin{align}
\mathfrak{B}^{(2)}_\perp (\vp{k}) +
(\mathfrak{B}^{(2)}_\perp (-\vp{k}))^*
& =
2 \int_0^\infty d \tau \tau
J_0 (k_\perp \tau)
j^{(2)}_\perp (\tau, \vp{k})\;.
\end{align}
The cancellation of the Neumann functions
simplifies the computation of the right-hand side
\begin{align}
&\mathfrak{B}^{(2)}_\eta (\vp{k}) +
(\mathfrak{B}^{(2)}_\eta (-\vp{k}))^*
\notag
\\
&= \frac{4 g}{k_\perp \pi}
\int \frac{d^2 q}{(2\pi)^2}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
\frac{(\vp{k}-\vp{q})\cdot\vp{k}}{q_\perp^2 |\vp{k}-\vp{q}|^2 }
i [b_{ 2} (\vp{q}), b_{1}(\vp{k}-\vp{q})]
\label{Eq:Odd_Beta}
\end{align}
and
\begin{align}
\label{Eq:Odd_Bperp}
&\mathfrak{B}^{(2)}_\perp (\vp{k}) +
(\mathfrak{B}^{(2)}_\perp (-\vp{k}))^*
=
\notag \\
&\quad - \frac{2 g}{k_\perp \pi}
\int \frac{d^2 q}{(2\pi)^2}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
\frac{1}{q_\perp^2 |\vp{k}-\vp{q}|^2 }
\notag \\&\quad
\left(
(k_\perp^2 + \vp{q}\cdot(\vp{q}-\vp{k}))
i [b_2(\vp{q}), b_2(\vp{k}-\vp{q})]
\right. \notag
\\
&\quad
\left.
-\vp{q}\cdot(\vp{k}-\vp{q})
i [b_1(\vp{q}), b_1(\vp{k}-\vp{q})]
\right) \,.
\end{align}
It is remarkable that the gauge field $\Lambda(\vp{k})$ does not contribute to this expression.
Both expressions in Eqs.
\eqref{Eq:Odd_Beta}
and
\eqref{Eq:Odd_Bperp}
are non-local owing to the presence of the
ratio
\begin{equation}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
= \mathrm{sign} \left[ \sin(\phi_{\angle(\vp{k}, \vp{q})}) \right]\;.
\end{equation}
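Since this sign factor is the source of the non-locality, a quick numerical sanity check may be useful (an illustrative Python snippet; the sampling ranges are arbitrary choices of ours):

```python
import numpy as np

def cross2d(a, b):
    # two-dimensional cross product: a x b = eps_ij a_i b_j
    return a[0] * b[1] - a[1] * b[0]

rng = np.random.default_rng(0)
for _ in range(1000):
    phi_k, phi_q = rng.uniform(0, 2 * np.pi, size=2)
    if abs(np.sin(phi_q - phi_k)) < 1e-6:
        continue  # skip numerically degenerate (collinear) pairs
    k = rng.uniform(0.1, 5.0) * np.array([np.cos(phi_k), np.sin(phi_k)])
    q = rng.uniform(0.1, 5.0) * np.array([np.cos(phi_q), np.sin(phi_q)])
    # k x q / |k x q| equals the sign of the sine of the angle from k to q
    ratio = cross2d(k, q) / abs(cross2d(k, q))
    assert ratio == np.sign(np.sin(phi_q - phi_k))
```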
Summing everything up, the odd contribution is given by the following
\begin{align}
&\frac{E_k}{ 2} \left( \frac{d N (\vp{k})}{d^3 k}
- \frac{d N (-\vp{k})}{d^3 k} \right) = \notag \\
&\quad
{\frac{1}{8\pi} }
\Re \left(
\frac{4 ig}{\pi^2 k_\perp^2} b^\star_2(\vp{k})
\int \frac{d^2 q}{(2\pi)^2}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
\frac{1}{q_\perp^2 |\vp{k}-\vp{q}|^2 }
\right.\notag \\ & \quad \left.
\left(
(k_\perp^2 + \vp{q}\cdot(\vp{q}-\vp{k}))
i [b_2(\vp{q}), b_2(\vp{k}-\vp{q})]
\right. \right. \notag
\\
&\quad
\left. \left.
-\vp{q}\cdot(\vp{k}-\vp{q})
i [b_1(\vp{q}), b_1(\vp{k}-\vp{q})]
\right) +
\right.
\notag
\\
& \quad \quad \left.+ \frac{8 ig}{\pi^2 k_\perp^2} b^\star_1(\vp{k})
\int \frac{d^2 q}{(2\pi)^2}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
\frac{(\vp{k}-\vp{q})\cdot\vp{k}}{q_\perp^2 |\vp{k}-\vp{q}|^2 }
i [b_{ 2}(\vp{q}), b_{1}(\vp{k}-\vp{q})]
\right) \notag \\
&=
{ \frac{1}{8\pi} }
\Im
\left\{
\frac{2g}{\pi^2 k_\perp^2}
\int \frac{d^2 q}{(2\pi)^2}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
\frac{1}{q_\perp^2 |\vp{k}-\vp{q}|^2 }
\right. \notag \\
&\quad \left.
\times
f^{abc}
\Omega^a_{ij} (\vp{q})
\Omega^b_{mn} (\vp{k}-\vp{q})
\Omega^{c\star}_{rp} (\vp{k})
\notag
\times \right. \\ & \quad
\left.
\left[
\left(
k_\perp^2 \epsilon^{ij} \epsilon^{mn}
-\vp{q} \cdot (\vp{k} - \vp{q} )
(\epsilon^{ij} \epsilon^{mn}+\delta^{ij} \delta^{mn})
\right) \epsilon^{rp}
\right. \right.\notag \\ & \quad \left. \left.
+
2 \vp{k} \cdot (\vp{k}-\vp{q}) {\epsilon^{ij} \delta^{mn}} \delta^{rp}
\right]
\right\} ,
\label{Eq:FinalAssymitry}
\end{align}
where $\Omega$ is defined in Eq.~\eqref{Eq:OmegaDef}.
\section{Double inclusive gluon production in leading log}
\label{Sec:DIP}
The double inclusive gluon production at the leading log approximation reads~\cite{Gelis:2008ad}
\begin{equation}
E_k E_q \frac{d\; \overline{N}}{d^3k d^3 q} =
\left\langle
E_k \frac{d N}{d^3 k }
E_q \frac{d N}{d^3 q}
\right\rangle\;,
\end{equation}
where the average is performed over the target and the projectile fields.
In a Gaussian ensemble, the average removes all contributions odd in $\rho_1$.
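The vanishing of odd moments in a Gaussian ensemble can be illustrated with a zero-dimensional toy average (illustrative Python; a single Gaussian variable stands in for the functional average over $\rho_1$):

```python
import numpy as np

rng = np.random.default_rng(1)
rho1 = rng.normal(0.0, 1.0, size=2_000_000)  # Gaussian "projectile charge"

# odd moments vanish in the Gaussian ensemble, even moments do not
assert abs(rho1.mean()) < 5e-3
assert abs((rho1 ** 3).mean()) < 2e-2
assert abs((rho1 ** 2).mean() - 1.0) < 5e-3
```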
To simplify the notation we define
\begin{equation}
E_k \frac{d N}{d^3 k } = n^{(2)}(\vp{k}) + n^{(3)}(\vp{k}) + n^{(4)}(\vp{k}) + \dots\;,
\end{equation}
where according to previously used definitions
\begin{align}
n^{(2)} (\vp{k}) &= \sum_{\gamma=\eta,\perp} |a^{(1)}_\gamma(\vp{k})|^2 \;, \\
n^{(3)} (\vp{k}) &= \sum_{\gamma=\eta,\perp}
a^{(1)}_\gamma(\vp{k}) \left( a^{(2)}_\gamma(\vp{k}) \right)^* + {\rm c.c.} \;, \\
n^{(4)} (\vp{k}) &= \sum_{\gamma=\eta,\perp} \left[
a^{(1)}_\gamma(\vp{k}) \left( a^{(3)}_\gamma(\vp{k}) \right)^* +
a^{(3)}_\gamma(\vp{k}) \left( a^{(1)}_\gamma(\vp{k}) \right)^*
+ |a^{(2)}_\gamma(\vp{k})|^2 \right]
\;.
\end{align}
As we established earlier, to the leading order the cross-section is symmetric
configuration-by-configuration
\begin{equation}
n^{(2)} (\vp{k}) = n^{(2)} (- \vp{k}) .
\end{equation}
In addition, the condition that
\begin{equation}
E_k E_p \frac{d\; \overline{N}}{d^3k\, d^3 p} (\vp{k}, \vp{p})
=
E_k E_p \frac{d\; \overline{N}}{d^3k\, d^3 p} (-\vp{k}, -\vp{p})
\end{equation}
leads to
\begin{equation}
\left\langle \left( n_\gamma^{(4)}(\vp{k}) - n_\gamma^{(4)}(-\vp{k}) \right)
n^{(2)}_\gamma(\vp{p})
\right\rangle = 0 \;.
\end{equation}
This guarantees that the contribution to the odd asymmetry
depends only on $n^{(3)}$ computed in the previous section.
Indeed
\begin{equation}
\frac{E_k E_p}{2} \left( \frac{d \overline{N}}{d^3 k\, d^3 p } (\vp{k}, \vp{p}) -
\frac{d \overline{N}}{d^3 k\, d^3 p } (-\vp{k}, \vp{p})
\right) = \frac12
\left\langle
\left(
n^{(3)}_\gamma(\vp{k})
-
n^{(3)}_\gamma(- \vp{k})
\right)
n^{(3)}_\gamma(\vp{p})
\right\rangle.
\end{equation}
In this notation, the difference
$\frac12\left(
n^{(3)}_\gamma(\vp{k})
-
n^{(3)}_\gamma(- \vp{k})
\right)
$ is given entirely by Eq.~\eqref{Eq:FinalAssymitry}.
This contribution is non-vanishing and gives rise to
odd azimuthal anisotropy. It is obviously connected to
the initial state distribution of the color charges, but
has some non-local dependence on spatial points.
Most importantly this contribution
comes from the evolution of the field in the forward light cone and is not just defined
by the initial conditions on the light-cone as at the leading order.
To proceed further
we will consider an
expression asymmetrized with respect to both $\vp{k}$ and
$\vp{p}$:
\begin{equation}
\frac14
\left(
n^{(3)}_\gamma(\vp{k})
-
n^{(3)}_\gamma(- \vp{k})
\right)
\left(
n^{(3)}_\gamma(\vp{p})
-
n^{(3)}_\gamma(- \vp{p})
\right).
\label{Rq:n3}
\end{equation}
As we established in the previous section, each difference is
proportional to the imaginary part of some function $f$, i.e.
\begin{align*}&\frac{1}{2} \left(
n^{(3)}_\gamma(\vp{p})
-
n^{(3)}_\gamma(- \vp{p})
\right) = \Im f(\vp{p})
=
\frac{1}{2 i } \left( f(\vp{p}) - f^*(\vp{p}) \right) \\
&
=
\frac{1}{2 i } \left( f(\vp{p}) - f(-\vp{p}) \right) .
\end{align*}
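The step from $\Im f$ to the difference $f(\vp{p})-f(-\vp{p})$ relies on the property $f(-\vp{p}) = f^*(\vp{p})$; here is a toy one-dimensional check (illustrative Python, with $f$ built from a Fourier series with real coefficients, so that the property holds by construction):

```python
import numpy as np

rng = np.random.default_rng(2)
c = rng.normal(size=5)  # real coefficients guarantee f(-p) = f(p)^*

def f(p):
    n = np.arange(1, 6)
    return np.sum(c * np.exp(1j * n * p))

for p in rng.uniform(-np.pi, np.pi, size=50):
    lhs = f(p).imag                    # Im f(p)
    rhs = (f(p) - f(-p)) / (2j)        # (f(p) - f(-p)) / (2i)
    assert abs(lhs - rhs) < 1e-12
```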
Thus for our purpose we can simply set
$ f(\vp{p}) = i n^{(3)}_\gamma(\vp{p}) $. This assumption is
not true
in general, but it is sufficient for the present calculation of the
asymmetric part
\begin{align}
&\frac14
\left(
n^{(3)}_\gamma(\vp{k})
-
n^{(3)}_\gamma(- \vp{k})
\right)
\left(
n^{(3)}_\gamma(\vp{p})
-
n^{(3)}_\gamma(- \vp{p})
\right)
= \notag \\ &
- \frac{1}{4}
\left(
f(\vp{p}) - f(-\vp{p})
\right)
\left(
f(\vp{k}) - f(-\vp{k})
\right)
= \notag \\ &
- \frac{1}{4}
\left( \left[ f(\vp{p}) f(\vp{k}) - (\vp{k}\to-\vp{k}) \right]
- (\vp{p}\to-\vp{p})
\right).
\end{align}
Therefore it is enough to consider only one term, e.g. $ f(\vp{p}) f(\vp{k})$;
the rest of the terms can be obtained by reversing the directions of the momenta.
In Appendix C we derive the expression for $\langle
f(\vp{p}) f(\vp{k}) \rangle_{\rho_1}$.
It has fifteen different terms and must be further averaged with respect to the target field.
This would generate over 125 terms for $\langle
f(\vp{p}) f(\vp{k}) \rangle_{\rho_1, \rho_2}$ alone. At this point we see that the only reasonable resolution
is to perform numerical simulations, in which the averages with respect to the projectile and target configurations are performed using Monte Carlo techniques. We postpone this for future publications.
\section{Summary and conclusions}
Here we briefly summarize our results and provide some comments.
\begin{enumerate}
\item{The surface contribution on the light cone gives zero odd azimuthal anisotropy to
all orders.
It is T-even and can be written in a local form.}
\item{The odd harmonics originate from evolution in the forward light cone. They are non-local
and not T-even. In the single-particle inclusive process they average to zero
for a Gaussian ensemble because they are proportional to $\rho_1^3$. Essentially they are defined by
odderon exchanges.}
\item{We were unable to establish the connection between our
formulae and geometric anisotropy in the initial
state $\epsilon_3$. From the equation it is obvious that
the anisotropy is not defined by the global scales, but rather by the
geometry on the scales of $1/Q_s$.}
\item{The argument presented in Ref.~\cite{Kovner:2010xk,Kovner:2012jm} is valid only for the surface
contribution in the dilute approximation. We showed that the bulk contribution for
configuration-by-configuration single inclusive result is not symmetric under
$\vp{k} \to -\vp{k}$.}
\item{Our results take into account the first saturation correction, which was
also considered in~Ref.~\cite{Chirilli:2015tea}.}
\item{We complement the numerical results of Refs.~\cite{Lappi:2009xa,Schenke:2015aqa}
with an analytical proof, free of any doubts about numerics and of the uncertainty in the
prescription of what defines a gluon at an intermediate time $\tau$, that
CYM produces odd azimuthal anisotropy.}
\item{Our results can be potentially used to calculate $v_3$ without solving CYM
numerically.}
\end{enumerate}
\section{Acknowledgements}
We thank A.~Bzdak for sharing his puzzle on the two-dimensional Fourier transformation, odd harmonics, and the necessity to include the time dependence.
We thank A. Dumitru, M. Sievert, H.-U. Yee, and especially A. Kovner, M. Lublinsky
and R. Venugopalan
for useful discussions. L. McLerran was supported under Department of Energy Contract No. DE-SC0012704 at Brookhaven National Laboratory
and Grant No. DE-FG02-00ER41132 at the Institute for Nuclear Theory.
\section{Note added after publication}
The results obtained here in the Fock--Schwinger gauge $A_\tau =0$
were reproduced and extended in the global $A^+=0$ gauge in Ref.~\cite{Kovchegov:2018jun}.
Phenomenological calculations were performed in Ref.~\cite{Mace:2018vwq}.
The effect of quantum evolution in the projectile was considered in Ref.~\cite{Kovner:2016jfp}.
\section{Appendix A: Leading order results in coordinate space} \label{appA}
Equation~\eqref{Eq:Gelis_form} can be rewritten in an alternative form.
Let us consider the combination
\begin{align}
&\frac{\delta_{ij} \delta_{lm} + \epsilon_{ij} \epsilon_{lm}}{k_\perp^2}
\frac{p_{\perp,i} ( k-p)_{\perp,j}}{p_\perp^2}
\frac{q_{\perp,l} ( k-q)_{\perp,m}}{q_\perp^2}
\notag \\ &=
\frac{\vp{p}\cdot(\vp{k}-\vp{p})\ \vp{q}\cdot(\vp{k}-\vp{q}) + (\vp{p}\times \vp{k}) \ (\vp{q}\times \vp{k}) } {k_\perp^2 p_\perp^2 q^2_\perp} .
\end{align}
The last expression can be further simplified using the identity
\begin{equation}
(\vp{p}\times \vp{k}) \ (\vp{q}\times \vp{k}) = (\vp{p}\cdot \vp{q}) \ k_\perp^2
- (\vp{p} \cdot \vp{k}) \ (\vp{q} \cdot \vp{k} )
\end{equation}
which can be proven starting from the identity
\begin{equation}
(\vp{k}\times \vp{u})^2 = k_\perp^2 u_\perp^2 - (\vp{k}\cdot \vp{u})^2
\end{equation}
and proceeding by substituting $\vp{u} = \vp{p}+\vp{q}$.
Thus
\begin{align}
&\frac{\delta_{ij} \delta_{lm} + \epsilon_{ij} \epsilon_{lm}}{k_\perp^2}
\frac{p_{\perp,i} ( k-p)_{\perp,j}}{p_\perp^2}
\frac{q_{\perp,l} ( k-q)_{\perp,m}}{q_\perp^2}
\notag \\ &=
\left(
\frac{\vp{k}}{k_\perp^2}
-
\frac{\vp{p}}{p_\perp^2}
\right)
\cdot
\left(
\frac{\vp{k}}{k_\perp^2}
-
\frac{\vp{q}}{q_\perp^2}
\right) .
\end{align}
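The decomposition above is a purely algebraic statement about two-dimensional vectors and can be checked numerically (illustrative Python; the random sampling is our choice):

```python
import numpy as np

def cross2d(a, b):
    # two-dimensional cross product: a x b = eps_ij a_i b_j
    return a[0] * b[1] - a[1] * b[0]

def identity_gap(k, p, q):
    """|LHS - RHS| of the projector decomposition identity."""
    k2, p2, q2 = k @ k, p @ p, q @ q
    # LHS: (delta.delta + eps.eps)/k^2 contracted with the two kernels
    lhs = ((p @ (k - p)) * (q @ (k - q))
           + cross2d(p, k) * cross2d(q, k)) / (k2 * p2 * q2)
    # RHS: (k/k^2 - p/p^2) . (k/k^2 - q/q^2)
    rhs = (k / k2 - p / p2) @ (k / k2 - q / q2)
    return abs(lhs - rhs)

rng = np.random.default_rng(3)
for _ in range(200):
    phis = rng.uniform(0, 2 * np.pi, size=3)
    rs = rng.uniform(0.5, 2.0, size=3)  # bounded magnitudes keep errors small
    k, p, q = (r * np.array([np.cos(ph), np.sin(ph)]) for r, ph in zip(rs, phis))
    assert identity_gap(k, p, q) < 1e-10
```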
Substituting this into Eq.~\eqref{Eq:Gelis_form} we get
\begin{align}
E_k \frac{d N}{d^3 k} &=
\frac{2g^2}{(2\pi)^3}
\int
\frac{d^2 p_\perp}{(2\pi)^2}
\frac{d^2 q_\perp}{(2\pi)^2}
\left(
\frac{\vp{k}}{k_\perp^2}
-
\frac{\vp{p}}{p_\perp^2}
\right)
\cdot
\left(
\frac{\vp{k}}{k_\perp^2}
-
\frac{\vp{q}}{q_\perp^2}
\right)
\notag \\ &\times
\rho_a^*(\vp{p})
\left[W^\dagger(\vp{k}-\vp{p})
W(\vp{k}-\vp{q}) \right]^{ab}
\rho_b(\vp{q})
\label{Eq:Kovner_form}
\end{align}
or in coordinate space
\begin{align}
E_k \frac{d N}{d^3 k} &= \frac{2 \alpha_s}{\pi}
\int_{u} \int_{v} \int_{x} \int_{y}
e^{i \vp{k} (\vp{u}-\vp{v})}
\frac{\vec{v}-\vec{y}}{|\vec{v}-\vec{y}|^2}
\cdot
\frac{\vec{x}-\vec{u}}{|\vec{x}-\vec{u}|^2} \times
\\
& \rho^a (\vp{x})
\left(
\left[ W^\dagger (\vp{x}) - W^\dagger(\vp{u})
\right]
\left[ W (\vp{y}) - W(\vp{v})
\right]
\right)^{ab}
\rho^b (\vp{y})\;.
\end{align}
\section{Appendix B: List of useful integrals and relations} \label{appB}
Here we collect the list of useful integrals and relations.
Some integrals are adapted from more general ones in Ref.~\cite{prudnikov_integrals_1998}
\begin{eqnarray}
&&\int d \tau \tau J_\nu (p_\perp \tau) J_\nu(k_\perp \tau)
= \frac{\delta(p_\perp-k_\perp)}{k_\perp}\;,\\
&&\int d \tau \tau J_0(p_\perp \tau) Y_0(k_\perp \tau)
= \frac{2}{\pi} \frac{1}{k_\perp^2-p_\perp^2}\;, \\
&&\int d \tau \tau J_1(p_\perp \tau) Y_1(k_\perp \tau)
= \frac{2}{\pi} \frac{p_\perp}{k_\perp} \frac{1}{k_\perp^2-p_\perp^2}\;, \\
&&\int d \tau \tau H^{(1)}_0(k_\perp \tau) J_0(|\vec{q}_\perp-\vec{k}_\perp| \tau) J_0(q_\perp \tau)
= \frac{1}{\pi} \frac{1}{|\vp{k}\times\vp{q}|}\;,\\
&&\int d \tau \tau J_1(q_\perp \tau) J_0(|\vec{q}_\perp-\vec{k}_\perp| \tau) Y_1(k_\perp \tau)
= - \frac{1}{\pi} \frac{1}{q_\perp k_\perp}\;, \\
&&\int d \tau \tau J_1(q_\perp \tau) J_1(|\vec{q}_\perp-\vec{k}_\perp| \tau) Y_0(k_\perp \tau)
= \frac{1}{\pi} \frac{1}{q_\perp |\vp{k}-\vp{q}|}\;, \\
&&\int d\tau \tau
J_1(q_\perp \tau) J_1(k_\perp \tau) J_0(|\vp{k}-\vp{q}|\tau)
=
\frac{1}{\pi q_\perp k_\perp} \frac{\vp{q} \cdot \vp{k} }{|\vp{q} \times \vp{k}|}\;,
\\
&&\int d\tau \tau
J_1(q_\perp \tau) J_1(|\vp{k}-\vp{q}| \tau) J_0(k_\perp \tau)
\notag \\ &&\quad \quad =
\frac{1}{\pi q_\perp |\vp{k}-\vp{q}| } \frac{\vp{q} \cdot (\vp{q}-\vp{k}) }{|\vp{q} \times \vp{k}|}
\;.
\end{eqnarray}
For completeness we also list the limits used in the main text
\begin{align}
&\lim_{\tau \to 0 } \frac{J_1(k_\perp \tau)}{k_\perp \tau} = \frac12\;,\\
&\lim_{\tau\to 0 } \tau \partial_\tau H_0^{(1,2)} (k_\perp \tau) =
\pm i \frac{2}{\pi}\;, \\
&\lim_{\tau\to 0 } \tau^3 \partial_\tau \tau^{-1} H_1^{(1,2)} (k_\perp \tau) =
\pm i \frac{4}{\pi k_\perp}\;
\end{align}
and the identity connecting the transverse projector and antisymmetric symbols
\begin{equation}
\label{Eq:t_to_e}
a_i b_j \mathfrak{t}_{ij}(\vp{k}) =
\frac{\epsilon_{ij} k_j a_i}{k_\perp}
\frac{\epsilon_{nm} k_n b_m}{k_\perp} \;.
\end{equation}
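The small-$\tau$ limits and the projector identity \eqref{Eq:t_to_e} are straightforward to verify numerically (illustrative Python using scipy.special; the value of $k$ and the random vectors are arbitrary choices):

```python
import numpy as np
from scipy.special import j1, hankel1

k = 1.7      # arbitrary transverse momentum (our choice)
tau = 1e-7   # small proper time approximating tau -> 0

# lim_{tau->0} J1(k tau)/(k tau) = 1/2
assert abs(j1(k * tau) / (k * tau) - 0.5) < 1e-8

# lim tau d/dtau H0^(1)(k tau) = 2i/pi, using d/dx H0(x) = -H1(x)
val0 = tau * (-k) * hankel1(1, k * tau)
assert abs(val0 - 2j / np.pi) < 1e-6

# lim tau^3 d/dtau [tau^-1 H1^(1)(k tau)] = 4i/(pi k),
# using d/dx [x^-1 H1(x)] = -x^-1 H2(x)
val1 = tau**3 * (-(k**2)) * hankel1(2, k * tau) / (k * tau)
assert abs(val1 - 4j / (np.pi * k)) < 1e-6

# transverse projector vs. epsilon symbols: a_i b_j t_ij(k) = (a x k)(b x k)/k^2
rng = np.random.default_rng(4)
a, b, kv = rng.normal(size=(3, 2))
t = np.eye(2) - np.outer(kv, kv) / (kv @ kv)
cross = lambda u, v: u[0] * v[1] - u[1] * v[0]
assert abs(a @ t @ b - cross(a, kv) * cross(b, kv) / (kv @ kv)) < 1e-10
```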
\section{Appendix C: Average with respect to projectile configurations in MV model} \label{appC}
Let us consider the following combination averaged with respect to the
projectile field in the MV model
\begin{align}
\Omega_{ij,lm}^{a,b} (\vp{p},\vp{q}) & \equiv
\langle
\Omega_{ij}^a (\vp{p})
\Omega_{lm}^b (\vp{q})
\rangle _{\rho_1}
\notag \\
& = g^2
\int \frac{d^2 u} {(2\pi)^2}
\int \frac{d^2 v} {(2\pi)^2}
\frac{ u_i (p-u)_j v_l (q-v)_m }{u_\perp^2 v_\perp^2}
\langle
\rho^\alpha_1(\vp{u}) \rho^\beta_1(\vp{v})
\rangle_{\rho_1} \notag \\ &
\quad \times W_{a \alpha}(\vp{p}-\vp{u}) W_{b \beta}(\vp{q}-\vp{v}) \notag \\
& = g^2
\int \frac{d^2 u} {(2\pi)^2}
\mu^2(\vp{u})
\frac{ u_i (u+p)_j u_l (u-q)_m }{u_\perp^4}
\notag \\
& \quad \times
\left[
W(\vp{u}+\vp{p})
W^\dagger(\vp{u}-\vp{q})
\right]_{ab},
\label{Eq:Omega_double}
\end{align}
where we used the MV correlator
\begin{equation}
\langle
\rho^\alpha_1(\vp{u}) \rho^\beta_1(\vp{v})
\rangle_{\rho_1}
= (2\pi)^2 \mu^2(\vp{u})\, \delta^{\alpha\beta} \delta(\vp{u}+\vp{v}).
\end{equation}
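A discrete toy version of this correlator can be checked by Monte Carlo (illustrative one-dimensional Python model with constant $\mu^2$; for a white-noise Gaussian field the Fourier modes indeed satisfy $\langle \tilde\rho(u)\tilde\rho(v)\rangle \propto \delta_{u+v}$):

```python
import numpy as np

rng = np.random.default_rng(5)
N, mu2, samples = 64, 1.0, 4000
acc = np.zeros((N, N), dtype=complex)
for _ in range(samples):
    # white noise: <rho(x) rho(y)> = mu2 * delta_xy on the lattice
    rho = rng.normal(0.0, np.sqrt(mu2), size=N)
    rt = np.fft.fft(rho)           # rho~(u) = sum_x rho(x) e^{-2 pi i u x / N}
    acc += np.outer(rt, rt)
corr = acc.real / samples

# <rho~(u) rho~(v)> = N mu2 * delta(u + v mod N)
for u in range(1, 5):
    assert abs(corr[u, (N - u) % N] - N * mu2) < 0.15 * N * mu2  # diagonal u+v=0
    assert abs(corr[u, u]) < 0.15 * N * mu2                      # off-diagonal ~ 0
```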
We also use the notation from Sec.~\ref{Sec:DIP}
\begin{align}
f(\vp{k}) &=
\frac{2g}{(2\pi)^3 k_\perp^2}
\int \frac{d^2 q}{(2\pi)^2}
\frac{\vp{k}\times \vp{q}}{|\vp{k}\times \vp{q}|}
\frac{1}{q_\perp^2 |\vp{k}-\vp{q}|^2 }
\notag \\
&\times
f^{abc}
\Omega^a_{ij} (\vp{q})
\Omega^b_{mn} (\vp{k}-\vp{q})
\Omega^{c\star}_{rp} (\vp{k})
\notag
\times \\ &
\left[
\left(
k_\perp^2 \epsilon^{ij} \epsilon^{mn}
-\vp{q} \cdot (\vp{k} - \vp{q} )
(\epsilon^{ij} \epsilon^{mn}+\delta^{ij} \delta^{mn})
\right) \epsilon^{rp}
\right.
\notag \\ &
\left.
+
2 \vp{k} \cdot (\vp{k}-\vp{q}) {\epsilon^{ij} \delta^{mn}} \delta^{rp}
\right].
\label{Eq:f}
\end{align}
Using Eq.~\eqref{Eq:Omega_double} we can rewrite
the projectile averaged combination $\langle f(\vp{p}) f(\vp{k}) \rangle_{\rho_1}$ as
follows
\begin{align}
&\langle f(\vp{p}) f(\vp{k}) \rangle_{\rho_1} =
\left(\frac{2g}{(2\pi)^3}\right)^2
\frac{1}{p_\perp^2 k_\perp^2}
\int \frac{d^2 q}{(2\pi)^2}
\int \frac{d^2 q'}{(2\pi)^2}
\frac{\vp{p}\times \vp{q}}{|\vp{p}\times \vp{q}|}
\frac{\vp{k}\times \pvp{q}}{|\vp{k}\times \pvp{q}|}
\times
\notag \\ &
\frac{f^{abc}}{q_\perp^2 |\vp{p}-\vp{q}|^2 }
\frac{f^{a'b'c'}}{ {q'_\perp}^2 |\vp{k}-{\pvp{q}}|^2 } \notag
\times
\\
&
\left[
\left(
p_\perp^2 \epsilon^{ij} \epsilon^{mn}
-\vp{q} \cdot (\vp{p} - \vp{q} )
(\epsilon^{ij} \epsilon^{mn}+\delta^{ij} \delta^{mn})
\right) \epsilon^{rp}
\right. \notag \\
&\left. \quad
+ 2 \vp{p} \cdot (\vp{p}-\vp{q}) { \epsilon^{ij} \delta^{mn}} \delta^{rp}
\right]
\notag \times \\
&\left[
\left(
k_\perp^2 \epsilon^{i'j'} \epsilon^{m'n'}
-\pvp{q} \cdot (\vp{k} - \pvp{q} )
(\epsilon^{i'j'} \epsilon^{m'n'}+\delta^{i'j'} \delta^{m'n'})
\right) \epsilon^{r'p'}
\right. \notag \\
&\left. \quad
+ 2 \vp{k} \cdot (\vp{k}-\pvp{q}) { \epsilon^{i'j'} \delta^{m'n'} } \delta^{r'p'}
\right] \times \notag \\
&
\Big[
\Omega^{a,b}_{ij,mn}(\vp{q},\vp{p}-\vp{q}) \left(
\Omega^{c,a'}_{rp,i'j'}(-\vp{p},\pvp{q})
\Omega^{b',c'}_{m'n',r'p'}(\vp{k}-\pvp{q},-\vp{k}) +\right. \notag \\ & \left.
\Omega^{c,b'}_{rp,m'n'}(-\vp{p},\vp{k}-\pvp{q})\Omega^{a',c'}_{i'j',r'p'}(\pvp{q},-\vp{k}) +
\right. \notag \\ & \left.
\Omega^{c,c'}_{rp,r'p'}(-\vp{p},-\vp{k})\Omega^{a',b'}_{i'j',m'n'}(\pvp{q},\vp{k}-\pvp{q})
\right) + \notag \\
&
\Omega^{a,c}_{ij,rp}(\vp{q},-\vp{p})
\left(
\Omega^{b,a'}_{mn,i'j'}(\vp{p}-\vp{q},\pvp{q}) \Omega^{b',c'}_{m'n',r'p'}(\vp{k}-\pvp{q},-\vp{k}) + \right. \notag \\ & \left.
\Omega^{b,b'}_{mn,m'n'}(\vp{p}-\vp{q},\vp{k}-\pvp{q})\Omega^{a',c'}_{i'j',r'p'}(\pvp{q},-\vp{k}) +
\right. \notag \\ & \left.
\Omega^{b,c'}_{mn,r'p'}(\vp{p}-\vp{q},-\vp{k})\Omega^{a',b'}_{i'j',m'n'}(\pvp{q},\vp{k}-\pvp{q})
\right) +
\notag \\
&
\Omega^{a,a'}_{ij,i'j'}(\vp{q},\pvp{q})
\left(
\Omega^{b,c}_{mn,rp}(\vp{p}-\vp{q},-\vp{p}) \Omega^{b',c'}_{m'n',r'p'}(\vp{k}-\pvp{q},-\vp{k}) + \right. \notag \\ & \left.
\Omega^{b,b'}_{mn,m'n'}(\vp{p}-\vp{q},\vp{k}-\pvp{q})\Omega^{c,c'}_{rp,r'p'}(-\vp{p},-\vp{k}) +
\right. \notag \\ & \left.
\Omega^{b,c'}_{mn,r'p'}(\vp{p}-\vp{q},-\vp{k})\Omega^{c,b'}_{rp,m'n'}(-\vp{p},\vp{k}-\pvp{q})
\right) +
\notag \\
&
\Omega^{a,b'}_{ij,m'n'}(\vp{q},\vp{k}-\pvp{q})
\left(
\Omega^{b,c}_{mn,rp}(\vp{p}-\vp{q},-\vp{p}) \Omega^{a',c'}_{i'j',r'p'}(\pvp{q},-\vp{k}) + \right. \notag \\ & \left.
\Omega^{b,a'}_{mn,i'j'}(\vp{p}-\vp{q},\pvp{q}) \Omega^{c,c'}_{rp,r'p'}(-\vp{p},-\vp{k}) +
\right. \notag \\ & \left.
\Omega^{b,c'}_{mn,r'p'}(\vp{p}-\vp{q},-\vp{k})
\Omega^{c,a'}_{rp,i'j'}(-\vp{p},\pvp{q})
\right) +
\notag \\
&
\Omega^{a,c'}_{ij,r'p'}(\vp{q},-\vp{k})
\left(
\Omega^{b,c}_{mn,rp}(\vp{p}-\vp{q},-\vp{p}) \Omega^{a',b'}_{i'j',m'n'}(\pvp{q},\vp{k}-\pvp{q}) + \right. \notag \\ & \left.
\Omega^{b,a'}_{mn,i'j'}(\vp{p}-\vp{q},\pvp{q}) \Omega^{c,b'}_{rp,m'n'}(-\vp{p},\vp{k}-\pvp{q}) +
\right. \notag \\ & \left.
\Omega^{b,b'}_{mn,m'n'}(\vp{p}-\vp{q},\vp{k}-\pvp{q})
\Omega^{c,a'}_{rp,i'j'}(-\vp{p},\pvp{q})
\right)\Big]\,.
\label{Eq:f_av}
\end{align}
\section{Introduction}
Electromagnetically induced transparency (EIT), a quantum interference phenomenon, refers to an increase in the transmission of a weak probe beam in the presence of a strong coupling beam \cite{Harris:1990,Harris:1st}. Since the conception of EIT, several applications of this phenomenon have attracted the attention of researchers, including applications in optical communication networks \cite{Ma:2017}, optical switching devices \cite{Clarke:12:2001,Ray:2013}, manipulation of the group velocity of light and light storage \cite{Kash:1999,Kocharovskaya:2001,Karpa:2006,Goldfarb:2008,Alotaibi:2015}, tight laser frequency locking \cite{Bell:2007}, high resolution spectroscopy \cite{Krishna:2005,Kale:2015}, atomic clocks \cite{Guidry:2017}, quantum information processing \cite{Beausoleil:eit,Beausoleil:cpt}, highly sensitive magnetometry \cite{Lee:1998,Fleischhauer:1994,Yudin:2010,Cox:2011,Margalit:2013,Cheng:2017}, velocimetry \cite{Santos:2006}, etc.
The simple schemes to obtain EIT involve three atomic levels, in which the weak probe laser beam is resonant with one pair of energy levels and the strong coupling beam is resonant with another pair of energy levels, such that one level is common to both the probe and the coupling. These configurations are known as the ladder \cite{Moon:2005}, lambda \cite{Li:1995} and vee \cite{Kang:2014} systems. Although detailed studies of these configurations already exist in the literature \cite{Fulton:116:1995,Fulton:52:1995,Clarke:64:2001,Yan:2001,VBT:2010,Xiaojun:2016}, proper comparative studies of a particular atomic system, which would identify the better system for a specific application, are hard to find.
In this article, our objective is to provide a detailed account of the EIT phenomenon for the two possible $\Lambda$-systems of the $D_{2}$ line of the $^{87}Rb$ atom by varying the coupling beam and magnetic field strength, in order to find the better system for EIT based laser locking applications. The two $\Lambda$-systems, denoted hereafter as system (A) and system (B) respectively, are formed with the atomic levels $|5^{2}S_{1/2} F=1\rangle \rightarrow |5^{2}P_{3/2} F'=1\rangle \leftarrow |5^{2}S_{1/2} F=2\rangle$ and $|5^{2}S_{1/2} F=1\rangle \rightarrow |5^{2}P_{3/2} F'=2\rangle \leftarrow |5^{2}S_{1/2} F=2\rangle$. Both $\Lambda$-systems have been investigated in the absence and in the presence of an applied magnetic field. These two $\Lambda$-systems differ in terms of the strength of the transitions and the proximity of other energy levels surrounding the excited state. In the presence of a magnetic field, the Zeeman splitting of the levels further differentiates the two $\Lambda$-systems. The following are the highlights of the experimental observations, which have been found consistent with our theoretical models. In the absence of a magnetic field, for a particular coupling beam power, system (B) provides an EIT signal of higher magnitude and more symmetric line shape than system (A). The presence of a magnetic field splits the single EIT peak into three peaks, where the central peak of system (B) again exhibits better strength. At a higher coupling beam power, this central EIT peak possesses a steeper slope than the peak observed in the absence of the magnetic field. Such signals with steeper slopes can be extremely useful for tight frequency locking of lasers. Also, the linear dependence of the separation between the split peaks on the magnetic field strength can be exploited to lock a laser frequency at a detuning controlled by the magnetic field. These studies may be performed in the near future.
The article is organized as follows. Section \ref{sec:expt} describes the experimental setup. Our results and discussion are presented in section \ref{sec:result}. Finally, the conclusion of our work is given in section \ref{sec:concl}.
\section{Experimental Setup}
\label{sec:expt}
\begin{figure}[t]
\centering
\includegraphics[width=8.5 cm]{energy-levels.pdf}
\caption{ (a) The relevant energy levels of $^{87}Rb$ atom showing two $\Lambda$-systems investigated in the present work. $\Delta_{c}$ is the detuning of the coupling beam from respective transition resonance. (b) $\Lambda$-subsystems within the system (B) in presence of a longitudinal magnetic field. $\Delta_{g}$ and $\Delta_{e}$ are energy separations between magnetic sublevels of ground and excited states respectively. Solid and dashed lines connecting levels represent transitions involving $\sigma^{+}$ and $\sigma^{-}$ polarizations.}
\label{lvl}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{setup.pdf}
\caption{(Color online) Schematic diagram of the experimental setup. PBS: polarising beam splitter; NPBS: non-polarising beam splitter; RbVC: Rubidium vapor cell; FI: Faraday isolator, PD: photodiode; GP: glass plate; HWP: half-wave plate; BD: beam dump; SAS-P and SAS-C: saturated absorption spectroscopy for the probe and coupling beams respectively.}
\label{setup}
\end{figure}
The relevant energy levels of the $^{87}Rb$ atom for the systems (A) and (B) are shown in Figure \ref{lvl}(a). The two $\Lambda$-systems were formed using a weak probe beam and a strong coupling beam connecting the two ground states with a common excited state. The probe beam frequency was fixed at the transition peak $|5^{2}S_{1/2} F=1\rangle \rightarrow |5^{2}P_{3/2} F'=1\rangle$ for system (A) and $|5^{2}S_{1/2} F=1\rangle \rightarrow |5^{2}P_{3/2} F'=2\rangle$ for system (B), while the coupling beam frequency was scanned across the transitions $|5^{2}S_{1/2} F=2\rangle \rightarrow |5^{2}P_{3/2} F'=1, 2,3\rangle$ for both systems. The resonant probe beam gives a flat transmission signal over which the effect of the scanned coupling beam can be detected easily. The transition peaks were identified using the saturated absorption spectroscopy (SAS) technique.
The schematic diagram of the experimental setup is shown in Figure \ref{setup}. The probe and coupling beams were derived from two independent external cavity diode lasers (TA-Pro and DL-100, TOPTICA, Germany) operating at 780 nm wavelength having spectral linewidths less than 1 MHz. The $1/e^{2}$ radii of the probe and coupling beams were 1.36 mm and 2.04 mm respectively. In order to make the systems Doppler free and insensitive to the atomic velocity, the probe and coupling beams were aligned in co-propagating geometry and passed through a 50 mm long $Rb$ vapor cell \cite{carvalho}. The pressure inside this vapor cell was $\sim 3.6 \times 10^{-7}$ Torr corresponding to a number density of $Rb$ atoms $\sim 1.2 \times 10^{10}\ cm^{-3}$. The magnetic field was applied using a current carrying solenoid wrapped over the Rb vapor cell. The length of the solenoid was much longer than the cell length to ensure the homogeneity of the applied magnetic field. Both the coil and the vapor cell were kept inside two layers of $\mu$-metal sheet to shield the cell from the stray magnetic field. The polarizations of the coupling and probe beams were kept linear but mutually orthogonal to each other. The power in the probe and coupling beams ($P_{p}$ and $P_{c}$ respectively) were controlled using a combination of a half-waveplate (HWP) and a polarizing beam splitter (PBS) in the beam paths. The transmitted probe and coupling beams were separated after the vapor cell using a PBS as shown in Figure \ref{setup}. The transmitted probe beam was collected on a photo-diode (connected to an oscilloscope) to measure the transmitted probe signal as a function of coupling beam detuning.
\section{Results and Discussion}\label{sec:result}
\subsection{Experimental results}\label{sec:expt_result}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{vsop.pdf}
\caption{The transmitted probe signal as a function of coupling beam detuning. The spectra for both the systems show five absorption dips a, b, c, d and e, which are due to velocity selective optical pumping. Here only dip b shows a narrow EIT peak at zero detuning, due to fulfillment of the two-photon resonance condition. The probe power is $P_{p}=0.08$ mW and the coupling beam power is $P_{c}=1$ mW.}
\label{vsop}
\end{figure}
In the experiments, the power of the probe beam was fixed at $P_{p}=0.08$ mW (Rabi frequency $\Omega_{p}=2\pi \times 3.3$ MHz for both the systems (A) and (B)) and the coupling beam power was varied over a wide range from $P_{c}=0.1$ mW to $P_{c}=20$ mW (resulting in a variation of the coupling Rabi frequency from $\Omega_{c}=2\pi \times 1.1$ MHz to $\Omega_{c}=2\pi \times 15.5$ MHz for system (A) and $\Omega_{c}=2\pi \times 2.4$ MHz to $\Omega_{c}=2\pi \times 34.7$ MHz for system (B)). The Rabi frequency was calculated using the expression $\Omega=\Gamma \sqrt{\frac{I}{2I_{sat}}}$, where $I$ is the beam intensity, $\Gamma$ is the natural linewidth ($2\pi \times 6$ MHz) and $I_{sat}=\frac{c \epsilon_{0} \Gamma^{2} \hbar^{2}}{4d^{2}}$ is the saturation intensity. The dipole moment is $d=\mu_{ij} \mu_{0}$, with $\mu_{0}=3.58\times10^{-29}$ C-m and $\mu_{ij}$ the dipole transition matrix element between states $i$ and $j$. Here, $c$, $\epsilon_{0}$ and $\hbar$ are the speed of light, the vacuum permittivity and the reduced Planck constant, with values $3 \times10^8$ m/s, $8.85 \times10^{-12}$ F/m and $1.05 \times 10^{-34}$ J-s respectively. Since both the systems (A) and (B) have equal $\mu_{ij}$ corresponding to the probe transition, the probe beam Rabi frequency $\Omega_{p}$ is equal for both the systems. Figure \ref{vsop} shows the recorded probe signals, in which there are five absorption dips (a, b, c, d and e) for both the $\Lambda$-systems. These spectral features are known as velocity selective optical pumping (VSOP) absorption dips \cite{Iftiquar:2009,Chakrabarti:2005,Hossain:2011}. Among these absorption dips, only dip b exhibits an EIT peak due to the fulfillment of the two-photon resonance condition. In the present work, further investigation of this EIT peak in both the $\Lambda$-systems has been carried out by varying different experimental parameters.
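As a cross-check of the quoted Rabi frequencies, the expression $\Omega=\Gamma\sqrt{I/2I_{sat}}$ can be evaluated directly. The short Python sketch below uses the constants given in the text; the peak-intensity formula $I=2P/\pi w^{2}$ for a Gaussian beam and the value $\mu_{ij}=0.42$ (chosen so that $P_p=0.08$ mW gives $\Omega_p\approx2\pi\times3.3$ MHz) are illustrative assumptions, not values taken from the measurements.

```python
import math

def rabi_frequency(power_w, waist_m, mu_ij):
    """Peak Rabi frequency (rad/s) from Omega = Gamma*sqrt(I/(2*I_sat)),
    with I_sat = c*eps0*Gamma^2*hbar^2/(4*d^2) and d = mu_ij*mu_0."""
    c, eps0, hbar = 3e8, 8.85e-12, 1.05e-34   # SI values quoted in the text
    gamma = 2 * math.pi * 6e6                 # natural linewidth (rad/s)
    mu_0 = 3.58e-29                           # C-m, from the text
    d = mu_ij * mu_0
    i_sat = c * eps0 * gamma**2 * hbar**2 / (4 * d**2)
    # assumed peak intensity of a Gaussian beam of 1/e^2 radius waist_m
    intensity = 2 * power_w / (math.pi * waist_m**2)
    return gamma * math.sqrt(intensity / (2 * i_sat))
```

With these assumed inputs, `rabi_frequency(0.08e-3, 1.36e-3, 0.42)` gives a value of order $2\pi\times3$ MHz, and the $\sqrt{P}$ scaling built into the expression reproduces the quoted spread of $\Omega_c$ as $P_c$ is varied.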
The dependence of the EIT signal on the coupling beam power for both the $\Lambda$-systems was observed and the corresponding EIT signals are shown in Figure \ref{spectrum}. The relative transmission in the figure is defined as the ratio of the signal height (with respect to the VSOP minimum) to the total depth of the corresponding VSOP absorption dip. This choice of scale provides a direct measure of the recovery of transmission due to the EIT effect against the VSOP absorption, without involving the absolute value of the probe signal voltage. The relative transmission ($T_R$) can be written in terms of the regular transmission ($T$) and the transmission at the VSOP dip ($T_{VSOP}$) as $T_R=\frac{T-T_{VSOP}}{1-T_{VSOP}}$.
The important features of these observed EIT signals are as follows: (i) both the strength and the linewidth of the EIT signals increase with increasing coupling beam power, (ii) the asymmetry in the line shape increases with the coupling power, (iii) the linewidth of the signal remains sub-natural even at higher coupling power, and (iv) system (B) gives a more symmetric and stronger EIT signal than system (A). Here, the linewidth is defined by the parameter $\delta_b$ as shown in Figure \ref{spectrum} (\textit{iii}). Since the line shape of the observed EIT signal is asymmetric, measurement of the full width at half maximum (FWHM) of the EIT signal is difficult. Therefore, the separation between the two minima of the EIT signal (\textit{i.e.} $\delta_b$) is taken as the linewidth of the EIT signal. The actual FWHM of the EIT peak is approximately half of $\delta_b$. The strength of the EIT peak here means the height of the EIT peak. The asymmetry in the signal is defined as the difference between the two minima surrounding the EIT peak.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{spectrum.pdf}
\caption{The relative transmission as a function of coupling beam detuning for both the $\Lambda$-systems (A) and (B), for different coupling beam powers and a fixed probe beam power $P_{p}= 0.08$ mW. Here $\delta_{b}$ represents the separation between the minima of the EIT peak.}
\label{spectrum}
\end{figure}
The asymmetry in the line shape of the EIT signal may be due to the presence of other nearby excited states, as reported earlier \citep{Bharti:2012,Chen:2013}. The more asymmetric EIT signal in $\Lambda$-system (A) than in system (B) could be due to this effect, as the excited state in system (A) has more closely spaced neighboring levels than that in system (B). The linewidth increases with the coupling beam power, while both systems exhibit nearly the same linewidth for a given coupling power. However, system (B) shows a stronger and more symmetric EIT signal than system (A), as shown in Figure \ref{spectrum}. These results suggest that $\Lambda$-system (B) can be more suitable for practical applications than $\Lambda$-system (A).
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{spectrum_B.pdf}
\caption{The relative transmission as a function of coupling beam detuning at different strengths of the magnetic field ($B_{||}$) for both the $\Lambda$-systems. Here the probe power is 0.08 mW and the coupling power is 4 mW.}
\label{spectraB}
\end{figure}
Thereafter, experiments were performed by applying a longitudinal magnetic field $B_{||}$ to the vapor cell. The EIT signals for both the $\Lambda$-systems were recorded for different strengths of the magnetic field ($B_{||}$), while keeping the probe and coupling powers fixed ($P_{p}=0.08$ mW and $P_{c}=4$ mW). The results of these observations are shown in Figure \ref{spectraB}. It is evident from this figure that the EIT signals for both the systems split into three peaks in the presence of the magnetic field. However, the central peak (among the three split peaks) for system (B) is of higher strength than that for system (A). The splitting of the EIT peaks is due to the removal of the degeneracy of the Zeeman hyperfine states ($m_F$ and $m_{F'}$) in the presence of the applied magnetic field. The energy separation between two adjacent Zeeman sublevels is given by,
\begin{equation}
\Delta\left(|g_F|,B_{||}\right)=\frac{|g_{F}|\mu_{B}B_{||}}{h},
\label{eq}
\end{equation}
where $g_{F}$ is the hyperfine Land\'e g-factor, $\mu_{B}$ is the Bohr magneton and $h$ is Planck's constant.
Each of the linearly polarized probe and coupling beams can be decomposed into two opposite circularly polarized beams with $\sigma^{+}$ and $\sigma^{-}$ polarizations with respect to the direction of the magnetic field. Along with the Zeeman sublevels, these circularly polarized beams give rise to multiple $\Lambda$-subsystems within each $\Lambda$-system as per the transition selection rules $\Delta m_{F}=\pm 1$. The two-photon resonance condition in these $\Lambda$-subsystems should result in the emergence of multiple EIT windows at different values of coupling beam detuning. It can be shown that there are only three possible values of coupling beam detuning for which the required two-photon resonance condition for EIT can be achieved. This can be explained as follows.
Due to the different Land\'e g-factors associated with the ground and excited states (\textit{i.e.} $g_F$ values), the separations between their magnetic sublevels are different and are denoted as $\Delta_g$ (\textit{i.e.} $\Delta(|g_F|=1/2)$) and $\Delta_e$ (\textit{i.e.} $\Delta(|g_F|=2/3)$) respectively, as shown in Figure \ref{lvl} (b). Now, consider a particular $\Lambda$-subsystem of system (B) formed with the transitions $|F=1,m_{F}=1 \rangle \rightarrow |F'=2,m_{F'}=2 \rangle \leftarrow |F=2,m_{F}=1 \rangle$, where the probe and coupling beam detunings can be determined in terms of $\Delta_g$ and $\Delta_e$ as $\Delta_{p} = \Delta_{g} + 2\Delta_{e}$ and $\Delta_{c}=-\Delta_{g}+2\Delta_{e}$ respectively. The two-photon resonance condition becomes $\Delta_{p}-\Delta_{c}= 2\Delta_{g}$. Since the probe is kept resonant (\textit{i.e.} $\Delta_{p}=0$), the two-photon resonance condition required for EIT is achieved at $\Delta_{c}= -2\Delta_{g}$. A similar calculation for all the other $\Lambda$-subsystems yields three different values of coupling beam detuning, $\Delta_{c}= 0,\pm 2\Delta_{g}$, for which the EIT resonance condition is satisfied.
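The predicted positions of the split EIT peaks follow directly from Eq.\ (\ref{eq}). A minimal sketch in Python ($\mu_B$ and $h$ in SI units; $|g_F|=1/2$ for the ground states as stated in the text):

```python
MU_B = 9.274e-24        # Bohr magneton (J/T)
H_PLANCK = 6.626e-34    # Planck's constant (J s)

def zeeman_splitting_hz(g_f, b_gauss):
    """Adjacent-sublevel splitting Delta(|g_F|, B) = |g_F| mu_B B / h, in Hz."""
    return abs(g_f) * MU_B * (b_gauss * 1e-4) / H_PLANCK

def eit_detunings_hz(b_gauss):
    """Coupling-beam detunings satisfying two-photon resonance with the
    probe kept on resonance: Delta_c = 0, +/- 2*Delta_g (|g_F| = 1/2)."""
    dg = zeeman_splitting_hz(0.5, b_gauss)
    return (-2 * dg, 0.0, 2 * dg)
```

At $B_{||}=3.4$ G this gives $2\Delta_g\approx4.8$ MHz between the central and side peaks, of the order of the splittings seen in the recorded spectra.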
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{spectrum_PcB.pdf}
\caption{The relative transmission as a function of coupling beam detuning for both the $\Lambda$-systems in the presence of a longitudinal magnetic field $B_{||}=3.4\ G$. Plots \textit{(i)}-\textit{(v)} are for different coupling beam powers at a fixed probe beam power of 0.08 mW.}
\label{spectraPcB}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{slope_B.pdf}
\caption{(color online) The transmitted probe signal as a function of coupling beam detuning for $\Lambda$-system (B) in the absence and in the presence of the longitudinal magnetic field (\textit{i.e.} $B_{||}=0\ G$ (black curve) and $B_{||}=3.4\ G$ (red curve) respectively). The signals are for a coupling beam power of 20 mW and a probe beam power of 0.08 mW. The signal shown in black (lower curve) is shifted down by 10 mV for better visibility. The inset depicts the derivative of the transmitted signal with respect to the coupling beam detuning (\textit{i.e.} the piecewise slope of the signal) as a function of coupling beam detuning. The slope in the presence of the magnetic field is 2.7 times higher than the slope in its absence.}
\label{slope}
\end{figure}
The behavior of these split EIT signals has also been studied by varying the coupling beam power, while keeping the magnetic field strength fixed at $B_{||}=$3.4 G and the probe beam power at $P_{p}=0.08$ mW. The results of this study are shown in Figure \ref{spectraPcB} for both the systems (A) and (B). It is evident from Figure \ref{spectraPcB} that the separation between the split EIT peaks is independent of the coupling beam power, whereas the strength of the EIT peaks increases with an increase in the coupling beam power. It can be noted that for coupling powers greater than 4 mW, the central EIT peaks show higher amplitude with steeper slopes as compared to the EIT peaks observed in the absence of the magnetic field. As an example, for a coupling power of 20 mW in system (B), both signals are shown in Figure \ref{slope}. The slope of the central EIT peak in the presence of the magnetic field is $\sim$ 2.7 times higher than that of the EIT peak in its absence, as is clearly visible from the inset of Figure \ref{slope}. Because of the higher slope, the magnetic-field-induced split EIT signals may be a better choice for accurate frequency locking of lasers. In addition, the side split peaks can be utilized to lock a laser at frequencies controlled by the magnetic field strength.
\subsection{Numerical Simulations}
\label{sec:simulation}
To model the experimental results, all the hyperfine states of the $D_{2}$ line transition of the $^{87}Rb$ atom were considered and the resulting six-level system is shown in Figure \ref{lvl}(a). Here the levels $|0 \rangle$ and $|1 \rangle$ represent the two ground state hyperfine levels $F=1$ and $F=2$, and the levels $|2 \rangle$, $|3 \rangle$, $|4 \rangle$ and $|5 \rangle$ represent the excited state hyperfine levels $F'=0$, $F'=1$, $F'=2$ and $F'=3$ respectively. The $\Lambda$-systems (A) and (B) involve the transitions $|0 \rangle \rightarrow |3 \rangle \leftarrow |1 \rangle$ and $|0 \rangle \rightarrow |4 \rangle \leftarrow |1 \rangle$ respectively. Therefore, as an example, the Hamiltonian for $\Lambda$-system (B) with six levels ($|0 \rangle$ to $|5 \rangle$) interacting with the probe and coupling fields, after applying the rotating-wave approximation, can be constructed as,
\footnotesize
\begin{equation}\label{eq:6Lhamiltonian}
H=\begin{bmatrix}
\Delta_{p}-kv & 0 & \Omega_{p}^{02} & \Omega_{p}^{03} & \Omega_{p}^{04} & 0\\
0 & \Delta_{c}-kv & 0 & \Omega_{c}^{13} & \Omega_{c}^{14} & \Omega_{c}^{15}\\
\Omega_{p}^{20} & 0 & -2\delta_{42} & 0 & 0 & 0 \\
\Omega_{p}^{30} & \Omega_{c}^{31} & 0 & -2\delta_{43} & 0 & 0 \\
\Omega_{p}^{40} & \Omega_{c}^{41} & 0 & 0 & 0 & 0\\
0 & \Omega_{c}^{51} & 0 & 0 & 0 & 2\delta_{45}
\end{bmatrix},
\end{equation}
\normalsize
where $\Delta_{p}$ and $\Delta_{c}$ are the frequency detunings of the probe and coupling beams respectively from their resonances. The Rabi frequency for the transition $|i \rangle \rightarrow |j \rangle$ is denoted by $\Omega_{p,c}^{ij}$, where $p$ and $c$ denote the probe and coupling transitions, and the separation between the energy levels $|i \rangle$ and $|j \rangle$ is denoted by $\delta_{ij}$. The diagonal elements of $H$ arise from the atomic Hamiltonian and the off-diagonal terms represent the interaction of an atom with the probe and coupling fields. The term $kv$ accounts for the thermal motion of the atom, where $v$ is the velocity of an atom and $k$ is the wave vector of the electromagnetic fields. The evolution of this atomic system, in terms of the density matrix ($\rho$), can be described by the Lindblad master equation
\begin{equation} \label{eq:rho}
\dot\rho=-\frac{i}{\hbar}[H,\rho]+L(\rho).
\end{equation}
Here $L(\rho)$ is the Lindblad super-operator which incorporates the effect of spontaneous emission in the system. This Lindblad master equation yields a set of thirty-six time-dependent equations, and the steady-state solution of these equations is obtained numerically. The imaginary parts of the different coherences (\textit{i.e.} $Im(\rho_{02})$, $Im(\rho_{03})$ and $Im(\rho_{04})$) have been used to calculate the imaginary part of the linear susceptibility $Im(\chi^{(1)})$, which in turn provides the probe transmission. Due to the presence of a large number of atoms with a Maxwell-Boltzmann velocity distribution ($W(v)$) in the room-temperature Rb vapor cell, the imaginary part of the susceptibility was averaged over all velocities \cite{Peters:2012} as,
\begin{equation} \label{eq:chi}
Im(\chi^{(1)})=\int dv W(v) A \sum\limits_{j=2}^{4} \left(\frac{\mu_{0j}^{2}\times Im(\rho_{0j})}{\Omega_{p}^{0j}}\right),
\end{equation}
where $A=\frac{2n}{\epsilon_{0} \hbar}$, $\mu_{0j}$ is the dipole moment between states $|0\rangle$ and $|j\rangle$, $\epsilon_{0}$ is the vacuum permittivity, $\hbar$ is the reduced Planck constant and $n$ is the number density of $Rb$ atoms inside the vapor cell. The transmission of the medium is given as,
\begin{equation} \label{eq:T}
T=\exp[-\alpha l],
\end{equation}
where $\alpha=k\, Im(\chi^{(1)})$ is the absorption coefficient, $k$ is the wave vector of the electromagnetic field and $l$ is the length of the vapor cell. The values of the parameters $n$, $l$, $\Omega_p$, $\Omega_c$ and $\delta_{ij}$ used in the simulations were kept the same as the experimental values given in sections \ref{sec:expt} and \ref{sec:expt_result}. The results of the simulations are shown in Figure \ref{theory} for both the $\Lambda$-systems.
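The essence of the simulated EIT window can also be seen in a much smaller model. The sketch below is not the six-level, velocity-averaged calculation described above; it is the standard weak-probe, stationary-atom three-level $\Lambda$ formula with an assumed ground-state decoherence rate and an assumed optical depth, included only to illustrate how the transparency at two-photon resonance and its growth with $\Omega_c$ arise:

```python
import math

GAMMA = 2 * math.pi * 6e6      # excited-state linewidth (rad/s)
GAMMA_G = 2 * math.pi * 1e4    # assumed ground-state decoherence rate (rad/s)

def probe_absorption(delta_p, delta_c, omega_c):
    """Weak-probe absorption (arbitrary units) for a stationary atom in a
    three-level Lambda system: Re(1/D) with the coupling-dressed denominator D."""
    two_photon = delta_p - delta_c
    d = (GAMMA / 2 - 1j * delta_p) + (omega_c**2 / 4) / (GAMMA_G - 1j * two_photon)
    return (1 / d).real

def transmission(delta_c, omega_c, optical_depth=2.0):
    """Probe transmission T = exp(-alpha*l), with alpha normalized to the
    bare two-level value at line centre and an assumed optical depth."""
    bare = 1 / (GAMMA / 2)
    return math.exp(-optical_depth * probe_absorption(0.0, delta_c, omega_c) / bare)
```

Scanning `delta_c` traces a narrow transparency window at zero detuning whose depth and width grow with `omega_c`, qualitatively mirroring the simulated spectra; quantitative line shapes require the full six-level, Doppler-averaged treatment described above.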
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{theory_6lvl.pdf}
\caption{The simulated transmission of the probe beam as a function of coupling beam detuning for both the $\Lambda$-systems at different coupling beam powers, in the absence of magnetic field. Plot (\textit{i}) shows the probe transmission over a large range of coupling beam detuning, depicting the VSOP and EIT signals together. Plots (\textit{ii})-(\textit{iv}) show a magnified view of the EIT signal for different coupling powers.}
\label{theory}
\end{figure}
As evident from Figures \ref{theory} and \ref{vsop}, the results obtained from the numerical simulations reproduce the experimentally observed EIT signal along with the velocity selective optical pumping absorption dips for both the $\Lambda$-systems. The simulation results also show the increase in the EIT signal strength as well as in the asymmetry of the line shape with increasing coupling beam power. However, the simulated EIT signal is less asymmetric than the experimentally observed signal. Incorporating factors such as the ground state relaxation rate, laser linewidth, effect of collisions, and non-radiative decay of atoms within the hyperfine levels of the excited states, which are ignored in the simulations, may result in better agreement between the experimental and simulation results \cite{Figueroa:2006,Lu:1997,Ghosh:2009}. The numerical results in Figure \ref{theory} also show that system (B) exhibits a stronger EIT than system (A) for a given coupling power.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{simulation_Ba.pdf}
\caption{The simulated transmission of the probe beam as a function of coupling beam detuning for $\Lambda$-system (B) in the presence of a magnetic field. Curves \textit{(i)-(iii)} show the variation in the separation of the split EIT peaks with the magnetic field strength for a fixed coupling beam power of 4 mW. Curves \textit{(iv)-(vi)} show the variation in the amplitude and line shape of the split EIT peaks with the coupling beam power at a fixed magnetic field strength of 3.4 G.}
\label{sim_B}
\end{figure}
In order to model the observed EIT signals in the presence of a magnetic field for both the systems (A) and (B), the splitting of the hyperfine levels (\textit{i.e.} 11 sublevels for system (A) and 13 sublevels for system (B)) should also be considered. The Hamiltonian for $\Lambda$-system (B) with all 13 Zeeman sublevels interacting with the applied probe and coupling beams can be written as,
\begin{multline} \label{eq:H:mf}
H=\hbar\sum\limits_{i=0}^{12}\omega_{ii} |i \rangle \langle i|+\frac{\hbar}{2}\sum\limits_{i=0}^{2}\sum\limits_{j=8}^{12} \Omega_p^{i,j} |i \rangle \langle j|+\\ \frac{\hbar}{2}\sum\limits_{i=3}^{7}\sum\limits_{j=8}^{12}\Omega_c^{i,j} |i \rangle \langle j|+{\rm H.c.},
\end{multline}
where $i$ and $j$ correspond to the different Zeeman hyperfine ground states ($m_{F}$) and excited states ($m_{F'}$) respectively, and the corresponding transitions between the $i^{th}$ and $j^{th}$ states follow the dipole selection rule $m_{F'}-m_{F}=\pm 1$ (Figure \ref{lvl}(b)). Here $\hbar \omega_{ii}$ represents the energy of the $i^{th}$ state.
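As an illustration of how such a Hamiltonian is assembled in practice, the sketch below builds the $13\times13$ matrix for system (B). The state ordering, the schematic Zeeman shifts $m\Delta_{g,e}$ on the diagonal (the true $g_F$ signs differ between $F=1$ and $F=2$), and the use of a single Rabi frequency per field instead of $m$-dependent Clebsch-Gordan factors are simplifying assumptions:

```python
def build_zeeman_hamiltonian(delta_g, delta_e, omega_p, omega_c):
    """13x13 Hamiltonian (units of hbar) for Lambda-system (B) with Zeeman
    sublevels.  Indices 0-2: F=1 (m=-1..1); 3-7: F=2 (m=-2..2);
    8-12: F'=2 (m=-2..2).  Couplings obey m_F' - m_F = +/- 1."""
    m = [-1, 0, 1] + list(range(-2, 3)) + list(range(-2, 3))
    h = [[0.0] * 13 for _ in range(13)]
    for i in range(13):
        # schematic Zeeman shifts m*Delta; true g_F signs differ between F=1 and F=2
        h[i][i] = m[i] * (delta_e if i >= 8 else delta_g)
    for i in range(0, 3):          # probe field couples F=1 <-> F'=2
        for j in range(8, 13):
            if abs(m[j] - m[i]) == 1:
                h[i][j] = h[j][i] = omega_p / 2
    for i in range(3, 8):          # coupling field couples F=2 <-> F'=2
        for j in range(8, 13):
            if abs(m[j] - m[i]) == 1:
                h[i][j] = h[j][i] = omega_c / 2
    return h
```

Counting the non-zero couplings reproduces the multiple $\Lambda$-subsystems discussed above: six probe and eight coupling transitions survive the $\Delta m=\pm1$ selection rule.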
The Lindblad master equation (\ref{eq:rho}) was solved using the Hamiltonian of equation (\ref{eq:H:mf}) to calculate the probe transmission in the presence of the magnetic field. Numerically obtained values of the probe transmission $T$ as a function of coupling beam detuning for different values of the magnetic field strength and coupling beam power are shown in Figure \ref{sim_B}. The variation in the peak separation with the applied magnetic field obtained numerically (Figure \ref{sim_B} (i)-(iii)) is in good agreement with the experimental observations (Figure \ref{spectraB} (ii)-(iv) for $\Lambda$-system (B)). The occurrence of three split EIT peaks in the simulation results also agrees with the analytical explanation provided in the earlier section.
In the numerical simulations, the variation in the line shape and amplitude of the split EIT peaks with the coupling beam power is shown in Figure \ref{sim_B} (curves (iv)-(vi)), for a fixed magnetic field strength of 3.4 G. As the coupling power is increased, the simulated spectra show broadening of all three EIT peaks. These results are also in qualitative agreement with the experimental observations. However, better agreement between the theoretical and experimental results may be obtained by incorporating various effects such as collisions \cite{Ghosh:2009}, spin-exchange relaxation \cite{Shuker:2008}, etc., in the theoretical model. Nonetheless, the presented numerical results provide insight into the EIT phenomenon in these multi-level systems and can be employed to investigate other similar systems.
\section{CONCLUSION}
\label{sec:concl}
The two $\Lambda$-systems in the $^{87}Rb$ atom, $|5^{2}S_{1/2} F=1\rangle \rightarrow |5^{2}P_{3/2} F'=1\rangle \leftarrow |5^{2}S_{1/2} F=2\rangle$ and $|5^{2}S_{1/2} F=1\rangle \rightarrow |5^{2}P_{3/2} F'=2\rangle \leftarrow |5^{2}S_{1/2} F=2\rangle$, denoted as systems (A) and (B) respectively, have been investigated for their EIT characteristics by varying the coupling beam power and the magnetic field strength. The experimentally observed results have been explained with our multi-level models of the EIT phenomenon. Although the observed linewidth of the EIT signals remains sub-natural even at higher coupling power for both systems, the EIT signals for system (B) have higher strength and a more symmetric line shape than the corresponding signals for system (A). At higher coupling beam power and in the presence of the magnetic field, the central EIT peak possesses a steeper slope than the peak observed in the absence of the magnetic field. These signals with steeper slopes can be extremely useful for tight laser frequency locking. Locking of a laser at a frequency detuning controlled by the magnetic field can be achieved using the dependence of the separation between the split peaks on the magnetic field strength. The observed results indicate that system (B) is the more promising from an application point of view.
\section{ACKNOWLEDGMENTS}
We are thankful to S. Singh and V. Singh for their help during the experiment. Charu Mishra is grateful for financial support from RRCAT, Indore under the HBNI, Mumbai program.
\section{Introduction}
In principle the properties of mesons and baryons should be correctly
described by quantum chromodynamics (QCD).
However, apart from some lattice gauge calculations, this is
practically impossible at the moment.
As a substitute, simple quark models, in which
hadrons are viewed as bound states of constituent quarks, are quite
successful (for a review see \cite{rev}).
The simplest are the nonrelativistic (NR)
ones. The potential
used here normally consists of a Coulomb term to
account for the perturbative
one-gluon exchange (OGE), and a linear potential, possibly with an
additional constant, to
incorporate the nonperturbative confinement. These models work very well
for the heavier charmonia and bottomonia. For the lighter mesons, however,
it is clear that relativistic corrections must be included. Roughly
speaking, this can be achieved in two ways. The most direct way is to
replace all NR expressions by their relativistic counterparts.
Spin dependences, like spin-spin, spin-orbit and tensor couplings, are
included by hand. The second way, which, from a theoretical point of view,
is more consistent,
is to start from a framework that is manifestly Lorentz covariant from the
outset. The most natural representation for such a framework, like the
Bethe-Salpeter equations, is momentum space. Traditionally, however, and
also because of
the belief that it would be impossible to describe a confining potential
in momentum space, the equations are normally transformed to
configuration space.
In recent years \cite{Vary,Heiss,gross,Norbury,maung,h2},
however, it has been realized that
there is no obstacle to defining a confining potential in momentum space,
even in the relativistic case. Therefore a growing interest has arisen in
studying quark models directly in this more favourable representation.
This will also be the subject of the present paper.
The theory introduced by de Groot and Ruijgrok \cite{GR,RA,RG,HR,h1}
will be used as a model to calculate the masses of all known mesons, with the
exception of the self-conjugate light unflavoured ones. This Lorentz
covariant theory is defined via a natural generalization of the NR
Lippmann-Schwinger equation and does not require further specification in
the course of its solution. It does not start from the Bethe-Salpeter
equations. The main difference is that in the intermediate states all
particles remain on their mass shell, and that total three velocity rather
than four momentum is conserved. Negative energy states are not
included. Retardation
effects are incorporated in a simple and unambiguous way. This is to be
contrasted with the Bethe-Salpeter equations,
where different three-dimensional
quasi-potential reductions lead to different retardations (see e.g.
Sec.\ (2.3) of \cite{rev}).
The theory has proven to give the correct fine structure
formulas for the hydrogen atom and positronium \cite{h1}.
In Sec. \ref{sec:theory} a brief summary of the
theory will be given.
In Sec.\ \ref{sec:pot} a number of quark-antiquark potentials
will be discussed. A modification
of the Richardson potential, to acount
for the OGE, as well as a relativistic generalization of the constant
potential is defined.
An important feature of the mesons consisting of light quarks is the
appearance of linear Regge trajectories. Their origin in the light of
the present theory is discussed in Sec.\ \ref{sec:regge}.
The numerical method used will be described in Sec.\
\ref{sec:num}, and its results
will be further discussed in Sec. \ref{sec:discus}.
The paper ends with some conclusions.
\section{Formulation of the Theory}\label{sec:theory}
In this section a summary of the theory, as introduced by de Groot and
Ruijgrok \cite{GR,RA,RG,HR}, with the modifications made in \cite{h1},
will be given.
\subsection{The general framework}
A state $\alpha$ of a quark (mass $m_1$) and an antiquark (mass $m_2$),
can be characterized by $(p_1\lambda_1,p_2\lambda_2)$, where
$p_1$ and
$p_2$ are the four-momenta of the quark and antiquark, and $\lambda_1$
and $\lambda_2$ are
their helicities. Both particles are supposed to remain on
their mass shell, which means that
$p_i^0=\sqrt{|{\bf p}_i|^2+m_i^2}\equiv E_i$, $i=1,2$.
The theory is constructed in such a way that
in the interaction the three-velocity
\begin{equation}\label{veldef}
{\bf v}=\frac{{\bf p}_1+{\bf p}_2}{p_1^0+p_2^0},
\end{equation}
is conserved. This means that the quark-antiquark potential
$V_{\beta\alpha}$ for a transition from an initial state
$\alpha=(p_1\lambda_1,p_2\lambda_2)$ to a final state
$\beta=(p_1^\prime\lambda_1^\prime,p_2^\prime\lambda_2^\prime)$ contains
only non-zero elements if ${\bf v}^\prime={\bf v}$. In the center of
momentum system (cms) this velocity conservation coincides with
three-momentum conservation,
i.e., ${\bf p}_1=-{\bf p}_2\equiv {\bf p}$ and
${\bf p}_1^\prime=-{\bf p}_2^\prime\equiv {\bf p}^\prime$.
In this frame therefore the potential can be written as
\begin{displaymath}
V_{\beta\alpha}=V_{\lambda_1^\prime\lambda_2^\prime,\lambda_1\lambda_2}
({\bf p}^\prime,{\bf p}).
\end{displaymath}
In the NR case the momentum dependence of a central potential appears
in the form $|{\bf q}|^2=|{\bf p}^\prime-{\bf p}|^2$.
In the relativistic case this expression must be replaced by a covariant
one. Here the usual replacement
$|{\bf q}|^2\rightarrow -q^2$ cannot be used
because, due to the lack of four-momentum conservation, the
loss of momentum $q_1=p_1-p_1^\prime$ by the quark, will in general
differ from the gain of momentum $q_2=p_2^\prime-p_2$ by the antiquark.
Instead the following obvious and symmetrical substitution is made
\begin{equation}\label{q1q2}
|{\bf q}|^2\;\rightarrow\;-q_1\cdot q_2=
|{\bf p}^\prime-{\bf p}|^2-\tau(p^\prime,p),
\end{equation}
where the term $\tau$, defined by
\begin{equation}\label{retard}
\tau(p^\prime,p)=(E_1-E_1^\prime)(E_2^\prime-E_2),
\end{equation}
is responsible for retardation effects.
The theoretical justification for this replacement is twofold.
In the first place it can be shown \cite{h1} to be consistent with
velocity conservation. The second and more practical justification is,
that in the
case of the Coulomb potential, $\tau$
automatically generates the correct form for
the Breit interaction (see \cite{h1}).
In the equal mass case $\tau=-(E-E^\prime)^2$, which
is exactly opposite to the retardation
used by Gross and Milana \cite{gross} and
Maung, Kahana and Norbury \cite{maung}.
The difference in sign will give
the wrong sign for the Breit interaction,
which in turn will effect the
fine structure of positronium. In \cite{h2} it was shown
that Eq.\ (\ref{retard}) gives rise to the correct
positronium fine structure
formula.
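The substitution (\ref{q1q2}) can be verified numerically for on-shell cms kinematics: with $p_1=-p_2$ and $p_1^\prime=-p_2^\prime$, the identity $-q_1\cdot q_2=|{\bf p}^\prime-{\bf p}|^2-\tau$ holds exactly. A small Python check with arbitrarily chosen masses and momenta:

```python
import math

def on_shell_energy(p, mass):
    """E = sqrt(|p|^2 + m^2) for a three-momentum p."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2 + mass**2)

def minkowski_dot(a, b):
    """Four-vector product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

m1, m2 = 0.33, 1.5                           # arbitrary quark masses
p, pp = (0.0, 0.0, 1.0), (0.3, 0.4, 0.5)     # cms: p1 = -p2 = p, p1' = -p2' = p'

e1, e2 = on_shell_energy(p, m1), on_shell_energy(p, m2)
e1p, e2p = on_shell_energy(pp, m1), on_shell_energy(pp, m2)
q1 = (e1 - e1p,) + tuple(a - b for a, b in zip(p, pp))   # q1 = p1 - p1'
q2 = (e2p - e2,) + tuple(a - b for a, b in zip(p, pp))   # q2 = p2' - p2 = p - p' in cms
tau = (e1 - e1p) * (e2p - e2)                            # Eq. (retard)
q_sq = sum((a - b)**2 for a, b in zip(pp, p))            # |p' - p|^2
lhs, rhs = -minkowski_dot(q1, q2), q_sq - tau
```

Since the spatial parts of $q_1$ and $q_2$ coincide in the cms, the identity is exact, as the check confirms to machine precision.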
The relativistic wave equation from which
the mass $M$ of the meson is
to be solved is in the cms given by ($\hbar=c=1$)
\begin{eqnarray}\label{waveequation}
[E_1+E_2-M]\Psi_{\lambda_1\lambda_2}({\bf p})+&&\\
\sum_{\lambda_1^\prime\lambda_2^\prime}
\int W_{\lambda_1^\prime\lambda_2^\prime,\lambda_1\lambda_2}
({\bf p}^\prime,{\bf p})\Psi_{\lambda_1^\prime\lambda_2^\prime}({\bf
p}^\prime)
\left[\frac{m_1m_2}{E_1^\prime E_2^\prime}\right]d{\bf p}^\prime&=&0,
\nonumber
\end{eqnarray}
where the wave function $\Psi_{\lambda_1\lambda_2}({\bf p})$
is normalized as
\begin{equation}\label{fnorm}
\sum_{\lambda_1\lambda_2}\int |\Psi_{\lambda_1\lambda_2}({\bf p})|^2
\left[\frac{m_1m_2}{E_1E_2}\right]d{\bf p}=1
\end{equation}
and $V=4m_1m_2W$. The quantity $W$ is introduced for convenience, because
it reduces in the NR limit to the NR potential. In this limit Eq.\
(\ref{waveequation}) reduces to the NR Schr\"odinger equation in
momentum space.
\subsection{Decomposition}
The interaction $W$ used in this paper can be decomposed
into a vector part $V_V$ and a scalar part $V_S$, which is in the cms
given by
\begin{eqnarray}\label{vvs}
W({\bf p}^\prime,{\bf p})&=&
\overline{u}_{\lambda_1^\prime}({\bf p}_1^\prime)
\overline{v}_{\lambda_2}({\bf p}_2)
\left[\gamma_\mu^{(1)}\cdot\gamma^{(2)\mu}
V_V({\bf p}^\prime,{\bf p}) \right.\nonumber\\
& &\left.+\openone^{(1)}\openone^{(2)} V_S({\bf p}^\prime,{\bf p})\right]
v_{\lambda_2^\prime}
({\bf p}_2^\prime)
u_{\lambda_1}
({\bf p}_1).
\end{eqnarray}
Here the Dirac spinors $u$ and $v$ for the quark and antiquark, respectively, are
defined by
\begin{eqnarray}\label{uvdef}
u_\lambda({\bf p})&=& N \left[\begin{array}{c}1\\2\lambda b\end{array}
\right]\chi(\lambda,\frac{{\bf p}}{p}),\nonumber\\
v_\lambda({\bf p})&=& N \left[\begin{array}{c}-2\lambda b\\1\end{array}
\right](-i)\sigma_2\chi^{*}(\lambda,\frac{{\bf p}}{p}),\nonumber
\end{eqnarray}
with $N=\sqrt{(E+m)/(2m)}$, $b=p/(E+m)$, and $\chi(\lambda,{\bf p}/p)$ the
helicity spinor with helicity $\lambda$.
For the two-particle helicity states we use the conventions introduced by
Jacob and Wick \cite{jacob}.
Potential (\ref{vvs}) partially decouples with respect to the states
$|p;JM;\lambda_1\lambda_2\rangle$, which are defined by Eq.\ (18) of
\cite{jacob}, giving
\begin{eqnarray}\label{Jdec}
\langle p^\prime;J^\prime M^\prime;\lambda_1^\prime\lambda_2^\prime|W|
p;JM;\lambda_1\lambda_2\rangle&=&\nonumber\\
\delta_{JJ^\prime}\delta_{MM^\prime}
\langle\lambda_1^\prime\lambda_2^\prime|W^J(p^\prime,p)|
\lambda_1\lambda_2\rangle.&&
\end{eqnarray}
Because of conservation of parity, $W$ further decomposes into two $2\times
2$ submatrices, each having a definite parity. The subspace spanned by
\begin{eqnarray}\label{triplet}
|t_1\rangle&=&\frac{1}{\sqrt 2}\left[|\frac{1}{2},\frac{1}{2}\rangle+
|-\frac{1}{2},-\frac{1}{2}\rangle\right],\\
|t_2\rangle&=&\frac{1}{\sqrt 2}\left[|\frac{1}{2},-\frac{1}{2}\rangle+
|-\frac{1}{2},\frac{1}{2}\rangle\right],\nonumber
\end{eqnarray}
has parity $(-1)^{J+1}$. It contains the triplet $J=l\pm1$ states.
The complementary subspace, spanned by
\begin{eqnarray}\label{singlet}
|s_1\rangle&=&\frac{1}{\sqrt 2}\left[|\frac{1}{2},\frac{1}{2}\rangle-
|-\frac{1}{2},-\frac{1}{2}\rangle\right],\\
|s_2\rangle&=&\frac{1}{\sqrt 2}\left[|\frac{1}{2},-\frac{1}{2}\rangle-
|-\frac{1}{2},\frac{1}{2}\rangle\right],\nonumber
\end{eqnarray}
has parity $(-1)^J$ and contains the $J=l$ singlet and triplet states.
Only in the equal-mass case does this subspace split further into two
$1\times1$ subspaces.
Let
\begin{equation}\label{vdec}
V^{nJ}_{ij}=p^\prime p\frac{m_1m_2}{\sqrt{E_1^\prime E_2^\prime E_1E_2}}
\langle n_i|W^J(p^\prime,p)|n_j\rangle,\;\;n=s,t,
\end{equation}
then eigenvalue Eq.\ (\ref{waveequation}) can be cast in the form
(suppressing the quantum numbers $J$, $M$ and $s$ or $t$)
\begin{equation}\label{decompeq}
\left[E_1+E_2-M\right]f_i(p)+\sum_j\int_0^\infty V_{ij}(p^\prime,p)
f_j(p^\prime)dp^\prime=0.
\end{equation}
In appendix \ref{app:partwave} explicit formulas for
$V_{ij}^{nJ}=(V_V)_{ij}^{nJ}+(V_S)_{ij}^{nJ}$
are given.
The reduced wave function $f$ is normalized to
\begin{equation}\label{normred}
\sum_i\int_0^\infty|f_i(p)|^2dp=1.
\end{equation}
\section{The quark-antiquark potential}\label{sec:pot}
The quark-antiquark potential must contain a one-gluon-exchange (OGE)
part to account for the short-range interaction, and a confining part for
the long-range interaction. It is generally believed that $V_{\rm OGE}$
should have a vector Lorentz
structure, while about the confining part $V_{\rm CON}$ there is no
consensus. Some believe that it must have a purely scalar structure,
while others admit a mixture between scalar and vector coupling.
We will adopt this last point of view.
The potential can therefore
be written in the form (see Eq.\ (\ref{vvs}))
\begin{eqnarray}\label{vqq}
V_V&=&V_{\rm OGE}+\epsilon V_{\rm CON},\nonumber\\
V_S&=&(1-\epsilon) V_{\rm CON},
\end{eqnarray}
where $\epsilon$ is the scalar-vector mixing of the confining potential.
For $V_{\rm CON}$
the relativistic generalization of the linear potential, as described in
\cite{h2}, plus a constant potential (to be defined below) was used.
This generalization is defined in a formal way and does not introduce any
singularities.
For the OGE two different potentials were used:
the Richardson potential \cite{rich} and
a modified version of this potential,
both containing a running coupling constant (see Sec.\
\ref{sec:runcop}).
The Richardson potential contains a linear part by itself.
Therefore in this case
(from now on denoted by I), $\epsilon=0$ was chosen, so that
the confinement in the
vector direction is completely determined by the Richardson
potential. The modified Richardson potential has no linear part.
Therefore in this case (denoted by II) a nonzero $\epsilon$ was admitted.
In all these potentials the
NR momentum transfer $|{\bf q}|^2$ was replaced by
$-q_1\cdot q_2$ (see Eq.\ (\ref{q1q2})) and in this way
retardation effects were included everywhere. For notational convenience,
the quantity
\begin{equation}\label{Qdef}
Q\equiv \sqrt{-q_1\cdot q_2}
\end{equation}
is introduced. In the NR limit it reduces to $|{\bf q}|$.
\subsection{The one-gluon-exchange:\protect\\running coupling constant}
\label{sec:runcop}
According to the renormalization scheme of perturbative QCD, for large
momentum transfer, the
running coupling constant $\alpha_s$
as it occurs in the one-gluon exchange
(the factor $\frac{4}{3}$ arises from color averaging)
\begin{equation}\label{alphas}
V_{\rm OGE}= -\frac{4}{3}\frac{\alpha_s(Q^2)}{2\pi^2Q^2}
\end{equation}
is given by (see also Eq.\ (B.2) of \cite{data})
\begin{equation}\label{alphaqcd}
\alpha_s(Q^2)=\frac{a_n}{X_n}\left[1-b_n\frac{\log(X_n)}{X_n}+{\cal O}
\left(\frac{\log^2(X_n)}{X_n^2}\right)\right].
\end{equation}
Here
\begin{displaymath}
a_n=\frac{12\pi}{(33-2n)},\hspace{0.5cm}
b_n=\frac{6(153-19n)}{(33-2n)^2},
\end{displaymath}
\begin{displaymath}
X_n=\log\left[Q/\Lambda^{(n)}_{\overline{\rm MS}}\right]^2,
\end{displaymath}
and $n$ is the number of quarks with a mass smaller than $Q$.
The subscript $\overline{\rm MS}$ denotes that the renormalization is
performed according to the modified minimal subtraction scheme.
The connection between the different $\Lambda^{(n)}_{\overline{\rm MS}}$'s
is given by Eq.\ (B.4) of \cite{data}.
The typical momentum transfer within a meson is on the order of one GeV,
so in this region $n=3$.
Therefore, Eq.\ (\ref{alphaqcd}) with $n=3$ is in many cases used as an
approximation for all
large momentum transfers. In addition the $b_3$ term is almost always
neglected. But this term is not small at all: in the $Q$-region from 1 to
5 GeV its contribution is about 25\%. Even for very high momentum
transfers its contribution is substantial: $\sim {\rm 15}\%$ for $Q={\rm 50\;
GeV}$.
However, it appears that when $\Lambda_{\overline{\rm MS}}^{(5)}$
rather than $\Lambda_{\overline{\rm MS}}^{(3)}$ is used, a fairly
good approximation of Eq.\ (\ref{alphaqcd})
for large $Q$ is obtained by putting
\begin{equation}\label{alsapp}
\alpha_s\approx\frac{a_3}{X_5}
\end{equation}
(see the curve ``standard
approximation'' of Fig. \ref{fig1}).
For $Q={\rm 5\;GeV}$ the deviation from Eq.\ (\ref{alphaqcd})
is $\sim {\rm 7}\%$,
and for $Q\sim {\rm 50\;GeV}$ there is no detectable
difference. Also for smaller $Q$
the agreement is better, but of course in this region the validity of
Eq.\ (\ref{alphaqcd}) is doubtful. Nevertheless we think that
these considerations show
that there is no theoretical necessity
to stick to the value of $a_3$ in Eq.\
(\ref{alsapp}): a small deviation from it also results in a good
running coupling constant for large $Q$.
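The size of the $b_n$ correction in Eq.\ (\ref{alphaqcd}) is easy to check
numerically. The following Python sketch is not part of the original
calculation; the value $\Lambda^{(3)}_{\overline{\rm MS}}={\rm 0.3\;GeV}$ is
an illustrative assumption only.

```python
import math

def alpha_s(Q, Lam, n=3, two_loop=True):
    # Running coupling of Eq. (alphaqcd); two_loop=False drops the b_n term.
    a_n = 12 * math.pi / (33 - 2 * n)
    b_n = 6 * (153 - 19 * n) / (33 - 2 * n) ** 2
    X_n = math.log((Q / Lam) ** 2)
    correction = -b_n * math.log(X_n) / X_n if two_loop else 0.0
    return (a_n / X_n) * (1.0 + correction)
```

With this choice of $\Lambda$, the $b_3$ term lowers $\alpha_s$ by roughly
25--30\% for $Q$ between 1 and 5 GeV, in line with the estimate above.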
For small positive $Q$-values Eqs.\ (\ref{alphaqcd}) and (\ref{alsapp})
diverge.
To remedy this, Richardson \cite{rich}
proposed a potential in which the
divergence is shifted to the origin by making the replacement
$Q^2\rightarrow Q^2+\Lambda^2$ in Eq.\ (\ref{alsapp}):
\begin{eqnarray}\label{richard}
V_R(Q^2)&=&-\frac{\alpha_0}{2\pi^2Q^2\log[1+\frac{Q^2}{\Lambda^2}]}\\
&=& -\frac{\alpha_0\Lambda^2}{2\pi^2Q^4}
-\frac{\alpha_0}{4\pi^2Q^2}+\ldots
\;\mbox{for}\;Q\rightarrow 0.\nonumber
\end{eqnarray}
The color factor $\frac{4}{3}$ is absorbed in $\alpha_0$.
In Fig.\ \ref{fig1} the running coupling constant,
defined via Eq.\ (\ref{alphas}) with
$V_{\rm OGE}=V_{\rm R}$, is
compared to the QCD formula
for $\alpha_0=\frac{4}{3}a_3=16\pi/27={\rm 1.862}$.
The singularity for $Q=0$ results from a linear term in the
potential with string tension
$\frac{1}{2}\alpha_0\Lambda^2$ (see \cite{h2} or Sec.\ \ref{sec:lin}).
When the singularity is subtracted, the running coupling
constant saturates to the value $\frac{1}{2}a_3=0.698$.
{}From Fig.\ \ref{fig1} it is seen that for
momentum transfers starting from 2 GeV, a much better approximation to
Eq.\ (\ref{alphaqcd}) is obtained, if $\alpha_0$ is slightly decreased.
A value of
$\alpha_0={\rm 1.750}$
turns out to be a very good choice.
A different way to remove the singularities is to also make the
replacement $Q^2\rightarrow Q^2+\Lambda^2$ in $1/Q^2$ itself. This
results in
\begin{eqnarray}\label{vh}
V_M(Q^2)&=&-\frac{\alpha_0}{2\pi^2[Q^2+\Lambda^2]
\log[1+\frac{Q^2}{\Lambda^2}]}
\\&=&-\frac{\alpha_0}{2\pi^2Q^2}+\ldots\;\mbox{for}\;
Q\rightarrow 0.\nonumber
\end{eqnarray}
This modified Richardson potential $V_M$
does not contain a linear part. The
coupling constant saturates to the value $a_3$.
The running coupling constant defined via this potential
is given in Fig.\ \ref{fig1} for $\alpha_0=\frac{4}{3}a_3$. {}From
this figure it is seen that this choice for $\alpha_0$ gives a good
representation of the QCD formula for moderate $Q$ values.
The spinless partial waves $W_R$ and $W_M$ of $V_R$
and $V_M$, defined by Eq.\
(\ref{vnospin}), are given in appendix \ref{app:rich}.
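The small-$Q$ expansions of Eqs.\ (\ref{richard}) and (\ref{vh}) can be
verified directly. A minimal Python sketch follows; the values
$\alpha_0={\rm 1.862}$ and $\Lambda={\rm 0.4\;GeV}$ are illustrative
assumptions, not fit results.

```python
import math

def V_R(Q2, alpha0, Lam):
    # Richardson potential, Eq. (richard)
    return -alpha0 / (2 * math.pi**2 * Q2 * math.log(1 + Q2 / Lam**2))

def V_M(Q2, alpha0, Lam):
    # modified Richardson potential, Eq. (vh)
    return -alpha0 / (2 * math.pi**2 * (Q2 + Lam**2) * math.log(1 + Q2 / Lam**2))

alpha0, Lam = 1.862, 0.4
Q = 1e-3  # GeV, deep in the Q -> 0 region
# V_R develops the confining 1/Q^4 singularity ...
string_coeff = -(Q**2) ** 2 * V_R(Q**2, alpha0, Lam)  # -> alpha0*Lam^2/(2*pi^2)
# ... while V_M keeps only the Coulomb-like 1/Q^2 behaviour
coulomb_coeff = -(Q**2) * V_M(Q**2, alpha0, Lam)      # -> alpha0/(2*pi^2)
```

The first coefficient reproduces the $-\alpha_0\Lambda^2/(2\pi^2Q^4)$ term of
Eq.\ (\ref{richard}); the second the $-\alpha_0/(2\pi^2Q^2)$ term of Eq.\
(\ref{vh}).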
\subsection{The confining:\protect\\ ``linear + constant'' potential}
\label{sec:lin}
For the confinement a relativistic generalization of a linear plus
constant potential was used:
\begin{displaymath}
V_{\rm CON}=V_{\rm LIN}+V_{\rm C}.
\end{displaymath}
As was already mentioned in the introduction, it was
for a long time believed that
a linear potential could not correctly be described in momentum space.
A naive consideration
shows that it behaves like $-1/Q^4$, which results in
an ill-defined bound state equation. A few years ago
it was shown \cite{Vary,Heiss,Norbury}
that this singularity for the NR case is only
apparent. For the relativistic case different methods were employed
\cite{gross,maung,h2} to solve this problem.
The one used in this paper is defined by \cite{h2}
\begin{equation}\label{vlindef}
V_{\rm LIN}=\lim_{\eta\downarrow 0}\frac{\partial^2}{\partial\eta^2}
\frac{\lambda}{2\pi^2}\left[\frac{1}{Q^2+\eta^2}\right],
\end{equation}
where the color factor $\frac{4}{3}$ is absorbed in the string tension
$\lambda$.
In \cite{h2} it was shown that the limit exists in a distributional sense.
The result was that the integral in Eq.\ (\ref{decompeq}) is replaced by
\begin{equation}\label{principal}
-\hspace{-0.4cm}\int_0^\infty\left[ V_{ij}^{(nJ)}(p^\prime,p)f_j(p^\prime)
+\frac{4p^2 C_{ij}(p)}{(p^{\prime2}-p^2)^2} f_j(p)\right]dp^\prime.
\end{equation}
Here $V^{nJ}_{ij}$, $n=s,t$, is the naive pointwise limit obtained from
Eq.\ (\ref{vlindef}). The $1/(p^\prime-p)^2$ singularity is removed by the
quantity
\begin{equation}\label{Cdef}
C_{ij}(p)=\lim_{p^\prime\rightarrow p}\left[-[p^\prime-p]^2
V_{ij}^{(nJ)}(p^\prime,p)\right].
\end{equation}
The resulting $1/(p^\prime-p)$ singularity
is handled by the principal value
integral (denoted by $-\hspace{-0.3cm}\int$).
It was shown that this subtraction is not just a trick to avoid
singularities, but is really generated by Eq.\ (\ref{vlindef}).
For a confining potential that consists of a mixture of
a scalar and vector Lorentz
structure (see Eq.\ (\ref{vqq})), the pointwise limits of the spinless
partial waves (see Eq.\ (\ref{vnospin})) are given by $W^l_V=\epsilon W^l_L$
and $W^l_S=(1-\epsilon)W^l_L$, where
\begin{equation}\label{Wnaive}
W^l_L(p^\prime,p)=\frac{\lambda}{\pi}R(p^\prime,p)\frac{Q_l^\prime(z_0)}
{p^\prime p}
\end{equation}
and also $R(p^\prime,p)$ is given in Appendix\ \ref{app:partwave}.
Here $z_0$ is defined by Eq.\ (\ref{z0}) and $Q_l$ is the Legendre function
of the second kind of order $l$.
The $1/(p^\prime-p)^2$ singularity of $W_L^l$ is determined by
\begin{displaymath}
-\frac{\lambda}{\pi}\frac{R(p,p)}{(p^\prime-p)^2-\tau(p^\prime,p)}.
\end{displaymath}
The retardation defined by Eq.\ (\ref{retard}) behaves around
$p^\prime\approx p$ like
\begin{displaymath}
\tau(p^\prime,p)=-\frac{p^2}{E_1E_2}\left(p^\prime-p\right)^2+\ldots
\end{displaymath}
and therefore contributes to the singularity. It follows that
$C_{ij}$ is given by
\begin{equation}\label{Clin}
C_{ij}(p)=\frac{\lambda}{\pi}\left[\epsilon+(1-\epsilon)\frac{m_1m_2}
{p_1\cdot p_2}\right]\delta_{ij},
\end{equation}
where $p_1\cdot p_2=E_1E_2+|{\bf p}|^2$ is the dot product of the
four-vectors $p_1$ and $p_2$.
Note that $C_{ij}$ depends neither on $J$
nor on the parity label $s$ or $t$. In addition it is a manifestly
Lorentz-covariant object.
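A quick numerical check of Eq.\ (\ref{Clin}) can be made with the
stand-alone sketch below (the quark masses and string tension used are
illustrative): in the NR limit $p\rightarrow 0$ one has
$p_1\cdot p_2\rightarrow m_1m_2$, so the bracket tends to 1 and
$C_{ij}\rightarrow(\lambda/\pi)\delta_{ij}$ for any mixing $\epsilon$.

```python
import math

def C_coeff(p, m1, m2, lam, eps):
    # Diagonal singularity-subtraction coefficient of Eq. (Clin)
    E1, E2 = math.hypot(m1, p), math.hypot(m2, p)
    p1_dot_p2 = E1 * E2 + p**2   # four-vector dot product p1.p2
    return (lam / math.pi) * (eps + (1 - eps) * m1 * m2 / p1_dot_p2)
```

For $p>0$ the purely scalar case ($\epsilon=0$) gives a smaller $C$ than the
purely vector case ($\epsilon=1$), since $p_1\cdot p_2>m_1m_2$.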
When the interaction does not contain a linear part, the integral
(\ref{principal}) coincides with the integral within Eq.\ (\ref{decompeq}).
This is so because then $C_{ij}=0$ and there is no $1/(p^\prime-p)$
singularity, which means
that the principal value coincides with an ordinary
integral. Therefore replacement (\ref{principal}) in combination with Eq.\
(\ref{Cdef}) can be applied to
the entire interaction. In this way a nonzero
value of $C$ automatically
indicates the presence of a linear term. For the
Richardson potential (\ref{richard}) with a purely vector character this
results in
\begin{equation}\label{Crich}
(C_R)_{ij}=\frac{\alpha_0\Lambda^2}{2\pi}\delta_{ij},
\end{equation}
which indicates a linear term with string tension
$\frac{1}{2}\alpha_0\Lambda^2$.
In analogy with the linear potential the constant potential $V_C$ can
also be
defined via the Yukawa potential. In the NR case one has in configuration
space
\begin{displaymath}
V_C(r)=C=\lim_{\eta\downarrow 0}\frac{\partial}{\partial\eta}
\left[-\frac{Ce^{-\eta r}}{r}\right].
\end{displaymath}
Therefore in momentum space an obvious relativistic generalization is
\begin{equation}\label{VCdef}
V_C(Q^2)=\lim_{\eta\downarrow 0}\frac{\partial}{\partial\eta}
\left[-\frac{C}{2\pi^2(Q^2+\eta^2)}\right].
\end{equation}
Note that this expression also includes retardations, which are hidden in
$Q^2$ (see Eq.\ (\ref{Qdef})). Definition (\ref{VCdef}) has to be included
in the integral of Eq.\ (\ref{decompeq}) before the limit is taken.
The spinless partial wave $W_\eta^l$ of this constant potential is given by
\begin{displaymath}
W_\eta^l(p^\prime,p)=-\frac{CR(p^\prime,p)}{\pi}
\frac{\eta}{p^\prime p}Q_l^\prime\left[
z_0+\frac{\eta^2}{2p^\prime p}\right].
\end{displaymath}
The only term that survives the limit $\eta\downarrow 0$ is
\begin{displaymath}
\frac{CR}{\pi}\frac{\eta}{(p^\prime-p)^2-\tau+\eta^2}\rightarrow
CR\left[\frac{E_1E_2}{p_1\cdot p_2}\right]^{\frac{1}{2}}
\delta(p^\prime-p)+\cdots
\end{displaymath}
For a confining potential that has both a scalar and
vector part (see Eq.\ (\ref{vvs})) this results in
\begin{equation}\label{VC}
(V_C)^{nJ}_{ij}=C\left[\frac{p_1\cdot p_2}{E_1E_2}\right]^\frac{1}{2}
\left(\epsilon+(1-\epsilon)\frac{m_1m_2}{p_1\cdot p_2}\right)
\delta_{ij}\delta(p^\prime-p).
\end{equation}
\section{Linear Regge trajectories}\label{sec:regge}
The mesons which consist of the light $u$, $d$ and $s$ quarks only are
found to lie on so-called linear Regge trajectories. This means that
there are several groups of mesons for which the mass squared of each
meson within such a group depends linearly on its angular momentum $J$,
i.e.\ $M_J^2\approx\beta J+C$. The constant $C$ depends on the group; the
Regge slope $\beta$ however is about the same for all groups. Its
experimental value is $\beta\approx1.2\;({\rm GeV})^2$. For
mesons containing a heavy $c$ or $b$ quark, such trajectories have not
been observed. This makes it plausible that linear trajectories are
induced by relativistic effects. In fact, it is known
\cite{rev,bas,Martin},
that the Schr\"odinger equation with
ultrarelativistic (UR) kinematics (i.e. $2p$ instead of $p^2/(2\mu)$) for
a linear potential, does indeed give rise to linear trajectories, while the
(NR) Schr\"odinger equation does not. The slope $\beta$ solely depends on
the string tension $\lambda$, namely $\beta=8\lambda$.
For the present case a similar effect is observed. It numerically appears
that the (UR) limit (i.e. $m_1,m_2\rightarrow 0$)
of bound state equation (\ref{decompeq}) also leads to linear
trajectories, with a group-independent slope $\beta$. This slope, however,
depends on the vector part $\lambda_V$ of the string
tension
only. In addition, the dependence is a factor $\sqrt 2$ larger than for
the relativized Schr\"odinger equation, namely
\begin{equation}\beta\approx(8\sqrt 2)\lambda_V.\label{Regge}\end{equation}
As can be deduced from Appendix \ref{app:partwave}, the off-diagonal
elements $V_{12}$ and $V_{21}$ of both a vector and a scalar potential
vanish in the (UR) limit. Therefore Eq.\ (\ref{decompeq}) further decouples
into two single equations.
For the pure vector case, it reduces for the $V_{11}^{tJ}$ channel to
\begin{equation}\label{ultra}
\left[2p-M\right] f_J(p)+-\hspace{-0.4cm}\int_0^\infty\hspace{-0.1cm}
\left[V^J(p^\prime,p)
f_J(p^\prime)+\frac{\lambda}{\pi}f_J(p)\right]\!dp^\prime=0,
\end{equation}
with
\begin{equation}\label{wijultra}
V^J(p^\prime,p)=\frac{2\lambda}{\pi p^\prime p}Q_J^\prime \left[
\frac{p^{\prime 2}+p^2+(p^\prime -p)^2}{2p^\prime p}\right].
\end{equation}
This equation was solved numerically using the method described in
Section~\ref{sec:num}.
The calculated masses (in units of $\sqrt\lambda$)
of the lowest states for each $J$
are presented in Table~\ref{regge}.
The Schr\"odinger equation with (UR) dynamics is in momentum space also
given by Eq.\ (\ref{ultra}), but now with
\begin{equation}\label{zijultra}
V^J(p^\prime,p)=\frac{\lambda}{\pi p^\prime p} Q_J^\prime
\left[ \frac{p^{\prime 2}+p^2}{2p^\prime p}\right].
\end{equation}
The corresponding calculated masses are also listed in Table~\ref{regge}.
They all agree with the calculations performed by Basdevant and Boukraa
(see Table I of \cite{bas}).
In principle the trajectories are expected to be linear only for large
values of $J$.
However, as Table~\ref{regge} shows, the convergence is very fast.
It was found that also for moderate masses Eq.\ (\ref{decompeq}) leads to
Regge trajectories with the same relation (\ref{Regge}) between $\beta$ and
$\lambda$. The convergence, however, is then slower.
When in addition an OGE term and a constant are added to the potential,
relation (\ref{Regge}) is affected. The change, however, is not very
large. Therefore
it can be concluded that, in order to obtain reasonable
Regge slopes, the string tension
in the vector direction $\lambda_V$ should be
around
$\lambda_V\approx {\rm 0.1}\;{\rm GeV}^2$.
\section{Numerical method}\label{sec:num}
The present model comprises
eigenvalue equation (\ref{waveequation}) in combination with
a quark-antiquark potential $W$ consisting of a
one-gluon-exchange part $V_{\rm OGE}$ and a confining part $V_{\rm CON}$.
The way in which $V_{\rm OGE}$ and $V_{\rm CON}$ enter
in eigenvalue equation
(\ref{waveequation}) is given by Eqs.\ (\ref{vvs}) and (\ref{vqq}).
The OGE potential is determined by two parameters $\alpha_0$ and
$\Lambda$ and the confining potential is determined by a string tension
$\lambda$ and a constant $C$. Furthermore a parameter $\epsilon$ can be
introduced to give the confining potential a mixed scalar-vector
character.
The numerical solution of the model can be divided into two parts.
The first one concerns the calculation of the masses of the mesons, from
eigenvalue equation (\ref{waveequation}),
given all parameters of the potential under consideration, and the quark
masses. The second part is the fitting to the experimental data. The
eigenvalue equation was solved by
expanding the wave function into cubic Hermite
splines (see \cite{splines}). The integration region $p\in[0,\infty)$ was
projected onto the finite interval $x\in[-1,1)$ by $x=(p-p_0)/(p+p_0)$,
where $p_0$ was chosen in the physical region. On this interval
$N$ equidistant spline intervals were chosen on which $2N$
spline functions were defined.
The matrix elements of the resulting eigenvalue
equation for the expansion coefficients only involved single integrations
of the potential times a spline function. This is a major advantage of
the spline method compared to the more conventional expansion
techniques, where the evaluation of
matrix elements involves two-dimensional integrals.
The integration was performed using Gauss-Legendre quadratures. In the
case where the singular point $p^\prime=p$ was inside the region of
integration, special
care had to be taken. In the first place, an even number of
abscissas centered around $p^\prime=p$ was used. In that way the
principal value, which occurs for the confining potential,
is automatically taken care of \cite{Norbury,sloan}. Secondly, the
logarithmic singularity $\sim\log(|p^\prime-p|)$, which is induced by both
the Coulomb and the confining potential, was separately handled by means
of Gaussian quadratures based on a logarithmic weight function (see e.g.
Table 25.7 of \cite{abra}). Another important advantage of using Hermite
splines is their small support: on each spline
interval only a few of these
splines (four for
the Legendre and three for the logarithmic quadrature) were needed to
obtain high accuracy. The matrix equation was solved using standard
techniques \cite{numc}, giving the meson masses $M_i$.
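The symmetric treatment of the singular point can be illustrated with a
small stand-alone Python sketch (a simplified version, not the actual
spline code): with an even number of Gauss--Legendre abscissas centered
around $p^\prime=p$, the odd $1/(p^\prime-p)$ contributions cancel pairwise
and the principal value is obtained automatically.

```python
import numpy as np

def pv_integral(g, p, a, n=8):
    """Principal value of int_{p-a}^{p+a} g(p')/(p'-p) dp' using an even
    number of Gauss-Legendre abscissas centered around p' = p."""
    assert n % 2 == 0  # symmetric nodes, none at the singularity
    t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t, w = a * t, a * w                        # rescale to [-a, a]
    return float(np.sum(w * g(p + t) / t))
```

For $g(p^\prime)=p^\prime$ the exact answer is $2a$ (the singular part
integrates to zero by symmetry), which the quadrature reproduces without any
explicit subtraction.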
The choice of the projection parameter $p_0$ and the number of intervals $N$,
depended on the specific meson. For instance, the typical momentum
transfer for the $\Upsilon$ mesons ($b\overline{b}$) is about 1 GeV, so
$p_0={\rm 1}\;{\rm GeV}$. The masses of these mesons are all known to a high
precision and they are all radial levels of the $J^{PC}=1^{--}$ channel. The
$\Upsilon_{v}$ is the tenth radial state ($n=10$). Therefore 20 spline
intervals were needed to guarantee accurate results. The $K_2^*$
however, is the only known
$J^{P}=2^+$ strange meson, apart from the unconfirmed $K_2^{*\prime}$.
Therefore $N={\rm 8}$ was sufficient to
obtain reliable results, and $p_0={\rm 0.5}\;{\rm GeV}$ was
a proper choice for this
meson.
The second part of the problem was to get a good fit to the experimental
data. For this purpose the merit function
\begin{equation}\label{chi2}
\chi^2(a_1,\ldots,a_n)=\sum_i\left[\frac{M^{\rm the}_i(a_1,\ldots,a_n)-M^{\rm exp}_i
}{\sigma_i}\right]^2
\end{equation}
has to be minimized with respect to the parameters $a_1,\ldots,a_n$.
Here $i$ labels the mesons, $M^{\rm exp}_i$ and
$M^{\rm the}_i$ denote their experimental and calculated
masses, and $\sigma_i$ their weights. A nonlinear regression method,
based on the Levenberg-Marquardt algorithm was used to perform the fits
(see section 15.5 of
\cite{numc}). This method requires as input the explicit knowledge of the
derivatives of the calculated masses with respect to the fit parameters.
For the present complex situation this information is not available.
It is only known that the derivative of a meson mass with respect to
a quark mass it does not contain vanishes.
Therefore the derivatives were approximated in the
least time-consuming way by the following expression:
\begin{equation}\label{crude}
\frac{\partial M^{\rm the}_i}{\partial a_j}\approx \frac{
M_i^{\rm the}(a_1,\ldots,a_j+\Delta,\ldots,a_n)-
M_i^{\rm the}(a_1,\ldots,a_n)}{\Delta}.
\end{equation}
In this manner all required information, i.e.\ $M^{\rm the}_i$ and
$\partial M^{\rm the}_i/\partial a_j$, is obtained by calculating all
meson masses $(n+1)$ times. A more sophisticated method would
considerably increase this number. Approximation (\ref{crude}) turned out
to be very effective: starting with a physically sensible set of
parameters, after four or five steps convergence to an
optimum was reached. The value of the parameter $\Delta$ appeared to be
of minor importance. $\Delta={\rm 0.04}$ (dimensionless, in GeV, or
${\rm GeV}^2$, depending on the dimension of $a_j$)
was found to be a good choice.
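Scheme (\ref{crude}) amounts to a forward-difference Jacobian built from
$n+1$ full spectrum evaluations. A minimal sketch follows; the function
\verb|masses| is a hypothetical stand-in for the actual spline eigenvalue
solver.

```python
import numpy as np

def forward_jacobian(masses, a, delta=0.04):
    """Approximate dM_i/da_j as in Eq. (crude): one extra evaluation of
    all meson masses per parameter, i.e. n+1 evaluations in total.
    masses(a) maps the parameter vector to the vector of predicted masses."""
    M0 = np.asarray(masses(a))
    J = np.empty((M0.size, len(a)))
    for j in range(len(a)):
        a_step = np.array(a, dtype=float)
        a_step[j] += delta          # step one parameter at a time
        J[:, j] = (np.asarray(masses(a_step)) - M0) / delta
    return J
```

For a model that is linear in its parameters the forward difference is
exact, which gives a simple consistency check of the implementation.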
All mesons regarded as established by the 1992 Particle Data Group
\cite{data} (in Table\ \ref{meson_spectrum} indicated by a
``$\bullet$'')
were used in the fit, with the exception of the
self-conjugate (i.e.\ isospin-0) light unflavoured ones. For a fair
description of these mesons, an annihilation interaction from initial
$q\overline{q}$ states to final $q^\prime\overline{q}^\prime$ states
should be included.
Also the charmed strange $D^*_s$ and $D_{sJ}$ were excluded,
because of the uncertainty of their quantum numbers.
Furthermore the up and down quark were
considered to be of equal mass. In addition the electromagnetic
interactions are completely neglected. Therefore the $\pi^0$ and
$\pi^\pm$, the $K^0$ and $K^\pm$, and so on will be degenerate in this
picture. Because of the indistinguishability of the $u$ and $d$ quark, from
now on such a quark will be denoted by ``$u\!/\!d$''.
This amounts to a total of 52 mesons: 11 light unflavoured
($u\!/\!d\overline{u\!/\!d}$), 11 strange
($s\overline{u\!/\!d}$), 4 charmed ($c\overline{u\!/\!d}$),
2 charmed strange
($c\overline{s}$), 10 charmonia ($c\overline{c}$), 2 bottom
($b\overline{u\!/\!d}$) and 12 bottonia ($b\overline{b}$).
For the weight $\sigma_i$ the maximum of the uncertainty
$dM^{\rm exp}_i$ of the measured mass and the predictive power of the
model was taken.
It is difficult to give an estimate for this predictive power. In the
first place quark models have a phenomenological nature; there is no
direct link with QCD. In the second place,
the mesons are not stable particles, but in fact resonances.
The decay mechanisms, which are not incorporated in this
paper, could considerably affect the positions of the calculated masses.
This especially applies to the mesons that have decay widths of a
few hundred MeV.
To account for all of this, a grid size
$S={\rm 20}\;{\rm MeV}$ was introduced as a lower bound for $\sigma$.
Only for bottonium ($b\overline{b}$), a grid size of 10 MeV
was used, because in this system relativistic effects are less important
and most states have narrow widths.
Summarizing, the weights were determined by the following formula
\begin{equation}\label{sigma}
\sigma_i={\rm Max}\left[dM_i^{\rm exp},S\right].
\end{equation}
A few exceptions to this rule were made. The pion $\pi$ and the kaon $K$
are the ground states of the
$u\!/\!d\overline{u\!/\!d}$ and
$s\overline{u\!/\!d}$ mesons, respectively.
It is commonly believed that, in order
to give a fair description of these particles the mechanism of
chiral symmetry breaking should be included in the
model. It appeared that also the $K_0^*$ mass was badly described by the
model. This state, however,
has a large decay width of $\sim {\rm 300}\,{\rm MeV}$.
Therefore $\sigma_{\pi}={\rm 0.4}\;{\rm GeV}$ and
$\sigma_{K}=\sigma_{K_0^*}={\rm 0.2}\;{\rm GeV}$
were chosen, so that
these states get an insignificant weight in the fit.
Another point concerns the $\rho^\prime$ and $\rho^{\prime\prime}$.
These states also have large decay widths ($\sim {\rm 300}\;{\rm MeV}$).
It appeared that best results were obtained if each state was regarded to
be composed of two neighboring resonances. In Sec.\ \ref{sec:discus}
this point will be discussed in more detail.
To decrease the computation time first a rough fit was made by taking
only half of the spline intervals $N$ needed to obtain the desired
accuracies. The resulting fit parameters were then used as the starting
point for a full accuracy fit. The typical computation time for
a complete rough fit for all 52 mesons was 30 min on a Sparc 2 workstation,
while the fine tuning fit took about one hour.
As was already mentioned,
two different types of potentials were examined.
In case I the Richardson potential $V_{\rm R}$ was taken to account for
the OGE and for the confinement in the vector direction. $V_{\rm CON}$ has a
purely scalar character ($\epsilon=0$). For $\alpha_0$ both the ``QCD''
value $16\pi/27$ and the value 1.75, which gives a better agreement
with the QCD formula (\ref{alphaqcd}), were taken. Both choices ended in
comparable fits ($\chi^2\approx {\rm 260}$). The resulting parameters
for $\alpha_0=1.75$ (denoted by Ia) are given in Table\ \ref{fitresult}
and the calculated meson spectrum in Table\ \ref{meson_spectrum}.
Also the case in which neither $\alpha_0$ nor $\epsilon$ was fixed was
considered. The regression method led to very small values for $\epsilon$
and the string tension $\lambda$. Therefore a fit, denoted by Ib,
was made where these two
parameters were put equal to zero, and where $\alpha_0$ was varied.
This resulted in a somewhat better fit (Ib) with $\chi^2={\rm 250}$
(see also Tables
\ref{fitresult} and \ref{meson_spectrum}).
In both cases seven parameters, three to model the potential and four
quark masses, were fit.
Finally, the case was considered in which the linear term of
$V_{\rm R}$ was subtracted.
To get a confining potential in the vector direction,
the mixing $\epsilon$ was also varied.
The results did not improve, however.
In case II the modified Richardson potential $V_{\rm M}$
in combination with a
mixed scalar-vector $V_{\rm CON}$ was taken.
The value $\alpha_0=16\pi/27$ was the only parameter held fixed, so that
eight parameters were varied. In spite of the extra parameter,
the resulting fit (II), see Tables \ref{fitresult} and \ref{meson_spectrum},
is worse than the fits found for case I and gave $\chi^2={\rm 322}$.
\section{Discussion}\label{sec:discus}
The meson spectrum calculated for parameter sets Ia, Ib and II is given
in Table\ \ref{meson_spectrum}. Also the mesons that were not involved in
the fitting procedure (the ones without a $\sigma$) were calculated.
It is seen that most of these unconfirmed mesons
(see \cite{data}), are reasonably described by the model. Many states are
a mixture between two ${}^{2s+1}L_J$ waves. Only in the NR limit do these
waves decouple, because only then is the angular momentum $l$ a good
quantum number. For each state the most dominant wave is underlined. Most
admixtures are typically 99\% vs.\ 1\%, which supports the statement that
$l$ is almost a good quantum number.
A few years ago (for a review, see page VII of \cite{data}), the
$\rho$(1450) and $\rho$(1700) were recognized as being a splitting in the
formerly known $\rho$(1600) resonance. It could be interpreted as the
fine structure
splitting between the $n=2$, dominantly $S$, and the $n=3$,
dominantly $D$, states. The splitting however is rather big ($\sim$ 250
MeV). For the present model this interpretation was found to be
in conflict with the rest of the spectrum. A correct
$\rho^\prime-\rho^{\prime\prime}$ splitting induced a far too large
splitting in the $1^{--}$ states of charmonium and bottonium, and vice
versa. Only if the $\rho^\prime$ was regarded to consist of the $n=2$
and $n=3$ states, and the $\rho^{\prime\prime}$ of the $n=4$ and $n=5$
states, correct splittings for the entire spectrum could be obtained. In
addition, the correct splitting between the observed
$\rho^{\prime\prime\prime}$
and $\rho^{iv}$ ($\sim$ 50 MeV) was obtained.
The difference between the $n=2$ and $n=3$ mass, and between the $n=4$
and $n=5$ mass, was found to be $\sim$ 100 MeV, which is much smaller than
the decay widths of the $\rho^\prime$ and the $\rho^{\prime\prime}$ ($\sim$
300 MeV).
The masses of the $\pi$, $K$ and $K^*_0$ are found to be
much too high. As was already
mentioned, this was to be expected, because
the small masses of these particles are believed to be a consequence of
spontaneous breaking of chiral symmetry.
In all cases Ia, Ib and II, the mass of the $\eta_c$ is found to be too
high.
As was pointed out by Hirono et al.\ \cite{jappen}, this is probably
a consequence of
neglecting negative energy states.
They found that for a quark model based on the instantaneous ladder BS
equation for charmonium, the $\eta_c$ is strongly influenced by neglecting
these states ($\sim$ 100 MeV), while the influence on all other states
is much weaker ($\sim$ 10 MeV).
If one extrapolates these results to the present theory, this means
that the omission of the negative energy states only weakly affects
the spectrum. Only for the ${}^1S_0$ ground states a substantial mass drop
may arise. This would mean that also the masses of the
$D$ and the $D_s$, which are now found a bit too high, would become
smaller. The $B$ would also get a smaller mass, which has a positive
effect only in case Ia.
The centre of gravity ${\rm COG}(n)$
(see e.g.\ Sec.\ 8.1 of \cite{rev}) is defined by
\begin{displaymath}
{\rm COG}(n)\equiv \frac{5}{9}M(n{}^3P_2)+
\frac{1}{3}M(n{}^3P_1)+\frac{1}{9}M(n{}^3P_0).
\end{displaymath}
It can be proved that, for an arbitrary scalar potential $V_S$ and a
Coulomb vector potential $V_V$, up to first order relativistic
corrections, this COG equals the corresponding $n{}^1P_1$ singlet. The
relation is violated by the $Q$-dependence of $\alpha_s$ and the presence
of a confining term in the vector direction. It is also
affected by higher order relativistic corrections.
In all cases the COG is found to be somewhat higher than the
corresponding singlet state.
A related quantity is the ratio \cite{rev,rho}
\begin{equation}
\label{rhodef}
\rho=\frac{M({}^3P_2)-M({}^3P_1)}{M({}^3P_1)-M({}^3P_0)}.
\end{equation}
Its experimental value is 0.21 for $u\!/\!d\overline{u\!/\!d}$,
0.48 for $c\overline c$, 0.66 for $n=1$ $b\overline b$ and 0.57 for $n=2$
$b\overline b$. For all three cases Ia, Ib and II, a rather constant
value of $\rho\sim {\rm 0.8}$
(see Table \ref{fitresult}) was found.
A perturbative configuration space calculation shows (see e.g. Sec.\ 4.2 of
\cite{rev}) that this too
large value for $\rho$ is a consequence of the dominance of the vector OGE.
An analysis for the
present case in momentum space, gives a similar result.
A more pronounced linear scalar potential might lower this ratio.
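As an illustration of the two quantities, the COG and the ratio $\rho$ can
be evaluated for the $1P$ charmonium levels. The masses below are rounded
experimental PDG values in MeV, used here only for this numerical check.

```python
# Approximate experimental 1P charmonium masses in MeV (rounded PDG values)
M_3P2, M_3P1, M_3P0, M_1P1 = 3556.0, 3511.0, 3415.0, 3525.0

cog = (5 * M_3P2 + 3 * M_3P1 + M_3P0) / 9   # centre of gravity COG(1)
rho = (M_3P2 - M_3P1) / (M_3P1 - M_3P0)     # Eq. (rhodef)
```

The COG comes out within a few MeV of the ${}^1P_1$ mass, and $\rho$
reproduces the experimental value $\approx 0.48$ quoted above for
$c\overline c$.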
The following remarks on the parameter sets can be made.
{}From Table\ \ref{fitresult} it is seen that the
quark masses are substantially larger than usual in quark models.
Furthermore, the masses are quite different for the different cases. The
smallest masses are obtained by fit Ia. This is a consequence of
the large negative constant $C\sim -{\rm 1.0}\;{\rm GeV}$, which, however,
is necessary in order to obtain a good fit for the entire spectrum.
If, for instance, one considers only bottomonium and charmonium, it turns
out that the quality of the fit only weakly depends on the value of $C$.
The system is overparametrized and one in fact does not even need a
constant in the potential. But, when simultaneously also good results
for the lighter mesons are required, the large negative constant
arises automatically.
The total string tension $\lambda_{\rm tot}$ is defined as the sum of
the tensions in vector and scalar direction. For case I one has
$\lambda_{\rm tot}=\lambda+\frac{1}{2}\alpha_0\Lambda^2$, while case II
simply gives $\lambda_{\rm tot}=\lambda$. These tensions are also quite
different for the different cases. In case Ib, which gave the best fit,
there is only a vector tension. This is in contrast to the requirement
that, in order to obtain
better $\rho$ values, the confining potential should be dominantly scalar.
The total tension for
case Ia is closest to the value $\lambda\sim {\rm 0.18}\;
{\rm GeV}^2$, which is often given in the literature.
The Regge slopes of Ia and Ib are compatible with the experimental value
$\beta\approx {\rm 1.2}\;{\rm GeV}^2$. The slope found in II is somewhat
too low. This can clearly be seen from the high-$J$ states like the $\rho_5$,
$a_6$ and $K^*_5$. The errors given in Table\
\ref{fitresult} represent a measure of the linearity of the trajectories,
defined as the spread in the difference between the squared masses
of adjacent states. The spreads found are considerably smaller than the
experimental value.
Finally the running coupling constant for $Q={\rm 34}\;{\rm GeV}$ and
$Q=M_Z={\rm 91.16}\;{\rm GeV}$ were compared with the experimental values
\begin{eqnarray}
\alpha_s({\rm 34}\;{\rm GeV})&=&{\rm 0.14}\pm {\rm 0.02},\nonumber\\
\alpha_s(M_Z)&=&{\rm 0.1134}\pm {\rm 0.0035}.\nonumber
\end{eqnarray}
Only case Ia is compatible with both conditions.
The choice $\alpha_0={\rm 1.75}$
was made to give the best approximation to the QCD
formula (\ref{alphaqcd}) for moderate momentum transfer. Now it appears
that this choice also gives correct results for very high momenta. A fit
of type Ia, but now with $\alpha_0=16\pi/27$ (not displayed), gave too
large a high-momentum $\alpha_s$.
In principle, the high
$Q$-range of the potential is completely irrelevant for the calculation
of the meson spectrum, where the potential is only tested up to a few
GeV. Nevertheless, for the sake of theoretical consistency, this test was
made.
\section{Concluding remarks}
In this paper a relativistic quark model defined in momentum space was
studied.
The quark-antiquark potential used consisted of an OGE with a
Lorentz vector character, and a linear plus constant confining potential.
For the OGE the Richardson potential $V_R$, given by Eq.\ (\ref{richard}),
with and without its
linear part, as well as a
modified Richardson potential $V_M$, defined by Eq.\
(\ref{vh}), were considered.
Best results were obtained for the Richardson potential
including its linear term (case I).
The linear plus constant potential was given a pure scalar character, i.e.
$\epsilon=0$ in Eq.\ (\ref{vvs}). In
this way, the
confining in the vector direction was induced by the linear
part of $V_R$.
For case I two different fits were made, fit Ia, in which the value of
$\alpha_0$ was fixed to 1.75, and fit Ib, in which $\alpha_0$ was varied,
but the string tension in the scalar direction $\lambda$ was put equal
to zero.
Reasonable results were also obtained for
$V_{\rm OGE}=V_R$ (case II). Here the
confining potential was given a mixed scalar-vector character.
For the fits Ia, Ib and II, most meson masses, with the exception
of the $\pi$, $K$ and $K^*_0$, were found to be reasonably described by the
model. In cases Ia and Ib correct Regge slopes were found, and only in
case Ia was a correct strong coupling constant for large momenta found.
The ratios $\rho$, defined by Eq.\ (\ref{rhodef}), however, were in all
three cases found to be too large.
It is concluded that case Ia
should be preferred.
No detailed comparison with other theories has been made, because the
main purpose of this paper was not so much to improve upon the existing
calculations, but rather to show that results of the same quality could be
obtained using a relativistic theory which is formulated in the momentum
representation.
\acknowledgments
I would like to thank Professor Th. W. Ruijgrok for his important
comments and suggestions.
\section{Introduction}
Semi-supervised learning (SSL) utilizes unlabeled data to improve model performance and has achieved promising results on standard SSL image classification benchmarks~\cite{rasmus2015semi,laine2016temporal,meanteacher,mixmatch,fixmatch,noisy-student}. A common assumption, which is often made implicitly during the construction of SSL benchmark datasets, is that the class distribution of labeled and/or unlabeled data are balanced. However, in many realistic scenarios, this assumption holds untrue and becomes the primary cause of poor SSL performance~\cite{covid, darp}.
Supervised learning on imbalanced data has been widely explored. It is commonly observed that models trained on imbalanced data are biased towards \textit{majority classes} which have numerous examples, and away from \textit{minority classes} which have few examples. Various solutions have been proposed to help alleviate bias, such as re-sampling~\cite{rs-1,rs-2}, re-weighting~\cite{cbloss,ldam}, and two-stage training~\cite{decoupling,bbn}. All these methods rely on labels to re-balance the biased model.
\begin{figure}
\subfigure[]{
\includegraphics[width=0.44\linewidth]{img/teaser_number.pdf}
\label{fig:teaser_number}
}
\subfigure[]{
\includegraphics[width=0.45\linewidth]{img/teaser_precision_recall.pdf}
\label{fig:teaser_precision_recall}
}
\subfigure[]{
\includegraphics[width=0.45\linewidth]{img/teaser_pl_accuracy.pdf}
\label{fig:teaser_pl_acc}
}
\subfigure[]{
\includegraphics[width=0.45\linewidth]{img/teaser_test_accuracy.pdf}
\label{fig:teaser_test_acc}
}
\caption{Experimental results on CIFAR10-LT. \subref{fig:teaser_number} Both labeled and unlabeled sets are class-imbalanced, where the most majority class has 100$\times$ more samples than the most minority class. The test set remains balanced. \subref{fig:teaser_precision_recall} Precision and recall of a FixMatch~\cite{fixmatch} model. Although minority classes have low recall, they obtain high precision. \subref{fig:teaser_pl_acc} \& \subref{fig:teaser_test_acc} The proposed CReST and CReST+ improve the quality of pseudo-labels \subref{fig:teaser_pl_acc} and thus the recall on the balanced test set \subref{fig:teaser_test_acc}, especially on minority classes.}
\label{fig:intro}
\vspace{-0.32cm}
\end{figure}
In contrast, SSL on imbalanced data has been under-studied.
In fact, data imbalance poses further challenges in SSL where missing label information precludes rebalancing the unlabeled set. Pseudo-labels for unlabeled data generated by a model trained on labeled data are commonly leveraged in SSL algorithms. However, pseudo-labels can be problematic if they are generated by an initial model trained on imbalanced data and biased toward majority classes: subsequent training with such biased pseudo-labels intensifies the bias and deteriorates the model quality. Apart from a few recent works~\cite{darp,rethinking}, the majority of existing SSL algorithms~\cite{mixmatch,remixmatch,uda,fixmatch} have not been thoroughly evaluated on imbalanced data distributions.
In this work, we investigate SSL in the context of class-imbalanced data in which both labeled and unlabeled sets have roughly the same imbalanced class distributions, as illustrated in Fig.~\ref{fig:teaser_number}.
We observe that the undesired performance of existing SSL algorithms on imbalanced data is mainly due to low recall on minority classes. Our method is motivated by the further observation that, despite this, precision on minority classes is surprisingly high.
In Fig.~\ref{fig:teaser_precision_recall}, we show predictions on a CIFAR10-LT dataset produced by FixMatch~\cite{fixmatch}, a representative SSL algorithm with state-of-the-art performance on balanced benchmarks.
The model obtains high recall on majority classes but suffers from low recall on minority classes, which results in low accuracy overall on the balanced test set. However, the model has almost perfect precision on minority classes, suggesting that the model is conservative in classifying samples into minority classes, but once it makes such a prediction we can be confident it is correct. Similar observations are made on other SSL methods, and on supervised learning~\cite{jamal2020rethinking}.
With this in mind, we introduce a class-rebalancing self-training scheme (CReST) which re-trains a baseline SSL model after adaptively sampling pseudo-labeled data from the unlabeled set to supplement the original labeled set. We refer to each fully-trained baseline model as a \textit{generation}. After each generation, pseudo-labeled samples from the unlabeled set are added into the labeled set to retrain an SSL model. Rather than updating the labeled set with all pseudo-labeled samples, we instead use a \textit{stochastic} update strategy in which samples are selected with higher probability if they are predicted as minority classes, as those are more likely to be correct predictions. The updating probability is a function of the data distribution estimated from the labeled set. In addition, we extend CReST to CReST+ by incorporating distribution alignment~\cite{remixmatch} with a temperature scaling factor to control its alignment strength over generations, so that predicted data distributions are more aggressively adjusted to alleviate model bias. As shown in Fig.~\ref{fig:teaser_pl_acc} and \ref{fig:teaser_test_acc}, the proposed strategy reduces the bias of pseudo-labeling and improves the class-balanced test set accuracy as a result.
We show in experiments that CReST and CReST+ improve over baseline SSL methods by a large margin. On CIFAR-LT~\cite{cbloss,ldam}, our method outperforms FixMatch~\cite{fixmatch} under different imbalance ratios and label fractions by as much as 11.8\% in accuracy. Our method also outperforms DARP~\cite{darp}, a state-of-the-art SSL algorithm designed for learning from imbalanced data, on both MixMatch~\cite{mixmatch} and FixMatch~\cite{fixmatch} by up to 4.0\% in accuracy. To further test the efficacy of the proposed method on large-scale data, we apply our method on ImageNet127~\cite{mergedIN}, a naturally imbalanced dataset created from ImageNet~\cite{in} by merging classes based on the semantic hierarchy, and get 7.9\% gain on recall. Extensive ablation study further demonstrates that our method particularly helps improve recall on minority classes, making it a viable solution for imbalanced SSL.
\section{Related work}
\subsection{Semi-supervised learning}
Recent years have observed a significant advancement of SSL research~\cite{pseudolabeling,laine2016temporal,vat,mixmatch,remixmatch,noisy-student,uda,fixmatch}. Many of these methods share similar basic techniques, such as entropy minimization~\cite{grandvalet2005semi}, pseudo-labeling, or consistency regularization, with deep learning.
Pseudo-labeling~\cite{pseudolabeling,fixmatch} trains a classifier with unlabeled data using pseudo-labeled targets derived from the model's own predictions. Relatedly, \cite{laine2016temporal,mixmatch,uda,remixmatch,noisy-student} use a model's predictive probability with temperature scaling as a soft pseudo-label.
Consistency regularization~\cite{sajjadi2016regularization,laine2016temporal,vat} learns a classifier by promoting consistency in predictions between different views of unlabeled data, either via soft~\cite{laine2016temporal,vat,mixmatch,uda} or hard~\cite{fixmatch} pseudo-labels. Effective methods of generating multiple views include input data augmentations of varying strength~\cite{devries2017improved,cubuk2020randaugment,remixmatch}, standard dropout within network layers~\cite{srivastava2014dropout}, and stochastic depth~\cite{huang2016deep}.
The performance of most recent SSL methods relies on the quality of pseudo-labels. However, none of the aforementioned works has studied SSL in the class-imbalanced setting, in which the quality of pseudo-labels is significantly threatened by model bias.
\subsection{Class-imbalanced supervised learning}
Research on class-imbalanced supervised learning has attracted increasing attention. Prominent works include re-sampling~\cite{smote,rs-1,rs-2,rs-3} and re-weighting~\cite{khan2017cost,cbloss,ldam,tan2020equalization}
which re-balance the contribution of each class, while others focus on re-weighting each instance~\cite{focalloss,metanet,l2rw,jamal2020rethinking}. Some works~\cite{feature-transfer,wang2019dynamic,m2m,learnable-embed,oltr} aim to transfer knowledge from majority classes to minority classes. A recent trend of work proposes to decouple the learning of representation and classifier~\cite{bbn,decoupling,tang2020causaleffect}. These methods assume all labels are available during training and their performance is largely unknown under SSL scenarios.
\begin{figure*}
\centering
\subfigure{
\includegraphics[width=0.23\textwidth]{img/cifar10_recall.pdf}
\label{fig:eval-a}
}
\centering
\subfigure{
\includegraphics[width=0.23\textwidth]{img/cifar10_precision.pdf}
\label{fig:eval-b}
}
\centering
\subfigure{
\includegraphics[width=0.23\textwidth]{img/cifar100_recall.pdf}
\label{fig:eval-c}
}
\centering
\subfigure{
\includegraphics[width=0.23\textwidth]{img/cifar100_precision.pdf}
\label{fig:eval-d}
}
\caption{Bias of a FixMatch~\cite{fixmatch} model on class-imbalanced data. \textbf{Left}: Per-class recall and precision on CIFAR10-LT. \textbf{Right}: Per-class recall and precision on CIFAR100-LT. The class index is sorted by the number of examples in descending order. While the conventional assumption might be that the performance of the majority classes is better than that of the minority classes, we find it only partially true. The model obtains high recall but low precision on majority classes, while obtaining low recall but high precision on minority classes. See more details in Section~\ref{sec:model-bias}.}
\label{fig:method-pa}
\vspace{-0.2cm}
\end{figure*}
\subsection{Class-imbalanced semi-supervised learning}
While SSL has been extensively studied, it is under-explored regarding class-imbalanced data.
Recently, Yang and Xu~\cite{rethinking} argued that leveraging unlabeled
data by SSL and self-supervised learning can benefit class-imbalanced learning.
Hyun \etal~\cite{suppress} proposed a suppressed consistency loss to suppress the loss on minority classes.
Kim \etal~\cite{darp} proposed Distribution Aligning Refinery (DARP) to refine raw pseudo-labels via a convex optimization. In contrast, we boost the quality of the model's raw pseudo-labels directly via a class-rebalancing sampling strategy and a progressive distribution alignment strategy. DARP also discussed another interesting setting where labeled and unlabeled data do not share the same class distribution, while in this work we focus on the scenario where labeled and unlabeled data have roughly the same distribution.
\section{Class-Imbalanced SSL}
In this section, we first set up the problem and introduce baseline SSL algorithms. Next, we investigate the biased behavior of existing SSL algorithms on class-imbalanced data.
Based on these observations, we propose a class-rebalancing self-training framework (CReST) that takes advantage of, rather than suffers from, the model's bias to alleviate the performance degeneration on minority classes.
In addition, we extend distribution alignment~\cite{remixmatch} and integrate it as CReST+ to further improve the quality of online pseudo-labeling.
\subsection{Problem setup and baselines}
We first set up the problem of class-imbalanced semi-supervised learning. For an $L$-class classification task, there is a labeled set $\mathcal{X}\,{=}\,\big\{(x_n, y_n) \,{:}\, n \,{\in}\, (1, \ldots, N)\big\}$, where $x_n \,{\in}\, \mathbb{R}^d$ are training examples and $y_n \,{\in}\, \{1, \ldots, L\}$ are corresponding class labels.
The number of training examples in $\mathcal{X}$ of class $l$ is denoted as $N_l$, \ie, $\sum_{l=1}^L N_l\,{=}\,N$.
Without loss of generality, we assume that the classes are sorted by cardinality in descending order, \ie, $N_1 \,{\geq}\, N_2 \,{\geq}\, \cdots \,{\geq}\, N_L$.
The marginal class distribution of $\mathcal{X}$ is skewed, \ie, $N_1 \,{\gg}\, N_L$. We measure the degree of class imbalance by imbalance ratio, $\gamma\,{=}\,\frac{N_1}{N_L}$.
Besides the labeled set $\mathcal{X}$, an unlabeled set $\mathcal{U}\,{=}\,\big\{u_m \,{\in}\, \mathbb{R}^d\,{:}\, m \,{\in}\, (1, \ldots, M) \big\}$ that shares the same class distribution as $\mathcal{X}$ is also provided. The label fraction $\beta\,{=}\,\frac{N}{N+M}$ measures the percentage of labeled data.
Given class-imbalanced sets $\mathcal{X}$ and $\mathcal{U}$, our goal is to learn a classifier $f\,{:}\, \mathbb{R}^d \,{\rightarrow}\,\{1, \ldots, L\}$ that generalizes well under a class-\textit{balanced} test criterion.
Many state-of-the-art SSL methods~\cite{fixmatch,noisy-student} utilize unlabeled data by assigning a pseudo-label with the classifier's prediction $\hat{y}_m\,{=}\,f(u_m)$.
The classifier is then optimized on both labeled and unlabeled samples with their corresponding pseudo-labels. Therefore, the quality of pseudo-labels is crucial to the final performance.
These algorithms work successfully on standard class-balanced datasets since the quality of the classifier — and thus its online pseudo-labels — improves for all classes over the course of training.
However, when the classifier is biased at the beginning due to a skewed class distribution, the online pseudo-labels of unlabeled data can be even more biased, further aggravating the class-imbalance issue and resulting in severe performance degradation on minority classes.
\subsection{A closer look at the model bias}
\label{sec:model-bias}
Previous works~\cite{cbloss,ldam} introduce long-tailed versions of CIFAR~\cite{cifar} datasets with various class-imbalanced ratios to evaluate class-imbalanced fully-supervised learning algorithms. We extend this protocol by retaining a fraction of training samples as labeled and the remaining as unlabeled. We test FixMatch~\cite{fixmatch}, one of the state-of-the-art SSL algorithms designed for class-balanced data. Fig.~\ref{fig:method-pa} shows test recall and precision of each class on CIFAR10-LT with imbalance ratio $\gamma\,{=}\,100$, label fraction $\beta\,{=}\,10\%$, and CIFAR100-LT with imbalance ratio $\gamma\,{=}\,50$, label fraction $\beta\,{=}\,30\%$.
First, as shown in the first and third plots of Fig.~\ref{fig:method-pa}, FixMatch achieves very high recall on majority classes and poor recall on minority classes, which is consistent with the conventional wisdom. For example, the recall of the most and second most majority classes of CIFAR10-LT is 98.5\% and 99.7\%, respectively, while the model recognizes only 8.4\% of samples correctly from the most minority class.
In other words, the model is highly biased towards majority classes, resulting in poor recall averaged over all classes, which equals accuracy here since the test set is balanced.
Despite the low recall, the minority classes maintain surprisingly high precision as in the second and fourth plots of Fig.~\ref{fig:method-pa}. For example, the model achieves 97.7\% and 98.3\% precision, respectively, on the most and the second most minority classes of CIFAR10-LT, while only achieving relatively low precision on majority classes.
This indicates that many minority class samples are predicted as one of the majority classes.
While the conventional wisdom may suggest that the performance of the majority classes is better than that of the minority classes, we find that it is only partly true: the biased model learned on class-imbalanced data indeed performs favorably on majority classes in terms of recall, but favors minority classes in terms of precision. Similar observations are made on other SSL algorithms, and also on fully-supervised class-imbalanced learning~\cite{jamal2020rethinking}. This empirical finding motivates us to exploit the high precision of minority classes to alleviate their recall degradation. To achieve this goal, we introduce CReST, a class-rebalancing self-training framework illustrated in Fig.~\ref{fig:flowchart}.
\subsection{Class-rebalancing self-training}
Self-training~\cite{st-1, st-2} is an iterative method widely used in SSL.
It trains the model for multiple generations, where each generation involves two steps. First, the model is trained on the labeled set to obtain a teacher model.
Second, the teacher model's predictions are used to generate pseudo-labels $\hat{y}_m$ for unlabeled data $u_m$. The pseudo-labeled set $\hat{\mathcal{U}}\,{=}\,\big\{(u_m, \hat{y}_m)\big\}_{m=1}^M$ is included into the labeled set, \ie, $\mathcal{X}^\prime \,{=}\, \mathcal{X} \,{\cup}\, \hat{\mathcal{U}}$, for the next generation.
To accommodate the class-imbalance, we propose two modifications to the self-training strategy. First, instead of solely training on the labeled data, we use SSL algorithms to exploit both labeled and unlabeled data to get a better teacher model in the first step. More importantly, in the second step, rather than including every sample in $\hat{\mathcal{U}}$ in the labeled set, we instead expand the labeled set with a selected subset $\hat{\mathcal{S}} \,{\subset}\, \hat{\mathcal{U}}$, \ie, $\mathcal{X}^\prime \,{=}\, \mathcal{X} \,{\cup}\, \hat{\mathcal{S}}$.
We choose $\hat{\mathcal{S}}$ following a class-rebalancing rule: the less frequent a class $l$ is, the more unlabeled samples that are predicted as class $l$ are included into the pseudo-labeled set $\hat{\mathcal{S}}$.
We estimate the class distribution from the labeled set. Specifically, unlabeled samples that are predicted as class $l$ are included into $\hat{\mathcal{S}}$ at the rate of
\begin{equation}
\mu_l = \big(\frac{N_{L+1-l}}{N_1}\big)^\alpha\,,
\end{equation}
where $\alpha\,{\geq}\,0 $ tunes the sampling rate and thus the size of $\hat{\mathcal{S}}$. For instance, for a 10-class imbalanced dataset with imbalance ratio of $\gamma\,{=}\,\frac{N_1}{N_{10}}\,{=}\,100$, we keep all samples predicted as the most minority class since $\mu_{10} \,{=}\, (\frac{N_{10+1-10}}{N_1})^\alpha\,{=}\,1$. While for the most majority class, $\mu_{1}\,{=}\,(\frac{N_{10+1-1}}{N_1})^\alpha \,{=}\, 0.01^\alpha$ of samples are selected. When $\alpha \,{=}\, 0$, $\mu_l \,{=}\, 1$ for all $l$, then all unlabeled samples are kept and the algorithm falls back to the conventional self-training. When selecting pseudo-labeled samples in each class, we take the most confident ones.
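This sampling rule can be sketched as follows (the function name and the example counts are illustrative, not taken from the paper's released code):

```python
import numpy as np

def sampling_rates(class_counts, alpha):
    """Per-class inclusion rates mu_l = (N_{L+1-l} / N_1) ** alpha.

    class_counts must be sorted in descending order (N_1 >= ... >= N_L),
    following the paper's convention; rates are returned for l = 1..L,
    so the most minority class always gets rate 1.
    """
    counts = np.asarray(class_counts, dtype=float)
    # Reversing pairs class l with the count of class L+1-l.
    return (counts[::-1] / counts[0]) ** alpha

# 10-class dataset with imbalance ratio 100: with alpha = 1, keep 1% of the
# pseudo-labels predicted as the most majority class, but all of those
# predicted as the most minority class.
counts = [int(5000 * 100 ** (-l / 9)) for l in range(10)]
mu = sampling_rates(counts, alpha=1.0)
print(mu[0], mu[-1])  # 0.01 1.0
```

With `alpha = 0` every rate is 1 and the procedure reduces to conventional self-training, as stated in the text.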
The motivation of our CReST strategy is two-fold. First, as observed in Sec.~\ref{sec:model-bias}, the precision of minority classes is much higher than that of majority classes, hence minority class pseudo-labels are less risky to include in the labeled set. Second, adding samples to minority classes is more critical due to data scarcity. With more samples from minority classes added, the labeled set is more class-balanced, which leads to a less biased classifier for online pseudo-labeling in the subsequent generation. Note that there are other ways of sampling the pseudo-labels in a class-balancing fashion and we provide a practical and effective example.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{img/flowchart.pdf}
\caption{
CReST (Class-Rebalancing Self-Training) alternatingly trains a baseline SSL algorithm on both labeled and unlabeled data and expands the labeled set by sampling pseudo-labeled unlabeled data. Sampling rates for majority and minority classes are adaptively determined based on the quality of pseudo-labels. See text for details.}
\label{fig:flowchart}
\vspace{-0.2cm}
\end{figure}
\begin{table*}[ht]
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccccccccc}
\toprule
& \multicolumn{6}{c}{CIFAR10-LT} & \multicolumn{4}{c}{CIFAR100-LT} \\
\cmidrule(l{3pt}r{3pt}){2-7} \cmidrule(l{3pt}r{3pt}){8-11}
& \multicolumn{3}{c}{$\beta\,{=}\,10\%$} & \multicolumn{3}{c}{$\beta\,{=}\,30\%$} & \multicolumn{2}{c}{$\beta\,{=}\,10\%$} & \multicolumn{2}{c}{$\beta\,{=}\,30\%$} \\
\cmidrule(l{3pt}r{3pt}){2-4} \cmidrule(l{3pt}r{3pt}){5-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11}
Method & $\gamma\,{=}\,50$ & $\gamma\,{=}\,100$ & $\gamma\,{=}\,200$ & $\gamma\,{=}\,50$ & $\gamma\,{=}\,100$ & $\gamma\,{=}\,200$ & $\gamma\,{=}\,50$ & $\gamma\,{=}\,100$ & $\gamma\,{=}\,50$ & $\gamma\,{=}\,100$ \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4} \cmidrule(l{3pt}r{3pt}){5-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11}
FixMatch~\cite{fixmatch} & 79.4\ms{0.65} & 66.3\ms{1.74} & 59.7\ms{0.74} & 81.9\ms{0.30} & 73.1\ms{0.58} & 64.7\ms{0.69} & 33.7\ms{0.94} & 28.3\ms{0.66} & 43.1\ms{0.24} & 38.6\ms{0.45}\\
w/ CReST & 83.8\ms{0.45} & 75.9\ms{0.62} & 64.1\ms{0.23} & 84.2\ms{0.13} & 77.6\ms{0.86} & 67.7\ms{0.82} & 37.4\ms{0.29} & 32.1\ms{1.52} & 45.6\ms{0.19} & 40.2\ms{0.53}\\
w/ CReST+ & \textbf{84.2}\ms{0.39} & \textbf{78.1}\ms{0.84} & \textbf{67.7}\ms{1.39} & \textbf{84.9}\ms{0.27} & \textbf{79.2}\ms{0.20} & \textbf{70.5}\ms{0.56} & \textbf{38.8}\ms{1.03} & \textbf{34.6}\ms{0.74} & \textbf{46.7}\ms{0.34} & \textbf{42.0}\ms{0.44}\\
\bottomrule
\end{tabular}
}%
\end{center}
\caption{Classification accuracy (\%) on CIFAR10-LT and CIFAR100-LT under various label fraction $\beta$ and imbalance ratio $\gamma$. The numbers are averaged over 5 different folds.}
\label{tab:cifar-main-results}
\vspace{-0.2cm}
\end{table*}
\subsection{Progressive distribution alignment}
\label{sec:adaptive-da}
We further improve the quality of online pseudo-labels by additionally introducing progressive distribution alignment into CReST and distinguish it as CReST+.
While first introduced for class-balanced SSL, Distribution Alignment (DA)~\cite{remixmatch} fits with class-imbalanced scenarios particularly well. DA refines the original, biased online pseudo-labels so that the aggregate of refined pseudo-labels aligns with a target distribution, which is usually $p(y)$, the class distribution estimated from the labeled set. Let $q\,{=}\,p(y | u_m;f)$ be the original pseudo-label for an unlabeled example $u_m$, and let $\tilde{p}(y)$ be the moving average of all original pseudo-labels. DA first scales $q$ by the ratio $\frac{p(y)}{\tilde{p}(y)}$, aligning $q$ with the target distribution $p(y)$. It then re-normalizes the scaled result to form a valid probability distribution: $\tilde{q}\,{=}\,\text{Normalize}(q \frac{p(y)}{\tilde{p}(y)})$, where $\text{Normalize}(x)_i\,{=}\,x_i / \sum_j x_j$. $\tilde{q}$ is used as the refined online pseudo-label for $u_m$ instead of $q$. The refined pseudo-labels are less biased towards majority classes, and so is the model trained on these pseudo-labels.
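The DA refinement step can be sketched as follows; this is a minimal illustration of the arithmetic only (in practice $\tilde{p}(y)$ is a moving average maintained during training, and the example distributions here are invented):

```python
import numpy as np

def distribution_alignment(q, target, p_model):
    """Refine one pseudo-label q so aggregate predictions align with `target`.

    q:        model's predictive distribution for one unlabeled example
    target:   target class distribution p(y), estimated from the labeled set
    p_model:  moving average of the model's predictions, p~(y)
    """
    scaled = q * target / p_model      # scale by the ratio p(y) / p~(y)
    return scaled / scaled.sum()       # re-normalize to a valid distribution

q = np.array([0.7, 0.2, 0.1])          # prediction biased towards class 0
target = np.array([0.5, 0.3, 0.2])     # labeled-set class distribution
p_model = np.array([0.8, 0.15, 0.05])  # model over-predicts class 0
print(distribution_alignment(q, target, p_model))
```

The refined pseudo-label shifts probability mass away from the over-predicted majority class, which is exactly the de-biasing effect described above.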
To further enhance DA's ability to handle class-imbalanced data, we extend it with temperature scaling. Specifically, we add a tuning knob $t\,{\in}\, [0,1]$ that controls the class-rebalancing strength of DA. Instead of directly taking $p(y)$ as target, we use a temperature-scaled distribution $\text{Normalize}(p(y)^t)$. When $t\,{=}\,1$, we recover DA. When $t\,{<}\,1$, the temperature-scaled distribution becomes smoother and balances the model's predictive distribution more aggressively. When $t\,{=}\,0$, the target distribution becomes uniform.
While using a smaller $t$ can benefit a single generation under a class-balanced test criterion, it is less desirable for multiple generations of self-training since it affects the quality of pseudo-labels. Specifically, applying $t\,{<}\,1$ forces the model's predictive distribution to be more balanced than the class distribution of the training set, leading the model to predict minority classes more frequently.
However, on an imbalanced training set with few samples of minority classes, such pseudo-labeling tends to be over-balanced, \ie, more samples are wrongly predicted as minority classes. This decreases the high precision of minority classes, interfering with our ability to exploit it to produce better pseudo-labels.
To handle this, we propose to progressively increase the strength of class-rebalancing by decreasing $t$ over generations. Specifically, we set $t$ by a linear function of the current generation $g$ which indexes from 0:
\begin{equation}
t_g = (1-\frac{g}{G})\cdot 1.0 + \frac{g}{G} \cdot t_{\text{min}}\,,
\end{equation}
where $G\,{+}\,1$ is the total number of generations and $t_{\text{min}}$ is the temperature used for the last generation. This progressive schedule for $t$ enjoys both high precision of pseudo-labels in early generations, and stronger class-rebalancing in late generations. It also speeds up the iterative training, obtaining better results with fewer generations of training. See Sec.~\ref{sec:ablation} for empirical analysis.
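The progressive schedule and the temperature-scaled target can be sketched together (function names and the example distribution are illustrative):

```python
import numpy as np

def temperature(g, G, t_min):
    """Linearly anneal t from 1.0 at generation 0 down to t_min at generation G."""
    return (1 - g / G) * 1.0 + (g / G) * t_min

def scaled_target(p_y, t):
    """Temperature-scaled target distribution Normalize(p(y) ** t)."""
    p = np.asarray(p_y, dtype=float) ** t
    return p / p.sum()

p_y = np.array([0.6, 0.3, 0.1])  # skewed labeled-set class distribution
print(scaled_target(p_y, temperature(0, 5, 0.5)))  # generation 0: t = 1, plain DA
print(scaled_target(p_y, 0.0))                     # t = 0: uniform target
```

At generation 0 the target is just $p(y)$ (plain DA); as $t$ decreases over generations the target flattens, strengthening the class-rebalancing.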
\section{Experiments}
\subsection{CIFAR-LT}
\paragraph{Datasets.} We first evaluate the efficacy of the proposed method on long-tailed CIFAR10 (CIFAR10-LT) and long-tailed CIFAR100 (CIFAR100-LT) introduced in \cite{cbloss,ldam}. On these datasets, training images are randomly discarded per class to maintain a pre-defined imbalance ratio $\gamma$. Specifically, $N_l\,{=}\,\gamma^{-\frac{l-1}{L-1}}\,{\cdot}\, N_1$ while $N_1\,{=}\,5000$, $L\,{=}\,10$ for CIFAR10-LT and $N_1\,{=}\,500$, $L\,{=}\,100$ for CIFAR100-LT. We randomly select $\beta\,{=}\,10$\% and $30$\% of samples from training data to create the labeled set, and test imbalance ratio $\gamma\,{=}\,50$, $100$ and $200$ for CIFAR10-LT and $\gamma\,{=}\,50$ and $100$ for CIFAR100-LT. The test set remains untouched and balanced, so that the evaluated criterion, accuracy on the test set, is class-balanced.
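The per-class counts described above can be generated with a short sketch (a reconstruction of the stated formula, not the benchmark's released code):

```python
def lt_class_counts(n1, num_classes, gamma):
    """Long-tailed counts N_l = gamma ** (-(l-1)/(L-1)) * N_1 for l = 1..L."""
    L = num_classes
    return [int(n1 * gamma ** (-(l - 1) / (L - 1))) for l in range(1, L + 1)]

# CIFAR10-LT with imbalance ratio gamma = 100: counts decay exponentially
# from N_1 = 5000 down to N_10 = 50.
counts = lt_class_counts(5000, 10, 100)
print(counts[0], counts[-1])  # 5000 50
```

Labeled sets are then formed by sampling the fraction $\beta$ of each class, keeping the same imbalance ratio in labeled and unlabeled data.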
\paragraph{Setup.} We use Wide ResNet-28-2~\cite{wrn} following \cite{realistic, fixmatch} as the backbone. We evaluate our method on FixMatch and MixMatch. For each generation, the model is trained for $2^{16}$ steps with level 1 half-cosine decay~\cite{fixmatch} when using FixMatch as the baseline SSL algorithm, while $2^{17}$ steps with level 5 half-cosine decay for MixMatch. Other hyper-parameters for each training generation are untouched. For CReST and CReST+ related hyper-parameters, we set $\alpha\,{=}\,1\,{/}\,3$, $t_{\text{min}}\,{=}\,0.5$ for FixMatch and $\alpha\,{=}\,1\,{/}\,2$, $t_{\text{min}}\,{=}\,0.8$ for MixMatch. CReST takes 15 generations, while CReST+ only takes 6 generations accelerated by progressive distribution alignment.
The hyper-parameters are selected based on a single fold of CIFAR10-LT with $\gamma\,{=}\,100$ and $\beta\,{=}\,10\%$.
For all algorithms, we evaluate the model on the test dataset every $2^{10}$ steps and report the average test accuracy of the last 5 evaluations.
Each algorithm is tested under 5 different folds of labeled data and we report the mean and the standard deviation of accuracy on the test set. Following \cite{mixmatch} and \cite{fixmatch}, we report final performance using an exponential moving average of model parameters.
\paragraph{Main results.} First, we compare our model with baseline FixMatch, and present the results in Table~\ref{tab:cifar-main-results}. Although FixMatch performs reasonably well on imbalance ratio $\gamma\,{=}\,50$, its accuracy decreases significantly with increasing imbalance ratio.
In contrast, CReST improves the accuracy of FixMatch on all evaluated settings and achieves as much as 9.6\% absolute performance gain. When incorporating progressive distribution alignment, our CReST+ model is able to further boost the performance on all settings by another few points, resulting in 3.0\% to 11.8\% absolute accuracy improvement compared to baseline FixMatch.
The accuracy of all compared methods improves with increasing number of labeled samples, but CReST consistently outperforms the baseline.
This indicates that CReST can better utilize labeled data to reduce model bias under imbalanced class-distribution.
We also observe that our model works particularly well and achieves 11.8\% and 6.1\% accuracy gains for imbalance ratio $\gamma\,{=}\,100$ with $10\%$ and $30\%$ labeled data, respectively. We hypothesize that this is because our model finds more correctly pseudo-labeled samples to augment the labeled set when the imbalance ratio is moderate. However, when the imbalance ratio is very high, \eg, $\gamma\,{=}\,200$, our model's capability is constrained by the insufficient number of training samples from minority classes.
\begin{table}[ht]
\begin{center}
\begin{tabular}{lccc}
\toprule
Method & $\gamma\,{=}\,50$ & $\gamma\,{=}\,100$ & $\gamma\,{=}\,200$ \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
Pseudo-Labeling~\cite{pseudolabeling} & 52.5\ms{0.74} & 46.5\ms{1.29} & 42.0\ms{1.39} \\
Mean Teacher~\cite{meanteacher} & 57.1\ms{3.00} & 48.1\ms{0.71} & 45.1\ms{1.28} \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
MixMatch~\cite{mixmatch} & 69.1\ms{1.18} & 60.4\ms{2.24} & 54.5\ms{1.87} \\
w/ CReST & 69.8\ms{1.06} & 60.5\ms{1.56} & 55.2\ms{2.25} \\
w/ CReST+ & 76.7\ms{0.35} & 66.1\ms{0.79} & 57.6\ms{1.30} \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
FixMatch~\cite{fixmatch} & 80.1\ms{0.44} & 67.3\ms{1.19} & 59.7\ms{0.63} \\
w/ CB~\cite{cbloss} & 80.2\ms{0.45} & 67.6\ms{1.88} & 60.8\ms{0.26} \\
w/ RS~\cite{rs-1,rs-2} & 80.2\ms{0.78} & 69.6\ms{1.30} & 60.9\ms{1.25} \\
w/ DA~\cite{remixmatch} ($t\,{=}\,1.0$) & 80.2\ms{0.45} & 69.7\ms{1.27} & 62.0\ms{0.84} \\
w/ DA~\cite{remixmatch} ($t\,{=}\,0.5$) & 82.4\ms{0.33} & 73.6\ms{0.63} & 63.7\ms{1.17} \\
w/ LA~\cite{la} & 83.2\ms{0.87} & 70.4\ms{2.90} & 62.4\ms{1.24} \\
w/ CReST & 83.2\ms{0.37} & 74.8\ms{1.09} & 63.4\ms{0.32} \\
w/ CReST+ & 84.2\ms{0.39} & 78.1\ms{0.84} & 67.7\ms{1.39} \\
w/ CReST+ \& LA & \textbf{85.6}\ms{0.36} & \textbf{81.2}\ms{0.70} & \textbf{71.9}\ms{2.24} \\
\bottomrule
\end{tabular}
\end{center}
\caption{We compare CReST and CReST+ with baseline methods including different SSL algorithms and typical class-rebalancing techniques designed for fully-supervised learning. All models are trained with the same number of steps, with or without self-training, for fair comparison. Three different imbalance ratios $\gamma$ with $\beta\,{=}\,10\%$ labels are evaluated and the numbers are averaged over 5 different folds.}
\label{tab:cifar-other-baselines}
\vspace{-0.2cm}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{lccc}
\toprule
Method & $\gamma\,{=}\,50$ & $\gamma\,{=}\,100$ & $\gamma\,{=}\,150$ \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
Supervised & 65.2\ms{0.05} & 58.8\ms{0.13} & 55.6\ms{0.43} \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
MixMatch~\cite{mixmatch} & 73.2\ms{0.56} & 64.8\ms{0.28} & 62.5\ms{0.31} \\
w/ DARP~\cite{darp} & 75.2\ms{0.47} & 67.9\ms{0.14} & 65.8\ms{0.52} \\
w/ CReST & 78.4\ms{0.36} & 70.0\ms{0.49} & 64.7\ms{0.96} \\
w/ CReST+ & \textbf{79.0}\ms{0.26} & \textbf{71.9}\ms{0.33} & \textbf{68.3}\ms{0.57} \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
FixMatch~\cite{fixmatch} & 79.2\ms{0.33} & 71.5\ms{0.72} & 68.4\ms{0.15} \\
w/ DARP~\cite{darp} & 81.8\ms{0.24} & 75.5\ms{0.05} & 70.4\ms{0.25} \\
w/ CReST & 83.0\ms{0.39} & 75.7\ms{0.38} & 70.8\ms{0.25} \\
w/ CReST+ & \textbf{83.9}\ms{0.14} & \textbf{77.4}\ms{0.36} & \textbf{72.8}\ms{0.58} \\
\bottomrule
\end{tabular}
\end{center}
\caption{Accuracy (\%) under DARP's protocol~\cite{darp} on CIFAR10. Three different imbalance ratios $\gamma$ with $\beta\,{=}\,10\%$ labels are evaluated. Numbers are averaged over 5 runs.}
\label{tab:cifar-darp}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{lccc}
\toprule
Method & Gen$_1$ & Gen$_2$ & Gen$_3$ \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-4}
Supervised (100\% labels) & 75.8 & - & - \\
Supervised (10\% labels) & 46.0 & - & - \\
FixMatch (10\% labels) & 65.8 & - & - \\
w/ DA ($t\,{=}\,0.5$) & 69.1 & - & - \\
w/ CReST & 65.8 & 67.6 & 67.7 \\
w/ CReST+ & 68.3 & 70.7 & \textbf{73.7} \\
\bottomrule
\end{tabular}
\end{center}
\caption{Evaluating the proposed method on ImageNet127 with $\beta\,{=}\,10\%$ of samples labeled. We retrain FixMatch models for 3 generations with our CReST and CReST+.}
\label{tab:imagenet127}
\vspace{-0.4cm}
\end{table}
\paragraph{Comparison with baselines.} We further report the performance of other SSL baselines in Table~\ref{tab:cifar-other-baselines}. For fair comparison, all algorithms are trained for $6\,{\times}\,2^{16}$ steps. For CReST and CReST+ on a FixMatch base, we report the performance of the 6-\textit{th} generation. For our method on a MixMatch base, the results of the 3-\textit{rd} generation are reported.
We first directly evaluate several classic SSL methods on class-imbalanced datasets, including Pseudo-Labeling~\cite{pseudolabeling}, Mean Teacher~\cite{meanteacher}, MixMatch~\cite{mixmatch} and FixMatch~\cite{fixmatch}.
All the SSL baselines suffer from low accuracy due to imbalanced data, and the accuracy drop becomes more pronounced with increasing imbalance ratio.
On MixMatch, the improvement provided by CReST is modest, mainly due to the schedule constraint; given a larger generation budget, the results of MixMatch with CReST can be further improved.
Among these algorithms, FixMatch achieves the best performance, so we take it as the baseline for various rebalancing methods.
We consider typical class-rebalancing methods designed for fully-supervised learning that can be directly applied in SSL algorithms including 1) Class-Balanced loss (CB)~\cite{cbloss}, a representative of re-weighting strategies in which labeled examples are re-weighted according to the inverse of the effective number of samples in each class; 2) Re-Sampling (RS)~\cite{rs-1, rs-2}, a representative of re-sampling strategies in which each labeled example is sampled with probability proportional to the inverse sample size of its class. We also consider Distribution Alignment (DA)~\cite{remixmatch} as described in Section~\ref{sec:adaptive-da} and Logit Adjustment (LA)~\cite{la}, an ad-hoc post-processing technique to enhance models' discriminative ability on minority classes by adjusting the logits of model predictions.
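The RS baseline's inverse-frequency sampling admits a short sketch (names ours): each labeled example is weighted by the reciprocal of its class size, so every class receives equal total sampling mass.

```python
def example_probs(labels, counts):
    """Sampling probability of each labeled example, proportional to the
    inverse size of its class; each class then receives equal total mass."""
    L = len(counts)
    return [1.0 / (counts[y] * L) for y in labels]

# Two classes of sizes 2 and 1: the lone minority example is drawn
# as often as the two majority examples combined.
probs = example_probs([0, 0, 1], [2, 1])
```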
While CB, RS, DA and LA all improve accuracy over SSL baselines, the gain is relatively small. With CReST and CReST+, we successfully improve the accuracy for all imbalance ratios, by up to 10.8\% over FixMatch, outperforming all compared SSL baselines and class-rebalancing methods.
Finally, applying LA as the post-processing correction of our CReST+ models further gives consistent accuracy gains, producing the best results.
\paragraph{Comparison with DARP.} We directly compare with DARP~\cite{darp}, the most recent state-of-the-art SSL algorithm specifically designed for imbalanced data. Both DARP and our method are built upon MixMatch and FixMatch as drop-in additions to standard SSL algorithms.
We apply our method on exactly the same datasets used in DARP and present the results in Table~\ref{tab:cifar-darp}. For all three imbalance ratios, our model consistently achieves up to 4.0\% accuracy gain over DARP on MixMatch, and up to 2.4\% accuracy gain on FixMatch.
\subsection{ImageNet127}
\paragraph{Datasets.} We also evaluate CReST on ImageNet127~\cite{mergedIN, nca} to verify its performance on large-scale datasets.
ImageNet127 is originally introduced in \cite{mergedIN}, where the 1000 classes of ImageNet~\cite{in} are grouped into 127 classes based on their top-down hierarchy in WordNet. It is a naturally imbalanced dataset with imbalance ratio $\gamma\,{=}\,286$. Its most majority class ``mammal'' consists of 218 original classes and 277,601 training images, while its most minority class ``butterfly'' is formed by a single original class with 969 training examples. We randomly select $\beta\,{=}\,10$\% of training samples as the labeled set and keep the test set unchanged. Due to class grouping, the test set is not balanced. Therefore, we compute averaged class recall instead of accuracy to obtain a balanced metric.
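Averaged class recall, the balanced metric used for ImageNet127, can be sketched as follows (a minimal illustration, not the evaluation code used in the experiments):

```python
def mean_class_recall(y_true, y_pred):
    """Average of per-class recalls; unlike plain accuracy, this is
    insensitive to class-frequency skew in the test set."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```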
We note that there are other large-scale datasets like iNaturalist~\cite{inaturalists} and ImageNet-LT~\cite{oltr} which often serve as testbeds for fully-supervised long-tailed recognition algorithms. However, these datasets contain too few examples of minority classes to form a statistically meaningful dataset and draw reliable conclusions for semi-supervised learning. For example, there are only 5 examples in the most minority class of the ImageNet-LT dataset.
\paragraph{Setup.} We use ResNet50~\cite{resnet} as the backbone.
The hyper-parameters for each training generation are adopted from the original FixMatch paper.
The model is self-trained for 3 generations with $\alpha\,{=}\,0.7$ and $t_{\text{min}}\,{=}\,0.5$.
\paragraph{Results.} We report results in Table~\ref{tab:imagenet127}. Supervised learning with 100\% and 10\% labeled training examples and DA with temperature scaling are also presented as reference. Comparing with the baseline FixMatch, both CReST and CReST+ progressively improve over 3 generations of self-training, while CReST+ provides 7.9\% absolute gain in the end, which verifies the efficacy of our proposed method.
\begin{figure}
\subfigure[]{
\includegraphics[width=0.415\linewidth]{img/illustration_p.pdf}
\label{fig:ablation-alpha-a}
}
\subfigure[]{
\includegraphics[width=0.545\linewidth]{img/ablation_p.pdf}
\label{fig:ablation-alpha-b}
}
\caption{Effect of $\alpha$ across multiple generations on CIFAR10-LT ($\gamma\,{=}\,100$, $\beta\,{=}\,10\%$) in CReST. \subref{fig:ablation-alpha-a} Illustration of how $\alpha$ influences sampling rate. \subref{fig:ablation-alpha-b} Test accuracy over generations with different $\alpha$. When $\alpha\,{=}\,0$, the method falls back to conventional self-training with all the unlabeled examples and corresponding pseudo-labels added into the labeled set, showing no improvement after generations of retraining, whereas our class-rebalancing sampling ($\alpha\,{>}\,0$) helps.}
\label{fig:ablation-alpha}
\end{figure}
\begin{figure}
\subfigure[]{
\includegraphics[width=0.425\linewidth]{img/illustration_t.pdf}
\label{fig:ablation-t-a}
}
\subfigure[]{
\includegraphics[width=0.54\linewidth]{img/ablation_t.pdf}
\label{fig:ablation-t-b}
}
\caption{Effect of temperature $t$ across multiple generations on CIFAR10-LT ($\gamma\,{=}\,100$, $\beta\,{=}\,10\%$). \subref{fig:ablation-t-a} Illustration of how $t$ controls the target distribution of distribution alignment. \subref{fig:ablation-t-b} Test accuracy over generations with different constant $t$ and our CReST+ using progressive $t$. Compared to using a constant $t$, CReST+ achieves the best final accuracy by progressing from $t\,{=}\,1.0$ to $t_{\text{min}}\,{=}\,0.5$ over 6 generations.}
\label{fig:ablation-t}
\vspace{-0.3cm}
\end{figure}
\begin{table*}[ht]
\begin{center}
\begin{tabular}{lcrrrrrrrrrrr}
\toprule
Method /\ Class & Split & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & Avg. \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-2} \cmidrule(l{3pt}r{3pt}){3-12} \cmidrule(l{3pt}r{3pt}){13-13}
FixMatch~\cite{fixmatch} & test &\textbf{98.7} & \textbf{99.5} & \textbf{90.0} & \textbf{83.5} & 85.0 & 47.6 & 69.9 & 59.0 & 8.9 & 7.2 & 64.9 \\
w/ CReST & test &97.7 & 98.3 & 88.8 & 81.9 & \textbf{88.2} & 59.7 & 79.5 & 61.2 & 47.0 & 47.9 & 75.0 \\
\rowcolor{Gray} & &-1.0 & -1.2 & -1.2 & -1.6 & +3.2 & +12.1& +9.6 & +2.2 & +38.1& +40.7& +10.1\\
w/ CReST+ & test &93.8 & 97.7 & 87.3 & 76.9 & 87.5 & \textbf{69.2} & \textbf{84.9} & \textbf{67.9} & \textbf{60.3} & \textbf{70.8} & \textbf{79.6} \\
\rowcolor{Gray} & &-4.9 & -1.8 & -2.7 & -6.6 & +2.5 & +21.6& +15.0& +8.9 & +51.4& +63.6& +14.7 \\
\cmidrule(l{3pt}r{3pt}){1-1} \cmidrule(l{3pt}r{3pt}){2-2} \cmidrule(l{3pt}r{3pt}){3-12} \cmidrule(l{3pt}r{3pt}){13-13}
FixMatch~\cite{fixmatch} & unlabeled &\textbf{98.5} & \textbf{99.1} & \textbf{90.0} & \textbf{84.0} & 84.7 & 49.7 & 64.9 & 65.6 & 14.9 & 22.2 & 67.4 \\
w/ CReST & unlabeled &97.8 & 96.8 & \textbf{90.0} & 82.9 & 87.4 & 62.4 & 79.3 & 64.8 & 60.8 & 66.7 & 78.9 \\
\rowcolor{Gray} & &-0.7 & -2.3 & 0 & -1.1 & +2.7 & +12.7& +14.4& -0.8 & +45.9& +44.5& +11.5\\
w/ CReST+ & unlabeled &92.2 & 95.7 & 86.1 & 76.7 & \textbf{87.6} & \textbf{68.1} & \textbf{85.1} & \textbf{71.2} & \textbf{75.7} & \textbf{75.6} & \textbf{81.4} \\
\rowcolor{Gray} & &-6.3 & -3.4 & -3.9 & -7.3 & +2.9 & +18.4& +20.2& +5.6 & +60.8& +53.4& +14.0 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Per-class recall (\%) on the balanced test set and the imbalanced unlabeled set of CIFAR10-LT ($\gamma\,{=}\,100$, $\beta\,{=}\,10\%$). Our strategies compromise small loss on majority classes for significant gain on minority classes, leading to improved averaged recall over all classes.}
\label{tab:ablation-break-down}
\vspace{-0.2cm}
\end{table*}
\subsection{Ablation study}
\label{sec:ablation}
We perform an extensive ablation study to evaluate and understand the contribution of each critical component in CReST and CReST+. The experiments in this section are all performed with FixMatch on CIFAR10-LT with imbalance ratio $\gamma\,{=}\,100$, label fraction $\beta\,{=}\,10\%$ and a single fold of labeled data.
\paragraph{Effect of sampling rate.}
CReST introduces the sampling rate hyper-parameter $\alpha$ that controls the per-class sampling rate and the number of selected pseudo-labeled samples to be included in the labeled set. In Fig.~\ref{fig:ablation-alpha} we show how $\alpha$ influences performance over generations.
When $\alpha\,{=}\,0$, our method falls back to conventional self-training, which expands the labeled set with all unlabeled examples and their corresponding predicted labels.
However, conventional self-training yields no performance gain over multiple generations, showing that naively applying self-training is insufficient.
In contrast, with our class-rebalancing sampling strategy ($\alpha\,{>}\,0$), the accuracy can be improved by iterative model retraining.
As illustrated in Fig.~\ref{fig:ablation-alpha-a}, smaller $\alpha$ means more pseudo-labeled samples are added into the labeled set, which enlarges the labeled set but adversely introduces more low-quality pseudo-labels.
On the other hand, larger $\alpha$ biases pseudo-labeled samples towards minority classes.
As a result, class-rebalancing sampling can be too strong with large $\alpha$, leading to imbalance in the reversed direction, towards the original minority classes. This is the case for $\alpha\,{=}\,1$ where, after the 10-\textit{th} generation, the model becomes increasingly biased towards minority classes and suffers performance degradation on majority classes, resulting in decreased accuracy. For example, from the 10-\textit{th} generation to the last generation, the recall of the most minority class increases by a large margin from 55.0\% to 71.1\%, while 7 of the other 9 classes suffer severe recall degradation, resulting in a 3.0\% drop in class-balanced test accuracy.
Empirically, we find $\alpha\,{=}\,1\,{/}\,3$ achieves a balance between the quality of pseudo-labels and the class-rebalancing strength across different imbalance ratios and label fractions on long-tailed CIFAR datasets.
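One consistent instantiation of the $\alpha$-controlled sampling rate, matching the behavior described (with $\alpha\,{=}\,0$ everything is kept; larger $\alpha$ keeps relatively more minority-class pseudo-labels), is the power-law keep rate $(N_{L+1-l}/N_1)^{\alpha}$; the exact rule used in our experiments may be stated elsewhere, so treat this as an illustrative sketch:

```python
def keep_rates(counts, alpha):
    """counts: per-class sizes sorted descending (class 1 = most frequent).
    Class l keeps a fraction (N_{L+1-l} / N_1)^alpha of its pseudo-labels,
    so rarer classes are expanded more aggressively (rate 1.0 for the tail)."""
    n1, L = counts[0], len(counts)
    return [(counts[L - l] / n1) ** alpha for l in range(1, L + 1)]
```

With `alpha = 0` every rate is 1.0, recovering conventional self-training; with `alpha = 1` the head class keeps only $N_L/N_1$ of its pseudo-labels while the tail class keeps all of them.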
\paragraph{Effect of progressive temperature scaling.} The proposed adaptive distribution alignment used in CReST+ introduces another hyper-parameter, temperature $t$, that scales the target distribution.
We first illustrate in Fig.~\ref{fig:ablation-t-a} how temperature $t$ smooths the target distribution in distribution alignment so that smaller $t$ provides stronger re-balancing strength. In Fig.~\ref{fig:ablation-t-b}, we study the effect of using a constant temperature and our proposed progressive temperature scaling in which $t$ gradually decreases from $1.0$ to $t_{\min}\,{=}\,0.5$ across generations of self-training.
First, we notice that $t\,{=}\,0.5$ provides the best \textit{single}-generation accuracy of 75.1\% among all tested temperature values, compared with the 70.0\% accuracy of the original distribution alignment whose temperature $t$ is fixed at $1.0$. This suggests that the model can benefit from class re-balancing with a properly ``smoothed'' target distribution.
Further decreasing $t$ to $0.1$ results in lower accuracy, as the target distribution is overly smoothed, which introduces more pseudo-labeling errors.
Over multiple generations of self-training, using a constant $t$ is not optimal. Although a relatively small $t$ (\eg, 0.5) can give better performance in early generations, it cannot provide further gains through continued self-training due to decreased pseudo-label quality.
When $t$ is lower than 0.5, performance can even degrade after certain later generations.
In contrast, the proposed CReST+, which progressively enhances the distribution alignment strength, provides the best accuracy at the last generation.
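A natural instantiation of the progressive schedule is linear annealing of $t$ from 1.0 down to $t_{\min}$ across generations; the exact schedule may differ, so this is a sketch under that assumption:

```python
def temperature_schedule(num_gens, t_min=0.5):
    """Linearly anneal t from 1.0 (plain distribution alignment) down to
    t_min (strongest rebalancing) over num_gens generations."""
    G = num_gens - 1
    return [1.0 - (g / G) * (1.0 - t_min) for g in range(num_gens)]

# Six generations with t_min = 0.5: 1.0, 0.9, 0.8, 0.7, 0.6, 0.5.
ts = temperature_schedule(6, t_min=0.5)
```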
\paragraph{Per-class performance.} To show the source of accuracy improvements, in Table~\ref{tab:ablation-break-down} we present per-class recall on the balanced test set of CIFAR10-LT with imbalance ratio 100 and label fraction 10\%.
Both CReST and CReST+ sacrifice a few points of accuracy on four majority classes but provide significant gains on the other six minority classes, obtaining better performance over all classes.
We also include the results on the imbalanced unlabeled set. They closely mirror those on the test set, with a mild drop on majority classes and a remarkable improvement on minority classes. This suggests that our method indeed improves the quality of pseudo-labels, which translates into better generalization under a balanced test criterion.
\section{Conclusion}
In this work, we present a class-rebalancing self-training framework, named CReST, for imbalanced semi-supervised learning.
CReST is motivated by the observation that existing SSL algorithms produce high-precision pseudo-labels on minority classes.
CReST iteratively refines a baseline SSL model by supplementing the labeled set with high quality pseudo-labels, where minority classes are updated more aggressively than majority classes. Over generations of self-training, the model becomes less biased towards majority classes and focuses more on minority classes.
We also extend distribution alignment to progressively increase its class-rebalancing strength over generations and denote the combined method CReST+.
Extensive experiments on long-tailed CIFAR datasets and ImageNet127 dataset demonstrate that the proposed CReST and CReST+ improve baseline SSL algorithms by a large margin, and consistently outperform state-of-the-art rebalancing methods.
\paragraph{Acknowledgments.} We thank Zizhao Zhang, Yin Cui and Prannay Khosla for valuable advice. We thank Alex Kurakin for helping experiments on ImageNet, and Jinsung Yoon for the proofread of our manuscript.
\section{Introduction}
In the search for extrasolar planetary systems, $\beta$ Pictoris and other
Vega--type stars have become very popular and our present knowledge
is summarized in several reviews (Backman \& Paresce~\cite{bac:par};
Vidal--Madjar \& Ferlet~\cite{v-m:f};
Artymowicz~\cite{art94}, \cite{art96}, \cite{art97}). In the last review,
Artymowicz~(\cite{art97}) analyzed
three principal components of the $\beta$ Pictoris system: the star itself,
the circumstellar dust and the circumstellar gas. Although there
is an improved determination of the distance towards ${\beta}~\rm Pic$ from
the Hipparcos satellite (Crifo et al.~\cite{crifo}),
the main conclusions of the previous analyses remain in force.
In general, information on dust comes from measurements of extinction,
polarization, scattered light and thermal emission.
For scattering, the geometrical relation
between source, scatterer and observer is essential. Whereas it is ill
determined in reflection nebulae and allows only a very rough
derivation of the grain properties, the geometrical configuration in the
circumstellar disc of $\beta$ Pic is clear and simple. In the case of ${\beta}~\rm Pic$
one observes:
\begin{itemize}
\item [{\it i})] no circumstellar extinction (Crifo et al.~\cite{crifo});
\item [{\it ii})] very weak polarization of the star itself
(Tinbergen~\cite{tinb});
\item [{\it iii})] scattered light from the nearly edge--on disc
(Smith \& Terrile~\cite{s:t}; Kalas \& Jewitt~\cite{k:j95};
Mouillet et al.~\cite{mou97})
and its polarization (Gledhill et
al.~\cite{gledh}; Wolstencroft et al.~\cite{wo95});
\item [{\it iv})] an infrared excess (Aumann et al.~\cite{aum})
which extends up to 1300${\mu}\rm{m}$ (Chini et al.~\cite{chi91});
there are also mid IR images (Lagage \&
Pantin~\cite{l:p}; Pantin et al.~\cite{p:l:art97}).
\end{itemize}
Many numerical calculations have been performed to explain the IR emission of
${\beta}~\rm Pic$ (see detailed discussion in Li \& Greenberg~\cite{li:gre98}).
Depending on the wavelength range considered,
the observations were reproduced using compact or fluffy grains
with sizes ranging from below 1\,${\mu}\rm{m}$ up to 1\,mm.
With respect to modelling the scattering and polarization of light
from the disc of $\beta$ Pic, Artymowicz et al.~(\cite{art89}),
Kalas \& Jewitt~(\cite{k:j95}, \cite{k:j96}) and
Pantin et al.~(\cite{p:l:art97}) considered scattering only at one wavelength.
They assumed either isotropic or anisotropic scattering without
computing the asymmetry factor $g$. Scarrott et al.~(\cite{sca92}),
on the other hand, applied Mie theory and treated multi--wavelength scattering,
however, the polarization was only
calculated at the then available R--band.
Artymowicz~(\cite{art97}) reproduced the
observation in the V--band employing the empirical phase and polarization
function of zodiacal and cometary dust.
In this paper, we model scattering and polarization at all observed wavelengths
with particle cross sections computed from Mie theory. As a result, we are able
to constrain the properties of the grains and to exclude certain choices of dust
models which were hitherto thought possible.
\section{Observations of polarization and colours}
Imaging polarimetry of the ${\beta}~\rm Pic$ disc was performed by
Gledhill et al.~(\cite{gledh})
in the R waveband and by Wolstencroft et al.~(\cite{wo95})
in the B, V, R and I
wavebands. The polarization patterns are centro--symmetric and indicate that
${\beta}~\rm Pic$ itself is the illuminating source. The observational data are collected
in Fig.~\ref{fig1} and suggest an asymmetry in the polarizing
behaviour between the SW and NE sides, especially in the I--band.
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f1.eps}}
\caption[]{Results of the multi--wavelength measurements of the polarization
along the disc of ${\beta}~\rm Pic$.}
\label{fig1}
\end{figure} %
The wavelength dependence of polarization $P(\lambda)$
is typical of reflection nebulae and characterized by the increase of the
polarization degree with wavelength. The increase is rather weak in
the NE side and well pronounced in the SW side.
The polarization vectors are oriented perpendicular to the disc (Gledhill et
al.~\cite{gledh}; Wolstencroft~\cite{wo98}). Tinbergen~(\cite{tinb}) included ${\beta}~\rm Pic$ in the list of
zero polarization standard stars. Using a large diaphragm which included the
disc he found that the mean degree of polarization over the wavelength range
$0.4 - 0.7\,{\mu}\rm{m}$ is $P = 0.020 \pm 0.008\%$. This value is compatible with the
absence of material in front of the star under the assumption of a maximum
polarization efficiency ($P/A_V = 3$\%) and implies $A_V \simeq 0.006$\,mag.
The small polarization observed by Tinbergen~(\cite{tinb})
should be due to dust scattering in the disc.
In ordinary reflection nebulae, the colour of the scattered light is usually
bluer than the illuminating star at small offsets and redder at large distances
(Voshchinnikov~\cite{vo85}). Unfortunately, the disc colours of ${\beta}~\rm Pic$ were observed
only out to $\sim$12$''$ offset (Paresce \& Burrows~\cite{p:b}; Lecavelier des Etangs
et al.~\cite{lec}). The data of Fig.~\ref{fig2} indicate that the disc has the same colour as
the star or is slightly redder. The colours do not depend on position, the only
exception being the innermost point ($\varphi = 2\farcs5$). It falls into the
central gap which is presently the subject of various speculations.
\begin{figure} %
\resizebox{8.7cm}{!}{\includegraphics{8825f2.eps}}
\caption[]{Observed colours along the disc of ${\beta}~\rm Pic$.}
\label{fig2}
\end{figure} %
We thus conclude that the properties of the scattered light are not peculiar.
We therefore believe that the usual approach to the study of reflection
nebulae can also be applied to ${\beta}~\rm Pic$.
\section{Modelling}
\subsection{Polarization mechanism and disc model}
The variation of the position angle and the degree of polarization in the disc
of ${\beta}~\rm Pic$ speaks in favour of light scattering by spheres
or arbitrarily oriented non--spherical particles.
The disc is seen nearly edge--on. As it is optically thin,
we can choose a model of single scattering in an optically thin medium.
In this case, the radiation escaping
from the disc at the angular distance $\varphi$ from the star
is the integral along the line of sight over some range of scattering
angles ($\Theta_1, \, 180{\degr}-\Theta_1$), where
\begin{equation}
\Theta_1 = \arcsin \left(\frac{\varphi}{\varphi_{\rm max}} \right)
\label{eq1}
\end{equation}
and $\varphi_{\rm max}$ denotes the maximum angular size of the disc.
We will adopt $\varphi_{\rm max} = 45\farcs0$ (Kalas \& Jewitt~\cite{k:j95}).
With growing $\varphi$, the range of
scattering angles, which is always centered at $90{\degr}$,
becomes narrower. A slightly tilted disc orientation ($\sim$10${\degr}$)
would not change the picture of light scattering significantly.
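The mapping of Eq.~(\ref{eq1}) from offset to scattering-angle range can be checked numerically (a small sketch; function names are ours):

```python
import math

def scattering_range(phi, phi_max=45.0):
    """Scattering-angle limits (degrees) sampled along the line of sight
    at angular offset phi: (Theta_1, 180 - Theta_1), Theta_1 = asin(phi/phi_max)."""
    t1 = math.degrees(math.asin(phi / phi_max))
    return t1, 180.0 - t1

# Offsets 8.5" and 31.2" give roughly (11, 169) and (44, 136) degrees,
# matching the limits quoted for the polarization measurements.
lo, hi = scattering_range(8.5)
```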
\subsection{Step 1: polarization diagrams vs scattering angle}
As a first step in the modelling, we compute the polarization and scattered
intensity of an elementary volume. In Mie scattering by spherical particles,
such polarization at a given wavelength $\lambda$ depends on the refractive
index of the particle, $m_{\lambda} = n_{\lambda} - k_{\lambda}i$, and its
radius $a$,
\begin{equation}
P(m, a, \lambda, \Theta) = \frac{i_1(m, a, \lambda,
\Theta)-i_2(m,a,\lambda,\Theta)} {i_1(m, a, \lambda,
\Theta)+i_2(m,a,\lambda,\Theta)} \ .
\end{equation}
Here $i_1$ and $i_2$ are the dimensionless intensities which determine the
radiation scattered perpendicular and parallel to the scattering plane,
respectively (van de Hulst~\cite{vdh}; Bohren \& Huffman~\cite{b:h}).
Figure~\ref{fig3} shows the polarization diagrams at $\lambda=0.70\,{\mu}\rm{m}$
(R--band) for which the observational database is largest (Fig.~\ref{fig1}).
The particles considered are a mixture of silicate and ice.
They could be porous,
i.e.~contain some volume fraction of vacuum. Porous grains have been
discussed many times for the disc of ${\beta}~\rm Pic$
(see Artymowicz~\cite{art97}; Li \& Greenberg~\cite{li:gre98}).
The refractive indices used are specified
in Table~\ref{tab1}. Note that we chose the volume fractions of the grain
constituents in such a way that several of the refractive indices
in Table~\ref{tab1} are identical, although they refer to grains of
different chemical composition. The
cross sections of grains with a silicate core
and an ice mantle were computed from G\"uttler's~(\cite{gu52}) theory
for two--layered spheres.
\begin{figure} %
\resizebox{8.2cm}{!}{\includegraphics{8825f3.eps}}
\caption[]{
The degree of linear polarization for various kinds of particles. Their
composition and refractive indices are given in Table~\ref{tab1}.
The peak of the curve for $a=0.1\,{\mu}\rm{m}$ is always slightly shifted to
larger values of scattering angle relative to those for $a=0.01\,{\mu}\rm{m}$.}
\label{fig3}
\end{figure}
\begin{table*}
\caption[]{Grain refractive indices $m_R$ at $\lambda=0.70\,{\mu}\rm{m}$ }
\begin{flushleft}
\begin{tabular}{ll}
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Refractive index & ~~Particle composition \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$1.715 - 0.030i$ & ~~compact silicate grains \\
$1.508 - 0.020i$ &
$\left\{ \begin{array}{l}
{\rm composite \ \mbox{grains:} \ mixture \ of \ 50\% \ silicate} + 50\% \ {\rm ice} \\
{\rm porous \ \mbox{grains:} \ mixture \ of \ 72\% \ silicate} + 28\% \ {\rm vacuum}
\end{array} \right.$ \\
\noalign{\smallskip}
$1.310 - 0.010i$ &
$\left\{ \begin{array}{l}
{\rm compact \ grains \ of \ dirty \ ice} \\
{\rm porous \ \mbox{grains:} \ mixture \ of \ 45\% \ silicate} + 55\% \ {\rm vacuum}
\end{array} \right.$ \\
\noalign{\smallskip}
$1.152 - 0.005i$ &
$\left\{ \begin{array}{l}
{\rm porous \ \mbox{grains:} \ mixture \ of \ 50\% \ ice} + 50\% \ {\rm vacuum} \\
{\rm porous \ \mbox{grains:} \ mixture \ of \ 24\% \ silicate} + 76\% \ {\rm vacuum}
\end{array} \right.$ \\
$1.715 - 0.030i$/$1.310 - 0.010i$ & ~~core--mantle grains: silicate core $+$ ice
mantle \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
Optical constants for silicate are from Draine~(\cite{draine85}), for dirty ice from
Greenberg~(\cite{gre68}). \\
The optical constants for composite and porous grains were
obtained from the Bruggeman rule (Bohren \& Huffman~\cite{b:h}).
\end{flushleft}
\label{tab1}
\end{table*}
It is important to note that all polarization diagrams of Fig.~\ref{fig3}
are similar and independent of the particle composition and structure.
Small grains belong to the Rayleigh domain where the polarization
has the well known bell--like shape with the maximum at $\Theta = 90{\degr}$.
Polarization diagrams for very large
spheres are also simple. They resemble smooth curves with the maximum at
$\Theta \approx 70{\degr}$. In both cases, the polarization does not change
sign and reaches $\sim$100\%. For particles of intermediate size, polarization
reverses sign repeatedly and a ripple--like structure is seen.
This behaviour of the curves $P(\Theta)$ in Fig.~\ref{fig3} reflects the general
principles of light scattering by small particles and does not principally
change for non--spherical bodies, like spheroids, cylinders, bispheres, or
fluffy aggregates arbitrarily aligned in space (Mishchenko et al.~\cite{mi96};
Kolokolova et al.~\cite{kol}; Lumme et al.~\cite{lumme}). Moreover, Lumme et al.~(\cite{lumme})
demonstrated that Mie theory of spheres reproduces the polarization diagrams for
complex particles rather well, except sometimes for forward and backward
scattering.
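The Rayleigh bell shape mentioned above has a closed form, $P(\Theta)=\sin^2\Theta/(1+\cos^2\Theta)$; a small sketch:

```python
import math

def rayleigh_polarization(theta_deg):
    """Degree of linear polarization for Rayleigh scattering,
    P = sin^2(theta) / (1 + cos^2(theta)), peaking at theta = 90 deg."""
    c = math.cos(math.radians(theta_deg))
    return (1.0 - c * c) / (1.0 + c * c)
```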
The polarization measurements in ${\beta}~\rm Pic$ were made at offsets from
$\varphi = 8\farcs5$ to $31\farcs2$ (see Fig.~\ref{fig1}).
According to Eq.~(\ref{eq1}), the scattering
angles vary in the limits from ($11{\degr}-169{\degr}$)
to ($44{\degr}-136{\degr}$). With these limits,
we conclude with the aid of Fig.~\ref{fig3}:
{\it the polarimetric observations of $\beta$ Pic cannot be explained by
light scattering of very small {\rm (}$< 0.1\,{\mu}\rm{m}${\rm )} or very large
{\rm (}$> 10\,{\mu}\rm{m}${\rm )} particles alone,
because such grains produce a polarization that is too high for this range
of scattering angles.}
\subsection{Step 2: polarization diagrams vs particle size}
As the second step of the modelling procedure we include averaging
along the line of sight (scattering angles) at fixed angular
distance $\varphi$. We adopt that the number density of dust
grains in the disc has a power--law distribution
\begin{eqnarray}
n_{\rm d}(r) =
\left\{ \begin{array}{ll}
0, & \mbox{$r < r_0$}, \\
n_0 \left(\frac{r}{r_0} \right)^{-s},
& \mbox {$ r_0 \leq r < r_{\rm out}.$}
\end{array}
\right.
\label{dens}
\end{eqnarray}
Here, $r_0$ is the radius of the central hole where the density is equal to
$n_0$ and $r_{\rm out}$ is the outer radius of the disc. The latter
is given by $r_{\rm out}=D \varphi_{\rm max} = 19.28 \cdot 45 \approx 870$\,AU,
assuming a distance $D = 19.28$\,pc (Crifo et al.~\cite{crifo}).
The central cavity reaches out to $\varphi \la\varphi_0 \approx 6\farcs0$
(Artymowicz et al.~\cite{art89}).\footnote{The adaptive optics observations
made by Mouillet et al.~(\cite{mou97}) show that scattered light is present
at angular distances down to $\varphi \approx 1\farcs5$.
Perhaps this is light from the outer disc scattered in almost forward
and backward directions. However, because the polarization observations
were made outside this radius, this does not affect the results obtained
in the single scattering case.}
With these numbers, $r_0 \approx 120$\,AU and the ratio of the inner
to outer radius equals
$\varphi_0/\varphi_{\rm max} = r_0/r_{\rm out} \approx 0.13$.
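The conversion behind these numbers is the small-angle relation $r\,[\mathrm{AU}] = D\,[\mathrm{pc}]\cdot\varphi\,['']$, since 1 AU subtends $1''$ at 1 pc; a quick check of the quoted values:

```python
D = 19.28        # distance to beta Pic in pc (Crifo et al.)
PHI_MAX = 45.0   # outer angular radius, arcsec
PHI_0 = 6.0      # angular radius of the central cavity, arcsec

# 1 AU subtends 1 arcsec at 1 pc, so r[AU] = D[pc] * phi[arcsec]
r_out = D * PHI_MAX      # 867.6 AU, quoted as ~870 AU
r_0 = D * PHI_0          # 115.7 AU, quoted as ~120 AU
ratio = PHI_0 / PHI_MAX  # 0.133, quoted as ~0.13

print(r_out, r_0, ratio)
```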
The polarization degree can be expressed as
\begin{equation}
P(m,a,\lambda,\varphi,s) \ = \ \frac{\langle I_1\rangle - \langle I_2\rangle} {\langle I_1\rangle + \langle I_2\rangle} \ ,
\end{equation}
\begin{equation}
\langle I_j\rangle \ = \ \frac{{\cal K}}{\varphi^{s+1}} \int_{\Theta_1}^{\pi-\Theta_1}
i_j(m,a,\lambda,\Theta) \sin^s\Theta \, {\rm d}\Theta \ ,
\end{equation}
where the intensities $\langle I_j\rangle$ $(j=1,2)$ depend on $m,a,\lambda,\varphi,s$, and
${\cal K}$ is a constant. If $\varphi < \varphi_0 = 6\farcs0$ and $\sin \Theta > \frac{1}
{0.13} \cdot \frac{\varphi}{\varphi_{\rm max}}$, the line of sight intersects
the central hole, and the corresponding part of the integration range is excluded.
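A toy numerical version of this line-of-sight average, with Rayleigh-like stand-ins $i_1=1$, $i_2=\cos^2\Theta$ in place of the Mie intensities (a simplification for illustration only, not the functions used in our models):

```python
import math

def avg_polarization(phi, phi_max=45.0, phi0=6.0, s=2, n=2000):
    """Line-of-sight average of P at offset phi (arcsec), with toy
    Rayleigh intensities i1 = 1, i2 = cos^2(Theta)."""
    th1 = math.asin(phi / phi_max)  # scattering angle at the disc's outer edge
    num = den = 0.0
    dth = (math.pi - 2.0 * th1) / n
    for k in range(n):
        th = th1 + (k + 0.5) * dth
        # skip the central cavity: r = D*phi/sin(Theta) < r0  <=>  sin(Theta) > phi/phi0
        if phi < phi0 and math.sin(th) > phi / phi0:
            continue
        w = math.sin(th) ** s * dth
        i1, i2 = 1.0, math.cos(th) ** 2
        num += (i1 - i2) * w
        den += (i1 + i2) * w
    return num / den

print(avg_polarization(10.5))  # ~0.6 for s = 2
```

For $\varphi=10\farcs5$ and $s=2$ this gives $P\approx 0.6$, illustrating the high polarization that strongly polarizing scatterers would produce over this angle range.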
Examples of polarization diagrams are shown in Fig.~\ref{fig4} at the offset $\varphi
=10\farcs5$ for different types of particles and various values of $s$. The
polarization degree observed in the R--band at this offset angle in the NE and SW
extensions is shown by horizontal dashed lines. From this figure we again
conclude that grains of intermediate size have to be included into
consideration to explain the observed polarization.
Interestingly, the polarization observed at a
given offset may be explained for any density distribution of particles, as
is evident from Fig.~\ref{fig4}a: the parameter $s$ determines
only the polarization level in the limits of small and large sizes.
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f4.eps}}
\caption[]{
({\bf a}) Upper panel: polarization vs particle size at
the angular distance $\varphi=10
\farcs5$ for various exponents, $s$, of the dust density distribution.
The grains are compact silicates with $m_R$ as indicated.
The horizontal dashed lines show the observed polarization $P_R$
in the NE and SW sides.
({\bf b}) Lower panel: as ({\bf a}), but now for a fixed
density distribution ($s=2$)
and particles of various refractive indices. }
\label{fig4}
\end{figure} %
The curves plotted in Fig.~\ref{fig4}b look even more similar
if we use the product $|m-1|a$ as the argument.
This behaviour is well
known for the extinction efficiency (e.g.~Greenberg~\cite{gre68}).
The corresponding polarization diagrams are shown in Fig.~\ref{fig5}.
Thus, considering light scattering in the continuum,
the effects of refractive index and particle size
cannot be separated. In other words, {\it we can only estimate $|m-1|a$, the
product of refractive index times particle size, from observations at one
wavelength}.
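This scaling can be understood, at least qualitatively, from the anomalous diffraction approximation (van de Hulst~\cite{vdh}): for moderately soft particles the optical properties depend on size and refractive index mainly through the phase-shift parameter
\[
\rho \ = \ 2x\,|m-1| \ = \ \frac{4\pi a\,|m-1|}{\lambda} \ ,
\]
so that, at fixed wavelength, curves for different $m$ collapse when plotted against $|m-1|a$.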
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f5.eps}}
\caption[]{
As Fig.~4b, but with polarization plotted vs the parameter $|m_R-1|a$.}
\label{fig5}
\end{figure} %
In more realistic models, we must also average over the size distribution
$n_{\rm d}(a)$. Afterwards the spikes in polarization seen
in Fig.~\ref{fig3} disappear. We use the power--law
\begin{equation}
n_{\rm d}(a) \sim a^{-q} \label{size_dis}
\end{equation}
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f6.eps}}
\caption[]{
Polarization vs $a_{\rm min}$ at the angular distance $\varphi=10\farcs5$. The
upper size limit equals $a_{\rm max} = 100\,\mu$m and the exponent of the size
distribution $q=3.5$.}
\label{fig6}
\end{figure} %
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f7.eps}}
\caption[]{
As Fig.~6, but with polarization vs the parameter $|m_R-1|a_{\rm min}$.}
\label{fig7}
\end{figure} %
with minimum and maximum radii $a_{\rm min}$ and $a_{\rm max}$, respectively.
Figures~\ref{fig6} and \ref{fig7} are analogous to Fig.~\ref{fig4}
and \ref{fig5}, but include an average over the
size distribution. The upper radius and the exponent are kept fixed
($a_{\rm max}=100\,\mu$m$, q=3.5$) while $a_{\rm min}$ is varied.
Changes of $a_{\rm min}$ have the strongest effect on the polarization,
whereas increasing or decreasing the maximum size has no influence as long as
$|m-1|a_{\rm max} \ga 10$. So if very large particles are present in the disc of $\beta$
Pic, they are not visible as scatterers in the R--band. Changes in $q$ do not
affect the overall picture and will be discussed in Section 3.6.
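In our notation, this size average amounts to replacing the Mie intensities $i_j$ in the line-of-sight integral by
\[
\bar{\imath}_j(m,\lambda,\Theta) \ \propto \ \int_{a_{\rm min}}^{a_{\rm max}} i_j(m,a,\lambda,\Theta)\, a^{-q}\, {\rm d}a \ ,
\]
with the normalisation constant absorbed into ${\cal K}$, so that the polarization becomes a function of $m$, $\lambda$, $\varphi$, $s$, $q$, $a_{\rm min}$, and $a_{\rm max}$.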
Figures~\ref{fig6} and \ref{fig7} contain plots for all refractive indices listed
in Table~\ref{tab1}.
The figures show that {\it in order to explain the observed polarization one
needs to keep $a_{\rm min}$ in the range $0.6\,{\mu}\rm{m} \la$ $a_{\rm min}$
$\la 10\,{\mu}\rm{m}$}. However, very porous grains with small values of $a_{\rm min}$
may, in principle, be used to model the observations as well.
\subsection{Step 3: polarization vs angular distance from star}
In the next step, we model the angular distribution of polarization using the
minimum cut--off in the size distribution as estimated from Fig.~\ref{fig6}. The curves
in Fig.~\ref{fig8} demonstrate that the same observations may be satisfactorily fit with
particles of different composition by changing only the parameters $a_{\rm min}$
and $s$.\footnote{Actually, the choice of the parameter $s$ is determined by the
radial brightness distribution of the scattered light, $I(\varphi)$.
For the outer parts of the disc of ${\beta}~\rm Pic$ ($\varphi > 6\farcs0$), $I(\varphi)
\propto \varphi^{- \gamma}$ with $\gamma=3.6-4.3$ (Kalas \& Jewitt~\cite{k:j95}).
The values of $\gamma$ for the models considered are given in Table~\ref{tab2}.}
Note that the variations of $s$ and $a_{\rm min}$ influence the polarization
mainly at small and large offset angles. (We adopted $a_{\rm max}=100\,{\mu}\rm{m}$
and $q=3.5$.) Therefore, {\it polarimetric observations of extended
sources (nebulae, circumstellar shells, galaxies) at one wavelength do
not allow one to determine the chemical composition of the particles}. Numerous other
sets of refractive indices and size parameters may also explain the
observations. As an example, let us consider the model of Scarrott et
al.~(\cite{sca92}). They reproduced the angular dependence of the scattered light and
polarization in the R--band observed by Gledhill et al.~(\cite{gledh}) with silicate
particles of $m_R=1.65-0.05i$ and $a_{\rm min}=0.01\,{\mu}\rm{m}$, $a_{\rm max}=3\,{\mu}\rm{m}$,
$q=4.0$, $s=2.75$. However, the disc colours in their model are too red
compared to those observed (from
$-0\fm72$ in (B--V)$_{\star - {\rm disc}}$ to $-1\fm24$ in
(V--I)$_{\star - {\rm disc}}$).
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f8.eps}}
\caption[]{
Polarization in the disc of ${\beta}~\rm Pic$ as a function of the angular offset for
the R--band. Solid lines present models with different refractive indices
(see Table~\ref{tab1}). In all calculations, $a_{\rm max}=100\,{\mu}\rm{m}$
and $q=3.5$. The other model parameters ($a_{\rm min}$ and $s$) are
shown in the panels.}
\label{fig8}
\end{figure} %
\subsection{Step 4: polarization vs wavelength}
With the parameters found from modelling of the polarization in the R--band
(see Fig.~\ref{fig8}), we calculated the wavelength dependence of
polarization at the offset distance $\varphi=10\farcs5$. We also checked
the colour excesses given by the
corresponding model (see Table~\ref{tab2}) and compared
them with observations.
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f9.eps}}
\caption[]{
The wavelength dependence of polarization in the disc of ${\beta}~\rm Pic$ at the angular
distance $\varphi=10\farcs5$. The observations in the SW and NE sides and their
errors are plotted by dashed lines. The parameters of the model are the same as
in Fig.~8.}
\label{fig9}
\end{figure} %
The results are shown in Fig.~\ref{fig9} and Table~\ref{tab2}. It is clear
that despite our thorough procedure and the successful fits of
the changes in polarization and
brightness in the R--band with angle $\varphi$ (see Fig.~\ref{fig8}
and Table~\ref{tab2}), the
dependence $P(\lambda)$ cannot be well explained for any refractive index.
From Table~\ref{tab2} it also follows that the colour index V--I is crucial
for the choice of the model. The colours in the models~1~--~4 are too red
and these models must be rejected. In principle, the correct
polarization behaviour (i.e.~its growth
with wavelength) can be reproduced for all refractive indices of
Table~\ref{tab1} if the minimum size $a_{\rm min}$ is lowered. But even
if we were to fit the dependence $P(\lambda)$, very small particles would
yield unacceptably blue colours.
\begin{table*}
\caption[]{
Observational and theoretical colour excesses in the disc of ${\beta}~\rm Pic$ at
$\varphi=10\farcs5$. }
\begin{flushleft}
\begin{tabular}{lllllll}
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Colour excess & ~Observations & Model 1 & Model 2 & Model 3 & Model 4 & Model 5 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
(B--V)$_{\star - {\rm disc}}$ & $-0\fm21 \pm 0\fm20$ &$-0\fm217$&$-0\fm213$& $-0\fm221$& $-0\fm228$& $\phantom{-}0\fm034$\\
(V--R)$_{\star - {\rm disc}}$ & $-0\fm17 \pm 0\fm20$ &$-0\fm200$&$-0\fm226$& $-0\fm228$& $-0\fm252$& $\phantom{-}0\fm061$\\
(V--I)$_{\star - {\rm disc}}$ & $\phantom{-}0\fm01 \pm 0\fm20$ &$-0\fm313$&$-0\fm344$& $-0\fm354$& $-0\fm400$& $\phantom{-}0\fm105$\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
Model 1: $m_R=1.715-0.030i$, $a_{\rm min}=0.7\,{\mu}\rm{m}$, $s=2$
($\gamma=3.4$, $\Lambda=0.67$, $g=0.80$). \\
Model 2: $m_R=1.508-0.020i$, $a_{\rm min}=1.2\,{\mu}\rm{m}$, $s=2$
($\gamma=3.4$, $\Lambda=0.66$, $g=0.86$). \\
Model 3: $m_R=1.310-0.010i$, $a_{\rm min}=2.4\,{\mu}\rm{m}$, $s=3$
($\gamma=4.2$, $\Lambda=0.67$, $g=0.91$). \\
Model 4: $m_R=1.152-0.005i$, $a_{\rm min}=5.5\,{\mu}\rm{m}$, $s=3$
($\gamma=4.4$, $\Lambda=0.67$, $g=0.96$). \\
Model 5: $m_R=1.152-0.005i$, $a_{\rm min}=0.2\,{\mu}\rm{m}$, $s=3$
($\gamma=4.4$, $\Lambda=0.89$, $g=0.82$). \\
All models have $a_{\rm max} = 100\,\mu$m and $q=3.5$.\\
$\gamma$ -- exponent in the brightness distribution,
$\Lambda$ -- single scattering albedo, $g$ -- asymmetry factor.
\end{flushleft}
\label{tab2}
\end{table*}
{\it So in order to explain all colour and polarization observations we need to
reduce the lower limit of the size distribution, $a_{\rm min}$, and the
refractive index} $m$.
\subsection{Step 5: polarization and colours: whole picture}
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f10.eps}}
\caption[]{
The polarization observed in the disc of ${\beta}~\rm Pic$.
The solid lines correspond to the
theoretical models for which $m_R = 1.152 - 0.005i$, $a_{\rm min}=0.15\,{\mu}\rm{m}$,
$a_{\rm max}=100\,{\mu}\rm{m}$, $q=3.2$ and $s=3$.}
\label{fig10}
\end{figure} %
Finally, we suggest a fit to the angular dependence of the polarization
(in all wavebands) and the colour excesses (see Figs.~\ref{fig10} and \ref{fig11}).
In order to improve the fits, we varied the parameters $a_{\rm min}$ and $q$
around the values given in Table~\ref{tab2} for model 5.
Unfortunately, a problem remains for the
position with the smallest offset, $\varphi = 2\farcs5$. The colour
index B--V can be modelled only if the grains have a very narrow size
distribution around $a \sim 15\,{\mu}\rm{m}$, but with such grains the
polarization at the observed offsets becomes too large (see Fig.~\ref{fig4}).
\begin{figure} %
\resizebox{8.8cm}{!}{\includegraphics{8825f11.eps}}
\caption[]{
The colour excesses observed in the disc of ${\beta}~\rm Pic$.
The theoretical model has the
same parameters as in Fig.~10.}
\label{fig11}
\end{figure} %
\section{Concluding remarks}
All models presented here were constructed with a small number of parameters
which are rather well determined. Our favourite model presented
in Figs.~\ref{fig10}, \ref{fig11} has parameters:
$m_R = 1.152 - 0.005i$,
$a_{\rm min}=0.15\,{\mu}\rm{m}$,
$a_{\rm max}=100\,{\mu}\rm{m}$,
$q=3.2$,
$s=3$. \\
The model fits reasonably well the observed angular dependence of
polarization (in four wavebands), three colour excesses and
the radial brightness distribution. The
calculated values of the exponent in the brightness distribution, single
scattering albedo and asymmetry factor in the R--band are:
$\gamma$ = 4.4,
$\Lambda$ = 0.88,
$g$ = 0.79. \\
Still better fits could be obtained assuming different dust properties
on either side of the disc. Previous estimates of the grain albedo
in the disc of ${\beta}~\rm Pic$ based on visible/IR models gave
$\Lambda = 0.5 \pm 0.2$ (Artymowicz~\cite{art97}), which is smaller
than our value for very porous grains.
But Artymowicz et al.~(\cite{art89}) also derived $\Lambda\simeq 0.9$.
As usual, the albedo $\Lambda$ of the grains was estimated
independently of the scattering phase function. Evidently, the latter
cannot be isotropic in the visual because so far all suggested grain
sizes imply forward scattering. The value of
our model ($g \approx 0.8$) is larger than previous ones. Using the
Heyney--Greenstein phase function, Kalas \& Jewitt (\cite{k:j95})
found that the brightness distribution observed in the disc of ${\beta}~\rm Pic$
in the R--band may be reproduced with $0.3 \leq |g| \leq 0.5$.
Note that mirror particles (with
$g<0$) used by Kalas \& Jewitt~(\cite{k:j95}, \cite{k:j96}) are astronomical
nonsense because no grain material (with the exception of very small
iron particles) gives a negative asymmetry parameter at visual
wavelengths. Even pure conductors
($m= \infty $) reach only $g \approx -0.2$ (van de Hulst~\cite{vdh}).
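For reference, the Henyey--Greenstein phase function referred to above is constructed so that its first moment over $\cos\Theta$ reproduces the asymmetry parameter $g$; a minimal numerical check (an illustration, not part of the modelling itself):

```python
def hg(mu, g):
    """Henyey-Greenstein phase function, normalised over mu = cos(Theta) in [-1, 1]."""
    return 0.5 * (1.0 - g * g) * (1.0 + g * g - 2.0 * g * mu) ** -1.5

def mean_cos(g, n=100000):
    """First moment <cos Theta> of the phase function; should recover g."""
    s = norm = 0.0
    dmu = 2.0 / n
    for k in range(n):
        mu = -1.0 + (k + 0.5) * dmu
        w = hg(mu, g) * dmu
        s += mu * w
        norm += w
    return s / norm

print(mean_cos(0.79))   # recovers g = 0.79 (our favourite model)
print(mean_cos(-0.2))   # recovers g = -0.2 ("mirror" particles)
```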
In our favourite model (see Figs.~\ref{fig10}, \ref{fig11}) we only specify
the refractive index (see Table~\ref{tab1}).
It was chosen arbitrarily and is not motivated from a physical point of view.
On the other hand, porous particles may easily be retained in the disc because
the radiation pressure acting on them is smaller:
we found that for silicate grains of radii $a \ga 2.0 \,{\mu}\rm{m}$
with 76\% porosity (Table~\ref{tab1}) the radiation pressure
force is smaller than the gravitational force.
Very porous aggregates were adopted by Li \& Greenberg~(\cite{li:gre98})
to explain the IR observations. Mie theory cannot, of course, be
applied in such cases. Although Li \& Greenberg used a standard mixing rule
to treat the fluffy aggregates, the underlying theory has been developed
neither for particles with sizes larger than the wavelength nor for computing
the scattering of light.
We modelled scattering and polarization in the disc of ${\beta}~\rm Pic$ at all wavelengths
where it has been observed. Because the properties of the scattered light are
not peculiar, we could compute particle cross sections from Mie theory. Our
models exclude the possibility that the grains are compact spheres, or that they
are all very small or all very large grains. Instead, the particles
must be rather porous (degree of porosity $\ga 50\,\%$) and have
a lower cut--off in the size
distribution as small as $a_{\rm min} \sim 0.2 \, {\mu}\rm{m}$.
This limit is larger than the mean size of interstellar grains.
Our model can be further verified by computing the resulting IR fluxes
and checking how well they agree with observations.
\begin{acknowledgements}
The authors are thankful to Vladimir Il'in, Nicolas Mauron and
an anonymous referee for helpful comments. NVV wishes to
thank for hospitality Max--Planck--Institut f\"ur Radioastronomie where this work
was begun. The work was partly supported by grants of
the program ``Astronomy'' of the government of the Russian Federation,
the program ``Universities of Russia -- Fundamental Researches''
(grant N~2154) and the Volkswagen Foundation.
\end{acknowledgements}
\section{Introduction} \label{sec:intro}
The study of the $\Lambda(1405)$ resonance has been a hot topic in hadron physics, ever since it was predicted and observed more than 50 yr ago~\cite{Dalitz:1959dn,Dalitz:1960du,Alston:1961zzd}. There have been plenty of discussions to understand its mass, width, and the internal structure.
From a modern perspective, the $\Lambda(1405)$ is studied by the theoretical framework which combines the coupled-channel unitarity with chiral perturbation theory~\cite{Kaiser:1995eg,Oset:1997it,Oller:2000fj,Lutz:2001yb,Hyodo:2011ur}. A remarkable finding in this approach is the appearance of two poles in the complex energy plane near the $\Lambda(1405)$ resonance~\cite{Oller:2000fj,Jido:2003cb,Hyodo:2007jq}. This picture is confirmed by the systematic studies~\cite{Ikeda:2011pi,Ikeda:2012au,Guo:2012vv,Mai:2014xna} using the next-to-leading-order chiral interactions with the new experimental constraint by the SIDDHARTA Collaboration~\cite{Bazzi:2011zj,Bazzi:2012eq}, as well as by the comprehensive analysis of the $\pi\Sigma$ spectra in photoproduction~\cite{Roca:2013av,Roca:2013cca}. The origin of two poles is attributed to the two attractive components in the leading-order chiral interaction (Weinberg-Tomozawa term), the singlet and octet channels in the SU(3) basis~\cite{Jido:2003cb}, or the $\bar{K}N$ and $\pi\Sigma$ channels in the isospin basis~\cite{Hyodo:2007jq}. It is worth mentioning that there is another attraction in the strangeness $S=-1$ and the isospin $I=0$ sector, the SU(3) octet, or the $K\Xi$ channel. In fact, the $\Lambda(1670)$ resonance can also be generated in the same approach with a large coupling to the $K\Xi$ channel~\cite{Oset:2001cn}. The internal structure of the $\Lambda(1405)$ also draws much attention. The dominance of the $\bar{K}N$ molecular component in the $\Lambda(1405)$ has been shown from various aspects~\cite{Hyodo:2007np,Roca:2008kr,Hyodo:2008xr,Sekihara:2008qk,Sekihara:2010uz,Sekihara:2012xp,Miyahara:2015bya}. A recent lattice QCD study also supports the $\bar{K}N$ molecular structure of the $\Lambda(1405)$~\cite{Hall:2014uca}.
Because the resonance peak lies below the $\bar{K}N$ threshold, the $\Lambda(1405)$ decays exclusively into the $\pi\Sigma$ channel. Recently, detailed experimental studies of the $\Lambda(1405)$ in the $\pi\Sigma$ spectra have been performed in the low-energy production reactions, such as the photoproduction by the LEPS Collaboration~\cite{Niiyama:2008rt} and by the CLAS Collaboration~\cite{Moriya:2013eb,Moriya:2013hwg,Moriya:2014kpv}, and the proton-induced reaction by the HADES Collaboration~\cite{Agakishiev:2012xk}. In these studies, because the signal of the $\Lambda(1405)$ overlaps with the $\Sigma(1385)$ resonance, careful event selection is needed to eliminate the $\Sigma(1385)$ contribution. In addition, the isospin interference effect causes the difference of the $\pi^{+}\Sigma^{-}$ and $\pi^{-}\Sigma^{+}$ spectra~\cite{Nacher:1998mi},
which is, in fact, confirmed in the experimental data. This means that the charged $\pi\Sigma$ spectrum, whose detection is, in general, easier than the neutral one, is not in pure $I=0$. For the understanding of the property of the $\Lambda(1405)$, a detailed analysis is required to extract the neutral $\pi^{0}\Sigma^{0}$ channel, as done by the CLAS Collaboration.
Meanwhile, it has been shown in recent studies that the selection of the particular modes is possible in the nonleptonic weak decays of heavy hadrons~\cite{Chau:1982da,Chau:1987tk,Cheng:2010vk,Stone:2013eaa,Liang:2014tia,Bayar:2014qha,Xie:2014tma,Xie:2014gla,Liang:2014ama,Albaladejo:2015kea,Feijoo:2015cca}. Among others, of particular importance is the study of the $\Lambda_{b}\to J/\psi \Lambda(1405)$ decay, where the $\Lambda(1405)$ is generated in the final-state interaction of the meson-baryon pair~\cite{Roca:2015tea,Feijoo:2015cca}. It is found that the weak decay process near the $\Lambda(1405)$ production is dominated by the $I=0$ meson-baryon pair. The $\Lambda_{b}\to J/\psi \Lambda(1405)$ process therefore provides a clear meson-baryon final-state interaction with almost no contamination from the $\Sigma(1385)$ and the $I=1$ amplitude. This is different from the low-energy production experiments. Very recently, the $\Lambda_{b}\to J/\psi K^{-}p$ decay has been experimentally studied by the LHCb Collaboration~\cite{Aaij:2015tga} (see also Ref.~\cite{Roca:2015dva}) where the significant contribution from the $\Lambda^{*}$ resonances is observed in the $ K^{-}p$ spectrum.
In this paper, we propose to analyze the weak decay of the charmed baryon $\Lambda_{c}$ into $\pi^{+}MB$, where $MB$ stands for $\bar{K}N$, $\pi\Sigma$, or $\eta\Lambda$. This process has been discussed to extract the $\pi\Sigma$ scattering length from the cusp effect~\cite{Hyodo:2011js}. As we show below, the $\Lambda_{c}$ weak decay is also dominated by the $I=0$ $MB$ pair in the final state. In general, the production rate of $\Lambda_{c}$ should be much larger than that of $\Lambda_{b}$. In fact, the branching fractions of the $\Lambda_{c}\to \pi^{+}MB$ modes is experimentally well studied~\cite{Agashe:2014kda}. Furthermore, the decay of $\Lambda_c$ is more favored than the $\Lambda_b$ decay, as explained later.
It is also remarkable that the Belle Collaboration recently determined the absolute value of the branching fraction of the $\Lambda_{c}\to \pi^{+}K^{-}p$ mode~\cite{Zupanc:2013iki}. Moreover, the $K^{-}p$ spectrum in the $\pi^{+}K^{-}p$ decay has already been measured experimentally~\cite{Aitala:1999uq}. These facts indicate that the $\Lambda_{c}\to\pi^{+}MB$ process is an ideal channel by which to study the $\Lambda(1405)$. The dominance of the $I=0$ contribution is also advantageous for the study of the $\Lambda(1670)$ to suppress the unwanted $\Sigma^{*}$ contributions.
This paper is organized as follows. In Sec.~\ref{sec:formulation}, we present the theoretical formulation of the decay process $\Lambda_{c}\to\pi^{+}MB$. The formula for the invariant mass distribution will be given. The numerical results of the spectra are shown in Sec.~\ref{sec:result}, focusing on the $\Lambda(1405)$ and the $\Lambda(1670)$ separately. A discussion on the decay branching fractions of $\Lambda_{c}$ is also presented. A summary of this work is given in the last section.
\section{Formulation} \label{sec:formulation}
We consider the decay process $\Lambda_c^+\to\pi^+\Lambda^*\to\pi^+MB$, where $MB$ stands for the final meson-baryon states such as $\pi\Sigma$ and $\bar{K}N$. We show that, when the $MB$ invariant mass is restricted in the $\Lambda(1405)$ region, the dominant contribution of this decay process is given by the diagram shown in Fig.~\ref{fig:Lambdac_decay}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 642 379]{figure/Fig_Lambdac_decay.pdf}
\caption{The dominant diagram for the $\Lambda_c^+\to \pi^+MB$ decay. The solid lines and the wiggly line show the quarks and the $W$ boson, respectively. }
\label{fig:Lambdac_decay}
\end{center}
\end{figure}
First, the charm quark in $\Lambda_c^+$ turns into the strange quark with the $\pi^+$ emission by the weak decay. Second, the $\bar{q}q$ creation occurs to form $M$ ($B$) from the $s$ quark ($ud$ diquark). Finally, considering the final-state interactions of the hadrons, we obtain the invariant mass distribution for the $\Lambda_c^+\to\pi^+MB$ process. In the following, we discuss these three steps separately.
\subsection{Weak decay}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 1066 228]{figure/Fig_other_decay.pdf}
\caption{The weak decay part of diagrams for the $\Lambda_c^+\to \pi^+MB$ decay except for Fig.~\ref{fig:Lambdac_decay}. The solid lines and the wiggly line show the quarks and the $W$ boson, respectively.}
\label{fig:other_decay}
\end{center}
\end{figure}
We first discuss the decay of $\Lambda_{c}$ to produce $\pi^{+}$ and the $sud$ cluster in the final state. The Cabibbo favored weak processes are given by
\begin{align}
&c\to s + u+\bar{d} \quad \text{: $c$ decay}\label{eq:cdecay} ,\\
&c + d \to s + u \quad \text{: $cd$ scattering}\label{eq:cdscatt} .
\end{align}
The diagram shown in Fig.~\ref{fig:Lambdac_decay} is obtained by the $c$ decay process. Another contribution with the $c$ decay is to form $\pi^{+}$ using the $u$ quark in $\Lambda_{c}$ [Fig.~\ref{fig:other_decay}(a)]. In this process, however, the color of the $u\bar{d}$ pair from the $W^{+}$ decay is constrained to form the color singlet $\pi^{+}$. This process is therefore suppressed by the color factor in comparison with Fig.~\ref{fig:Lambdac_decay}. In addition, because the $ud$ diquark in $\Lambda_{c}$ is the most attractive ``good'' diquark~\cite{Jaffe:2004ph}, the process that destroys the strong diquark correlation [Fig.~\ref{fig:other_decay}(a)] is not favored. The contribution from the $cd$ scattering [Eq.~\eqref{eq:cdscatt}, Figs.~\ref{fig:other_decay}(b) and \ref{fig:other_decay}(c)] is suppressed by the additional creation of a $\bar{q}q$ pair not attached to the $W$ boson, as well as by the $1/N_{c}$ suppression, compared with Fig.~\ref{fig:Lambdac_decay}. Diagrams~\ref{fig:other_decay}(b) and \ref{fig:other_decay}(c) are called ``absorption diagrams'' in the classification of Ref.~\cite{Chau:1982da}; they are two-body processes involving two quarks of the original $\Lambda_c$, and are therefore suppressed compared with the one-body process (Fig.~\ref{fig:Lambdac_decay}), in which only the $c$ quark participates while the $u$, $d$ quarks act as
spectators. A discussion of this suppression is given in Ref.~\cite{Xie:2014tma}.
As discussed in Ref.~\cite{Hyodo:2011js}, the kinematical condition also favors the diagram shown in Fig.~\ref{fig:Lambdac_decay}. When the $\Lambda_{c}$ decays into the $\pi^{+}$ and $MB$ system with the invariant mass of 1400 MeV, the three-momentum of the final state is $\sim 700$ MeV in the rest frame of $\Lambda_{c}$. Thus, the $\pi^{+}$ should be emitted with a large momentum. It is kinematically favored to create the fast pion from the quarks involved by the weak process, because of the large mass of the $c$ quark. The diagrams Fig.~\ref{fig:other_decay}(a) and \ref{fig:other_decay}(c) are suppressed because one of the spectator quarks is required to participate in the $\pi^{+}$ formation.
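The $\sim 700$ MeV figure follows from two-body kinematics; a minimal check (inputs in MeV are our assumed values: $M_{\Lambda_c}\approx 2286.5$, $m_{\pi}\approx 139.6$, and an $MB$ invariant mass of 1400):

```python
import math

def two_body_momentum(M, m1, m2):
    """Final-state three-momentum in the rest frame of a particle of mass M
    decaying into m1 + m2 (Kallen triangle function)."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

p = two_body_momentum(2286.5, 139.6, 1400.0)
print(p)  # ~705 MeV
```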
In this way, the diagram in Fig.~\ref{fig:Lambdac_decay} is favored from the viewpoint of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, color suppression, the diquark correlation, and the kinematical condition. We note that this diagram has a bigger strength than the dominant one of the $\Lambda_{b}\to J/\psi \Lambda(1405)$ decay \cite{Roca:2015tea}, in which the weak process contains the necessary Cabibbo suppressed $b\to c$ transition and proceeds via internal emission \cite{Chau:1982da}, where the color of every quark in the weak process is fixed.
In this process, because the $ud$ diquark in $\Lambda_{c}$ is the spectator, the $sud$ cluster in the final state is combined as
\begin{align}
\frac{1}{\sqrt{2}}|s(ud-du)\rangle . \notag
\end{align}
This combination is a pure $I=0$ state. Because the $\bar{q}q$ creation does not change the isospin, we conclude that the dominant contribution for the $\Lambda_{c}\to \pi^{+}MB$ process produces the $MB$ pair in $I=0$. We note that the unfavored diagrams that we neglect can produce the $sud$ state with $I=1$. We revisit the $I=1$ contribution at the end of Sec.~\ref{sec:result}.
\subsection{$\bar{q}q$ creation}
To create the $MB$ final state, we must proceed to hadronize the $sud$ state, creating an extra $\bar{q}q$ pair. Because the total spin parity of the $MB$ pair is $J^P=1/2^-$, the $s$ quark should be in $L=1$ after the $c$-quark decay, together with the spectator $ud$ diquark. To achieve the final $MB$ state where all quarks are in the ground state, the $\bar{q}q$ creation must involve the $s$ quark to deexcite into $L=0$. Then the hadronization proceeds as depicted in Fig.~\ref{fig:Lambdac_decay}, where the $s$ quark goes into the meson $M$ and the $ud$ diquark is used to form the baryon $B$. Another possibility, the formation of the baryon from the $s$ quark, is not favored because of the correlation of the good $ud$ diquark and the suppression discussed above by forcing a spectator quark from the $\Lambda_c$ to form the emerging meson.
To evaluate the relative fractions of the $MB$ state, we follow the same procedure as in Ref.~\cite{Roca:2015tea}. The flavor wavefunction of the final meson-baryon state $|MB\rangle$ is given by
\begin{align}
|MB\rangle &= \frac{1}{\sqrt{2}}|s(\bar{u}u+\bar{d}d+\bar{s}s)(ud-du)\rangle \notag \\
&= \frac{1}{\sqrt{2}}\sum_{i=1}^3|P_{3i}q_i(ud-du)\rangle, \notag
\end{align}
where $q$ and $P$ represent the quark field and the $q\bar{q}$ pair,
\begin{align}
q&\equiv
\left( \begin{array}{c}
u \\ d \\ s
\end{array} \right), \notag \\
P&\equiv q\bar{q}=
\left( \begin{array}{ccc}
u\bar{u} & u\bar{d} & u\bar{s} \\
d\bar{u} & d\bar{d} & d\bar{s} \\
s\bar{u} & s\bar{d} & s\bar{s}
\end{array} \right). \notag
\end{align}
Using the mesonic degrees of freedom, $P$ can be written as
\begin{align}
P=\left( \begin{array}{ccc}
\frac{\pi^0}{\sqrt{2}}+\frac{\eta}{\sqrt{3}}+\frac{\eta^\prime}{\sqrt{6}} & \pi^+ & K^+ \\
\pi^- & -\frac{\pi^0}{\sqrt{2}}+\frac{\eta}{\sqrt{3}}+\frac{\eta^\prime}{\sqrt{6}} & K^0 \\
K^- & \bar{K}^0 & -\frac{\eta}{\sqrt{3}}+\frac{2\eta^\prime}{\sqrt{6}}
\end{array} \right), \notag
\end{align}
where we assume the ordinary mixing between the SU(3) singlet and octet states for $\eta$ and $\eta^\prime$~\cite{Bramon:1992kr}. Considering the overlap with the mixed antisymmetric flavor states of the baryons, we relate the three quark states with baryons as
\begin{align}
|p\rangle &=\frac{1}{\sqrt{2}}|u(ud-du)\rangle, \notag \\
|n\rangle &=\frac{1}{\sqrt{2}}|d(ud-du)\rangle, \notag \\
|\Lambda\rangle &=\frac{1}{\sqrt{12}}|(usd-dsu)+(dus-uds)+2(sud-sdu)\rangle. \notag
\end{align}
Using these hadronic representations, we obtain the meson-baryon states after the $\bar{q}q$ pair production as
\begin{align}
|MB\rangle &= |K^-p\rangle + |\bar{K}^0n\rangle -\frac{\sqrt{2}}{3}|\eta\Lambda\rangle. \label{eq:hadronstate}
\end{align}
Here we neglect the irrelevant $\eta^{\prime}\Lambda$ channel because its threshold is above 2 GeV. We can see that the $\bar{K}N$ part is the isospin $I=0$ combination in the phase convention that we use, where $|K^-\rangle = -|I=1/2,I_z=-1/2\rangle$.
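Equation~\eqref{eq:hadronstate} can be cross-checked by directly overlapping the hadronized state $(1/\sqrt{2})\sum_i P_{3i}q_i(ud-du)$ with the baryon wavefunctions above; a small sketch (the strings encode quark ordering):

```python
import math

# Flavor wavefunctions as {3-quark ordering string: amplitude}
p   = {"uud": 1 / math.sqrt(2), "udu": -1 / math.sqrt(2)}
n   = {"dud": 1 / math.sqrt(2), "ddu": -1 / math.sqrt(2)}
lam = {k: v / math.sqrt(12) for k, v in
       {"usd": 1, "dsu": -1, "dus": 1, "uds": -1, "sud": 2, "sdu": -2}.items()}

def overlap(bra, ket):
    return sum(amp * ket.get(state, 0.0) for state, amp in bra.items())

# |MB> = (1/sqrt2) sum_i P_{3i} q_i (ud-du): the s quark forms the meson,
# the created quark q_i joins the spectator (ud-du) diquark.
pref = 1 / math.sqrt(2)
bary_Kminus = {"uud": 1, "udu": -1}  # q_1 = u with (ud-du); meson K^-
bary_K0bar  = {"dud": 1, "ddu": -1}  # q_2 = d; meson Kbar^0
bary_s      = {"sud": 1, "sdu": -1}  # q_3 = s; eta enters P_33 with -1/sqrt(3)

c_Kminus_p = pref * overlap(p, bary_Kminus)
c_K0bar_n  = pref * overlap(n, bary_K0bar)
c_eta_Lam  = pref * (-1 / math.sqrt(3)) * overlap(lam, bary_s)

print(c_Kminus_p, c_K0bar_n, c_eta_Lam)  # 1, 1, -sqrt(2)/3
```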
Aside from the decay to $\pi^+$ and the $sud$ cluster, there are some candidates for the $\Lambda_c\to \pi^+MB$ decay process, as shown in Fig.~\ref{fig:other_decay2}. In the diagrams of Figs.~\ref{fig:other_decay2}(c) and \ref{fig:other_decay2}(d), a quark of the $u\bar{d}$ pair from the weak decay must couple to the quarks of the $sud$ cluster. Therefore, these diagrams are suppressed by the color factor.
From the diagrams Figs.~\ref{fig:other_decay2}(a) and \ref{fig:other_decay2}(b), only the $\eta\Lambda$ state appears as the final $MB$ pair. This is because the baryon is fixed as $B=\Lambda$, and the flavor wavefunction of the $M\pi^+$ state is obtained as follows:
\begin{align}
|M\pi^+\rangle = |u(\bar{u}u+\bar{d}d)\bar{d}\rangle\propto |\eta\pi^++\eta^\prime\pi^+\rangle. \notag
\end{align}
The $\eta\Lambda$ state does not spoil the $I=0$ filter discussed in the previous section. In the numerical calculation, we find that the quantitative results do not strongly depend on the strength of the $\eta\Lambda$ fraction [this would change the coefficient of $\eta \Lambda$ in Eq.~\eqref{eq:hadronstate}, which does not play a very important role]. Hence, we do not consider the effect of these diagrams.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 744 626]{figure/Fig_other_decay2.pdf}
\caption{Other diagrams for the $\Lambda_c^+\to \pi^+MB$ decay, which do not produce $\pi^+$ and $sud$ cluster. The solid lines and the wiggly line show the quarks and the $W$ boson, respectively.}
\label{fig:other_decay2}
\end{center}
\end{figure}
\subsection{Final-state interaction}
Here we derive the decay amplitude $\mathscr{M}$, taking the final-state interaction of the $MB$ pair into account. As shown in Fig.~\ref{fig:FSI}, the final-state interaction consists of the tree part and the rescattering part.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 911 169]{figure/Fig_FSI.pdf}
\caption{The diagram for the meson-baryon final-state interaction (solid circle) as the sum of the tree part (dot) and the rescattering part with the meson-baryon scattering amplitude (open circle).
\label{fig:FSI}
\end{center}
\end{figure}
The rescattering of the $MB$ pair is described by the chiral unitary approach~\cite{Kaiser:1995eg,Oset:1997it,Oller:2000fj,Lutz:2001yb,Hyodo:2011ur}, which is based on the chiral Lagrangians and is constructed to treat the nonperturbative phenomena. Though only the $K^-p,\ \bar{K^0}n,\ \eta\Lambda$ states appear in Eq.~\eqref{eq:hadronstate} in the tree-level production, the coupled-channel scattering leads to the other $MB$ states, $\pi^0\Sigma^0$, $\pi^-\Sigma^+$, $\pi^+\Sigma^-$, $\pi^{0}\Lambda$, $K^-p$, $\bar{K^0}n$, $\eta\Lambda$, $\eta\Sigma^0$, $K^+\Xi^-$, $K^0\Xi^0$.\footnote{The $\pi^{0}\Lambda$ and $\eta\Sigma^0$ channels are accessible only through the isospin breaking processes.} The decay amplitude for the $\Lambda_{c}\to \pi^{+}(MB)_{j}$ with the meson-baryon channel $j$ can then be written as
\begin{align}
\mathscr{M}_j =V_P &\left( h_j + \sum_i h_i G_i(M_{\rm inv})t_{ij}(M_{\rm inv}) \right), \label{eq:amplitude} \\
h_{\pi^0\Sigma^0}&=h_{\pi^-\Sigma^+}=h_{\pi^+\Sigma^-}=h_{\pi^0\Lambda}=0, \notag \\
h_{K^-p}&=h_{\bar{K^0}n}=1, \notag \\
h_{\eta\Lambda}&=-\frac{\sqrt{2}}{3}, \notag \\
h_{\eta\Sigma^0}&=h_{K^+\Xi^-}=h_{K^0\Xi^0}=0, \notag
\end{align}
where $M_{\rm inv}$ represents the invariant mass of the meson-baryon states. The weak decay and the $\bar{q}q$ pair creation are represented by the factor $V_P$, which is assumed to be independent of the invariant mass $M_{\rm inv}$ in the limited range of invariant masses that we consider. The coefficients $h_j$ are derived from Eq.~\eqref{eq:hadronstate}. $G_j$ and $t_{ij}$ are respectively the meson-baryon loop function and scattering matrix calculated from the chiral unitary approach. Explicit forms can be found in Refs.~\cite{Kaiser:1995eg,Oset:1997it,Oller:2000fj,Lutz:2001yb,Hyodo:2011ur}. In Eq.~\eqref{eq:amplitude}, the first term $h_j$ stands for the primary production (tree-level diagram) and the second term accounts for the rescattering of the $MB$ states into the final-state channel $j$. It is also instructive for practical calculations to show the amplitude in the isospin basis. If we assume isospin symmetry, the amplitudes of the decay to the $\pi\Sigma$ and $\bar{K}N$ channels are written, respectively, as
\begin{align}
\mathscr{M}_{\pi^0\Sigma^0}&=\mathscr{M}_{\pi^-\Sigma^+}=\mathscr{M}_{\pi^+\Sigma^-} \notag \\
=&V_P\left(-\sqrt{\frac{2}{3}} G_{\bar{K}N}t^{I=0}_{\bar{K}N,\pi\Sigma}+\frac{\sqrt{2}}{3\sqrt{3}}G_{\eta\Lambda}t^{I=0}_{\eta\Lambda,\pi\Sigma} \right), \label{eq:amplitude_isobasis_piSig} \\
\mathscr{M}_{K^-p}&=\mathscr{M}_{\bar{K^0}n} \notag \\
=&V_P\left(1+G_{\bar{K}N}t^{I=0}_{\bar{K}N,\bar{K}N}-\frac{1}{3}G_{\eta\Lambda}t^{I=0}_{\eta\Lambda,\bar{K}N} \right). \label{eq:amplitude_isobasis_KbarN}
\end{align}
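For orientation, Eq.~\eqref{eq:amplitude} is straightforward to evaluate once the loop functions $G_i$ and the amplitudes $t_{ij}$ are supplied. The following Python sketch uses toy numbers in place of the chiral-unitary inputs; the values of $G$ and $t$ below are illustrative only, not the physical ones.

```python
import numpy as np

# Tree-level weights h_j from Eq. (eq:hadronstate); channel order here:
# 0: Kbar N, 1: eta Lambda, 2: pi Sigma (I = 0 combinations).
h = np.array([1.0, -np.sqrt(2) / 3, 0.0])

# Toy stand-ins for the loop functions G_i and the coupled-channel
# scattering amplitudes t_ij at a fixed M_inv; the physical values
# come from the chiral unitary approach cited in the text.
G = np.array([-0.02, -0.015, -0.01])
t = np.array([[30.0,  5.0, 20.0],
              [ 5.0, 10.0,  8.0],
              [20.0,  8.0, 15.0]])

V_P = 1.0  # weak-vertex factor; its absolute normalization is not fixed here

# M_j = V_P * (h_j + sum_i h_i G_i t_ij), as in Eq. (eq:amplitude)
M = V_P * (h + (h * G) @ t)
print(M)
```

The structure "tree term plus rescattering sum" is all the sketch is meant to show; realistic $G_i(M_{\rm inv})$ and $t_{ij}(M_{\rm inv})$ vary with the invariant mass.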
The partial decay width of the $\Lambda_{c}$ into the $\pi^{+}(MB)_{j}$ channel is given by
\begin{align}
\Gamma_j
&=\int d\Pi_{3}|\mathscr{M}_j|^2 ,
\end{align}
where $d\Pi_{3}$ is the three-body phase space. The invariant mass distribution is obtained as the derivative of the partial width with respect to $M_{\rm inv}$. In the present case, because the amplitude $\mathscr{M}_j$ depends only on $M_{\rm inv}$, the mass distribution ${\rm d}\Gamma_{j}/{\rm d}M_{\rm inv}$ is obtained by integrating the phase space as
\begin{align}
\frac{{\rm d}\Gamma_j}{{\rm d}M_{\rm inv}} &=\frac{1}{(2\pi)^3}\frac{p_{\pi^+}\tilde{p}_jM_{\Lambda_c^+}M_j}{M_{\Lambda_c^+}^2} |\mathscr{M}_j|^2, \label{eq:massdis}
\end{align}
where $M_j$ is the baryon mass. $p_{\pi^+}$ and $\tilde{p}_j$ represent the magnitudes of the three-momentum of the $\pi^+$ emitted in the weak decay, in the $\Lambda_c$ rest frame, and of the meson of the final meson-baryon state, in the meson-baryon rest frame:
\begin{align}
p_{\pi^+}=& \frac{\lambda^{1/2}(M_{\Lambda_c^+}^2,m_{\pi^+}^2,M_{\rm inv}^2)}{2M_{\Lambda_c^+}},\ \tilde{p}_j= \frac{\lambda^{1/2}(M_{\rm inv}^2,M_j^2,m_j^2)}{2M_{\rm inv}}, \notag \\
&\lambda(x,y,z) = x^{2}+y^{2}+z^{2}-2xy-2yz-2zx, \notag
\end{align}
where $m_j$ represents the meson mass.
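The kinematical factors above are easy to evaluate numerically. The Python sketch below (with illustrative, approximately PDG-like masses in MeV, and $|\mathscr{M}_j|^2$ set to unity since $V_P$ is not normalized here) computes the Källén function, the two momenta, and the prefactor of Eq.~\eqref{eq:massdis}:

```python
import math

def kallen(x, y, z):
    """Källén function lambda(x, y, z) = x^2 + y^2 + z^2 - 2xy - 2yz - 2zx."""
    return x * x + y * y + z * z - 2 * (x * y + y * z + z * x)

def momenta(M_Lc, m_pi, M_inv, M_B, m_M):
    """p_{pi+} in the Lambda_c rest frame and p~_j in the MB rest frame."""
    p_pi = math.sqrt(kallen(M_Lc**2, m_pi**2, M_inv**2)) / (2 * M_Lc)
    p_j = math.sqrt(kallen(M_inv**2, M_B**2, m_M**2)) / (2 * M_inv)
    return p_pi, p_j

def dGamma_dMinv(M_Lc, m_pi, M_inv, M_B, m_M, amp2=1.0):
    """Eq. (eq:massdis) with |M_j|^2 = amp2 (overall normalization arbitrary)."""
    p_pi, p_j = momenta(M_Lc, m_pi, M_inv, M_B, m_M)
    return p_pi * p_j * M_B * amp2 / ((2 * math.pi) ** 3 * M_Lc)

# Illustrative masses in MeV: Lambda_c+, pi+, then the pi0 Sigma0 channel
print(dGamma_dMinv(2286.5, 139.6, 1405.0, 1192.6, 135.0))
```

At a two-body threshold, $\lambda((M_B+m_M)^2, M_B^2, m_M^2)=0$, so $\tilde{p}_j$ vanishes there, as it should.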
Because the $\Lambda(1405)$ is mainly coupled to the $\pi\Sigma$ and $\bar{K}N$ channels, we calculate the invariant mass distribution of the decay to the $\pi\Sigma$ and $\bar{K}N$ channels. For the study of the $\Lambda(1670)$, we also calculate the decay to the $\eta \Lambda$ channel.
\section{Results} \label{sec:result}
We present the numerical results of the $MB$ invariant mass spectra in the $\Lambda_{c}\to \pi^{+}MB$ decay. We first show the results in the energy region near the $\bar{K}N$ threshold, where the $\Lambda(1405)$ resonance plays an important role. We then discuss the spectra in the higher energy region with emphasis on the $\Lambda(1670)$ resonance. The decay branching fractions of the $\Lambda_{c}$ into different final states are discussed at the end of this section.
\subsection{Spectrum near the $\bar{K}N$ threshold}
To calculate the region near the $\bar{K}N$ threshold quantitatively, the final-state interaction of the $MB$ system should be constrained by the new experimental data from the SIDDHARTA Collaboration~\cite{Bazzi:2011zj,Bazzi:2012eq}, because the precise measurement of the energy-level shift of kaonic hydrogen significantly reduces the uncertainty of the scattering amplitude below the $\bar{K}N$ threshold. Here we employ the meson-baryon amplitude in Refs.~\cite{Ikeda:2011pi,Ikeda:2012au}, which implements the next-to-leading-order terms in chiral perturbation theory to reproduce the low-energy $\bar{K}N$ scattering data, including the SIDDHARTA constraint. The isospin symmetry breaking is introduced by the physical values for the hadron masses. In this model, the two resonance poles of the $\Lambda(1405)$ are obtained at $1424-26i$ MeV and $1381-81i$ MeV.
We show the spectra of three $\pi\Sigma$ channels in Fig.~\ref{fig:massdis_IHW}. From this figure, we find the $\Lambda(1405)$ peak structure around 1420 MeV. It is considered that the peak mainly reflects the pole at $1424-26i$ MeV. Because the initial state contains the $\bar{K}N$ channel with vanishing $\pi\Sigma$ component as shown in Eq.~\eqref{eq:hadronstate}, the present reaction puts more weight on the higher energy pole which has the strong coupling to the $\bar{K}N$ channel.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 846 594]{figure/Fig_massdis_piSig_KbarN_IHW.pdf}
\caption{(Color online) Invariant mass distribution of the decay $\Lambda_c^+\to \pi^+ MB$ near the $\bar{K}N$ threshold. The solid line represents the spectrum for $\pi\Sigma$ channels and the dashed line represents the spectrum for $\bar{K}N$ channels. The meson-baryon amplitude is taken from Ref.~\cite{Ikeda:2012au}. }
\label{fig:massdis_IHW}
\end{center}
\end{figure}
To proceed further, let us recall the isospin decomposition of the $\pi\Sigma$ channels~\cite{Nacher:1998mi}. The particle basis and the isospin basis are related as follows:
\begin{align}
|\pi^0\Sigma^0\rangle &= -\frac{1}{\sqrt{3}}|\pi\Sigma\rangle^{I=0}-\sqrt{\frac{2}{3}}|\pi\Sigma\rangle^{I=2}, \notag \\
|\pi^-\Sigma^+\rangle &= -\frac{1}{\sqrt{3}}|\pi\Sigma\rangle^{I=0}-\frac{1}{\sqrt{2}}|\pi\Sigma\rangle^{I=1}-\frac{1}{\sqrt{6}}|\pi\Sigma\rangle^{I=2}, \label{eq:piSig_rel} \\
|\pi^+\Sigma^-\rangle &= -\frac{1}{\sqrt{3}}|\pi\Sigma\rangle^{I=0}+\frac{1}{\sqrt{2}}|\pi\Sigma\rangle^{I=1}-\frac{1}{\sqrt{6}}|\pi\Sigma\rangle^{I=2}. \notag
\end{align}
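A quick numerical check of these coefficients (a Python sketch of our own) confirms that each particle-basis state is normalized and that all three charged states carry the same $I=0$ weight, which underlies the $I=0$ filter discussed below:

```python
import numpy as np

# Coefficients of the decomposition above: rows are |pi0 Sigma0>,
# |pi- Sigma+>, |pi+ Sigma->; columns are the I = 0, 1, 2 components.
U = np.array([
    [-1 / np.sqrt(3),  0.0,            -np.sqrt(2 / 3)],
    [-1 / np.sqrt(3), -1 / np.sqrt(2), -1 / np.sqrt(6)],
    [-1 / np.sqrt(3),  1 / np.sqrt(2), -1 / np.sqrt(6)],
])

print(np.allclose((U ** 2).sum(axis=1), 1.0))  # each state is normalized: True
print(np.allclose(U[:, 0], -1 / np.sqrt(3)))   # common I=0 weight: True
```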
In general reactions, the initial state of the $MB$ amplitude is a mixture of the $I=0$ and $I=1$ components.\footnote{In most cases, the small effect of $I=2$ can be neglected.} The charged $\pi\Sigma$ spectra thus contain the $I=1$ contribution as well as the interference effects of different isospin components.
It is therefore remarkable that all the $\pi\Sigma$ channels have the same peak position in Fig.~\ref{fig:massdis_IHW}. This occurs because the present reaction picks up the $I=0$ initial state selectively, as explained in Sec.~\ref{sec:formulation}. In this case, the $I=1$ contamination is suppressed down to the isospin breaking correction, and hence all the charged states exhibit almost the same spectrum.\footnote{The small deviation is caused by the isospin violation effect in the meson-baryon loop functions.} The differences of the spectra, because of the $I=0$ filter in the present reaction, are much smaller than in photoproduction \cite{Moriya:2013eb,Moriya:2013hwg}, where the explicit contribution of the $I=0$ and $I=1$ channels makes the differences between the different $\pi\Sigma$ channels much larger, even changing the position of the peaks. In this respect, the $\Lambda_{c}\to\pi^{+}\pi\Sigma$ reaction is a useful process to extract information on the $\Lambda(1405)$, because even in the charged states (the $\pi^0 \Sigma^0$ automatically projects over $I=0$) one filters the $I=0$ contribution, and the charged states are easier to detect in actual experiments.
The spectra for the $\bar{K}N$ channels are also shown in Fig.~\ref{fig:massdis_IHW}. In the $\bar{K}N$ channels, the peak of the $\Lambda(1405)$ cannot be seen, because the $\bar{K}N$ threshold is above the $\Lambda(1405)$ energy. However, the enhancement near the threshold that we observe in Fig.~\ref{fig:massdis_IHW} is related to the tail of the $\Lambda(1405)$ peak. The shape of the $\bar{K}N$ spectrum, as well as its ratio to the $\pi\Sigma$ one, is a prediction of the meson-baryon interaction model. The detailed analysis of the near-threshold behavior of the $\bar{K}N$ spectra, together with the $\pi\Sigma$ spectra, will be important to understand the nature of the $\Lambda(1405)$.
\subsection{Spectrum above the $\bar{K}N$ threshold}
The spectrum above the $\bar{K}N$ threshold is also interesting. The LHCb Collaboration has found that a substantial amount of $\Lambda^{*}$s is produced in the $K^{-}p$ spectrum in the $\Lambda_{b}\to J/\psi K^{-}p$ decay~\cite{Aaij:2015tga}. Hence, the $K^{-}p$ spectrum in the weak decay process serves as a new opportunity to study the excited $\Lambda$ states.
For this purpose, here we adopt the model in Ref.~\cite{Oset:2001cn} for the meson-baryon final-state interaction, which reproduces the $\Lambda(1670)$ as well as the $\Lambda(1405)$ in the $I(J^P)=0(1/2^-)$ channel. The pole position of the $\Lambda(1670)$ is obtained at $1678-20i$ MeV.\footnote{The present pole position differs from the one in the original paper \cite{Oset:2001cn}, because the original pole position was calculated in the particle basis, whereas the present one is obtained in the isospin basis.}
Because the width of the $\Lambda(1670)$ is narrow, the pole of the $\Lambda(1670)$ also affects the invariant mass distribution of the $\Lambda_c^+$ decay.
In Fig.~\ref{fig:massdis_ORB}, we show the invariant mass distribution of the $\Lambda_c^+$ decay into the $\pi\Sigma$, $\bar{K}N$, and $\eta\Lambda$ channels. Because the meson-baryon amplitude in Ref.~\cite{Oset:2001cn} does not include the isospin breaking effect, all the isospin multiplets $\{K^{-}p,\bar{K}^{0}n\}$, $\{\pi^{0}\Sigma^{0},\pi^{+}\Sigma^{-},\pi^{-}\Sigma^{+}\}$ provide an identical spectrum. Because the $\Lambda(1520)$ resonance in $d$ wave is not included in the amplitude, such contribution should be subtracted to compare with the actual spectrum.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 655 463]{figure/Fig_massdis_Lambdac_ORB.pdf}
\caption{(Color online) Invariant mass distribution of the decay $\Lambda_c^+\to \pi^+ MB$. The solid, dotted, and dash-dotted lines represent the $\bar{K}N=\{K^-p,\ \bar{K^0}n\}$, $\pi\Sigma=\{\pi^0\Sigma^0,\ \pi^-\Sigma^+,\ \pi^+\Sigma^-\}$, and $\eta\Lambda$ channels, respectively. The meson-baryon amplitude is taken from Ref.~\cite{Oset:2001cn}, where the $\Lambda(1520)$ contribution in $d$ wave is not included.}
\label{fig:massdis_ORB}
\end{center}
\end{figure}
As in the previous section, we find the $\Lambda(1405)$ peak structure in the $\pi\Sigma$ channel and the threshold enhancement in the $\bar{K}N$ channel. Furthermore, in the higher-energy region, we find the additional peak structure of the $\Lambda(1670)$ around 1700 MeV in all channels. In particular, the peak is clearly seen in the $\bar{K}N$ and $\eta\Lambda$ channels, as a consequence of the stronger coupling of the $\Lambda(1670)$ to these channels than to the $\pi\Sigma$ channel~\cite{Oset:2001cn}. The $\eta\Lambda$ channel is selective to $I=0$, and the $\Lambda(1520)$ production is kinematically forbidden.
We expect that the structure of the $\Lambda(1670)$ can be analyzed from the measurements of the $\Lambda_c^+$ decay to the $\bar{K}N$ and $\eta\Lambda$ channels.
Finally, we examine the sensitivity of the spectrum to the $MB$ final-state interaction by comparing the spectra of Ref.~\cite{Ikeda:2012au} (hereafter called IHW) with those of Ref.~\cite{Oset:2001cn} (called ORB). In Fig.~\ref{fig:massdis_IHW_ORB}, we show the $\pi^{0}\Sigma^{0}$ and $K^{-}p$ spectra with the final-state interaction models of IHW and ORB with a common normalization of the weak decay vertex $V_{P}$. We find that the spectra of both channels in the IHW model are larger than those in the ORB model. This is caused by the difference of the loop functions $G_{i}$ appearing before the final-state interaction in Eqs.~\eqref{eq:amplitude_isobasis_piSig} and \eqref{eq:amplitude_isobasis_KbarN}, which depend on the subtraction constants in each model.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 655 463]{figure/Fig_massdis_IHW_ORB.pdf}
\caption{(Color online) Comparison of the final-state interaction models of IHW~\cite{Ikeda:2012au} (thick lines) and ORB~\cite{Oset:2001cn} (thin lines) with a common normalization of the weak decay vertex $V_{P}$. The solid and dotted lines represent the $\pi^0\Sigma^0$ and $K^-p$ channels, respectively.}
\label{fig:massdis_IHW_ORB}
\end{center}
\end{figure}
Because the absolute normalization of $V_{P}$ is not calculated here, we may rescale the spectra to compensate for the difference in magnitude and compare their shapes. In Fig.~\ref{fig:massdis_IHW_ORB2}, we compare the same IHW spectra with the ORB spectra multiplied by a factor of two. We find that the peak position of the $\pi^{0}\Sigma^{0}$ spectrum of ORB is slightly closer to the $\bar{K}N$ threshold than that of IHW, and the decrease of the $K^{-}p$ spectrum of ORB above the threshold is steeper than that of IHW. This can be understood from the position of the higher-energy pole of the $\Lambda(1405)$ in ORB at $1427-17i$ MeV, in comparison with $1424-26i$ MeV in IHW. This tendency is also seen in the comparison of the two models in the $\Lambda_{b}\to J/\psi MB$ decay~\cite{Roca:2015tea}. This indicates that the $MB$ spectra in the weak decays reflect the details of the meson-baryon scattering amplitude well.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=8cm,bb=0 0 655 463]{figure/Fig_test_massdis_IHW_ORB_twice.pdf}
\caption{(Color online) Comparison of the final-state interaction models of IHW~\cite{Ikeda:2012au} (thick lines) and ORB~\cite{Oset:2001cn} multiplied by a factor of two (thin lines). The solid and dotted lines represent the $\pi^0\Sigma^0$ and $K^-p$ channels, respectively.}
\label{fig:massdis_IHW_ORB2}
\end{center}
\end{figure}
\subsection{Branching fractions}
Experimentally, the decay branching fractions of $\Lambda_{c}\to \pi^{+}MB$ are determined as~\cite{Agashe:2014kda}
\begin{align}
\Gamma(\Lambda_{c}\to pK^{-}\pi^{+},\text{nonresonant}) &= (2.8\pm 0.8)\,\%,
\label{eq:pKpi}\\
\Gamma(\Lambda_{c}\to \Sigma^{+}\pi^{+}\pi^{-}) &= (3.6\pm 1.0)\,\%, \\
\Gamma(\Lambda_{c}\to \Sigma^{-}\pi^{+}\pi^{+}) &= (1.7\pm 0.5)\,\%, \\
\Gamma(\Lambda_{c}\to \Sigma^{0}\pi^{+}\pi^{0}) &= (1.8\pm 0.8)\,\%,
\end{align}
where the nonresonant component is obtained by subtracting the contributions from the $K^{*}(892)^{0}$, $\Delta(1232)^{++}$, and $\Lambda(1520)$ in the $K^{-}\pi^{+}$, $p\pi^{+}$, and $pK^{-}$ spectra, respectively. Taking the ratios of the central values, we obtain
\begin{align}
\frac{\Gamma(\Lambda_{c}\to \Sigma^{+}\pi^{+}\pi^{-})}
{\Gamma(\Lambda_{c}\to pK^{-}\pi^{+},\text{nonresonant}) } &= 1.3, \\
\frac{\Gamma(\Lambda_{c}\to \Sigma^{-}\pi^{+}\pi^{+})}
{\Gamma(\Lambda_{c}\to pK^{-}\pi^{+},\text{nonresonant}) } &= 0.6, \\
\frac{\Gamma(\Lambda_{c}\to \Sigma^{0}\pi^{+}\pi^{0})}
{\Gamma(\Lambda_{c}\to pK^{-}\pi^{+},\text{nonresonant}) } &=
0.6.
\end{align}
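These ratios follow directly from the central values quoted above; a one-line check (channel labels are ours):

```python
# Central values (in %) of the measured branching fractions quoted above.
nonres_pKpi = 2.8
fractions = {"Sigma+ pi+ pi-": 3.6, "Sigma- pi+ pi+": 1.7, "Sigma0 pi+ pi0": 1.8}

# Ratios to the nonresonant pK-pi+ mode, rounded to one decimal place.
ratios = {ch: round(bf / nonres_pKpi, 1) for ch, bf in fractions.items()}
print(ratios)  # {'Sigma+ pi+ pi-': 1.3, 'Sigma- pi+ pi+': 0.6, 'Sigma0 pi+ pi0': 0.6}
```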
In principle, these ratios can be calculated in the present model by integrating Eq.~\eqref{eq:massdis} over $M_{\rm inv}$. However, in the present calculation, we consider the process which is dominant in the small $M_{\rm inv}$ region, as explained in Sec.~\ref{sec:formulation}. In the large $M_{\rm inv}$ region, the processes other than Fig.~\ref{fig:Lambdac_decay} can contribute. Also, higher excited $\Lambda^{*}$~\cite{Feijoo:2015cca} and resonances in the $\pi^{+} M$ and $\pi^{+} B$ channels may play an important role.\footnote{The largest contributions from $K^{*}$, $\Delta$, and $\Lambda(1520)$ are subtracted in the data of Eq.~\eqref{eq:pKpi}.} In this way, the validity of the present framework is not necessarily guaranteed for the large $M_{\rm inv}$ region.
Nevertheless, it is worth estimating the branching ratios by simply extrapolating the present model. The theoretical estimate of the ratio of the decay fractions is obtained as
\begin{align}
\frac{\Gamma_{\pi^-\Sigma^+}}{\Gamma_{K^-p}} =
\begin{cases}
1.05 & (\text{Ref.~\cite{Ikeda:2012au}}), \\
0.95 & (\text{Ref.~\cite{Oset:2001cn}}).
\end{cases}
\label{eq:decayratio}
\end{align}
Given the uncertainties in the experimental values and the caveats in the extrapolation discussed above, it is fair to say that the gross features of the decay are captured by the present model. We note that the difference of the charged $\pi\Sigma$ states in our model is of the order of the isospin breaking correction. The large deviation in the experimental data, albeit with non-negligible uncertainties, may indicate the existence of mechanisms which are not included in the present framework. It is worth noting that in the theoretical model of Ref.~\cite{Ikeda:2012au} the $\pi^- \Sigma^+ \pi^+$ channel has the largest strength, as in the experiment.
Let us also mention the measured value of the branching fraction of $\Gamma(\Lambda_{c}\to \Lambda\pi^{+}\pi^{0}) = 3.6\pm 1.3 \%$~\cite{Agashe:2014kda}. Because $\pi^{0}\Lambda$ is purely in $I=1$, the present model does not provide this decay mode. The finite fraction of this mode indicates the existence of mechanisms other than the present process. In other words, the validity of the present mechanism for the $I=0$ filter can be tested by measuring the $\pi^{0}\Lambda$ spectrum in the small $M_{\rm inv}$ region. We predict that the amount of the $\pi^{0}\Lambda$ mode should be smaller than the $\pi\Sigma$ mode, as far as the small $M_{\rm inv}$ region is concerned.
\section{Summary} \label{sec:summary}
We have studied the $\Lambda_c\to \pi^+ MB$ decay process, which is favored for several reasons: the CKM matrix elements, the color factor, the $ud$-diquark correlation, the number of $\bar{q}q$ creations, and the kinematical condition. Compared with the similar decay $\Lambda_b\to J/\psi MB$, the $\Lambda_c$ decay benefits from the CKM-matrix element $|V_{ud}|\gg |V_{cd}|$ and the extra color factor in the $Wu\bar{d}$ vertex. The reaction has the virtue of filtering $I=0$: in the $\bar{K}N$ and $\pi\Sigma$ final states there is no contamination from $I=1$ channels, unlike in other reactions where such contamination makes the interpretation more difficult, so the properties of the $\Lambda(1405)$ can be studied more cleanly.
We have analyzed the $MB$ spectrum, taking into account the $MB$ final-state interaction by the chiral unitary approach. Near the $\bar{K}N$ threshold energy, the peak of the $\pi\Sigma$ spectra appears around 1420 MeV, reflecting the higher mass pole of the two poles of the $\Lambda(1405)$. Thanks to the suppression of the $I=1$ contamination, all three charge combinations of the $\pi\Sigma$ channels show almost the same spectrum. The model for the interaction of $\bar{K}N$ with $\pi\Sigma$ and other coupled channels allows us to make predictions for $\bar{K}N$ production and relate it to $\pi\Sigma$ production.
Above the $\bar{K}N$ threshold, the peak of the $\Lambda(1670)$ can be observed in the $\bar{K}N$ and $\eta\Lambda$ channels. With the Bonn model of Ref.~\cite{Mai:2014xna}, the mass distribution did not show the $\Lambda(1670)$ peak in the $\Lambda_b\to J/\psi K^-p$ reaction. It thus becomes interesting to perform the $\Lambda_c\to \pi^+K^-p$ reaction, measure the $K^-p$ mass distribution, and see what one observes for this resonance, which is catalogued with four stars in the PDG~\cite{Agashe:2014kda}.
In this way, we conclude that the $\Lambda_c\to \pi^+MB$ decay is an ideal process by which to study the $\Lambda$ resonances. We obtain the above-mentioned nice features based on the dominance of the mechanism shown in Fig.~\ref{fig:Lambdac_decay} for the $\bar{K}N$ and $\pi\Sigma$ channels. One should recall that, for the $\pi \Sigma$ spectra, integrated rates are available experimentally, but the peak of the $\Lambda(1405)$ has not been reported. The work done here should stimulate steps in this direction and also the measurement of distributions that show the resonances discussed in the present paper.
\section*{Acknowledgments}
The authors thank the Yukawa Institute for Theoretical Physics, Kyoto University, where this work was initiated during the YITP-T-14-03 workshop on ``Hadrons and Hadron Interactions in QCD.'' This work is supported in part by Open Partnership Joint Projects of JSPS Bilateral Joint Research Projects, JSPS KAKENHI Grant No. 24740152, the Yukawa International Program for Quark-Hadron Sciences (YIPQS), the Spanish Ministerio de Economia y Competitividad and European FEDER funds under Contracts No. FIS2011-28853-C02-01 and No. FIS2011-28853-C02-02, and the Generalitat Valenciana in the program Prometeo II, 2014/068. We acknowledge the support of the European Community-Research Infrastructure Integrating Activity Study of Strongly Interacting Matter (acronym HadronPhysics3, Grant Agreement No. 283286) under the Seventh Framework Programme of the EU.
\section{Introduction}
\subsection{Notation}
\begin{itemize}
\item For an integer $x$ we will use $x^-$ to denote an integer less than or equal to $x$, and $x^+$ an integer at least as large as $x$.
\item We use the $\oplus$ operator to refer to the bitwise binary XOR operation. For example, $3 \oplus 5 = 6$.
\item Other notation common to combinatorial game theory is mentioned below.
\end{itemize}
\subsection{Combinatorial Game Theory}
Two-player games with alternating turns, perfect information, and no random elements are known as \emph{combinatorial games}. Combinatorial Game Theory concerns determining which of the players has a winning move from any \emph{position} (game state). Some important considerations here follow.
\begin{itemize}
\item This paper only considers games under \emph{normal play} conditions, meaning that the last player to make a move wins. (If a player can't make a legal move on their turn, they lose.)
\item The sum of two game positions is also a game where a legal move consists of moving on one of the two position summands. The value of a game sum is simply the sum of the values of the summands.
\item Many of the sections require the reader to be moderately comfortable with number values, nimbers (or Grundy values), and outcome classes (specifically, $\ensuremath{\mathcal{N}}$ and $\ensuremath{\mathcal{P}}$).
\item Understanding of the game-set notation (e.g. $\{*, 1 | *, 2\}$) is necessary for understanding Lemma \ref{lem:numbersPlusStar}.
\end{itemize}
The authors recommend \cite{LessonsInPlay:2007} and \cite{WinningWays:2001} for background texts on combinatorial game theory.
\subsection{Algorithmic Combinatorial Game Theory}
Algorithmic combinatorial game theory concerns the computational difficulty in determining the winnability, outcome class, or value of game positions. Given a position, how long does it take for an algorithm to determine whether the next player has a winning strategy? For many rulesets, this can be solved in time polynomial in the size of the representation of the position. These ``easy'' rulesets usually allow some sort of closed formula to determine the winner.
For others, the problem is known to be intractable. For example, solving \ruleset{Snort} is known to be \cclass{PSPACE}-complete~\cite{DBLP:journals/jcss/Schaefer78}, meaning that unless an efficient algorithm is found to solve all other problems in \cclass{PSPACE}, no polynomial-time algorithm can determine the winnability of all \ruleset{Snort} positions. For these ``hard'' games, usually the majority of the induced game tree must be inspected in order to determine the winner.
In algorithmic combinatorial game theory, a ruleset is considered \emph{solved} if either the polynomial time ``trick'' is known, or if it has been shown to be intractable. Many rulesets remain unsolved: it is unknown whether they can be evaluated efficiently or not.
In this paper, we discuss known algorithmic results about each ruleset.
\subsection{Data Structures}
Data structures are logical structures used to store digital data in computing systems. Different structures facilitate different operations. For example, a max-heap can quickly add and remove elements, but is restricted to removing only the greatest element.
A data structures course in computer science is a common course in the basic programming sequence in many undergraduate curricula. The topics cover many logical structures to store digital data. This paper covers games based on: arrays, sets, stacks, queues, priority queues, linked lists, and binary trees. We define each of these in the following sections.
\subsection{Organization}
The following sections are each devoted to a ruleset based on a particular data structure. We provide Table \ref{table:organization} as a handy key to the corresponding data structures, ruleset names, and section numbers.
\begin{table}[h!]
\begin{center}\begin{tabular}{|l|l|l|}
\hline
Data Structure & Ruleset & Section\\ \hline
Array & \ruleset{Nim} & \ref{section:arrays} \\ \hline
Sets & \ruleset{Antonim} & \ref{section:sets} \\ \hline
Stacks & \ruleset{Tower Nim} & \ref{section:stacks}\\ \hline
Queues & \ruleset{Rotisserie Nim} & \ref{section:queues}\\ \hline
Priority Queues & \ruleset{Greedy Nim} & \ref{section:priorityQueues} \\ \hline
Linked Lists & \ruleset{Myopic Col} & \ref{section:linkedLists} \\ \hline
Binary Trees & \ruleset{Myopic Col} & \ref{section:binary-trees} \\ \hline
Graphs & many games & \ref{section:graphs} \\ \hline
\end{tabular}\end{center}
\caption{Corresponding data structures, rulesets, and sections.}\label{table:organization}
\end{table}
\section{Arrays}
\label{section:arrays}
An array is a linear data structure addressable by the natural numbers. Each array element is stored in a cell so that the first $n$ elements have respective addresses $0, 1, \ldots, n-1$.
\subsection{Ruleset: Nim}
\ruleset{Nim} is a simple combinatorial game that has been well studied~\cite{Bouton:1901}.
\begin{definition}[Nim]
\label{def:nim}
\ruleset{Nim} is an impartial combinatorial game where each position is represented by a list of natural numbers: $K = (k_0, k_1, \ldots, k_{n-1})$, $k_i \in \mathbb{N}$. The position represented by $M = (m_0, m_1, \ldots, m_{n-1})$ is an option of $K$ if $\exists i \in \{0, 1, \ldots, n-1\}:$
\begin{itemize}
\item $\forall j \neq i: k_j = m_j$, and
\item $m_i < k_i$
\end{itemize}
\end{definition}
The Grundy value of any \ruleset{Nim} position can be found in linear time: $\mathcal{G}(k_0, k_1, \ldots, k_{n-1})= k_0 \oplus k_1 \oplus \cdots \oplus k_{n-1}$. There is a winning move from position $K$ exactly when the Grundy value, $\mathcal{G}(K)$, is not zero~\cite{Bouton:1901}.
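Bouton's formula makes \ruleset{Nim} easy to solve in practice. The following Python sketch (function names are ours) computes the Grundy value and, when one exists, a winning option:

```python
from functools import reduce
from operator import xor

def nim_grundy(heaps):
    """Grundy value of a Nim position: the XOR of all heap sizes (Bouton)."""
    return reduce(xor, heaps, 0)

def nim_winning_move(heaps):
    """Return a winning option (as a new tuple), or None if the position is in P."""
    g = nim_grundy(heaps)
    if g == 0:
        return None
    for i, k in enumerate(heaps):
        target = k ^ g          # the heap size that zeroes the XOR-sum
        if target < k:          # legal only if it actually removes sticks
            return heaps[:i] + (target,) + heaps[i + 1:]

print(nim_grundy((1, 2, 3)))        # 0: a P-position
print(nim_winning_move((3, 5, 7)))  # (2, 5, 7)
```

Such a heap with $k \oplus g < k$ always exists when $g \neq 0$: any heap whose binary expansion contains the highest set bit of $g$ works.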
\section{Sets}
\label{section:sets}
A set is a data structure where there is no implicit order on the elements, and copies of elements are not recorded (each element appears at most once).
\subsection{Ruleset: Antonim}
Instead of keeping a list (or bag) of heaps as in \ruleset{Nim}, we employ a set. In the resulting \ruleset{Antonim}, each turn consists of choosing a heap from the set, removing some number of sticks from that heap, then replacing the original heap with the new heap. (If the resulting heap has no sticks, we don't bother to add it back.) The catch here is that each heap is simply a natural number, so if there is already a heap of that size in the set, the new heap is forgotten about; that player might as well have taken all the sticks in that heap. We provide a formal description in Definition \ref{def:antonim}.
\begin{definition}[Antonim]
\label{def:antonim}
\ruleset{Antonim} is an impartial combinatorial game where each position is represented by $S \subset \mathbb{N} \setminus \{0\}$. The position represented by $S'$ is an option of $S$ if:
\begin{itemize}
\item $\exists x \notin S'$ such that $S = S' \cup \{x\}$, or
\item $\exists x \in S, y \in S'$ such that $x \notin S'$ and $y < x$
\end{itemize}
\end{definition}
The outcome classes of \ruleset{Antonim} positions with up to three piles are already known \cite{WinningWays:2001}. The rules are as follows:
\begin{itemize}
\item $\{\}$ is a terminal position; so the position is in $\mathcal{P}$.
\item One pile: $\{k\} \in \mathcal{N}$, as there is always a winning move (by taking all sticks).
\item Two piles: $\{a, b\} \in
\begin{cases}
\mathcal{P}, & \{a, b\} = \{2k+1, 2k+2\} \mbox{ for some } k \in \mathbb{N} \cr
\mathcal{N}, & \mbox{otherwise}
\end{cases}$
\item Three piles: $\{a, b, c\} \in
\begin{cases}
\mathcal{P}, & (a+1) \oplus (b+1) \oplus (c+1)=0 \cr
\mathcal{N}, & \mbox{otherwise}
\end{cases}$
\end{itemize}
This last case can be proven inductively or by a reduction to standard Nim. A winning move is to add one stone to each heap, make an appropriate move in Nim, then remove a stone from each heap. The appropriate Nim move will not eliminate any heap entirely since that would require two equal-sized heaps to be present, which is not possible in Antonim. Play continues until the game is down to two heaps.
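The three-pile rule can also be verified by brute force over small positions. The following Python sketch (our own naive solver, not from the literature) recursively classifies positions and checks the formula:

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def antonim_in_N(pos):
    """True iff the Antonim position pos (a sorted tuple of distinct
    heap sizes) is in the outcome class N."""
    s = set(pos)
    options = []
    for x in s:
        rest = s - {x}
        options.append(rest)                   # take the whole heap
        for y in range(1, x):
            if y not in rest:                  # shrinking onto an existing size
                options.append(rest | {y})     # is equivalent to taking the heap
    return any(not antonim_in_N(tuple(sorted(o))) for o in options)

# Check the three-pile rule: {a,b,c} is in P iff (a+1) xor (b+1) xor (c+1) == 0.
for a, b, c in combinations(range(1, 9), 3):
    formula_says_P = ((a + 1) ^ (b + 1) ^ (c + 1)) == 0
    assert formula_says_P == (not antonim_in_N((a, b, c)))
print("three-pile formula verified for heaps up to 8")
```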
There is currently no known polynomial-time algorithm for determining the outcome class of an Antonim position containing four or more heaps, and computer analysis has not yet shed light on a possible method.
\section{Stacks}
\label{section:stacks}
A stack is a data structure where elements are added and removed. Each remove operation pulls out the most recently added element. We can represent this with a list where items are added and removed from the same end, called the top. The remove operation (called \emph{pop}) then always fetches the most recently added (\emph{pushed}) element.
Consider the following stack (the top is on the right-hand side): $(5, 2, 4)$. If we pop an element off the stack, $(5, 2)$ will remain. If we then push $1$ followed by $6$ and then pop again, the resulting stack will be $(5, 2, 1)$.
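In Python, a list serves directly as a stack; the example above becomes:

```python
stack = [5, 2, 4]   # the top of the stack is the right-hand end
stack.pop()         # removes 4 -> [5, 2]
stack.append(1)     # push 1    -> [5, 2, 1]
stack.append(6)     # push 6    -> [5, 2, 1, 6]
stack.pop()         # removes 6 -> [5, 2, 1]
print(stack)        # [5, 2, 1]
```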
\subsection{Ruleset: Tower Nim}
\ruleset{Tower Nim} is another spin-off of \ruleset{Nim}. The difference here is that the heaps are arranged linearly. A move consists of removing sticks from the top (or front) heap. If all sticks in one heap are removed, then that heap is removed from the stack. \ruleset{Tower Nim} is similar to \ruleset{End Nim}~\cite{DBLP:journals/combinatorics/AlbertN01}, but with the added restriction that only one end is accessible. We formalize this in Definition \ref{def:towerNim}.
\begin{definition}[\ruleset{Tower Nim}]
\label{def:towerNim}
\ruleset{Tower Nim} is an impartial ruleset where positions are described by a list of non-zero natural numbers: $L = (a_0, a_1, \ldots, a_{n-1}, a_n)$. The position represented by $L'$ is an option of $L$ if either
\begin{itemize}
\item $L' = (a_0, \ldots, a_{n-1})$, or
\item $\exists b \in \mathbb{N} \setminus \{0\}$ such that $b < a_n$ and $L' = (a_0, \ldots, a_{n-1}, b)$.
\end{itemize}
\end{definition}
After a handful of games, it becomes clear that the winnability of the game is often determined by whether the top heap has 1 stick.
\begin{observation}
\label{obs:stackAllOnes}
For games with a stack of all ones, $(1, 1, \ldots, 1)$, the nimber is the xor-sum of all heaps.
\end{observation}
\begin{lemma}
\label{lem:towerNimBigTop}
When the top heap is the only heap with more than one stick, the position is always in \ensuremath{\mathcal{N}}.
\end{lemma}
\begin{proof}
Let $(1, 1, \ldots, 1, x)$ represent a position, $G$, of \ruleset{Tower Nim}, where $x > 1$. There are two cases, based on the length of the list.
\begin{itemize}
\item If the position has an even number of heaps, then the next player can take $x-1$ sticks, reducing the game to a zero position.
\item If the position has an odd number of heaps, then the next player can take all $x$ sticks in the top heap, leaving an even number of 1-heaps, a zero position.
\end{itemize}
Since the current player always has a winning move, $G$ must be in \ensuremath{\mathcal{N}}.
\end{proof}
\begin{corollary}
\label{corol:towerOnesOnTop}
Let $(\ldots, x, 1, \ldots, 1)$ represent a position, $G$, of \ruleset{Tower Nim}, where $x > 1$ and $n$ is the number of 1-heaps above $x$. Then $G \in \ensuremath{\mathcal{N}}$ exactly when $n$ is even; for $n \geq 1$, the nimber of $G$ is $*$ when $n$ is even and $0$ when $n$ is odd.
\end{corollary}
\begin{proof}
Each of the first $n$ moves is forced, since the accessible heap has a single stick. After $n$ such turns, the position is $(\ldots, x)$, which is in \ensuremath{\mathcal{N}}. The player to move at $G$ therefore wins exactly when $n$ is even, and applying the mex rule along the forced chain gives nimber $0$ for odd $n$ and $*$ for even $n \geq 2$.
\end{proof}
\begin{lemma}
Let $(\ldots, 1, x)$ represent a position, $G$, of \ruleset{Tower Nim}. There are two cases for the nimber of $G$:
\begin{itemize}
\item[$x = 1$] Then either $G = 0$ or $G = *$.
\item[$x \geq 2$] Then $G = *x$.
\end{itemize}
\end{lemma}
\begin{proof}
The first case follows directly from the previous corollary.
The second case can be proven by strong induction on $x$. In the base case, $x = 2$, $G$ has both $0$ and $*$ as options by the previous corollary, so by the mex rule $G = *2$. For the inductive case, when $x = k+1$, $G$ has all the options that the $x = k$ position has ($0, *, *2, \ldots, *(k-1)$), as well as $*k$, but none equal to $*(k+1)$. Thus, by the mex rule, $G = *(k+1)$.
\end{proof}
\begin{corollary}
Let $(\ldots, y, x)$, where $y \geq 2$, represent a position, $G$, of \ruleset{Tower Nim}. There are two cases for the nimber of $G$:
\begin{itemize}
\item[$x = 1$] Then $G = 0$ ($G \in \ensuremath{\mathcal{P}}$).
\item[$x \geq 2$] Then $G \in \ensuremath{\mathcal{N}}$.
\end{itemize}
\end{corollary}
\begin{proof}
The first case occurs because the only option, $(\ldots, y)$, is a nonzero position as described in the previous lemma. The second case occurs because the zero position from the first case is an option: the current player reduces the top heap to a single stick. 
\end{proof}
With these results, we can determine the outcome class of any position by counting the number of consecutive size-1 heaps on top of the stack and whether there is a bigger heap underneath the consecutive 1-heaps.
\begin{corollary}[\ruleset{Tower Nim} is in \cclass{P}]
The outcome class of any \ruleset{Tower Nim} position can be determined in \cclass{P}.
\end{corollary}
\begin{proof}
Consider a \ruleset{Tower Nim} position $G = (x_{n-1}, \ldots, x_1, x_0)$. Then:
\begin{itemize}
\item if $\forall i: x_i = 1$ then, by Observation \ref{obs:stackAllOnes} $G =
\begin{cases}
0\ (G \in \ensuremath{\mathcal{P}}), & n \mbox{ is even}\cr
*\ (G \in \ensuremath{\mathcal{N}}), & n \mbox{ is odd}
\end{cases}$
\item otherwise, let $x_k$ be the non-one heap closest to the top. Now, by Corollary \ref{corol:towerOnesOnTop} $G \in
\begin{cases}
\ensuremath{\mathcal{N}}, & k \mbox{ is even}\cr
\ensuremath{\mathcal{P}}, & k \mbox{ is odd}
\end{cases}$
\end{itemize}
\end{proof}
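The counting rule just proved can be cross-checked against a direct game-tree search. The following Python sketch (hypothetical helper names, not code from this paper) compares the two on all small positions:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def tower_in_N(pos):
    """Brute-force outcome for Tower Nim; pos[-1] is the top heap."""
    if not pos:
        return False
    opts = [pos[:-1]]                                     # remove the top heap
    opts += [pos[:-1] + (b,) for b in range(1, pos[-1])]  # shrink the top heap
    return any(not tower_in_N(o) for o in opts)

def tower_in_N_fast(pos):
    """The counting rule: k consecutive 1-heaps on top decide the outcome."""
    k = 0
    for h in reversed(pos):
        if h != 1:
            break
        k += 1
    if k == len(pos):          # all heaps have size 1
        return k % 2 == 1      # N iff the number of heaps is odd
    return k % 2 == 0          # a bigger heap below the 1s: N iff k is even

# Agreement on all positions with up to 4 heaps of size at most 3.
for n in range(5):
    for pos in product(range(1, 4), repeat=n):
        assert tower_in_N(pos) == tower_in_N_fast(pos)
```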
\section{Queues}
\label{section:queues}
Queues are another data structure very similar to stacks. The only difference is that the remove operation extracts the least-recently added element. We can represent this as a list:
queues push elements to the \emph{back} of the list and pop from the \emph{front}, the opposite end.
Consider the following queue (the front is the left-hand side, the back is on the right): $(5, 2, 4)$. If we pop an element from the queue, $(2, 4)$ will remain. If we then push $1$ followed by $6$, followed by another pop, the resulting queue will be $(4, 1, 6)$.
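The worked example can again be replayed in Python, here with `collections.deque` as the queue (a sketch; pushes go to the right end, pops come from the left):

```python
from collections import deque

# Front on the left, back on the right, as in the text.
queue = deque([5, 2, 4])
front = queue.popleft()    # pop removes the least recently added element, 5
assert front == 5 and list(queue) == [2, 4]
queue.append(1)            # push 1 to the back
queue.append(6)            # push 6 to the back
queue.popleft()            # another pop removes 2
assert list(queue) == [4, 1, 6]
```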
\subsection{Ruleset: Rotisserie Nim}
\ruleset{Rotisserie Nim} is the queue-counterpart to \ruleset{Tower Nim}. In this game, the next player removes sticks from the heap at the front of the queue. Then, if there are any sticks left in that heap, it is placed back at the end of the line. We formalize this in Definition \ref{def:rotisserieNim}. This is equivalent to \ruleset{Adjacent Nim}, which is itself a special case of \ruleset{Vertex Nim} played on a directed cycle~\cite{DBLP:journals/tcs/DucheneR14}.
\begin{definition}[\ruleset{Rotisserie Nim}]
\label{def:rotisserieNim}
\ruleset{Rotisserie Nim} is an impartial ruleset where positions are described by a list of non-zero natural numbers: $L = (a_0, a_1, \ldots, a_n)$. The position represented by $L'$ is an option of $L$ if either
\begin{itemize}
\item $L' = (a_1, \ldots, a_n)$, or
\item $\exists b \in \mathbb{N} \setminus \{0\}$ such that $b < a_0$ and $L' = (a_1, \ldots, a_n, b)$.
\end{itemize}
\end{definition}
We begin by characterizing two heap games.
\begin{theorem}\label{thm:adjTwoHeaps}
If $L=(a_0,a_1)$, then $L\in \mathcal{N}$ iff $a_0>a_1$.
\end{theorem}
\begin{proof}
First, assume $a_0>a_1$. The next player can move to the position $(a_1,a_1)$, and continue to match the other player's moves until reaching the terminal position. Next, assume that $a_1\geq a_0$. The only move from $L$ is to a position of the form $(a_1,a_0^-)$. Since $a_0^-<a_0\leq a_1$, this is an $\mathcal{N}$-position by the above argument, and therefore $L\in \mathcal{P}$.
\end{proof}
The following theorem, proven in \cite{DBLP:journals/tcs/DucheneR14}, completely characterizes the classes $\mathcal{P}$ and $\mathcal{N}$ where each heap has size at least two.
\begin{theorem}\label{thm:adjNim}
Let $L = (a_0, a_1, \ldots, a_n),a_i\geq 2, \forall 0\leq i\leq n$ be a position in \ruleset{Rotisserie Nim}, and let $a_-=\min\{a_i\}_0^n$. Then $L\in \mathcal{N}$ iff $n$ is odd or the smallest index $j$ for which $a_j=a_-$ is even.
\end{theorem}
We extend this to some small positions with heaps of size one.
\begin{theorem}\label{thm:adjThreeHeaps1}
If $L=(a_0,a_1,a_2)$ with $a_0=1$ and $a_1>a_2$, then $L\in \mathcal{P}$.
\end{theorem}
\begin{proof}
Assume $a_1>a_2$ and $a_0=1$. The only valid move is to $(a_1,a_2)$ which is in $\mathcal{N}$ by Theorem \ref{thm:adjTwoHeaps}.
\end{proof}
In order to complete the characterization of all three heap positions in \ruleset{Rotisserie Nim} we must first prove the following lemma.
\begin{lemma}\label{lem:adjStrategy}
Let $L=(a_0,a_1,\ldots ,a_n)$ be a position in \ruleset{Rotisserie Nim}. If $(a_1,\ldots ,a_n)\in \mathcal{N}$ then:
\begin{itemize}
\item If $(n+1)$ is odd and $L\in \mathcal{N}$ then $(a_1,\ldots ,a_n, 1)\in \mathcal{P}$, and
\item If $(n+1)$ is even and $L\in \mathcal{N}$ then $(a_1,\ldots ,a_n, a_0-1)\in \mathcal{P}$.
\end{itemize}
\end{lemma}
\begin{proof}
If $L$ has an even number of heaps, then the same player who encounters heap $a_0$ will also encounter what remains of this heap on a subsequent turn. Therefore, if removing the entire heap does not move the game into a $\mathcal{P}$-position then there is no advantage to removing more than a single stick. Similarly, with an odd number of heaps there is no advantage to leaving more than a single stick for the other player to encounter on a subsequent turn.
\end{proof}
Lemma \ref{lem:adjStrategy} provides an interesting strategy, which leads to a win if $L\in \mathcal{N}$. Using this strategy we prove another lemma necessary to complete our characterization of three heap games.
\begin{lemma}\label{lem:adjCompare}
If $L=(a_0,a_1,\ldots)\in \mathcal{P}$ then so is $L'=(a_0^-,a_1^+,a_2^-,a_3^+,\ldots)$.
\end{lemma}
\begin{proof}
Assume to the contrary that $L\in \mathcal{P}$ and $L'\in \mathcal{N}$. Therefore there is a move from $L'$ to a $\mathcal{P}$-position $(a_1^+,a_2^-,a_3^+,\ldots ,a_0^{--})$. That means that a similar move can take $L$ to $(a_1,a_2,a_3,\ldots, a_0^{--})$ since $a_0^-\leq a_0$. In fact, each move to a position in $\mathcal{P}$ from the starting position of $L'$ can be copied by the first player in the starting position $L$, and hence $L\in \mathcal{N}$. This contradicts our assumption, hence $L'\in \mathcal{P}$ as well.
\end{proof}
\begin{theorem}\label{thm:adjThreeHeaps2}
If $a_0>1$ then $L\in \mathcal{P}$ iff $a_1>1$ and $a_2=1$.
\end{theorem}
\begin{proof}
Assume $L=(a_0,2,1)$. Since $(2,1)\in \mathcal{N}$ by Theorem \ref{thm:adjTwoHeaps}, Lemma \ref{lem:adjStrategy} tells us that the first move should be to the position $(2,1,1)$. The second player can then move to $(1,1)$, which is in $\mathcal{P}$. Therefore, $L\in \mathcal{P}$. By Lemma \ref{lem:adjCompare}, $(a_0,a_1,1)\in \mathcal{P}$ whenever $a_1\geq 2$.
Next, assume $a_0,a_2>1$ and $a_1>a_2$. Because $(a_1,a_2)\in \mathcal{N}$ by Theorem \ref{thm:adjTwoHeaps}, by Lemma \ref{lem:adjStrategy} the first player will move to position $(a_1,a_2,1)$. Similarly, because $(a_2,1)\in \mathcal{N}$ the next player will move to $(a_2,1,1)$. The first player can then move to $(1,1)\in \mathcal{P}$. Therefore, $(a_0,a_1,a_2)\in \mathcal{N}$ when $a_0,a_2>1$ and $a_1>a_2$. If instead $a_1=a_2=1$, the first player can remove the entire front heap, moving to $(1,1)\in \mathcal{P}$ by Theorem \ref{thm:adjTwoHeaps}, so again $(a_0,1,1)\in \mathcal{N}$.
Finally, consider the case where $a_0>1$, $a_2>1$ and $a_1\leq a_2$. Since $(a_1,a_2)\in \mathcal{P}$ by Theorem \ref{thm:adjTwoHeaps}, the first player can remove the entire front heap, so $(a_0,a_1,a_2)\in \mathcal{N}$. Therefore, if $a_0>1$ then $(a_0,a_1,a_2)\in \mathcal{P}$ iff $a_1>1$ and $a_2=1$.
\end{proof}
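The two- and three-heap characterizations above can be confirmed by brute force. A minimal Python sketch (hypothetical helper names; the recursion terminates because every move strictly decreases the total stick count):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def rot_in_N(pos):
    """Brute-force outcome for Rotisserie Nim; pos[0] is the front heap."""
    if not pos:
        return False
    opts = [pos[1:]]                                    # empty the front heap
    opts += [pos[1:] + (b,) for b in range(1, pos[0])]  # shrink and requeue it
    return any(not rot_in_N(o) for o in opts)

# Two heaps: (a0, a1) is in N iff a0 > a1.
for a0 in range(1, 7):
    for a1 in range(1, 7):
        assert rot_in_N((a0, a1)) == (a0 > a1)

# Three heaps: (1, a1, a2) with a1 > a2 is in P; for a0 > 1,
# the position is in P iff a1 > 1 and a2 == 1.
for a0 in range(1, 6):
    for a1 in range(1, 6):
        for a2 in range(1, 6):
            if a0 == 1 and a1 > a2:
                assert not rot_in_N((a0, a1, a2))
            if a0 > 1:
                assert (not rot_in_N((a0, a1, a2))) == (a1 > 1 and a2 == 1)
```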
\section{Priority Queues}
\label{section:priorityQueues}
Priority queues are similar to stacks and queues, except that the order of elements does not necessarily determine the order they will be removed. When a remove-operation is called, the largest element is always removed.
Consider the following priority queue: $(5, 2, 4)$. If we remove an element from the queue, $(2, 4)$ will remain. If we then add $1$ followed by $6$, followed by another remove, the resulting priority queue will be $(2, 4, 1)$.
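The same example can be replayed with Python's `heapq` module (a sketch; `heapq` implements a min-heap, so values are negated to make each remove extract the largest element, and the internal heap order may differ from the list written in the text):

```python
import heapq

# Negate values so that the min-heap pops the largest element first.
pq = [-x for x in (5, 2, 4)]
heapq.heapify(pq)
largest = -heapq.heappop(pq)       # remove: extracts 5, the largest
assert largest == 5
assert sorted(-x for x in pq) == [2, 4]
heapq.heappush(pq, -1)             # add 1
heapq.heappush(pq, -6)             # add 6
assert -heapq.heappop(pq) == 6     # another remove: 6 is now the largest
assert sorted(-x for x in pq) == [1, 2, 4]
```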
\subsection{Ruleset: Greedy Nim}
\ruleset{Greedy Nim} is a ruleset just like \ruleset{Nim}, except that players must remove sticks from a heap of greatest size~\cite{WinningWays:2001}. We give a formal version in Definition \ref{def:greedyNim}.
\begin{definition}[Greedy Nim]
\label{def:greedyNim}
\ruleset{Greedy Nim} is an impartial ruleset where positions are described by a multiset of non-zero natural numbers, $M$. $M'$ is an option of $M$ exactly when $\exists x \in M: \forall y \in M: x \geq y$ and $\#_M(x) = 1 + \#_{M'}(x)$ and either:
\begin{itemize}
\item $\forall a \in (M \cup M')$ either $x = a$ or $\#_M(a) = \#_{M'}(a)$, or
\item $\exists b \in M \setminus \{x\}: \#_M(b) + 1 = \#_{M'}(b)$ and $\forall a (\neq b) \in (M \cup M')$ either $x = a$ or $\#_M(a) = \#_{M'}(a)$.
\end{itemize}
\end{definition}
This game has already been solved; a polynomial-time algorithm exists to determine the outcome class of the game~\cite{MR2056015}.
\begin{theorem}[Greedy Nim]
A \ruleset{Greedy Nim} position, $G$, is in \ensuremath{\mathcal{P}} exactly when there are an even number of heaps with the greatest size.
\end{theorem}
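The stated outcome rule can be verified by brute force on small positions; the Python sketch below (hypothetical helper names) searches the game tree directly:

```python
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def greedy_in_N(pos):
    """Brute-force outcome; pos is a sorted tuple of nonzero heap sizes."""
    if not pos:
        return False
    m, rest = pos[-1], pos[:-1]          # sticks must come from a largest heap
    opts = {rest}                                              # remove it
    opts |= {tuple(sorted(rest + (b,))) for b in range(1, m)}  # or shrink it
    return any(not greedy_in_N(o) for o in opts)

# Rule: P exactly when the number of maximum-size heaps is even.
for n in range(5):
    for pos in combinations_with_replacement(range(1, 5), n):
        in_P = not greedy_in_N(pos)
        assert in_P == (n == 0 or pos.count(max(pos)) % 2 == 0)
```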
\section{Linked Lists}
\label{section:linkedLists}
A linked list is a data structure with more explicit structure than a stack, queue, or priority queue. Instead of describing which element gets removed, a linked list is a method for organizing elements. Indeed, stacks and queues can both be implemented using linked lists.
Each element in a linked list exists inside a list node, which also includes a reference to the next list node. (We consider only singly-linked lists, meaning each node does not have a reference to the previous node.) In other terms, a linked list is like a directed path graph, with a value at each vertex.
\subsection{Ruleset: Myopic Col}\label{sec:myopiccol}
There are many choices of rulesets that could be used on directed paths. In order to deviate from impartial games, we use a variant of \ruleset{Col}, \ruleset{Myopic Col}.
In \ruleset{Col}, a turn consists of a player choosing an uncolored vertex and painting it their own color (blue or red). However, they are not allowed to choose a vertex adjacent to another vertex of their own color.
\ruleset{Myopic Col} is played on a directed graph instead of an undirected one. When a player chooses a vertex, the only restriction is that the chosen vertex may not have an arc pointing \emph{to} a neighboring vertex of their color. Arcs directed towards the chosen vertex need not be considered.
As with other games discussed, we provide a rigorous definition.
\begin{definition}[Myopic Col]
\label{def:myopicCol}
\ruleset{Myopic Col} is a combinatorial game where positions are described by a directed graph, $G = (V, E)$, and a coloring of vertices either uncolored or painted red or blue, $(c: V \rightarrow \{uncolored, red, blue\})$. Each arc, $(a, b) \in E$ points from $a$ to $b$. Another graph-coloring pair $(G', c')$ is an option of $(G, c)$ for player $A \in \{red, blue\}$ exactly when both
\begin{itemize}
\item $G' = G$, and
\item $\exists v \in V:$
\begin{itemize}
\item $\forall v' \in V \setminus \{v\}: c(v') = c'(v')$, and
\item $c(v) = uncolored$ and $c'(v) = A$ and $\nexists (v,b) \in E: c(b) = A$
\end{itemize}
\end{itemize}
\end{definition}
The positions we consider in this section are on paths. ...
Consider a position on a path, described as a list of the colors. For example, $(blue, red, uncolored, uncolored, blue)$ represents a path with five vertices with arcs going from left to right.
Note that the game on a path with an arc from a colored vertex to another vertex is equivalent to the sum of two paths created by removing that arc. Thus, we must only consider situations where there is at most one colored vertex, and that vertex is at the end of the path.
Before proving results about \ruleset{Myopic Col} on paths, we need a general lemma about games with number and star values:
\begin{lemma}
\label{lem:numbersPlusStar}
$\forall x \in \mathbb{R}:$
\begin{align}
x + * = \{x | x\}\\
x = \{x + * | x + *\}
\end{align}
\end{lemma}
\begin{proof}
First we prove $(1)$:
\begin{align*}
\{ x | x \} & = x \pm 0 & \mbox{(by the definition of switches)} \\
& = x + \{0|0\}\\
& = x + *
\end{align*}
Next we prove $(2)$ by showing that $\{x + * | x + *\} - x = 0$. Notice that either player loses by playing on $\{x + * | x + *\}$, because $x + * - x = * \in \outcomeClass{N}$. However, since $- x$ is a number and $\{x + * | x + *\}$ is not, it is still better for both players to make that losing move. Thus, the sum is a loss for both players and equals $0$.
\end{proof}
\begin{lemma}\label{lem:pathMyopicColValues}
Let $G$ be a Myopic Col game played on a path with $n$ uncolored vertices, possibly followed by a colored vertex, $v$. Then $G =
\begin{cases}
n \times *, & \mbox{no } v\\
(n-1) \times * - 1, & v\mbox{ is blue}\\
(n-1) \times * + 1, & v\mbox{ is red}
\end{cases}$
\end{lemma}
\begin{proof}
We proceed by strong induction on $n$:
Base Case: $n = 1$.
All three cases hold. In the first case, there is only one vertex, which either player may color, so $G = *$, as the lemma states. In the second case, there is a single vertex that only Right can color. Thus, $G = -1$. In the third case, $G = 1$ by the same logic.
Inductive Case: Assume $n > 1$ and that the lemma holds for all $0 < k < n$.
Consider $G$, the game played on a path with $n$ uncolored vertices. The moves for Left can be derived from the inductive hypotheses. Starting by coloring the head, these are:
\begin{align*}
(n-1) \times *, \\
(0 \times * - 1) + (n-2) \times * & = (n-2) \times * - 1, \\
(1 \times * - 1) + (n-3) \times * & = (n-2) \times * - 1, \\
\vdots\\
((n-3) \times * - 1) + 1 \times * & = (n-2) \times * - 1, \\
((n-2) \times * - 1) + 0 \times * & = (n-2) \times * - 1
\end{align*}
Except for the first one, these are all equal to $(n-2) \times * - 1$. Since $(n-2)\times * - 1 < (n-1) \times *$, they are all dominated by that first move. Thus, the best move for Left is to color the head of the path.
The same logic holds to show that Right's best move is also to color the head of the path. Thus, $G = \{(n-1) \times * | (n-1) \times *\} = n \times *$.
Now consider the case where $G$ is a path of $n$ uncolored vertices followed by a single blue vertex. Again consider the moves for Left, starting with playing at the head and continuing down the tail:
\begin{align*}
(n-2) \times * - 1, & \\
(0 \times * - 1) + ((n-3) \times * - 1) & = (n-3) \times * - 2, \\
(1 \times * - 1) + ((n-4) \times * - 1) & = (n-3) \times * - 2, \\
\vdots \\
((n-4) \times * - 1) + (1 \times * - 1) & = (n-3) \times * - 2, \\
((n-3) \times * - 1) + (0 \times * - 1) & = (n-3) \times * - 2
\end{align*}
All but the first simplify to $(n-3) \times * - 2$, worse for Left than the first option, $(n-2) \times * - 1$.
Right has one more option:
\begin{align*}
(n-2) \times * - 1,\\
(0 \times * + 1) + ((n-3) \times * - 1) & = (n-3) \times *, \\
(1 \times * + 1) + ((n-4) \times * - 1) & = (n-3) \times *, \\
\vdots \\
((n-3) \times * + 1) + (0 \times * - 1) & = (n-3) \times *, \\
(n-2) \times * + 1
\end{align*}
All but the first and last simplify to $(n-3) \times *$. $(n-2) \times * - 1 < (n-3) \times * < (n-2) \times * + 1$, so Right's best move is also to $(n-2) \times * - 1$.
Now,
\begin{align*}
G &= \{(n-2) \times * - 1 | (n-2) \times * - 1\} \\
& = (n-2) \times * - 1 + *\\
& = (n-1) \times * - 1
\end{align*}
The same steps can be used to show that a path ending in a red vertex has value $(n-1) \times * + 1$.
\end{proof}
\begin{corollary}
On a path with $n$ uncolored vertices, it is always optimal to color the head.
\end{corollary}
\begin{proof}
For all of our cases in our analysis above, the best move was always to color the vertex at the head.
\end{proof}
Now we can evaluate any path graph (or collection of paths); the total value of the game is given by the following corollary.
\begin{corollary}
\label{corol:pathMyopicCol}
For a graph $G$ consisting only of paths, the value of the \ruleset{Myopic Col} position on $G$ equals $a \times * + b - c$, where:
\begin{itemize}
\item $a$ is the number of uncolored vertices that either have no outgoing arc or have an outgoing arc that leads to another uncolored vertex.
\item $b$ is the number of uncolored vertices with their outgoing arc leading to a red vertex.
\item $c$ is the number of uncolored vertices with their outgoing arc leading to a blue vertex.
\end{itemize}
\end{corollary}
\begin{proof}
We can break each path into sections by removing edges leaving colored vertices. By Lemma \ref{lem:pathMyopicColValues}, this splitting does not change the values of the positions at all. Now the final formula, $a \times * + b - c$, is just the sum of all of the sums generated by using Lemma \ref{lem:pathMyopicColValues} on each piece.
\end{proof}
This formula is easy to evaluate, so we can solve this game efficiently.
\begin{corollary}[\ruleset{Myopic Col} on paths is in \cclass{P}]
The winnability of \ruleset{Myopic Col} when played on path graphs can be determined in \cclass{P}.
\end{corollary}
\begin{proof}
The formula from Corollary \ref{corol:pathMyopicCol} can be evaluated in $O(n)$ time, where $n$ is the number of vertices in the graph.
\end{proof}
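The formula of Corollary \ref{corol:pathMyopicCol} can be cross-checked against a direct game-tree search on a single path. In the sketch below (hypothetical helper names, not code from this paper), the value $n + a \times {*}$ with $n = b - c$ is positive iff $n > 0$, negative iff $n < 0$, and fuzzy with $0$ iff $n = 0$ and $a$ is odd; so the player moving first wins exactly when the value is favorable or fuzzy:

```python
from functools import lru_cache
from itertools import product

def count_abc(path):
    """Count a and n = b - c on a path ('U' = uncolored, 'B', 'R');
    arcs point left-to-right, as in the text."""
    a = b = c = 0
    for i, v in enumerate(path):
        if v != 'U':
            continue
        nxt = path[i + 1] if i + 1 < len(path) else None
        if nxt is None or nxt == 'U':
            a += 1
        elif nxt == 'R':
            b += 1
        else:                          # outgoing arc leads to a blue vertex
            c += 1
    return a, b - c                    # value is (b - c) plus a copies of *

def moves(path, player):
    # A player may not color a vertex whose outgoing arc points at own color.
    for i, v in enumerate(path):
        if v == 'U' and not (i + 1 < len(path) and path[i + 1] == player):
            yield path[:i] + (player,) + path[i + 1:]

@lru_cache(maxsize=None)
def wins_moving_first(path, player):
    other = 'R' if player == 'B' else 'B'
    return any(not wins_moving_first(m, other) for m in moves(path, player))

# Compare the formula's outcome prediction with exhaustive search.
for length in range(6):
    for path in product('URB', repeat=length):
        a, n = count_abc(path)
        assert wins_moving_first(path, 'B') == (n > 0 or (n == 0 and a % 2 == 1))
        assert wins_moving_first(path, 'R') == (n < 0 or (n == 0 and a % 2 == 1))
```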
In the next section, we put \ruleset{Myopic Col} to work again, but on slightly more complex graphs. The computational complexity there is not yet known.
\section{Binary Trees}
\label{section:binary-trees}
In the same way that a linked list is a directed path graph, a binary tree is a directed tree graph with at most two outgoing arcs per vertex. Each node in the tree (each vertex) contains a value in the data structure, as well as up to two references to other nodes.
Binary trees are used to implement quickly-searchable structures on ordered data, as well as heaps, which can be used for fast priority queues.
\subsection{Ruleset: Myopic Col on Trees}
We can reuse the ruleset for \ruleset{Myopic Col} from Section \ref{sec:myopiccol}, and now consider the game played on binary trees. Unlike on paths, \ruleset{Myopic Col} on binary trees has values beyond integers and integers plus star: other dyadic rationals occur, such as $1/2$ and $-3/4$.
It seems natural that in a tree with an uncolored node as the root with two uncolored nodes as children of the root, coloring the root is the best move for both players, just as for a path that begins with two uncolored vertices.
\begin{conjecture}
Consider a directed tree $T_0$ where the root vertex, $r$, is uncolored, and the child trees of $r$, $T_1$ and $T_2$, are both non-null and have uncolored roots. For any tree $T$, let $G(T)$ be the \ruleset{Myopic Col} position on $T$. Then $G(T_0) = * + G(T_1) + G(T_2)$.
\end{conjecture}
\section{Graphs}
\label{section:graphs}
Graphs covered in a data structures course are exactly the same as mathematical graphs with vertices, edges (or directed arcs), and values embedded in the vertices and/or edges. There is a wide host of rulesets on graphs (\ruleset{Col}~\cite{ONAG:2001}, \ruleset{Clobber}, \ruleset{Hackenbush}~\cite{WinningWays:2001}, \ruleset{NimG}~\cite{DBLP:journals/tcs/Fukuyama03}, and more). Some games, such as \ruleset{Undirected Vertex Geography}, are in \cclass{P}~\cite{DBLP:journals/tcs/FraenkelSU93}; others, such as \ruleset{Snort}, are \cclass{PSPACE}-complete~\cite{DBLP:journals/jcss/Schaefer78}.
\section{Conclusions}
This paper presents game rulesets that rely heavily on different common data structures. When relevant rulesets did not exist, new rulesets have been created: \ruleset{Tower Nim} and \ruleset{Myopic Col}.
We show polynomial-time algorithms that solve both \ruleset{Tower Nim} and \ruleset{Myopic Col} on paths.
\section{Future Work}
Three open problems exist concerning the computational difficulty of games presented here: \ruleset{Rotisserie Nim}, \ruleset{Antonim}, and \ruleset{Myopic Col} on binary trees.
Although we have many results about \ruleset{Rotisserie Nim} positions, we do not yet have an efficient algorithm for the general case.
\begin{openProblem}[Rotisserie Nim]
What is the computational complexity of \ruleset{Rotisserie Nim}?
\end{openProblem}
\ruleset{Antonim} is a classic game that has resisted a solution. Recent work reduces the problem to a dynamic programming form~\cite{arXiv:1506.01042v1}. Unfortunately, since integers can be represented with a logarithmic number of bits for even a small number of heaps, the size of the table can be exponential in the number of bits needed to describe the game. Therefore, it remains unknown whether a polynomial-time algorithm to determine the outcome class of an \ruleset{Antonim} position exists.
\begin{openProblem}[Antonim]
What is the computational complexity of \ruleset{Antonim}?
\end{openProblem}
For \ruleset{Myopic Col}, a polynomial-time solution exists on paths, but this does not immediately yield a solution on binary trees.
\begin{openProblem}[Binary Tree Myopic Col]
What is the computational complexity of \ruleset{Myopic Col} played on binary trees?
\end{openProblem}
\bibliographystyle{plain}
\section{Introduction}
In general, statistical ensembles are constructed
to be equivalent in the thermodynamic limit, but there are exceptions to this
rule.
This paper deals with phenomena arising when this equivalence breaks down.
Pairs of conjugate ensembles are
obtained by controlling the system either by fixing the value of some
extensive quantity, through appropriate isolating walls, or by putting
it in contact with a reservoir of that same quantity. A familiar
example, which will be of central interest in the following,
is that of the canonical and grand canonical pair resulting from fixing either the
density of particles or the chemical potential, while keeping the
system thermalized.
Basically, equivalence holds in situations
in which correlations are short ranged. Then, the central limit theorem
guarantees that fluctuations of extensive quantities become negligible in the thermodynamic
limit, so that it does not matter whether the system is controlled by enforcing a rigid constraint or through contact with a reservoir~\cite{Huang,Pathria,Puglisi}.
By the same token, lack of equivalence is to be expected when correlations are long
ranged. This is a rarer occurrence, but a very
interesting one, since new physics emerges on switching from one
ensemble to the other within a conjugate pair.
Best known and
recently much studied is the case of systems with long-range
interactions~\cite{Campa}.
There is one instance of nonequivalence which stands apart: the one which
materializes when an
ideal Bose gas (IBG) is driven through Bose-Einstein condensation (BEC).
In the canonical ensemble (CE) fluctuations of the condensate behave normally
while in the grand canonical ensemble (GCE) they persist even in the thermodynamic limit
~\cite{Pathria,Ziff}. Although this is an exact result, it is somewhat puzzling because,
dealing with an ideal gas,
it is not at all obvious where the long-range correlations responsible for the
nonequivalence could come from.
An unbiased attitude would advise taking the facts at face value and inquiring into possibly different mechanisms
underlying BEC in the two ensembles.
Instead, due to a widespread aversion to macroscopic fluctuations of an extensive quantity, which are not suppressed by lowering the temperature, the GCE result has been
variously regarded as unacceptable~\cite{Holthaus},
unphysical~\cite{Fujiwara,Ziff,Stringari,Scully} or even
wrong~\cite{Yukalov} and is commonly referred to as the {\it grand
canonical catastrophe}.
The need to reconsider afresh this matter has been
prompted by the recent observation of BEC in the lab~\cite{Klaers,Schmitt} in a gas of photons
under grand canonical conditions, which has changed the outlook by
producing evidence
for the existence of the macroscopic fluctuations of the condensate.
Therefore, after reckoning with the absence of any catastrophe,
the challenge is to uncover the mechanism responsible for the nonequivalence.
Due to the fundamental character of the question posed, we shall leave the experiment in the background and we shall explore the basic issues in the simplest possible context of the uniform IBG in a box of volume $V$, aiming primarily to outline the conceptual framework
needed to approach this interesting and multifaceted problem.
\section{The Problem}
At the
phenomenological level the mechanism of BEC appears to be the
same in the CE and in the GCE.
Denoting by $d$, $d^*$ and $d_0$ the total density, the density in the excited states and the density
in the ground state, respectively, from the obvious identity $d=d^* +
d_0$ follows the sum rule which must be satisfied by the average quantities
irrespective of the choice of the ensemble
\be
\rho = \langle d^* \rangle + \langle d_0 \rangle,
\label{sumrule.1}
\ee
where $\rho$ stands for $\langle d \rangle$ and the brackets for the average over
either ensemble. The condensate density $\langle d_0 \rangle$
is called the BEC order parameter.
Now, for space dimensionality $d > 2$ and in the thermodynamic limit, $\langle d^* \rangle$ is bounded above by a
finite critical value $\rho_c$~\cite{Huang,Pathria}. Consequently, keeping $T$ fixed and using $\rho$ as control parameter, from Eq.~(\ref{sumrule.1}) immediately follows the density-driven BEC
\be
\langle d_0 \rangle = \left \{ \begin{array}{ll}
0, \;\; \mbox{for} \;\; \rho \leq \rho_c,\\
\rho - \rho_c, \;\; \mbox{for} \;\; \rho > \rho_c,
\end{array}
\right .
\label{dens.1}
\ee
which, we emphasize, holds irrespective of the ensemble.
Thus, as far as $\langle d_0 \rangle$ is concerned,
CE and GCE are equivalent. However, a striking difference between the two emerges when the fluctuations of $d_0$ are considered, since in the condensed phase, as previously anticipated, one has~\cite{Ziff}
\be
\left \langle \left (d_0 - \langle d_0 \rangle \right )^2 \right \rangle = \left \{ \begin{array}{ll}
0, \;\; \mbox{in the CE},\\
\langle d_0 \rangle^2, \;\; \mbox{in the GCE},
\end{array}
\right .
\label{dens.2}
\ee
i.e. normal behavior in the CE and macroscopic fluctuations in the GCE.
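The GCE entry admits an elementary check (a textbook single-mode computation, with the ground-state energy set to zero, rather than a result specific to this paper): in the GCE the ground-state occupation number $n_0$ of the IBG is geometrically distributed, $P(n_0) \propto e^{\beta \mu n_0}$, so that
\be
\left \langle \left (n_0 - \langle n_0 \rangle \right )^2 \right \rangle = \langle n_0 \rangle^2 + \langle n_0 \rangle.
\ee
Dividing by $V^2$ and taking the thermodynamic limit at fixed $\langle d_0 \rangle = \langle n_0 \rangle/V$ reproduces the macroscopic fluctuation $\langle d_0 \rangle^2$ in the GCE line of Eq.~(\ref{dens.2}).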
The crux of the matter is that at this level of observation no insight can be obtained
as to why fluctuations ought to behave so differently in the two ensembles.
The point of view that we propose in this paper is that the picture is rationalized by
shifting the description to the finer and underlying level of the field-theoretic microscopic degrees of freedom, which however are not directly observable.
In order to clarify the interplay of the different levels of description, in the next
paragraph we shall exploit the analogy with magnetic systems,
where a quite similar and well understood situation arises.
\section{Spherical and Mean-Spherical Model}
The IBG in the CE and in the GCE
is well known~\cite{KT,Kac2,Cannon} to be closely related
to the spherical and the mean-spherical models of magnetism.
Let ${\cal H}(\boldsymbol{\varphi}) = \int_V d\vec r \, \varphi(\vec r) \left
(-\frac{1}{2}\nabla^2 \right )\varphi(\vec r)$ be the energy function of a classical scalar paramagnet \cite{Ma} in the volume $V$, where $\boldsymbol{\varphi}$ stands for a configuration of the local unbounded spin variable $\varphi(\vec r)$.
Due to its bilinear character the above Hamiltonian can be diagonalized by Fourier transform ${\cal H} = \frac{1}{V}\sum_{\vec k} k^2|\varphi_{\vec k}|^2$.
In the spherical model (SM) of Berlin and Kac~\cite{BK} a coupling among the modes is
induced by the imposition of an overall constraint on
the square magnetization
${\cal S}(\boldsymbol{\varphi}) = \int_V d \vec r \,
\varphi^2(\vec r) = \frac{1}{V}\sum_{\vec k} |\varphi_{\vec k}|^2$. Then, in thermal equilibrium the statistical ensemble reads
\be
P_\textrm{SM}(\boldsymbol{\varphi}|\sigma) = \frac{1}{Z_\textrm{SM}}
e^{-\beta {\cal H}(\boldsymbol{\varphi})}
\, \delta \left (\mathit{s}(\boldsymbol{\varphi})-\sigma \right ),
\label{Gauss.4}
\ee
where $Z_\textrm{SM}$ is the partition function,
$\mathit{s}(\boldsymbol{\varphi}) = \frac{1}{V}{\cal S}(\boldsymbol{\varphi})$
the square magnetization density and
$\sigma$ a positive number, which usually is set $\sigma = 1$, but here will be
kept free to vary as a control parameter. In the mean-spherical model (MSM)~\cite{LW,KT} the constraint is imposed in the mean: An ${\cal S}$-dependent exponential bias is introduced in place of the $\delta$ function
\be
P_\textrm{MSM}(\boldsymbol{\varphi}|\sigma) = \frac{1}{Z_\textrm{MSM}}e^{-\beta [{\cal H}(\boldsymbol{\varphi}) +\frac{\kappa}{2} {\cal S}(\boldsymbol{\varphi})]},
\label{Gauss.3}
\ee
and the intensive parameter $\kappa$ conjugate to ${\cal S}$ must be adjusted so as to satisfy the requirement
\be
\langle \mathit{s}(\boldsymbol{\varphi}) \rangle_\textrm{MSM} = \sigma.
\label{msph.1}
\ee
Although it is common usage to refer to these as models, it should be clear from Eqs.~(\ref{Gauss.4}) and (\ref{Gauss.3}) that we are dealing with two conjugate ensembles, distinguished by whether the density $\mathit{s}$ is conserved or allowed to fluctuate. Separating the excitations from the ground-state contribution, $\mathit{s} = \mathit{s}^*+ \mathit{s}_0$, where $\mathit{s}^* = \frac{1}{V^2}\sum_{\vec k \neq 0} |\varphi_{\vec k}|^2$ and $\mathit{s}_0 = \frac{1}{V^2} \varphi_0^2$,
taking the average and using the constraint
$\langle \mathit{s} \rangle = \sigma$, independently from the choice of the
model there follows the sum rule analogous to Eq.~(\ref{sumrule.1})
\be
\sigma = \langle \mathit{s}^* \rangle + \langle \mathit{s}_0 \rangle.
\label{sumrule.2}
\ee
Therefore, the variables
$\mathit{s}, \mathit{s}^*, \mathit{s}_0$ and $\sigma$ correspond to the IBG ones
$d, d^*,d_0$ and $\rho$, with the important difference that in
the present context these are composite variables, built in terms of the microscopic
set of the magnetization components $[\varphi_{\vec k}]$.
Furthermore, also in this case,
for $d > 2$ and in the thermodynamic limit, the excitation contribution $\langle \mathit{s}^* \rangle$ is bounded above by a finite critical value $\sigma_c$; see Appendices~\ref{appA} and~\ref{appB} for details.
Hence, keeping $T$ fixed and varying $\sigma$, from Eq.~(\ref{sumrule.2}) there follows
\be
\langle \mathit{s}_0 \rangle = \left \{ \begin{array}{ll}
0, \;\; \text{for} \;\; \sigma \leq \sigma_c,\\
\sigma - \sigma_c, \;\; \text{for} \;\; \sigma > \sigma_c,
\end{array}
\right .
\label{BEC.001}
\ee
showing that $\langle \mathit{s}_0 \rangle$ behaves like the BEC order parameter
and that, as far as $\langle \mathit{s}_0 \rangle$ is concerned, the two models are equivalent.
However, at the microscopic level
a different scenario opens up, since
there is no unique way to form a finite expectation $\langle \mathit{s}_0 \rangle$.
Let us introduce the probability that $\mathit{s}$
takes the value $\sigma^{\prime}$ in the MSM, given by
$K_\textrm{MSM}(\sigma^{\prime}|\sigma) = \int d\boldsymbol{\varphi}
\, P_\textrm{MSM}(\boldsymbol{\varphi}|\sigma)
\delta \left (\mathit{s}(\boldsymbol{\varphi})-\sigma^{\prime} \right )$. Then,
simply as a consequence of the definitions,
the distributions~(\ref{Gauss.4}) and~(\ref{Gauss.3}) are related by
\be
P_\textrm{MSM}(\boldsymbol{\varphi}|\sigma) = \int_0^\infty d\sigma^{\prime} \,
P_\textrm{SM}(\boldsymbol{\varphi}|\sigma^{\prime})
K_\textrm{MSM}(\sigma^{\prime}|\sigma).
\label{kac.1}
\ee
The kernel has been worked out by
Kac and Thompson~\cite{KT}, obtaining
$K_\textrm{MSM}(\sigma^{\prime}|\sigma) = \delta(\sigma^{\prime} - \sigma)$ for $\sigma < \sigma_c$, which implies that the two distributions coincide and, therefore, that the two models are equivalent below $\sigma_c$. Conversely, when $\sigma$ is above $\sigma_c$
the kernel vanishes for $\sigma^{\prime} < \sigma_c$, while for $\sigma^{\prime} > \sigma_c$ it takes the spread-out form
\be
K_\textrm{MSM}(\sigma^{\prime}|\sigma) =
\frac{e^{-\frac{\sigma^{\prime} - \sigma_c}{2(\sigma - \sigma_c)}}}
{\sqrt{2\pi (\sigma^{\prime} - \sigma_c)(\sigma - \sigma_c)}},
\label{kac.2}
\ee
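After the shift $\sigma^{\prime} = \sigma_c + u$, the kernel~(\ref{kac.2}) is the density of $(\sigma-\sigma_c)\,\chi^2_1$. A short numerical sketch (assuming NumPy/SciPy; the values of $\sigma_c$ and $\sigma$ are illustrative) confirms that it is normalized on $(\sigma_c,\infty)$ and that its mean saturates the constraint $\langle \sigma^{\prime} \rangle = \sigma$:

```python
import numpy as np
from scipy.integrate import quad

sigma_c, sigma = 1.0, 2.5            # illustrative values with sigma > sigma_c
a = sigma - sigma_c

# Substituting sigma' = sigma_c + v^2 removes the integrable 1/sqrt singularity:
# K(sigma'|sigma) d(sigma')  ->  2 exp(-v^2/(2a)) / sqrt(2 pi a) dv
kernel_sub = lambda v: 2.0 * np.exp(-v**2 / (2 * a)) / np.sqrt(2 * np.pi * a)

norm, _ = quad(kernel_sub, 0, np.inf)
mean, _ = quad(lambda v: (sigma_c + v**2) * kernel_sub(v), 0, np.inf)

assert abs(norm - 1.0) < 1e-8     # the kernel is a probability density
assert abs(mean - sigma) < 1e-6   # <sigma'> = sigma_c + (sigma - sigma_c) = sigma
```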
revealing nonequivalence.
In the following we shall restrict
to the $\sigma > \sigma_c$ domain, where nontrivial behavior is expected.
Integrating out the $\vec k \neq 0$ modes from Eq.~(\ref{kac.1}),
an identical relation between the marginal probabilities of $\psi_0 = \frac{1}{V}\varphi_0$
is obtained.
On the left-hand side there appears the Gaussian distribution
$P_\textrm{MSM}(\psi_0|\sigma) \propto \exp \{-\beta \kappa V \psi_0^2 /2 \}$,
as can be verified by inspection from Eq.~(\ref{Gauss.3}),
since $P_\textrm{MSM}(\boldsymbol{\varphi}|\sigma)$
factorizes in Fourier space.
From this it follows that $\langle \mathit{s}_0 \rangle = (\beta \kappa V)^{-1}$.
So, from the second line of Eq.~(\ref{BEC.001}) we get
\be
\kappa = 1/[\beta V (\sigma - \sigma_c)],
\label{kappa.001}
\ee
which implies
\be
P_\textrm{MSM}(\psi_0|\sigma) =
\frac{e^{-\frac{1}{2(\sigma-\sigma_c)}\psi_0^2}}{\sqrt{2\pi (\sigma-\sigma_c)}}.
\label{ensmbl.001}
\ee
Hence, plugging in the explicit expression of $K_\textrm{MSM}(\sigma^{\prime}|\sigma)$
it is not difficult to verify that Eq.~(\ref{kac.1}) is satisfied by the ansatz
\be
P_\textrm{SM}(\psi_0|\sigma) =
\frac{1}{2}[\delta (\psi_0 - m_-) +
\delta (\psi_0 - m_+)],
\label{ensmbl.010}
\ee
where $m_\pm = \pm \sqrt{\sigma - \sigma_c}$
is the spontaneous magnetization density which would be obtained, for instance, by switching off an external magnetic field~\cite{BK,KT}.
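As a consistency check of Eq.~(\ref{kac.1}) at the level of the $\psi_0$ marginals, one can sample the two-step mixture numerically: draw $\sigma^{\prime}$ from the kernel~(\ref{kac.2}), then $\psi_0$ from the two-peak SM distribution~(\ref{ensmbl.010}); the result should reproduce the MSM Gaussian~(\ref{ensmbl.001}). A Monte Carlo sketch (NumPy assumed, parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_c, sigma = 1.0, 2.5
a = sigma - sigma_c
N = 10**6

# Step 1: draw sigma' from the kernel: sigma' = sigma_c + a Z^2, Z ~ N(0,1)
z = rng.standard_normal(N)
sigma_prime = sigma_c + a * z**2

# Step 2: draw psi0 from P_SM(.|sigma'): psi0 = +/- sqrt(sigma'-sigma_c), prob 1/2 each
signs = rng.choice([-1.0, 1.0], size=N)
psi0 = signs * np.sqrt(sigma_prime - sigma_c)

# The mixture must reproduce the MSM Gaussian of variance a = sigma - sigma_c
assert abs(psi0.mean()) < 5e-3
assert abs(psi0.var() - a) < 2e-2
kurt = (psi0**4).mean() / psi0.var()**2 - 3.0   # Gaussian excess kurtosis is zero
assert abs(kurt) < 5e-2
```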
Thus, we have two quite different
distributions, as clearly illustrated by the plots in Fig.~\ref{fig1}. \begin{figure}[!tb]
\centering
\includegraphics[width=0.8\columnwidth,clip=true]{fig_delta2.eps}
\caption{Distributions of $\psi_0$ in the MSM model (a) and in the SM model (b)
for $\sigma > \sigma_c$. The spikes in the bottom panel stand for $\delta$ functions.}
\label{fig1}
\end{figure}
We are now in a position to draw some conclusions. The BEC-like order parameter
$\langle \mathit{s}_0 \rangle$ can be computed microscopically as
the average composite variable $\langle \psi_0^2 \rangle$. Then,
it is straightforward to check from
Eqs.~(\ref{ensmbl.001}) and~(\ref{ensmbl.010}) that
Eq.~(\ref{BEC.001}) is satisfied in both cases. However, it is enough to take a look at
Fig.~\ref{fig1} to realize that the numerically identical result
$\langle \psi_0^2 \rangle = (\sigma - \sigma_c)$ for $\sigma > \sigma_c$ in the two models
stands for two different phenomena. The double peaked distribution of the SM case
is the familiar one for a ferromagnet in the magnetized phase,
each peak being associated with a pure state and with the up-down symmetry of the model
spontaneously broken. Namely, the distribution is the even mixture of these two
pure states. This means that in the SM the BEC-like transition observed at the level of
$\langle \mathit{s}_0 \rangle$ is the manifestation of an underlying {\it ordering} transition,
and that the BEC order parameter is
the square of the spontaneous magnetization, i.e.
$\langle \psi_0^2 \rangle = m_\pm^2$. By contrast,
in the MSM case we have the opposite situation, since $\langle \psi_0^2 \rangle$ is the variance
of a broad Gaussian distribution centered on the origin. Therefore there is no ordering and no breaking of the symmetry. In this case the BEC-like transition undergone by $\langle \mathit{s}_0 \rangle$ is the manifestation of the microscopic
variable $\psi_0$ developing finite fluctuations.
The reason for this can be grasped intuitively.
In the SM, due to the sharp constraint, there
is enough nonlinearity to produce ordering.
In the MSM framework this cannot be achieved, since the statistics are Gaussian. Then,
the only means to build up the finite value of $\langle \mathit{s}_0 \rangle$
needed to saturate the sum rule~(\ref{sumrule.2}) above $\sigma_c$
is by growing fluctuations in the {\it single} degree of freedom $\psi_0$. Elsewhere~\cite{EPL,CCZ,Zannetti,Merhav,Marsili}, this type of transition, characterized by the fluctuations of an extensive quantity condensing into {\it one} microscopic component, has been referred to as condensation of fluctuations.
The phenomenological picture is completed
by the fluctuations of $\mathit{s}_0$ itself which, as it follows easily from Eqs.~(\ref{ensmbl.001}) and~(\ref{ensmbl.010}), are given by
\be
\left \langle \left (\mathit{s}_0 - \langle \mathit{s}_0 \rangle \right )^2 \right \rangle = \left \{ \begin{array}{ll}
0, \;\; \text{in the SM},\\
2 \langle \mathit{s}_0 \rangle^2, \;\; \text{in the MSM}.
\end{array}
\right .
\label{flcts.2}
\ee
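The two lines of Eq.~(\ref{flcts.2}) follow from elementary moments: for $\psi_0\sim N(0,a)$ with $a=\sigma-\sigma_c$ one has $\langle\psi_0^4\rangle=3a^2$, hence $\mathrm{var}(\mathit{s}_0)=2a^2=2\langle \mathit{s}_0\rangle^2$, while in the SM $\mathit{s}_0=a$ is sharp. A quick numerical verification (SciPy assumed, $a$ illustrative):

```python
import numpy as np
from scipy.integrate import quad

a = 1.5  # a = sigma - sigma_c > 0, illustrative

# MSM: psi0 is Gaussian with variance a, Eq. (ensmbl.001); s0 = psi0^2
gauss = lambda x: np.exp(-x**2 / (2 * a)) / np.sqrt(2 * np.pi * a)
s0_mean, _ = quad(lambda x: x**2 * gauss(x), -np.inf, np.inf)
s0_sq,   _ = quad(lambda x: x**4 * gauss(x), -np.inf, np.inf)
var_msm = s0_sq - s0_mean**2

assert abs(s0_mean - a) < 1e-8          # <s0> = sigma - sigma_c, Eq. (BEC.001)
assert abs(var_msm - 2 * a**2) < 1e-6   # MSM line of Eq. (flcts.2)

# SM: psi0 = +/- sqrt(a), Eq. (ensmbl.010), so s0 = a identically:
# the same <s0>, but zero variance, the SM line of Eq. (flcts.2)
assert abs(a - s0_mean) < 1e-8
```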
Comparing this with Eq.~(\ref{dens.2}), the analogy is evident. However, no catastrophic behavior can now be envisaged,
because the fluctuations of $\mathit{s}_0$ are trivially a consequence of the different microscopic statistics
in the two models.
Having analysed how the nonequivalence unfolds, the remaining task is to clarify where it originates from, which ultimately must lie in the presence of long-range correlations.
The explanation is that in the MSM the parameter $\kappa$ is related to the correlation length $\xi$
by $\kappa = \xi^{-2}$~\cite{Ma} and from Eq.~(\ref{kappa.001}) we see that
in the thermodynamic limit
$\kappa$ vanishes like $1/V$ when $\sigma$ is fixed above $\sigma_c$.
Therefore, in the entire condensed phase the MSM is critical, while the SM is not.
Hence, the lack of equivalence. We emphasize that the onset of these critical correlations
in the MSM is the unifying thread behind the BEC-like transition accompanied by macroscopic fluctuations
of $\mathit{s}_0$.
\section{Back to the IBG}
We may now go back to the main topic of the IBG
with the advantage of hindsight, since we know what to look for: The microscopic
variables underlying the phenomenological level,
in terms of which we expect to expose both the different mechanisms of BEC
in the CE and GCE and the nonequivalence cause. This is accomplished by introducing the creation and destruction operators and by using the representation of the density matrix in the associated coherent state basis.
Let us first diagonalize the energy and number operators by Fourier transform ${\cal H} = \sum_{\vec k} \epsilon_{\vec k} a_{\vec k}^\dagger a_{\vec k}$ and ${\cal N} = \sum_{\vec k} a_{\vec k}^\dagger a_{\vec k}$, where $\epsilon_{\vec k}$ is
the single particle energy. In the
Glauber-Sudarshan P-representation~\cite{Glauber,Sudarshan} the density matrix is given by $D(\rho) = \int d^2\boldsymbol{\alpha} \, P(\boldsymbol{\alpha}|\rho)
\ketbra{\boldsymbol{\alpha}}{\boldsymbol{\alpha}}$,
where $\ket{\boldsymbol{\alpha}}$
are product states
$\prod_{\vec k} \ket{\alpha_{\vec k}}$ and the $\vec k$-mode factor $\ket{\alpha_{\vec k}}$ is the eigenvector of the annihilation operator $a_{\vec k}\ket{\alpha_{\vec k}}=\alpha_{\vec k} \ket{\alpha_{\vec k}}$ with complex eigenvalue $\alpha_{\vec k}$.
Then, in the GCE the weight function reads~\cite{Glauber}
\be
P_\textrm{GCE}(\boldsymbol{\alpha}|\rho) =
\prod_{\vec k} \frac{1}{\pi \langle n_{\vec k} \rangle} \exp \left \{-\frac{|\alpha_{\vec k}|^2}
{\langle n_{\vec k} \rangle} \right \},
\label{PGCE.1}
\ee
where $\langle n_{\vec k} \rangle = [e^{\beta(\epsilon_{\vec k} - \mu)}-1]^{-1}$ is the usual Bose average occupation number of the state $\ket{\vec k}$~\cite{Huang} and $\mu$ stands for
the chemical potential.
Using the identity
$\langle |\alpha_{\vec k}|^2 \rangle = \langle n_{\vec k} \rangle$, which easily follows
from Eq.~(\ref{PGCE.1}), the equation fixing $\mu$ for the given value of $\rho$
reads $\frac{1}{V}\sum_{\vec k} \langle |\alpha_{\vec k}|^2 \rangle
= \rho$. Since this is nothing but Eq.~(\ref{sumrule.1}), we may
write $d = \frac{1}{V}\sum_{\vec k} |\alpha_{\vec k}|^2$ and, consequently,
$d_0 = |\eta_0|^2$, after setting $\eta_0 = \frac{1}{\sqrt{V}}\alpha_0$.
This allows us to identify $[\alpha_{\vec k}]$ as the sought-for set of microscopic variables
analogous to $[\varphi_{\vec k}]$.
Following the magnetic example, we must focus on the statistics of
the zero component, keeping in mind, however, that it is now a complex quantity
$\eta_0 = |\eta_0|e^{i\theta}$.
The starting point is the relation between ensembles analogous
to Eq.~(\ref{kac.1}) (see Appendices~\ref{appC} and~\ref{appD})
\be
P_\textrm{GCE}(\boldsymbol{\alpha}|\rho) = \int_0^\infty d\rho^{\prime} \,
P_\textrm{CE}(\boldsymbol{\alpha}|\rho^{\prime})
K_\textrm{GCE}(\rho^{\prime}|\rho),
\label{kac.10}
\ee
where
$K_\textrm{GCE}(\rho^\prime|\rho)$ is the probability in the GCE
that the density takes the value $\rho^\prime$. This is known as the Kac function~\cite{Ziff}, whose
form is similar to that of Eq.~(\ref{kac.2}). Namely,
$K_\textrm{GCE}(\rho^{\prime}|\rho) = \delta(\rho^{\prime} - \rho)$ for $\rho < \rho_c$, while when $\rho$ is above $\rho_c$ it vanishes
for $\rho^{\prime} < \rho_c$ and is given by
\be
K_\textrm{GCE}(\rho^{\prime}|\rho) =
\frac{e^{-\frac{\rho^{\prime} - \rho_c}{\rho - \rho_c}}}
{\rho - \rho_c}, \quad \text{for} \quad \rho^{\prime} > \rho_c.
\label{kac.5}
\ee
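Like the magnetic kernel~(\ref{kac.2}), the Kac function above is a normalized density, here a simple exponential in $\rho^{\prime}-\rho_c$ with mean $\rho-\rho_c$, so that $\langle \rho^{\prime} \rangle = \rho$. A minimal check (SciPy assumed, values illustrative):

```python
import numpy as np
from scipy.integrate import quad

rho_c, rho = 1.0, 2.5                  # illustrative, with rho > rho_c
b = rho - rho_c

K = lambda rp: np.exp(-(rp - rho_c) / b) / b   # Eq. (kac.5)

norm, _ = quad(K, rho_c, np.inf)
mean, _ = quad(lambda rp: rp * K(rp), rho_c, np.inf)

assert abs(norm - 1.0) < 1e-8
assert abs(mean - rho) < 1e-6          # <rho'> = rho, consistent with <d> = rho
```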
The relation between the $\eta_0$ marginal distributions
is then obtained by integrating out
the $\vec k \neq 0$ modes $P_\textrm{GCE}(\eta_0|\rho) = \int_0^\infty d\rho^{\prime} \,
P_\textrm{CE}(\eta_0|\rho^{\prime}) \,
K_\textrm{GCE}(\rho^{\prime}|\rho)$. Inserting on the left-hand side the contribution
of the $\vec k = 0$ factor of Eq.~(\ref{PGCE.1})
\be
P_\textrm{GCE}(\eta_0|\rho) =
\frac{V}{\pi \langle n_{0} \rangle} \exp \left \{-\frac{V|\eta_0|^2}
{\langle n_{0} \rangle} \right \},
\label{PGCE.2}
\ee
and substituting for $K_\textrm{GCE}(\rho^{\prime}|\rho)$ the above expression, the equation
is solved by the ansatz
\be
P_\textrm{CE}(\eta_0|\rho) =
\frac{1}{\pi} \, \delta (|\eta_0|^2 - (\rho - \rho_c)).
\label{CE.010}
\ee
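The ansatz~(\ref{CE.010}) can be checked by Monte Carlo in the same way as in the magnetic case: draw $\rho^{\prime}$ from the exponential Kac function~(\ref{kac.5}), then $\eta_0$ with fixed modulus $\sqrt{\rho^{\prime}-\rho_c}$ and random phase; the mixture must reproduce the complex Gaussian~(\ref{PGCE.2}). A sketch (NumPy assumed, parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
rho_c, rho = 1.0, 2.5
b = rho - rho_c
N = 10**6

# Step 1: draw rho' from the Kac function: rho' - rho_c ~ Exp(mean b)
rho_prime = rho_c + rng.exponential(b, size=N)

# Step 2: draw eta0 from P_CE(.|rho'): fixed modulus, uniform phase
theta = rng.uniform(0, 2 * np.pi, size=N)
eta0 = np.sqrt(rho_prime - rho_c) * np.exp(1j * theta)

# The mixture must reproduce the complex Gaussian: |eta0|^2 ~ Exp(mean b)
assert abs((np.abs(eta0)**2).mean() - b) < 1e-2               # Eq. (A.2b)
assert abs(np.abs(eta0).mean() - 0.5 * np.sqrt(np.pi * b)) < 5e-3  # Eq. (A.2)
# Real and imaginary parts carry equal variance b/2, as for a complex Gaussian
assert abs(eta0.real.var() - b / 2) < 1e-2
assert abs(eta0.imag.var() - b / 2) < 1e-2
```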
\begin{figure}[!tb]
\centering
\includegraphics[width=0.8\columnwidth,clip=true]{fig_delta3.eps}
\caption{Distributions of $|\eta_0|$ in the GCE (a) and in the CE (b)
for $\rho > \rho_c$. The spike in the bottom panel stands for the $\delta$
function distribution.}
\label{fig2}
\end{figure}
From the plot of the two distributions~(\ref{PGCE.2}) and~(\ref{CE.010}) in Fig.~\ref{fig2},
we see
by inspection that we are confronted with a situation qualitatively
similar to the one in the magnetic case.
In both ensembles $|\eta_0|$ develops a nonobservable finite expectation value
\be
\langle |\eta_0| \rangle = \left \{ \begin{array}{ll}
\sqrt{\rho - \rho_c}, \;\; \text{CE},\\
\frac{1}{2}\sqrt{\pi(\rho - \rho_c)}, \;\; \text{GCE}.
\end{array}
\right .
\label{A.2}
\ee
The observable
BEC order parameter exhibits the same numerical value as
in Eq.~(\ref{dens.1})
\be
\langle d_0 \rangle = \langle |\eta_0|^2 \rangle = \left \{ \begin{array}{ll}
(\rho - \rho_c), \;\; \text{CE},\\
(\rho - \rho_c), \;\; \text{GCE},
\end{array}
\right .
\label{A.2b}
\ee
which is achieved through fluctuations in the GCE and without fluctuations in
the CE
\be
\left \langle \left (|\eta_0| - \langle |\eta_0| \rangle \right )^2 \right \rangle = \left \{ \begin{array}{ll}
0, \;\; \text{CE}, \\
(1-\pi/4)(\rho - \rho_c), \;\; \text{GCE}.
\end{array}
\right .
\label{A.3}
\ee
This means that the sum rule~(\ref{sumrule.1}) in the CE is saturated by fixing the modulus
to the precise finite value
$|\eta_0| = \sqrt{\rho - \rho_c}$. Because of
this freezing of $|\eta_0|$, BEC in the CE fits into the scheme of an ordering
transition akin to the ferromagnetic transition in the SM.
Conversely, BEC in the GCE does not take place through ordering.
Rather, the saturation of the sum rule
is achieved by growing the macroscopic fluctuations of $|\eta_0|$, as Eq.~(\ref{A.3}) shows.
Therefore, in this case BEC fits into the scheme of the condensation
transition. Ordering is ruled out because the width of the probability distribution persists in the thermodynamic limit. Moreover, assuming
$\epsilon_k \sim k^{\alpha}$, where the power $\alpha$ depends on the dispersion relation,
for small $k$ and small $\mu$ we may approximate
$\langle n_{\vec k} \rangle^{-1} \simeq \beta [ k^{\alpha} - \mu]$ and inserting this into
Eq.~(\ref{PGCE.1}) we have that the chemical potential, like $\kappa$ in the preceding
case, is connected to the correlation length
by $-\mu=\xi^{-\alpha}$. Since the formation of the condensed phase in the $\textrm{GCE}$ requires $\mu$
to vanish in the
thermodynamic limit~\cite{Huang}, we have that
the condensed phase is critical throughout in the GCE
but not in the CE. This explains
the origin of nonequivalence which, as in the magnetic case,
is not revealed by the BEC order parameter
but emerges only at the level of the higher cumulant
\be
\left \langle \left (|\eta_0|^2 - \langle |\eta_0|^2 \rangle \right )^2 \right \rangle = \left \{ \begin{array}{ll}
(\rho - \rho_c)^2, \;\; \text{GCE},\\
0, \;\; \text{CE}.
\end{array}
\right .
\label{A.4}
\ee
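All the GCE entries of Eqs.~(\ref{A.2})--(\ref{A.4}) are moments of an exponentially distributed $|\eta_0|^2$ with mean $b=\rho-\rho_c$, for which $\langle |\eta_0|^k \rangle = b^{k/2}\,\Gamma(1+k/2)$; the CE entries are trivial since $|\eta_0|$ is sharp. A compact check ($b$ illustrative):

```python
from math import gamma, sqrt, pi

b = 1.5  # b = rho - rho_c, illustrative

# GCE: |eta0|^2 ~ Exp(mean b), hence <|eta0|^k> = b^(k/2) Gamma(1 + k/2)
m1 = sqrt(b) * gamma(1.5)    # <|eta0|>
m2 = b * gamma(2.0)          # <|eta0|^2>
m4 = b**2 * gamma(3.0)       # <|eta0|^4>

assert abs(m1 - 0.5 * sqrt(pi * b)) < 1e-12           # GCE line of Eq. (A.2)
assert abs(m2 - b) < 1e-12                            # Eq. (A.2b)
assert abs((m2 - m1**2) - (1 - pi / 4) * b) < 1e-12   # GCE line of Eq. (A.3)
assert abs((m4 - m2**2) - b**2) < 1e-12               # GCE line of Eq. (A.4)
```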
Hence, the phenomenological result of Eq.~(\ref{dens.2}), rather than being pathological,
is now accounted for as a byproduct of the critical correlations in the GCE.
\section{Summary}
In this paper we have investigated the differences arising
when BEC in a homogeneous IBG is treated in the CE and in the GCE. The analysis has been
carried out by taking advantage of the close analogy with
the spherical and mean-spherical models of magnetism.
The problem is of particular interest because the ensemble nonequivalence issue encroaches on
the fundamental question of the nature of BEC. We have shown that
ordering takes place in the CE, while condensation takes place in the GCE, whose prominent manifestation is the macroscopic
fluctuations of the condensate.
Therefore, we suggest that the recent experimental realization of BEC in a gas of photons \cite{Klaers,Schmitt,Klaers2}
ought to be regarded as qualitatively different from other experimental instances of BEC,
such as those with cold atoms, precisely because the grand canonical conditions
lead to BEC as condensation of fluctuations.
Moreover, by tracing the origin of nonequivalence back to the onset of critical correlations
in the condensed phase of the GCE, we have pointed out that the observable phenomenology follows as a consequence. So, knowledge of the existence of these
correlations could possibly serve as a useful guide in the planning of future experiments.
As a final remark, notice that the above analysis has involved the modulus, but not the phase of $\eta_0$.
This means that the distinction between ordering and condensation is decoupled
from the issue of the breaking of the gauge symmetry. This is a separate and important
problem which will be the object of future work.
\acknowledgements{Informative and quite useful conversations on photon BEC with Prof. Claudio Conti are gratefully acknowledged. AS acknowledges
support from the ``Programma VALERE'' of University of Campania
``L. Vanvitelli'' and from MIUR PRIN project ``Coarse-grained description for non-equilibrium systems and transport phenomena (CO-NEST)'' n. 201798CZLJ.}
\section{Groups and $\mathbb{F}_p[G]$-modules}\label{se:gm}
Let $p$ be a prime and $G=\langle \sigma\rangle$ an abstract group
of order $p^n$. We recall some facts concerning $R$-modules, where
$R$ is the group ring $\mathbb{F}_p[G]$. Because we frequently view $R$ as a
module over $R$, to prevent confusion we write the module $R$ as
\begin{equation*}
R=\oplus_{j=0}^{p^n-1} \mathbb{F}_p \tau^j,
\end{equation*}
where $\sigma$ acts by multiplication by $\tau$. For convenience we
set $\rho:=\sigma-1$.
The set of nonzero cyclic $R$-modules is identical to the set of
nonzero indecomposable $R$-modules, and these are precisely the
$p^n$ quotients $M_j:=R/\langle (\tau-1)^j\rangle$, $1\le j\le p^n$.
Each $M_j$ is a local ring, with unique maximal ideal $\rho M_j$,
and is annihilated by $\rho^{j}$ but not $\rho^{j-1}$.
Moreover, for each $j$ there exists a $G$-equivariant isomorphism
from $M_j$ to its dual $M_j^*$, as follows. For each $j\in \{1,
\dots, p^n\}$ we choose the $\mathbb{F}_p$-basis of $M_j$ consisting of the
images of $\{1,(\tau-1),\dots,(\tau-1)^{j-1}\}$ and define an
$\mathbb{F}_p$-linear map $\lambda:M_j\to \mathbb{F}_p$ by
\begin{equation*}
\lambda\left(f_0+f_1\overline{(\tau-1)}+\cdots+f_{j-1}
\overline{(\tau-1)}^{j-1}\right) = f_{j-1},
\end{equation*}
where $f_k \in \mathbb{F}_p$, $k=0, \dots, j-1$. Observe that $\ker\lambda$
contains no nonzero ideal of $M_j$. Then
\begin{equation*}
Q:M_j\times M_j\to \mathbb{F}_p, \qquad Q(a,b):=\lambda(ab),\ a,b\in M_j
\end{equation*}
is a nonsingular symmetric bilinear form. Thus $M_j$ is a symmetric
algebra. (See \cite[page~442]{La}.) Moreover, $Q$ induces a
$G$-equivariant isomorphism $\psi:M_j\to M_j^*$ given by
$(\psi(a))(b) = Q(a,b)$, $a,b \in M_j$.
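For small parameters the nonsingularity of $Q$ can be checked directly: in the chosen basis one has $Q\big(\overline{(\tau-1)}^a, \overline{(\tau-1)}^b\big)=\lambda\big(\overline{(\tau-1)}^{a+b}\big)$, which equals $1$ precisely when $a+b=j-1$, so the Gram matrix is the antidiagonal identity. A small computational sketch (Python/NumPy; $p$ and $j$ are illustrative choices):

```python
import numpy as np

p, j = 3, 5  # illustrative small prime and module index, j <= p^n

# Basis of M_j: images of u^k, k = 0..j-1, where u = tau - 1 and u^j = 0.
# Q(u^a, u^b) = lambda(u^(a+b)) = 1 iff a + b == j - 1, else 0.
G = np.array([[1 if a + b == j - 1 else 0 for b in range(j)]
              for a in range(j)], dtype=int)

assert (G == G.T).all()                        # Q is symmetric
assert round(abs(np.linalg.det(G))) % p != 0   # nonsingular over F_p
```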
\begin{remark*}
In order for $\psi$ to be $G$-equivariant, we must define the
action on $M_j^*$ by $\sigma f(m) = f(\sigma m)$ for all $m \in
M_j$, and since $G$ is commutative, this action is well-defined.
It is worthwhile to observe, however, that $M_j^*$ is
$\mathbb{F}_p[G]$-isomorphic to the module $\tilde M_j^*$ on which the
action of $G$ is defined by $\sigma f(m) = f(\sigma^{-1} m)$ for
all $m \in M_j$. Indeed by the $G$-equivariant isomorphism between
$M_j$ and $M_j^*$ it is sufficient to show that the
$\mathbb{F}_p[G]$-module $\tilde M_j$ obtained from $M_j$ by twisting the
action of $G$ via the automorphism $\sigma \to \sigma^{-1}$ is
naturally isomorphic to $M_j$. But this follows readily by
extending the automorphism $\sigma \to \sigma^{-1}$ to the
automorphism of the group ring $\mathbb{F}_p[G]$ and then inducing the
required $\mathbb{F}_p[G]$-isomorphism between $M_j$ and $M_j^*$.
\end{remark*}
We also recall some facts about the semidirect products $H_j :=
M_j\rtimes G$, $j=1,\dots, p^n$. For each $j$, the group $H_j$ has
order $p^{j+n}$; exponent $p^n$, except when $j=p^n$, in which case
the exponent is $p^{n+1}$; nilpotent index $j$; rank (the smallest
number of generators) $2$; and Frattini subgroup $\Phi(H_j) =(\rho
M_j)\rtimes G^p$. Finally, for $j<k$, $H_j$ is a quotient of $H_k$
by the normal subgroup $\rho^{j}M_k\rtimes 1$.
\section{Kummer Theory with Operators}\label{se:kummer}
For Sections~\ref{se:kummer} through \ref{se:jepsilon} we adopt the
following hypotheses. Suppose that $G=\mathrm{Gal}(K/F)=\langle
\sigma\rangle$ for an extension $K/F$ of degree $p^n$ of fields of
characteristic not $p$. We let $\xi_p$ be a primitive $p$th root of
unity and set $\hat F:=F(\xi_p)$, $\hat K:=K(\xi_p)$, and $J:=\hat
K^\times/\hat K^{\times p}$, where $\hat K^\times$ denotes the
multiplicative group $\hat K\setminus \{0\}$. We write the elements
of $J$ as $[\gamma]$, $\gamma\in \hat K^\times$, and we write the
elements of $\hat F^\times/\hat F^{\times p}$ as $[\gamma]_{\hat
F}$, $\gamma\in \hat F^\times$. We moreover let $\epsilon$ denote a
generator of $\mathrm{Gal}(\hat F/F)$ and set $s=[\hat F:F]$. Since $p$ and
$s$ are relatively prime, $\mathrm{Gal}(\hat K/F)\simeq \mathrm{Gal}(\hat F/F)\times
\mathrm{Gal}(K/F)$. Therefore we may naturally extend $\epsilon$ and
$\sigma$ to $\hat K$, and the two automorphisms commute in
$\mathrm{Gal}(\hat K/F)$. Using the extension of $\sigma$ to $\hat K$, we
write $G$ for $\mathrm{Gal}(\hat K/\hat F)$ as well. Then $J$ is an
$\mathbb{F}_p[G]$-module. Finally, we let $t\in \mathbb{Z}$ such that
$\epsilon(\xi_p)=\xi_p^t$. Then $t$ is relatively prime to $p$, and
we let $J_\epsilon$ be the $t$-eigenspace of $J$ under the action of
$\epsilon$: $J_\epsilon = \{ [\gamma] \ :\ \epsilon [\gamma] =
[\gamma]^t \}$.
Observe that since $\epsilon$ and $\sigma$ commute, $J_\epsilon$ is
an $\mathbb{F}_p[G]$-subspace of $J$. By \cite[\S 5, Proposition]{W}, we
have a Kummer correspondence over $K$ of finite subspaces $M$ of the
$\mathbb{F}_p$-vector space $J_\epsilon$ and finite abelian exponent $p$
extensions $L$ of $K$:
\begin{multline*}
M= ((\hat K L)^{\times p}\cap \hat K^\times)/\hat K^{\times p}
\leftrightarrow \\ L=L_M = \text{maximal $p$-extension of $K$
in\ } \hat L_{M} := \hat K(\root{p}\of{\gamma}:[\gamma]\in M).
\end{multline*}
As Waterhouse shows, for $M\subset J_\epsilon$, the automorphism
$\epsilon\in \mathrm{Gal}(\hat K/K)$ has a unique lift $\tilde\epsilon$ to
$\mathrm{Gal}(\hat L_M/K)$ of order $s$, and $L_M$ is the fixed field of
$\tilde\epsilon$.
In the next proposition we provide some information about the
corresponding Galois modules when $L_M/F$ is Galois. Recall that in
the situation above, the Galois groups $\mathrm{Gal} (L_M/F)$ and $\mathrm{Gal}(\hat
L_M/\hat K)$ are naturally $G$-modules under the action induced by
conjugations of lifts of the elements in $G$ to $\mathrm{Gal}(L_M/F)$ and
$\mathrm{Gal} (\hat L_M/\hat F)$. Furthermore, because the Galois groups
have exponents dividing $p$, we see that $\mathrm{Gal}(L_M/F)$ and
$\mathrm{Gal}(\hat L_M/\hat F)$ are in fact $\mathbb{F}_p[G]$-modules.
\begin{proposition}\label{pr:kummer}
Suppose that $M$ is a finite $\mathbb{F}_p$-subspace of $J_\epsilon$. Then
\begin{enumerate}
\item\label{it:k1} $L_M$ is Galois over $F$ if and only if
$M$ is an $\mathbb{F}_p[G]$-submodule of $J_\epsilon$.
\item\label{it:k2} If $L_M/F$ is Galois, then base extension
$F\to \hat F$ induces a natural isomorphism of $G$-modules
$\mathrm{Gal}(L_M/F)\simeq \mathrm{Gal}(\hat L_M/\hat F)$.
\item\label{it:k3} If $L_M/F$ is Galois, then as
$G$-modules,
\begin{equation*}
\mathrm{Gal}(L_M/K)\simeq \mathrm{Gal}(\hat L_M/\hat K)\simeq M.
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
(\ref{it:k1}). Suppose first that $L_M/F$ is Galois. Then $\hat
L_{M}=L\hat K/\hat F$ is Galois as well. Every automorphism of
$\hat K$ extends to an automorphism of $\hat L_M$, and therefore
$M$ is an $\mathbb{F}_p[G]$-submodule of $J$. From \cite[\S 5,
Proposition]{W} we see that $M$ is an $\mathbb{F}_p[G]$-submodule of
$J_\epsilon$.
Going the other way, suppose that $M$ is a finite
$\mathbb{F}_p[G]$-submodule of $J_\epsilon$. By the correspondence
above, $L_M/K$ is Galois. Then $M$ is also an $\mathbb{F}_p[\mathrm{Gal}(\hat
K/F)]$-submodule of $J_\epsilon$ and therefore $\hat L_M/F$ is
Galois. Now since $K/F$ is Galois, every automorphism of $\hat
L_M$ sends $K$ to $K$. Moreover, since $L_M$ is the unique
maximal $p$-extension of $K$ in $\hat L_M$, every automorphism
of $\hat L_{M}$ sends $L_M$ to $L_M$. Therefore $L_M/F$ is
Galois.
(\ref{it:k2}). Suppose $L_M/F$ is Galois. Since $\hat F/F$ and
$L_M/F$ are of relatively prime degrees, we have $\mathrm{Gal}(L_M\hat
F/F)\simeq \mathrm{Gal}(\hat F/F)\times \mathrm{Gal}(L_M/F)$. Therefore we have
a natural isomorphism $G=\mathrm{Gal}(K/F)\simeq \mathrm{Gal}(\hat K/\hat F)$,
and the natural isomorphism $\mathrm{Gal}(\hat L_M/\hat
F)\simeq\mathrm{Gal}(L_M/F)$ is an isomorphism of $G$-extensions.
(\ref{it:k3}). Suppose $L_M/F$ is Galois. By (\ref{it:k2}), it
is enough to show that $\mathrm{Gal}(\hat L_M/\hat K)\simeq M$ as
$G$-modules. Under the standard Kummer correspondence over $\hat
K$, finite subspaces of the $\mathbb{F}_p$-vector space $J$ correspond to
finite abelian exponent $p$ extensions $\hat L_M$ of $\hat K$,
and $M$ and $\mathrm{Gal}(\hat L_M/\hat K)$ are dual $G$-modules under a
$G$-equivariant canonical duality $\langle m,g\rangle =
g(\root{p}\of{m})/\root{p}\of{m}$. (See \cite[pages 134 and
135]{W} and \cite[\S 2.3]{MS2}.) Because $M$ is finite, $M$
decomposes into a direct sum of indecomposable $\mathbb{F}_p[G]$-modules.
From Section~\ref{se:gm}, all indecomposable $\mathbb{F}_p[G]$-modules
are $G$-equivariant self-dual modules. Hence there is a
$G$-equivariant isomorphism between $M$ and its dual $M^*$, and
$\mathrm{Gal}(\hat L_M/\hat F)\simeq M$ as $G$-modules.
\end{proof}
\section{The Index}\label{se:index}
We keep the same assumptions given at the beginning of
Section~\ref{se:kummer}. Set $A := \ann_J \rho^{p^n-1} = \{
[\gamma] \in J : \rho^{p^n-1} [\gamma] = [1] \}$. The
following homomorphism appears in a somewhat different form in
\cite[Theorem 3]{W}:
\begin{definition*}
The \emph{index} $e(\gamma)\in \mathbb{F}_p$ for
$[\gamma]\in A$ is defined by
\begin{equation*}
\xi_p^{e(\gamma)} = \left( \root{p} \of{N_{\hat K/\hat
F}(\gamma)}\right)^{\rho}.
\end{equation*}
\end{definition*}
The index is well-defined, as follows. First, since
\begin{equation*}
1+\sigma+\dots+\sigma^{p^n-1}=(\sigma-1)^{p^n-1}=\rho^{p^n-1}
\end{equation*}
in $\mathbb{F}_p[G]$, $[N_{\hat K/\hat F}(\gamma)]=[\gamma]^{\rho^{p^n-1}}$,
which is the trivial class $[1]$ by the assumption $[\gamma]\in A$.
As a result, $\root{p}\of{N_{\hat K/\hat F}(\gamma)}$ lies in $\hat
K$ and is acted upon by $\sigma$ and therefore $\rho$. Observe that
$e(\gamma)$ depends neither on the representative $\gamma$ of
$[\gamma]$ nor on the particular $p$th root of $N_{\hat K/\hat
F}(\gamma)$.
The index function $e$ is a group homomorphism from $A$ to $\mathbb{F}_p$.
Therefore the restriction of $e$ to any submodule of $A$ is either
trivial or surjective. Moreover, the index is trivial for any
$[\gamma]$ in the image of $\rho$:
\begin{equation*}
\xi_p^{e(\gamma^\rho)} = \root{p}\of{ N_{\hat K/\hat
F}(\gamma^\rho)}^{\rho} = \root{p}\of{(N_{\hat K/\hat
F}(\gamma))^\rho} = \root{p}\of{1}^{\rho} = 1,
\end{equation*}
or $e(\gamma^\rho)=0$.
Following Waterhouse, we show how the index function permits the
determination of $\mathrm{Gal}(\hat L_M/\hat F)$ as a $G$-extension.
For $1\le j\le p^n$ and $e\in \mathbb{F}_p$, write $H_{j,e}$ for the group
extension of $M_j$ by $G$ with $\tilde \sigma^{p^n} =
e(\tau-1)^{j-1}$, where $\tilde\sigma$ is a lift of $\sigma$.
Observe that $H_{j,0}=H_j=M_j\rtimes G$.
Let $N_\gamma$ denote the cyclic $\mathbb{F}_p[G]$-submodule of $J$ generated
by $[\gamma]$.
\begin{proposition}\label{pr:groupoflm} (See \cite[Theorem 2]{W}.)
Let $[\gamma]\in J_\epsilon$ and $M=N_\gamma$.
\begin{enumerate}
\item\label{it:lm1} If $M\simeq M_j$ for $1\le j<p^n$ and
$e=e(\gamma)$, then $\mathrm{Gal}(L_M/F)\simeq H_{j,e}$ as
$G$-extensions.
\item\label{it:lm2} If $M\simeq \mathbb{F}_p[G]$ then
$\mathrm{Gal}(L_M/F)\simeq \mathbb{F}_p[G]\rtimes G$.
\end{enumerate}
\end{proposition}
Before presenting the proof, we note that Waterhouse also tells us
that for $j<p^n$ and $e\neq 0$, $H_{j,e}\not\simeq H_j$ (see
\cite[Theorem 2]{W}), while for $j=p^n$ there is a $G$-extension
isomorphism $H_{p^n,e}\simeq H_{p^n}$ for every $e$. In particular,
we may use Proposition~\ref{pr:groupoflm} later to deduce that if
$M\simeq M_j$ for $j<p^n$ and $\mathrm{Gal}(L_M/F)\simeq M_j\rtimes G$, then
$e(\gamma)=0$.
\begin{proof}
Suppose $M\simeq M_j$ for some $1\le j\le p^n$. By
Proposition~\ref{pr:kummer}(\ref{it:k3}), $\mathrm{Gal}(L_M/K)\simeq
M_j$ as $G$-modules. Hence $\mathrm{Gal}(L_M/F)\simeq H_{j,e}$ for some
$e$. If $j=p^n$ then from the isomorphism $H_{p^n,e}\simeq
H_{p^n}$ above we have the second item. By
Proposition~\ref{pr:kummer}(\ref{it:k2}), it remains only to
show that if $j<p^n$, $\mathrm{Gal}(\hat L_M/\hat F)\simeq
H_{j,e(\gamma)}$.
Let $\tilde\sigma$ denote a pullback of $\sigma\in G$ to
$\mathrm{Gal}(\hat L_M/\hat F)$. Then $\tilde\sigma^{p^n}$ lies in
$Z(\mathrm{Gal}(\hat L_M/\hat F)) \cap \mathrm{Gal}(\hat L_M/\hat K)$, where
$Z(\mathrm{Gal}(\hat L_M/\hat F))$ denotes the center of $\mathrm{Gal}(\hat
L_M/\hat F).$ Using the $G$-equivariant Kummer pairing
\begin{equation*}
\langle \cdot,\cdot\rangle \colon \mathrm{Gal}(\hat L_M/\hat K)
\times M \to \langle \xi_p \rangle \simeq \mathbb{F}_p
\end{equation*}
we see that $Z(\mathrm{Gal} (\hat L_M/\hat K))$ annihilates $\rho M$.
Furthermore, since this pairing is nonsingular we deduce that
$Z(\mathrm{Gal}(\hat L_M/\hat K)) \simeq M/\rho M$ and we can choose a
generator $\eta$ of $Z(\mathrm{Gal} (\hat L_M/\hat K))$ such that
\begin{equation*}
\langle \eta, [\gamma] \rangle =
\eta(\root{p}\of{\gamma})/\root{p}\of{\gamma} = \xi_p.
\end{equation*}
In particular, if $\tilde \sigma^{p^n} = \eta^e$ then
\begin{equation*}
(\root{p}\of{\gamma})^{(\tilde\sigma^{p^n}-1)} =
\xi_{p}^{e}.
\end{equation*}
Therefore
\begin{equation*}
\root{p}\of{\gamma}^{(\tilde\sigma^{p^n}-1)} =
\root{p}\of{\gamma}^{(1+\tilde\sigma+
\dots+\tilde\sigma^{p^n-1}) (\tilde\sigma-1)} =
\left(\root{p}\of{ N_{\hat K/\hat F}(\gamma)}\right)^{\rho}=
\xi_p^{e(\gamma)}.
\end{equation*}
\end{proof}
\section{The $\mathbb{F}_p[G]$-module $J_\epsilon$}\label{se:jepsilon}
Again we keep the same assumptions given at the beginning of
Section~\ref{se:kummer}. In this section we develop the crucial
technical results needed for Theorem~\ref{th:main}: a decomposition
of the $\mathbb{F}_p[G]$-module $J_\epsilon$ into cyclic direct summands, and
a determination of the value of the index function $e$ on certain of
the summands.
We first show that $J_\epsilon$ is indeed a summand of $J$. Then we
combine a decomposition of $J$ into indecomposables, taken from
\cite[Theorem 2]{MSS}, with uniqueness of decompositions into
indecomposables, to achieve important restrictions on the possible
summands of $J_\epsilon$. Much of the remainder of the proof is
devoted to establishing that we have an ``exceptional summand'' of
dimension $p^r+1$ on which the index function is nontrivial. In the
argument we need \cite[Proposition 7]{MSS} in particular to derive a
lower bound for the dimension of that summand.
\begin{theorem}\label{th:jepsilon}
Suppose that $p>2$ or $n>1$. The $\mathbb{F}_p[G]$-module $J_\epsilon$
decomposes into a direct sum $J_\epsilon = U\oplus_{\alpha\in \mathcal{A}}
V_\alpha$, with $\mathcal{A}$ possibly empty, with the following
properties:
\begin{enumerate}
\item\label{it:j1} For each $\alpha\in \mathcal{A}$ there exists
$i\in \{0,\dots,n\}$ such that $V_\alpha\simeq M_{p^i}$.
\item\label{it:j2} $U\simeq M_{p^r+1}$ for some $r\in
\{-\infty,0,1,\dots,n-1\}$.
\item\label{it:j3} $e(U)=\mathbb{F}_p$.
\item\label{it:j4} If $V_\alpha\simeq M_{p^i}$ for $0\le
i\le r$, then $e(V_\alpha)=\{0\}$.
\end{enumerate}
\end{theorem}
\noindent Here we observe the convention that $p^{-\infty}=0$.
\begin{proof}
We show first that $J_\epsilon$ is a direct summand of $J$ by
adapting an approach to descent from \cite[page~258]{Sa}. Recall
that $[\hat F:F] = s$ and $\epsilon(\xi_p)= \xi_p^t$. Thus $s$
and $t$ are both relatively prime to $p$. Let $z\in \mathbb{Z}$ satisfy
$zst^{s-1}\equiv 1\ (\bmod\ p)$, and set
\begin{equation*}
T = z\cdot \sum_{i=1}^st^{s-i}\epsilon^{i-1} \in
\mathbb{Z}[\mathrm{Gal}(\hat K/F)].
\end{equation*}
We calculate that $(t-\epsilon)T\equiv 0\ (\bmod\ p)$, and hence
the image of $T$ on $J$ lies in $J_\epsilon$. Moreover,
$\epsilon$ acts on $J_\epsilon$ by multiplication by $t$, and
therefore $T$ acts as the identity on $J_\epsilon$. Finally,
since $\epsilon$ and $\sigma$ commute, $T$ and $I-T$ commute
with $\sigma$. Hence $J$ decomposes into a direct sum
$J_\epsilon\oplus J_\nu$, with associated projections $T$ and
$I-T$.
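The two defining properties of $T$ used above can also be checked numerically in a small illustrative model. The following Python sketch is our own illustration, not part of the proof: it works in the quotient ring $(\mathbb{Z}/p)[\epsilon]/(\epsilon^s-1)$, i.e.\ it treats $\epsilon$ as an element of order $s$, with sample values $p=5$, $s=4$, $t=2$ chosen so that $t$ has order $s$ modulo $p$.

```python
# Illustrative check of T = z * sum_{i=1}^{s} t^{s-i} * eps^{i-1} in the
# quotient ring (Z/p)[eps]/(eps^s - 1); the values p, s, t are our own
# sample choices, not taken from the paper.
p, s, t = 5, 4, 2                         # t = 2 has order 4 = s modulo 5
z = pow(s * pow(t, s - 1, p) % p, -1, p)  # z*s*t^(s-1) = 1 (mod p), Python >= 3.8

# Represent a ring element by its coefficient list [c_0, ..., c_{s-1}].
T = [z * pow(t, s - 1 - i, p) % p for i in range(s)]

def times_eps(c):
    """Multiply by eps: a cyclic shift, since eps^s = 1 in this model."""
    return [c[-1]] + c[:-1]

# (t - eps)*T vanishes mod p, so the image of T lies in J_epsilon.
diff = [(t * a - b) % p for a, b in zip(T, times_eps(T))]
print(diff)  # -> [0, 0, 0, 0]

# On J_epsilon, eps acts as multiplication by t, so T acts as
# z * sum_i t^(s-1-i) * t^i = z*s*t^(s-1) = 1 (mod p), i.e. as the identity.
action = sum(c * pow(t, i, p) for i, c in enumerate(T)) % p
print(action)  # -> 1
```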
We claim that $e((I-T)A)=\{0\}$. Since $\xi_p\in \hat F$, the
fixed field $\mathrm{Fix}_{\hat K}(\sigma^p)$ may be written $\hat
F(\root{p}\of{a})$ for a suitable $a\in \hat F^\times$. By
\cite[\S 5, Proposition]{W}, $\epsilon([a]_{\hat F})=[a]_{\hat
F}^t$. Suppose $\gamma\in \hat K^\times$ satisfies $[\gamma]\in
A$. Then, since $\epsilon$ and $\sigma$ commute,
\begin{equation*}
[N_{\hat K/\hat F}(\epsilon(\gamma))]_{\hat F} =
[\epsilon(N_{\hat K/\hat F}(\gamma))]_{\hat F} =
\epsilon([N_{\hat K/\hat F}(\gamma)]_{\hat F}) = [N_{\hat
K/\hat F}(\gamma)]_{\hat F}^t.
\end{equation*}
Hence $e(\epsilon([\gamma]))=t\cdot e([\gamma])$, and we then
calculate that $e(T[\gamma])=e([\gamma])$. Therefore
$e((I-T)[\gamma])=0$, as desired.
Now since $\mathbb{F}_p[G]$ is an Artinian principal ideal ring, every
$\mathbb{F}_p[G]$-module decomposes into a direct sum of cyclic
$\mathbb{F}_p[G]$-modules \cite[Theorem~6.7]{SV}. Since cyclic
$\mathbb{F}_p[G]$-modules are indecomposable, we have a decomposition of
$J=J_\epsilon\oplus J_\nu$ as a direct sum of indecomposables.
From Section~\ref{se:gm} we know that each of these
indecomposable modules is self-dual and local, and therefore
has a local endomorphism ring. By the
Krull-Schmidt-Azumaya Theorem (see \cite[Theorem 12.6]{AF}), all
decompositions of $J$ into indecomposables are equivalent. (In
our special case one can check this fact directly.)
On the other hand, we know by \cite{MSS} several properties of
$J$, including its decomposition as a direct sum of
indecomposable $\mathbb{F}_p[G]$-modules, as follows. By
\cite[Theorem 2]{MSS},
\begin{equation*}
J=X\oplus \bigoplus_{i=0}^n Y_i,
\end{equation*}
where each $Y_i$ is a direct sum, possibly zero, of
$\mathbb{F}_p[G]$-modules isomorphic to $M_{p^i}$, and $X=N_{\chi}$ for
some $\chi\in \hat K^\times$ such that $N_{\hat K/\hat
F}(\chi)\in a^w\hat F^{\times p}$ for some $w$ relatively prime
to $p$. Moreover, $X\simeq M_{p^r+1}$ for some $r\in \{-\infty,
0,\dots,n-1\}$. We deduce that $e(\chi)\neq 0$ and that $e$ is
surjective on $X$. Furthermore, considering each $Y_i$ as a
direct sum of indecomposable modules $M_{p^i}$, we have a
decomposition of $J$ into a direct sum of indecomposable
modules.
We deduce that every indecomposable $\mathbb{F}_p[G]$-submodule appearing
as a direct summand in $J_\epsilon$ is isomorphic to $M_{p^i}$
for some $i\in \{0,\dots,n\}$, except possibly for one summand
isomorphic to $M_{p^r+1}$. Moreover, we find that $e$ is
nontrivial on $J_\epsilon$, as follows. From the hypothesis
that either $p>2$ or $n>1$ we deduce that $p^r+1<p^n$. Therefore
since $N_\chi\simeq M_{p^r+1}$ we have $[\chi]\in A$. Let
$\theta, \omega\in \hat K^\times$ satisfy $[\theta]=T[\chi]\in
J_\epsilon$ and $[\omega]=(I-T)[\chi]$. From $e((I-T)A)=\{0\}$
we obtain $e(\omega)=0$. Therefore $e(\theta)\neq 0$. Observe
that $\rho^{p^r+1}[\theta]=[1]$.
We next claim that $e$ is trivial on any $\mathbb{F}_p[G]$-submodule $M$
of $J_\epsilon$ such that $M\simeq M_j$ for $j<p^r+1$. Suppose
not: $M$ is an $\mathbb{F}_p[G]$-submodule of $J_\epsilon$ isomorphic to
$M_j$ for some $j<p^r+1$ and $e(M)\neq \{0\}$. Then
$M=N_\gamma$ for some $\gamma\in \hat K^\times$. Since $e$ is
an $\mathbb{F}_p[G]$-homomorphism and $M$ is generated by $[\gamma]$, we
have $e(\gamma)\neq 0$. But \cite[Proposition 7 and Theorem
2]{MSS} tells us that $p^r+1$ is the minimal value of $c$ such
that $\rho^{c}[\beta]=[1]$ for $\beta\in \hat K$ with $N_{\hat
K/\hat F}(\beta)\not\in \hat F^{\times p}$. Hence we have a
contradiction.
Because $J_\epsilon$ decomposes into a direct sum of cyclic
$\mathbb{F}_p[G]$-modules, we may write $\theta$ as an $\mathbb{F}_p[G]$-linear
combination of generators of such $\mathbb{F}_p[G]$-modules, and we will
use this combination and the fact that $e(\theta)\neq 0$ to
prove that there exists a summand isomorphic to $M_{p^r+1}$ on
which $e$ is nontrivial. Let $M=N_\delta$ be an arbitrary
summand of $J_\epsilon$. Then $M\simeq M_{j}$ for some $j$. Let
$[\theta_\delta]$ be the projection of $[\theta]$ on $M$. Since
$\rho^{p^r+1}[\theta]=[1]$, we deduce that
$\rho^{p^r+1}[\theta_\delta] = [1]$. Now if $j>p^{r}+1$ then
$[\theta_\delta]$ lies in a proper submodule of $M$. Because
$\rho M$ is the unique maximal ideal of $M$ and $e$ is an
$\mathbb{F}_p[G]$-module homomorphism, $e(\theta_\delta)=0$. On the
other hand, if $j<p^r+1$ then we have already observed that
$e(M)=\{0\}$. From $e(\theta)\neq 0$ we deduce that there must
exist a summand isomorphic to $M_{p^r+1}$ and on which $e$ is
nontrivial. Let $U$ denote such a summand.
Now let $\{V_{\alpha}\}$, $\alpha\in \mathcal{A}$, be the collection of
summands of $J_\epsilon$ apart from $U$. Hence $J_\epsilon = U
\oplus_{\alpha\in \mathcal{A}} V_\alpha$. Since every summand of
$J_\epsilon$ is isomorphic to $M_{p^i}$ where $i\in
\{0,1,\dots,n\}$, except possibly for one summand isomorphic to
$M_{p^r+1}$, we have (\ref{it:j1}). From the last paragraph, we
have (\ref{it:j2}) and (\ref{it:j3}). Finally, since $e$ is
trivial on $\mathbb{F}_p[G]$-submodules isomorphic to $M_j$ with
$j<p^r+1$, we have (\ref{it:j4}).
\end{proof}
\section{Proof of Theorem~\ref{th:main}}\label{se:proof}
\begin{proof}
We first consider the case $\chr F\neq p$.
Suppose that $L/F$ is a Galois extension with group $M_{p^i+c}
\rtimes G$, where $0\le i<n$ and $1 \le c < p^{i+1}-p^i$. Let
$K=\mathrm{Fix}_L (M_{p^i+c})$ and identify $G$ with $\mathrm{Gal}(K/F)$.
Define $\hat F$, $\hat K$, $J$, $J_\epsilon$, and $A$ as in
Sections~\ref{se:kummer} through \ref{se:jepsilon}. By the
Kummer correspondence of Section~\ref{se:kummer} and
Proposition~\ref{pr:kummer}, $L=L_M$ for some $\mathbb{F}_p[G]$-submodule
$M$ of $J_\epsilon$ such that $M\simeq \mathrm{Gal}(L/K)\simeq
M_{p^i+c}$ as $\mathbb{F}_p[G]$-modules. Let $\gamma\in \hat K^\times$
be such that $M=N_\gamma$. Since $p^i+c<p^n$, we see that
$M\subset A$ and so $e$ is defined on $M$. By
Proposition~\ref{pr:groupoflm} and the discussion following it,
from $\mathrm{Gal}(L/F)\simeq M_{p^i+c} \rtimes G$ we deduce
$e(\gamma)=0$.
Observe that if $p=2$ then from $p^i + c < p^{i+1}$ and $1 \le
c$ we see that $i > 0$ and hence $n > 1$. By
Theorem~\ref{th:jepsilon}, $J_\epsilon$ has a decomposition into
indecomposable $\mathbb{F}_p[G]$-modules
\begin{equation*}
J_\epsilon = U\oplus \bigoplus_{\alpha\in \mathcal{A}} V_\alpha
\end{equation*}
such that each indecomposable $V_\alpha$ is isomorphic to
$M_{p^j}$ for some $j\in \{0,\dots,n\}$, $U\simeq M_{p^r+1}$ for
some $r\in \{-\infty,0,\dots,n-1\}$, $e(U)=\mathbb{F}_p$, and
$e(V_\alpha)= \{0\}$ for all $V_\alpha\simeq M_{p^i}$ with $0\le
i\le r$. Let $U=N_\chi$ for some $\chi\in \hat K^\times$. Then
$e(\chi)\neq 0$.
Because $\rho^{p^i+c-1} M\neq \{0\}$ we know that $J_\epsilon$
is not annihilated by $\rho^{p^i+c-1}$. Therefore either
$\rho^{p^i+c-1}$ does not annihilate $U\simeq M_{p^r+1}$, whence
$p^r+1\ge p^i+c$, or $p^r+1<p^i+c$ and there exists an
indecomposable summand isomorphic to $M_{p^j}$ for some $j>i$.
Suppose first that $p^r+1<p^i+c$ and $J_\epsilon$ contains an
indecomposable summand $V$ isomorphic to $M_{p^j}$ for some
$j>i$. If $j=n$ then by Proposition~\ref{pr:kummer} there
exists a Galois extension $L_V/F$ such that $\mathrm{Gal}(L_V/K)\simeq
M_{p^n}\simeq \mathbb{F}_p[G]$. By
Proposition~\ref{pr:groupoflm}(\ref{it:lm2}), we have
$\mathrm{Gal}(L_V/F)\simeq \mathbb{F}_p[G]\rtimes G$. Since $M_{p^{i+1}}\rtimes
G$ is a quotient of $\mathbb{F}_p[G]\rtimes G$, we deduce that
$M_{p^{i+1}}\rtimes G$ is a Galois group over $F$.
If instead $j<n$, then let $\gamma\in \hat K^\times$ be such that
$V=N_\gamma$. Because $e$ is surjective on $U$ we may find
$\beta\in \hat K^\times$ such that $[\beta]\in U$ and $e(\beta)=
e(\gamma)$. Now set $\delta := \gamma/\beta$. Then
$e(\delta)=0$ and we consider $N_\delta$. From $p^j > p^i+c >
p^r+1$ and $\rho^{p^r+1}[\beta]=[1]$ we deduce that
$\rho^{p^j-1}[\beta] = [1]$. Then $\rho^{p^j}[\delta] = [1]$
while $\rho^{p^j-1}[\delta]\neq [1]$, so $N_\delta\simeq
M_{p^j}$. Let $W=N_\delta$. By Propositions~\ref{pr:kummer} and
\ref{pr:groupoflm} we obtain a Galois field extension with
$\mathrm{Gal}(L_W/F)\simeq M_{p^j}\rtimes G$. Since $M_{p^{i+1}}\rtimes
G$ is a quotient of $M_{p^j}\rtimes G$, we deduce that
$M_{p^{i+1}}\rtimes G$ is a Galois group over $F$.
Suppose now that for every $j>i$ there does not exist an
indecomposable summand isomorphic to $M_{p^j}$. We claim that
$r>i$. Suppose not. Then from $p^r+1\ge p^i+c$ we obtain $r=i$
and $c=1$. Moreover, $U$ is the only summand of $J_\epsilon$ not
annihilated by $\rho^{p^i}$. Let $\theta\in \hat K^\times$ be such
that $[\theta] = \proj_U \gamma$. If $[\theta]\in \rho U$, then
$\rho^{p^i}[\gamma]=[1]$, whence $\rho^{p^i}M=\{0\}$, a
contradiction. Since $[\theta]\in U\setminus \rho U$ and $\rho
U$ is the unique maximal ideal of $U$, we obtain that
$U=N_{\theta}$. Since $e(U)=\mathbb{F}_p$, we deduce that $e(\theta)\neq
0$. Now if $V_\alpha\simeq M_{p^j}$ for $j\le r$ then
$e(V_\alpha) = \{0\}$. Hence $e(V_\alpha)=\{0\}$ for all
$\alpha\in \mathcal{A}$. We deduce that $e(\gamma)\neq 0$, a
contradiction. Therefore $r\ge i+1$.
Let $\omega=\rho\chi$ and consider $N_{\omega} = \rho N_\chi =
\rho U$. We obtain that $e(\omega)=0$ and $N_\omega\simeq
M_{p^{r}}$. By Propositions~\ref{pr:kummer} and
\ref{pr:groupoflm}, we have that $\mathrm{Gal}(L_W/F)\simeq
M_{p^{r}}\rtimes G$ for some suitable cyclic submodule $W$ of
$J_\epsilon$. Since $M_{p^{i+1}}\rtimes G$ is a quotient of
$M_{p^r}\rtimes G$, we deduce that $M_{p^{i+1}}\rtimes G$ is a
Galois group over $F$.
Finally we turn to the case $\chr F=p$. Recall that we denote
$M_j \rtimes G$, $j = 1,\dots,p^n$, by $H_j$. We have short
exact sequences
\begin{equation*}
1\to \mathbb{F}_p\simeq \rho^{p^i+c+k}M_{p^i+c+k+1}\rtimes 1 \to
H_{p^i+c+k+1} \to H_{p^i+c+k}\to 1
\end{equation*}
for all $1\le i<n,$ $1\le c<p^{i+1}-p^i$, and $0\le
k<p^{i+1}-p^i-c$. For all of these, the kernels are central, and
the groups $H_{p^i+c+k+1}$ and $H_{p^i+c+k}$ have the same rank,
so the sequences are nonsplit. By Witt's Theorem, all central
nonsplit Galois embedding problems with kernel $\mathbb{F}_p$ are
solvable. (See \cite[Appendix~A]{JLY}.) Hence if $H_{p^i+c}$ is
a Galois group over $F$, one may successively solve a chain of
suitable central nonsplit embedding problems with kernel $\mathbb{F}_p$
to obtain $H_{p^{i+1}}$ as a Galois group over $F$.
\end{proof}
\section{Acknowledgements}
Andrew Schultz would like to thank Ravi Vakil for his encouragement
and direction in this and all other projects. John Swallow would
like to thank Universit\'e Bordeaux I for its hospitality during
2005--2006.
\section*{Supplemental Material}
\makeatletter \renewcommand{\thefigure}{S\@arabic\c@figure} \renewcommand{\thetable}{S\@arabic\c@table} \makeatother
\makeatletter
\setcounter{figure}{0}
\renewcommand*{\@biblabel}[1]{[S#1]}
\renewcommand*{\@cite}[1]{[S#1]}
\makeatother
\normalsize
{\bf {Methods}}
Each second-long experimental cycle has a 12~ms detection period, which consists of 20~$\si{\micro}$s measurement times, a time window arbitrarily chosen to be much longer than the EIT lifetime to allow the continuous measurement of signal photons, interleaved with 20~$\si{\micro}$s preparation times that ensure the atoms are optically pumped to the $|g\rangle$ state. For cross-correlation measurements such as Fig.~\ref{fig2}(a), an average of approximately 8000 experimental cycles was used.
The temperature of the cloud in the dipole trap is about 120 $\si{\micro}$K corresponding to a measured atomic decoherence rate of $\gamma_0/2\pi\simeq100$ kHz, dominated by the Doppler broadening. The signal path detection efficiency is $q_s\simeq0.3$ including the fiber coupling efficiency and photodetector quantum efficiency. The optical cavity has a waist size of $35~\si{\micro}$m, length of 13.7 mm, and out-coupling efficiency of 66\%.
The single-photon Rabi frequency for $\sigma^+$ polarized light is $2g=2\pi\times 2.5$ MHz. Thus, the single-atom cooperativity for an atom on the cavity axis (along $z$) at an antinode of the cavity standing wave is given by $\eta=4g^2/\kappa\Gamma=8.6>1$, i.e. the system operates in the strong coupling regime of cavity quantum electrodynamics. The cavity resonance frequency matches the frequency of the atomic transition $|d\rangle\to|e\rangle$.
{\bf {Detection and transmission probabilities.}}
The probability to observe a probe photon when a cavity photon is present and a signal photon is propagating through the EIT window at $\tau=0$ is given by \cite{Beck:prl2014}
\begin{eqnarray}
\varepsilon_0& = &\frac{1}{4}\frac{\eta^2}{(1+\eta)^2} \left[1-\exp\left(-\mathcal{D}/2\zeta\right)\right]^2,
\label{eq: epsilon}
\end{eqnarray}
where
\begin{eqnarray}
\zeta & = & \left(1+\frac{\gamma \Gamma}{\Omega^2}\right)\left(1+\frac{\Omega^2/\kappa\Gamma+\gamma/\kappa}{1+\eta}\right).
\end{eqnarray}
Here, $\eta=4.3$ is the spatially-averaged cavity cooperativity, $\mathcal{D}$ is the effective optical density that overlaps with the cavity mode, $\Omega$ is the control Rabi frequency, $\kappa= 2\pi\times140$ kHz is the decay rate of the cavity, $\gamma\simeq \gamma_c+\gamma_0$, $\gamma_0\approx2\pi\times100$ kHz is the atomic decoherence rate in the absence of cavity photons, $\gamma_c$ is the cavity-induced decoherence, and $\Gamma$ is the Cs excited-state decay rate. The decoherence rate, $\gamma_c$, caused by cavity light scattering manifests itself as: (1)~loss of atomic coherence given by $\langle n^{in}_c \rangle \kappa\eta/(1+\eta)^2$ where $\langle n^{in}_c \rangle$ is the mean $\sigma^+$-polarized input cavity photon number, (2)~reduction of signal transmission as a result of inhomogeneous coupling of cavity light to atoms (see below). For the anti-correlation data shown in the inset of Fig.~2a, when we take into account the cavity blocking due to an atom in state $|d\rangle$, we obtain $4\varepsilon_0=0.1$ and a blocking probability for $\sigma^+$ light of $P=1-(1-\sqrt{4\varepsilon_0})^2=0.55$. This is in good agreement with the measured probability of $1-g^{(2)}_{s\sigma^+}(0)=0.59(7)$. A detailed theoretical treatment of the cavity interaction with the atomic ensemble is given in Ref. [25].
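For orientation, Eq.~(\ref{eq: epsilon}) is easy to evaluate numerically. The sketch below is our own code, not the authors' analysis pipeline; the parameter values are the nominal ones quoted in this supplement, with $\Gamma=2\pi\times5.2$ MHz assumed for the Cs D$_2$ line and the fitted $\mathcal{D}=4$ taken from the figure caption below.

```python
import math

def zeta(eta, Omega, kappa, Gamma, gamma):
    """Broadening factor zeta entering the expression for epsilon_0."""
    return (1 + gamma * Gamma / Omega**2) * (
        1 + (Omega**2 / (kappa * Gamma) + gamma / kappa) / (1 + eta))

def epsilon0(eta, D, Omega, kappa, Gamma, gamma):
    """Probe detection probability epsilon_0 for a cavity photon present."""
    z = zeta(eta, Omega, kappa, Gamma, gamma)
    return 0.25 * (eta / (1 + eta))**2 * (1 - math.exp(-D / (2 * z)))**2

# Nominal parameters, all rates in units of 2*pi MHz (Gamma is an assumed
# value for the Cs D2 line, not quoted in the text):
eta, D = 4.3, 4.0
Omega, kappa, Gamma, gamma = 2.9, 0.14, 5.2, 0.1
print(round(4 * epsilon0(eta, D, Omega, kappa, Gamma, gamma), 2))  # -> 0.12
```

The result is consistent with the value $4\varepsilon_0\approx0.1$ quoted above for the anti-correlation data.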
In the nondestructive detection where horizontally-polarized cavity light is used, the detection probability is defined as the field amplitude of the transmitted $\sigma^+$ light, which interacts with atoms in state $|d\rangle$ as described in Ref. [23], combined with the field amplitude of $\sigma^-$ light on the output polarization beamsplitter. The field amplitude addition results in the factor 1/4 in Eq.~\ref{eq: epsilon}. In principle, this reduction can be avoided by impinging only $\sigma^+$ light onto an impedance-matched cavity and measuring the reflected photons. In our present lossy cavity, the reflection in the absence of signal photons causes a large background for the probe light.
Cavity-induced decoherence reduces the transmission probability of the signal photon and the EIT coherence time \cite{Beck:prl2014}. The signal transmission in the presence of cavity photons is given by:
\begin{eqnarray}
T_s = T_0\exp\left(-\frac{\mathcal{D}}{1+\Omega^2/\Gamma\gamma}\right)
\end{eqnarray}
where $T_0=\exp\left(-\frac{\mathcal{D}'}{1+\Omega^2/\Gamma\gamma_0}\right)$ is the EIT transmission corresponding to atoms outside the cavity waist and $\mathcal{D}'$ is the corresponding optical density.
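As a quick sanity check of this expression, the suppression of the cavity-mode transmission factor by an increased decoherence rate can be evaluated directly. This is our own illustration: the parameter values are the nominal ones quoted in this supplement, and the cavity-induced contribution $\gamma_c=2\pi\times0.1$ MHz is an assumed example value.

```python
import math

def eit_factor(D, Omega, Gamma, gamma):
    """Cavity-mode transmission factor exp(-D / (1 + Omega^2/(Gamma*gamma)))."""
    return math.exp(-D / (1.0 + Omega**2 / (Gamma * gamma)))

# Rates in units of 2*pi MHz; gamma_0 = 0.1 as quoted, gamma_c = 0.1 assumed.
D, Omega, Gamma = 4.0, 2.9, 5.2
print(eit_factor(D, Omega, Gamma, 0.1))  # without cavity-induced decoherence
print(eit_factor(D, Omega, Gamma, 0.2))  # with gamma = gamma_0 + gamma_c
```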
An additional limit to the signal transmission is caused by the standing-wave nature of the cavity light in combination with the uniform distribution of atoms between nodes and antinodes of the cavity. Once the signal is detected, the spatial mode of the polariton is projected onto the cavity mode, resembling a grating imprinted onto the polariton structure. This effect leads to a reduction in the transmission of the signal. The overlap between the polariton before and after detection of a probe photon can be calculated as
\begin{eqnarray}
\mathcal{F}_p=\frac{1}{\pi}\int_0^{\pi}\frac{\eta\cos^2(\theta)}{1+\eta\cos^2(\theta)}d\theta=1- \frac{1}{\sqrt{1+\eta}}
\end{eqnarray}
where $\theta=kz$, $k$ is the wave-number of cavity light and $z$ is the position along the cavity axis. At large cooperativity, $\eta\gg1$, the expected maximum transmission approaches 100\%. For our system parameters this evaluates to about 70\%.
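The closed-form result can be cross-checked by direct numerical quadrature. The following sketch is our own check, not from the paper; $\eta=8.6$ is the antinode cooperativity quoted in the Methods, for which the closed form gives $\mathcal{F}_p\approx0.68$, consistent with the ``about 70\%'' stated above.

```python
import math

def overlap_closed(eta):
    """Closed-form polariton overlap F_p = 1 - 1/sqrt(1 + eta)."""
    return 1.0 - 1.0 / math.sqrt(1.0 + eta)

def overlap_numeric(eta, n=20000):
    """Midpoint-rule estimate of (1/pi) * int_0^pi eta*cos^2/(1 + eta*cos^2)."""
    h = math.pi / n
    total = 0.0
    for k in range(n):
        c2 = math.cos((k + 0.5) * h)**2
        total += eta * c2 / (1.0 + eta * c2)
    return total * h / math.pi

print(round(overlap_closed(8.6), 3))  # -> 0.677, i.e. about 70% transmission
```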
Also, the part of the atomic cloud extending beyond the cavity region introduces additional signal transmission loss. This is because the signal photon wave-packet is localized inside the cavity region upon detection of a probe photon and therefore its spectral bandwidth exceeds the EIT bandwidth. Hence, after detection via the cavity, the signal photon propagating through the EIT window experiences dispersion and loss. Our numerical simulations predict a loss of 30\% in signal transmission given the experimental parameters. In principle, this loss can be eliminated by removing atoms outside the cavity region.
{\bf {Quantum correlation between probe and signal photons.}}
The mean photon rate entering the cavity can be calculated from the total detected photon rate exiting the cavity, $R^{s=0}_c$, in the absence of signal photons as
\begin{eqnarray}
\langle R^{in}_c \rangle= \frac{R^{s=0}_c}{q_d(\frac{\mathcal{T}}{\mathcal{T}+\mathcal{L}})}
\end{eqnarray}
where $q_d=0.3$ accounts for detection losses including fiber coupling, filter losses and photodetector quantum efficiency, and $\frac{\mathcal{T}}{\mathcal{T}+\mathcal{L}}=0.66$ is the cavity out-coupling efficiency with $\mathcal{L}$ and $\mathcal{T}$ being mirror loss and transmissivity, respectively. In the following, we combine $q_d$ and the cavity out-coupling efficiency into a single parameter $q_p$. The mean cavity photon number in the absence of signal photons is then $\langle n^{in}_c \rangle =R^{in}_c \tau_c$ where $\tau_c=(\kappa/2)^{-1}$. The mean signal photon number in the relevant time window, i.e. the EIT lifetime $\tau_{_{EIT}}=(\Omega^2/(\Gamma\mathcal{D})+\gamma_0)^{-1}$, is given by $\langle n^{in}_s \rangle= R^{in}_s\tau_{_{EIT}}/q_s$ where $R^{in}_s/q_s$ is the signal photon rate entering the medium and $q_s=0.3$ accounts for detection losses. In the absence of population in state $|d\rangle$, the linearly polarized cavity light is rotated by atoms in state $|g\rangle$ due to the differences in the coupling strengths for $\sigma^+$ and $\sigma^-$ polarized light interacting with state $|g\rangle$ and excited states. Ideally, this rotation is constant and we compensate for it with a waveplate at the output of the cavity. However, the shot-to-shot atom number fluctuation during loading provides a varying background, $\alpha q_p\langle n^{in}_c \rangle$, that dominates the probe port at low signal photon rates. We typically measure a maximum fractional background of $\alpha\approx3\times 10^{-3}$ of the total detected cavity photons. The detection events consist of a background given by $\langle b \rangle = \alpha q_p \langle n^{in}_c \rangle +\langle r_p \rangle$, where $\langle r_p \rangle$ denotes the dark counts of the probe detector $D_p$. We define the detected mean signal photon number $\langle n_s \rangle$, true detection events $\langle t \rangle$ and total detected mean probe photon number $ \langle n_p \rangle$ as
\begin{eqnarray}
\langle n_s \rangle & = & q_s T_s\langle n^{in}_s \rangle + \langle r_s\rangle\\
\langle t \rangle & = & (\varepsilon_0 +\epsilon_b) q_p \langle n^{in}_c \rangle \langle n^{in}_s \rangle \\
\langle n_p \rangle & = & \langle t \rangle + \langle b \rangle= (\varepsilon_0 \langle n^{in}_s \rangle+\epsilon_b \langle n^{in}_s \rangle+\alpha)q_p \langle n^{in}_c \rangle \nonumber\\
&&+\langle r_p\rangle
\end{eqnarray}
where $\langle r_s\rangle$ denotes the dark counts of the signal detector $D_s$ and $\epsilon_b=\varepsilon_d f_s$ is the probability of detecting a probe photon for a decohered atom in state $|d\rangle$, $\varepsilon_d$, multiplied by the fraction of signal photons, $f_s$, incoherently mapped to state $|d\rangle$ via absorption. The coincidence counts are
\begin{eqnarray}
\langle n_sn_p \rangle & = & \varepsilon_0 q_p\langle n^{in}_c \rangle\times T_s q_s \langle n^{in}_s \rangle +\\ \nonumber
& & (\alpha+\epsilon_b \langle n^{in}_s \rangle) q_p\langle n^{in}_c \rangle\times T_s q_s \langle n^{in}_s \rangle +\nonumber \\
&& T_s q_s\langle n^{in}_s \rangle\langle r_p \rangle +((\varepsilon_0+\epsilon_b) \langle n^{in}_s \rangle+\nonumber \\
&& \alpha)\times q_p \langle n^{in}_c \rangle \langle r_s \rangle + \langle r_p \rangle \langle r_s \rangle.
\end{eqnarray}
Here, we assume that the conditional signal transmission is approximately equal to the mean signal transmission, $T_s$. Note that all terms, except the first, are caused by background sources. The cross-correlation function, neglecting the detectors' dark counts, can be approximated as
\begin{eqnarray}
g^{(2)} (\tau=0)& = & \frac{ \langle n_sn_p \rangle }{ \langle n_s \rangle \langle n_p \rangle } \simeq \frac{1+\beta}{\beta+\langle n^{in}_s \rangle}
\label{eq: g2}
\end{eqnarray}
where $\beta=\frac{\alpha+\epsilon_b \langle n^{in}_s \rangle}{\varepsilon_0}$. When background processes are negligible ($\alpha, \epsilon_b, \langle r_s \rangle, \langle r_p \rangle \ll1 $), the maximum cross-correlation function at $\tau=0$ is simply approximated by $ g^{(2)} \simeq 1/\langle n^{in}_s\rangle$ for $\langle n^{in}_s\rangle<1$. Note that in the regime where $\langle r_{s} \rangle, \langle r_{p} \rangle\ll 1$, the correlation function $g^{(2)}$ is independent of the cavity photon number as both the detection probability and background scale linearly with it. However, the measured $g^{(2)}(\tau=0)$ drops at low cavity photon numbers where probe-port dark counts, $\langle r_p \rangle$, are not negligible compared to the detected cavity mean photon number.
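A short numerical sketch (our own illustration, with arbitrary example photon numbers) shows the limiting behavior of Eq.~(\ref{eq: g2}): for vanishing background ratio $\beta$ the correlation approaches $1/\langle n^{in}_s\rangle$, while a finite $\beta$ suppresses it.

```python
def g2_approx(n_s, beta):
    """Approximate cross-correlation g2(0) = (1 + beta) / (beta + n_s)."""
    return (1.0 + beta) / (beta + n_s)

# With negligible background the correlation approaches 1/<n_s>:
print(round(g2_approx(0.1, 0.0), 2))   # -> 10.0
# A finite background ratio beta suppresses it:
print(round(g2_approx(0.1, 0.05), 2))  # -> 7.0
```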
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{g2dp.eps}}
\caption{Observed cross-correlation for double-pass signal beam, measured with $\langle n^{in}_c \rangle=4.4$ and $\Omega/2\pi= 2.9$ MHz. The fitted values are $g^{(2)}=4.4(5)$, $\tau_{<}=1.3(3)$ $\si{\micro}$s, and $\tau_{>}=0.5(2)$ $\si{\micro}$s. }
\label{fig: g2dp}
\end{figure}
To further increase the photon-photon interaction, we carried out an experiment to increase the effective optical density by transmitting the signal through the atomic ensemble twice. The retro-reflected signal is collected by a 90/10 fiber-beam splitter used at the signal input. We simultaneously measure auto-correlations of $g_{ss}^{(2)}=1.6(3)$, $g_{pp}^{(2)}=5.6(1)$ and the cross-correlation as plotted in Fig.~\ref{fig: g2dp}.\newline
\newline
{\bf {Quantum efficiency.}}
The conditional nondestructive quantum efficiency of detecting a signal photon with mean input photon number $\langle n^{in}_s \rangle\ll1$ can be written as
\begin{eqnarray}
\text{Q} & = & \varepsilon q_p\langle n^{in}_c \rangle \simeq \frac{\langle n_sn_p \rangle- \langle n_p \rangle\langle n_s \rangle}{ \langle n_s \rangle}
\end{eqnarray}
where $\varepsilon$ is the total probability of having a probe photon given a signal photon traveling through the medium. It can be obtained from the asymptotic quantum efficiency by integrating the area under the $g^{(2)}$ function as
\begin{eqnarray}
\varepsilon&=&\frac{\text{Q}}{q_p\langle n^{in}_c \rangle}= \frac{1}{q_p\langle n^{in}_c \rangle(1-\langle n^{in}_s \rangle)} \int{(g^{(2)}(\tau)-1) R_p d\tau} \nonumber \\
&=& \varepsilon_0\frac{\tau_c+\tau_{_{EIT}}}{\tau_c}.
\label{eq: epsilon2}
\end{eqnarray}
The probability $\varepsilon$ is calculated from the slope of the fitted lines in Fig.~4c and is plotted for different control Rabi frequencies in Fig.~\ref{figs2: intg2}. These extracted probabilities agree with theoretical predictions.
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{intg2b.eps}}
\caption{The total probability $\varepsilon$ calculated from the slope of the linear fits to the data in Fig.~3. The fitted curve represents the theory using Eq.\ref{eq: epsilon2} with fitted optical density $\mathcal{D}=4(2)$.}
\label{figs2: intg2}
\end{figure}
{\bf {Detection probabilities and QND requirements.}}
The QND requirements can be quantified using the measurement error, $\Delta X$, the transfer coefficient of input signal to meter (probe), $\mathcal{T}_M$, and the transfer coefficient of input signal to output signal, $\mathcal{T}_S$ \cite{Grangier:Nature1998}. Using the formalism provided by Ralph {\it{et al.}} [Phys. Rev. A {\bf{73}}, 012113 (2006)], one can link the measurement probabilities in the discrete variable (DV) regime and $\mathcal{T}_S$ and $\mathcal{T}_M$ in the continuous variable (CV) regime through different fidelity measures. The transfer coefficients in terms of the measurement fidelity, $F_M$, and the QND fidelity, $F_{QND}$, can be written as
\begin{eqnarray*}
\mathcal{T}_M &= & (\frac{2}{F_M^2} -1)^{-1}\\
\mathcal{T}_S &=&(\frac{2}{F_{QND}^2} - 1)^{-1}
\end{eqnarray*}
where
\begin{eqnarray*}
F_M & = & P_{11}+P_{01} = \frac{\langle n_p\rangle}{\langle n^{in}_s\rangle}\\
F_{QND} & = & P_{11}+P_{10} = T_s.
\end{eqnarray*}
To estimate the measurement error in the CV regime, the conditional variance of the signal is measured and compared to the shot-noise limit. In the DV regime, however, as the particle aspect of photons is detected and not the wave aspect, the conditional correlation function, $g^{(2)}_{ss|m}$ (the signal auto-correlation function conditioned on detecting a meter photon), can be used instead to quantify the measurement error. In particular, a QND measurement satisfies $g^{(2)}_{ss|m}<1$ (quantum state preparation) and $\mathcal{T}_S+\mathcal{T}_M>1$.
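These relations are straightforward to evaluate; the following sketch (our own code, with arbitrary example fidelities) maps fidelities to transfer coefficients and checks the $\mathcal{T}_S+\mathcal{T}_M>1$ criterion.

```python
def transfer(F):
    """Transfer coefficient T = (2/F^2 - 1)^(-1) for a fidelity 0 < F <= 1."""
    return 1.0 / (2.0 / F**2 - 1.0)

def qnd_transfer_criterion(F_M, F_QND):
    """True when T_S + T_M > 1, as required for a QND measurement."""
    return transfer(F_M) + transfer(F_QND) > 1.0

print(transfer(1.0))                     # -> 1.0 (perfect fidelity)
print(qnd_transfer_criterion(0.9, 0.9))  # -> True
print(qnd_transfer_criterion(0.7, 0.7))  # -> False
```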
\end{document}
\section{Introduction}
In the last few decades, the sp${^2}$ carbon-based materials like fullerenes, nanotubes, graphene and few-layer graphene have been a subject of intense experimental and theoretical investigations because of their unique properties, originating from their zero-, one-, two-, and three-dimensionality.\cite{jori11} In particular, significant progress has already been achieved in the synthesis and the study of the properties and application of carbon nanotubes in bulk samples as well as at the single nanotube level.\cite{rtm04,jori08,jari13,liu17,fuji16,devo13,sait11,levs17} The application of nanotubes in nanoelectronics requires their precise structural characterization. For this purpose, the Raman scattering of light by phonons is the experimental technique of choice, being a fast and nondestructive characterization method.\cite{thom07}
The single-walled nanotube (SWNT) can be viewed as obtained by wrapping up a graphene sheet into a seamless cylinder. It has a few intense Raman bands, arising from the radial-breathing mode (RBM) and the totally symmetric longitudinal and transverse G modes (also named tangential G modes). In semiconducting SWNTs, the longitudinal G mode has a higher frequency than the transverse one, and vice versa in metallic ones. However, the observed higher-frequency G band is always denoted as G$^+$ and the lower-frequency one as G$^-$. The RBM frequency is found to be inversely proportional to the nanotube radius and is normally used for fast sample characterization.\cite{arau08} The G modes also depend on the nanotube radius and can be used for supporting the assignment of the spectrum to a particular nanotube but contain additional information that allows differentiating between metallic and semiconducting nanotubes.\cite{jori02,mich09,pail10}
The double-walled carbon nanotube (DWNT) is a layered structure, consisting of two nested SWNTs bound together by weak van der Waals interactions. The Raman spectra of $C_{60}$-derived DWNTs are found to exhibit several intense bands due to radial-breathing like modes (RBLMs) and G modes of the two layers.\cite{pfei08,villa10} The modification of the RBLMs by interlayer interactions can be modeled straightforwardly within continuum models\cite{popo02,doba03} and the derived results can be applied directly to the assignment of the RBLMs of DWNTs.\cite{pfei04} It has also been observed that the G bands of the inner $(6,5)$ layers of $C_{60}$-derived DWNTs shift with respect to those of the isolated layers due to interlayer interactions.\cite{villa10} G-band shifts have also been measured on individual suspended DWNTs, produced by the catalytic chemical vapor deposition (CVD) method and structurally characterized using electron microscopy, electron diffraction and Raman spectroscopy.\cite{levs11,levs15,tran17,levs17,levs17a} To our knowledge, a systematic theoretical investigation of the G-band shift in DWNTs has not been reported so far, while theoretical data on this shift can be important for supporting the characterization of the DWNTs.
The theoretical description of the G-band shift has so far been hindered by computational difficulties. To begin with, the calculation of the G mode of SWNTs cannot be done accurately enough within force-constant models or models using empirical potentials, because they do not describe sufficiently well the electronic response to the atomic displacements.\cite{sait98,popo00} The latter response can be accounted for explicitly in full electronic calculations within the ab-initio approach.\cite{sanc99} A major drawback of the ab-initio models is that they become computationally very expensive with the increase of the number of atoms in the unit cell of the nanotube and cannot encompass the majority of the observable nanotube types. Alternatively, with smaller but still sufficient accuracy, the G mode can be calculated within the symmetry-adapted ab-initio-based non-orthogonal tight-binding (NTB) model.\cite{popo04} Secondly, with a few exceptions, the DWNTs do not have translational periodicity and, therefore, the all-electron models for periodic structures are not applicable. The lack of translational periodicity poses a serious problem for the estimation of the G modes, which requires special theoretical treatment. Here, we propose a computational scheme that uses the NTB model and relies on approximations for deriving the dependence of the G-band shift on the inner-layer radius and interlayer separation. We focus on the G bands of the inner layers of DWNTs, because these layers are mostly perfect and the shifts are predominantly due to interlayer interactions, contrary to the outer layers, which are influenced by the environment and often have adsorbed atoms. We restrict ourselves to semiconducting inner layers and leave out the case of metallic layers, where additional, computationally expensive corrections to the G mode, due to the strong electron-phonon interactions, are mandatory.\cite{pisc07,popo10a}
The paper is organized as follows. The theoretical background is presented in Sec. II. The results are presented and discussed in Sec. III. The paper ends with conclusions in Sec. IV.
\section{Theoretical background}
A SWNT can be considered as obtained by cutting out a rectangle of graphene, defined by the pair of orthogonal lattice vectors $\vec T$ and $\vec C$, and rolling the rectangle along $\vec C$ into a seamless cylinder. This rolled-up nanotube can be characterized by the radius $R = \| \vec{C} \| /2\pi$, translation period $\| \vec{T} \|$, as well as the chiral angle $\theta$, defined as the angle between $\vec C$ and the nearest zigzag of carbon atoms. All structural parameters of the rolled-up nanotube can be expressed by means of the nearest-neighbor interatomic distance and the indices $(n,m)$ of $\vec C$. Therefore, the indices $(n,m)$ specify uniquely the SWNT. Normally, the total energy of the rolled-up nanotube is not minimal and the atomic structure of the nanotube has to be subjected to relaxation in order to find the structure with minimum energy, which is a necessary step before phonon calculations.
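As a rough numerical illustration of these definitions (not part of the NTB machinery used in this work), the unrelaxed structural parameters follow directly from the chiral indices. The helper name `swnt_parameters` and the bond length $a_{C-C}=1.42$ \AA~ are our illustrative choices:

```python
import math

A_CC = 1.42  # graphene nearest-neighbor distance in Angstrom (unrelaxed)

def swnt_parameters(n, m):
    """Radius R, translation period |T| and chiral angle theta of an (n,m) SWNT."""
    a = math.sqrt(3) * A_CC                       # graphene lattice constant
    c = a * math.sqrt(n * n + n * m + m * m)      # circumference |C|
    radius = c / (2.0 * math.pi)                  # R = |C| / 2 pi
    d_r = math.gcd(2 * n + m, 2 * m + n)          # d_R = gcd(2n+m, 2m+n)
    period = math.sqrt(3) * c / d_r               # translation period |T|
    theta = math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))
    return radius, period, theta

# (6,5): the smallest semiconducting inner layer considered in Sec. III
R, T, theta = swnt_parameters(6, 5)
```

For $(6,5)$ this gives $R \approx 3.73$ \AA, consistent with the smallest inner-layer radius quoted in Sec. III; note how the translation period of a chiral tube ($|T| \approx 40.6$ \AA~ here) is already much larger than the period of graphene, which is what makes all-electron calculations for large unit cells expensive.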
Furthermore, a DWNT is composed of two coaxially nested SWNTs and can be labeled as $(n_i,m_i)@(n_o,m_o)$, where the indices $i$ and $o$ denote the ``inner'' and ``outer'' layer, respectively. In the general case, the two layers of a DWNT are incommensurate. The electron and phonon eigenvalue problems for such DWNTs cannot be solved by the usual computational approaches for systems with translational symmetry and one has to resort to approximations. Previously, for the structural relaxation and the calculation of the RBLMs of DWNTs, the layers were considered as elastic continuum cylinders, interacting with each other via a Lennard-Jones (LJ) potential.\cite{popo02,pfei04} Alternatively, in the case of commensurate layers, the atomic structure of the layers was taken into account and a minimization of the total energy, consisting of the energy of the layers within the force-constant model and the interlayer interaction energy via LJ potentials, was carried out.\cite{popo02} We extend the latter approach to the general case of incommensurate layers by minimizing the total energy of the DWNT, expressed as the sum of the total energies of the two layers per unit length, derived within the NTB model, and the interlayer interaction energy per unit length, averaged over a very long piece of the DWNT.
While, in the structural relaxation step, the incommensurability problem can be overcome by the proposed approximations, this problem cannot be solved as easily for the calculation of the G mode. The straightforward approach for derivation of the shift of the G mode would be to use quantum-mechanical perturbation theory. Here, we follow a less rigorous approach, performed in two steps. First, the DWNTs are fully relaxed and the G-band shift is calculated for the relaxed structure without interlayer interactions. This shift will be referred to as the \textit{relaxation-induced shift}. For the calculation of this shift, additional external radial forces are necessary for keeping the separate layers in equilibrium. Secondly, the additional shift due to interlayer interactions between the relaxed layers is a small correction and can be estimated by perturbation theory. This shift will be referred to as the \textit{interlayer interaction-induced shift}. The use of perturbation theory for estimation of this shift is computationally expensive and we take this shift over from calculations on Bernal bilayer graphene.
The straightforward calculation of the electronic band structure and phonons for a large variety of SWNTs is accompanied with insurmountable computational difficulties because of the very large translational unit cells of most of the SWNTs. Fortunately, the SWNTs have screw symmetry that allows reducing the computational efforts by resorting to two-atom unit cells. This symmetry-adapted approach has been used for calculation of the electronic structure\cite{popo04} and phonon dispersion\cite{popo06} of several hundred SWNTs within the NTB model. In this model, the Hamiltonian and overlap matrix elements are derived as a function of interatomic separation from an ab-initio study on carbon dimers\cite{pore95} and the Slater-Koster scheme is adopted for the angular dependence of the matrix elements.
\section{Results and Discussion}
\subsection{Relaxed DWNT structure}
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig1-popov}
\caption{Relaxation-induced change of the inner layer radius of DWNTs, $\Delta R_i$, for DWNTs with given $R_i$ from $3.7$ to $11.2$ \AA~ and different $R_o$. The upper inset shows $D_0$=$D(\Delta R_i=0)$ vs $R_i$. The lower inset shows the dependence of $\Delta R_i$ on $R_i$ at $D = 3.8$ \AA. The calculated data are drawn by solid circles. The lines are guides for the eye.}
\end{figure}
We consider DWNTs with semiconducting inner layers with radii in the interval from $3.7$ to $11.2$ \AA~ with a step of about $0.5$ \AA, namely, $(6,5)$, $(8,4)$, $(8,6)$, $(10,5)$, $(13,3)$, $(14,3)$, $(11,9)$, $(18,1)$, $(12,11)$, $(14,10)$, $(22,4)$, $(16,11)$, $(22,5)$, $(16,14)$, $(21,10)$, and $(25,6)$ in order of increasing radius. The outer layers are all layers for which the unrelaxed interlayer separation falls in the interval between $3$ and $4$ \AA, which corresponds to the observable interlayer separations. The structural relaxation of the DWNTs is performed within the NTB model.\cite{popo04} The circular cross-section and the coaxiality of the layers are preserved during the relaxation procedure. For calculating the average interlayer interaction energy, a $100$ \AA-long piece of each DWNT is considered, for which the average interlayer energy converges to within $10^{-7}$ eV. The relaxed radii of the isolated layers will be denoted as $R_{i0}$ and $R_{o0}$, while those of the relaxed layers of the DWNTs will be denoted as $R_i$ and $R_o$.
In Fig. 1, the calculated change of the inner layer radius $\Delta R_i = R_i - R_{i0}$ of the relaxed DWNTs is shown for the considered inner layers as a function of the relaxed interlayer separation $D = R_o - R_i$ for various outer layers. It can be seen that, with increasing separation, $\Delta R_i$ increases from negative to positive values, changing sign close to $D \approx 3.4$ \AA. This behavior can be explained by the pressure on the layers due to the interlayer interactions. For $D < 3.4$ \AA, the pressure is ``positive'' and tends to shrink the inner layer, while for $D > 3.4$ \AA, the pressure is ``negative'' and expands the inner layer.\cite{cris07,liu13,levs15}
The curves for a given inner layer and different outer layers cross the horizontal line $\Delta R_i = 0$ at $D \approx 3.4$ \AA. The separation $D_0 = D(\Delta R_i=0)$ decreases exponentially from $3.40$ to $3.38$ \AA~as $R_i$ increases from $3.7$ to $11.2$ \AA~(Fig. 1, upper inset). In the limiting case $R_i \rightarrow \infty$, $D_0$ should tend to the graphite value of $\approx 3.35$~\AA.
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig2-popov}
\caption{Relaxation-induced change of the outer layer radius of DWNTs, $\Delta R_o$, for DWNTs with given $R_i$ from $3.7$ to $11.2$ \AA~ and different $R_o$. The upper inset shows the linear dependence of $\Delta R_o/R_{o0}$ vs $\Delta R_i/R_{i0}$. The lower inset shows the distance between the axes of the layers, $D_{off}$ vs $D$. The calculated data are drawn by solid circles. The lines are guides for the eye.}
\end{figure}
At a given $D$, the absolute value of $\Delta R_i$ increases approximately quadratically with $R_i$. This is illustrated for $D = 3.8$ \AA~ in the lower inset of Fig. 1.
In Fig. 2, the results for the change of the outer radius $\Delta R_o = R_o - R_{o0}$ are presented in a manner similar to that for $\Delta R_i$. The upper inset in Fig. 2 shows the dependence of $\Delta R_o/R_{o0}$ on $\Delta R_i/R_{i0}$. The almost equal relative changes of the radii of the inner and outer layers can be explained by a simple mechanical model. Each isolated layer has an RBM with frequency $\omega = \sqrt{\kappa \slash m}$, where $m$ and $\kappa$ are the layer's mass and spring constant, respectively. The changes of the equilibrium radii of the two layers of a DWNT, $\Delta R_o$ and $\Delta R_i$, due to the switching-on of the coupling between them, satisfy the relation $\kappa_o \Delta R_o=- \kappa_i \Delta R_i$. Bearing in mind that $\omega \propto 1/R$ and $m \propto R$, where the proportionality coefficients are equal for both layers,\cite{popo06} we arrive at the relation $\Delta R_o/R_{o0}=-\Delta R_i/R_{i0}$.
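Explicitly, the relation quoted above follows in two lines. Writing $\kappa = m\omega^{2}$ with $\omega = c_\omega/R$ and $m = c_m R$, where $c_\omega$ and $c_m$ are the common proportionality coefficients of the two layers (our notation), and evaluating $\kappa$ at the unrelaxed radii since the radius changes are small,

```latex
\[
\kappa = m\,\omega^{2}
       = (c_m R)\left(\frac{c_\omega}{R}\right)^{2}
       = \frac{c_m c_\omega^{2}}{R}
\quad\Longrightarrow\quad
\frac{\kappa_o}{\kappa_i} = \frac{R_{i0}}{R_{o0}},
\]
```

so that $\kappa_o \Delta R_o = -\kappa_i \Delta R_i$ immediately gives $\Delta R_o/R_{o0} = -\Delta R_i/R_{i0}$.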
The calculations show that for interlayer separations $D > 3.8$ \AA~ the DWNT structure becomes unstable with respect to off-axial displacement of the inner layer relative to the outer layer, and the equilibrium structure is characterized by a nonzero distance between the axes of the two layers, $D_{off}$ \cite{pfei04} (Fig. 2, lower inset). Such off-axial configurations of DWNTs have not been observed yet, but the proposed computational scheme can be extended to encompass such cases as well.
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig3-popov}
\caption{Relaxation-induced shift of the G$^-$ band of the inner layer for DWNTs with given $R_i$ from $3.7$ to $11.2$ \AA~ and different $R_o$ (solid circles). Inset: interpolation-derived relaxation-induced shift vs $R_{i}$ at given values of $D$ (open circles) and total shift (solid circles). The lines are guides for the eye.}
\end{figure}
\subsection{G-band shift}
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig4-popov}
\caption{Relaxation-induced shift of the G$^+$ band of the inner layer for DWNTs with given $R_i$ from $3.7$ to $11.2$ \AA~ and different $R_o$ (solid circles). Inset: interpolation-derived relaxation-induced shift $R_{i}$ at given values of $D$ (open circles) and total shift (solid circles). The lines are guides for the eye.}
\end{figure}
The \textit{relaxation-induced shifts} of the G$^-$ and G$^+$ bands of the inner layer of the relaxed DWNTs with respect to the bands of the relaxed isolated layers are shown as a function of $D$ in Figs. 3 and 4. Both figures exhibit a common trend: the shift decreases with increasing $D$, changes sign from positive to negative at $D \approx 3.4$ \AA, and then grows in absolute value. This behavior correlates with that of $\Delta R_i$ vs $D$. Namely, for $D < 3.4$ \AA, the G bands are blue-shifted due to decreased interatomic distances in the inner layer, while for $D > 3.4$ \AA, the G bands are red-shifted due to increased interatomic distances in the inner layer. Both bands have almost zero shift close to $D = 3.4$ \AA. The G-band shifts of the outer layers undergo opposite changes vs $D$ (not shown). For a given DWNT, the G$^-$-band shift is larger in absolute value than the G$^+$-band shift. The G$^-$- and G$^+$-band shifts for the inner layer fall in the ranges between $-20$ and $35$ cm$^{-1}$, and between $-15$ and $25$ cm$^{-1}$, respectively. For a given $D$, the G-band shifts increase in absolute value with increasing $R_i$ (insets of Figs. 3 and 4, open circles). The dependence of the shifts on $R_i$ for a given $D$ is almost linear, with minor deviations from linearity for the G$^+$-band shift.
The \textit{interaction-induced shifts} of the G bands should be treated within perturbation theory. This approach can be extremely time-consuming, because it has to be carried out at the quantum-mechanical level. Here, we follow an alternative route and calculate exactly the effect of interlayer interactions on the G mode (Raman-active E$_{2g}$ mode) in Bernal bilayer graphene. For this purpose, the bilayer structure is relaxed at fixed interlayer separations and the G mode is calculated with and without interlayer interactions using the NTB model. The switching-on of the interlayer interactions lifts the degeneracy of the two E$_{2g}$ Raman-active modes of the two layers and gives rise to two modes of E$_{2g}$ and E$_{1u}$ symmetry, shifted downwards and upwards, respectively. We are interested in the modification of the former, which is downshifted by the interlayer interactions. In particular, on increasing the layer separation from $D = 3.2$ to $3.8$ \AA~ with a step of $0.1$ \AA, the G-mode shift decreases in absolute value and takes the following values: $-4.87$, $-3.81$, $-2.96$, $-2.19$, $-1.56$, $-1.06$, and $-0.67$ cm$^{-1}$. These values are comparable to the ab-initio derived ones and agree well with the available experimental data.\cite{sun15} The obtained results for bilayer graphene can be transferred to DWNTs because of the similar local environment of the atoms of the two layers: an atom of a given layer interacts with a small, almost planar portion of the other layer. For layers of small radius, small deviations from the bilayer-graphene results can be expected due to curvature effects.
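Where a shift at an intermediate separation is needed, the seven tabulated bilayer-graphene values can simply be interpolated. The following sketch (our illustrative helper, not part of the NTB calculation itself) uses piecewise-linear interpolation:

```python
# Interaction-induced G-mode shift of Bernal bilayer graphene (cm^-1),
# tabulated vs interlayer separation D (Angstrom), from the NTB results above.
D_GRID = [3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8]
SHIFT = [-4.87, -3.81, -2.96, -2.19, -1.56, -1.06, -0.67]

def interaction_shift(d):
    """Piecewise-linear interpolation of the G-mode shift at separation d."""
    if not D_GRID[0] <= d <= D_GRID[-1]:
        raise ValueError("separation outside tabulated range")
    for k in range(len(D_GRID) - 1):
        if d <= D_GRID[k + 1]:
            frac = (d - D_GRID[k]) / (D_GRID[k + 1] - D_GRID[k])
            return SHIFT[k] + frac * (SHIFT[k + 1] - SHIFT[k])
```

For example, at $D = 3.45$ \AA~ (a typical separation in Table II) this estimates a shift about halfway between the $3.4$ and $3.5$ \AA~ values; more elaborate fits could be substituted without changing the scheme.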
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig5-popov}
\caption{Total shift (solid circles) of the G$^-$ band of the inner layer of DWNTs as a function of $R_{iu}$ for different $D_u$. The lines are guides for the eye.}
\end{figure}
Finally, the total G-band shift can be written as the sum of the relaxation-induced shift and the interlayer interaction-induced one, where the former is calculated for the particular relaxed DWNT with switched-off interlayer interactions and the latter is taken over from bilayer graphene. The resulting total shifts are presented as a function of the relaxed inner layer radius $R_{i}$ for several values of the interlayer separation $D$ in the insets of Figs. 3 and 4 (solid circles). For a given $D$, the total G-band shifts increase in absolute value with increasing $R_i$. The resulting curves of these dependencies are relatively smooth for the G$^{-}$ band but have wiggles in the case of the G$^{+}$ band. The latter can be explained by the dependence of this mode not only on the radius but also on the translation period, because of the longitudinal character of this mode.
\subsection{Comparison to experiment}
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig6-popov}
\caption{Total shift (solid circles) of the G$^+$ band of the inner layer of DWNTs as a function of $R_{iu}$ for different $D_u$. The lines are guides for the eye.}
\end{figure}
The relaxed layer radii and the corresponding interlayer separation of DWNTs are difficult to determine with sufficient accuracy from experiment (e.g., electron diffraction). Hence, it is common practice in the literature to use the radii of the unrelaxed isolated layers, $R_{iu}$ and $R_{ou}$, which are calculated directly from ($n_{i(o)}$,$m_{i(o)}$) by the usual relation $R_{i(o)u}=\sqrt{3}a_{C-C}\sqrt{m_{i(o)}^2+m_{i(o)}n_{i(o)}+n_{i(o)}^2} \slash 2\pi$ with $a_{C-C}$=1.42 \AA. The unrelaxed interlayer distance is defined as $D_u = R_{ou} - R_{iu}$. For the practical purpose of comparison with experimental data, we now plot, in Figs. 5 and 6, the obtained results for the total calculated G$^-$-band shifts and G$^+$-band shifts, respectively, as a function of $R_{iu}$ for several values of $D_u$. These plots permit evaluation of the shift of the G$^{-}$ and G$^{+}$ bands of the inner layer for given $D_u$ and $R_{iu}$.\\
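As a sketch of this convention, the snippet below evaluates $R_{i(o)u}$ and $D_u$ from the chiral indices and reproduces the first entry of Table II; the helper names are our own:

```python
import math

A_CC = 1.42  # graphene nearest-neighbor distance in Angstrom

def unrelaxed_radius(n, m):
    """R_u = sqrt(3) a_CC sqrt(m^2 + mn + n^2) / (2 pi), as in the text."""
    return math.sqrt(3) * A_CC * math.sqrt(m * m + m * n + n * n) / (2.0 * math.pi)

def unrelaxed_separation(inner, outer):
    """D_u = R_ou - R_iu for an (n_i,m_i)@(n_o,m_o) DWNT."""
    return unrelaxed_radius(*outer) - unrelaxed_radius(*inner)

# First entry of Table II: (12,8)@(16,14) with D_u = 3.35 Angstrom
d_u = unrelaxed_separation((12, 8), (16, 14))
```

With these two numbers, the G$^-$- and G$^+$-band shifts can then be read off from Figs. 5 and 6.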
In Fig. 7, we compare our theoretical data for the G$^-$-band shifts (solid circles) with experimental ones obtained (i) on several $C_{60}$-derived DWNTs\cite{villa10} (green open triangles), and (ii) on individual suspended index-identified CVD-DWNTs\cite{levs15,levs17a} (red open circles). Fig. 8 displays the comparison between the calculated G$^+$-band shifts (solid circles) and experimental G$^+$-band shifts (red open circles) measured on individual suspended index-identified CVD-DWNTs.\cite{levs15,levs17a}\\
Concerning the results obtained on $C_{60}$-derived DWNTs, all tubes are $(6,5)@(n_o,m_o)$. The experimental G$^-$-band shifts are directly calculated from the frequencies of the G$^-$ band, given in Ref.\cite{villa10} versus the one measured on the (6,5) SWNT.\cite{telg12} The index assignment of the outer tubes of these DWNTs, and hence the $D_u$ interlayer distances, was revised using a more accurate approach (coupled-oscillator model \cite{liu13}) than in Ref. \cite{villa10} (for details, see Ref.\cite{tran17}). The data are given in Table I.\\
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig7-popov}
\colorcaption{Comparison of calculated (solid circles) with experimental G$^-$-band shifts for several $C_{60}$-derived DWNTs\cite{villa10} (green open triangles) and for several index-identified DWNTs (red open circles) (Ref.\cite{levs15,levs17a} and this work).}
\end{figure}
\begin{table}[b]
\caption{\label{tab:table1}
Comparison of experimental G$^{-}$-band shifts\cite{villa10} and calculated ones in cm$^{-1}$ for several $C_{60}$-derived $(6,5)@(n_o,m_o)$ DWNTs. In comparison with Ref.\cite{villa10} the $(n_o,m_o)$ used here were re-attributed accounting for the interlayer interactions between the layers within a coupled-oscillator model (without structural relaxation, see text).\cite{tran17}
}
\begin{ruledtabular}
\begin{tabular}{ccrr}
&&\multicolumn{2}{c}{$\Delta \omega (G^-)$}\\
\textrm{$(n_o,m_o)$\cite{tran17}}&
\textrm{$D_u$ (\AA)\cite{tran17}}&
\textrm{Expt.\cite{villa10}} &
\textrm{Calc.}\\
\colrule
$(13,8)$ & $3.45$ & $-1.4$ & $-4.2$ \\
$(13,8)$ & $3.45$ & $-5.3$ & $-4.2$ \\
$(16,4)$ & $3.44$ & $-0.6$ & $-4.1$ \\
$(17,2)$ & $3.35$ & $-1.6$ & $0.1$ \\
$(14,6)$ & $3.22$ & $3.4$ & $8.8$ \\
$(14,6)$ & $3.22$ & $7.4$ & $8.8$ \\
$(14,6)$ & $3.22$ & $3.7$ & $8.8$ \\
$(14,6)$ & $3.22$ & $4.9$ & $8.8$ \\
$(14,6)$ & $3.22$ & $8.7$ & $8.8$ \\
$(14,6)$ & $3.22$ & $4.1$ & $8.8$ \\
$(13,7)$ & $3.15$ & $13.5$ & $16.4$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
Concerning the individual suspended index-identified CVD-DWNTs, the experimental data come from Ref.\cite{levs15,levs17a} and from this work (these additional DWNTs are indicated in Table II). The experimental shifts are estimated relative to the G-band frequencies of isolated layers (SWNTs). For the G$^-$ band, the SWNT reference frequencies were calculated as the average of the values obtained using an empirical law\cite{telg12} and a theoretical law\cite{pisc07}. In the case of the G$^+$ band, the experimental shifts are estimated relative to the G-band frequencies of isolated layers, calculated as the average of the values of an empirical law\cite{telg12} and 1592 cm$^{-1}$ (Ref.\cite{pail06}). The errors on the estimated shifts are set at $2.5$ cm$^{-1}$ by summing the experimental uncertainty of $1$ cm$^{-1}$ on the measured DWNT G-band frequencies and the maximum difference ($1.5$ cm$^{-1}$) between the SWNT G-band frequency values deduced using the different laws. The experimental and calculated values of the shifts are given in Table II.\\
\begin{figure}[tbph]
\includegraphics[width=80mm]{fig8-popov}
\colorcaption{Comparison of calculated (solid circles) with experimental (red open circles) G$^+$-band shifts for several index-identified DWNTs (Ref.\cite{levs15,levs17a} and this work).}
\end{figure}
\begin{table}[b]
\caption{\label{tab:table2}
Experimental\cite{levs15,levs17a}$^{,}$\footnote{This work.} and calculated G-band shifts in cm$^{-1}$ for several index-identified DWNTs. Empirical and theoretical relations are used for estimation of the G-band frequencies of the non-interacting layers (see text).
}
\begin{ruledtabular}
\begin{tabular}{lccrrrr}
&&&\multicolumn{2}{c}{$\Delta \omega (G^+)$}&\multicolumn{2}{c}{$\Delta \omega (G^-)$}\\
\textrm{$(n_i,m_i)@(n_o,m_o)$}&
\textrm{$D_u$ (\AA)}&\textrm{$D$ (\AA)}&
\textrm{Expt.}&\textrm{Calc.}&\textrm{Expt.}&\textrm{Calc.}\\
\colrule
$(12,8)@(16,14)$ & $3.35$ & $3.35$ & $-1.6$ & $0.4$ & $-2.1$ & $0.0$ \\
$(16,12)@(27,10)$ & $3.45$ & $3.42$ & $-2.4$ & $-4.9$ & $-4.5$ & $-6.2$ \\
$(14,7)@(21,10)^a$ & $3.48$ & $3.45$ & $-6.5$ & $-5.2$ & $-8.2$ & $-6.5$ \\
$(10,9)@(18,11)$ & $3.48$ & $3.45$ & $-6.1$ & $-4.9$ & $-7.3$ & $-6.2$ \\
$(16,2)@(16,14)^a$ & $3.49$ & $3.46$ & $-6.1$ & $-4.8$ & $-3.0$ & $-6.5$ \\
$(13,9)@(24,7)$ & $3.52$ & $3.49$ & $-7.5$ & $-6.6$ & $-9.0$ & $-8.6$ \\
$(22,11)@(27,17)$ & $3.65$ & $3.56$ & $-7.5$ & $-11.6$ & $-13.7$ & $-16.4$ \\
$(30,1)@(27,19)^a$ & $3.73$ & $3.62$ & $-14.4$ &$-13.1$ & $-$ & $-19.3$\\
\end{tabular}
\end{ruledtabular}
\end{table}
Overall, the plotted theoretical data for the G-band shift follow rather well the trend of the experimental ones (Figs. 7 and 8). For the $C_{60}$-derived DWNTs, the deviation of the G$^{-}$-band shift and the dispersion of the results could be due to the influence of the substrate, which is not included in this study.
For the suspended DWNTs, the interlayer separations $D_u$ range from $3.35$ to $3.73$ \AA. The DWNT with $D_u$ close to $3.35$ \AA~is almost undeformed by the interlayer interactions, while that with $D_u$ close to $3.73$ \AA~undergoes a radius change as large as about $0.11$ \AA. The predicted shifts of both inner-layer G bands for the former are almost zero, and the predicted shifts for the latter are $-13.1$ and $-19.3$ cm$^{-1}$. All calculated shifts agree well with the measured ones within the experimental uncertainty, except for the two DWNTs with the largest $D_u$. The origin of the deviations from the measured values can be sought in factors that are not accounted for in the current calculations. For example, due to the large radius of the latter two DWNTs, possible non-circular deformations of the layers could modify the G modes and produce smaller G-band shifts. More sophisticated calculations should be performed in order to shed light on the origin of this discrepancy and yield better agreement with experiment.\\
\section{Conclusions}
We have presented calculations of the G-band shift of DWNTs within a non-orthogonal tight-binding model. The lack of translational periodicity of most of the observed DWNTs necessitates the use of approximations; namely, the shift is calculated in two feasible steps. First, the DWNT structure is relaxed and the shift is calculated for the relaxed structure with switched-off interlayer interactions. Secondly, the contribution of the interlayer interactions, derived for bilayer graphene, is added to the shift. The agreement with experiment is excellent in most cases, but certain discrepancies are observed in a few cases. The latter could be resolved by future elaborations of the computational approach.
\acknowledgments
VNP and DL acknowledge visitor grants at L2C Montpellier from CNRS. DL acknowledges Metchnikov grant from the French Embassy in Russia. JLS and MP acknowledge financial support by the ANR GAMBIT project, grant ANR-13-BS10-0014 of the French Agence Nationale de la Recherche.
\section{Introduction}
First attempts in the 1990s to define quantum gravity nonperturbatively with the help of Dynamical Triangulations (DT)
were based on an intrinsically Euclidean path integral, whose configuration space consists of
four-dimensional, curved Riemannian spaces. DT works with a regularized version of this space in terms of triangulations,
piecewise flat spaces of positive definite metric signature. Its elementary building block is a four-simplex,
a generalization to four dimensions of a triangle (two-simplex) and a tetrahedron (three-simplex). An individual building
block is a piece of flat four-dimensional Euclidean space and therefore does not carry any curvature.
However, numerical investigations of the
nonperturbative dynamics of DT quantum gravity found that it has neither a large-scale limit
compatible with general relativity \cite{Ambjorn:1991pq,Agishtein:1992xx,Catterall:1994pg}, nor a second-order phase
transition allowing for a continuum limit in the sense of lattice quantum field theory \cite{Bialas:1996wu, deBakker:1996zx}.
Despite the negative nature of this result, the fact that the model contains criteria which could be used for
its falsification should be appreciated. For many other candidate theories of quantum gravity this is not obviously the case.
Causal Dynamical Triangulations (CDT) were introduced in \cite{al} in an attempt to overcome these problems.
The key new idea of CDT is to incorporate aspects of the {\it causal structure} of classical general relativity
at a more fundamental level into the nonperturbative gravitational path integral.\footnote{By contrast, Euclidean gravity does not
distinguish between space and time, and therefore has no causal structure and no notion of causality.} The elementary
building blocks of CDT quantum gravity are flat four-simplices of {\it Lorentzian} signature, that is, pieces of
Minkowski space. The carrier space of the corresponding path integral consists of piecewise flat simplicial
manifolds assembled from these building blocks. In addition, each path integral history has a distinguished discrete
fo\-liation and an associated notion of (discrete) proper time, which ensures the pre\-sence of a well-defined causal structure globally
(see \cite{cdtreviews,physrep} for details on motivation, construction and results in CDT).
Numerical simulations in 3+1 dimensions have shown that these modifications lead to a completely different quantum dynamics,
compared to the earlier Euclidean DT model: CDT quantum gravity in 3+1 dimensions contains a phase whose nonperturbative
ground state of geometry is extended, macroscopically four-dimensional and on large scales can be matched to a de
Sitter universe \cite{cdt4d,desitter}. Moreover, the theory has recently been shown to possess a second-order phase transition,
which according to standard arguments is a prerequisite for the existence of a continuum limit \cite{trans}.
The causal structure of CDT is realized by putting together its simplicial building blocks such that each
CDT configuration has a product structure, not just at the level of
topology -- usually chosen as $[0,1]\times{}^{(3)}\Sigma$, for fixed three-topology ${}^{(3)}\Sigma$ -- but {\it as triangulations}.
A (3+1)-dimensional CDT geometry consists of a sequence of slabs or layers, each of thickness 1, which may be thought of
as a single unit of proper time. The orientation of the light cones of all four-simplices in a given slab is consistent
with this notion of time. In this way the causal structure becomes linked to a preferred discrete foliation of spacetime.
To understand better how the preferred time foliation on the one hand and the causal structure on the other
contribute to the evidence of a good classical limit -- the key
distinguishing feature of CDT quantum gravity -- it would be highly desirable to disentangle these
two elements of ``background structure". We recently proposed a modification of standard CDT which does exactly that
\cite{jl}. The main idea, applicable in any spacetime dimension $d$, is to enlarge the set of $d$-simplices by new types of
simplicial building blocks, also pieces of $d$-dimensional Minkowski space, but with different
link type assignments and therefore a different orientation of the light cone relative to the boundaries of the simplex.
Including the new building blocks is in general not compatible with the preferred foliation of CDT. Nevertheless,
with a suitable choice of gluing
rules one can still obtain Lorentzian simplicial manifolds with a well-defined causal structure, at least locally.
In this way the issue
of causality becomes dissociated from the notion of a preferred foliation. The interesting question is then
whether the quantum-gravitational model using ``nonfoliated CDT" can reproduce the results of the
standard formulation, especially those concerning the large-scale properties of quantum spacetime. The main conclusion
of the present paper, previously announced in \cite{jl}, is that it {\it can}, at least in space-time dimension 2+1.
At least in higher dimensions, this result appears to weaken the potential link of CDT quantum gravity with
Ho\v rava-Lifshitz gravity \cite{hl,phasediagram},
where the presence of a preferred foliation is a key ingredient.\footnote{This differs
from 1+1 dimensions, where CDT is a quantum-mechanical system of a single length variable and has been shown
to coincide with projectable Ho\v rava-Lifshitz gravity \cite{2dcdthl}.}
To summarize, our intention is to get rid of the {\it distinguished} foliation (and associated discrete time label $t$) of CDT,
whose leaves for integer $t$ coincide with simplicial spatial hypermanifolds consisting entirely of {\it space}like
subsimplices of codimension 1. This does not mean that the nonfoliated CDT configurations cannot in principle be foliated
with respect to some continuous notion of time, but only that there is in general no canonical way of doing this in terms of
a distinguished substructure of the triangulated spacetimes.
Of course, having {\it a} notion of time at the level of the regularized geometries is important. In the standard formulation
of CDT, we get a notion of (discrete proper) time $t$ for free, simply by counting consecutive ``slabs", as described earlier.
The presence of this time label allows us to define a transfer matrix and an associated propagator with the correct
behaviour under composition (whose continuum limit in dimension 1+1 can be computed analytically \cite{al}),
and prove a version of reflection positivity \cite{3d4d}.
Yet another advantage of having an explicit time variable is that we can construct observables like the volume profile,
which measures the distribution of spatial volume as a function of time. The analysis of these volume profiles in
3+1 dimensions has been crucial in relating the large-scale behaviour of CDT to a de Sitter cosmology
in the continuum \cite{desitter,semiclassical}.\footnote{One does not know a priori whether $t$ or any other
time label of the regularized model will assume a distinguished physical role in the continuum theory; this can only
be determined by studying suitable continuum {\it observables}, like the volume profiles.}
By contrast, in the enlarged CDT set-up we will be considering, typical triangulations will be more complicated,
in the sense that the purely spatial subsimplices of codimension 1 will no longer align themselves into a neat sequence
of simplicial hyper{\it manifolds}, but instead will form branching structures, as will be explained in more detail
below. This also implies that we no longer have a distinguished time variable
at our disposal.
Nevertheless, as we shall demonstrate for the nontrivial case of 2+1 dimensions, it is possible to construct a meaningful
time variable, whose restriction to
standard, nonbranching CDT configurations agrees with the usual proper time label $t$. This will enable us to
extract volume profiles from the numerical simulations and compare their dynamics with that of
the standard formulation.
There are a number of good reasons for beginning our investigation in 2+1 dimensions, as we
are doing in the present work. In standard CDT, the
large-scale properties of the quantum universe generated by the nonperturbative
quantum dynamics are qualitatively very similar to those in 3+1 dimensions, and are well described by a (three-dimensional)
Euclidean de Sitter universe \cite{3dcdt,benedettihenson}.
Furthermore, as we will describe in Sec.\ \ref{sec:numsetup}, the nonfoliated CDT model requires
new Monte Carlo moves, which are significantly more difficult to implement than the generalized Pachner moves used
in standard CDT. Writing the simulation software for the new model becomes a very challenging task already in 2+1 dimensions.
At the same time, the increased complexity of the simulation software leads to longer simulation times.
It is not even clear currently whether analogous simulations in 3+1 dimensions could be performed with acceptable
running times using contemporary simulation hardware.
Before embarking on our exploration of the dynamics of nonfoliated CDT, let us comment briefly on prior work which
considered explicitly a possible relaxation of CDT's strict time slicing.\footnote{In CDT in 1+1 dimensions, one can
rederive the exact continuum propagator without in\-voking an explicit proper-time slicing (see, for example, \cite{durhuuslee}).
It is unclear how to generalize such a construction to higher dimensions.}
A soft way of relaxing the foliation in 1+1 dimensions was studied in \cite{markopoulousmolin}, where under
certain conditions the timelike links
were allowed to have varying length. In this approach the foliation is still present,
since the connectivity of the underlying triangulation is unchanged, but the individual leaves of the foliation are
not placed at equidistant intervals. The authors argued that this should not affect the continuum limit of the model.
A similar idea in 2+1 dimensions was considered in \cite{konopka}, where it was also suggested to add new
elementary building blocks to CDT, which extend over two slabs of the foliation instead of one. This study
did not include details of how the path integral of the corresponding generalization of CDT quantum gravity should
be formulated or simulated.
The remainder of this article is structured as follows.
We begin with a brief review of Causal Dynamical Triangulations in Sec.\ \ref{cdt}. In Sec.\ \ref{sec:ncdt_2d} we discuss
aspects of CDT without preferred foliation in 1+1 dimensions, to illustrate the basic geometric idea behind the enlarged model.
Sec.\ \ref{sec:nfct21} contains a detailed study of the kinematical aspects of nonfoliated CDT
in 2+1 dimensions. Sec.\ \ref{sec:actions} deals with actions and the Wick rotation, and
Sec.\ \ref{sec:numsetup} summarizes the new numerical set-up. This includes
an overview of the Monte Carlo moves in 2+1 dimensions, a prescription for how a notion of time can be
defined on the quantum ensemble, and how the corresponding volume profiles can be extracted.
In Sec.\ \ref{sec:explo} we explore the phase diagram of the model numerically.
In Sec.\ \ref{sec:tetdist} we study distributions of tetrahedra as a function of the coupling constants, which allows us
to understand how foliated typical configurations are, in the sense of regular CDT.
The results of our analysis of the volume profiles and their matching to a de Sitter universe are presented in
Sec.\ \ref{sec:voldist}, and our conclusions in Sec.\ \ref{sec:conclusions}. --
Further details on the numerical implementation as well as a documentation of the relevant software
can be found in \cite{thesis}.
\section{Review of Causal Dynamical Triangulations}
\label{cdt}
To set the stage for our subsequent generalization, we recall in the following some elements of Causal Dynamical
Triangulations, mainly based on the original literature on CDT in 1+1 \cite{al}, 2+1 \cite{3dcdt} and
3+1 dimensions \cite{cdt4d,desitter}.
For more extensive reviews and lecture notes on CDT we refer the interested reader
to \cite{cdtreviews,physrep,cdtlectures}.
The central object of interest in the CDT approach to quantum gravity is the gravitational path integral,
which in the continuum can be written formally as
\begin{equation}
\label{eq:pigrav}
Z(G,\Lambda)=\hspace{-.6cm}\mathop{\int}_{\mathrm{geometries\, [g]}}\hspace{-.7cm}{\mathcal{D}[g]}\exp(iS_{\mathrm{EH}}[g]),
\end{equation}
where $S_{\mathrm{EH}}[g]$ is the Einstein-Hilbert action, written as functional of the metric $g$, $\mathcal{D}[g]$ is a measure
on the space of geometries (the space of equivalence classes $[g]$ of metrics under the action of the diffeomorphism group),
$G$ is Newton's constant and $\Lambda$ is the cosmological constant.
To define the path integral properly it needs to be regularized, which in CDT is done by performing the ``sum over histories"
(\ref{eq:pigrav}) over a set of piecewise flat, simplicial Lorentzian geometries -- in other words, triangulations -- effectively
discretizing the curvature degrees of freedom of spacetime.
The way in which triangulations encode curvature is illustrated best in two dimensions.
In the Euclidean plane, consider a flat disc consisting of six equilateral triangles which share a central vertex, and
remove one of the triangles (Fig.\ \ref{fig:reggecurvature}, left). By identifying the opposite sides of the gap thus created,
the piecewise flat disc acquires
nontrivial (positive Gaussian) curvature, whose magnitude is equal to the deficit angle $\pi/3$ at the vertex
(Fig.\ \ref{fig:reggecurvature}, right).
This also coincides with the rotation angle undergone by a two-dimensional vector parallel-transported around the vertex, and is
therefore an {\it intrinsic} property of the two-dimensional disc. The principle of encoding curvature through deficit
angles (located at subsimplices of dimension $d-2$ in a $d$-dimensional triangulation) works in any dimension and
for any metric signature.
\begin{figure}[t]
\centerline{\scalebox{0.55}{\includegraphics{reggecurvature.eps}}}
\caption{A disc in the flat Euclidean plane is triangulated in terms of six equilateral triangles,
one of which is subsequently removed (left).
Regluing the remaining triangles one obtains a disc with positive curvature, characterized by a deficit angle
$\theta=\pi/3$ and independent of the embedding space (right).}
\label{fig:reggecurvature}
\end{figure}
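The deficit-angle construction just described is simple enough to verify directly. The following minimal Python sketch (the function name \texttt{deficit\_angle} is ours, introduced for illustration) computes $\delta = 2\pi - n\cdot\pi/3$ for a vertex shared by $n$ equilateral triangles:

```python
import math

def deficit_angle(n_triangles):
    """Deficit angle at a vertex shared by n flat equilateral triangles.

    Each equilateral triangle contributes an interior angle of pi/3,
    and a flat neighbourhood requires the full angle 2*pi, i.e. six
    triangles; any other number leaves a deficit (or surplus).
    """
    return 2.0 * math.pi - n_triangles * math.pi / 3.0

print(deficit_angle(5))   # pi/3 > 0: the positively curved disc of the text
print(deficit_angle(6))   # 0: flat
print(deficit_angle(7))   # -pi/3 < 0: negative curvature
```

For $n=5$, the case obtained by removing one triangle from the flat disc of six, this reproduces the deficit angle $\pi/3$ quoted above.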
Let us review the difference between the (Euclidean) DT and the (Lorentzian) CDT path integral, again for
simplicity in dimension two. Recall that the geometric properties of a flat triangle (or, in higher dimensions, a flat simplex)
are com\-pletely determined by the lengths of its edges,\footnote{To treat space-, time- and lightlike edges on the same footing, it is
convenient to work with the {\it squared} edge lengths.} including the {\it signature} of the flat metric in the building
block's interior.
In the Euclidean DT approach in two dimensions one works with a single type of building block, an
equilateral triangle of Euclidean signature, all of whose edges have some fixed spacelike length,
the same length for all triangles in the
triangulation. As we have seen above, the number of such triangles around each interior vertex of the triangulation characterizes
the local curvature at that vertex. In this way the formal integral over curved geometries in the continuum path integral (\ref{eq:pigrav})
is turned into a sum over triangulations, typically subject to some manifold conditions, which ensure that the triangulation
looks like a two-dimensional space everywhere.
\begin{figure}[t]
\centerline{\scalebox{0.7}{\includegraphics{dt2d.eps}}}
\caption{Two-dimensional triangulations: from Euclidean DT, consisting of equilateral triangles with
Euclidean signature (left), and from Lorentzian CDT, constructed from Minkowskian
triangles with one spacelike and two timelike links (right).
In both cases, curvature is present at all interior vertices whose coordination number (number of triangles meeting at the vertex) is
not six. Timelike links are shown in red, spacelike ones in blue.}
\label{fig:dt2d}
\end{figure}
By contrast, the flat triangles used in CDT quantum gravity have Lorentzian signature, something that cannot be achieved for
equilateral triangles. The standard choice of an elementary CDT building block in 1+1 dimensions is one whose base
has squared length $\ell_s^2>0$ (and therefore is spacelike) and whose remaining two edges both have squared
length $\ell_t^2<0$ (and therefore are timelike). From the point of view of triangulating Lorentzian spacetimes,
one could in principle have chosen $\ell_t^2$ to be space- or lightlike, but
then CDT's prescription of a Wick rotation -- to be described below -- would no longer be applicable.
Fig.\ \ref{fig:dt2d} illustrates the fundamental building blocks of two-dimensional DT (left) and CDT (right), and how they are
put together to obtain piecewise flat mani\-folds of Euclidean and Lorentzian signature. The two-dimensional graphs correctly
represent the neighbourhood relations of adjacent triangles, but not the actual length assignments, since it is impossible to
flatten out a curved surface and preserve the edge lengths at the same time. Note that in the Lorentzian case the spacelike
edges form a sequence of one-dimensional simplicial submanifolds, which can be interpreted as hypermanifolds of
constant time $t=0,1,2, ...$, endowing each triangulation with a distinguished notion of (proper) time. This time
can be extended continuously to the interior of all triangles \cite{dl}.
The ensemble of spacetimes forming the carrier
space of the CDT path integral consists of all triangulations built from a fixed number $t_{tot}$ of triangulated strips $\Delta t=1$,
where each strip is an arbitrary sequence of up- and down-triangles between times $t$ and $t+1$. The topology of space
(usually chosen to be $S^1$) is not allowed to change in time, that is, branchings into multiple $S^1$-universes are
forbidden. It was shown in \cite{benedettihensonmatrix} that the global foliation of a 1+1 dimensional
CDT spacetime into such strips can
be understood as consequence of a {\it local} regularity condition, namely, that precisely two spacelike edges be incident
on any vertex. Note that these geometries are causally well behaved and obey a piecewise linear analogue of
global hyperbolicity.
As already described in the introduction, the idea of creating individual path integral configurations with a
well-behaved causal structure by imposing a preferred foliation on the underlying simplicial manifold
is also realized in CDT in higher dimensions.
Fig.\ \ref{fig:foliation} shows part of a 2+1 dimensional CDT spacetime. Each leaf of the foliation, located at integer $t$, forms a
two-dimensional triangulation of the same fixed topology ${}^{(2)}\Sigma$, and consists of equilateral spacelike triangles
with link length $\ell_s$. Adjacent triangulated spatial hypermanifolds are connected using Lorentzian tetrahedra to form
a 2+1-dimensional simplicial manifold with spacetime topology $[0,1]\times {}^{(2)}\Sigma$, which again is a sequence of $t_{tot}$
triangulated ``slabs" $\Delta t=1$. Each such geometry comes with a time label $t$ which can be defined continuously
throughout the triangulation, and for integer values coincides with the discrete labeling of the simplicial leaves just
described \cite{dl}. For reasons of simplicity, in the simulations time is often compactified.
The topology then becomes $S^1\times {}^{(2)}\Sigma$.
\begin{figure}[t]
\centerline{\scalebox{0.6}{\includegraphics{foliation.eps}}}
\caption{Two adjacent spatial slices in a piece of foliated CDT triangulation in 2+1 dimensions.
Tetrahedra of both types are shown.}
\label{fig:foliation}
\end{figure}
The fundamental building blocks of three-dimensional CDT quantum gravity are flat Minkowskian tetrahedra, whose
geometric properties are determined by their edge lengths. As in 1+1 dimensions, one considers two different
link lengths, one spacelike with squared length $\ell_s^2$ and one timelike with $\ell_t^2=-\alpha \ell_s^2$.
Without loss of generality we have introduced here a positive constant $\alpha$, which quantifies the relative magnitude
of space- and timelike edge lengths and in what follows will be referred to as the \emph{asymmetry parameter}.
CDT path integral configurations are assembled from two different tetrahedron types, which can be distinguished
by their orientation with respect to the preferred foliation.
As can be seen from Fig.\ \ref{fig:foliation}, spacelike edges of a tetrahedron are always contained in a spatial
submanifold of integer $t$, whereas timelike edges always connect different spatial slices. The (3,1)-tetrahedron
has three space\-like links forming a spacelike triangle, while the (2,2)-tetrahedron contains
only two spacelike links. The notation $(i,j)$ indicates that $i$ vertices of the tetrahedron are located on one spatial slice
and the remaining $j$ vertices on an adjacent one.
The kinematical set-up of CDT quantum gravity in 3+1 dimensions can be defined in a similar way.
The leaves of the preferred foliation are three-dimensional
Euclidean triangulations of fixed topology, and neighbouring slices are connected using Minkowskian four-simplices.
Path integral configurations are simplicial manifolds assembled from
two types of these building blocks, denoted by (4,1) and (3,2), depending on how their vertices are distributed
among adjacent spatial slices. By labeling the foliation with increasing integers we again get a time variable for free,
with every vertex being assigned a definite discrete time label.
Another ingredient that needs to be specified to make the model complete is an implementation
on piecewise flat geometries of the Einstein-Hilbert
action in the path integral (\ref{eq:pigrav}), which in CDT quantum gravity is done following
Regge's prescription
\cite{regge}. One should point out that models of the type we are studying tend to be very robust with respect to
changes in the precise form of the action (which obviously is subject to discretization ambiguities) and of the configuration
space, in the sense that a wide range of different regularizations and kinematical ingredients will lead to the same
continuum physics, if the latter can be defined meaningfully. An exception to this is of course the imposition of
causality constraints, which distinguishes CDT from DT quantum gravity and leads in all dimensions studied so far
to genuinely different continuum results. In two dimensions, this can be demonstrated exactly, for example, by
comparing specific observables and critical exponents, since the CDT model
can be solved analytically \cite{al}. In higher dimensions, information about the behaviour of observables comes
primarily from numerical simulations: three-dimensional CDT quantum gravity has only been solved partially and
for restricted classes of triangulations \cite{blz}, while in four dimensions analytical methods are mostly unavailable
and one must resort to Monte Carlo simulations to extract physical results.
\begin{figure}[t]
\centerline{\scalebox{0.6}{\includegraphics{fourtriangles.eps}}}
\caption{The four triangle types which can be constructed using just two link lengths, spacelike (blue) and timelike (red).
Only the two Minkowskian triangles (of types $sst$ and $tts$) at the centre have the correct signature for
triangulating 2d Lorentzian spacetimes. -- The green lines inside the Lorentzian triangles indicate light rays
through the corner vertices.}
\label{fig:fourtriangles}
\end{figure}
In order to analyze the dynamics of CDT quantum gravity using such simulations, a Wick rotation must be performed
to convert the complex path integral amplitudes to real Boltzmann weights. This can be achieved by performing an
analytic continuation of the asymmetry parameter, by rotating it in the lower half of the complex plane such that
$\alpha$ is mapped to $-\alpha$ \cite{3d4d}. As a consequence, the gravitational path integral
becomes a statistical partition function of the form
\begin{equation}
\label{partition}
Z=\sum_{T\in \cal{C}}\frac{1}{C(T)}\exp(-S_{\mathrm{Regge}}^{\mathrm{eucl}}(T)),
\end{equation}
where $\cal{C}$ is the space of all causal, Lorentzian triangulations $T$, $S_{\mathrm{Regge}}^{\mathrm{eucl}}$
the Euclideanized
Regge action and $1/C(T)$ the discrete analog of the path integral measure, with $C(T)$ denoting the order of the
automorphism group of $T$. In Sec.\ \ref{sec:actions} below we will derive and discuss the explicit
functional form of the Regge implementation of the three-dimensional Einstein-Hilbert action in terms of the
triangulation data and the coupling constants of the nonfoliated CDT model.
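To illustrate how Boltzmann weights of the form (\ref{partition}) enter the simulations, here is a generic Metropolis acceptance test, a sketch of the standard algorithm rather than of the actual CDT move code (the function name and interface are ours): a proposed update changing the Euclideanized action by $\Delta S$ is accepted with probability $\min(1, e^{-\Delta S})$, which drives the Markov chain toward the distribution $\propto e^{-S_{\mathrm{Regge}}^{\mathrm{eucl}}}$.

```python
import math
import random

def metropolis_accept(delta_s, rng=random.random):
    """Accept a proposed Monte Carlo move with probability
    min(1, exp(-delta_s)), where delta_s is the change in the
    Euclideanized (Regge) action caused by the move."""
    return delta_s <= 0.0 or rng() < math.exp(-delta_s)

# Moves that lower the action are always accepted; unfavourable moves
# are accepted with an exponentially suppressed probability.
print(metropolis_accept(-1.0))                    # True
print(metropolis_accept(5.0, rng=lambda: 0.99))   # False: 0.99 >= exp(-5)
```

In practice the acceptance test is combined with the detailed-balance factors of the specific move set; the sketch above shows only the role of the action difference.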
\section{Relaxing the foliation: 1+1 dimensions}
\label{sec:ncdt_2d}
As a warm-up for the three-dimensional case, we will in this section illustrate our general strategy for relaxing the distinguished
foliation by discussing the situation in 1+1 dimensions. The key idea is to add new elementary Minkowskian building blocks, while
sticking to two types of links, one spacelike and one timelike, where we will continue to use the notation $\ell_s^2$ and
$\ell_t^2\! =\!-\alpha \ell_s^2$ for their squared lengths. Fig.\ \ref{fig:fourtriangles} shows the four types of triangles which can be
built from these two link types. By calculating the metric inside the triangles one finds that there are exactly two which have
Lorentzian signature $(-+)$, the two at the centre of the figure. We will call them the $sst$- and the $tts$-triangle respectively, in
reference to the spacelike ($s$) and timelike ($t$) edges they contain. Note that in standard CDT in two dimensions only the
$tts$-triangle is used.
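The statement that exactly two of the four triangle types are Lorentzian can be checked numerically. In the following minimal Python sketch (helper names \texttt{triangle\_gram} and \texttt{signature} are ours), vertex $A$ is placed at the origin, so that the flat metric inside triangle $ABC$ is the Gram matrix of the edge vectors $AB$ and $AC$, fixed by the squared edge lengths through the polarization identity:

```python
import numpy as np

def triangle_gram(s_ab, s_ac, s_bc):
    """Gram matrix of the flat metric in triangle ABC, from squared
    edge lengths (convention: spacelike > 0, timelike < 0)."""
    off = (s_ab + s_ac - s_bc) / 2.0      # polarization identity
    return np.array([[s_ab, off], [off, s_ac]])

def signature(g):
    """Sorted eigenvalue signs of the Gram matrix = metric signature."""
    return tuple(sorted(int(s) for s in np.sign(np.linalg.eigvalsh(g))))

alpha = 0.8                                # any asymmetry parameter > 0
ls2, lt2 = 1.0, -alpha                     # squared lengths l_s^2, l_t^2

print(signature(triangle_gram(ls2, ls2, ls2)))   # sss: (1, 1), Euclidean
print(signature(triangle_gram(lt2, ls2, ls2)))   # sst: (-1, 1), Lorentzian
print(signature(triangle_gram(ls2, lt2, lt2)))   # tts: (-1, 1), Lorentzian
print(signature(triangle_gram(lt2, lt2, lt2)))   # ttt: (-1, -1), not Lorentzian
```

Only the $sst$- and $tts$-triangles come out with signature $(-+)$, in agreement with Fig.\ \ref{fig:fourtriangles}.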
The use of Lorentzian building blocks is not a sufficient condition for the triangulation to have a locally well-defined causal
structure; we also need to check that we obtain well-defined light cones at points where triangles
are glued together. If the gluing happens according to the standard rule of only identifying links of the same type
(spacelike with
spacelike, timelike with timelike), the only local causality violations\footnote{Whenever we talk about geometries being
{\it causal}, what we have in mind is that they possess a well-behaved {\it causal structure}. This should
not be confused with the notion of causality in standard (quantum) field theory, which refers to the behaviour of matter
fields on a given background that typically already comes with a fixed causal structure.} can occur
at the vertices of the triangulation.
Counting past and future light cones separately, the point is that one may obtain more or fewer than the required
two light cones at a vertex, as illustrated by the local neighbourhoods depicted in Fig.\ \ref{fig:causality2d}.
Local causality implies crossing exactly two light cones when going around a vertex or, equivalently, crossing
exactly four lightlike lines emanating radially from the vertex.
\begin{figure}[t]
\centerline{\scalebox{0.65}{\includegraphics{causality2d.eps}}}
\caption{Three examples of vertex neighbourhoods in 1+1 dimensions. Causality is violated at the central vertex whenever
the number of light cones is not two (left and centre). At a causally well-behaved vertex, one crosses exactly two light cones
(equivalently, four lightlike directions) when going around the vertex (right).}
\label{fig:causality2d}
\end{figure}
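This counting condition lends itself to a simple combinatorial check. The sketch below (function names are ours) rests on an assumption suggested by Fig.\ \ref{fig:fourtriangles}: a triangle corner bounded by one spacelike and one timelike edge contains exactly one lightlike direction through the vertex, while a corner bounded by two edges of the same type contains none. Under this assumption, crossing exactly four lightlike lines when circling a vertex translates into the cyclic sequence of edge types around the vertex alternating exactly four times between spacelike and timelike:

```python
def type_changes(cyclic_edge_types):
    """Number of s <-> t alternations in the cyclic sequence of edge
    types ('s' or 't') around a vertex, in circular order."""
    n = len(cyclic_edge_types)
    return sum(cyclic_edge_types[i] != cyclic_edge_types[(i + 1) % n]
               for i in range(n))

def vertex_causality_ok(cyclic_edge_types):
    # Exactly four lightlike crossings <=> exactly four alternations,
    # under the corner assumption stated in the lead-in above.
    return type_changes(cyclic_edge_types) == 4

print(vertex_causality_ok(list("sttstt")))   # True: a regular CDT-like vertex
print(vertex_causality_ok(list("tttttt")))   # False: source/sink-like vertex
print(vertex_causality_ok(list("ststst")))   # False: six alternations
```

A check of this kind operates purely on the combinatorial data of the triangulation and does not require any embedding.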
As a next step, we will consider the global causal structure of individual triangulated manifolds
satisfying the local vertex causality condition everywhere. This global structure will in general depend on the chosen topology.
For simplicity, we will restrict ourselves to the cases $[0,1]\times S^1$ (space is a compact circle)
and $[0,1]\times [0,1]$ (space
is a closed interval), where the initial and final boundary are assumed to be spacelike, and any other boundaries (in the second
case) timelike. We will call such a spacetime globally causal if it does not contain any closed timelike curves.
A ``timelike curve" for our purposes will be a sequence of oriented, timelike links in a triangulation. Since the interior of
every Minkowskian triangle has a well-defined light cone structure, a choice of orientation (i.e. a choice of which one
is the past and which one the future light cone) induces an orientation on its timelike edges, which can be captured
by drawing a future-pointing arrow onto the edge. Conversely, these arrow assignments fix the orientation of the
triangle uniquely. To follow the ``flow of time" in a triangulation it is convenient to also associate a future-pointing
arrow with each spacelike edge, which is drawn perpendicular to the edge, see Fig.\ \ref{fig:flowoftime2d}.
Choosing a consistent orientation for all building blocks in standard CDT in this way is completely straightforward:
each triangle sits in a strip between discrete times $t$ and $t+1$, which fixes its orientation uniquely. Independent
of the spatial boundary conditions, there are no closed timelike curves (unless we impose periodic boundary
conditions {\it in time}, which trivially makes any timelike curve closed, a situation we are not considering here).
\begin{figure}[t]
\centerline{\scalebox{0.55}{\includegraphics{flowoftime2d.eps}}}
\caption{Two Lorentzian triangles with consistent assignments of future-pointing arrows to their edges,
as explained in the text (left).
The time orientation of a given triangle determines the time orientation of its direct neighbours (right).}
\label{fig:flowoftime2d}
\end{figure}
By contrast, the situation in nonfoliated CDT
is slightly more involved. Given a time-oriented triangle, the orientation of a neighbouring triangle that shares an
edge with the first one is uniquely determined by consistency. It is easy to see that when vertex causality is violated
(like in the example of Fig.\ \ref{fig:causality2d}, left), inductively assigning orientations in this way will fail -- i.e. lead to
contradictions -- even for a local vertex neighbourhood. If vertex causality is satisfied, one can show that for noncompact
spatial topology there are no closed timelike curves \cite{hoekzema}. For compact spatial slices, where the spacetime
topology is that of a cylinder, one can construct explicit geometries which exhibit noncontractible, closed timelike curves.
Of course, we do not know a priori whether the presence of closed timelike curves in individual path integral configurations
has any influence on the continuum limit of the model, and perhaps leads to undesirable continuum properties.
It appears that in the context of our present investigation this issue is largely circumvented. Although the causality
conditions we impose are of a local nature, and may admit the presence of closed timelike curves, it turns out that
the geometries dominating the sum over histories dynamically retain a weak degree of foliation
(see Sec.\ \ref{sec:tetdist} below), which suggests that such curves are not abundant. We have not seen
closed timelike curves in random samples, but have not systematically tested for their presence either.
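At its combinatorial core, the inductive assignment of time orientations described above is a consistency propagation over the dual graph: each pair of glued triangles fixes a relative orientation, and an obstruction shows up as a contradiction along a cycle. A minimal Python sketch of this propagation (the encoding of gluings via relative signs \texttt{rel} is our own simplification, not the paper's data structure):

```python
from collections import deque

def orient(n_triangles, glued):
    """Attempt to time-orient n_triangles consistently.

    glued: list of (i, j, rel) tuples, meaning triangles i and j share
    an edge, and consistency forces equal (rel = +1) or opposite
    (rel = -1) orientation labels.  Returns a list of labels +1/-1,
    or None if a contradiction arises (no global time orientation).
    """
    adj = [[] for _ in range(n_triangles)]
    for i, j, rel in glued:
        adj[i].append((j, rel))
        adj[j].append((i, rel))
    label = [0] * n_triangles                 # 0 = not yet oriented
    for start in range(n_triangles):
        if label[start]:
            continue
        label[start] = 1                      # fix a seed orientation
        queue = deque([start])
        while queue:                          # breadth-first propagation
            i = queue.popleft()
            for j, rel in adj[i]:
                want = label[i] * rel
                if label[j] == 0:
                    label[j] = want
                    queue.append(j)
                elif label[j] != want:
                    return None               # inconsistent gluing
    return label

print(orient(2, [(0, 1, 1)]))                          # [1, 1]
print(orient(3, [(0, 1, 1), (1, 2, 1), (2, 0, -1)]))   # None: frustrated cycle
```

In standard CDT the strip structure makes the propagation trivially consistent; in the nonfoliated model the check must be performed explicitly, as the frustrated three-cycle in the example shows.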
\begin{figure}[t]
\centerline{\scalebox{1.1}{\includegraphics{sourcesink.eps}}}
\caption{A source and a sink of time in 1+1 dimensions. In both cases, vertex causality is violated at
the central vertex.}
\label{fig:sourcesink2d}
\end{figure}
Anticipating the choice of boundary conditions we will make in 2+1 dimensions, we
may relax the local causality constraint slightly by allowing for the presence of an isolated
``source" and ``sink" of time. By this we mean a vertex where only timelike links meet, all of them either
time-oriented away from the
vertex (source) or toward it (sink), as illustrated by Fig.\ \ref{fig:sourcesink2d}. For compact spatial boundary
conditions, choosing a source and a sink as initial and final (degenerate spatial) boundaries
will convert the cylinder into a
spherical $S^2$-spacetime topology. A similar choice of boundary conditions in 2+1 dimensions will lead to an
$S^3$-spacetime topology, with the source and sink forming the south and north pole of the sphere, as we will
see later.
\begin{figure}[b]
\centerline{\scalebox{0.4}{\includegraphics{bubblestrip.pdf}}}
\caption{A bubble (left) and a strip configuration (right) in 1+1 dimensions.}
\label{fig:bubblestrip}
\end{figure}
There is a particular substructure of the triangulations, called a {\it bubble} \cite{hoekzema},
which involves the newly
added building blocks and is helpful in analyzing the geometry of nonfoliated CDT.
In 1+1 dimensions it is simply a pair of $sst$-triangles with a chain of $tts$-triangles in between.
This is the general structure of a two-dimensional connected region which is bounded by a closed loop of spacelike links
and whose interior contains only timelike links, as shown in Fig.\ \ref{fig:bubblestrip}, left (we are assuming that vertex causality is
satisfied everywhere). This should be contrasted with the structure of a strip, which likewise denotes a
two-dimensional piece of a triangulation bounded by spacelike links and without spacelike links in its interior, but
whose boundary is disconnected (Fig.\ \ref{fig:bubblestrip}, right). CDT quantum gravity in 1+1 dimensions has only strips, whereas
the version without distinguished foliation has both strips and bubbles. Analogous structures will play a role
in our analysis in 2+1 dimensions too, where the interior structure of a bubble can also become more complicated.
\section{Relaxing the foliation: Kinematics in 2+1 dimensions}
\label{sec:nfct21}
\subsubsection*{Local causality conditions}
Following an analogous procedure in 2+1 dimensions to arrive at a model without distinguished simplicial hypermanifolds,
we first must determine which flat tetrahedra -- again only built from two types of edge lengths -- give rise to
Minkowskian building blocks of the correct Lorentzian signature $(-++)$. Fig.\ \ref{fig:tetrahedra} shows all types of
tetrahedra which can be constructed using space- and timelike links with fixed squared lengths $\ell_s^2$ and
${\ell_t^2\! =\! -\alpha \ell_s^2}$ respectively. In the rest of this document we will set $\ell_s\! =\! 1$.
By calculating the metric in the interior of each tetrahedron type, one finds that only $T_2$, $T_3$, $T_5$ and $T_9$ have
the required signature for all values $\alpha\! >\! 0$ of the asymmetry parameter, and type $T_7$ only for $0\! <\! \alpha\! <\! 1$.
Note that standard CDT quantum gravity only uses the tetrahedra $T_5$ and $T_9$, in Sec.\ \ref{cdt} referred to as (3,1)-
and (2,2)-tetrahedra respectively.
\begin{figure}[t]
\centerline{\scalebox{0.55}{\includegraphics{tetrahedra.eps}}}
\caption{All tetrahedra in 2+1 dimensions that can be constructed from time- and spacelike links of fixed squared length,
allowing for any signature. The tetrahedra highlighted in yellow have the correct Lorentzian signature; for
the $T_7$-tetrahedron this only holds for $\alpha<1$. The link labeling shown for the first tetrahedron will be used
for the other tetrahedra too.}
\label{fig:tetrahedra}
\end{figure}
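The signature computations referred to above can be reproduced by extending the Gram-matrix construction from triangles to tetrahedra. The Python sketch below (helper names are ours) is restricted to the two standard CDT building blocks $T_5$ and $T_9$, whose edge assignments are described in Sec.\ \ref{cdt}; the assignments for $T_2$ and $T_3$ follow Fig.\ \ref{fig:tetrahedra} and are not reproduced here:

```python
import numpy as np

def gram(sq):
    """Gram matrix of a flat tetrahedron ABCD from its squared edge
    lengths sq['XY'] (spacelike > 0, timelike < 0), using vertex A as
    origin and the edge vectors AB, AC, AD as a basis."""
    v = "BCD"
    g = np.empty((3, 3))
    for i, p in enumerate(v):
        for j, q in enumerate(v):
            s_pq = 0.0 if p == q else sq[min(p, q) + max(p, q)]
            g[i, j] = (sq["A" + p] + sq["A" + q] - s_pq) / 2.0
    return g

def signature(g):
    """Sorted eigenvalue signs of the Gram matrix = metric signature."""
    return tuple(sorted(int(s) for s in np.sign(np.linalg.eigvalsh(g))))

alpha = 0.8                                   # any alpha > 0
s, t = 1.0, -alpha                            # l_s^2 = 1, l_t^2 = -alpha

# (3,1)-tetrahedron T5: spacelike triangle ABC, apex D attached by
# three timelike links.
t5 = {"AB": s, "AC": s, "BC": s, "AD": t, "BD": t, "CD": t}
# (2,2)-tetrahedron T9: spacelike links AB and CD, four timelike
# cross links.
t9 = {"AB": s, "CD": s, "AC": t, "AD": t, "BC": t, "BD": t}

print(signature(gram(t5)))   # (-1, 1, 1): Lorentzian signature (-++)
print(signature(gram(t9)))   # (-1, 1, 1): Lorentzian signature (-++)
```

Scanning such a check over a range of $\alpha$ values is how one confirms numerically that $T_5$ and $T_9$ are Lorentzian for all $\alpha>0$.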
In the present work we will for reasons of simplicity
investigate the version of the model where causal spacetimes are assembled from the tetrahedral
types $T_2$, $T_3$, $T_5$ and $T_9$ (without $T_7$). As we will see, this already serves our purpose of breaking up
the fixed foliated structure.
\begin{figure}[t]
\centerline{\scalebox{0.7}{\includegraphics{causality3d.eps}}}
\caption{Dihedral angle at the link with label ``2" of a $T_2$-tetrahedron (left). Star of a link, all of whose
tetrahedra contribute dihedral angles at the link (centre). Two-dimensional cut perpendicular to the link. In the case
depicted, the space is Lorentzian, with green lines representing light rays originating at the centre (right).}
\label{fig:causality3d}
\end{figure}
In 2+1 dimensions violations of local causality -- which should therefore be forbidden by the gluing rules --
can in principle occur at the links and the vertices of a triangulation. To check whether the light cone structure at
a given link is well-behaved, it is sufficient to consider the geometry of a two-dimensional piecewise flat
surface orthogonal to the link at its midpoint. This geometry is completely characterized by
the set of tetrahedra sharing the link, the so-called star of the link (Fig.\ \ref{fig:causality3d}, centre).
Each tetrahedron in the star contributes a dihedral angle, defined by the intersection of the tetrahedron with
the plane perpendicular to the link (Fig.\ \ref{fig:causality3d}, left).
The plane segments spanned by all the dihedral angles associated to the given link form a new plane\footnote{This is
a slight misnomer; in general, this ``plane" will not be flat, because the vertex at the centre will carry a nonvanishing
deficit angle.}, as shown in Fig.\ \ref{fig:causality3d} (right).
We can distinguish between two cases. If the link at the centre of the star is timelike, the metric of the plane
has Euclidean signature, all dihedral angles are Euclidean, and there are no further causality conditions to be satisfied.
If on the other hand the link is spacelike, the orthogonal plane is Lorentzian, and so are the dihedral angles. Like in the
1+1 dimensional case discussed in the previous section, we must then require that there is exactly one
pair of light cones at the central vertex, and that we encounter exactly four lightlike directions when
circling around it. We say that the triangulation satisfies \emph{link causality} if this condition is satisfied for
every spacelike link.
\begin{figure}[t]
\centerline{\scalebox{0.65}{\includegraphics{vertexcausality.eps}}}
\caption{In a triangulation obeying vertex causality, intersecting the boundary of a spherical vertex neighbourhood
with the light cones at the vertex $V$ results in two disconnected circles (left).
Part of the surface triangulation of a unit ball around $V$,
showing timelike radial links as red, spacelike ones as blue and light cone crossings as green dots (right).}
\label{fig:vertexcausality}
\end{figure}
Link causality guarantees that light cones everywhere in the triangulation are regular, except possibly at vertices.
Intersecting the light cone(s) at a vertex $V$ with the surface of a unit ball around $V$, we obtain
two disconnected circles if and only if local causality holds at $V$, see Fig.\ \ref{fig:vertexcausality} (left).
In terms of the triangulated surface $\cal S$ of the unit neighbourhood around $V$, vertex causality can be characterized as
follows. Mark the end of a timelike link between $V$ and $\cal S$ by a red dot and that of a spacelike one by a blue dot.
In addition, whenever the light cone through $V$ crosses a link on $\cal S$, mark the link with a green dot.
Recalling the situation depicted in Fig.\ \ref{fig:fourtriangles}, it is clear that a green dot will always occur on a
surface link which connects a red and a blue dot. If we cut all links that are marked with a green dot, the surface
triangulation will break up into a number of connected components.
If exactly two of the components thus obtained contain red dots and exactly one component contains blue dots, we say that
\emph{vertex causality} holds at $V$. If this is true for all vertices, we say that the triangulation satisfies
vertex causality.
We have not found any Monte Carlo moves which destroy vertex causality but maintain link causality.
Also, we have not been able to explicitly construct a triangulation that satisfies link causality and violates vertex
causality, but we do not currently have a proof that link causality implies vertex causality.
In order to compute the explicit action for the generalized CDT model of 2+1 dimensional quantum gravity, we will need the
values of all dihedral angles. As usual, we will use Sorkin's complex angle prescription \cite{sorkin} for the latter, which
conveniently keeps track of both Euclidean and Lorentzian angles. The analytic expressions for the cosines and sines of the
dihedral angles are listed in Table\ \ref{tab:tetraangles}, from which the angles can be computed uniquely. Closer
inspection of the geometry of the tetrahedra reveals that a dihedral angle contains a light cone crossing whenever
the two triangles bounding the angle are a pair of a spacelike (Euclidean) and a Lorentzian triangle.
Local link causality therefore implies that the triangle type changes exactly four times between spacelike and non-spacelike
when we circle around a spacelike link once. The number of light cone crossings in the case of a Lorentzian angle is
also contained in the table.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.6}
\begin{tabular}{|c|c|c|c|c|}
\hline
tetrahedron & links & $\cos(\Theta)$ & $\sin(\Theta)$ & light cone crossings\\
\hline
\hline
$T_2$ & $1,3,5,6$ & $\frac{i}{\sqrt{3}}\sqrt{\frac{\alpha}{4+\alpha}}$ & $\frac{2}{\sqrt{3}}\frac{\sqrt{3+\alpha}}{\sqrt{4+\alpha}}$ & $1$ \\
& $2$ & $1+\frac{2\alpha}{3}$ & $\frac{2i\sqrt{\alpha(3+\alpha)}}{3}$ & $0$ \\
& $4$ & $\frac{2+\alpha}{4+\alpha}$ & $\frac{2\sqrt{3+\alpha}}{4+\alpha}$ & $-$ \\
\hline
$T_3$ & $1$ & $\frac{i(1+2\alpha)}{\sqrt{3+12\alpha}}$ & $2\sqrt{\frac{1+\alpha(4+\alpha)}{3+12\alpha}}$ & $1$ \\
& $2,3$ & $\frac{2+\alpha}{\sqrt{3}\sqrt{-\alpha(4+\alpha)}}$ & $\frac{2}{\sqrt{3}}\sqrt{\frac{1+\alpha(4+\alpha)}{\alpha(4+\alpha)}}$ & $1$ \\
& $4,5$ & $\frac{1}{\sqrt{17+\frac{4}{\alpha}+4\alpha}}$ & $\frac{2}{\sqrt{4+\frac{\alpha}{1+\alpha(4+\alpha)}}}$ & $-$ \\
& $6$ & $\frac{2+\alpha(4+\alpha)}{\alpha(4+\alpha)}$ & $-\frac{2i\sqrt{1+\alpha(4+\alpha)}}{\alpha(4+\alpha)}$ & $0$ \\
\hline
$T_5$ & $1,2,3$ & $-\frac{i}{\sqrt{3}\sqrt{1+4\alpha}}$ & $\frac{2}{\sqrt{3}}\frac{\sqrt{1+3\alpha}}{\sqrt{1+4\alpha}}$ & $1$ \\
& $4,5,6$ & $\frac{1+2\alpha}{1+4\alpha}$ & $\frac{2\sqrt{\alpha(1+3\alpha)}}{1+4\alpha}$ & $-$ \\
\hline
$T_9$ & $1,6$ & $\frac{3+4\alpha}{1+4\alpha}$ & $-\frac{2i\sqrt{2+4\alpha}}{1+4\alpha}$ & $0$ \\
& $2,3,4,5$ & $-\frac{1}{1+4\alpha}$ & $\frac{2\sqrt{2}\sqrt{\alpha(1+2\alpha)}}{1+4\alpha}$ & $-$ \\
\hline
\end{tabular}
\end{center}
\caption{Dihedral angles $\Theta$ for all tetrahedra types, given in terms of their trigonometric functions.
The link numbers refer to the numbering given
in Fig.\ \ref{fig:tetrahedra}. For Lorentzian angles, the number of light cone crossings is also given.}
\label{tab:tetraangles}
\end{table}
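As an elementary cross-check of the entries in Table\ \ref{tab:tetraangles}, one can verify numerically that each pair satisfies $\cos^2\Theta+\sin^2\Theta=1$ as a complex identity. The following minimal C++ sketch does this for three representative entries; the helper names are ours and purely illustrative:

```cpp
#include <cmath>
#include <complex>
#include <utility>

using cplx = std::complex<double>;

// |cos^2 + sin^2 - 1| for a tabulated dihedral angle; should vanish.
double pythagorean_defect(cplx c, cplx s) {
    return std::abs(c * c + s * s - 1.0);
}

// Tabulated (cos, sin) pairs at asymmetry parameter alpha (Lorentzian
// sector, alpha > 0), for three representative table entries.
std::pair<cplx, cplx> t2_outer(double a) {          // T_2, links 1,3,5,6
    const cplx i(0.0, 1.0);
    return { i / std::sqrt(3.0) * std::sqrt(a / (4.0 + a)),
             2.0 / std::sqrt(3.0) * std::sqrt(3.0 + a) / std::sqrt(4.0 + a) };
}
std::pair<cplx, cplx> t2_link2(double a) {          // T_2, link 2
    const cplx i(0.0, 1.0);
    return { cplx(1.0 + 2.0 * a / 3.0, 0.0),
             2.0 * i * std::sqrt(a * (3.0 + a)) / 3.0 };
}
std::pair<cplx, cplx> t5_spacelike(double a) {      // T_5, links 1,2,3
    const cplx i(0.0, 1.0);
    return { -i / (std::sqrt(3.0) * std::sqrt(1.0 + 4.0 * a)),
             2.0 / std::sqrt(3.0) * std::sqrt(1.0 + 3.0 * a) / std::sqrt(1.0 + 4.0 * a) };
}
```

Evaluating the defect at, say, $\alpha=1$ confirms that each pair lies on the complex unit circle to machine precision.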
Just like in 1+1 dimensions, choosing a time-orientation for a tetrahedron induces an orientation on its timelike links,
as well as on the normal to any of its spacelike triangles. It is operationally convenient to keep track of these data
in terms of future-oriented arrow assignments, as illustrated by Fig.\ \ref{fig:flowoftime3d}.
Again, the local causality conditions do not guarantee that the time orientation can be extended to the full triangulation.
In addition to these conditions, we will therefore require (and enforce by way of our computer algorithm)
that the complete triangulation can be time-oriented consistently.
\begin{figure}[b]
\centerline{\scalebox{0.70}{\includegraphics{flowoftime3d.eps}}}
\caption{The four fundamental tetrahedral building blocks, each equipped with one (out of two
possible) time orientations.}
\label{fig:flowoftime3d}
\end{figure}
\subsubsection*{Simplicial substructures}
In trying to understand the local geometry of nonfoliated CDT configurations and how it is affected by the Monte Carlo
moves defined in Sec.\ \ref{sec:numsetup} below, it is useful to isolate specific local substructures built from the
fundamental tetrahedra of Fig.\ \ref{fig:flowoftime3d}.
To start with, note that only tetrahedra of type $T_2$ and $T_3$ contain triangles with exactly two spacelike edges.
Furthermore, both tetrahedra have exactly two such triangles. If we glue two of them together along such a triangle,
the resulting simplicial complex again has two such triangles on its boundary.
Iterating this gluing procedure we end up with a chain of tetrahedra of type $T_2$ and $T_3$.
We conclude that in a triangulation without boundary, the set of all tetrahedra of type $T_2$ and $T_3$
necessarily organizes itself into a collection of closed rings. In a triangulation with boundary, open chains
are also possible.
\begin{figure}[t]
\centerline{\scalebox{0.6}{\includegraphics{rings.eps}}}
\caption{(a) A ring of $T_2$-tetrahedra, forming a simple bubble. (b) By inserting tetrahedra of type $T_3$,
we can form more complicated bubbles. (c) More general bubbles also contain other tetrahedra types.
(d) A ring of $T_3$-tetrahedra gives rise to a pinching, where two spatial discs meet in a single vertex.
(e) By adding two $T_2$-tetrahedra, the pinching is extended to a link.
(f) Bubbles and pinchings can occur in combination (timelike links are omitted here).}
\label{fig:rings}
\end{figure}
Using these simplicial substructures, we can construct three-dimensional analogues of the ``bubbles"
of Sec.\ \ref{sec:ncdt_2d} above, by which we will mean a connected piece of triangulation enclosed by a
surface made of only spacelike triangles, with no such triangles in its interior.
If a ring only contains tetrahedra of type $T_2$, we get a simple bubble,
consisting of two spatial discs with identical structure and a timelike link in its interior (Fig.\ \ref{fig:rings}a).
By inserting $T_3$-tetrahedra into a ring of $T_2$-tetrahedra we can form more complicated bubbles, as illustrated
by Fig.\ \ref{fig:rings}b. More general bubbles consist of an outer ring of $T_2$- and $T_3$-tetrahedra, enclosing one
or more tetrahedra of the other two types, like the one shown in Fig.\ \ref{fig:rings}c.
We can also consider a ring of $T_3$-tetrahedra, as depicted in Fig.\ \ref{fig:rings}d. The spacelike triangles marked in
yellow form a spatial disc, with a similar spacelike disc just below. Both discs meet in a single vertex, their
common centre; we will refer to this configuration as a {\it pinching} at that vertex.
This situation can be generalized by inserting tetrahedra of type $T_2$ into the $T_3$-ring, as shown in
Fig.\ \ref{fig:rings}e. The effect is that the two spatial discs now intersect in a link rather than a vertex.
Bubbles and pinchings can occur in combination to create even more complicated structures,
an example of which is shown in Fig.\ \ref{fig:rings}f. A feature of bubbles which we have not yet mentioned is
that they can self-overlap, in the sense that the spherical (or possibly higher-genus) surface bounding a bubble
may touch itself along some subset of the surface triangulation. As explained below,
we will exclude one kind of self-overlapping bubbles from our simulations,
namely, those that wrap nontrivially around the spatial two-sphere.
\subsubsection*{Kinematical constraints}
\label{sec:constraints}
The simplest information one can extract from a triangulation is the number of its subsimplices of a particular type.
We will use the following counting variables for the four fundamental tetrahedra and
the lower-dimensional subsimplex types, as well as their sums in each dimension:
\begin{align}
\label{countvar}
N_0&=\textrm{number of vertices}\nonumber \\
N_1^s&=\textrm{number of spacelike links}\nonumber \\
N_1^t&=\textrm{number of timelike links}\nonumber \\
N_1&:=N_1^s+N_1^t \nonumber\\
N_2^{sss}&=\textrm{number of triangles with three spacelike links}\nonumber \\
N_2^{sst}&=\textrm{number of triangles with two spacelike links}\nonumber \\
N_2^{tts}&=\textrm{number of triangles with one spacelike link} \\
N_2&:=N_2^{sss}+N_2^{sst}+N_2^{tts} \nonumber \\
N_3^{T_2}&=\textrm{number of tetrahedra of type}\ T_2 \nonumber\\
N_3^{T_3}&=\textrm{number of tetrahedra of type}\ T_3 \nonumber \\
N_3^{T_5}&=\textrm{number of tetrahedra of type}\ T_5 \nonumber \\
N_3^{T_9}&=\textrm{number of tetrahedra of type}\ T_9 \nonumber\\
N_3&:=N_3^{T_2}+N_3^{T_3}+N_3^{T_5}+N_3^{T_9}.\nonumber
\end{align}
There exist linear identities among these numbers, which for CDT have been described in \cite{3d4d}.
Here we will repeat the analysis for the extended ensemble, including the new tetrahedral building blocks.
The first identity,
\begin{equation}
\label{id1}
N_0-N_1+N_2-N_3=\chi,
\end{equation}
involves the Euler characteristic $\chi$ of the simplicial spacetime manifold. Since every tetrahedron contains four
triangles and every triangle is shared by two tetrahedra, we also have the constraint
\begin{equation}
\label{id2}
N_2=2 N_3 .
\end{equation}
Both relations (\ref{id1}) and (\ref{id2}) are shared by Euclidean DT and standard CDT.
In the latter we also have the foliation constraint $2 N_2^{sss}\! =\! N_3^{T_5}$, which expresses the fact that in
CDT every spacelike triangle is shared by two tetrahedra of type $T_5$, while every such tetrahedron contains
exactly one spacelike triangle. In the present case, the analogous relation is
\begin{equation}
\label{id3}
2 N_2^{sss}=2 N_3^{T_2}+N_3^{T_3}+N_3^{T_5} .
\end{equation}
This is easily understood by counting all spacelike faces in the triangulation -- the right-hand side of (\ref{id3}) --
and noting that the number of spacelike triangles is half the number of spacelike faces.
In CDT we have two more constraints which explicitly involve the leaves of the preferred foliation.
One is the Euler constraint $N_0-N_1^s+N_2^{sss}= t_{tot} \tilde{\chi}$, where $t_{tot}$ counts the number of leaves
(for periodic boundary conditions in time), and $\tilde{\chi}$ is the Euler characteristic of a spatial section.
This constraint no longer exists in the generalized model, since we have Monte Carlo moves which change the quantity
$N_0-N_1^s+N_2^{sss}$. Furthermore, in standard CDT every spacelike triangle has three spacelike links and every
spacelike link is shared by two spacelike triangles, yielding the relation $N_1^s\! =\! 3 N_2^{sss}/2$.
As shown in \cite{thesis}, this can be generalized to the case at hand, leading to the linear relation
\begin{equation}
N_1^s=\frac{1}{2}(3N_2^{sss}-N_3^{T_2}) .
\end{equation}
Lastly, a constraint which does not have a counterpart in foliated CDT follows directly from our earlier observation of
$T_2$- and $T_3$-tetrahedra forming closed rings (assuming compact spatial topology), namely,
\begin{equation}
N_2^{sst}=N_3^{T_2}+N_3^{T_3} .
\end{equation}
We have checked that the Monte Carlo moves for the nonfoliated CDT model, described in Sec.\ \ref{sec:numsetup} below,
are not compatible with the existence of other linear relations among the counting variables (\ref{countvar}).
This means that we have a total of 5 such relations for 10 variables, compared with 5 relations for 7 counting variables
for standard CDT quantum gravity in 2+1 dimensions. In the next section, we will express the gravitational action
as a function of the five remaining independent counting variables.
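The five linear relations lend themselves to a simple consistency check on a set of counting variables. The following C++ sketch is purely illustrative (the structure and function names are ours, not those of our actual implementation):

```cpp
// Illustrative consistency check of the five linear constraints among the
// counting variables. The Euler characteristic chi of the spacetime manifold
// is passed in explicitly.
struct Counts {
    long long n0;                      // vertices
    long long n1s, n1t;                // spacelike / timelike links
    long long n2sss, n2sst, n2tts;     // triangle types
    long long t2, t3, t5, t9;          // tetrahedron types
};

bool satisfies_constraints(const Counts& c, long long chi) {
    long long n1 = c.n1s + c.n1t;
    long long n2 = c.n2sss + c.n2sst + c.n2tts;
    long long n3 = c.t2 + c.t3 + c.t5 + c.t9;
    return c.n0 - n1 + n2 - n3 == chi             // Euler relation
        && n2 == 2 * n3                           // face counting
        && 2 * c.n2sss == 2 * c.t2 + c.t3 + c.t5  // spacelike-face counting
        && 2 * c.n1s == 3 * c.n2sss - c.t2        // spacelike-link relation
        && c.n2sst == c.t2 + c.t3;                // closed-ring relation
}
```

For instance, the assignment $(N_0,N_1^s,N_1^t,N_2^{sss},N_2^{sst},N_2^{tts},N_3^{T_2},N_3^{T_3},N_3^{T_5},N_3^{T_9})=(6,6,8,4,0,12,0,0,8,0)$ with $\chi=0$ satisfies all five relations; note that such a tuple is merely a solution of the linear system and need not be realized by an actual triangulation.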
\section{Action and Wick rotation}
\label{sec:actions}
The gravitational path integral (\ref{eq:pigrav}) assigns to every spacetime geometry $[g]$ a complex amplitude $\exp(iS[g])$,
where $S[g]$ is its classical action. As already noted, we will use the same Regge implementation of the Einstein-Hilbert action
in 2+1 dimensions as previous work on CDT quantum gravity \cite{3d4d}, namely,
\begin{align}
\label{eq:reggeaction3d}
S_{\mathrm{Regge}}\!=\! k\!\!\! \sum_{\substack{\mathrm{spacelike}\\ l}} \!\!\! V(l)\frac{1}{i}
\left(2\pi- \!\!\!\!\! \sum_{\substack{\mathrm{tetrahedra}\\ \mathrm{at}\, l}} \!\!\!\! \Theta\right)+
k\!\!\! \sum_{\substack{\mathrm{timelike}\\ l}} \!\!\! V (l)\left(2\pi-\!\!\!\!\!\sum_{\substack{\mathrm{tetrahedra}\\ \mathrm{at}\, l}}
\!\!\!\! \Theta\right)
\! -\!\lambda\!\!\!\!\!\! \sum_{\substack{ \mathrm{tetrahedra}\, T }}\!\!\!\!\!\! V(T),
\end{align}
where $k$ and $\lambda$ are the gravitational and cosmological couplings (up to rescaling), $V(l)$ and $V(T)$ the
volumes of a link $l$ and a tetrahedron $T$, and $\sum\Theta$ denotes the sum over dihedral angles contributed by
the tetrahedra sharing a link $l$. It was shown in
\cite{3d4d} that an analytic continuation $\alpha\!\mapsto\! -\alpha$ in the asymmetry parameter through the lower-half
complex plane defines a nonperturbative Wick rotation which converts the amplitudes $\exp(iS_{\mathrm{Regge}})$ to
real weights $\exp(-S_{\mathrm{Regge}}^\mathrm{eucl})$, and thereby makes it possible to analyze the path integral
with the help of Monte Carlo simulations. Maintaining the relation $\ell_t^2\! =\! - \alpha \ell_s^2$
between time- and spacelike length assignments, this implies that timelike links acquire a {\it positive} squared
length after the Wick rotation and therefore effectively become spacelike. The requirement that the full set of link
lengths correspond to a proper triangulation {\it after} the Wick rotation means that they have to obey triangle inequalities,
which in turn puts a restriction on the value of $\alpha$ before the Wick rotation, which for CDT in 2+1 dimensions takes
the form $\alpha\! >\! 1/2$.
Let us study how the enlargement of the ensemble of configurations in nonfoliated CDT affects the behaviour of
the Regge action (\ref{eq:reggeaction3d}) under the map $\alpha\!\mapsto\! -\alpha$.
In the first term, $V(l)\! =\! 1$, because the link is spacelike. The plane orthogonal to the link has
Lorentzian signature. Because we have imposed link causality, we will cross the light cone four times when
circling around the link in the plane. According to our complex angle prescription, each crossing adds
a real contribution $\pi/2$ to the total dihedral angle, such that the deficit angle -- the expression
$(2\pi\! -\! \sum\Theta)$ inside the parentheses -- becomes purely imaginary, like
in usual CDT, and under the analytic continuation becomes a real deficit angle.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|}
\hline
tetrahedron & volume & Wick rotation condition\\
\hline
\hline
$T_2$ & $\frac{1}{12}\sqrt{\alpha(3+\alpha)}$ & $0 < \alpha < 3$ \\
\hline
$T_3$ & $\frac{1}{12}\sqrt{1+4\alpha+\alpha^2}$ & $2-\sqrt{3} < \alpha < 2+\sqrt{3}$\\
\hline
$T_5$ & $\frac{1}{12}\sqrt{1+3\alpha}$ & $\frac{1}{3} < \alpha $ \\
\hline
$T_9$ & $\frac{1}{6}\sqrt{\frac{1}{2}+\alpha}$ & $ \frac{1}{2} < \alpha$ \\
\hline
\end{tabular}
\end{center}
\caption{Volumes of the four elementary tetrahedra and conditions on the asymmetry parameter $\alpha$,
which ensure that the building blocks after the Wick rotation are well defined.}
\label{tab:tetravolumes}
\end{table}
In the second term, the plane orthogonal to the timelike link is Euclidean and remains so after the Wick rotation.
On the other hand, we have $V(l)\! =\!\sqrt{\alpha}$, which acquires a factor of $-i$ under the analytic continuation.
This implies that the second term changes from real to purely imaginary, as it should.
To evaluate the third term we need the volumes of the tetrahedra, which are shown in Table\ \ref{tab:tetravolumes}.
It is instructive to examine the three-volumes as functions of $\alpha$. In the Lorentzian sector $(\alpha \! >\! 0)$,
they are all real and positive. After Wick rotation, a vanishing of the volume $V(T)$ signals a geometric degeneracy
of the underlying (Euclidean) tetrahedron $T$,
associated with a violation of the triangle inequalities. In addition, note that for the Wick-rotated expressions to give the
correct contributions to the Euclidean action, the arguments of the square roots in the second column have to be negative
after substituting $-\alpha$ for $\alpha$, leading to the restrictions on the original $\alpha$-values displayed in
the third column of the table. Since all of these constraints have to be satisfied simultaneously, we conclude that
in nonfoliated CDT in three dimensions we need
\begin{equation}
\frac{1}{2} < \alpha < 3
\end{equation}
in order for the usual Wick rotation to be well defined, which
is stronger than the corresponding condition $1/2 < \alpha$ in CDT, where only the building blocks $T_5$ and
$T_9$ are used.
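The combined bound is obtained mechanically by intersecting the four conditions of Table\ \ref{tab:tetravolumes}, as the following small C++ sketch illustrates (the helper name is ours):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

// Intersect the per-tetrahedron Wick rotation conditions of the table:
// T_2: (0, 3), T_3: (2 - sqrt(3), 2 + sqrt(3)), T_5: (1/3, inf), T_9: (1/2, inf).
std::pair<double, double> wick_window() {
    double lo = 0.0, hi = 3.0;                  // T_2
    lo = std::max(lo, 2.0 - std::sqrt(3.0));    // T_3 lower bound
    hi = std::min(hi, 2.0 + std::sqrt(3.0));    // T_3 upper bound
    lo = std::max(lo, 1.0 / 3.0);               // T_5
    lo = std::max(lo, 0.5);                     // T_9
    return {lo, hi};                            // -> (1/2, 3)
}
```

Since $2-\sqrt{3}\approx 0.268 < 1/3 < 1/2$ and $3 < 2+\sqrt{3}$, the lower bound from $T_9$ and the upper bound from $T_2$ are the binding ones.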
Evaluating the Regge action (\ref{eq:reggeaction3d}) with the help of the expressions in Tables\ \ref{tab:tetraangles}
and \ref{tab:tetravolumes}, and applying the Wick rotation leads to
\begin{equation}
\label{eq:linearaction2}
S^\mathrm{eucl}=\widetilde{c_1} N_0 + \widetilde{c_2} N_3 + \widetilde{c_3} N_3^{T_2} +
\widetilde{c_4} N_3^{T_3} + \widetilde{c_5} N_3^{T_5}
\end{equation}
for the Euclideanized Regge action, as a function of a specific linearly independent subset of the counting variables
(\ref{countvar}),
where the explicit functional form of the coefficients $\widetilde{c_i}=\widetilde{c_i}(k,\lambda,\alpha)$ has been
derived in \cite{thesis}.
For the special case $\alpha\! =\! -1$ and after some manipulations using the kinematical constraints,
the action (\ref{eq:linearaction2}) can be written as
\begin{equation}
\label{eq:action_edt}
S^\mathrm{eucl}=-2 k \pi N_1 + \left(\frac{\lambda}{6\sqrt{2}}+6 k \arccos\frac{1}{3}\right)N_3,
\end{equation}
which coincides with the action of Euclidean dynamically triangulated gravity in three dimensions.
We have also checked that by setting $N_3^{T_2}\! =\! N_3^{T_3}\! =\! 0$ the action (\ref{eq:linearaction2})
can be rewritten to precisely match the Regge action for standard CDT quantum gravity in 2+1 dimensions
given in \cite{3d4d}.
\section{Numerical set-up}
\label{sec:numsetup}
\subsubsection*{Monte Carlo moves}
\begin{figure}[t]
\centerline{\scalebox{0.75}{\includegraphics{genpach.eps}}}
\caption{The three generalized Pachner moves for CDT in 2+1 dimensions: 2-6 move (left), 2-3 move (centre)
and 4-4 move (right). The links shown in green are removed or created by the corresponding move.}
\label{fig:genpach}
\end{figure}
To set up numerical simulations of nonfoliated CDT quantum gravity in 2+1 dimensions, we need to define a set of
Monte Carlo moves. In this section, we will present a compact description of the moves, which fall into two groups;
further details can be found in \cite{thesis}.
The first group contains generalizations of the moves that were already used for CDT simulations in 2+1 dimensions
\cite{3d4d}, and which in turn are adapted versions of the original Pachner moves for Euclidean DT \cite{pachner,gross}.
Fig.\ \ref{fig:genpach} shows the three adapted Pachner moves in 2+1 dimensions. They all change the interior of
a small compact region of the simplicial manifold, while leaving its boundary invariant.
For CDT triangulations, once the location of a link to be added has been fixed, its type (timelike or spacelike) is also fixed.
This is no longer true in the nonfoliated CDT model, where each of these moves comes in several ``flavours".
A move of this kind is called an ``$m$-$n$ move'' if $m$ and $n$ are the numbers of tetrahedra in the local simplicial
neighbourhood before and after the move is executed. The 2-6 move is the only generalized Pachner move which creates
a new vertex.
The Monte Carlo moves in the second group are new compared to standard CDT. Three of them
implement the collapse of a link, and only differ in the types of links and the local neighbourhood involved,
as illustrated by Fig.\ \ref{fig:newmoves}.
They can be seen as special cases of the most general link collapse move, for which we currently do not
know whether and how an efficient implementation is possible.
The bubble move (Fig.\ \ref{fig:newmoves}, top left) operates on a ring of $T_2$-tetrahedra with a single timelike link
in its interior, forming a ``bubble" according to the definition given in Sec.\ \ref{sec:nfct21}.
It collapses the timelike link to a single vertex and simultaneously collapses the bubble to a spatial disc.
The pinching move (Fig.\ \ref{fig:newmoves}, bottom left) operates on a pair of spatial discs whose centres are connected
by a timelike link. It collapses this link, leading to a configuration where the discs touch in a single vertex, thereby
forming a ``pinching" as described in Sec.\ \ref{sec:nfct21} (cf.\ Fig.\ \ref{fig:rings}d).
We have also implemented a move which collapses a spacelike link (Fig.\ \ref{fig:newmoves}, top right).
Note that in the configuration before the collapse the link types of the upper and lower disc do not necessarily
have to match. In the special case when they do, we call this move {\it symmetric}: during the
collapse, only links of the same type get identified pairwise. To keep the complexity of the implementation at
a manageable level, we have restricted ourselves to the symmetric version of this move.
\begin{figure}[t]
\centerline{\scalebox{0.65}{\includegraphics{newmoves.eps}}}
\caption{New Monte Carlo moves in nonfoliated CDT. Three link collapse moves: bubble move (top left),
pinching move (bottom left) and link collapse move for spacelike links (top right). Also shown is the
polar move (bottom right), with arrows indicating a time orientation. -- Spacelike links are blue,
timelike ones red, and purple links can be either space- or timelike.}
\label{fig:newmoves}
\end{figure}
Lastly, recall our introduction in Sec.\ \ref{sec:ncdt_2d} of an isolated source and sink of time in 1+1 dimensions
(Fig.\ \ref{fig:sourcesink2d}). We will use a straightforward generalization to 2+1 dimensions
of these local (causality-violating) configurations as our boundary conditions. The polar move operates on the
neighbourhood of such a source or sink of time, and moves it around. Fig.\ \ref{fig:newmoves} (bottom, right) illustrates the
situation for a time source, initially located at the top of the single tetrahedron on the left.
The move subdivides the tetrahedron into four, with the newly created vertex at the centre becoming the
new source of time.
An important feature of a set of Monte Carlo moves is that it should be ergodic, that is, any element of the configuration
space can be reached in a finite number of moves. In our case, the configuration space
$\tilde{\mathcal{C}}$ consists of all locally causal gluings of the elementary building blocks $T_2$, $T_3$, $T_5$ and $T_9$
that can be time-oriented consistently, and satisfy further
regularity conditions specified in the next subsection. We have made the standard
choice of a direct product $[0,1]\!\times\! S^2$ for the spacetime topology.
It is possible that the moves described here are ergodic in this configuration space; in fact,
the original motivation for introducing additional building blocks was to let us move around in the
space of triangulations more efficiently. However, we do not have a proof of ergodicity, and suspect this could be
rather nontrivial, given the nonlocal character of part of the causality conditions.
\subsubsection*{Defining the ensemble}
As already mentioned in Sec.\ \ref{cdt}, previous simulations of CDT in 2+1 dimensions have worked with a fixed
spacetime topology of direct-product form $[0,1]\!\times\! {}^{(2)}\Sigma$, or $S^1\!\times\! {}^{(2)}\Sigma$ if time is
compactified. The standard, simplest choice\footnote{For a recent investigation with toroidal slices, see
\cite{bl}.} for the spatial topology -- which we will also employ in the present work -- is the sphere, ${}^{(2)}\Sigma\! =\! S^2$.
A posteriori, the choice of compactifying time in this case does not appear to make much of a difference, because it
turns out that the {\it dynamics} of 2+1 CDT quantum gravity (for sufficiently large time extension $t_{tot}$ of the
configurations) drives the shape of the universe towards a de Sitter space with $S^3$-topology \cite{3dcdt}.
As we will now go on to explain, the most convenient choice of boundary conditions for nonfoliated CDT
is that of a direct-product spacetime $[0,1] \times S^2$, where the beginning and end of time are allowed
to degenerate to a point,
leading effectively to an $S^3$-topology. Recall that
in simulations of CDT quantum gravity, the number of time steps $t_{tot}$ is fixed. Since genuine foliated
CDT triangulations form a subset of the present ensemble $\tilde{\cal C}$, the question arises whether it is possible to
go from one strictly foliated configuration to another one with a different number $t'_{tot}$ of time steps via
nonfoliated configurations and using the Monte Carlo moves described in the last subsection.
As explained in detail in \cite{thesis},
the answer is yes. It follows that if there is a region in the phase diagram where the configurations are close to foliated,
the standard CDT notion of the ``number of time steps" will also make sense approximately
and one can ask which equilibrium value for this quantity is found after thermalization.
During early test simulations in the ensemble $\tilde{\cal C}$ with compactified time we did find configurations that
were approximately foliated, but the number of time steps would not thermalize properly.
We have been able to circumvent this problem by not compactifying time, and adding a source and a sink of time
as the two poles of a three-sphere.
Another technical issue which appeared during early test runs was that the simulations would often end up
in ``frozen" states where virtually no progress could be made using the implemented Monte Carlo moves.
The problem could be traced back to the presence of globally self-overlapping bubbles, winding once or
multiple times around the spatial sphere (cf.\ our discussion in Sec.\ \ref{sec:nfct21}).
Since we were unable to overcome this problem by finding additional moves, we looked for a mechanism to
prevent the globally self-overlapping bubbles from appearing.
We found that these problematic structures do not form when we forbid all moves which merge or split bubbles.
The moves of Fig.\ \ref{fig:newmoves} are essentially unaffected by these restrictions (see \cite{thesis} for details).
The simulations on the reduced ensemble behaved much better after this alteration, although
there are still phase space regions where they do not thermalize sufficiently well, as we will discuss later.
Finally, we will use local regularity conditions for the gluings that make the triangulations into
simplicial manifolds, which means that each (interior) vertex has a ball-like neighbourhood whose surface is a triangulated
two-sphere. This is the choice made in
most of the work on higher-dimensional CDT quantum gravity, and will allow for a better comparison of results.
\subsubsection*{Re-introducing time}
The distribution of spatial volume as a function of time is an important large-scale observable, and its analysis has been
instrumental in relating CDT quantum gravity to a de Sitter minisuperspace cosmology in 2+1 and 3+1
dimensions \cite{3dcdt,desitter}. In order to perform a similar analysis also in nonfoliated CDT, we need to define a time
coordinate on its generalized configurations. As explained in the introduction, the fact that spatial slices will generally
branch and form ``bubbles" means that we can no longer use them to define a distinguished time variable.
To explain our alternative prescription of ``time", consider a time-oriented member of the configuration space
$\tilde{\cal C}$.
Given a vertex $V$, consider the set of all future-oriented paths connecting $V$ with the
north pole. The number of links in each path defines a distance between $V$ and the north pole. By averaging
this quantity over all paths we obtain an average distance $d_f$. Repeating the procedure for past-oriented paths, connecting
$V$ to the south pole, gives another average distance $d_p$. The time coordinate of $V$ is then defined as
$t\! =\! d_f-d_p$. Note that for foliated CDT configurations, this coincides with the usual discrete proper time, up to a trivial factor.
We have experimented with other notions of time, including that of shortest distance to the poles; they generally lead
to a ``washing out" of the tetrahedron distributions described below. It is possible that alternative notions of
time are more appropriate or practical for observables different from the ones studied here.
Since the number of oriented paths between a vertex and a pole can become very large, in practice we
used a modified algorithm, which calculates $t$ in an approximate fashion.
For each vertex we constructed a fixed number of future-oriented paths, using a random process which jumps
iteratively from a vertex to a randomly chosen future neighbour until the north pole is reached.
We then repeated this process for past-oriented paths and finally calculated the time coordinate as the
difference of both average distances.
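The sampling procedure just described can be sketched as follows; this is a simplified C++ illustration, whose adjacency structure and names are ours and do not reflect the actual data structures of our code (see \cite{thesis} for those):

```cpp
#include <random>
#include <vector>

// Hypothetical adjacency data: for each vertex, the lists of future and past
// neighbours along timelike links (indices into the vertex array). A pole is
// a vertex with no neighbours in the given direction.
struct Vertex { std::vector<int> future, past; };

// Sample the length of one oriented random path from v to a pole, jumping
// iteratively to a uniformly random neighbour in the chosen direction.
int sample_path_length(const std::vector<Vertex>& g, int v, bool forward,
                       std::mt19937& rng) {
    int steps = 0;
    while (true) {
        const auto& next = forward ? g[v].future : g[v].past;
        if (next.empty()) return steps;  // reached the pole
        std::uniform_int_distribution<int> pick(0, (int)next.size() - 1);
        v = next[pick(rng)];
        ++steps;
    }
}

// Approximate time coordinate t = d_f - d_p of vertex v, averaging the
// path lengths over n sampled paths in each direction.
double time_coordinate(const std::vector<Vertex>& g, int v, int n,
                       std::mt19937& rng) {
    double df = 0.0, dp = 0.0;
    for (int k = 0; k < n; ++k) {
        df += sample_path_length(g, v, true, rng);
        dp += sample_path_length(g, v, false, rng);
    }
    return (df - dp) / n;
}
```

On a foliated configuration every sampled path between a vertex and a pole has the same length, so the sketch reduces to the usual discrete proper time up to a trivial factor, in line with the remark above.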
Given this new notion of time, we can now also assign an (average) time to spatial slices.
We define a {\it spatial slice} in nonfoliated CDT -- with boundary conditions as specified above --
as any subset of spatial triangles forming a two-sphere, such that by cutting along the
sphere the spacetime triangulation decomposes into two disconnected parts, with time flowing consistently from
one side of the cut to the other. In other words, the future-pointing arrows introduced in Fig.\ \ref{fig:flowoftime3d}
are all lined up to point in the same direction away from the slice.
The time coordinate we assign to such a slice is the average of all time coordinates of its vertices. Note that
unlike in standard CDT, where different spatial slices are always disjoint, spatial slices here can have any amount
of overlap.
We now have all the ingredients to measure the desired volume profiles.
Since the number of spatial slices of an individual path integral configuration can become very large, we use a
statistical method to generate a subset of spatial slices which is evenly distributed along the time direction.
In order to perform an ensemble average of the volume distribution we use a nontrivial averaging
algorithm, details of which are described in \cite{thesis}.
\section{Exploring the phase diagram}
\label{sec:explo}
We have developed the necessary Monte Carlo simulation software, using C{}\verb!++! as the programming language and
taking advantage of object-oriented design principles to incorporate modularity and flexibility into the software.
We had anticipated that the software would be more complex than for the CDT simulations, but in the event its
complexity even surpassed our expectations. An extended discussion of the details of the software implementation,
with special emphasis on the Monte Carlo moves, is given in \cite{thesis}. In what follows, we will present the
results obtained with the simulation software, beginning with an exploration of the phase diagram of
nonfoliated CDT quantum gravity in 2+1 dimensions.
The Regge form (\ref{eq:reggeaction3d}) of the gravitational action contains two couplings, $k$ and $\lambda$,
which are proportional to the inverse bare Newton's constant and the bare cosmological constant respectively.
When evaluating the action on causal triangulations, a third parameter -- the asymmetry $\alpha$ -- naturally
appears because of the distinction between space- and timelike links.
Together they span a three-dimensional space of bare actions.
Of course, from the way $\alpha$ is introduced in the regularized theory, there is no a priori reason why it
should play the role of a coupling constant. Different $\alpha$-values should lead to the same continuum
gravity theory. This expectation is consistent with the dynamical results found below.\footnote{In
3+1 dimensions the status of the analogous parameter $\alpha$ is more involved \cite{trans,physrep}.}
\begin{figure}[t]
\centerline{\scalebox{0.8}{\includegraphics{pd_lcdt.eps}}}
\caption{The phase diagram of nonfoliated CDT quantum gravity is the region inside the
strip $-3\! <\! \alpha\! <\! -1/2$, visualized in green.
Our investigation has probed the phase space along the two axes drawn in the figure: at various $k$-values for
constant $\alpha\! =\! -1$,
and at various $\alpha$-values for constant $k\! =\! 0$.}
\label{fig:pd_lcdt}
\end{figure}
As usual in dynamically triangulated systems, we run simulations at fixed system size $N_3$ and then perform
a finite-size scaling analysis to extrapolate to the limit of infinite size. This
means that the phase diagram of the model is spanned by the parameters $k$ and $\alpha$.
As we have derived in Sec.\ \ref{sec:actions}, the existence of a Wick rotation limits the allowed values for $\alpha$
to the region $1/2\! <\! \alpha\! <\! 3$. Sticking with the notation ``$\alpha$" for this parameter also after the
analytic continuation, the Wick rotation maps this region to the range $-3\! <\! \alpha\! <\! -1/2$.
The phase diagram of our generalized CDT model is therefore a strip bounded by these two $\alpha$-values,
as illustrated by Fig.\ \ref{fig:pd_lcdt}.
In the following we will analyze the dynamics of our model at a range of points along the two axes drawn in
the figure.
While the simulations work well on the axis defined by constant $\alpha\! =\! -1$,
we encountered difficulties when exploring the axis of constant $k\! =\! 0$ in the region $-3\! <\! \alpha\! <\! -1$.
As one moves away from $\alpha\! =\! -1$ towards $\alpha\! =\! -3$, the number of accepted Monte Carlo moves goes
down significantly and the thermalization time increases rapidly.
A closer analysis revealed that the severity of the problems correlates with the presence of bubbles with a
complicated internal structure.
These problems imply that we currently must concentrate our investigation on the region $-1\! <\! \alpha\! <\! -1/2$.
\subsubsection*{Bounds on the vertex density}
In 2+1 dimensions there are kinematical bounds on the ratios of certain counting variables, like
the vertex density $N_0/N_3$ and the link density $N_1/N_3$.
In the case of CDT the link density satisfies $1\! \leq\! N_1/N_3\!\leq\! 5/4$ \cite{3d4d}, which should be compared
with the weaker bound $1\!\leq\! N_1/N_3 \!\leq\! 4/3$ for DT.
Using the linear relations (\ref{id1}) and (\ref{id2}), one easily derives $N_0/N_3=N_1/N_3-1$ in the
infinite-volume limit, which means that we can translate the
link density bounds into the vertex density bounds
$0\!\leq\! N_0/N_3\!\leq\! 1/4$ for CDT and $0\! \leq\! N_0/N_3 \!\leq\! 1/3$ for Euclidean triangulations.
The derivation of the link density bound in CDT involves the spatial Euler constraint, which is not present in the
ensemble $\tilde{\cal C}$ we have specified in Sec.\ \ref{sec:numsetup} above.
To find the analogous bound for nonfoliated CDT configurations we follow \cite{3d4d}
in considering all Monte Carlo moves that create a vertex.
Since all of them change the number of tetrahedra by some amount $\Delta N_3$, the strategy is to select
those moves for which $\Delta N_3$ is minimal.
Starting from a minimal triangulation and repeatedly applying only the selected moves maximizes the vertex density,
and thus also the link density; the corresponding bounds follow upon taking the infinite-volume limit.
\begin{figure}[t]
\centerline{\scalebox{0.7}{\includegraphics{n0n3_k.eps}}}
\caption{Measurement of the average vertex density $N_0/N_3$ as a function of the inverse gravitational coupling $k$,
for $\alpha\! =\! -1$ and system size $N_3\! =\! 40.000$.
The dots represent actual measurements, the lines linear interpolations.
The dashed lines mark the kinematical bounds for standard DT (upper line) and CDT (lower line).}
\label{fig:n0n3_k}
\end{figure}
In the case at hand we have two Monte Carlo moves which create one vertex and three tetrahedra, namely, the
bubble move and the polar move described in Sec.\ \ref{sec:numsetup} above. Both are unconstrained moves
which can always be executed. We conclude that in nonfoliated CDT quantum gravity the vertex and link densities
satisfy the bounds
\begin{equation}
0\leq \frac{N_0}{N_3} \leq \frac{1}{3} \qquad \mathrm{and} \qquad
1\leq \frac{N_1}{N_3} \leq \frac{4}{3} .
\end{equation}
These relations agree with those for Euclidean DT, but
the configurations which saturate them differ substantially, as we will see.
Note that a relaxation (in the sense of \cite{thor}) of the local regularity conditions for a simplicial manifold
would weaken these bounds, since then bubbles with fewer than three tetrahedra can occur.
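The counting argument behind these bounds can be sketched numerically: starting from some finite initial counts and repeatedly applying a move with $\Delta N_0\! =\! 1$, $\Delta N_3\! =\! 3$ drives the vertex density towards $1/3$. The initial counts in the sketch below are illustrative placeholders, not those of an actual minimal triangulation.

```python
# Minimal numerical sketch of the kinematical bound N_0/N_3 <= 1/3.
# The initial counts are illustrative placeholders; each bubble or
# polar move adds one vertex (dN0 = 1) and three tetrahedra (dN3 = 3).

def vertex_density_after_moves(n0, n3, moves):
    """Return N_0/N_3 after `moves` applications of a (dN0, dN3) = (1, 3) move."""
    return (n0 + moves) / (n3 + 3 * moves)

# The density approaches the bound 1/3 as the number of moves grows,
# independently of the finite initial counts (infinite-volume limit).
for m in (10, 1000, 100000):
    print(vertex_density_after_moves(5, 5, m))
```

The limit $1/3$ is reached independently of the starting configuration, which is why the bound survives the infinite-volume limit.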
\begin{figure}[t]
\centerline{\scalebox{0.7}{\includegraphics{n3_k.eps}}}
\caption{The numbers of tetrahedra of each of the four types, averaged over the sampled triangulations, as a function of the
coupling $k$, at $\alpha\! =\! -1$ and $N_3\! =\! 40.000$.}
\label{fig:n3_k}
\end{figure}
Fig.\ \ref{fig:n0n3_k} shows the measurements of the average vertex densities for various values of the
coupling $k$, from simulations with $\alpha\! =\! -1$ and system size $N_3\! =\! 40.000$.
The vertex density increases monotonically with $k$, which is
not surprising since (at fixed $N_3$) larger values of $k$ favour the creation of vertices.
This can be seen easily by rewriting eq.\ (\ref{eq:action_edt}) with the help of the kinematical constraints, yielding
$S^\mathrm{eucl}\! =\! -2 k \pi N_0$ plus a term proportional to $N_3$.
As expected, the measured curve in Fig.\ \ref{fig:n0n3_k}
approaches the upper kinematical bound of $1/3$ for large values of $k$.
We also see a clear signal of a phase transition between $k\! =\! 0.24$ and $k\! =\! 0.28$, from a phase of low
to one of high vertex density, reminiscent of the first-order transitions in the inverse gravitational coupling
found in both DT \cite{edt3d} and CDT \cite{3dcdt}. Analogous measurements for fixed $k\! =\! 0$ and varying
$\alpha$ show that the vertex densities are approximately constant at low values, without any sign of a phase
transition. Of course, since we are only investigating the region $-1\! <\!\alpha\! <\! -1/2$ of the phase diagram,
we cannot exclude the presence of further phase transitions in the complementary region.
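The favouring of vertex creation by larger $k$ can be made concrete through the Metropolis acceptance probability implied by $S^\mathrm{eucl}\! =\! -2k\pi N_0$ at fixed $N_3$; the following is a schematic sketch of this reasoning, not the actual simulation code.

```python
import math

# Schematic sketch (not the actual simulation code): at fixed N_3 the action
# reduces to S = -2*pi*k*N_0 up to an N_0-independent term, so a move that
# creates one vertex changes the action by dS = -2*pi*k and is accepted with
# Metropolis probability min(1, exp(-dS)).

def acceptance_create_vertex(k):
    d_s = -2.0 * math.pi * k  # action change for dN0 = +1
    return min(1.0, math.exp(-d_s))

# Larger k makes vertex creation more likely, consistent with the monotonic
# rise of N_0/N_3 with k seen in the measurements.
for k in (-0.5, 0.0, 0.25):
    print(k, acceptance_create_vertex(k))
```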
\subsubsection*{Emergence of foliated triangulations}
Foliated CDT geometries form a subset of the ensemble $\tilde{\cal C}$, characterized by the condition
$N_3^{T_2}\! =\! N_3^{T_3}\! =\! 0$.\footnote{With periodic boundary conditions in time one could in principle
construct a triangulation obeying $N_3^{T_2}\! =\! N_3^{T_3}\! =\! 0$ consisting of a single bubble winding
around both space and time, which clearly is not foliated. However, such configurations do not lie in $\tilde{\cal C}$
because in our case time is not compactified and we do not allow for globally self-overlapping bubbles.}
By plotting the number of tetrahedra of these two types as a function of the couplings we can therefore look for
regions in phase space where foliated triangulations emerge dynamically. Fig.\ \ref{fig:n3_k} shows the numbers of
all tetrahedral types used, averaged over the configurations sampled from $\tilde{\cal C}$, as a function of the coupling
constant $k$. In the phase with low vertex density on the left, the building blocks of standard CDT dominate,
but the other two types also appear in significant numbers, from which we deduce that the triangulations
along the line $\alpha\! =\! -1$ are apparently not foliated.
\begin{figure}[t]
\centerline{\scalebox{0.7}{\includegraphics{n3_alpha.eps}}}
\caption{The numbers of tetrahedra of each of the four types, averaged over the sampled triangulations, as a function of the
coupling $\alpha$, at $k\! =\! 0$ and $N_3\! =\! 40.000$.}
\label{fig:n3_alpha}
\end{figure}
Conversely, Fig.\ \ref{fig:n3_alpha} shows the expectation values of the numbers of tetrahedra at fixed $k\! =\! 0$,
as a function of $\alpha$. Note that the phase boundary $\alpha\! =\! -0.5$ does not belong to the phase diagram,
since the Wick rotation is not defined there.
The measurements corresponding to the rightmost data points in the figure have been performed at $\alpha\! =\! -0.52$.
We find that both $N_3^{T_2}$ and $N_3^{T_3}$ approach zero as we move towards the phase boundary.
At $\alpha\! =\! -0.52$ we have measured $\langle N_3^{T_2}\rangle\!\approx\! 2.9$ and
$\langle N_3^{T_3}\rangle\!\approx\! 14.3$,
which means that in the entire system consisting of 40.000 tetrahedra almost none of the building blocks belong to
the new types $T_2$ and $T_3$.
We conclude that the configurations appearing close to $\alpha\! =\! -0.5$ are almost perfectly foliated and belong to
the phase with low vertex density. Effectively, the dynamics should therefore be very close to the known dynamics of
2+1 dimensional CDT in the extended phase \cite{3dcdt}, and we would expect the geometries
to be macroscopically extended with a characteristic blob-shaped volume distribution.
These expectations will be confirmed later on.
\section{Tetrahedron distributions}
\label{sec:tetdist}
We have seen in the last section that foliated configurations emerge close to the boundary of the phase diagram.
As we move away from the boundary, the configurations become less foliated.
A strict foliation is attained whenever $N_3^{T_2}\! =\! N_3^{T_3}\! =\! 0$, but it is unclear how to translate nonzero values
into a measure of foliatedness of a triangulation.
We would like to have a more refined observable which tells us how foliated a triangulation is.
In the following we will use tetrahedron distributions based on the time coordinate introduced in
Sec.\ \ref{sec:numsetup} as a qualitative tool to investigate the degree of foliatedness of a triangulation.
To start with, let us assume that all vertices have been assigned a time coordinate using the algorithm described in
Sec.\ \ref{sec:numsetup}. For each tetrahedron we then calculate the sum of the time coordinates of its four vertices and
round this value to the nearest integer. By definition, this value gets assigned to the tetrahedron
as its new time coordinate. This ``tetrahedron time'' clearly has a different relative normalization compared to the ``vertex time''
from which it was derived, but this does not matter as long as we do not use both time coordinates simultaneously.
\begin{figure}[t]
\centerline{\scalebox{1.0}{\includegraphics{prof_cdt.eps}}}
\caption{Typical tetrahedron distribution of a strictly foliated CDT configuration as a function of ``tetrahedron time'',
extracted from a simulation
with foliation constraint enabled (left). Zooming in on the central region of the distribution, one obtains the graph
shown on the right.}
\label{fig:prof_cdt}
\end{figure}
In a given configuration, we can now count the number of tetrahedra that share the same value of (tetrahedron) time,
and plot these numbers as a function of time to generate a tetrahedron distribution.
Fig.\ \ref{fig:prof_cdt} (left) shows such a distribution for a strictly foliated CDT geometry, which we have generated
by running our simulation with foliation constraint enabled.
We observe that the distribution appears as a superposition of two blob-shaped distributions.
One can show that one of them consists of $T_5$-tetrahedra and the other one of $T_9$-tetrahedra \cite{3dcdt}.
An enlarged version of the central part of the distribution is shown in Fig.\ \ref{fig:prof_cdt} (right), which illustrates
that the peaks are organized in groups of three. One can show that every such group corresponds
to a ``thick slice'', which is the triangulation enclosed between two adjacent simplicial spatial hypermanifolds \cite{thesis},
the higher-dimensional analogue of a ``strip'' in 1+1 dimensions. Note that for pure CDT configurations
the tetrahedron time is effectively a refinement of the number of time steps associated with the preferred foliation,
similar to what was considered in \cite{semicl} to produce finer-grained volume distributions in 3+1 dimensions.
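The construction of the tetrahedron distribution can be sketched as follows, assuming vertex times have already been assigned by the algorithm of Sec.\ \ref{sec:numsetup}; the toy data are purely illustrative.

```python
from collections import Counter

# Sketch of the "tetrahedron time" construction: each tetrahedron is a
# 4-tuple of vertex labels, its time is the sum of the (already assigned)
# vertex times rounded to the nearest integer, and the distribution counts
# tetrahedra sharing the same integer time.

def tetrahedron_distribution(tetrahedra, vertex_time):
    times = (round(sum(vertex_time[v] for v in tet)) for tet in tetrahedra)
    return Counter(times)

# Toy data, purely illustrative.
vertex_time = {0: 0.0, 1: 0.2, 2: 0.3, 3: 0.5, 4: 0.9, 5: 1.1}
tets = [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]
print(tetrahedron_distribution(tets, vertex_time))
```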
\begin{figure}[t]
\centerline{\scalebox{0.75}{\includegraphics{prof_k0dot0a-0dot7.eps}}}
\caption{Tetrahedron distribution of a typical path integral history from a simulation of
the generalized CDT model at $k=0.0,\ \alpha=-0.7$.}
\label{fig:prof_k0.0a-0.7}
\end{figure}
Let us return to the more general setting of nonfoliated CDT quantum gravity and focus on triangulations which
are a little further away from the phase boundary. Fig.\ \ref{fig:prof_k0.0a-0.7} shows the tetrahedron distribution
of a single triangulation extracted from a simulation at $k\! =\! 0,\ \alpha\! =\! -0.7$, with volume $N_3=40.000$.
We observe a sequence of peaks, with some remnants of the three-peak structure exhibited by Fig.\ \ref{fig:prof_cdt}.
The tendency of these structures to become blurred most likely depends both on changes in the actual triangulation
and on the precise algorithm used to define the time coordinate and the tetrahedron distribution.
Consequently, the relevant information lies not so much in the precise structure of the peaks, but in the overall pattern
formed by the succession of all the peaks. Comparison of the two configurations suggests that
every peak in Fig.\ \ref{fig:prof_k0.0a-0.7} corresponds to a group of three peaks in Fig.\ \ref{fig:prof_cdt}
and describes a structure which resembles a thick slice in a foliated triangulation.
Another aspect in which the two configurations differ is the fact that the gaps between each group of three peaks
in Fig.\ \ref{fig:prof_cdt} (right) -- marking the location of spatial triangulated hypermanifolds -- are getting filled in when the
foliation is relaxed. This can be interpreted as a ``decoration'' of the spatial slices by the creation of bubbles.
Based on these findings we can interpret the pattern shown in Fig.\ \ref{fig:prof_k0.0a-0.7} as a triangulation
where decorated spatial slices alternate with modified thick slices.
A triangulation exhibiting such a structure will be called \emph{weakly foliated}. This is obviously not a sharp definition,
since we do not provide a precise criterion for when a weakly foliated triangulation changes into a truly nonfoliated one.
We will take an operational point of view here and consider a triangulation to be weakly foliated whenever
the tetrahedron distribution shows the characteristic alternating pattern.
\begin{figure}[t]
\centerline{\scalebox{1.0}{\includegraphics{prof_alpha.eps}}}
\caption{Tetrahedron distributions of typical path integral configurations from simulations at coupling
$k=0.0$ and system size $N_3\! =\! 40.000$, for various choices of the asymmetry parameter $\alpha$. Note
that all geometries display some degree of weak foliation, which becomes less pronounced with increasing $| \alpha |$. }
\label{fig:prof_alpha}
\end{figure}
The next important step is to understand how the foliatedness of a triangulation changes as one moves
around in phase space. Fig.\ \ref{fig:prof_alpha} shows a sequence of typical tetrahedron distributions of single
triangulations extracted from simulations at $k\! =\! 0$, for various choices of $\alpha$.
From an almost strict foliation at $\alpha\! =\! -0.52$ the signal -- although remaining distinctly visible -- gradually weakens as we
move towards $\alpha\! =\! -1$. It would be interesting to follow the development of this pattern beyond this point
towards the other phase boundary, but technical issues currently prevent us from doing so, as we have discussed earlier.
We have performed a similar analysis on the line of constant $\alpha\! =\! -1$, and have observed
that the alternating pattern remains visible, but becomes less pronounced when one moves from $k\! =\! 0$
towards $k\! =\! -1$, indicating a further weakening of the foliation.
When going from $k\! =\! 0$ in the other direction towards the phase transition, the data quality decreases significantly,
to such an extent that an interpretation based on the tetrahedron distribution becomes unreliable.
To summarize, it appears that all investigated configurations in the phase of
low vertex density exhibit some kind of (weak) foliation, whose degree varies significantly, from
an almost strict foliation near the phase boundary at $(k,\alpha)\! =\! (0,-0.5)$ to a much less pronounced one
for larger $| \alpha |$.
\section{Volume distributions}
\label{sec:voldist}
\subsubsection*{A phase of extended geometry}
\begin{figure}[t]
\centerline{\scalebox{1.0}{\includegraphics{vp_k.eps}}}
\caption{Average volume profile $\langle N_2(t)\rangle$ as a function of time $t$,
measured at $\alpha\! =\! -1$ for various values of $k$.
The scale of the vertical axis is the same for all plots. Some of the profiles have a tail which is not visible here,
but shown in Fig.\ \ref{fig:vp_k_3} below.}
\label{fig:vp_k}
\end{figure}
In Sec.\ \ref{sec:numsetup} we introduced the notions of {\it time} and {\it spatial slices} for a general, nonfoliated CDT geometry.
The presence of these ingredients allows us to measure volume
distributions -- also called {\it volume profiles} -- just like in standard CDT quantum gravity.
In the following we will present the results of our numerical investigations.
Fig.\ \ref{fig:vp_k} shows the expectation value of the measured volume distributions for various
values of the coupling $k$, for fixed $\alpha\! =\! -1$.
In all cases the average geometry is macroscopically extended and the average volume profile has a
characteristic blob shape, strongly reminiscent of what is found in CDT in the physically interesting phase \cite{3dcdt}.
We will report later in this section on a quantitative analysis of the average volume distributions.
\begin{figure}[b]
\centerline{\scalebox{1.0}{\includegraphics{vp_k_3.eps}}}
\caption{The last three average volume profiles from Fig.\ \ref{fig:vp_k},
with a different scaling of the time axis and a small upward shift away from the axis to exhibit the tails of the distributions.}
\label{fig:vp_k_3}
\end{figure}
\begin{figure}[t]
\centerline{\scalebox{0.8}{\includegraphics{vp_tube.eps}}}
\caption{Volume distribution $N_2(t)$ of a typical triangulation from a simulation at
$(k,\alpha)\! =\! (0.4,-1)$ and $N_3=40.000$ in the phase of high vertex density. The plot shows
individual measurement points.}
\label{fig:vp_tube}
\end{figure}
Fig.\ \ref{fig:vp_k} illustrates that with increasing $k$ the time extension of the average geometry also increases.
In addition, as one approaches the phase transition, the emergent geometry develops a ``tail'' at both ends of the
volume profile, by which we mean a region of small, approximately constant spatial volume.
Since this structure is not resolved in Fig.\ \ref{fig:vp_k}, we have replotted
the distributions close to the phase transition (at $k\! =\! 0,\, 0.2,\, 0.24$) in Fig.\ \ref{fig:vp_k_3}, with
an enlarged scale for the time axis and a small upward shift of the distribution curves.
This tail looks similar to the stalk observed in simulations of CDT, but is not necessarily related because of the
different choices of boundary conditions. In CDT, its presence is enforced by the fact that the simulations are run
at a fixed total time extent (equal to the number of spatial slices in the foliation), {\it and} that the two-volume is not
allowed to vanish, but is bounded below by a minimum of four triangles, set by the manifold conditions.
In the present case, we employ the same regularity condition, but
the time extension of the geometry is dynamical and the stalk
develops spontaneously as we move from $k\! =\! 0$ towards the phase transition.
Anticipating our interpretation below of the volume profiles in terms of de Sitter universes, the
appearance of the tails could be related to quantum corrections to the underlying effective
minisuperspace action near the phase transition.
Consider now a volume distribution on the line $\alpha\! =\! -1$ {\it beyond} the transition, that is,
in the phase of high vertex density.
Fig.\ \ref{fig:vp_tube} shows a volume distribution of a typical path integral configuration from a simulation at
$(k,\alpha)\! =\! (0.4,-1)$ with 40.000 tetrahedra.
The qualitative picture in this phase is completely different:
the vast majority of spatial slices have (almost) minimal size $N_2(t)$, and the
triangulation forms a very long stalk with minimal spatial extent almost everywhere.
At this phase space point, we have checked that the time extension of the stalk scales linearly with the system size.
In the infinite-volume limit, it would therefore appear that the ``universe'' becomes a one-dimensional timelike string.
We can now summarize our findings. At all phase space points investigated we have found average geometries
which are macroscopically extended and whose volume profile has a characteristic blob-like shape.
The time extension of the average geometry increases with increasing $k$, and near the phase transition the
geometry starts to develop tails. On the other side of the transition the geometries degenerate into long
tubes, unrelated to any 2+1 dimensional classical geometry.
\subsubsection*{Evidence for three-dimensionality from finite-size scaling}
We will investigate next whether we can assign a macroscopic dimensionality to the extended structure of the
volume profiles found in the phase of low vertex density, by performing a systematic finite-size scaling analysis.
To this end, we have run another extended series of simulations, taking data at six points along the axis of constant
$\alpha\! =\! -1$, ranging from $k\! =\! -1.0$ to $k\! =\! 0.0$, and at six points along the axis of constant $k\! =\! 0$,
ranging from $\alpha\! =\! -0.52$ to $\alpha\! =\! -1.0$. At each point we have performed four simulations with different system
sizes, namely, $N_3\! =\! 40.000$, $80.000$, $120.000$ and $160.000$.
\begin{figure}[t]
\centerline{\scalebox{1.2}{\includegraphics{vpfss_k0dot0.eps}}}
\caption{Average volume profiles at $(k,\alpha)\! =\! (0.0,-1.0)$ for four different system sizes
$N_3\! =\! 40.000$, $80.000$, $120.000$ and $160.000$, before (left) and after (right) rescaling the axes as indicated, with $d\! =\! 2.91$
to achieve a best overlap, which is seen to be of excellent quality.}
\label{fig:vpfss_k0.0}
\end{figure}
Fig.\ \ref{fig:vpfss_k0.0} (left) shows the measurements of the expectation value of the volume
distributions for the four different system sizes
at $(k,\alpha)\! =\! (0.0,-1.0)$. Following the strategy of \cite{desitter} for foliated CDT triangulations in 3+1 dimensions,
we will use finite-size scaling to achieve a best overlap of these curves.
Assuming that the average geometry has macroscopic dimension $d$, we expect time intervals to scale like $N_3^{1/d}$
and spatial volumes like $N_3^{(d-1)/d}$. When plotting the distributions with axes rescaled accordingly,
the measured curves should fall on top of each other.
To find an estimate for $d$ we have run an algorithm that scans through an interval of $d$-values in steps of
$\Delta d=0.005$, which for each $d$-value measures how well the volume profiles overlap.
We have employed a standard least-squares measure with appropriate normalization to quantify the quality of the overlap.
The value of $d$ which minimizes this measure is taken as an estimate for the macroscopic dimension.
For the case at hand, the algorithm yields a best estimate of $d\! =\! 2.91$.
The plot in Fig.\ \ref{fig:vpfss_k0.0} (right) shows all four distributions with axes rescaled using this value
for the dimension, resulting in a virtually perfect overlap.
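The rescaling and $d$-scan described above can be sketched as follows (illustrative, not the authors' code); synthetic profiles generated with exact $d\! =\! 3$ scaling are used to check that the scan recovers the input dimension.

```python
import numpy as np

# Illustrative sketch of the finite-size scaling analysis.  Each profile is a
# triple (N3, t, N2(t)); time is rescaled by N3^(1/d), spatial volume by
# N3^((d-1)/d), and the overlap of the rescaled curves is scored by their
# mean variance on a common grid (a least-squares-type measure).

def overlap_cost(profiles, d, grid_points=200):
    rescaled = [(t / n3**(1.0 / d), n2 / n3**((d - 1.0) / d))
                for n3, t, n2 in profiles]
    lo = max(rt.min() for rt, _ in rescaled)
    hi = min(rt.max() for rt, _ in rescaled)
    grid = np.linspace(lo, hi, grid_points)
    curves = np.array([np.interp(grid, rt, rn2) for rt, rn2 in rescaled])
    return float(np.mean(np.var(curves, axis=0)))

def estimate_dimension(profiles, d_values):
    costs = [overlap_cost(profiles, d) for d in d_values]
    return d_values[int(np.argmin(costs))]

def make_profile(n3, d=3.0):
    """Synthetic cos^2-shaped profile obeying exact d-dimensional scaling."""
    x = np.linspace(-0.5, 0.5, 400)                  # rescaled time
    return (n3, x * n3**(1.0 / d),
            n3**((d - 1.0) / d) * np.cos(np.pi * x)**2)

profiles = [make_profile(n3) for n3 in (40000, 80000, 120000, 160000)]
d_values = [2.5 + 0.005 * i for i in range(201)]     # scan in steps of 0.005
print(estimate_dimension(profiles, d_values))
```

For perfectly scaling input the overlap cost vanishes at the true dimension, so the scan returns it exactly; for Monte Carlo data the minimum is shallow and the quoted $d$ carries the systematic uncertainties discussed above.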
\begin{figure}[t]
\centerline{\scalebox{1.2}{\includegraphics{d.eps}}}
\caption{Estimates for the macroscopic dimension $d$ from finite-size scaling, from measurements
at fixed $\alpha\! =\! -1.0$ (left plot) and fixed $k\! =\! 0.0$ (right plot).
Large dots represent measurements where the volume profiles overlap with excellent quality,
the smaller dots stand for overlaps of lesser quality.}
\label{fig:d}
\end{figure}
We have repeated the same analysis for the other points in the phase diagram.
Fig.\ \ref{fig:d} summarizes the calculated estimates for the macroscopic dimension $d$,
for fixed $\alpha\! =\! -1$ (left) and fixed $k\! =\! 0$ (right). The large dots indicate measurements with an
overlap of excellent quality, the small dots those of a somewhat lesser quality.
We observe that all six high-quality measurements yield macroscopic dimensions between $d\! =\! 2.85$ and $d\! =\! 3.00$,
while the values from the remaining six measurements have a larger spread.
We have not included error bars in the plots of Fig.\ \ref{fig:d} and the values obtained for the dimension $d$,
because they are dominated by systematic errors
we currently cannot estimate, one possible source being algorithmic dependences.
The calculation of the dimension observable is highly nontrivial and involves several algorithmic choices
which potentially affect the final result. Recall that we first had to define a time coordinate, which we did by calculating the
average distance between a vertex and the poles of the three-sphere. Secondly, we assigned a time coordinate to the
spatial slices, by averaging over the time coordinates of the vertices in the slice.
Finally, we ran a rather sophisticated averaging algorithm to produce the final distributions.
The nonuniqueness of this entire process is likely to lead to systematic errors not captured by standard
error-estimation methods such as the bootstrap.
It is clear from this discussion that a single dimension measurement does not provide sufficient evidence
to support the $d\! =\! 3$ hypothesis, which would imply compatibility with the standard CDT result.
On the other hand, all twelve results together suggest strongly that the average geometries in the
phase of low vertex density are three-dimensional. The results on
the functional form of the volume profiles presented below will strengthen this preliminary conclusion
even further.
\subsubsection*{Comparison with the three-sphere}
\begin{figure}[t]
\centerline{\scalebox{0.9}{\includegraphics{fit_k0dot0.eps}}}
\caption{Rescaled average volume profiles at $(k,\alpha)\! =\! (0.0,-1.0)$,
corresponding to dimension $d\! =\! 2.91$, and best fit to the $\cos^2$-ansatz.}
\label{fig:fit_k0.0}
\end{figure}
A crucial piece of evidence that CDT quantum gravity has a well-defined classical limit comes from
matching the average distributions of spatial volumes with those of a Wick-rotated version of a
solution to the classical Einstein equations, namely, a de Sitter universe \cite{desitter}.
More specifically, the distributions coming from the simulations have been compared
to a volume profile of the form $V_3(t)=a \cos^3(b t)$, where $t$ is by assumption proportional to
proper time. This is the volume profile of Euclidean de Sitter space (equivalently, the round four-sphere),
where the two free parameters $a$ and $b$ depend on the overall size of the universe and a
finite relative scaling between spacelike and timelike directions. The measured volume profiles
in 3+1 dimensional CDT can be fitted with high accuracy to the analytical $\cos^3$-expression \cite{desitter,semicl},
with the exception of the region very close to the end points of the curve, which cannot be resolved with
sufficient precision and is obscured by the regularity condition $\langle N_2(t)\rangle\! \geq\! 4$, as we
have discussed earlier.
\begin{figure}[t]
\centerline{\scalebox{0.9}{\includegraphics{fit_k-0dot8.eps}}}
\caption{Rescaled average volume profiles at $(k,\alpha)\! =\! (-0.8,-1.0)$,
corresponding to dimension $d\! =\! 2.98$, and best fit to the $\cos^2$-ansatz.}
\label{fig:fit_k-0.8}
\end{figure}
We will perform an analogous analysis of nonfoliated CDT in 2+1 dimensions, using
the average volume distributions from our simulations.
The volume profile of the corresponding continuum de Sitter universe in three dimensions has the
functional form $V_2(t)\! =\! a \cos^2(b t)$, where $a$ and $b$ are constants. To extract an optimal fit to this
two-parameter family of curves from our Monte Carlo data, we have selected only those points in the phase diagram
where the rescaled average volume profiles overlap with excellent quality, and where we have a well-defined curve
to compare to.
Fig.\ \ref{fig:fit_k0.0} shows the outcome of this comparison at the point $(k,\alpha)\! =\! (0.0,-1.0)$.
Obviously, the only relevant part of the fit function is the region between the two zeros of the $\cos^2$-function.
We see that the functional ansatz fits the average volume distributions almost perfectly,
except at the two ends, where the simulation data show a small tail which is not present in the fit function
once we cut away the parts outside the two minima.
We have already commented earlier on the appearance of such tails in the vicinity of the phase
transition (see also Fig.\ \ref{fig:vp_k_3}); in the context of the de Sitter interpretation of our universe
they indeed seem to be related to small-scale deviations from the classically expected result.
From this point of view it is interesting to understand how the situation changes when one repeats the
comparison at a point further away from the phase transition.
Fig.\ \ref{fig:fit_k-0.8} shows the result of the same analysis performed at the point
$(k,\alpha)\! =\! (-0.8,-1.0)$. We again observe an almost perfect fit in the region where the spatial volume
$N_2(t)$ is nonminimal. Remarkably, now even the total time extensions of the dynamically generated
universe and of the de Sitter fit function between the two minima agree, and we get an almost perfect
semiclassical matching. The quality of the fit becomes slightly reduced
towards both ends, which is not surprising because discretization effects become large when the
spatial volumes become small.
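The $\cos^2$ comparison can be sketched with a simple least-squares fit; the synthetic ``measurement'' below stands in for an actual average profile and is purely illustrative.

```python
import numpy as np

# Illustrative sketch of fitting the two-parameter family V2(t) = a*cos(b*t)^2
# to an average volume profile, restricted to the region between the two zeros
# of the cosine.  For each trial b the optimal amplitude a is obtained in
# closed form; b is then chosen by a grid scan minimizing the squared residual.

def fit_cos2(t, v, b_grid):
    best = None
    for b in b_grid:
        basis = np.cos(b * t)**2
        a = float(basis @ v / (basis @ basis))     # least-squares amplitude
        cost = float(np.sum((v - a * basis)**2))
        if best is None or cost < best[0]:
            best = (cost, a, b)
    return best[1], best[2]

# Synthetic "measurement": a cos^2 blob with noise, sampled only between the
# zeros of the cosine (here |t| < pi/(2*1.1)).
rng = np.random.default_rng(0)
t = np.linspace(-1.2, 1.2, 241)
data = 500.0 * np.cos(1.1 * t)**2 + rng.normal(0.0, 5.0, t.size)

a_fit, b_fit = fit_cos2(t, data, np.linspace(0.5, 2.0, 301))
print(a_fit, b_fit)
```

Solving for the amplitude analytically reduces the nonlinear two-parameter fit to a one-dimensional scan over $b$, which is robust for the smooth, blob-shaped profiles considered here.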
\section{Summary and conclusions}
\label{sec:conclusions}
We began our investigation with the aim of isolating and understanding the role of the preferred time slicing in standard
CDT quantum gravity, while maintaining causality of the individual path integral histories.
In this article we have presented many details of the kinematical and dynamical properties of the new,
``nonfoliated" CDT model in 2+1 dimensions, which implements the dissociation of the causal
structure and the preferred notion of time. Due to the presence of new elementary building blocks, the foliation
in terms of equally spaced triangulated spatial hypermanifolds is broken up in this extended version of CDT,
acquiring novel simplicial substructures such as ``bubbles" and ``pinchings".
Gravitational dynamics in the new model is implemented in terms of the standard Regge action,
defined as a linear function on the space of independent counting variables, which is five-dimensional, compared
to CDT's two dimensions.
After fixing the total system size, there are two coupling constants spanning the phase space of the model,
the bare inverse Newton coupling $k$ and the coupling $\alpha$, which quantifies the anisotropy between space- and timelike
length assignments in the regularized theory. This asymmetry parameter has to satisfy the inequalities
$1/2\! <\! | \alpha | \! <\! 3$ for the Wick rotation to exist, which is necessary to be able to probe the nonperturbative
properties of the model with the help of Monte Carlo simulations. This introduces two boundaries in the phase diagram.
The presence of thermalization problems, preventing the effective implementation of the Monte Carlo algorithm,
led us to eliminate certain global simplicial substructures.
This allowed us to investigate the region $-1\!\lesssim\! \alpha\! <\! -1/2$ of the phase diagram (in terms
of the analytically continued $\alpha$), while we still observed severe thermalization problems in the complementary
region. We ran the simulations with the spacetime topology of a three-sphere with a source and a sink of (Euclidean)
time at the two poles.
In terms of results, we have found two phases of geometry with low and high vertex density, for $k$-values below
and above some critical value $k_c$ of the inverse gravitational coupling respectively.
The analysis of the tetrahedron distributions revealed that the triangulations remain weakly foliated throughout
the investigated phase space region of low vertex density, but that the strength of this signal varies significantly
as a function of the bare couplings.
In addition, we observed the emergence of almost perfectly
foliated simplicial geometries close to the boundary $\alpha\! =\! -1/2$ of the phase diagram.
We constructed a volume distribution observable and an averaging procedure to study the expectation value of
the volume profiles of the emergent geometries in the weakly foliated phase.
A finite-size scaling analysis provided strong evidence that the extended geometries are
macroscopically three-dimensional. Additional support for this came from fitting the measured profiles to a
$\cos^2$-ansatz corresponding to a classical de Sitter universe, which found an almost perfect agreement.
We have repeated the analysis for various points in the phase diagram, giving consistent results.
These results provide compelling evidence that the phases of low vertex density of both foliated and
nonfoliated CDT quantum gravity have the same large-scale properties in the continuum limit and
lie in the same universality class. Since apart from removing the {\it distinguished} time slicing we
essentially left all other ingredients of the kinematics intact, this would imply that the presence or absence of
a preferred foliation in CDT is not a relevant feature.
As remarked already in the introduction,
the same is not true for Ho\v rava-Lifshitz gravity \cite{hl} -- to the extent that our nonperturbative,
coordinate-free set-up can be
compared with this continuum formulation -- where a fixed spatial foliation is essential.
This does not mean that CDT, or suitable extensions like that studied in \cite{cdthorava}, cannot provide a
framework suitable for studying anisotropic gravity models.
Our results also conform with the expectation that in 2+1 dimensions the value of the parameter $\alpha$
is irrelevant from the point of view of the continuum theory.
Because of the strong similarities of the large-scale
properties of CDT quantum gravity in 2+1 and 3+1 dimensions, it is plausible to conjecture that also in
3+1 dimensions the presence or absence of a
direct-product structure of the triangulations does not influence the final outcome.
If this is the case, one may want to stick with the simpler ``standard'' formulation of Causal Dynamical Triangulations
as a matter of convenience and computational simplicity, as we have already pointed out elsewhere \cite{jl}.
\vspace{.5cm}
\noindent {\bf Acknowledgements.}
We thank J. Ambj\o rn for a critical reading of the manu\-script.
The authors' contributions are part of the research programme of the Foundation for Fundamental Research
on Matter (FOM), financially supported by the Netherlands Organisation for Scientific Research (NWO).
The work was also sponsored by NWO Exacte Wetenschappen (Physical Sciences) for the use
of supercomputer facilities, with financial support from NWO.
\section{Gradient and Hessian of $V_i(\boldsymbol{x};\eta)$}
\begin{lemma}\label{lemma:gradV}
\begin{equation}
\begin{aligned}
& \nabla V_i(\boldsymbol{x};\eta) \\
= & \nabla f_i(\boldsymbol{x}) - (I - \eta \nabla^2 f_i(\boldsymbol{x}) E_i ) \nabla f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \label{defgradVa}
\end{aligned}
\end{equation}
where $E_i = F_i F_i^T$ with $F_i \in \mathbb{R}^{n \times n_i}$ defined as
$F_i^T = \begin{bmatrix} \boldsymbol{0}_{n_i \times \sum_{j=1}^{i-1}n_j } &
I_{i} & \boldsymbol{0}_{n_i \times \sum_{j=i+1}^N n_j} \end{bmatrix}$
and $I \in \mathbb{R}^{n \times n}$ and $I_{i} \in \mathbb{R}^{n_i \times n_i}$ are identity matrices.
\end{lemma}
\begin{proof}
\begin{subequations}
Based on the definition of $\boldsymbol{y}(\boldsymbol{x};i,\eta)$ in~\eqref{defVy}, the derivative of
$\boldsymbol{y}(\boldsymbol{x};i,\eta)$ w.r.t $\boldsymbol{x}$ is
\begin{equation}
\begin{aligned}
& \nabla_i y_i(\boldsymbol{x};i,\eta) = I_i - \eta\nabla^2_{ii}f_i(\boldsymbol{x}), \quad
\nabla_j y_i(\boldsymbol{x};i,\eta) = - \eta\nabla^2_{ij} f_i(\boldsymbol{x}) \;\; (j \neq i).
\end{aligned}\label{defgrady1}
\end{equation}
Using the chain rule for differentiation,
\begin{equation}
\begin{aligned}
\nabla_i V_i(\boldsymbol{x};\eta) = & \nabla_i f_i(\boldsymbol{x}) - (\nabla_i y_i(\boldsymbol{x};i,\eta))^T \nabla_i f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \\
\overset{\eqref{defgrady1}}{=} & \nabla_i f_i(\boldsymbol{x})
- \nabla_i f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \\
& + \eta\nabla^2_{ii} f_i(\boldsymbol{x}) \nabla_i f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \\
\nabla_j V_i(\boldsymbol{x};\eta)
= & \nabla_j f_i(\boldsymbol{x}) -
\nabla_j f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \\
& - (\nabla_j y_i(\boldsymbol{x};i,\eta))^T \nabla_i f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \\
\overset{\eqref{defgrady1}}{=} & \nabla_j f_i(\boldsymbol{x}) -
\nabla_j f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) \\
& + \eta\nabla^2_{ji} f_i(\boldsymbol{x}) \nabla_i f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)).
\end{aligned}\label{defgradV112}
\end{equation}
The expression for $\nabla V_i$ in~\eqref{defgradVa} is obtained by stacking the
block expressions in~\eqref{defgradV112}.
\end{subequations}
\end{proof}
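The identity in Lemma~\ref{lemma:gradV} is easy to sanity-check numerically. The sketch below compares the closed-form gradient against central finite differences of $V_1$ on an illustrative two-player game with quadratic payoffs; the matrices, dimensions, and step size are arbitrary choices made for the check, not values from the paper.

```python
import numpy as np

# Numerical sanity check of Lemma gradV on an illustrative two-player game with
# quadratic payoffs f_1(x) = 0.5 x^T Q_1 x + r_1^T x (an arbitrary choice made
# here for the check; the lemma itself is general).
rng = np.random.default_rng(0)
n1, n2 = 2, 3
n = n1 + n2
eta = 0.1

E1 = np.zeros((n, n)); E1[:n1, :n1] = np.eye(n1)   # E_1 = F_1 F_1^T, player-1 projector

A = rng.standard_normal((n, n)); Q1 = (A + A.T) / 2
r1 = rng.standard_normal(n)

f1 = lambda x: 0.5 * x @ Q1 @ x + r1 @ x
grad_f1 = lambda x: Q1 @ x + r1
y = lambda x: x - eta * E1 @ grad_f1(x)            # y(x; 1, eta)
V1 = lambda x: f1(x) - f1(y(x))                    # GNI function of player 1

def gradV1(x):
    # Lemma gradV: grad V_i = grad f_i(x) - (I - eta grad^2 f_i(x) E_i) grad f_i(y)
    return grad_f1(x) - (np.eye(n) - eta * Q1 @ E1) @ grad_f1(y(x))

x0 = rng.standard_normal(n)
h = 1e-6
fd = np.array([(V1(x0 + h * e) - V1(x0 - h * e)) / (2 * h) for e in np.eye(n)])
print(np.max(np.abs(fd - gradV1(x0))))   # finite differences agree with the lemma
```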
\begin{lemma}\label{lemma:hessianV}
\begin{equation}
\begin{aligned}
& \nabla^2 V_i(\boldsymbol{x};\eta)
= \nabla^2 f_i(\boldsymbol{x})
+ \eta \nabla^3 f_i(\boldsymbol{x}) [E_i \nabla f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta))] \\
& - (I - \eta \nabla^2 f_i(\boldsymbol{x}) E_i ) \nabla^2 f_i(\boldsymbol{y}(\boldsymbol{x};i,\eta)) (I - \eta E_i \nabla^2 f_i(\boldsymbol{x}))
\end{aligned}
\end{equation}
where $\nabla^3 f_i(\boldsymbol{x}) [d] = \lim_{\alpha \rightarrow 0} \frac{\nabla^2 f_i(\boldsymbol{x}+\alpha d) -
\nabla^2 f_i(\boldsymbol{x})}{\alpha}$ is the action of the third derivative along the direction $d$.
\end{lemma}
\begin{proof}
The proof follows from the chain rule of differentiation, along the
lines of the proof of Lemma~\ref{lemma:gradV}.
\end{proof}
\section{Residual Minimization}\label{sec:residual_min}
\input{residual.tex}
\section{Conclusions}
We presented a novel formulation for Nash equilibrium computation in multi-player games by introducing the Gradient-based Nikaido-Isoda (GNI) function. The GNI formulation for games allows individual players to locally improve their objectives using steepest descent while preserving local stability and convergence guarantees. We showed that the GNI function is a valid merit function for multi-player games and presented an approximate descent algorithm. We compared our method against several popular descent schemes on multiple game settings and empirically demonstrated that our method outperforms all other techniques. Future research will explore the GNI method in stochastic settings, which may enable its application to GAN optimization.
\subsection{Quadratic Objectives}\label{sec:quadobjs}
In the following, we consider the popular setting of quadratic objective functions and explore the
implications of Theorem 2. Note that the bilinear case is a special case of the quadratic objective.
Consider the $f_i(\boldsymbol{x})$'s to be quadratic. For this setting, \S\ref{sec:gniconvex} showed that
the GNI function $V_i(\boldsymbol{x})$ is a convex quadratic function
and that $V_i(\boldsymbol{x};\eta)$ has a
$(3L_f)$-Lipschitz continuous gradient. It is well known that
for a composition of a linear function with a strongly convex function, the
Polyak-{\L}ojasiewicz inequality holds~\cite{LuoTseng93}, i.e., there exists $\mu > 0$ such that
$V(\boldsymbol{x};\eta) \leq \frac{1}{2\mu} \|\nabla V(\boldsymbol{x};\eta)\|^2$.
Hence, we can state the following stronger result for quadratic
objective functions.
\begin{corollary}
Suppose $f_i(\boldsymbol{x})$ are quadratic and player convex, i.e. $f_i(\boldsymbol{x})$ is
convex in $x_i$. Let $\rho = \frac{1}{3 L_f N}$.
Then, the sequence $\{V(\boldsymbol{x}^k)\}$ converges linearly to 0, \emph{i.e.}
$\{\boldsymbol{x}^k\}$ converges to $\boldsymbol{x}^\star \in {\cal S}^{NE}$.
\end{corollary}
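The linear convergence asserted in the corollary mirrors the standard descent argument under the PL inequality. A minimal sketch, assuming an illustrative positive-definite quadratic $V$ (so the PL inequality holds with $\mu$ equal to the smallest eigenvalue): gradient descent with step $1/L$ contracts $V$ by at least a factor $1-\mu/L$ per iteration.

```python
import numpy as np

# Illustration of the linear convergence asserted above: gradient descent on a
# quadratic V satisfying the PL inequality contracts V by at least (1 - mu/L)
# per step. P and the dimension are illustrative; mu is the PL constant.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
P = A.T @ A + 0.5 * np.eye(5)        # positive definite, so PL holds
V = lambda x: 0.5 * x @ P @ x        # minimum value 0, attained at x = 0
gradV = lambda x: P @ x

L = np.linalg.eigvalsh(P)[-1]        # Lipschitz constant of gradV
mu = np.linalg.eigvalsh(P)[0]        # PL constant
x = rng.standard_normal(5)
vals = [V(x)]
for _ in range(50):
    x = x - (1.0 / L) * gradV(x)     # step size 1/L
    vals.append(V(x))

# Geometric decay: V(x_{k+1}) <= (1 - mu/L) V(x_k)
rate = 1 - mu / L
assert all(v1 <= rate * v0 + 1e-9 for v0, v1 in zip(vals, vals[1:]))
print(vals[0], vals[-1])
```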
\subsection{Bilinear Two-player Game:}
We consider the following two-player game:
\begin{equation}
f_1(x) = x_1^TQx_2 + q_1^T x_1 + q_2^T x_2 = -f_2(x) \label{eq:bilinear},
\end{equation}
where $f_1$ and $f_2$ are the players' payoff functions -- a setting explored in~\cite{VIGAN}. The GNI for this game leads to a convex objective. For GNI, we use a step-size $\eta=1/L$, where $L=\norm{Q}$, and $\rho=0.01$, while for other methods we use a stepsize of $\eta=0.001$\footnote{Other values of $\eta$ did not seem to result in stable descent.}. The methods are initialized randomly -- the initialization is seen to have little impact on the convergence of GNI, but it changes the convergence of the other methods drastically.
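A minimal sketch of GNI descent on the bilinear game is given below; the random $Q$, $q_1$, $q_2$, the dimension, and the iteration count are illustrative choices, and the gradient of $V$ is taken by finite differences for brevity (the experiments use the analytic gradient of Lemma~\ref{lemma:gradV}).

```python
import numpy as np

# Sketch of GNI descent on the bilinear game; Q, q1, q2, dimensions, and the
# iteration count are illustrative choices, not the paper's experimental data.
rng = np.random.default_rng(2)
d = 5
Q = rng.standard_normal((d, d))
q1, q2 = rng.standard_normal(d), rng.standard_normal(d)

f = [lambda x: x[:d] @ Q @ x[d:] + q1 @ x[:d] + q2 @ x[d:],
     lambda x: -(x[:d] @ Q @ x[d:] + q1 @ x[:d] + q2 @ x[d:])]   # f_2 = -f_1

def grad_f(i, x):
    g = np.concatenate([Q @ x[d:] + q1, Q.T @ x[:d] + q2])   # gradient of f_1
    return g if i == 0 else -g

E = [np.diag(np.r_[np.ones(d), np.zeros(d)]),    # per-player projectors E_i
     np.diag(np.r_[np.zeros(d), np.ones(d)])]
eta, rho = 1.0 / np.linalg.norm(Q, 2), 0.01      # eta = 1/L with L = ||Q||

def V(x):   # V(x) = sum_i f_i(x) - f_i(y(x; i, eta))
    return sum(f[i](x) - f[i](x - eta * E[i] @ grad_f(i, x)) for i in range(2))

def gradV(x, h=1e-6):   # finite-difference gradient of V, for brevity
    return np.array([(V(x + h * e) - V(x - h * e)) / (2 * h) for e in np.eye(2 * d)])

x = rng.standard_normal(2 * d)
v0 = V(x)
for _ in range(2000):
    x = x - rho * gradV(x)
print(v0, V(x))   # V decreases toward 0 (a stationary Nash point)
```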
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=4cm]{bilinear_conv.eps}}
\subfigure[]{\includegraphics[width=4cm]{bilinear_traj_conv.eps}}
\caption{(a) shows GNI against other methods for bilinear min-max game.~(b) shows convergence trajectories for 1-dimensional players. For (b), the initial point is shown in red diamond.}
\label{fig:bilinear}
\end{figure}
In Figure~\ref{fig:bilinear}(a), we plot the gradient convergence (using 10-d data). In this plot (and all subsequent plots of gradient
convergence), the norm of the gradient is
$\| \nabla f (\boldsymbol{x}^k) \| = \|(\nabla_1 f_1(\boldsymbol{x}^k),\ldots,\nabla_N f_N(\boldsymbol{x}^k)) \|$.
We see that GNI converges linearly, whereas the iterates of other methods, such as gradient descent and mirror descent, diverge, while extragradient and Adam converge slowly. To understand the descent better, in Figure~\ref{fig:bilinear}(b), we use $x_1,x_2\in\mathbb{R}^1$ and plot them at every 100-th iteration starting from the same initial point (shown by the red diamond). Interestingly, we find that the extragradient and mirror-descent methods show a circular trajectory, while Adam (with $\beta_1=0.9$ and $\beta_2=0.999$) takes a spiral convergence path. GNI takes a straighter trajectory, steadily decreasing to the optimum (shown by the blue straight line).
\begin{figure}[!ht]
\centering
\subfigure[non-convex QP]{\includegraphics[width=4cm]{quad_indefinite_v1.eps}}
\subfigure[convex-QP]{\includegraphics[width=4cm]{quad_conv_semidefinite_v1.eps}}
\caption{Convergence of GNI against other methods for Quadratic games. (a) Non-convex QP with indefinite Q matrices for each player, (b) convex QP with semi-definite Q matrices.}
\label{fig:quad_semidefinite}
\end{figure}
\subsection{Two-Player Quadratic Games:}
We consider two-player games (multiplayer extensions are trivial) with the payoff functions:
\begin{equation}
f_i(x) = \frac{1}{2} x^TQ_ix + r_i^T x\text{, for } i = 1,2 \label{eq:quadObj}
\end{equation}
where $Q_i \in \mathbb{R}^{n \times n}$ is symmetric. We consider cases where each $Q_i$ is indefinite (i.e., non-convex QP) or positive semi-definite. As with the bilinear case, all the QP payoffs result in convex GNI reformulations. We used 20-d data and, for GNI, the stepsizes $\eta=1/\max_i(\norm{Q_i})$ and $\rho=0.01$, while using $\eta=10^{-4}$ for the other methods. The players are initialized from $N(0,I)$.
In Figure~\ref{fig:quad_semidefinite}, we compare the descent on these quadratic games. We find that the competing methods struggle on the non-convex QP and almost all of them diverge, except Adam, which converges slowly. GNI is found to converge to the stationary Nash point (its reformulation being convex, as shown in~\S\ref{sec:gniconvex}). For the convex case, all methods converge. To gain insight, we plot the convergence trajectory for a 1-d convex quadratic game (i.e., $x_1,x_2\in \mathbb{R}^1$) in Figure~\ref{fig:quad_1d_convergence}. The initializations are random for both players and the parameters are equal. We see that all schemes follow similar trajectories, except for Adam and GNI -- all converging to the same point.
\begin{figure}[!h]
\centering
\includegraphics[width=4cm]{quad_1d_convergence_v1.eps}
\includegraphics[width=4cm]{quad_1d_trajectory_new_v1.eps}
\caption{Convergence of GNI against other methods on a convex $1-$d quadratic game. Left: the convergence achieved by different algorithms. Right: the trajectories of the two players to the NE.}
\label{fig:quad_1d_convergence}
\end{figure}
\subsection{Dirac Delta GAN}
This is a one-dimensional GAN explored in~\cite{VIGAN}. In this case, the real data is assumed to follow a Dirac delta distribution (with a spike at say point -2). The payoff functions for the two players are:
\begin{align}
f_1 =& \log(1+\exp(\theta x_1)) + \log(1+\exp(x_1 x_2))\nonumber\\
f_2 =& -\log(1+\exp(x_1x_2)),
\label{eq:deltagan}
\end{align}
where $\theta\in\mathbb{R}^1$ is the location of the delta spike. Unlike the other game settings described above, we do not have an analytical formula for the Lipschitz constant of the payoffs. To this end, we estimated it empirically (more details to follow). We used $L=2$, $\eta=\rho=1/L$, and initialized all players uniformly from $[0,4]$.
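A sketch of the payoffs in~\eqref{eq:deltagan} under plain gradient descent on the GNI function $V$ is given below; $\theta$, the outer step $\rho$, and the initial point are illustrative choices (the experiments use $\eta=\rho=1/L$ with $L=2$), and the gradient of $V$ is taken by central finite differences for brevity.

```python
import math

# Sketch of the Dirac-delta GAN payoffs with gradient descent on the GNI
# function V; gradients of V are taken by central finite differences for
# brevity, and theta, the outer step rho, and the initial point are
# illustrative (the text uses eta = rho = 1/L with L = 2).
theta = -2.0                                                 # delta spike location
sp = lambda t: math.log1p(math.exp(-abs(t))) + max(t, 0.0)   # stable log(1 + e^t)
sig = lambda t: 0.5 * (1.0 + math.tanh(t / 2.0))             # stable sigmoid
f1 = lambda x1, x2: sp(theta * x1) + sp(x1 * x2)
f2 = lambda x1, x2: -sp(x1 * x2)

eta, rho = 0.5, 0.1

def V(x1, x2):
    g1 = theta * sig(theta * x1) + x2 * sig(x1 * x2)   # df1/dx1
    g2 = -x1 * sig(x1 * x2)                            # df2/dx2
    return (f1(x1, x2) - f1(x1 - eta * g1, x2)) + (f2(x1, x2) - f2(x1, x2 - eta * g2))

x1, x2, h = 1.0, 3.0, 1e-6
v_start = V(x1, x2)
for _ in range(5000):
    d1 = (V(x1 + h, x2) - V(x1 - h, x2)) / (2 * h)
    d2 = (V(x1, x2 + h) - V(x1, x2 - h)) / (2 * h)
    x1, x2 = x1 - rho * d1, x2 - rho * d2
v_end = V(x1, x2)
print(v_start, v_end)   # V decreases along the descent
```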
Figure~\ref{fig:dirac-delta-gan} shows the convergence of the Dirac delta GAN game to a stationary Nash point. GNI achieves faster convergence than all other methods, despite its non-convex reformulation, in contrast to the bilinear and QP cases discussed above. The game has multiple local solutions, and the schemes may converge to different points depending on their initialization (see supplementary material for details).
\begin{figure}
\centering
\includegraphics[width=4.5cm]{vigan_convergence_v1.eps}
\caption{Convergence of GNI against other methods on the Dirac-Delta GAN.}
\label{fig:dirac-delta-gan}
\end{figure}
\begin{figure*}[!h]
\centering
\subfigure[Varying $\eta$, $\rho=1$]{\includegraphics[width=4.5cm]{dcgan_D_parameters_step_G.eps}}
\subfigure[varying $\eta$, $\rho=1$ ]{\includegraphics[width=4.5cm]{dcgan_D_parameters_step_D.eps}}
\subfigure[varying $\rho$, $\eta=0.1$ ]{\includegraphics[width=4.5cm]{dcgan_D_parameters_rho_G.eps}}
\caption{Study of the influence of the step sizes ($\rho$ and $\eta$) on the convergence of GNI reformulations for the linear GAN game.}
\label{fig:my_dcgan_parameter_study}
\end{figure*}
\begin{figure*}[!h]
\centering
\subfigure[Generator, $P_r=N(\mu,I)$]{\includegraphics[width=4.5cm]{dcgan_G.eps}}
\subfigure[Discriminator, $P_r=N(\mu,I)$]{\includegraphics[width=4.5cm]{dcgan_D.eps}}
\subfigure[Generator, $P_r=N(\mu,\Sigma)$]{\includegraphics[width=4.5cm]{dcgan_G_mu_sigma.eps}}
\caption{Convergence of GNI against other methods on the linear GAN two-player game. The real-data distribution is sampled from $N(\mu,I)$ for (a) and (b), while we use $N(\mu,\Sigma)$ for (c), where $\Sigma=\diag(\xi),\xi\sim U(0,1]$. Note that, when the optimization converges, the discriminator is expected to be confused between the real and fake data distributions (i.e., classification accuracy is 0.5).}
\label{fig:dcgan_convergence}
\end{figure*}
\subsection{Linear GAN}
We now introduce a more general GAN setup -- a variant of the non-saturating GAN described in~\cite{Goodfellow16}, but using a linear generator and discriminator. We designed this experiment to serve two key goals: (i) to expose the influence of the GNI hyperparameters in a more general GAN setting, and (ii) to show the performance of GNI in a setting for which it is harder to estimate a Lipschitz constant $L$. While our proposed setting is not a neural network, it allows us to understand the behavior of GNI when the non-linearities arising from the layers of a neural network are absent, and thereby study GNI in isolation.
\noindent\paragraph*{Experimental Setup:} The payoff functions are:
\begin{align}
f_1 =& -\!\mathbb{E}_{\theta\sim P_r}\!\!\log\left(x_1^T\theta\right)\!-\!\mathbb{E}_{z\sim P_z}\log\left(1-x_1^T\diag\left(x_2\right)z\right),\!\!\nonumber\\
f_2 =& -\!\mathbb{E}_{z\sim P_z}\log\left(x_2^T\diag\left(x_1\right)z\right),\label{eq:lineargan}
\end{align}
where $P_r$ and $P_z$ are the real and the noise data distributions, the latter being the standard normal distribution $N(0,I)$. The operator $\diag$ returns a diagonal matrix with its argument as its diagonal. We consider two cases for $P_r$: (i) $P_r=N(\mu,I)$ for a mean $\mu$ and (ii) $P_r=N(\mu,\Sigma)$ for a covariance matrix $\Sigma\in\mathbb{R}^{d\times d}$. In our experiments to follow, we use $\mu=2e$, $e$ being a $d$-dimensional vector ($d=10$) of all ones. We initialized $x_1=x_2=e/d$ for all the methods.
\noindent\paragraph*{Evaluation Metrics:} To evaluate the performance on various hyper-parameters of GNI, we define two metrics: (i) \emph{discriminator-accuracy}, and (ii) the \emph{distance-to-mean}. The discriminator-accuracy measures how well the learned discriminator classifies the two distributions, defined as:
\begin{equation}
\label{eq:dis_acc}
\dist{acc}\!\!=\!\!\frac{1}{2M}\!\!\sum_{i=1}^M \mathcal{I}(x_1^T\theta_i \geq \zeta) + \mathcal{I}\left(x_1^T\!\diag(x^i_2)z_i\leq (1-\zeta)\right),\notag
\end{equation}
where $\mathcal{I}$ is the indicator function, $M$ is the number of data points sampled from the respective distributions, and $\zeta\in[0,1]$ is a threshold for the indicator function. We use $\zeta=0.7$. While $\dist{acc}$ measures the quality of the learned discriminator, it does not tell us anything about the convergence of the generator. To this end, we present another measure to evaluate the generator, the~\emph{distance-to-mean}, which computes the distance of the generated distribution from the first moment of the true distribution, defined as:
\begin{equation}
\dist{mean} = \norm{\mathbb{E}_{z\sim P_z} {\diag(x_2)z} - \mathbb{E}_{\theta\sim P_r} \theta}.
\end{equation}
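The discriminator-accuracy metric above can be illustrated with a quick Monte Carlo estimate; the parameters $x_1$, $x_2$, $\mu$, and the sample size below are illustrative values chosen for the sketch, not the experiment's learned parameters.

```python
import numpy as np

# Monte Carlo illustration of the discriminator-accuracy metric; x1, x2, mu,
# and the sample size are illustrative values, not learned parameters.
rng = np.random.default_rng(3)
d, M, zeta = 10, 10_000, 0.7
mu = 2 * np.ones(d)                  # mean of the real-data distribution P_r
x1 = np.ones(d) / d                  # discriminator parameters (illustrative)
x2 = np.ones(d) / d                  # generator parameters (illustrative)

theta = mu + rng.standard_normal((M, d))   # theta ~ P_r = N(mu, I)
z = rng.standard_normal((M, d))            # z ~ P_z = N(0, I)
real_scores = theta @ x1                   # x1^T theta_i
fake_scores = (x2 * z) @ x1                # x1^T diag(x2) z_i
acc = (np.sum(real_scores >= zeta) + np.sum(fake_scores <= 1 - zeta)) / (2 * M)
print(acc)   # near 1: this discriminator separates the two samples
```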
\noindent\paragraph*{Hyper-parameter Study:} The goal of this experiment is to analyze the descent trajectory of GNI-based gradient descent as the hyper-parameters are changed. To this end, we vary $\eta$ and $\rho$ separately in the range $10^{-5}$ to $10$ in multiples of $10$, while keeping the other parameter fixed (we use $\eta=0.1$ and $\rho=1$ as the base settings). In Figure~\ref{fig:my_dcgan_parameter_study}, we plot the discriminator-accuracy and distance-to-mean against GNI iterations for the generator and discriminator separately. From Figures~\ref{fig:my_dcgan_parameter_study}(a) and (b), it appears that a higher value of $\eta$ biases the descent on the generator and the discriminator differently. For example, $\eta\geq 0.01$ leads to a sharp descent to the optimal solution of the discriminator, whereas $\eta>1$ leads to a generator breakdown (Figure~\ref{fig:my_dcgan_parameter_study}(a)). Similarly, a small value of $\rho$, such as $\rho<10^{-5}$, results in a high distance-to-mean, i.e., a weak generator, while $\rho=1$ leads to good descents for both the generator and the discriminator. We found that a higher $\rho$ leads to unstable descent, skewing the plots, and is thus not shown. In short, making the discriminator quickly converge to its optimum can lead to a better convergence trajectory for the generator in this linear GAN setup using the GNI scheme.
\noindent\paragraph*{Comparisons to Other Algorithms:}
In Figures~\ref{fig:dcgan_convergence}(a) and (b), we plot the distance-to-mean and discriminator-accuracy of the linear GAN using $\eta=0.1$ and $\rho=1$, and compare them to all other descent schemes. Interestingly, we found that Adam shows a different pattern of convergence, with the distance-to-mean steadily decreasing to zero; on closer inspection (Figure~\ref{fig:dcgan_convergence}(b)), we see that the discriminator-accuracy simultaneously goes to zero as well, suggesting the non-optimality of the descent. In contrast, GNI converges quickly. In Figure~\ref{fig:dcgan_convergence}(c), we plot the convergence when using a real data distribution $P_r=N(\mu,\Sigma)$, where $\mu\sim U(0,1)^d$ (a $d$-dimensional uniform distribution) and $\Sigma$ is a randomly-sampled diagonal covariance matrix. The descent in this more general setting looks similar to the one in Figure~\ref{fig:dcgan_convergence}(a).
\subsection{Convexity Properties of GNI: An Example}\label{sec:gniconvex}
In this section, we present an example NE reformulation of a (non-)convex game using the GNI setup. Suppose each player's objective is quadratic, \emph{i.e.,}
$f_i(\boldsymbol{x}) = \frac{1}{2} \boldsymbol{x}^T \boldsymbol{Q}_i \boldsymbol{x} + \boldsymbol{r}_i^T \boldsymbol{x}$. Then, the GNI function is
\begin{align}
& V_i(\boldsymbol{x}) = f_i(\boldsymbol{x}) - f_i(\boldsymbol{x} - \eta E_i (\boldsymbol{Q}_i \boldsymbol{x} + \boldsymbol{r}_i)) \\
=& \frac{1}{2} \boldsymbol{x}^T \left( \boldsymbol{Q}_i - \widehat{\boldsymbol{Q}}_i^T\boldsymbol{Q}_i \widehat{\boldsymbol{Q}}_i \right) \boldsymbol{x}
+ \eta \boldsymbol{r}_i^TE_i \boldsymbol{Q}_i(I + \widehat{\boldsymbol{Q}}_i) \boldsymbol{x} \nonumber \\
&\, + \frac{1}{2} \eta \boldsymbol{r}_i^T (2E_i - \eta E_i\boldsymbol{Q}_i E_i) \boldsymbol{r}_i \nonumber
\end{align}
where $\widehat{\boldsymbol{Q}}_i = (I - \eta E_i\boldsymbol{Q}_i)$. Suppose $\|\boldsymbol{Q}_i\| \leq L_f$ and let
$\eta \leq \frac{1}{L_f}$, then
\begin{equation}
(\boldsymbol{Q}_i-\widehat{\boldsymbol{Q}}_i^T\boldsymbol{Q}_i \widehat{\boldsymbol{Q}}_i)
=\eta(\boldsymbol{Q}_iE_i) (2I - \eta \boldsymbol{Q}_i) (E_i \boldsymbol{Q}_i) \succeq 0
\end{equation}
where the positive semidefiniteness holds since, for all $u$,
$u^T (\boldsymbol{Q}_iE_i) (2I - \eta \boldsymbol{Q}_i) (E_i \boldsymbol{Q}_i) u
= (E_i\boldsymbol{Q}_i u)^T (2I - \eta \boldsymbol{Q}_i) (E_i\boldsymbol{Q}_i u) \geq 0$,
with $2I - \eta \boldsymbol{Q}_i \succeq 0$ following from $\eta \leq 1/L_f$ and $\|\boldsymbol{Q}_i\| \leq L_f$.
Hence, when $f_i(\boldsymbol{x})$ is quadratic, the GNI function is a convex, quadratic function.
Note that the convexity of the GNI function holds regardless of the convexity of the original
function $f_i(\boldsymbol{x})$. However, for general nonlinear functions $f_i(\boldsymbol{x})$, the GNI function $V_i(\boldsymbol{x})$ does
not preserve convexity.
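The positive-semidefiniteness argument above is easy to confirm numerically, even for an indefinite $\boldsymbol{Q}_i$; the dimension and block split below are illustrative.

```python
import numpy as np

# Numerical confirmation that Q_i - Qhat_i^T Q_i Qhat_i is positive
# semidefinite for eta <= 1/L_f, even when Q_i is indefinite; the dimension
# and block split are illustrative.
rng = np.random.default_rng(4)
n, n1 = 6, 3
A = rng.standard_normal((n, n)); Qi = (A + A.T) / 2   # symmetric, generally indefinite
Lf = np.linalg.norm(Qi, 2)
eta = 1.0 / Lf

Ei = np.zeros((n, n)); Ei[:n1, :n1] = np.eye(n1)      # player-i projector (E_i^2 = E_i)
Qhat = np.eye(n) - eta * Ei @ Qi
M = Qi - Qhat.T @ Qi @ Qhat
# Identity used in the text: M = eta (Q_i E_i)(2I - eta Q_i)(E_i Q_i)
M2 = eta * (Qi @ Ei) @ (2 * np.eye(n) - eta * Qi) @ (Ei @ Qi)
assert np.allclose(M, M2)
assert np.linalg.eigvalsh(M).min() >= -1e-8           # PSD up to round-off
print(np.linalg.eigvalsh(M))
```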
\section{Introduction}\label{sec:introduction}
\input{intro.tex}
\section{Related Work}\label{sec:related_work}
\input{related_work.tex}
\section{Gradient-based Nikaido-Isoda Function}\label{sec:GNI}
\input{gnifunction.tex}
\subsection{GNI is Locally Stable}\label{sec:stability}
\input{gnilocal.tex}
\input{gniconvexity.tex}
\section{Descent Algorithm for GNI}\label{sec:algorithm}
\input{descentGNI.tex}
\section{Modified Descent Algorithm for GNI}\label{sec:moddescent}
\input{modifieddescent.tex}
\section{Experiments}\label{sec:expts}
\input{expts.tex}
\input{conclude.tex}
\subsection{Convergence Rate for Bilinear and Quadratic Games:}
We provide plots that suggest a linear convergence rate for bilinear and strongly-convex quadratic games, as described in the main paper. For both cases we use $20$-d variables for both players, initialized arbitrarily. From the plots shown in Figure~\ref{fig:convergence_rate}, we observe that the $V$ function decays linearly to close to zero and then slows down as the gradient of $V$ starts to vanish (as suggested by Theorem 2 in the main paper). Note that the linear-convergence guarantees are for the $V$ function (and not for $\nabla f$), so we skip plots of $\nabla f$.
\begin{figure}[!h]
\centering
\includegraphics[width=4cm]{convergeplot_V_bilinear.eps}
\includegraphics[width=4cm]{convergeplot_V_quad_convex.eps}
\caption{Convergence rate for bilinear and convex quadratic games using the GNI method. Left: Decay of $V$ function for Bilinear Game. Right: Decay of $V$ function for Strongly-convex Quadratic Game.}
\label{fig:convergence_rate}
\end{figure}
\subsection{Two-Player Quadratic Games:}
We describe an experiment for non-convex two-player quadratic games with indefinite $Q$ matrices for both players. We show the decay of the gradient and of the $V$ function for the GNI formulation. The other optimization algorithms diverge in the indefinite case (as shown in the main text) and are thus not shown here. We used $50$-d data and the same stepsizes $\eta=1/\max_i(\norm{Q_i})$ and $\rho=0.01$ for GNI. The methods are initialized randomly from $N(0,I)$. For clarity, we show the plot on a log scale. As can be observed from the plots in Figure~\ref{fig:quad_indefinite_convergence}, $\nabla f$ goes to zero as $V$ goes to zero.
\begin{figure}[!h]
\centering
\includegraphics[width=4cm]{quad_conv_indefinite_supp.eps}
\includegraphics[width=4cm]{quad_conv_indefinite_grad_f.eps}
\caption{Convergence of GNI method for non-convex quadratic game setting shown on a semi-log plot for clarity. Left: Decay of $V$ function. Right: Decay of the $\nabla f$.}
\label{fig:quad_indefinite_convergence}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=4cm]{vigan_convergence_2.eps}
\includegraphics[width=4cm]{vigan_convergence_2-traj.eps}
\caption{Convergence of GNI against other methods on the Dirac-Delta GAN. Left: Convergence of different methods seen by the decay of $\nabla f$. Right: Trajectory of the two players to the optima. }
\label{fig:ddgan_convergence_with}
\end{figure}
\begin{figure*}[ht]
\centering
\begin{tabular}{||c c c c c c c||}
\hline
Algorithm & GNI & Adam & ExGrad & Grad & OMD &ExPol \\ [0.5ex]
\hline\hline
Mean Error & $2.11$ & $0.64$ & $2.17$ & $2.18$ & $1.87$ & $1.87$\\
\hline
Mean number of Iterations & $77$ & $3048$ & $10000$ & $10000$ & $10000$ & $10000$ \\
\hline
\end{tabular}
\caption{\label{tab:error_stats}Error Statistics for GNI compared against other techniques for the Dirac Delta GAN}
\end{figure*}
\subsection{Linear GAN:} We also show some additional results for the Linear GAN that suggest convergence of the proposed method to a NE. The second derivative of the objective function for both players is positive semidefinite (see Equation $(23)$ in the main paper), indicating that all stationary points are minima. In the plots in Figure~\ref{fig:linear_gan_convergence}, we show the convergence of the $V$ function and of $||\nabla f||$ for the GNI formulation. We observe very fast convergence for both $V$ and $||\nabla f||$, indicating convergence to a SNP. Additionally, since all SNPs are NEs in this particular setting, GNI converges to a NE.
\begin{figure}[h]
\centering
\includegraphics[width=4cm]{convergence_V_LGAN.eps}
\includegraphics[width=4cm]{convergence_gradf_LGAN.eps}
\caption{Convergence of $V$ function and $||\nabla f||$ for the Linear GAN discussed in the main paper. Left: Decay of $V$ function. Right: Decay of the $\nabla f$.}
\label{fig:linear_gan_convergence}
\end{figure}
\subsection{Dirac Delta GAN:}
In this section, we show another experiment for the Dirac Delta GAN that was discussed in the main text. All parameters for all optimizers are kept the same as in the main text. In Figure~\ref{fig:ddgan_convergence_with}, we see the convergence of $\nabla f$ as well as the trajectories followed by the two players to the NE. All the methods converge to the same optimum; however, GNI converges faster than any other method. As in the convex quadratic case, all descent methods follow the same trajectory except for GNI and Adam. However, we observed that GNI and the other algorithms do not converge to the same solution when initialized arbitrarily. To investigate this, we performed an experiment where the game was initialized randomly from $1000$ initial conditions in the square region $[-4,4]\times [-4,4]$. The error from the ground truth was computed after $10000$ iterations or upon convergence (whichever came first). Results of the experiment are shown in the table in Figure~\ref{tab:error_stats}. We observe that the game does not converge to the known ground truth -- Adam gets closest to the ground truth, while GNI converges to a stationary Nash point much faster than all other algorithms. This behavior can be explained by recalling that GNI descends on the $V$ function and thus converges to the closest stationary Nash point, where $V$ vanishes.
\section{Introduction}
Lattice gauge actions are usually constructed from the link
variable $U$. Similarly, the gauge field tensor is constructed
from the clover-leaf like plaquettes.
The advent of Neuberger's overlap operator~\cite{neu98} has
implications in a new direction. The overlap Dirac operator was
developed to solve the fermion chirality problem in lattice
QCD~\cite{neu01}. It does not have $O(a)$ error for the fermion
operators. The small $O(m^2 a^2)$ errors and non-perturbative
renormalization via current algebra relations and Ward identities
make it a desirable fermion formulation for both light and heavy
quarks~\cite{liu05}. An index theorem was formulated by
Hasenfratz, Laliena, and Niedermayer~\cite{hln98} based on the
operator at finite cut-off. It was then pointed out by L\"{u}scher
that the anomalous behavior of the fermion partition function
under a flavor-singlet transformation is expressed by the index of
the overlap Dirac operator arising from the Jacobian,
providing a clear understanding of the exact index
theorem~\cite{lus98,hln98} in the path-integral formalism.
Subsequent developments include the explicit derivation of the
local lattice topological charge in terms of the local overlap
operator via a weak coupling expansion by Kikukawa and
Yamada~\cite{ky99}, and explicit calculations without gauge coupling
expansion by Adams~\cite{ada02}, Fujikawa~\cite{fuj99} and
Suzuki~\cite{suz99}, i.e.
\begin{equation} \label{top}
q_L(x) = {\rm tr}_{cs}\, \gamma_5\left(1 - \frac{a}{2}\, D_{ov}(x,x)\right)
\end{equation}
where $D_{ov}$ is the overlap operator and the trace is over the
color and Dirac spin indices. The lattice topological charge
operator $q_L(x)$ so defined approaches the topological charge
density $q(x)$ at the continuum limit,
\begin{equation}
q_L(x) \,\,\stackrel{a \rightarrow 0}{\longrightarrow}\,\,
a^4 q(x) + O(a^6).
\end{equation}
This formulation of the topological charge operator has been used
not only to check the Atiyah-Singer index theorem at finite
lattice cut-off~\cite{zbb02} and the topological
susceptibility~\cite{dgp05}, but also to study
the local topological structure of the
vacuum~\cite{hdd02,hdd03a,hdd03b,hov05,haz05a,haz05b,ahz05}. It is
with this operator that the sub-dimensional coherent sign
structures have been discovered in 4-D
QCD~\cite{hdd03b,hov05,haz05a,haz05b} and 2-D CP(N-1)
model~\cite{alt05}. It is argued that the chiral filtering of the
overlap fermion is responsible for optimally filtering out the
ultraviolet fluctuation to allow the structures to be
revealed~\cite{hor06a}. Indeed, conventional topological
charge operators constructed from the gauge links were used, but
could not decipher the curvilinear structure observed with the
overlap operator~\cite{alt05}. This raises the possibility that
the gauge field tensor and gauge action derived from the overlap
operator might be good alternatives to those built from the gauge links
directly. Recently, I. Horv\'{a}th~\cite{hor06b} proposed a
formulation of lattice QCD wherein all elements of the theory
(gauge action, fermionic action, theta-term, and other operators)
are constructed from a single object, namely the lattice Dirac
operator $D_{ov}$. In this talk, I will present the results of
the derivation of the gauge field tensor as the classical
continuum limit from the overlap operator. The detailed
derivation~\cite{lah06} and the numerical evaluation~\cite{ahl06}
are under preparation and will be posted on the arXiv soon. I will
also discuss how to simulate such an action with the Hybrid Monte
Carlo algorithm.
\section{Gauge Field Tensor}
To begin with, we note that a gauge covariant operator
which is a functional of $U$ satisfies the condition
\begin{equation} \label{cov}
O(g^{-1}Ug) = g^{-1}O(U)g,
\end{equation}
where $g$ is the local gauge transformation. It is obvious that
the local operator with the color trace ${\rm tr}_c\, O(U)(x,x)$
is gauge invariant
\begin{equation} \label{inv}
{\rm tr}_c\, O(g^{-1}Ug)(x,x) = {\rm tr}_c\,
g^{-1}(x)O(U)(x,x)g(x) = {\rm tr}_c\, O(U)(x,x).
\end{equation}
Since the overlap operator, being a Dirac fermion operator, is
gauge covariant and is not ultra-local, it is expected that the
classical continuum limit of the trace in both the color and spin
indices ${\rm tr}_{cs}\, \Gamma D_{ov}(x,x)$ will be the lowest
dimensional local gauge operator which reflects the Lorentz
structure of the gamma matrix $\Gamma$. Thus, it is not surprising
that ${\rm tr}_{cs}\, \gamma_5 D_{ov}(x,x)$ gives the local
topological charge density at the continuum limit. Therefore, one
expects that~\cite{hor06b}
\begin{equation} \label{action}
{\rm tr_{cs}}\, (D_{ov}(x,x)-D_{ov}^0(x,x))
\,\,\stackrel{a \rightarrow 0}{\longrightarrow}\,\, a^4\, {\rm tr_c}\,
F_{\mu\nu}^2 + O(a^6),
\end{equation}
where $D_{ov}^0$ is the non-interacting overlap operator with
gauge link $U$ set to unity. This has been verified with constant
field and with numerical evaluation~\cite{ahl06}.
In addition to gauge invariant operators, one can obtain gauge
covariant operators as well. Since $D_{ov}(x,x)$ is gauge
covariant and a Lorentz scalar, one expects that
${\rm tr_s}\,\sigma_{\mu\nu}D_{ov}(x,x)$ with the spin index traced over
will result in a lowest dimensional gauge covariant operator with
the $\mu\nu$ indices in the classical continuum limit which is
just the gauge field tensor~\cite{hor06b}. In other words,
\begin{equation}
{\rm tr_{s}}\,
\sigma_{\mu\nu}D_{ov}(x,x) \,\,\stackrel{a \rightarrow 0}{\longrightarrow}\,\, a^2 F_{\mu\nu} + O(a^4).
\end{equation}
We should note that the possibility of obtaining the lattice gauge
field tensor from the overlap operator by expanding in $a$ was
first pointed out by Niedermayer~\cite{nie99}. Also, it was
suggested by Gattringer~\cite{gat02} that the field tensor can be
defined through the square of various lattice Dirac operators,
i.e. ${\rm tr}_s\, \gamma_{\mu}\gamma_{\nu}D^2(x,y)$ with a weighted sum
over $y$.
We have derived the classical limit following Adams~\cite{ada02}
and Suzuki's method~\cite{suz99}. While details of the derivation
will be given elsewhere~\cite{lah06}, we shall present the results
here.
\begin{eqnarray}
&& {\rm tr}_{s} \,\sigma_{\mu\nu}\, a D_{ov}(x,x)
\;\stackrel{a \rightarrow 0}{\longrightarrow}\; \nonumber \\
&&\int_{-\pi}^{\pi}\frac{d^4k}{(2\pi)^4}\frac{-2}{(Z^{\dagger}Z)^{3/2}}
\left[\left(m+r\sum_{\lambda}(c_{\lambda}
-1)\right)c_{\mu}c_{\nu}+2rc_{\mu}s_{\nu}^2\right]a^2F_{\mu\nu}(x)+
O(a^4),
\end{eqnarray}
where $s_{\mu} = \sin k_{\mu}$, $c_{\mu} = \cos k_{\mu}$ and
\begin{equation}
Z^{\dagger}Z = \sum_{\nu}s_{\nu}^2 + \left[ m+
r\sum_{\mu}(c_{\mu} -1)\right]^2.
\end{equation}
For the case where $r=1, m= 1.368$ (which
corresponds to Wilson $\kappa = 0.19$ in the kernel), the
proportionality constant is $-0.08156$ for the overlap
fermion~\cite{lah06}. This has been numerically
verified~\cite{ahl06}. One can try to use this operator to
calculate glue properties, such as the glue momentum and angular
momentum fractions in the nucleon, to see if it performs better than
the glue operators constructed from the link variables, as is
the case for the local topological charge.
\section{Monte Carlo Simulation}
Take the proportionality constant for the classical continuum limit
in Eq. (\ref{action}) to be $c$ which has been
evaluated~\cite{ahl06} as a function of the negative mass
parameter $\rho$ in $D_W$, the kernel of the overlap operator
$D_{ov}$. Then the lattice gauge action can be written as
\begin{equation}
S_g = \frac{1}{2cg^2}{\rm Tr}\, (D_{ov}-D_{ov}^0{}),
\end{equation}
where ${\rm Tr}$ denotes the trace over all indices, e.g. color, spin,
and space-time. For a given lattice, the trace of the free quark
overlap operator, ${\rm Tr}\, D_{ov}^0$, is a constant which has
no effect on the Markov process used to obtain equilibrium gauge
configurations, so we shall drop it. Noting that $D_{ov}$ obeys
$\gamma_5$ hermiticity and satisfies the Ginsparg-Wilson relation, we
have
\begin{equation}
S_g = \frac{1}{2cg^2}{\rm Tr}\, D_{ov} = \frac{1}{4cg^2}{\rm
Tr}\,(D_{ov}+ D_{ov}^{\dagger}) = \frac{1}{4cg^2}{\rm Tr}\,
D_{ov}^{\dagger}D_{ov}.
\end{equation}
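These trace identities can be spot-checked numerically on a toy overlap operator. The sketch below is a hypothetical illustration, not part of the original derivation: it builds $D = 1 + \gamma_5\,\epsilon(H)$ from a random Hermitian kernel $H$ (the conventional normalization $\rho = 1$), for which $\epsilon(H)^2 = 1$ implies $D^{\dagger}D = D + D^{\dagger}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy dimension

# gamma_5: a Hermitian involution (diagonal +/-1 in this toy basis)
g5 = np.diag(np.array([1.0] * 4 + [-1.0] * 4))

# Random Hermitian stand-in for the overlap kernel H = gamma_5 D_W
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2

# Matrix sign function epsilon(H) via eigendecomposition
w, V = np.linalg.eigh(H)
eps = V @ np.diag(np.sign(w)) @ V.conj().T

# Overlap operator with rho = 1: D = 1 + gamma_5 * epsilon(H)
D = np.eye(n) + g5 @ eps

t1 = np.trace(D).real                      # Tr D
t2 = 0.5 * np.trace(D + D.conj().T).real   # (1/2) Tr (D + D^dag)
t3 = 0.5 * np.trace(D.conj().T @ D).real   # (1/2) Tr (D^dag D)
```

All three traces agree, matching the chain of equalities above up to the overall coupling factors.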
Now the lattice QCD partition function is
\begin{equation} \label{partition}
Z = \int {\mathcal{D}}U\,d\bar{\psi}_fd\psi_f e^{-S_g + \sum_{f
=1}^{N_f} \bar{\psi}_f(D_{ov}(m_f))\psi_f},
\end{equation}
where $D_{ov}(m_f)$ is the overlap operator for a quark with mass
$m_f$
\begin{equation}
D_{ov}(m_f) = \rho D_{ov} + m_f (1 - \frac{1}{2}D_{ov}),
\end{equation}
where $\rho$ is the negative mass parameter in $D_W$. Since
$e^{{\rm Tr}\, M} = e^{{\rm Tr} \ln e^M} = \det e^M$, the gauge part of the
partition function in Eq. (\ref{partition}) can be written in
terms of a fictitious fermion field $\psi_g$ so that
\begin{equation}
Z = \int {\mathcal{D}}U d\bar{\psi}_gd\psi_g d\bar{\psi}_fd\psi_f
e^{\bar{\psi}_g( e^{-\frac{1}{4cg^2}
D_{ov}^{\dagger}D_{ov}})\psi_g + \sum_{f =1}^{N_f}
\bar{\psi}_f(D_{ov}(m_f))\psi_f}.
\end{equation}
After integration of the fermion fields, it can be
written as
\begin{equation}
Z = \int {\mathcal{D}}U \det(e^{-\frac{1}{4cg^2}
D_{ov}^{\dagger}D_{ov}})\prod_{f=1}^{N_f} \det(D_{ov}(m_f)).
\end{equation}
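The identity $e^{{\rm Tr}\, M} = \det e^M$ used in this rewriting can be spot-checked on a random matrix; the sketch below is purely illustrative (a generic complex matrix is diagonalizable, so $e^M$ follows from its eigendecomposition).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# e^M via eigendecomposition (a generic matrix is diagonalizable)
w, V = np.linalg.eig(M)
expM = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

lhs = np.exp(np.trace(M))   # e^{Tr M}
rhs = np.linalg.det(expM)   # det e^M
```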
We see that the gauge action plays the role of a UV filter for
the fermion action (note that $c > 0$ for the range of parameters
for the Wilson kernel in the overlap operator with $r = 1$ and $2
> m > 1$~\cite{ahl06}). The efficiency of a UV-filtered fermion determinant
has been studied by Duncan, Eichten, and Thacker~\cite{det99} and
by Borici~\cite{bor02}.
One way to carry out the Monte Carlo simulation is to use
pseudofermions to simulate the determinant. For example, one can
split the gauge determinant equally and attach the pieces to the
fermion determinants of the different flavors. In terms of the
pseudofermions, it is
\begin{equation}
Z=\int {\mathcal{D}}U\, d\phi_f^{*}\,d\phi_f
\,e^{-\sum_{f=1}^{N_f}\phi_f^{*}e^{\frac{1}{4cN_fg^2}
D_{ov}^{\dagger}D_{ov}}D_{ov}^{-1}(m_f)\phi_f}.
\end{equation}
We shall discuss two ways to approximate the pseudofermion action.
Since $D_{ov}$ is normal, i.e. $[D_{ov}^{\dagger}, D_{ov}] = 0$,
one can write
\begin{equation}
\phi_f^{*}\,e^{\frac{1}{4cN_fg^2}
D_{ov}^{\dagger}D_{ov}}D_{ov}^{-1}(m_f)\phi_f =
\phi_f^*\,e^{\frac{1}{8cN_fg^2}D_{ov}^{\dagger}D_{ov}}D_{ov}^{-1}(m_f)
e^{\frac{1}{8cN_fg^2}D_{ov}^{\dagger}D_{ov}}\phi_f.
\end{equation}
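The symmetric split relies only on normality, so that $e^{\beta D_{ov}^{\dagger}D_{ov}}$ commutes with any function of $D_{ov}$, in particular with $D_{ov}^{-1}(m_f)$. A toy numerical sketch (random normal matrix; the coefficient $\alpha$ below is an illustrative stand-in for $1/(4cN_fg^2)$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
alpha = 0.1  # illustrative stand-in for 1/(4 c N_f g^2)

# A random normal matrix D = V diag(z) V^dag (unitary V, complex spectrum z)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V, _ = np.linalg.qr(A)
z = rng.normal(size=n) + 1j * rng.normal(size=n) + 2.0  # shifted off zero
D = V @ np.diag(z) @ V.conj().T

def exp_DdagD(coef):
    # e^{coef * D^dag D}, exact for this normal D
    return V @ np.diag(np.exp(coef * np.abs(z) ** 2)) @ V.conj().T

Dinv = np.linalg.inv(D)
full = exp_DdagD(alpha) @ Dinv
split = exp_DdagD(alpha / 2) @ Dinv @ exp_DdagD(alpha / 2)
```

The unsplit and symmetrically split forms agree to machine precision, as the commutation argument requires.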
The range of eigenvalues of $D_{ov}^{\dagger}D_{ov}$ is from 0 to
$4\rho^2$. If $\frac{1}{2cN_fg^2}$ is about unity or less, one can
consider the Chebyshev polynomial approximation to degree $M$
\begin{equation}
e^{\frac{1}{8cN_fg^2}D_{ov}^{\dagger}D_{ov}} \sim \sum_{i=1}^M c_i
(D_{ov}^{\dagger}D_{ov})^i.
\end{equation}
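As a scalar sketch of this approximation (the coefficient $\alpha$ below is an illustrative stand-in for $1/(8cN_fg^2)$): since $D_{ov}^{\dagger}D_{ov}$ is Hermitian with spectrum in $[0,4\rho^2]$, the scalar error of the fit on that interval bounds the error of the corresponding matrix polynomial.

```python
import numpy as np
from numpy.polynomial import Chebyshev

rho = 1.0
alpha = 0.5  # illustrative stand-in for 1/(8 c N_f g^2)
deg = 12     # polynomial degree M

# Chebyshev fit of exp(alpha*x) on the spectral interval [0, 4 rho^2]
x = np.linspace(0.0, 4.0 * rho**2, 2001)
f = np.exp(alpha * x)
cheb = Chebyshev.fit(x, f, deg)
err = float(np.max(np.abs(cheb(x) - f)))  # sup-norm error on the interval
```

For these illustrative values the degree-12 fit is already accurate to well below $10^{-9}$, reflecting the fast decay of Chebyshev coefficients of the exponential.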
Alternatively, one can perform a Chebyshev rational polynomial
approximation for the operator $e^{\frac{1}{4cN_fg^2}
D_{ov}^{\dagger}D_{ov}}D_{ov}^{-1}(m_f)$ to degree $N$ for the
Rational Hybrid Monte Carlo algorithm (RHMC)~\cite{hks99}. For the
case of $2+1$ flavors, the pseudofermion action for the 2
degenerate flavors can be approximated by
\begin{equation} \label{pf2}
\phi^*\,e^{\frac{1}{6cg^2}D_{ov}^{\dagger}D_{ov}}(D_{ov}^{\dagger}
D_{ov}(m_f))^{-1}\phi \sim \phi^* \sum_{i=1}^N
\frac{a_i}{D_{ov}^{\dagger}D_{ov} + b_i}\phi,
\end{equation}
and the single-flavor one is approximated by
\begin{equation} \label{pf1}
\phi^*\,e^{\frac{1}{12cg^2}D_{ov}^{\dagger}D_{ov}}(D_{ov}^{\dagger}
D_{ov}(m_f))^{-1/2}\phi \sim \phi^* \sum_{i=1}^N
\frac{c_i}{D_{ov}^{\dagger}D_{ov} + d_i}\phi.
\end{equation}
In this case, the forces in the equation of motion in HMC come
from the effective pseudofermion actions in Eqs. (\ref{pf2}) and
(\ref{pf1}) which represent the combined gauge and fermion
forces.
If one uses a multi-mass algorithm for inversion and the coefficients
$b_i$ and $d_i$ are not smaller than $m_f^2$, the overhead of
incorporating the gauge action in RHMC is negligible compared to
the ordinary $2+1$ flavor simulation in HMC with the same
inverter.
One can of course accelerate the Rational Hybrid Monte Carlo
algorithm (RHMC) by splitting the determinant into fractional
flavors~\cite{ck04,jhl03} with more pseudofermion fields and
improve the overall efficiency as shown by Clark and
Kennedy~\cite{ck04}.
In summary, we have discussed possible Monte Carlo simulations of
the lattice gauge action from the trace of the overlap operator,
i.e. ${\rm Tr}\, D_{ov}$ together with the overlap fermion action
in the context of HMC. By virtue of the fact that the overlap
operator is exponentially local, the gauge action so defined is
expected to behave like a chirally smeared action. Furthermore,
the integrand of the gauge part of the partition function, written
in terms of a determinant, appears to be a UV filter for the
fermion determinant. We should note that, similar to the overlap
fermion action, this gauge action is not reflection positive. Also
presented is our derived result of the lattice gauge field tensor
as the classical continuum limit of ${\rm tr_s}\,\sigma_{\mu\nu}
D_{ov}(x,x)$. This can be used to calculate glue matrix elements
in the hadrons and possibly glueballs.
This work is partially supported by DOE grant DE-FG05-84ER40154.
We wish to thank I.~Horv\'{a}th, A.~Alexandru and A.~Kennedy for
stimulating discussions. The author also wishes to thank the
Institute of Physics, Academia Sinica, Taipei, Taiwan, the National
e-Science Center, Edinburgh, and DESY-Zeuthen, Germany for their
hospitality; the talk was prepared and these proceedings written up
during visits to these places.
\section{Introduction}
The maximal subgroup problem for almost simple groups became a major focus for research
in group theory in the 1980s, and remains so today. In the case of the sporadic groups,
a systematic attack on the problem began earlier,
with Livingstone and his students in the 1960s.
The problem was solved in the 20th century
for $25$ of the $26$ sporadic
simple groups, and their automorphism groups, but one case, namely
the Fischer--Griess Monster group $\MM$, remains outstanding.
A great deal of work on this case has already been done.
The maximal $p$-local
subgroups were classified in \cite{oddlocals,MSh,Meier}, and some theoretical work on
non-local subgroups was accomplished in \cite {Anatomy1,Anatomy2}.
Following successful computer constructions of the Monster \cite{3loccon,2loccon}
other techniques became available, and more progress was made
\cite{post,A5subs,S4subs,L241,L227,L213B,Sz8A},
including discovery of five previously unknown
maximal subgroups, isomorphic to
\begin{itemize}
\item $\PSL_2(71)$, $\PSL_2(59)$,
$\PSL_2(41)$, $\PGL_2(29)$, $\PGL_2(19)$.
\end{itemize}
The cases left open by this published work are
possible maximal subgroups with socle isomorphic to one of the following simple groups:
\begin{itemize}
\item $\PSL_2(8)$, $\PSL_2(13)$, $\PSL_2(16)$, $\PSU_3(4)$, $\PSU_3(8)$.
\end{itemize}
Of these, $\PSL_2(8)$ and $\PSL_2(16)$ have been classified in unpublished work
of P. E. Holmes, although the results seem not to be publicly available.
In this paper we deal with the case $\PSU_3(8)$.
Specifically, we show that, up to conjugacy, there is a unique
subgroup $\PSU_3(8)$ in the Monster. Its normalizer is the
already known maximal subgroup $(A_5\times \PSU_3(8){:}3){:}2$.
Notation
follows \cite{Atlas,FSG}, where required
background information can be found.
\section{Existence}
Exactly one conjugacy class of subgroups of $\mathbb M$ isomorphic to
$\PSU_3(8)$ is contained in the known maximal subgroups. The normalizer of such
a group is $(A_5\times\PSU_3(8){:}3){:}2$, itself a maximal subgroup of $\mathbb M$.
For details, see \cite{Anatomy1}.
\section{Strategy for proving uniqueness}
The group $\PSU_3(8)$ can be generated from a group $\PSL_2(8)\times 3$ by
extending $D_{18}\times 3$ to $(9\times 3){:}S_3$. Note that $9\times 3$
contains three cyclic subgroups of order $9$, which are permuted by the $S_3$.
Similarly, there are three complements of order $3$, which are also permuted
by the $S_3$. Hence it is
sufficient to extend $9\times 3$ to $D_{18}\times 3$ normalizing one of the other
two cyclic subgroups of order $9$.
We note in particular that all cyclic groups of order $9$ in $\PSU_3(8)$
are conjugate, and hence we need only consider subgroups $\PSL_2(8)\times 3$
in which
the diagonal elements of order $9$ are
conjugate in the Monster
to the elements of order $9$ inside $\PSL_2(8)$. We shall show that there is
only one class of $\PSL_2(8)\times 3$ in the Monster
that satisfies this condition.
Moreover, the cyclic group of order $9$ extends to a unique
$D_{18}$ in $\PSU_3(8)$.
Hence the $D_{18}\times 3$ we wish to construct is conjugate
in the Monster to the one inside $\PSL_2(8)\times 3$.
\section{The subgroup $3\times\PSL_2(8)$}
Since $\PSL_2(8)$ contains elements of order $9$, the elements of order $3$
fuse to $\mathbb M$-class $3B$. Since it contains a pure $2^3$, the involutions are
in $\mathbb M$-class $2B$.
In \cite{Anatomy1} Norton accounts for many of the structure constants of type
$(2,3,7)$ in the Monster. In particular he shows that there is no $3\times\PSL_2(8)$
in which the $\PSL_2(8)$ is of type
$(2B,3B,7B)$. He also shows that there are three classes of $\PSL_2(8)$
of type $(2B,3B,7A)$, just two of which centralize elements of order $3$.
The respective normalizers are:
\begin{enumerate}
\item $\PSL_2(8){:}3\times 3S_6$.
Here the central $3$ in $3A_6$ is in Monster class $3A$, as are the $3$-cycles.
The elements mapping to fixed-point-free $3$-elements in $A_6$ are in Monster class $3C$.
\item $\PSL_2(8)\times 2\times S_4$. Here, the
elements of order $3$ in $S_4$ are in Monster class $3A$.
\end{enumerate}
Hence there are exactly four classes of $\PSL_2(8)\times 3$ in the Monster.
\section{Fusion of elements of order $9$}
Consider first the case where the central elements of $3\times \PSL_2(8)$ are in
class $3C$ in the Monster.
We restrict the character of degree $196883$ to $S_3\times \Th$. Using the character
values on $3C$ and $2A$, we obtain a decomposition as
$$2\otimes 65628 + 1^+\otimes 34999 + 1^-\otimes 30628$$
where the first factor denotes the representation of $S_3$. The values on classes $2A$
and $7A$ of $\Th$ are easily computed:
$$\begin{array}{rrr}
1A & 2A & 7A\cr
34999 & 183 & 13\cr
30628 & -92 & 3\cr
65628 & 92 & 17
\end{array}$$
from which it is easy to see that the decomposition into irreducibles of $\Th$ is given by
\begin{eqnarray*}
34999 &=& 30875+4123+1\\
30628&=& 30628\\
65628&=& 61256+4123+248+1
\end{eqnarray*}
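These decompositions are at least dimension-consistent, as a quick arithmetic check confirms (the $2\otimes 65628$ piece contributes twice to the total dimension $196883$):

```python
# Dimension bookkeeping for the restriction of the 196883-dim character
decomp = {
    34999: [30875, 4123, 1],
    30628: [30628],
    65628: [61256, 4123, 248, 1],
}
consistent = all(sum(parts) == total for total, parts in decomp.items())
grand_total = 34999 + 30628 + 2 * 65628  # the 2-dim S3 factor counts twice
```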
It then follows that the values of the character of degree $196883$ on elements of
$\Th$-class $9A$, $9B$, and $9C$ are respectively $-1$, $-1$, $26$, while the
values on the corresponding diagonal elements in $3\times\Th$ are $26$, $26$, and
$-1$ respectively. In other words, the diagonal elements are always in a
different conjugacy class from the elements in $\Th$. Hence this case is
eliminated. (In fact, in this case the $\PSL_2(8)$ contains elements of $\Th$ class $9C$,
that is, Monster class $9A$.)
The remaining three classes of $\PSL_2(8)\times 3$, namely the ones with a central
$3A$-element, are contained in the double cover of
the Baby Monster. The work in \cite{maxB} then shows that in these cases the
elements of order $9$ in $\PSL_2(8)$ are in Baby Monster class $9B$, so
Monster class $9A$. Moreover, in two of the three cases, the diagonal elements
of order $9$ are in Baby Monster class $9A$, so Monster class $9B$. But in $\PSU_3(8)$
these two classes of elements of order $9$ are fused. Hence these cases cannot
extend to $\PSU_3(8)$ in the Monster.
The remaining case therefore is
a $3A$-type, with normalizer $\PSL_2(8){:}3\times 3S_6$.
We know there exists such a subgroup
$\PSU_3(8)$ in the Monster, so all
elements of order $9$ fuse to $9A$.
\section{The centralizer of a $9A$ element}
From \cite{oddlocals}, the centralizer of a $9A$-element in the Monster has shape
$[3^7].\PSU_4(2)$. Looking more closely, we see that the structure is the central product of
the cyclic group of order $9$ with a group of shape $3^6\udot \Omega_5(3)$, in which the
action of $\Omega_5(3)$ on $3^6$ is uniserial, with a trivial submodule and a natural
module as quotient.
Moreover, since
this group contains $9 \times 3\udot S_6$,
the extension is non-split, in the sense that $C(9)/9\cong 3^5\udot \Omega_5(3)$.
These facts can be checked computationally, using the construction of the subgroup
$3^{1+12}\udot2\udot\Suz{:}2$ described
in \cite{3loccon}. But in fact the proof below does not
depend on any of the subtleties, so the sceptical reader can ignore them.
\section{The centralizer of $9\times 3$}
Centralizing the additional element of order $3$ reduces the group from
$9\circ3^6\udot \Omega_5(3)$ to $9\circ3^6\udot A_6$. The structure of the latter group
is very subtle, and in particular it contains several conjugacy classes of $3\udot A_6$,
and it is not obvious which one centralizes $\PSL_2(8)$.
In any case, the group of elements which either centralize $9\times 3$ or extend it to
$D_{18}\times 3$ is of shape $(9\circ 3^6)\udot(A_6\times 2)=
(9\times 3).3^4.(2\times A_6)$. We must adjoin an
involution in the conjugacy class which maps to the central involution in the
quotient $A_6\times 2$. But this conjugacy class contains only $3^6=729$ elements,
while the group of symmetries is $9\times 3A_6$, of order $9720$. Hence every
group generated in the prescribed fashion has non-trivial centralizer in the Monster.
Indeed, this counting argument implies that such a centralizer has order at least
$9720/729=13\frac13$.
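The bound is a trivial computation, restated here for concreteness: $|3\udot A_6| = 3\cdot 360 = 1080$, so the symmetry group has order $9\times 1080 = 9720$, while the relevant involution class has $3^6 = 729$ elements.

```python
from fractions import Fraction

order_3A6 = 3 * 360              # |3.A6|, the triple cover of A6
sym_order = 9 * order_3A6        # |9 x 3A6|
n_invols = 3 ** 6                # size of the relevant involution class

bound = Fraction(sym_order, n_invols)  # lower bound on the centralizer order
```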
\section{Proof of the theorem}
The centralizer of an element of order $19$ is $19\times A_5$, containing elements of
classes $2A$, $3C$ and $5A$. The only subgroup of $A_5$ with order at least $14$
is $A_5$ itself. Hence every $\PSU_3(8)$ in the Monster has centralizer conjugate to this
$A_5$.
As a corollary we obtain new proofs of the uniqueness of $\PSU_3(8)$ as a subgroup of
the Baby Monster, the Thompson group, and the Harada--Norton group.
\section{#1}}
\textwidth 160mm \textheight 220mm
\newcommand{\vs}[1]{\vspace{#1 mm}}
\newcommand{\hs}[1]{\hspace{#1 mm}}
\newcommand{\bra}[1]{\langle{#1}|}
\newcommand{\ket}[1]{|{#1}\rangle}
\renewcommand{\a}{\alpha}
\renewcommand{\b}{\beta}
\renewcommand{\c}{\gamma}
\renewcommand{\d}{\delta}
\newcommand{\e}{\epsilon}
\newcommand{\dsl}{\pa \kern-0.5em /}
\newcommand{\half}{\frac{1}{2}}
\renewcommand{\t}{\theta}
\newcommand{\nn}{\nonumber\\}
\newcommand{\p}[1]{(\ref{#1})}
\newcommand{\lan}{\langle}
\newcommand{\ran}{\rangle}
\def\ii{{\rm i}}
\begin{document}
\topmargin 0pt \oddsidemargin 0mm
\vspace{2mm}
\begin{center}
{\Large \bf Generalized split monopole magnetospheres: the effects
of current sheets}
\vs{10}
{\large Huiquan Li \footnote{E-mail: [email protected]} and Jiancheng Wang}
\vspace{6mm}
{\em
Yunnan Observatories, Chinese Academy of Sciences, \\
650216 Kunming, China
Key Laboratory for the Structure and Evolution of Celestial Objects,
\\ Chinese Academy of Sciences, 650216 Kunming, China
Center for Astronomical Mega-Science, Chinese Academy of Sciences,
\\ 100012 Beijing, China}
\end{center}
\vs{9}
\begin{abstract}
We show that energy should be dissipated or extracted in the current
sheet (CS) of a split magnetosphere deviating from the Michel split
monopole, with the CS heating up or cooling down. But the
electromagnetic energy remains unchanged everywhere. Based on the
de-centered monopole solution generated by symmetry in flat
spacetime, we construct two generalized split monopole
configurations, in which the field lines intersect with the CS at
arbitrary angles. One configuration resembles the outer geometry of
the so-called ``new pulsar magnetosphere model", for which up to
$47\%$ of the spin down energy is transferred to the Joule heating
process in the CS. In the other configuration, we observe that
negative energy is dissipated in the CS, which is usually observed
in magnetospheres on rotating black holes. This means that energy is
extracted simultaneously from the central star and the CS to power
the output Poynting flux at infinity. We interpret the extraction of
energy from the CS as that thermal energy of charged particles in
the CS is transferred to the ordered kinetic energy of these
particles drifting in the force-free (FF) electromagnetic fields.
Hence, the CS is like an ``air conditioner" in the sky, which can
heat up or cool down, depending on the configurations.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
To avoid the appearance of magnetic monopole, the splitting
technique by introducing an equatorial current sheet (CS) is usually
adopted in constructing monopole magnetosphere on a compact object.
In the popular pulsar magnetosphere model at present, the near-star
region is of a dipole structure
\cite{1969ApJ...157..869G,1973ApJ...180..207M}. The magnetic field
lines around the poles can extend beyond the light cylinder (LC) to
infinity. They asymptotically approach a split monopole in the outer
region, whose analytical solution is the one found by Michel
\cite{1973ApJ...180L.133M}. This split monopole results from a
splitting and then gluing of two centered monopoles with opposite
magnetic charge. The discontinuity of the fields across the equator
gives rise to a singular CS. In the Michel split monopole, the
surface of the CS is parallel to the magnetic field lines
neighboring the equator. So the Lorentz force vanishes and no dissipation
occurs in the CS. This structure is part of the standard pulsar
magnetosphere model first realized numerically by
\cite{1999ApJ...511..351C}.
But this splitting method is not the only way to obtain a split monopole.
It is possible that the magnetic field lines are not necessarily
parallel to the infinitely thin CS
\cite{2008JCAP...11..002G,2014ApJ...781...46C,2014MNRAS.445.2500G}.
This modification may cause non-trivial effects. As shown in the
numerical solutions \cite{2008JCAP...11..002G,2014ApJ...781...46C},
the non-parallel splitting leads to a CS where the spin down energy
is dissipated. It is unclear wether this dissipation process
consumes the electromagnetic energy in the CS. If it does (like the
magnetic reconnection case), the magnetosphere should evolve in
time.
In some other numerical simulations on rotating black holes, an
alternative role of the CS is explored. In
\cite{2004MNRAS.350..427K,2005ApJ...620..889U,2017PhRvD..96f3006C,
2018PhRvD..98b3008E}, it is found that the energy dissipated in the
CS developed within the ergosphere is gained by the force-free (FF)
fields to power the jet formation. The origin of the negative
dissipation energy remains vague. It appears that this phenomenon
is specific to gravitational systems.
In this work, we construct two analytical split monopole models in
which the magnetic field lines intersect the infinitely thin CS at
arbitrary angles. Using these exact configurations, we can clarify
the current and energy flows in the systems in detail, and specify
the precise effects of the CS. The paper is organized as follows. In
terms of the translational freedom mentioned in Section.\
\ref{sec:trans}, we give the general monopole solution whose center
can be shifted along the spin axis in Section.\ \ref{sec:genmonsol}.
In Section.\ \ref{sec:splitmon}, we present the generalized split
monopole configurations based on the de-centered solution and
discuss the effects of the CS by calculating the exact amount of
flows in them. Finally, we summarize and discuss in the last
section.
\section{Translation of the magnetosphere}
\label{sec:trans}
We consider the force-free magnetosphere on an axisymmetric rotator.
The fields satisfy the following FF condition
\begin{equation}\label{e:ffcon}
\rho\vec{E}+\vec{j}\times\vec{B}=0.
\end{equation}
This condition implies $\vec{j}\cdot\vec{E}=0$ and
$\vec{E}\cdot\vec{B}=0$.
Under this condition, the Maxwell's equations can be reduced to a
simple system described by three correlated functions: the flux
$\psi$, the angular velocity $\Om(\psi)$ of field lines and the
poloidal electric current $I(\psi)$. In terms of them, the
electromagnetic fields in the unit basis of spherical coordinates
can be expressed as
\begin{equation}\label{e:E}
\vec{E}=-\vec{V}_\phi\times\vec{B}
=-\frac{\Om(\psi)}{r}\left(r\pa_r\psi, \pa_\th\psi, 0\right),
\end{equation}
\begin{equation}\label{e:B}
\vec{B}=\frac{1}{r^2\sin\th}\left(\pa_\th\psi, -r
\pa_r\psi, rI(\psi)\right).
\end{equation}
where $V_\phi=r\sin\th\Om$. The charge and current densities are
respectively
\begin{equation}\label{e:charge}
\rho=-\frac{1}{4\pi}\vec{\nabla}\cdot\left(\Om
\vec{\nabla}\psi\right),
\end{equation}
\begin{equation}\label{e:current}
\vec{j}=\rho r\sin\th\Om\vec{e}_\phi+\frac{1}{4\pi}I'\vec{B}.
\end{equation}
With the above relations and equations, we arrive at the so called
force-free pulsar magnetosphere equation. By redefining
$z=r\cos\th$ and $x=r\sin\th$, we express the equation in
cylindrical coordinates as
\begin{equation}\label{e:GSeqc}
(1-\Om^2x^2)\left(\pa_x^2\psi+\pa_z^2\psi\right)
-\frac{1}{x}(1+\Om^2x^2)\pa_x\psi-x^2\Om\Om'\left[(\pa_x\psi)^2+(\pa_z\psi)^2\right]=-I(\psi)I'(\psi),
\end{equation}
where the primes stand for derivatives with respect to $\psi$.
It is easy to see that the differential equation is invariant under
the shift along the symmetry axis:
\begin{equation}\label{e:trans}
z\rightarrow z'=z-\ep.
\end{equation}
So any solution $\psi$ shifted along the rotation axis is still a
solution to the pulsar equation (\ref{e:GSeqc}). Under this
translation, the functional relations $\Om(\psi)$, $I(\psi)$ and the
global features are kept the same. In what follows, we consider the
translated version of Michel's monopole solution.
\section{De-centered monopole solution}
\label{sec:genmonsol}
The exact monopole solution found by Michel
\cite{1973ApJ...180L.133M} is quite simple, only relying on the
angle $\th$ in spherical coordinates:
\begin{equation}\label{e:michelsol}
\psi(\th)=-q\cos\th=-q\frac{z}{\sqrt{x^2+z^2}}.
\end{equation}
For an arbitrary $\Om(\psi)$, the electric current is:
\begin{equation}\label{e:I}
I(\psi)=\frac{1}{q}\Om(\psi)(\psi^2-q^2),
\end{equation}
where $q$ is the charge of the monopole. The solution gives rise to
a magnetosphere with magnetic domination and null current.
The above solution (\ref{e:michelsol}) under the translation
(\ref{e:trans}) becomes
\begin{equation}\label{e:gensol}
\psi(r,\th)=-q\frac{z-\ep}{\sqrt{x^2+(z-\ep)^2}}
=-q\frac{r\cos\th-\ep}{\sqrt{r^2-2\ep r\cos\th+\ep^2}}.
\end{equation}
The solution is now dependent on both poloidal coordinates in the
spherical coordinates. The functional relations between $I(\psi)$,
$\Om(\psi)$ and $\psi$ remain invariant, the same as shown in Eq.\
(\ref{e:I}). The solution describes a monopole magnetosphere whose
center is shifted away from the origin (the center of the star)
along the rotation axis by a distance $\ep$, which generalizes
Michel's original solution.
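That the shifted flux function indeed solves the pulsar equation can be verified symbolically. The sympy sketch below is an illustrative check: it takes $\Om$ constant, so the $\Om'$ term drops out, and substitutes the de-centered solution together with the current relation (\ref{e:I}).

```python
import sympy as sp

x, z, q, Om, ep = sp.symbols('x z q Omega epsilon', real=True, positive=True)

# De-centered Michel flux: psi = -q (z - ep) / sqrt(x^2 + (z - ep)^2)
u = z - ep
psi = -q * u / sp.sqrt(x**2 + u**2)

# Current relation with constant Omega: I(psi) = Omega (psi^2 - q^2) / q
I = Om * (psi**2 - q**2) / q
Iprime = 2 * Om * psi / q  # dI/dpsi evaluated on the solution

# Pulsar equation with Omega' = 0
lhs = (1 - Om**2 * x**2) * (sp.diff(psi, x, 2) + sp.diff(psi, z, 2)) \
      - (1 + Om**2 * x**2) / x * sp.diff(psi, x)
residual = sp.simplify(lhs + I * Iprime)  # should vanish identically
```

The residual simplifies to zero, confirming the translation argument of the previous section.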
For the solution, the electromagnetic fields are
\begin{equation}\label{e:}
\vec{E}=\frac{qr\sin\th\Om}{D^3}
\left(\ep\sin\th, -(r-\ep\cos\th), 0\right),
\end{equation}
\begin{equation}\label{e:}
\vec{B}=\frac{q}{D^3}\left(r-\ep\cos\th,
\ep\sin\th, -r\sin\th\Om D\right).
\end{equation}
where $D=\sqrt{r^2-2\ep r\cos\th+\ep^2}$. Thus, the invariant is
\begin{equation}\label{e:}
\vec{B}^2-\vec{E}^2=\frac{q^2}{D^4}>0,
\end{equation}
so the field is also magnetically dominated.
The relations $B_\phi=E_\th=V_\phi B_r$ for the original Michel's
solution are replaced by
\begin{equation}\label{e:}
B_\phi=-\sqrt{E_r^2+E_\th^2}=V_\phi\sqrt{B_r^2+B_\th^2}.
\end{equation}
But, at large distances $r\gg|\ep|$, the former is still a good
approximation.
The Poynting flux is
\begin{equation}\label{e:poynting}
\vec{S}=\frac{1}{4\pi}\vec{E}\times\vec{B}
=\frac{(qr\sin\th\Om)^2}{4\pi D^5}\left(r-\ep\cos\th, \ep\sin\th,
\frac{D}{r\sin\th\Om}\right).
\end{equation}
From this, we find that the drift velocity
$\vec{v}_D=4\pi\vec{S}/B^2$ acquires a non-vanishing component in the
$\th$ direction. The four-current is
\begin{equation}\label{e:4current}
J^\mu\equiv(\rho,\vec{j})=-\frac{q\Om(r\cos\th-\ep)}{2\pi D^4}
\left(D, r-\ep\cos\th, \ep\sin\th, 0\right).
\end{equation}
So it is null with $J^2=0$, meaning the charge carriers travel at the
speed of light. This is the same as Michel's centered solution. But the null
surface where the charge density and current vanish is not located
on the equator any more, but on the plane shifted by a distance
$\ep$: $z=r\cos\th=\ep$. It is noticed that the poloidal components
of the magnetic field, Poynting flux, the drift velocity and the
current are all parallel to each other.
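The nullity $J^2=0$ reduces to the identity $D^2 = (r-\ep\cos\th)^2 + (\ep\sin\th)^2$, which a one-line symbolic check confirms:

```python
import sympy as sp

r, th, ep = sp.symbols('r theta epsilon', real=True, positive=True)

D2 = r**2 - 2 * ep * r * sp.cos(th) + ep**2           # D^2
spatial2 = (r - ep * sp.cos(th))**2 + (ep * sp.sin(th))**2
residual = sp.simplify(spatial2 - D2)  # J^2 is proportional to -D^2 + spatial2
```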
\section{Generalized split monopoles}
\label{sec:splitmon}
Since the magnetic monopole has not yet been confirmed, the
splitting technique is usually adopted in constructing a monopole
magnetosphere. For the Michel solution, the split monopole
configuration on the two half-planes is expressed as:
\begin{equation}\label{e:censplitmon}
\psi(\th)=
\left\{
\begin{array}{cl}
q(1-\cos\th),
\textrm{ }\textrm{ }\textrm{
} \th\in[0,\pi/2) \\
q(1+\cos\th).
\textrm{ }\textrm{ }\textrm{ } \th\in(\pi/2,\pi]
\end{array}
\right.
\end{equation}
The splitting results in discontinuity on the equatorial plane,
giving rise to an infinitely thin CS there. In this case, the
magnetic field lines are parallel to the surface of the CS.
With the de-centered solution (\ref{e:gensol}), we make the similar
splitting. We denote the solution on the upper hemisphere by
$\psi_\vee$ and the one on the lower hemisphere by $\psi_\wedge$. On
the upper hemisphere $\th\in[0,\pi/2)$, we choose $\ep=\pm d$
($d>0$) and express the solution as
\begin{equation}\label{e:}
\psi_\vee^{(\pm)}=q\left(1-\frac{r\cos\th\pm d}
{\sqrt{r^2\pm 2dr\cos\th+d^2}}\right).
\end{equation}
On the lower one $\th\in(\pi/2,\pi]$, the solution is
\begin{equation}\label{e:}
\psi_\wedge^{(\pm)}=q\left(1+\frac{r\cos\th\pm d}
{\sqrt{r^2\pm2dr\cos\th+d^2}}\right).
\end{equation}
There are two trivial cases:
$\psi=(\psi_\vee^{(+)},\psi_\wedge^{(+)})$ and
$\psi=(\psi_\vee^{(-)},\psi_\wedge^{(-)})$, which just describe
de-centered versions of the Michel split monopole magnetosphere,
shifted as a whole respectively downward and upward by a distance
$d$. We are more interested in the two non-trivial configurations
that will be discussed as follows.
\begin{figure}
\center
\includegraphics[width=0.49\columnwidth]{f1a}
\includegraphics[width=0.49\columnwidth]{f1b}
\caption{Magnetic field lines for the configurations defined in
Section.\ \ref{subsec:con_I} (left) and \ref{subsec:con_II} (right) with
$q=1$, $r_0=2$ and $d=1$. The bold lines on the equators
represent the infinitely thin CS. The dashed lines on the
left panel represent the null surfaces (with $\rho=0$), across which
the charge density $\rho$ changes sign. In a realistic pulsar
magnetosphere model, these split monopole configurations are relevant
only to the geometry outside the LC.}
\label{fig:f1}
\end{figure}
\subsection{$\psi=(\psi_\vee^{(-)},\psi_\wedge^{(+)})$}
\label{subsec:con_I}
This configuration is shown in the left panel of Fig.\ \ref{fig:f1}.
The profile of the configuration resembles the outer geometry in the
``new pulsar magnetosphere model" obtained numerically in
\cite{2014ApJ...781...46C}. The valid region of the configuration is
restricted to be $r>d$, where the integral of the magnetic field over
any closed surface leads to zero magnetic charge.
The expanded form of the split solution can be obtained in terms of
the general expanded form in the Appendix. It is clearly seen that
the full expanded solution of this spit monopole in the outer range
$d/r<1$ is a summation of a closed and an open field line component:
the odd order terms of $r$ are continuous across the equatorial
plane, contributing a closed field line component, while the even
order terms are discontinuous, contributing an open field line
component. The discontinuity in the latter gives rise to a current
sheet on the equatorial plane.
For $d/r<1$, the solution expanded to the order
$\mathcal{O}(d^2/r^2)$ is
\begin{equation}\label{e:expsol}
\psi\simeq
\left\{
\begin{array}{cl}
q[1-\cos\th+d\sin^2\th r^{-1}+(3d^2/2)\cos\th\sin^2\th r^{-2}],
\textrm{ }\textrm{ }\textrm{
} \th\in[0,\pi/2) \\
q[1+\cos\th+d\sin^2\th r^{-1}-(3d^2/2)\cos\th\sin^2\th r^{-2}].
\textrm{ }\textrm{ }\textrm{ } \th\in(\pi/2,\pi]
\end{array}
\right.
\end{equation}
At infinity $r\rightarrow\infty$, the solution is asymptotically the
Michel split monopole solution. In the near region with small $d/r$
(but not too small), the dipole part becomes more important.
Besides, there are two null surfaces in this split monopole
solution. This coincides with the case of the corotating dipole
inside the LC in the standard pulsar magnetosphere model, where two
straight null surfaces extend from the star surface to the LC.
Hence, the new split solution here can better describe a smooth
transition from a dipole to a monopole. By adjusting the parameter
$d$, the two null lines in corotating dipole and in new split
monopole can be matched. In particular, the expanded solution
(\ref{e:expsol}) is exactly the solution outside the light torus of
the exact dipole magnetosphere \cite{2016arXiv160807998P}.
Let us now examine the dynamical consequences of this configuration.
From the solution, the force-free fields approaching the equator
$\th\rightarrow\pi/2$ from either side are given by
\begin{equation}\label{e:}
\vec{E}_\vee\rightarrow\frac{qr\Om}{(r^2+d^2)^
{\frac{3}{2}}}\left(d,-r,0\right),
\end{equation}
\begin{equation}\label{e:}
\vec{E}_\wedge\rightarrow\frac{qr\Om}{(r^2+d^2)^
{\frac{3}{2}}}\left(d,r,0\right).
\end{equation}
\begin{equation}\label{e:}
\vec{B}_\vee\rightarrow\frac{q}{(r^2+d^2)^
{\frac{3}{2}}}\left(r,d,-r\Om\sqrt{r^2+d^2}\right),
\end{equation}
\begin{equation}\label{e:}
\vec{B}_\wedge\rightarrow\frac{q}{(r^2+d^2)^
{\frac{3}{2}}}\left(-r,d,r\Om\sqrt{r^2+d^2}\right).
\end{equation}
We denote the continuous fields at the equator as:
\begin{equation}\label{e:CSfield}
E_c^r\equiv E^r_\vee(\th\rightarrow\frac{\pi}{2})
=E^r_\wedge(\th\rightarrow\frac{\pi}{2}), \textrm{ }\textrm{
}\textrm{ }
B_c^\th\equiv B^\th_\vee(\th\rightarrow\frac{\pi}{2})
=B^\th_\wedge(\th\rightarrow\frac{\pi}{2}).
\end{equation}
They are the field components that are non-vanishing within the CS.
The other field components are discontinuous.
The discontinuity of the perpendicular electric field $E^\th$ leads
to the surface charge density in the CS:
\begin{equation}\label{e:}
\si_c=\frac{qr^2\Om}{2\pi(r^2+d^2)^{\frac{3}{2}}}.
\end{equation}
The discontinuities of the parallel magnetic fields $B^r$ and
$B^\phi$ give rise to the surface current densities flowing in the
CS respectively along the $r$ and $\phi$ directions:
\begin{equation}\label{e:CScurrent}
i_c^\phi=\frac{qr}{2\pi(r^2+d^2)^{\frac{3}{2}}},
\textrm{ }\textrm{ }\textrm{ } i_c^r=\frac{qr\Om}{2\pi(r^2+d^2)}.
\end{equation}
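The surface densities above are easy to tabulate numerically. The sketch below (in units with $c=1$, so that $r_{LC}=1/\Om$; all parameter values illustrative) also checks the Michel limits at $d=0$, e.g.\ $\si_c\rightarrow q\Om/(2\pi r)$, and the large-$r$ limit of the charge flow $2\pi r\, i_c^r\rightarrow q\Om$:

```python
import math

def sigma_c(r, q, d, Om):
    # surface charge density in the CS
    return q * r**2 * Om / (2 * math.pi * (r**2 + d**2)**1.5)

def i_phi(r, q, d):
    # toroidal surface current density in the CS
    return q * r / (2 * math.pi * (r**2 + d**2)**1.5)

def i_r(r, q, d, Om):
    # radial surface current density in the CS
    return q * r * Om / (2 * math.pi * (r**2 + d**2))

def Qdot_CS(r, q, d, Om):
    # charge change rate through the CS section at radius r
    return 2 * math.pi * r * i_r(r, q, d, Om)
```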
In the FF regions, the total change rate of the charges through the
sphere (excluding the equator) at radius $r$ is
\begin{equation}\label{e:}
\dot{Q}^{FF}(r)=\int_\vee j^r_\vee ds^r+\int_\wedge j^r_\wedge ds^r
=\left[I_\vee(\psi)\right]_{\th=0}^{\th=\pi/2}
+\left[I_\wedge(\psi)\right]^{\th=\pi}_{\th=\pi/2},
\end{equation}
where $ds^r=2\pi r^2\sin\th d\th$ and the dot denotes the derivative
with respect to time. The change rate through the section of the CS
at $r$ is
\begin{equation}\label{e:}
\dot{Q}^{CS}(r)=2\pi ri_c^r.
\end{equation}
Thus, the total electric current flowing
through a sphere at any radius $r$ vanishes:
\begin{equation}\label{e:neutralcon}
\dot{Q}^{FF}(r)+\dot{Q}^{CS}(r)=0.
\end{equation}
This implies that the central star always remains neutral.
We take the value $\dot{Q}^{CS}(r=r_0)$ at some initial radius $r_0$
$(>d)$ as the current directly from the central star and the one
$\dot{Q}^{CS}(r\rightarrow\infty)$ at infinity as the output
current. From the second equation of Eq.\ (\ref{e:CScurrent}), the
latter is given by
\begin{equation}\label{e:}
\dot{Q}^{CS}(r\rightarrow\infty)=q\Om.
\end{equation}
It is the same as the Michel split monopole.
Towards the equator, the perpendicular electric currents along the
FF magnetic fields are
\begin{equation}\label{e:}
j^\th_\vee (\th\rightarrow\frac{\pi}{2})=-j^\th_\wedge
(\th\rightarrow\frac{\pi}{2})=\frac{q\Om d^2}{2\pi(r^2+d^2)^2}.
\end{equation}
This means that there is a net electric current flowing into the CS
from both the upper and the lower sides. Including the injected
current at $r_0$ and the output current at $r\rightarrow\infty$, we
find
\begin{equation}\label{e:currentcon}
\dot{Q}^{CS}(r=r_0)
+\int_{r_0}^\infty 2\pi r\left[j^\th_\vee(\th\rightarrow
\frac{\pi}{2})-j^\th_\wedge(\th\rightarrow\frac{\pi}{2})\right]dr
=\dot{Q}^{CS}(r\rightarrow\infty).
\end{equation}
This equation says that the output CS current comes directly from
the central star and from the FF fields.
With the continuous fields, we obtain the non-vanishing Lorentz
force densities in the CS:
\begin{equation}\label{e:}
f_c^r=\si_cE_c^r-i_c^\phi B_c^\th
=\frac{q^2rd(r^2\Om^2-1)}{2\pi(r^2+d^2)^3},
\end{equation}
\begin{equation}\label{e:}
f_c^\phi=i_c^r B_c^\th
=\frac{q^2rd\Om}{2\pi(r^2+d^2)^{\frac{5}{2}}}.
\end{equation}
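The sign structure of the radial force can be verified directly: with $q,d>0$, $f_c^r$ changes sign exactly at the LC $r=1/\Om$ and vanishes in the Michel limit $d=0$. A minimal sketch (illustrative parameter values, units with $c=1$):

```python
import math

def f_r(r, q, d, Om):
    # radial Lorentz force density in the CS
    return q**2 * r * d * (r**2 * Om**2 - 1) / (2 * math.pi * (r**2 + d**2)**3)

def f_phi(r, q, d, Om):
    # toroidal Lorentz force density in the CS
    return q**2 * r * d * Om / (2 * math.pi * (r**2 + d**2)**2.5)

q, d, Om = 1.0, 0.3, 0.5
r_LC = 1.0 / Om  # light cylinder radius
```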
Both tend to zero in the Michel split monopole solution with $d=0$.
The directions of the radial component are opposite on the two sides
of the LC located at $r=r_{LC}=1/\Om$: the magnetic force dominates
inside the LC, while the electric force dominates outside the LC.
Accordingly, the fields become electrically dominated outside the LC
as well. It is usually assumed that the monopole solution only
exists outside the LC.
\begin{figure}[htbp]
\center
\includegraphics[width=0.95\columnwidth]{f2}
\caption{The field structure and the energy flows near the CS
outside the LC of the configuration in Section~\ref{subsec:con_I}.
The electric field lines (not displayed) are perpendicular to the
poloidal magnetic field lines $B^P$. A net Poynting flux flows
into the CS from the upper and lower sides to provide the energy
dissipated in the CS.}
\label{fig:f2}
\end{figure}
The Poynting fluxes for the FF fields in the limit
$\th\rightarrow\pi/2$ are:
\begin{equation}\label{e:}
\vec{S}_\vee\rightarrow\frac{(qr\Om)^2}{4\pi(r^2+d^2)^
{\frac{5}{2}}}\left(r,d,\frac{1}{r\Om}\sqrt{r^2+d^2}\right),
\end{equation}
\begin{equation}\label{e:}
\vec{S}_\wedge\rightarrow\frac{(qr\Om)^2}{4\pi(r^2+d^2)^
{\frac{5}{2}}}\left(r,-d,\frac{1}{r\Om}\sqrt{r^2+d^2}\right).
\end{equation}
The discontinuous perpendicular component indicates that there is a
net Poynting flux flowing into the CS from both sides of the FF
regions. Thus, the CS gains energy.
The FF regions should be dissipation free and the electromagnetic
energy should always be conserved. This can be expressed as:
\begin{equation}\label{e:encon}
\dot{\mathcal{E}}^{FF}_\vee=\dot{\mathcal{E}}^{FF}_\wedge=0,
\end{equation}
where the change rates of the electromagnetic energy are given by
\begin{equation}\label{e:Poyntingint}
\dot{\mathcal{E}}^{FF}_\vee=\oint_\vee\vec{S}_\vee\cdot d\vec{s},
\textrm{ }\textrm{ }\textrm{ }
\dot{\mathcal{E}}^{FF}_\wedge=\oint_\wedge\vec{S}_\wedge \cdot
d\vec{s}.
\end{equation}
The integrals are taken over all the boundaries of the FF regions.
We first consider the upper hemisphere. The change rate of the FF
electromagnetic energy due to the Poynting influx crossing the
hemisphere at the initial radius $r_0$ is
\begin{equation}\label{e:}
\dot{\mathcal{E}}^{FF}_\vee(r=r_0)=2\pi r_0^2\int_{0}^{\frac{\pi}{2}}
\sin\th S^r_\vee d\th=\frac{q^2\Om^2}{6}\left[
2+\frac{d(3r_0^2+2d^2)}{(r_0^2+d^2)^{\frac{3}{2}}}\right].
\end{equation}
This influx can be viewed as the energy directly extracted from the
central star (via the inner magnetosphere). The change rate measured
at $r\rightarrow\infty$ is
\begin{equation}\label{e:}
\dot{\mathcal{E}}^{FF}_\vee(r\rightarrow\infty)=-\frac{q^2\Om^2}{3}.
\end{equation}
The negative sign means that the energy flows out of the FF region to
infinity. This result is also the same as in the Michel solution. On
the boundary along the equator, the energy also flows out of the FF
region:
\begin{equation}\label{e:}
\dot{\mathcal{E}}^{FF}_\vee(\th\rightarrow\frac{\pi}{2})
=-\int_{r_0}^{\infty} 2\pi r S_\vee^\th
(\th\rightarrow\frac{\pi}{2})dr=
-\frac{dq^2\Om^2(3r_0^2+2d^2)}{6(r_0^2+d^2)^{\frac{3}{2}}}.
\end{equation}
The calculations on the lower hemisphere lead to identical results:
$\dot{\mathcal{E}}^{FF}_\wedge=\dot{\mathcal{E}}^{FF}_\vee$ for each
of the components. We denote the sum as
$\dot{\mathcal{E}}^{FF}=\dot{\mathcal{E}}^{FF}_\vee+
\dot{\mathcal{E}}^{FF}_\wedge=2\dot{\mathcal{E}}^{FF}_\vee$. Then,
in total, we have
\begin{equation}\label{e:}
\dot{\mathcal{E}}^{FF}(r=r_0)+\dot{\mathcal{E}}^{FF}
(\th\rightarrow\frac{\pi}{2})
+\dot{\mathcal{E}}^{FF}(r\rightarrow\infty)=0.
\end{equation}
Thus, the conservation law (\ref{e:encon}) is verified. This
indicates that the energy extracted from the star sources the
Poynting fluxes flowing into the CS and to infinity. For the Michel
split monopole, the second term vanishes and so the Poynting flux is
constant through any sphere.
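The closure of this energy budget follows analytically from the three expressions above; the short numerical check below (illustrative parameter values, units with $c=1$) confirms that the three total rates sum to zero:

```python
q, Om, d, r0 = 1.0, 0.5, 0.3, 0.8  # illustrative values with r0 > d

# recurring geometric factor d(3 r0^2 + 2 d^2) / (r0^2 + d^2)^(3/2)
D = d * (3 * r0**2 + 2 * d**2) / (r0**2 + d**2)**1.5

# total rates (upper + lower hemisphere = 2x each)
E_r0  = 2 * (q**2 * Om**2 / 6) * (2 + D)  # influx from the star at r0
E_eq  = -2 * (q**2 * Om**2 / 6) * D       # outflux into the CS along the equator
E_inf = -2 * (q**2 * Om**2 / 3)           # outflux to infinity
budget = E_r0 + E_eq + E_inf              # should vanish
```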
We now turn to the energy conservation law in the CS. As given in
Eq.\ (\ref{e:CSfield}), the fields that exist inside the CS are the
continuous ones, $E_c^r$ and $B_c^\th$. They give a toroidal
Poynting flux inside the CS, which is itself conserved. The electric
current flowing into and out of the CS is also conserved in terms of
Eq.\ (\ref{e:currentcon}). So the only change of the energy in the CS
comes from the Poynting influx $-\dot{\mathcal{E}}^{FF}
(\th\rightarrow\pi/2)$ and the dissipated energy. The latter arises
from the Joule heating process due to the non-vanishing
$\vec{i}_c\cdot\vec{E}_c$. It leads to an increase of the CS energy
at a total rate:
\begin{equation}\label{e:work}
\dot{\mathcal{E}}^{CS}=\int_{r_0}^{\infty}2\pi r i_c^rE_c^rdr.
\end{equation}
It is clear that this dissipated energy is completely compensated by
the Poynting influx from both sides of the FF fields:
$i_c^rE_c^r=2S_\vee^\th (\th\rightarrow\pi/2)$ or
\begin{equation}\label{e:CSeqFF}
-\dot{\mathcal{E}}^{FF}(\th\rightarrow\frac{\pi}{2})
=\dot{\mathcal{E}}^{CS}.
\end{equation}
Hence, the energy is conserved and there is also no electromagnetic
energy lost in the CS.
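The pointwise balance $i_c^rE_c^r=2S_\vee^\th(\th\rightarrow\pi/2)$ can be checked directly from the field expressions given earlier; a minimal sketch (illustrative values, units with $c=1$):

```python
import math

q, Om, d = 1.0, 0.5, 0.3

def E_c_r(r):
    # continuous radial electric field at the equator
    return q * r * Om * d / (r**2 + d**2)**1.5

def i_c_r(r):
    # radial surface current density in the CS
    return q * r * Om / (2 * math.pi * (r**2 + d**2))

def S_theta(r):
    # perpendicular Poynting flux approaching the CS from one side
    return d * (q * r * Om)**2 / (4 * math.pi * (r**2 + d**2)**2.5)

# Joule heating rate per unit area equals the influx from both sides
mismatch = max(abs(i_c_r(r) * E_c_r(r) - 2 * S_theta(r))
               for r in (0.5, 1.0, 3.0, 10.0))
```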
Equation (\ref{e:CSeqFF}) should be a consequence of the
following process: as the charged particles flow into the CS along
the magnetic field lines, the perpendicular component of the drift
velocity is eventually damped to zero. The kinetic energy is
transferred to the thermal internal energy in the CS (as shown in
Figure~\ref{fig:f2}).
Compared with the Michel split monopole, the spin down power is
enhanced in this configuration. The ratio of dissipated energy to
the total extracted energy is
\begin{equation}\label{e:}
\frac{\dot{\mathcal{E}}^{CS}}{\dot{\mathcal{E}}^{FF}(r=r_0)}=
\left[1+\frac{2(r_0^2+d^2)^\frac{3}{2}}{d(3r_0^2+2d^2)}\right]^{-1}.
\end{equation}
Since $r_0>d$, the maximum energy that can be dissipated in the CS
is $47\%$ of the total spin down energy, which is close to the
numerical result of \cite{2014ApJ...781...46C}. For larger
$r_0/d$, a smaller portion of the energy is dissipated.
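The ratio above can be evaluated as a function of $r_0$ and $d$; the $47\%$ bound corresponds to the smallest allowed separation $r_0=d$. A minimal check (sample values are illustrative):

```python
def dissipated_fraction(r0, d):
    # fraction of the extracted spin-down energy dissipated in the CS
    return 1.0 / (1.0 + 2.0 * (r0**2 + d**2)**1.5
                  / (d * (3 * r0**2 + 2 * d**2)))

f_max = dissipated_fraction(1.0, 1.0)  # maximal at r0 = d, about 47%
```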
\subsection{$\psi=(\psi_\vee^{(+)},\psi_\wedge^{(-)})$}
\label{subsec:con_II}
The magnetic field distribution of this configuration is shown in
the right panel of Fig.\ \ref{fig:f1}. It looks similar to the split
magnetosphere in the presence of a thin accretion disk that contains
magnetic fields itself (e.g.,
\cite{1981MNRAS.196.1021B,2004MNRAS.350..427K,2005ApJ...620..889U,
2017PhRvD..96f3006C,2020arXiv201014470C}). But here the CS is not an
accretion disk since no gravity is involved.
The quantities for this configuration are given by those of the
previous case with the replacement $d\rightarrow-d$. In the CS, only
the continuous fields $E_c^r$ and $B_c^\th$ exist. The discontinuous
fields lead to the surface charge density $\si_c$ and current
densities $i_c^\phi$, $i_c^r$, which are the same as in the previous
case. The currents also close with the same forms as given in Eqs.\
(\ref{e:neutralcon}) and (\ref{e:currentcon}). But the Lorentz forces
take the opposite directions:
\begin{equation}\label{e:}
f_c^r=-\frac{q^2rd(r^2\Om^2-1)}{2\pi(r^2+d^2)^3},
\textrm{ }\textrm{ }\textrm{ }
f_c^\phi=-\frac{q^2rd\Om}{2\pi(r^2+d^2)^{\frac{5}{2}}}.
\end{equation}
The Poynting fluxes perpendicular to the CS are:
\begin{equation}\label{e:}
S_\vee^\th(\th\rightarrow\frac{\pi}{2})=-S_\wedge^\th
(\th\rightarrow\frac{\pi}{2})\rightarrow-
\frac{d(qr\Om)^2}{4\pi(r^2+d^2)^{\frac{5}{2}}}.
\end{equation}
It indicates that net Poynting fluxes flow out of the CS into the FF
magnetosphere on both sides, so the FF magnetosphere gains energy
from the CS. Integrating the Poynting flux along the equator, we
find that the energy gained by the FF fields is exactly that lost in
the CS:
\begin{equation}\label{e:CSconII}
\dot{\mathcal{E}}^{FF}(\th\rightarrow\frac{\pi}{2})
=-\dot{\mathcal{E}}^{CS}
=\frac{dq^2\Om^2(3r_0^2+2d^2)}{3(r_0^2+d^2)^{\frac{3}{2}}}.
\end{equation}
So the energy is conserved in the CS and the electromagnetic energy
density remains unchanged.
\begin{figure}[htbp]
\center
\includegraphics[width=0.95\columnwidth]{f3}
\caption{The field structure and the energy flows near the CS
outside the LC of the configuration in
Section~\ref{subsec:con_II}. A net Poynting flux flows out of the CS
to the FF regions on both sides as the CS loses energy.}
\label{fig:f3}
\end{figure}
Similarly, we can show that the electromagnetic energy is conserved
in the FF regions. With the above equation (\ref{e:CSconII}), the
conservation law can be expressed as:
\begin{equation}\label{e:}
\dot{\mathcal{E}}^{FF}(r=r_0)-\dot{\mathcal{E}}^{CS}=
\dot{\mathcal{E}}^{FF}(r\rightarrow\infty),
\end{equation}
where $\dot{\mathcal{E}}^{FF}(r\rightarrow\infty)$ is the same as
the previous case, also equal to the one in the Michel split
monopole. This equation means that the output energy flux at
infinity is simultaneously extracted from the central star and the
CS. For a given output power, the spin down energy extracted from
the star can be as low as $11.6\%$ of that in the Michel split
monopole, since $r_0/d>1$.
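The quoted $11.6\%$ can be reproduced from the expressions above, assuming the star-side influx follows from $\dot{\mathcal{E}}^{FF}(r=r_0)$ of the previous configuration with the replacement $d\rightarrow-d$, while the Michel output is $2q^2\Om^2/3$. A minimal sketch:

```python
def star_fraction(r0, d):
    # spin-down power extracted from the star, relative to the Michel
    # value, for a fixed output power at infinity (configuration II)
    D = d * (3 * r0**2 + 2 * d**2) / (r0**2 + d**2)**1.5
    return 1.0 - D / 2.0

f_min = star_fraction(1.0, 1.0)  # smallest allowed separation r0 = d
```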
Notice that here $\dot{\mathcal{E}}^{CS}$ is negative since
$\vec{i}_c\cdot\vec{E}_c=i^r_c E^r_c<0$. This mysterious negative
energy has also been encountered in numerical simulations of rotating
black holes
\cite{2004MNRAS.350..427K,2017PhRvD..96f3006C,2018PhRvD..98b3008E},
where it may be attributed to an observational effect in the
gravitational system. But no gravity is involved in our system, which
may bring new understanding of it. We think that the negative energy
here arises from an observational effect in the co-rotating frame,
which is an accelerated frame and bears similarity to a
gravitational system in terms of the Einstein equivalence principle
between gravity and acceleration.
Following the above analysis, we can interpret this negative energy
process as the inverse of the one discussed in the previous
configuration (see Figure~\ref{fig:f3}): the charges flow away
from the CS to the FF regions along the magnetic field lines on both
sides to form the electric currents that constitute the split
monopole configuration. The internal energy of thermal motion
of the particles in the CS is then transferred into the ordered drift
motion when the particles enter the FF regions. So the CS
should cool down, with the negative energy dissipated to provide the
extracted energy.
\section{Conclusions and discussions}
\label{sec:con}
The Michel split monopole model is not unique, and deviations from
it lead to non-trivial consequences. By varying the centered model
in different ways, we illustrate how the CS plays different roles.
Based on the de-centered monopole solution generated by the
translational symmetry in the axisymmetric case, we construct two
generalized split monopole configurations. One configuration
resembles the outer geometry of a new pulsar magnetosphere model,
while the other may be useful in describing the physical process in
a split magnetosphere with an accretion disk. These generalized
configurations can also be constructed in the oblique rotation case,
since the translational symmetry still exists in the magnetosphere
on an oblique rotator \cite{1998MNRAS.297..315U}.
It is shown that the CS is a site where energy is dissipated or
extracted. This increases or decreases the spin down energy
extracted from the central star, for a given output Poynting flux. We
interpret this process as the internal energy of thermal motion and
the kinetic energy of drift motion being converted into each other.
No electromagnetic energy is lost anywhere, i.e., neither in the CS
nor in the FF regions. When the Poynting flux flows in, the CS is
heated up, possibly leading to synchrotron and inverse Compton
radiation, which is observable \cite{2016JPlPh..82e6302P}.
Conversely, energy is extracted as the CS cools down. The CS can
thus also cause temperature discontinuities in the system. The
effects of this thermal non-equilibrium on the magnetohydrodynamics
need further investigation.
Our results will also apply to any variation of the split monopole
in the standard pulsar magnetosphere model. In a realistic
situation, the split monopole should not be exactly like the Michel
model. The CS may have finite size, different geometries or even be
dynamical with wavy structures. So all these variations will cause
extra energy dissipation or extraction in the CS according to our
results above.
\section*{Acknowledgements\markboth{Acknowledgements}{Acknowledgements}}
This work is supported by the Yunnan Natural Science Foundation
2017FB005.
\section{Introduction}
As more autonomous vehicles are being deployed on the road, safety is the most essential aspect to consider, especially when interacting with other dynamic road entities. Being able to make high-fidelity future motion predictions for surrounding entities can greatly improve the safety level of autonomous vehicles, since actions can be generated accordingly to avoid potentially dangerous situations.
The challenges for building accurate prediction models lie in the following aspects. Firstly, the prediction models are usually trained with a limited pool of real human motion data. Though able to learn the average behavior pattern in the data, the model is incapable of capturing the nuances among heterogeneous individuals. Secondly, due to the stochastic and time-varying nature of human behaviors, the same individual may also exhibit different behaviors in different circumstances, which requires the prediction models to comprise dynamic parameters rather than fixed ones. Thirdly, although a trained model typically performs well on the training set, its performance can drop significantly in a slightly different test scenario or under a slightly different data distribution.
To address these problems, this paper extends an existing hierarchical prediction model~\cite{wang2021hierarchical} for human driving behavior to include adaptability in real time. The pipeline of the framework is as follows: 1) learning the hierarchical prediction model via real human driving data, which consists of a semantic graph network for intention prediction and an encoder decoder network for trajectory prediction. 2) executing real-time $\tau$-step online adaptation for the trajectory prediction by the modified Extended Kalman Filter algorithm (MEKF$_\lambda$).
The contribution of the paper is as follows: 1) extending a hierarchical prediction model to be adaptable, to account for heterogeneous and stochastic behaviors among different agents and scenarios; 2) applying the MEKF$_\lambda$ online adaptation algorithm to a more complex problem and demonstrating its usefulness; 3) proposing a new set of metrics for systematic evaluation of online adaptation performance. Such metrics can be widely used to evaluate the performance of online adaptation regardless of models and problems. Via these metrics, we also provide empirical studies on the best layer and the best number of observation steps to adapt.
\section{Related Works}
Various methods have been proposed to make driving behavior predictions, including traditional methods such as hidden Markov models~\cite{deo2018would} and Gaussian processes~\cite{9357407}, and learning-based methods such as recurrent neural networks~\cite{park2018sequence} and graph neural networks~\cite{hu2020scenario}. In terms of prediction objective, some works only focus on predicting intentions~\cite{hu2018framework} or trajectories~\cite{ma2019trafficpredict}, while other works make both intention and trajectory predictions in a hierarchical manner~\cite{wang2021hierarchical}. While learning-based methods have shown greater expressiveness with little human effort, taking both intention and trajectory predictions into account could simplify learning and make the model more interpretable. A detailed survey can be found in~\cite{rudenko2020human}.
To address individual differences, many works have been utilizing feedback from historic observations to make adaptable predictions: adapting reward function parameters to make driving-style-aware prediction via bayesian inference algorithm~\cite{wang2021socially}, adapting the last layer of a feedforward neural network using the recursive least square parameter adaptation algorithm (RLS-PAA)~\cite{cheng2019human}, adapting the last layer of a recurrent neural network using RLS-PAA~\cite{8814238}, and adapting any layer of recurrent neural network using modified Extended Kalman Filter algorithm (MEKF$_\lambda$)~\cite{abuduweili2021robust}.
To account for heterogeneous behaviors among agents and scenarios, this paper extends the hierarchical prediction model~\cite{wang2021hierarchical} to include adaptability. The model conducts both intention and trajectory prediction on real human driving data, which is a more complicated model addressing a more complex problem compared to previous works on online adaptation. Also, a new set of metrics is proposed for the systematic evaluation of online adaptation.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{Architecture.pdf}
\caption{Network architecture in our framework.}
\label{fig:architecture}
\end{figure}
\section{Problem Formulation}
Our goal is to generate more accurate driving behavior predictions in multi-agent traffic-dense scenarios. Formally, we focus on making predictions for any selected car and its interacting cars in the next $T_f$ seconds, $\hat{\textbf{Y}}_{t+1,t+T_f}$, based on observations of the last $T_h$ seconds, $\textbf{O}_{t-T_h,t}$:
\begin{equation}
\hat{\textbf{Y}}_{t+1,t+T_f} = f(\textbf{O}_{t-T_h,t}).
\end{equation}
Specifically, we first make the intention prediction with a high-level semantic graph network (SGN), which takes in the semantic graph (SG) $\mathcal{G}_{t - T_h, t}$ extracted from the raw observation $\textbf{O}_{t-T_h,t}$ and predicts the goal state $g_t$:
\begin{equation}
\mathcal{G}_{t - T_h, t} = f_{SG}(\textbf{O}_{t - T_h, t}),
\end{equation}
\begin{equation}
g_t = f_{SGN}(\mathcal{G}_{t - T_h, t}).
\end{equation}
The goal state $g_t$ is then delivered as the intention signal along with historic dynamics $\textbf{S}_{t - T_h, t}$ to the low-level encoder-decoder network (EDN) for trajectory prediction:
\begin{equation}
\hat{\textbf{Y}}_{t + 1, t + T_f} = f_{EDN}(\textbf{S}_{t - T_h, t}, g_t, \theta).
\end{equation}
To endow the trajectory prediction model with the capability to capture individual and scenario differences, we set up an online adaptation module, where a modified Extended Kalman Filter (MEKF$_\lambda$) algorithm is utilized to adapt the parameters of the model in real time. Specifically, we regard the EDN as a dynamic system and estimate its parameter $\theta$ by minimizing the error between the ground-truth trajectory in the past $\tau$ steps $\textbf{Y}_{t-\tau,t}$ and the trajectory predicted $\tau$ steps earlier $\hat{\textbf{Y}}_{t-\tau,t}$:
\begin{equation}
\theta_{t} = f_{MEKF_{\lambda}}(\theta_{t-1}, \textbf{Y}_{t-\tau, t}, \hat{\textbf{Y}}_{t-\tau,t}).
\end{equation}
\section{Methodology}
Our model consists of a high-level intention prediction model, a low-level trajectory prediction model, and an online adaptation module.
\subsection{Intention Prediction}
When driving in an intense-traffic environment, humans intentionally identify which slot is spatially and temporally suitable to insert into. A determined slot and an associated goal position are then generated, which are used to guide low-level actions such as acceleration or steering. Based on this insight, we first adopt dynamic insertion areas (DIA) to describe the slots on the road, which are regarded as nodes to construct a semantic graph (SG) $\mathcal{G}_{t - T_h, t}$. With the SG, a semantic graph network (SGN) is then adopted to conduct relationship reasoning among the nodes in the graph and generate the future goal state $g_t$. A detailed introduction to the semantic graph and the semantic graph network can be found in \cite{hu2020scenario, wang2021hierarchical}.
\begin{algorithm}[t]
\caption{$\tau$-step online adaptation with MEKF$_{\lambda}$}
\label{alg:adaptation}
\textbf{Input:}
Offline trained EDN network with parameter $\theta$, initial variance $\textbf{Q}_t$ and $\textbf{R}_t$ for measurement noise and process noise respectively, forgetting factor $\lambda$\newline
\textbf{Output:}
A sequence of generated future behavior $\{\hat{\textbf{Y}}_{t}\}_{t=1}^T$
\begin{algorithmic}[1]
\For{$t=1, 2, ..., T$}
\If {$t\geq\tau$}
\State stack recent $\tau$-step observations:
\State\hspace{2em}$\textbf{Y}_{t-\tau, t} = [y_{t-\tau+1}, ..., y_{t}]$
\State stack recent $\tau$-step generated trajectory:
\State\hspace{2em}$\hat{\textbf{Y}}_{t-\tau, t} = [\hat{y}_{t-\tau+1}, ..., \hat{y}_{t}]$
\State adapt model parameter via MEKF$_{\lambda}$:
\State\hspace{2em}$\textbf{H}_t = \frac{\partial\hat{\textbf{Y}}_{t-\tau,t}}{\partial \hat{\theta}_{t-1}}$
\State\hspace{2em}$\textbf{K}_t = \textbf{P}_{t-1}\cdot\textbf{H}_t^T\cdot(\textbf{H}_t\cdot\textbf{P}_{t-1}\cdot\textbf{H}_t^T+\textbf{R}_t)^{-1}$
\State\hspace{2em}$\textbf{P}_t=\lambda^{-1}(\textbf{P}_{t-1}-\textbf{K}_t\cdot\textbf{H}_t\cdot\textbf{P}_{t-1}+\textbf{Q}_t)$
\State\hspace{2em}$\hat{\theta}_{t} = \hat{\theta}_{t-1} + \textbf{K}_t \cdot (\textbf{Y}_{t-\tau,t} - \hat{\textbf{Y}}_{t-\tau,t})$
\Else
\State initialization: $\hat{\theta}_t=\theta$
\EndIf
\State collect goal state $g_t$ from SGN and historic states $\textbf{S}_t$.
\State generate future behavior:
\State\hspace{2em}$\hat{\textbf{Y}}_{t}=[\hat{y}_{t+1}, ...\hat{y}_{t+T_f}]=f_{EDN}(\textbf{S}_t, g_t, \hat{\theta}_t)$.
\EndFor
\State\Return sequence of predicted behaviors $\{\hat{\textbf{Y}}_{t}\}_{t=1}^T$
\end{algorithmic}
\end{algorithm}
\subsection{Trajectory Prediction}
To generate future behaviors of arbitrary length with sufficient expressiveness, we use the encoder-decoder network (EDN) \cite{cho2014learning,neubig2017neural} as the low-level trajectory prediction policy. The EDN consists of two GRU networks, namely the encoder and the decoder. At any time step t, the encoder takes in the sequence of historic states $\textbf{S}_t=[s_{t-T_h},..., s_t]$ and compresses all these information into a context vector $c_t$:
\begin{equation}
\label{eq:encoder}
c_t = f_{enc}(\textbf{S}_t; \theta^{E}),
\end{equation}
where $\theta^{E}$ denotes the parameter of the encoder. The context vector $c_t$, the current state $s_t$, and the goal state $g_t$ generated by the high-level policy are then fed into the decoder to recursively generate future behavior predictions:
\begin{equation}
\hat{\textbf{Y}}_t =[\hat{\textbf{y}}_{t+1}, ..., \hat{\textbf{y}}_{t+T_f}] = f_{dec}(\textbf{c}_t, s_t, g_t; \theta^{D}),
\end{equation}
where $\theta^{D}$ denotes the parameter of the decoder. Specifically, the decoder takes the vehicle's current state $s_t$ as the initial input to generate the first-step trajectory. In every following step, the decoder takes the output value of the last step as the input to generate a new output. In this paper, we choose the gated recurrent unit (GRU) as the basic RNN cell and stack three dense layers on the decoder for stronger decoding capability. According to the empirical results in \cite{wang2021hierarchical}, we append the goal state feature $g_t$ to the end of the original input feature vector. The loss function is simply designed to minimize the error between the ground-truth trajectory and the generated trajectory:
\begin{equation}
\mathcal{L} = \sum_{i=0}^N||\hat{\textbf{Y}}_i - \textbf{Y}_i||_2,
\end{equation}
where $N$ denotes the number of training trajectory samples.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{Adaptation_Trajectory.pdf}
\caption{An illustration of online adaptation and the proposed 4 metrics for performance evaluation. At time step $t$, the model parameters can be adapted by minimizing the prediction error on the trajectory in the past $\tau$ steps $\textbf{Y}_{t-\tau, t}$. The adapted parameters are then used to generate the prediction $\hat{\textbf{Y}}_t$ at the current time $t$. 4 metrics are designed to evaluate the adaptation performance. ADE 1 verifies how the adaptation works on the source trajectory $\textbf{Y}_{t-\tau, t}$. ADE 2 verifies whether adaptation can benefit short-term prediction in the presence of the behavior gap between the earlier and the current time. ADE 3 verifies whether we have obtained enough information from the source trajectory $\textbf{Y}_{t-\tau, t}$. ADE 4 verifies whether adaptation can benefit long-term prediction at the current time $t$.}
\label{fig:adaptation_trajectory}
\end{figure}
\subsection{Online Adaptation}
Adaptable prediction for individuals requires adapting the parameters of the neural network in real time. Due to the huge time delay in obtaining ground-truth intention labels, in this paper we only consider online adaptation for the low-level behavior-prediction policy (EDN). Formally, we regard the adaptation of the neural network as a parameter estimation process of a nonlinear system with noise:
\begin{figure}[t!]
\includegraphics[width=0.45\textwidth]{layer.pdf}
\caption{Percentage of error (ADE 2) reduced or raised after online adaptation, under 1) different adaptation steps $\tau$; 2) different layers of parameters adapted; 3) different scenarios. Three conclusions from the results: 1) online adaptation works best around an adaptation step of 2 or 3; 2) prediction accuracy is improved by a higher percentage in the roundabout scenario (transferred) than in the intersection scenario (trained); 3) the best adaptation performance is usually obtained by adapting $W^F_3$, the last layer of the FC network in the decoder.}
\label{fig:adaptation_layers}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.71\textwidth]{Adaptation_Analysis.pdf}
\caption{Online adaptation performance analysis. 1) ADE 1 and ADE 3: as the adaptation step $\tau$ increased, the online adaptation could get more information and improve prediction accuracy by a higher percentage. 2) ADE 2: as $\tau$ increases, the improved percentage first grew higher due to more information obtained, but then declined due to the behavior gap between the earlier time and the current time. When $\tau$ was 3, the short-term prediction was improved by 20\% on the intersection scenario (customizability) and 28\% on the roundabout scenario (transferability). 3) ADE 4: online adaptation can barely benefit long-term prediction.}
\label{fig:adaptation_analysis}
\end{figure*}
\begin{equation}
\textbf{Y}_{t} = f_{EDN}( \textbf{S}_t, g_t, \hat{\theta}_{t}) + \textbf{u}_t
\end{equation}
\begin{equation}
\hat{\theta}_{t} = \hat{\theta}_{t-1} + \omega_{t}
\end{equation}
where $\theta$ denotes the parameter of the EDN; the measurement noise $u_{t}\sim \mathcal{N}(0, \textbf{R}_t)$ and the process noise $\omega_{t}\sim \mathcal{N}(0, \textbf{Q}_t)$ are assumed to be zero-mean white Gaussian noise. For simplicity, we assume $\textbf{Q}_t=\sigma_q\textbf{I}$ and $\textbf{R}_t=\sigma_r\textbf{I}$, where $\sigma_q>0$ and $\sigma_r>0$.
A modified Extended Kalman Filter (MEKF$_\lambda$) algorithm is then developed, as in Algorithm~\ref{alg:adaptation}, to linearize the model and adapt an arbitrary layer of the neural network. To realize feedback decay from older observations, we modify the EKF to include a forgetting factor $\lambda$. Note that a multi-step adaptation strategy is considered, meaning that the prediction error over $\tau$ steps is taken as the feedback to adapt the model.
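The update equations of MEKF$_\lambda$ can be sketched in plain NumPy. The toy model below adapts the weight vector of a single linear output layer, for which the Jacobian $\textbf{H}_t$ is exact; it is not the EDN itself, and all hyperparameter values ($\lambda$, $\sigma_q$, $\sigma_r$) are illustrative:

```python
import numpy as np

def mekf_lambda_step(theta, P, H, y, y_hat, lam=0.99, sig_q=1e-8, sig_r=1e-2):
    """One MEKF-lambda update; H is the Jacobian of y_hat w.r.t. theta."""
    m = H.shape[0]
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + sig_r * np.eye(m))  # gain
    P = (P - K @ H @ P + sig_q * np.eye(P.shape[0])) / lam        # covariance
    theta = theta + K @ (y - y_hat)                               # correction
    return theta, P

# toy "last layer": y = theta . x, adapted from streaming observations
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
theta, P = np.zeros(3), np.eye(3)
for _ in range(200):
    x = rng.normal(size=3)
    y = np.array([theta_true @ x])  # observed output
    y_hat = np.array([theta @ x])   # model output before adaptation
    H = x.reshape(1, 3)             # exact Jacobian for a linear layer
    theta, P = mekf_lambda_step(theta, P, H, y, y_hat)
```

On this noise-free toy stream the adapted parameters approach the generating weights after a few dozen updates, which is the qualitative behavior the algorithm relies on.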
\subsection{A new set of metrics}
As in Figure \ref{fig:adaptation_trajectory}, we proposed a new set of metrics to systematically analyze the performance of online adaptation:
\begin{enumerate}
\item ADE 1: This metric evaluates the prediction error of the adapted steps on the historic trajectory $Y_{t-\tau, t}$. Because these steps are the observation source used to conduct online adaptation, the metric can verify whether the algorithm is working or not.
\item ADE 2: This metric evaluates the prediction error of the adaptation steps on the current trajectory, which aims at verifying how the time lag is influencing the prediction. Also, this method can be used to verify whether adaptation could improve short-term behavior prediction.
\item ADE 3: This metric evaluates the prediction error of the whole historic trajectory, which shows if we have gotten enough information on the behavior pattern.
\item ADE 4: This metric evaluates the prediction performance of the whole current trajectory, which shows whether or not the adaptation based on historic information can help current long-term behavior generation.
\end{enumerate}
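Read literally, the four metrics are ADEs over different slices of the historic and current trajectories. The sketch below encodes that reading; the array shapes, names, and the convention that the first $\tau$ steps are the adaptation steps are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def ade(pred, gt):
    """Average displacement error between two (T, 2) trajectories."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def adaptation_metrics(pred_hist, gt_hist, pred_curr, gt_curr, tau):
    """The four ADE metrics; the first tau steps are taken as adaptation steps."""
    return {
        "ADE1": ade(pred_hist[:tau], gt_hist[:tau]),  # adapted steps, historic traj.
        "ADE2": ade(pred_curr[:tau], gt_curr[:tau]),  # adaptation steps, current traj.
        "ADE3": ade(pred_hist, gt_hist),              # whole historic trajectory
        "ADE4": ade(pred_curr, gt_curr),              # whole current trajectory
    }
```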
\section{Experiment}
\subsection{Experiment setting}
We verified our method with real driving data from the INTERACTION dataset~\cite{interactiondataset}. Our model is trained on the MA intersection scenario by splitting 19084 data points into 80\% training data and 20\% testing data. The model is then directly tested on the FT roundabout scenario with 9711 data points to evaluate transferability. In our experiments, we choose the number of historic time steps $T_h$ as 10 and future time steps $T_f$ as 30, at a frequency of 10 Hz. We utilized the Adam optimizer and swept over more than 20 hyperparameter combinations for best performance. We proposed a new set of 4 metrics to systematically evaluate the performance of online adaptation, as shown in Figure~\ref{fig:adaptation_analysis}.
\subsection{The best layer to adapt}
There exist many layers in the EDN. We denote the encoder GRU's input-hidden weights as $W^E_{ih}$, the encoder GRU's hidden-hidden weights as $W^E_{hh}$, the decoder GRU's input-hidden weights as $W^D_{ih}$, the decoder GRU's hidden-hidden weights as $W^D_{hh}$, and the weights of the three-layer FC network stacked on top of the decoder as $W^F_1$, $W^F_2$, $W^F_3$, respectively. While it remains theoretically unknown which layer is the best to adapt, due to the lack of interpretability of neural networks, as in Figure~\ref{fig:adaptation_layers} we empirically found that $W_3^F$, the last layer of the FC network in the decoder, is the best to adapt.
\subsection{Trade-off on adaptation step $\tau$}
When we increase the adaptation step $\tau$, there is a trade-off between how much information we can get and the behavior gap between the earlier time $t-\tau$ and the current time $t$. Figure~\ref{fig:adaptation_analysis} shows our exploration of the best step $\tau$ to adapt. Several observations can be drawn: 1) prediction performance decayed when the model was directly transferred to the roundabout scenario. 2) Online adaptation could benefit short-term prediction as in ADE 2, but could barely improve long-term prediction as in ADE 4. 3) As $\tau$ increases, the improvement percentage in ADE 2 first rose, reached a peak at $\tau$ of 2 or 3, and then dropped. Specifically, ADE 2 is reduced by 20\% in the intersection scenario (customizability), and by 28\% in the roundabout scenario (transferability).
\section{Conclusion}
This paper extended a hierarchical prediction model to include adaptability, where MEKF$_\lambda$ was used to adapt to different human drivers and driving scenarios. A new set of metrics is proposed to systematically evaluate the performance of online adaptation. Our experiments demonstrated that online adaptation could improve short-term prediction accuracy by 20\% and 28\% in the trained and transferred scenarios, respectively. We also empirically found that the best layer to adapt is the last FC layer in the decoder, and the best adaptation step $\tau$ is around 2 or 3. Future efforts include online adaptation of intention predictions, online adaptation on more driving scenarios, and interpretation of the neural network.
\section{Introduction}
In recent years, photo- and electroproduction
of $\eta$ mesons on nucleons and nuclei have been studied intensively
\cite{TiB95,BeT95,SaF95a}. This
renewed interest has been triggered primarily by the significant
improvement on the quality of experimental data obtained with the new
generation of high-duty cycle electron accelerators like, e.g., ELSA in Bonn
and MAMI in Mainz \cite{Wil93,KrA95a,PrA95}. The special interest in the
electromagnetic production of $\eta$ mesons on the nucleon is based on the
fact that, being an isoscalar meson, it constitutes a selective filter for
isospin $I=\frac{1}{2}$ nucleon resonances $N^{*}$.
Furthermore, the e.m.\ $\eta$ production is dominated by the intermediate
excitation of the $S_{11}$(1535) resonance. Thus this reaction is an ideal
tool for investigating the characteristic features of this resonance, which
usually is difficult to separate from the other resonances because of their
large widths. For example, one can study its electromagnetic transition
multipoles and its decay channels, thus providing a good test for hadron
models.
Eta photoproduction on the deuteron is of particular interest since, first, it
allows one to study this reaction on a bound nucleon in the simplest nuclear
environment, and second, it provides important information on this reaction
on the neutron. With respect to the latter case, the deuteron is considered
as a neutron target assuming that binding effects can be
neglected. The first data for coherent $\eta$ photoproduction on the
deuteron were obtained almost thirty years
ago by Anderson and Prepost \cite{AnP69}. Their results for the relative weight
of the isoscalar and isovector amplitudes of the $S_{11}$(1535) resonance
were at variance
with quark model predictions and data analyses for pion photoproduction
on the nucleon in the region of the $S_{11}$ resonance.
Only with a large isovector
amplitude, the theory could explain the data assuming the validity of the
impulse approximation \cite{KrL76}. Other explanations were proposed by
Hoshi et al. \cite{HoH79} by invoking a dominance of $\eta$ rescattering on
the other nucleon or a large influence of the $P_{11}(1440)$ resonance.
The latter seems to be ruled out at present. Furthermore, their treatment
of rescattering is very approximate and a more recent evaluation of
rescattering effects, although still approximate, gave a much smaller effect
\cite{HaR89}. Therefore, improved calculations are urgently needed. Very
recently, rescattering has been considered in \cite{KaT95} in the
near threshold region showing only a small effect.
It is clear that new data will offer the
chance to investigate this problem in greater detail. With respect to new
experimental data, only upper limits exist at present for the reaction
on the deuteron \cite{Beu94,KrA95b} which,
however, are already at variance with the old data.
The aim of the present work is to initiate a systematic study of the
$d(\gamma,\eta)d$
reaction on the deuteron from threshold through the $S_{11}$ resonance.
As a first step, we will restrict
ourselves to the plane wave impulse approximation (PWIA) in order to study
details of the elementary reaction amplitude with respect to the yet
unknown neutron properties and to study different ways of implementing the
elementary amplitude in a bound system.
Among the phenomenological models describing existing data for the elementary
process on the proton for
energies below 1 GeV in a satisfactory manner, there are the isobar model
\cite{Hic73}, the coupled channel model \cite{BeT91} and the effective
Lagrangian approach \cite{BeM94,Kno94,SaF95}.
We have chosen the Lagrangian approach of \cite{BeM94,Kno94} because the
resonance and background terms are treated on the same level and the number
of unknown parameters is much smaller than for the two other models.
Near threshold the
resonances $P_{11}$(1440), $D_{13}$(1520) and $S_{11}$(1535) are
likely to contribute. Although the $P_{11}$(1440) is below the threshold,
it can influence the reaction because of its large
width. Little is known about the decay widths of the resonances $P_{11}$(1440) and
$D_{13}$(1520) to $\eta N$, and they are not listed in \cite{PDG94}.
In \cite{PDG92}, the branching ratio of the $D_{13}$(1520) into this channel
is listed to be $0.1\%$. Furthermore, the experiment of Krusche et al.\
\cite{KrA95a} did not give any evidence for the
influence of the $P_{11}$(1440), whereas the $D_{13}$(1520) was seen for the
first time. Nevertheless, it was shown that the influence of the
$D_{13}$(1520) on the total cross section was negligible. For these
reasons, it is legitimate to consider in the present study
only the dominant $S_{11}$ resonance.
In addition we include background terms such as nucleon pole terms
and the exchange of the vector mesons $\rho$ and $\omega$.
Apart from the resonance, also
the strength and nature (pseudoscalar or pseudovector) of the
$\eta NN$-coupling is widely unknown.
When implementing the elementary reaction amplitude into the deuteron, one
has to face several problems which are analogous to coherent
$\pi$ photoproduction on the deuteron. First of all, the energy of
the struck nucleon on which the reaction takes place is not well defined in
a bound nonrelativistic system. Consequently, the invariant mass of the
$\gamma N$ subsystem of the elementary reaction, on which the elementary
$t$-matrix depends, is not well defined. In particular, the invariant
mass determines the decay width of the resonance to which
the resulting cross section is very sensitive. Secondly,
the elementary reaction amplitude, which usually is given
in the $\gamma N$ c.m.\ frame, has to be transformed to an arbitrary
reference frame. This may be done either by a Lorentz boost of all four
momenta on which the elementary process depends or by calculating the
$t$-matrix with respect to an arbitrary frame right from
the beginning. The last method is more general because one does
not lose any terms which vanish in the $\gamma N$ c.m.\ frame.
But in both cases one faces
again the problem of assigning an energy to the bound nucleon.
We will compare both methods for the contributions of vector meson
exchange in order to check to what extent the differences matter.
The paper is organized as follows: In the next section we present the
model for the elementary reaction, and fix all parameters for the
process on the proton.
Then we discuss in Sect.\ III the theoretical changes that have to be
considered for the process on the deuteron. In Sect.\ IV we present
our results and compare them with existing experimental data and
other theoretical calculations. Finally, we give a brief summary and
an outlook. Some calculational details are collected in an appendix.
\section{The elementary process $\gamma + N \rightarrow \eta + N$}
\label{elprocess}
In this section, we will briefly review the elementary reaction in order to
introduce the model and to fix as many model parameters as possible for
the implementation of the process into the deuteron.
The Mandelstam variables of the reaction $\gamma (k) + N(p) \rightarrow
\eta (q) + N(p')$ are defined as usual
\begin{eqnarray}
s & = & (k_\mu+p_\mu)^{2}=(q_\mu+p_\mu^{\,\prime})^{2}, \\
t & = & (k_\mu-q_\mu)^{2}=(p_\mu-p_\mu^{\,\prime})^{2}, \\
u & = & (k_\mu-p_\mu^{\,\prime})^{2}=(q_\mu-p_\mu)^{2},
\end{eqnarray}
where $p_\mu$ ($p_\mu^{\,\prime}$) denotes the four momentum of the incoming
(outgoing) nucleon, $k_\mu$ and $q_\mu$ the momenta of photon and eta meson,
respectively. The energies of the participating particles are given by
\begin{eqnarray}
k_{0} & = & |\vec{k}|, \\
\omega_{q} & = & \sqrt{m_{\eta}^{2}+\vec{q}\,^{2}}\,, \\
E_{p^{(\prime)}} & = & \sqrt{M^{2}+\vec{p}\,^{(\prime)\,2}}\,.
\end{eqnarray}
The invariant mass of the $\gamma N$ system is given as function of the
photon lab energy $E^{Lab}_{\gamma}$ by
\begin{equation}\label{equ:inv}
W_{\gamma N}=\sqrt{s}=\sqrt{M^{2}+2E^{Lab}_{\gamma} M}\,,
\end{equation}
and the absolute values of the three momenta in the c.m.\ system by
\begin{eqnarray}
k & = & \frac{s-M^{2}}{2\sqrt{s}}, \\ \label{equ:q}
q & = & \frac{1}{2\sqrt{s}}
\sqrt{[s-(M+m_{\eta})^{2}][s-(M-m_{\eta})^{2}]}\,.
\end{eqnarray}
Nucleon and eta masses are denoted by $M$ and $m_\eta$, respectively.
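These relations are straightforward to evaluate numerically; the sketch below (masses in MeV, with an $\eta$ mass value consistent with the thresholds quoted later in the text) computes $W_{\gamma N}$ and the c.m.\ momenta, and can be used, e.g., to locate the production threshold.

```python
import numpy as np

M, M_ETA = 938.92, 548.8   # nucleon and eta masses in MeV (assumed values)

def cm_kinematics(E_lab):
    """Invariant mass W and c.m. momenta k, q for gamma N -> eta N."""
    W = np.sqrt(M**2 + 2.0 * E_lab * M)     # W_{gamma N} = sqrt(s)
    s = W * W
    k = (s - M**2) / (2.0 * W)              # photon c.m. momentum
    q2 = (s - (M + M_ETA)**2) * (s - (M - M_ETA)**2)
    q = np.sqrt(max(q2, 0.0)) / (2.0 * W)   # eta c.m. momentum, 0 below threshold
    return W, k, q
```

At the threshold lab energy $E^{Lab}_{\gamma}=\left[(M+m_\eta)^2-M^2\right]/2M$ the eta momentum $q$ vanishes and $W=M+m_\eta$.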
In the c.m.\ system, the unpolarized differential cross section is given by
\begin{equation}
\frac{d \sigma}{d \Omega_{\eta}^{c.m.}}=\frac{1}{6}\frac{1}{16\pi^{2}}
\frac{q}{k}
\frac{E_k E_q}
{W_{\gamma N}^{2}}
\sum_{m^{\prime} m \mu}
|\tau_{m^{\prime} m \mu}(W_{\gamma N},\theta)|^{2}\,,
\end{equation}
where $\tau_{m^{\prime} m \mu}$ denotes the $t$-matrix element for initial
and final nucleon spin projections $m$ and $m'$, respectively, and photon
polarization $\mu$
\begin{equation}
\tau_{m^{\prime} m \mu}(W_{\gamma N},\, \theta) = \langle m'|
t_{\gamma \eta}(W_{\gamma N})|m \mu
\rangle\,,
\end{equation}
and $\theta$ the angle of the outgoing $\eta$ meson with respect to the
incoming photon momentum.
Here we have assumed covariant state normalization, i.e.,
\begin{equation}
\langle p^{\,\prime}\,|p\rangle=(2\pi)^3\frac{E_p}{M}\delta (\vec
p^{\,\prime}-\vec p)
\end{equation}
for fermions and
\begin{equation}
\langle p^{\,\prime}\,|p\rangle=(2\pi)^3 2\omega_p\delta (\vec
p^{\,\prime}-\vec p)
\end{equation}
for bosons.
Besides the dominant $S_{11}$ resonance in the $s$-channel, we consider
for the $t$-matrix the $S_{11}$ in the $u$-channel, the nucleon pole terms
in the $s$- and $u$-channels, as well as $\rho$ and $\omega$ exchange in the
$t$-channel as background contributions. The corresponding diagrams are
shown in Fig.\ \ref{fig:a}.
All vertices are calculated from the following effective Lagrangians
\cite{BeM94}
\begin{eqnarray}
{\cal L}_{\eta NN}^{PS} & = & -i g_{\eta} \bar\Psi \gamma_{5} \Psi \phi_{\eta}
\,,\\
{\cal L}_{\gamma NN} & = & -e \bar\Psi \gamma_{\mu} \frac{1+\tau_{0}}{2}
\Psi A^{\mu}-\frac{e}{4 M} \bar\Psi (\kappa^{s}+\kappa^{\nu} \tau_{0})
\sigma_{\mu \nu} \Psi F^{\mu \nu}
, \\
{\cal L}_{VNN} & = & - g_{v} \bar\Psi \gamma_{\mu} \Psi V^{\mu}- \frac{g_{t}}
{4 M} \bar\Psi \sigma_{\mu \nu} \Psi V^{\mu\nu}\,, \label{lvnn}\\
\label{equ:he}
{\cal L}_{V\eta \gamma} & = & \frac{e \lambda}{4 m_{\eta}} \varepsilon_{\mu\nu
\lambda\sigma} F^{\mu \nu} V^{\lambda
\sigma} \phi_{\eta}\,, \\
{\cal L}_{\eta NS_{11}} & = & -i g_{\eta N S_{11}} \bar\Psi \Psi_{S_{11}} \phi_{\eta}
+h.c\,, \\
{\cal L}_{\gamma NS_{11}} & = & -\frac{e}{2 (M_{S_{11}}+M)} \bar\Psi_{S_{11}} (\kappa^{s}_{
res}
+\kappa^{v}_{S_{11}} \tau_{0}) \gamma_{5} \sigma_{\mu \nu} \Psi F^{\mu \nu}
+h.c. \, .\label{lgns}
\end{eqnarray}
Here, $\Psi$ and $\Psi_{S_{11}}$ describe the fermion fields of the nucleon
and the $S_{11}$ resonance, respectively, $V^{\mu}$
the vector meson field, $\phi_{\eta}$ the pseudoscalar eta meson
field, and $A^{\mu}$ the photon field with its field tensor $F^{\mu \nu}$.
The vector meson field tensor is analogously
defined by $V^{\mu\nu}=\partial^{\mu}V^{\nu}-\partial^{\nu}V^{\mu}$.
The $S_{11}$ mass is denoted by $M_{S_{11}}$ and the isoscalar (isovector)
anomalous magnetic moment of the nucleon by $\kappa^{s}$ $(\kappa^{v})$.
For the $\eta NN$-vertex we have chosen the pseudoscalar coupling
\cite{KrA95a,TiB94}. Values of the coupling constant range
usually between 0 and 7 \cite{BeM94,TiB94}. Fitting the model to
the data, we find as best choice $\frac{g_{\eta}^{2}}{4 \pi}=0.4$ in
good agreement with results from \cite{KrA95a} and with a recent
calculation by Kirchbach and Tiator \cite{KiT96} evaluating an effective
$\eta NN$-vertex associated with the $a_0(980) \pi N$ triangle diagram.
The coupling constant $\lambda$ of the $V\eta\gamma$-vertex
can be determined from the known electromagnetic
decay $V \rightarrow \gamma \eta$, with $V=\rho$ or $\omega$. Here
we use $\Gamma_{\rho\rightarrow \gamma \eta}=57.46$ keV and $\Gamma_{\omega
\rightarrow \gamma \eta}=3.96$ keV \cite{PDG92}.
At the hadronic $VNN$-vertex, a phenomenological form factor $F(\vec q_V)$
of dipole type is introduced
\begin{equation}
F(\vec q_V\,)= \frac{(\Lambda_{V}^2-m_{V}^2)^2}{(\Lambda_{V}^2
+\vec q_V\,^2)^2},
\end{equation}
with $\vec q_V$ being the momentum of the vector meson and
$\Lambda_{V}$ the cut-off mass. A more detailed description, also for
the coupling constants of the hadronic vertex $g_{v}$ and $g_{t}$, may be found
in \cite{Kno94} and references therein. Table \ref{tab:d} summarizes all
parameters used for the vector mesons.
The photoexcitation of the $S_{11}$ resonance can be described by the
helicity amplitudes $A^{p,n}_{1/2}$ (with upper index $p$ referring to the
proton, $n$ to the neutron) which may be split into isoscalar and isovector
amplitudes by
\begin{equation}
A^{p,n}_{1/2}= A^{s}_{1/2} \pm A^{v}_{1/2}\,.
\end{equation}
They determine the resonance parameters $\kappa^{s/v}_{S_{11}}$ in
(\ref{lgns}) and are related to the helicity amplitudes $A^{s/v}_{1/2}$ by
\begin{equation}
e\kappa_{S_{11}}^{s/v}
= \sqrt{\frac{2M(M_{S_{11}}+M)}{M_{S_{11}}-M}} A_{1/2}^{s/v}\,.
\end{equation}
Furthermore, the $\eta N S_{11}$-coupling constant is given by
\begin{equation}
\frac{g_{\eta NS_{11}}^{2}}{4\pi} = \frac{2M_{S_{11}}^{2}}{q_{\eta}^{*}
((M_{S_{11}}+M)^{2}-m_{\eta}^{2})} \Gamma^{0}_{S_{11}} ,
\end{equation}
where $q_{\eta}^{*}$ is the $\eta$ momentum at resonance in the $\eta N$
c.m.\ frame, given by (\ref{equ:q}) for $s=M_{S_{11}}^2$,
and $\Gamma^{0}_{S_{11}}$ denotes the $S_{11}$ decay width.
The resonance mass $M_{S_{11}}$, the helicity
amplitudes $A^{p,n}_{1/2}$, the hadronic
coupling $g_{\eta NS_{11}}$ and the decay width $\Gamma^{0}_{S_{11}}$ are only
known within large uncertainties \cite{KrA95a,PDG94}.
For the resonance, we use the free propagator in the form
\begin{equation}
\Big[W_{\gamma N}-M_{S_{11}}+\frac{i}{2}\Gamma_{S_{11}}(W_{\gamma N})
\Big]^{-1}\,.
\label{propS11}
\end{equation}
The $S_{11}$ resonance mainly decays into $N\eta$ (50 \%), $N\pi$ (40 \%),
and $N\pi\pi$ (10 \%) \cite{PDG94}.
Consequently, one has for the total decay width at resonance position
\begin{eqnarray}
\Gamma_{S_{11}}(M_{S_{11}}) & = & \Gamma_{S_{11} \rightarrow\eta N}+\Gamma_{S_{11}
\rightarrow\pi N}+\Gamma_{S_{11} \rightarrow\pi\pi N}\\ \label{equ:235}
& = & 0.5\Gamma^{0}_{S_{11}}+0.4\Gamma^{0}_{S_{11}}+0.1\Gamma^{0}_{S_{11}}\,.
\end{eqnarray}
In (\ref{propS11}) we have already introduced an energy-dependent decay width
$\Gamma_{S_{11}}(W_{\gamma N})$ depending on the invariant mass $W_{\gamma
N}$ of the system, since this gives a much better description of the experimental
data than using a constant width. For the two leading
decay channels, we have calculated the corresponding partial decay widths from
the Lagrangians in first order perturbation theory, whereas
the three body decay $S_{11} \rightarrow\pi+\pi+N$ is treated on a purely
phenomenological level keeping it constant above its threshold. Then we have
for the total width as function of the invariant mass
\begin{eqnarray} \label{equ:a}
\Gamma_{S_{11}}(W_{\gamma N}) & = & \frac{g_{\eta NS_{11}}^{2}
M}{2 \pi W_{\gamma N}}
{q}_{\eta}(W_{\gamma N}) \Theta(W_{\gamma N}-W^{th}_{\eta N}) \nonumber \\
& & +\frac{g_
{\pi NS_{11}}^{2} M}{2 \pi W_{\gamma N}} {q}_{\pi}(W_{\gamma N})
\Theta(W_{\gamma N}-W^{th}_{\pi N }) \nonumber \\
& & +0.1\Gamma^{0}_{S_{11}}\Theta(W_{\gamma N}-W^{th}_{\pi\pi N}).
\end{eqnarray}
Here, ${q}_{\eta}(W_{\gamma N})$ is given by (\ref{equ:q}) and
${q}_{\pi}(W_{\gamma N})$ by the
analogous equation replacing $m_\eta$ by $m_\pi$. The threshold masses for
the decay channels are correspondingly denoted by $W^{th}_{\eta N}$,
$W^{th}_{\pi N}$ and $W^{th}_{\pi\pi N}$.
The coupling constants $g_{\eta N S_{11}}$ and $g_{\pi N S_{11}}$ are
fixed by evaluating the partial decay widths at resonance position.
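The construction of the energy-dependent width can be sketched as follows. The resonance mass and total width used here are illustrative placeholders (the actual fit values are those of Table \ref{tab:a}), and the effective couplings are fixed, as in the text, so that the partial widths at resonance reproduce the 50/40/10 split.

```python
import numpy as np

M, M_ETA, M_PI = 938.92, 548.8, 139.57   # masses in MeV
M_S11, GAMMA0 = 1544.0, 203.0            # illustrative S11 parameters (MeV)

def q_cm(W, m):
    """Meson c.m. momentum for invariant mass W; zero below threshold."""
    arg = (W**2 - (M + m)**2) * (W**2 - (M - m)**2)
    return np.sqrt(max(arg, 0.0)) / (2.0 * W)

# Fix the couplings so the partial widths at W = M_S11 give the 50/40/10 split.
C_ETA = 0.5 * GAMMA0 / (M * q_cm(M_S11, M_ETA) / (2.0 * np.pi * M_S11))
C_PI  = 0.4 * GAMMA0 / (M * q_cm(M_S11, M_PI) / (2.0 * np.pi * M_S11))

def gamma_s11(W):
    """Energy-dependent width: eta N and pi N channels plus constant pi pi N part."""
    g  = C_ETA * M * q_cm(W, M_ETA) / (2.0 * np.pi * W)
    g += C_PI * M * q_cm(W, M_PI) / (2.0 * np.pi * W)
    g += 0.1 * GAMMA0 if W > M + 2.0 * M_PI else 0.0
    return g
```

By construction the total width at resonance equals $\Gamma^{0}_{S_{11}}$, while below the $\pi N$ threshold all channels are closed.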
In Table \ref{tab:a} we list the set of resonance parameters for
which we obtain the best fit \cite{Bre95}
to the total cross section data as can be seen in
Fig.\ \ref{tot:etaN}. Only above 850 MeV, the theoretical cross section
overestimates the data. The
differential cross sections calculated with these parameters are shown
in Fig.\ \ref{diff:etaN}. While at lower energies the angular dependence
is quite satisfactory, one finds larger deviations at higher energies.
One reason for this might be the omission of the $D_{13}$ resonance
\cite{Kno94,TiB94}. However, for the present study this is of no relevance.
\section{The Coherent Process on the Deuteron}\label{deuteron}
Now we turn to the coherent $\eta$ production on the deuteron
\begin{equation}
\gamma(k_\mu)+d(d_\mu) \rightarrow \eta(q_\mu)+d(d_\mu^{\,\prime})\,.
\end{equation}
The variables in parentheses refer to the corresponding four momenta of the
particles. We will consider this reaction in the photon-deuteron
($\gamma d$) c.m.\
frame. There we choose the $z$-axis along the photon momentum $(\vec e_z=
\hat k =\vec k /k)$, the $y$-axis parallel to $\vec k \times \vec q$ and
the $x$-axis such as to form a right handed system. Thus the outgoing $\eta$
meson is described by the spherical angles $\phi=0$ and $\theta$
with $\cos\theta=\hat{q}\cdot\hat{k}$.
For the unpolarized differential cross section we obtain in the $\gamma d$
c.m.\ frame \cite{Bre95}
\begin{equation}
\frac{d \sigma}{d \Omega_{\eta}^{c.m.}}=\frac{1}{6}\frac{1}{16\pi^{2}}
\frac{q}{k}
\frac{E^{d}_k E^{d}_q}{W_{\gamma d}^{2}}
\sum_{m_{d}m_{d}^{\prime}\mu}
|\tau_{m_{d}m_{d}^{\prime}\mu}(W_{\gamma d},\theta)|^{2} \,,
\end{equation}
with $E^d_p= \sqrt{M_{d}^{2}+\vec p^{\,2}}$. Here we have introduced as
$t$-matrix
\begin{equation}
\tau_{m_{d}m_{d}^{\prime}\mu}(W_{\gamma d},\theta) = \langle m'_d|
t_{\gamma \eta}^d(W_{\gamma d})|m_d \mu\rangle\,,
\end{equation}
denoting the initial and final deuteron spin projections by $m_d$ and
$m'_{d}$, respectively, and the photon polarization by $\mu$. Furthermore,
$W_{\gamma d}$ denotes the invariant mass of the $\gamma d$ system and $k$
and $q$ again the photon and $\eta$ momentum, respectively, in the $\gamma
d$ c.m.\ system. These quantities are given as function of the lab photon
energy by expressions analogous to (\ref{equ:inv}) through (\ref{equ:q})
replacing the nucleon mass $M$ by the deuteron mass $M_d$.
Here the $\eta$ production threshold is at $E_{\gamma}^{Lab} =629$ MeV,
which is equivalent to an invariant mass $W_{\gamma d}^{thr}=2424$ MeV.
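These threshold values follow from the deuteron analogue of (\ref{equ:inv}); a quick numerical check, assuming $m_\eta=548.8$ MeV:

```python
import numpy as np

M_D, M_ETA = 1875.61, 548.8                           # deuteron, eta masses in MeV
E_thr = ((M_D + M_ETA)**2 - M_D**2) / (2.0 * M_D)     # photon lab threshold energy
W_thr = np.sqrt(M_D**2 + 2.0 * E_thr * M_D)           # = M_D + M_ETA
print(E_thr, W_thr)  # roughly 629 MeV and 2424 MeV
```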
As noted in the introduction, we restrict our calculation to the impulse
approximation which means that the reaction will take place only
on one of the two nucleons leaving the other
as a pure spectator (see the diagram in Fig.\ \ref{fig:kk}).
Consequently, the production operator $t_{\gamma\eta}^d $ for the reaction on
the deuteron is obtained from the elementary operator $t_{\gamma\eta}$ by
\begin{equation}
t_{\gamma\eta}^{d}=t_{\gamma\eta}^{(1)} \times {\rm 1\hspace{-0.75ex}1}^{(2)} +{\rm 1\hspace{-0.75ex}1}^{(1)} \times
t_{\gamma\eta}^{(2)}\,,
\end{equation}
where the upper index in brackets refers to the nucleon on which the operator
acts. Off-shell effects will be neglected.
Then the $t$-matrix has the following form
\begin{eqnarray}
\tau_{m_{d}m_{d}^{\prime}\mu} & = &2\sum_{m_s',m_s}
\int d^{3}p \,\psi^{\ast}_{m_s' m^{\prime}_{d}}
\Big(\vec{p}-\frac{\vec{q}}{2} \Big)\bra{\vec{p}-\vec{q}\,;1m_s',00}
t^{(1)}_{\gamma \eta}\ket{\vec{p}-
\vec{k}\,;1m_s,00}\psi_{m_s m_{d}}\Big(\vec{p}-\frac{\vec{k}}{2} \Big),
\label{equ:t}
\end{eqnarray}
where $|1m_s,00\rangle$ denotes the two-nucleon spin and isospin wave
function.
Here, the intrinsic part of the deuteron wave function
projected onto $|1m_s\rangle$ is denoted by
\begin{equation}
\psi_{m_s m_{d}}(\vec p\,)=\sum_{l=0,2}\sum_{m_l}(l m_l 1 m_s|1 m_d) i^l u_l(p)
Y_{l,{m_l}}(\hat p)\,.
\end{equation}
For the radial wave functions $u_{0} $
and $u_{2} $, we have taken the ones of the Bonn
r-space potential \cite{MaH87}.
The operator $t_{\gamma \eta}^{(1)}$ is a
function of the photon, nucleon and eta momenta $\vec k$, $\vec p$, and
$\vec q$, respectively, the photon polarization $\mu$, and the invariant
mass $W_{\gamma N}$ of the photon-nucleon subsystem. As already mentioned
in the introduction, implementing this operator into a bound system poses
two problems. First of all, one has to know the invariant mass
$W_{\gamma N}$ for the struck or active nucleon, and secondly, one needs
the elementary $t$-matrix not in the $\gamma N$ c.m.\ system but in an
arbitrary frame of reference. We will discuss
these two points now in some detail.
\subsection{ Choices for the invariant mass $W_{\gamma N}$ for the
${\gamma N}$ subsystem}
For a bound system of two nucleons, the general expression for $W_{\gamma N}$
is, assuming the reaction to take place at nucleon ``1'',
\begin{eqnarray}
W_{\gamma N} & = & \sqrt{(p_{10}+k_{0})^{2}-(\vec{p}_{1}+\vec k)^{2}}
\nonumber \\
& = & \sqrt{(p_{10}^{\prime}+\omega_{q})^{2}-(\vec{p}_{1}\,^{\prime}+\vec
q\,)^{2}},
\end{eqnarray}
where $p_{1\mu}^{(\prime)}=(p_{10}^{(\prime)},\vec p_1^{\,(\prime)})$ denotes
its initial (final) four momentum. In general, one has $p_{10}^{(\prime)}
\not=\sqrt{M^{2}+\vec{p}_1^{\,(\prime)\,2}}$. In fact, the energy of an
individual bound nucleon of a nonrelativistic many-particle system is not
well defined. Only the total sum of the energies
of all nucleons is a well defined quantity, e.g., for the deuteron
$E^{d^{(\prime)}}=p_{10}^{(\prime)}+p_{20}^{(\prime)}$.
One of many possible choices is to distribute the total energy of the deuteron
equally on each of the two nucleons (Blankenbecler-Sugar choice) in the
deuteron rest system, i.e., each nucleon has the energy $M_{d}/2$,
independent of the momentum. Making such an assignment for the initial two
nucleons, the boost to the $\gamma d$ c.m.\ system
gives then with $\vec p_1=\vec p - \vec k$
\begin{equation}\label{equ:w1}
W_{\gamma N}^{BS}=\frac{1}{E^{d}_k}\sqrt{\Big(\frac{1}{2}(E^{d}_k+k_{0})^2
-\vec k \cdot \vec p\,\Big)^{2}-\vec{p}\,^{2}(E^{d}_k)^2} \,,
\end{equation}
while for the final two nucleons one finds
\begin{equation}\label{equ:w1s}
W_{\gamma N}^{BS'}=\frac{1}{E^{d}_q}\sqrt{\Big(\frac{1}{2}(E^{d}_q+\omega_q)^2
-\vec q \cdot \vec p\,\Big)^{2}-\vec{p}\,^{2}(E^{d}_q)^2} \,.
\end{equation}
Another possibility is to take
the active nucleon on-shell either before or after the interaction. In the
first case, one finds with
$p_{10}=E_{p-k}=\sqrt{M^{2}+(\vec{p}-\vec{k}\,)^{2}}$
\begin{equation}\label{equ:Ti}
W_{\gamma N}^{N}=\sqrt{(k_{0} +E_{p-k})^{2}-\vec{p}\,^{2}}\,,
\end{equation}
whereas in the second case with the final nucleon on-shell, i.e.\
$p_{10}^{\,\prime}=E_{p-q}=\sqrt{M^{2}+(\vec{p}-\vec{q}\,)^{2}}$,
one has
\begin{equation}\label{equ:finalN}
W_{\gamma N}^{N'}=\sqrt{(\omega_{q}+E_{p-q})^{2}-\vec{p}\,^{2}}\,.
\end{equation}
The former choice has been made in \cite{TiB94}.
In all these three cases, the invariant mass depends on the
kinematics, i.e., for fixed nucleon momentum on the angle between
$\vec p$ and the outgoing $\eta$ momentum $\vec q$.
The last choice we want to discuss is to put the spectator nucleon on-shell,
i.e., $p_{20}=E_{p}$. This choice has been taken in \cite{HoH79}. It
is also used in coherent $\pi^0$ photoproduction on the deuteron \cite{WiA95},
and it may be justified by the fact that the deuteron is only loosely bound,
and hence the spectator acts nearly like a free nucleon. We obtain in this case
\begin{equation}\label{equ:eb}
W_{\gamma N}^{S}=\sqrt{(W_{\gamma d}-E_{p})^{2}-\vec{p}\,^{2}}\,,
\end{equation}
which is independent of the direction of $\vec p$.
Figure \ref{fig:1wsub} shows the invariant mass $W_{\gamma N}$ for these
different choices as function of the spectator momentum at fixed
photon lab energy $E_{\gamma}^{Lab}=800$ MeV. For the first four choices
($W_{\gamma N}^{BS}$, $W_{\gamma N}^{BS'}$, $W_{\gamma N}^{N}$
and $W_{\gamma N}^{N'}$) we have represented the boundaries of the range
spanned by the angle dependence by two curves corresponding to
$\vec p$ and $\vec k$ or $\vec q$ parallel (upper curve) and
antiparallel (lower curve).
One readily notes that $W_{\gamma N}^{N}$ spans the largest range, while
the smaller ranges of $W_{\gamma N}^{BS}$, $W_{\gamma N}^{BS'}$
and $W_{\gamma N}^{N'}$ are compatible with each other. However, the average
invariant masses nearly coincide
for $W_{\gamma N}^{N}$ and $W_{\gamma N}^{N'}$, and they show a slight
increase with increasing spectator momentum, whereas it decreases for both
$W_{\gamma N}^{BS}$ and $W_{\gamma N}^{BS'}$ which, moreover, show a very
similar behaviour. Finally, $W_{\gamma N}^{S}$ gives the lowest invariant
mass with the strongest decrease with increasing $p$.
One has to keep in mind that the main contribution to the total cross
section originates from momenta
$p$ below 400 MeV. But even in this region one notes a sizeable difference
between the various choices for the invariant mass of the active $\gamma N$
subsystem.
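The angle dependence, or lack thereof, of these prescriptions is easy to exhibit numerically. The sketch below (GeV units, our own notation) compares the spectator-on-shell choice $W^{S}_{\gamma N}$ with the active-nucleon-on-shell choice $W^{N}_{\gamma N}$ for $\vec p$ parallel and antiparallel to $\vec k$ at $E_{\gamma}^{Lab}=0.8$ GeV.

```python
import numpy as np

M, M_D = 0.93892, 1.87561   # nucleon and deuteron masses in GeV

def w_spectator(W_gd, p_vec):
    """W^S: spectator nucleon on-shell; depends only on |p|."""
    E_p = np.sqrt(M**2 + p_vec @ p_vec)
    return np.sqrt((W_gd - E_p)**2 - p_vec @ p_vec)

def w_nucleon(k0, k_vec, p_vec):
    """W^N: initial active nucleon on-shell; depends on the angle of p."""
    E_pk = np.sqrt(M**2 + (p_vec - k_vec) @ (p_vec - k_vec))
    return np.sqrt((k0 + E_pk)**2 - p_vec @ p_vec)

# gamma-d c.m. kinematics at E_lab = 0.8 GeV
W_gd = np.sqrt(M_D**2 + 2.0 * 0.8 * M_D)
k = (W_gd**2 - M_D**2) / (2.0 * W_gd)
k_vec = np.array([0.0, 0.0, k])
p_par, p_anti = np.array([0.0, 0.0, 0.3]), np.array([0.0, 0.0, -0.3])
```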
\subsection{Contribution of $\omega$ meson exchange in an arbitrary
reference frame}
Now we will illustrate for the case of $\omega$ meson exchange the two
methods for deriving the elementary production
operator for a general frame of reference as mentioned above. To this end,
we evaluate first the corresponding Feynman diagrams for an arbitrary
frame (see the Appendix). The resulting $t$-matrix is expressed in terms
of two amplitudes $\tilde M_v$ and $\tilde M_t$ which are also defined in
the Appendix. These in turn can be represented as linear combinations of
the following operators
\begin{equation}
\omnull{a}{b}:=i \vec{\varepsilon}\cdot (\vec a \times \vec b)\,,
\hspace{1cm}
\Omega^1:= \vec{\sigma} \cdot \vec{\varepsilon}\,,\hspace{1cm}
\omzwei{a}{b}:= \vec{\sigma} \cdot \vec a\, \vec{\varepsilon} \cdot \vec b
\,.
\label{omoperator}
\end{equation}
We then specialize to the $\gamma d$ c.m.\ system by the replacements
$\vec p \rightarrow \vec p - \vec k$ and $\vec p^{\,\prime}
\rightarrow \vec p - \vec q$, so that the $\gamma N$ subsystem moves with
the total momentum $\vec p$. The corresponding coefficients
for the amplitudes $\tilde M_v$ and $\tilde M_t$ are listed in
Tab.\ \ref{tab:lt} where $s$, $t$ and $u$ stand for the Mandelstam
variables of the $\gamma N$ subsystem. We further have introduced
$p_0=\sqrt{\vec p^{\, 2}+s}$, the total energy of the subsystem,
$E=\sqrt{(\vec p - \vec k\,)^2+M^2}$ and $E'=\sqrt{(\vec p - \vec q\,)^2+
M^2}$, the on-shell energies of the initial and final active nucleon,
respectively, and $e_\pm= e'\pm e$ with $e^{(\prime)}= E^{(\prime)}+M$.
Then, in order to proceed according to the second method, we specialize to the
$\gamma N$ c.m.\ frame. Here one usually expresses the $t$-matrix in terms of
the CGLN amplitudes. But we prefer to give them in terms of the operators
defined in (\ref{omoperator}),
i.e.\
\begin{eqnarray}
\vec{\sigma} \cdot \vec{k}\,\vec{\varepsilon} \cdot \vec{q}
&=& \omzwei{k}{q}\,,\\
\vec{\sigma} \cdot \vec{q}\,\vec{\varepsilon} \cdot \vec{q}
&=& \omzwei{q}{q}\,,\\
i\vec{\sigma} \cdot \vec{q}\,\vec{\sigma} \cdot (\vec{k}\times\vec{
\varepsilon}\,) &=& -\omnull{k}{q} +\vec k \cdot \vec q \,\mbox{$ \Omega^1 $} -
\omzwei{k}{q} \,.
\end{eqnarray}
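The reduction to the operators of (\ref{omoperator}) rests on the Pauli-algebra identity $\vec{\sigma}\cdot\vec a\,\vec{\sigma}\cdot\vec b=\vec a\cdot\vec b+i\vec{\sigma}\cdot(\vec a\times\vec b)$, which can be cross-checked numerically with explicit Pauli matrices:

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sigma_dot(v):
    """sigma . v for a real 3-vector v."""
    return v[0] * SX + v[1] * SY + v[2] * SZ

def check_identity(a, b):
    """Verify sigma.a sigma.b = (a.b) 1 + i sigma.(a x b)."""
    lhs = sigma_dot(a) @ sigma_dot(b)
    rhs = np.dot(a, b) * I2 + 1j * sigma_dot(np.cross(a, b))
    return np.allclose(lhs, rhs)
```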
We list the resulting coefficients in Tab.\ \ref{tab:cm}.
These operators are
then transformed to the $\gamma d$ c.m. frame by a proper Lorentz boost of
all participating momenta, i.e., we boost $\vec k_{c.m.}$ and $\vec q_{c.m.}$
defined with respect to the $\gamma N$ c.m.\ frame to
$\vec k$ and $\vec q$ with
respect to the $\gamma d$ c.m. frame. In the latter frame the $\gamma N$
system moves with momentum $\vec p$ according to our choice of variables
(see Fig.\ \ref{fig:kk}).
Then the Lorentz transformation reads
\begin{eqnarray}
\vec{k}_{c.m.} & = & \vec{k} +A_k\vec{p}
\label{equ:d1}, \\
\vec{q}_{c.m.} & = & \vec{q} +A_q\vec{p}
\label{equ:d2}\,,
\end{eqnarray}
with the boost parameter
\begin{equation}\label{equ:boosta}
A_k = \frac{1}{W_{\gamma N}}\Big( \frac{\vec{k}\cdot\vec{p}}
{E_{\gamma N}+W_{\gamma N}}-k_{0} \Big)\,,
\end{equation}
where $E_{\gamma N}=\sqrt{W_{\gamma N}^2+\vec p^{\,2}}$ denotes the energy
of the $\gamma N$ subsystem,
and $A_q$ is given by a corresponding expression
replacing $k_\mu$ by $q_\mu$ in
(\ref{equ:boosta}). Expressing $k_0$ and $\omega_q$ in terms of the
invariant mass, the energy and the total momentum of the subsystem
\begin{eqnarray}
k_{0}&=& \frac{1}{2E_{\gamma N}}\Big(W_{\gamma N}^2 -M^2
+2\vec{k}\cdot\vec{p}\Big)\,,\\
\omega_q&=& \frac{1}{2E_{\gamma N}}\Big(W_{\gamma N}^2 -M^2+m_\eta^2
+2\vec{q}\cdot\vec{p}\Big)\,,
\end{eqnarray}
we find for the boost parameters
\begin{eqnarray}
A_k & =& -\frac{1}{2W_{\gamma N}E_{\gamma N}}\Big( W_{\gamma N}^2
-M^2 +\frac{2W_{\gamma N}}
{E_{\gamma N}+W_{\gamma N}}\vec{k}\cdot\vec{p} \Big)\,, \\
A_q & =& -\frac{1}{2W_{\gamma N}E_{\gamma N}}\Big( W_{\gamma N}^2+m_\eta^2
-M^2 +\frac{2W_{\gamma N}}
{E_{\gamma N}+W_{\gamma N}}\vec{q}\cdot\vec{p} \Big)\,.
\end{eqnarray}
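One can check numerically that $\vec k_{c.m.}=\vec k+A_k\,\vec p$ with $A_k$ from (\ref{equ:boosta}) agrees with a standard Lorentz boost into the rest frame of the $\gamma N$ subsystem; the sketch below uses our own notation and arbitrary units.

```python
import numpy as np

def boost_param(k0, k_vec, p_vec, W):
    """Boost parameter A_k, with E_{gamma N} = sqrt(W^2 + p^2)."""
    E = np.sqrt(W**2 + p_vec @ p_vec)
    return (k_vec @ p_vec / (E + W) - k0) / W

def lorentz_boost(k0, k_vec, p_vec, W):
    """Standard boost of (k0, k_vec) into the rest frame of a system
    with four-momentum (E, p_vec)."""
    E = np.sqrt(W**2 + p_vec @ p_vec)
    p = np.linalg.norm(p_vec)
    if p == 0.0:
        return k_vec.copy()
    beta, gamma, p_hat = p / E, E / W, p_vec / p
    k_par = k_vec @ p_hat
    return k_vec + ((gamma - 1.0) * k_par - gamma * beta * k0) * p_hat
```

The agreement follows from $\vec p^{\,2}=(E_{\gamma N}-W_{\gamma N})(E_{\gamma N}+W_{\gamma N})$, which turns the $A_k$ form into the parallel-component boost formula.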
For the resulting transformation of the operators we find
\begin{eqnarray}
\omnull{k}{q} & \rightarrow & \omnull{k}{q} + A_k\,\omnull{p}{q}
+ A_q\,\omnull{k}{p} \,,\\
\Omega^1 & \rightarrow & \Omega^1\,,
\\
\omzwei{k}{q}
&\rightarrow & \omzwei{k}{q} + A_k\,\omzwei{p}{q} + A_q\,\omzwei{k}{p} +
A_k A_q\,\omzwei{p}{p} \,,\\
\omzwei{q}{q}
&\rightarrow & \omzwei{q}{q} + A_q\,\omzwei{p}{q} + A_q\,\omzwei{q}{p} +
A_q^2\,\omzwei{p}{p} \,,
\end{eqnarray}
leading to the same
operator structures as for the general case. However, they differ in
their corresponding coefficient functions as can already be seen by comparing
the first four operators in Tab.\ \ref{tab:lt} with the corresponding ones
in Tab.\ \ref{tab:cm} which are not affected by the transformation.
They differ just by terms which vanish in the
$\gamma N$ c.m.\ frame. Thus it is not surprising that these terms cannot
be generated by a boost. This means that information has been lost when
first going to the $\gamma N$ c.m.\ frame and subsequently boosting. Furthermore, for the
remaining operators, which vanish in the $\gamma N$ c.m.\ frame, one finds
little resemblance between the two methods. For example, the coefficient of
$\omzwei{p}{p}$ in $\tilde M_v$ vanishes in Tab.\ \ref{tab:lt}
whereas the one resulting from the
transformation of $\omzwei{k}{q}$ and $\omzwei{q}{q}$ does not vanish.
In Sect.\ \ref{kap:2}
we will discuss the effect of these differences on the cross sections.
\section{Results and Discussion}\label{kap:2}
Having fixed the parameters of the $\eta$ production model for the elementary
process we can now proceed to study the coherent reaction on the deuteron.
The $t$-matrix elements of (\ref{equ:t}) have been evaluated numerically
using Gauss integration in momentum space. As deuteron wave functions, we
have taken the ones of the Bonn r-space potential \cite{MaH87}. With
respect to the elementary amplitudes, only the value for the helicity
amplitude of the neutron $A^{n}_{1/2}$ remained undetermined. We have
listed several values for $A^{n}_{1/2}$ in Table \ref{tab:game3} to be
discussed in the following together with the resulting ratios of the
neutron to proton amplitude $A^{n}_{1/2}/A^{p}_{1/2}$ and the correlated
values for $e\kappa^{s}_{S_{11}}$ and the ratio of the isoscalar to the
proton amplitude $A^{s}_{1/2}/A^{p}_{1/2}=1+\frac{A^{n}_{1/2}}{A^{p}_{1/2}}$.
In addition, the last line of Table \ref{tab:game3} summarizes the range of
values for these quantities which can be found in the literature
(\cite{HaR89,KrA95b,PDG94} and references therein) and which are obtained
either from photo reactions or from quark model predictions.
First, we will consider the experimental results of Krusche et al.\
\cite{KrA95b} in order to discuss the limits on the neutron amplitude
$A^{n}_{1/2}$ as imposed by present experimental data.
Krusche et al.\ have measured the total and differential cross
sections for quasifree $\eta$ photoproduction on the deuteron for energies
from threshold up to 800 MeV, and they found for the ratio of neutron to
proton total cross sections a value $\frac{\sigma_{n}}{\sigma_{p}}
\simeq \frac{2}{3}$.
Furthermore, they have estimated from their data for the neutron resonance
amplitude $A^{n}_{1/2}=-(100\pm30)\cdot 10^{-3}$GeV$^{-\frac{1}{2}}$
and for the cross section of the coherent process a value
$(10^{-3}\pm 10^{-2})\,\mu$b/sr. This clearly
indicates that the excitation of the $S_{11}$ resonance is dominated by the
isovector amplitude. Consequently, the neutron and proton helicity
amplitudes should have opposite sign, i.e.,
\begin{equation}\label{equ:an}
A^{n}_{1/2} = \pm \sqrt{\frac{\sigma_{n}}{\sigma_{p}}}A^{p}_{1/2}\,.
\end{equation}
With our choice of $A^{p}_{1/2}=130\cdot 10^{-3}$GeV$^
{-\frac{1}{2}}$ (Table \ref{tab:a}) we get
$A^{n}_{1/2}=-106\cdot 10^{-3}$GeV$^{-\frac{1}{2}}$. It corresponds to the
parameter set (A) in Table \ref{tab:game3}.
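As a minimal numeric sketch (illustrative only, not part of the original analysis), the quoted value follows directly from (\ref{equ:an}) with $\sigma_n/\sigma_p = 2/3$:

```python
import math

# inputs quoted in the text, in units of 10^-3 GeV^(-1/2)
A_p = 130.0            # proton helicity amplitude A^p_{1/2}
ratio = 2.0 / 3.0      # sigma_n / sigma_p from Krusche et al.

# isovector dominance fixes the relative sign to be negative
A_n = -math.sqrt(ratio) * A_p
print(round(A_n))      # -106, i.e. parameter set (A)
```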
The differential cross sections corresponding to this value of $A^{n}_{1/2}$
are shown in Fig.\ \ref{fig:4kurven} for four different photon lab energies
$E_{\gamma}^{Lab}$. The results are well within
the limit given by \cite{KrA95b} for the coherent production on the deuteron.
Fig.\ \ref{fig:kappa_s} shows the differential cross sections at a photon
lab energy $E_{\gamma}^{Lab}=700$ MeV for the choices (A) through
(C) for $A^{s}_{1/2}/A^{p}_{1/2}$
listed in Table \ref{tab:game3} and, for comparison, the result of Tiator et
al.\ \cite{TiB94}. We have also indicated separately
the contributions of the resonance and the nucleon Born terms. One
readily sees the strong sensitivity to $\kappa^{s}_{S_{11}}$. An increase
of $\kappa^{s}_{S_{11}}$ by a factor of two ((A) $\rightarrow$ (C)) leads
to an increase of the cross
section by almost a factor four. One also notes a destructive interference
between resonance and nucleon pole term contributions whereas the vector
meson exchange terms lead to a slight increase. Case (A) corresponds to the
estimate of Krusche et al.\ discussed above, while (B) is motivated by
quark model predictions. Taking these as an upper limit, only the value
0.14 remains for the ratio $A^{s}_{1/2}/A^{p}_{1/2}$
(set (B) in Table \ref{tab:game3}).
The next case (C) serves for comparison with \cite{TiB94}.
If we take the same ratio for
$A^{s}_{1/2}/A^{p}_{1/2}$ as in \cite{TiB94}, namely 0.15 which is very close
to set (B) in Table \ref{tab:game3}, we get only 75 \% of their result.
However, doubling the
ratio of set (A), yielding set (C) of Table \ref{tab:game3}, we find
a cross section comparable to the one of \cite{TiB94},
although our angular distribution drops
somewhat faster with increasing angle.
The main differences between our calculation and the one of Tiator et al.\
\cite{TiB94} are
that (i) they include in addition the resonances $P_{11}$ and $D_{13}$, but all
resonances only in the $s$-channel, (ii) their elementary production model
is a mixture of the isobar model and a coupled channel calculation, (iii) they
start from the CGLN amplitudes in the $\gamma N$ c.m.\ frame and then boost
all momenta as described above, (iv) they do not integrate over the
spectator momentum but make use of the so-called factorization approximation
which leads to uncertainties of about 5 to 10~$\%$ \cite{Kam95}, (v) they
have used an $\eta N$-coupling constant for the nucleon Born
terms of $g_{\eta}^{2}/4\pi=0.1$ in contrast to our value of 0.4, and (vi)
they have chosen $W_{\gamma N}^{N'}$ as invariant mass of the $\gamma N$
subsystem.
The earlier experiment of Anderson and Prepost \cite{AnP69} seemed to
indicate that the isoscalar
amplitude of the resonance is the dominant part instead of the isovector
one in contradiction to quark model predictions which give
$A^{s}_{1/2}/A^{p}_{1/2} \simeq (-0.02 \pm 0.16)$
(see \cite{HaR89} and references therein). In fact, the calculations of
\cite{KrL76} in the impulse approximation require for this ratio a much
larger value of 0.84 in order to reproduce the data of \cite{AnP69}.
Even if one considers pion and eta rescattering terms
one still needs a value of 0.6 \cite{HaR89}.
In our approach, we could reproduce the data of \cite{AnP69}
with a value of 0.88 corresponding to
$A_{1/2}^{n}=114\cdot 10^{-3}$GeV$^{-\frac{1}{2}}$ and
$e\kappa^{s}_{S_{11}}= 340\cdot 10^{-3}$GeV$^{-\frac{1}{2}}$ as is shown in
Fig.\ \ref{fig:anp69}. On the other hand, the data of \cite{AnP69} are
obviously at variance with the measurements of \cite{KrA95b} mentioned
above which clearly prefer the isovector part to be the dominant one.
This is also supported by comparing our results with the upper limit near
threshold obtained by Beulertz \cite{Beu94} which will be discussed in more
detail next. The cross section exceeds
this upper limit by more than a factor of four. This fact and more recent
experiments with a better background reduction imply
that very likely the experiment \cite{AnP69} has included other events
and therefore has resulted in an overestimation of the $\eta$ production cross
section.
Now we will study the question for which values of $A_{1/2}^{n}$ in Table
\ref{tab:game3}
the resulting total cross section is compatible with the upper limits
obtained by Beulertz \cite{Beu94}. Instead of detecting the $\eta$ meson by
its two-photon decay as, for example, is done by \cite{KrA95b}, the recoiling
deuteron has been used as signal and
the upper limits $\sigma_{tot}<0.040\, \mu$b for $E_{\gamma}^{Lab}=632.2$
MeV and $<0.064\,\mu$b for $E_{\gamma}^{Lab}=637$ MeV have been obtained.
In Fig.\ \ref{fig:game3} we show theoretical total cross sections for the six
different values of $A_{1/2}^{n}$ for the parameter sets (A) through (F) of
Table \ref{tab:game3}. Although these resonance couplings vary over a wide
range, the variation in the total cross sections is much less pronounced in
the near threshold region than at higher energies. Even a sign change in
$e\kappa^s_{S_{11}}$ does not show a big influence as one can see by
comparing the results of
set (C) with (E) or (D) with (F) except above the resonance region where
the interference with the background becomes more important. The
experimental upper limit is reached for the set (G) as is shown in Fig.\
\ref{fig:schranke}.
All parameter sets of Table \ref{tab:game3} give total cross sections
compatible with the experimental estimates for the coherent deuteron cross
section of \cite{Beu94} and \cite{KrA95b}. However, if one takes the
quark model seriously, the parameter sets (D) through (G) can
be excluded while (C) is on the borderline.
Thus we conclude that the most probable parameter sets are (A) through (C)
which give a neutron amplitude $A_{1/2}^{n}$ between $-82\cdot 10^{-3}$ and
$-106\cdot 10^{-3}$GeV$^{-\frac{1}{2}}$.
The set (A) reproduces the experimental ratio for
$\sigma_{n}/\sigma_{p}=2/3$ \cite{KrA95b}, whereas we find the best
agreement with the theoretical results of \cite{TiB94} for the set (C).
In this case, the ratio of neutron to proton cross section is only 2/5.
Now we want to discuss the influence of different choices
for the invariant mass of the $\gamma N$ subsystem.
Figure \ref{fig:wsub0668} presents the total cross sections for the
$S_{11}$ resonance alone obtained with four of the different choices for the
invariant mass $W_{\gamma N}$ as discussed in Sect.\ \ref{deuteron}.
We did not consider $W_{\gamma N}^{BS'}$ because it is very similar to
$W_{\gamma N}^{BS}$.
Since for this question the exact value of $A^{n}_{1/2}$ is not relevant,
we have used here $A^{n}_{1/2}=-82\cdot10^{-3}$GeV$^{-\frac{1}{2}}$.
One readily notes considerable differences for the various prescriptions.
The largest total cross section is obtained with the spectator on shell
$W_{\gamma N}^{S}$ having its maximum at 750 MeV. If one puts the active
nucleon on shell, i.e., takes $W_{\gamma N}^{N}$ or $W_{\gamma N}^{N'}$
instead, the maximum
is decreased by about 18\% and slightly shifted towards higher energies.
This decrease and shift can be understood as a result of the assignment
of a higher invariant mass to the $\gamma N$ subsystem and the additional
smearing due to the dependence on the angle between the spectator momentum
and the photon or eta momentum, respectively (see Fig.\ \ref{fig:1wsub})
which leads to a larger effective width. The result is a slight upshift of
the resonance position and a broadening, and thus a lowering of the maximum.
One notes also little difference between putting the active nucleon before or
after the interaction on shell.
From the foregoing discussion it is apparent that the curve for the
Blankenbecler-Sugar assignment $W_{\gamma N}^{BS}$ is about halfway
between the spectator on shell and the active nucleon on shell because
according to Fig.\ \ref{fig:1wsub} $W_{\gamma N}^{BS}$ lies in between the
spectator and active nucleon assignments.
The differential cross sections for three photon energies in
Fig.\ \ref{fig:difsub} show a similar behaviour with respect to the
invariant mass assignment. In particular at lower energies (see as example
the cross section at $E_\gamma=700$ MeV in Fig.\ \ref{fig:difsub}) the
differences are quite sizeable in forward direction.
In view of these results, we have to conclude that the choice for the invariant
mass of the $\gamma N$ subsystem has a significant influence
on the cross section. In order to obtain a cross section of the same magnitude
for two different choices of $W_{\gamma N}$, one has to
change the elementary helicity amplitude, too. Consequently, the other
correlated parameters (see Table \ref{tab:game3}) will change together with
the assignment for the invariant mass. This has also some bearing on the
question of compatibility with quark model predictions.
As last point we want to discuss the different methods of deriving
the elementary
production operator in an arbitrary frame which is necessary for implementing
it into the bound system, i.e., we will compare the
more general case using an arbitrary frame (GC), the Lorentz boost of the
momenta from the $\gamma N$ c.m.\ frame (LB), and the simplest approach of
taking the elementary operator without any transformation of the variables
(CM). The last method, for
example, is considered in \cite{Wil92} for the coherent $\pi^0$
photoproduction on the deuteron where
only the momentum of the $\Delta$ resonance is transformed while for
the background terms this has not been done, assuming the effect to
be negligible. While for the pion this might be justified because of its
low mass, one certainly would not expect it to be a valid approximation for
the eta meson.
We present results for two choices of the neutron helicity amplitude, namely
$A^{n}_{1/2}=-106\cdot10^{-3}$GeV$^{-\frac{1}{2}}$ (A) and
$A^{n}_{1/2}=-82\cdot10^{-3}$GeV$^{-\frac{1}{2}}$ (C) (Table
\ref{tab:game3}). The differential cross sections as a
function of the photon laboratory energy and for constant c.m.\ angles are
shown for both amplitudes in Figs.\ \ref{fig:3maldeut22}.
It can be seen that the differences between (GC), (LB), and (CM)
depend not only on the angles and energies but also
on the strength of the helicity amplitude chosen.
Comparing first (GC) and (LB), we see that at forward angles the
deviations are small. However, for backward angles
they become more significant reaching a maximum at 180$^{\circ}$.
The maximum is considerably lower for (LB) while at higher energies the
differential cross section for (LB) lies above the one for (GC).
On the other hand, the main contributions to the cross sections stem from
small angles where also the differences are smaller so that the total cross
sections differ much less. Furthermore, using a larger helicity
amplitude $A^{n}_{1/2}$ decreases the differences between the calculations
(GC) and (LB) even more. That is the case for an $A^{n}_{1/2}$ with which we get a cross
section comparable to the one of \cite{TiB94}.
Finally, comparing the calculation (CM), to which only the first four operator
structures of Table \ref{tab:lt} contribute, with (GC) one clearly sees very
large deviations from (GC) which obviously are
not tolerable.
\section{Summary and Conclusions}
Coherent eta photoproduction on the deuteron has been studied in
the impulse approximation neglecting rescattering and two-body effects.
For the elementary reaction, we have assumed that the process is dominated by
the excitation of the $S_{11}$(1535) resonance
for photon energies not too far above
threshold. Thus we have included the $S_{11}$ in the $s$- and $u$-channels.
In addition, we have considered as background the $t$-channel
vector meson exchange with $\rho$ and $\omega$ as well as the nucleon
pole terms in $s$- and $u$-channel for which pseudoscalar coupling has been
assumed. The vertices are taken from an effective Lagrangian theory.
The elementary process is treated nonrelativistically. All parameters are
fixed by fitting the experimental data for the reaction on the proton.
The electromagnetic coupling constant $A_{1/2}^{n}$ for the $S_{11}$
excitation on the neutron cannot be determined this way, and one needs
data for the reaction on the deuteron. In view of the fact that up to now
no precise data on the deuteron are available, we could only use the
existing estimates to find limits for this amplitude. The experiment for
quasifree production
\cite{KrA95b} gave a value for the ratio of neutron to proton cross section
with which we have obtained a deuteron cross section that is within the
experimentally obtained limits of \cite{Beu94} and \cite{KrA95b}.
Correlated with this is a ratio of the isoscalar to isovector amplitude
that is consistent with quark model predictions. However, the upper limits
of \cite{Beu94} and \cite{KrA95b} for the coherent deuteron
cross section are not sufficient for a precise determination of
$A_{1/2}^{n}$. An uncertainty of a factor of ten has to be considered, which could
result in deviations from quark model predictions of up to 50\%. Furthermore,
we have studied the theoretical problems of implementing the elementary
reaction amplitude into a bound system connected to the choice of the
invariant mass for the resonance excitation and to the transformation of
the reaction amplitude from the $\gamma N$ c.m.\ frame to the c.m.\ frame
of the $\gamma$-bound system, and we have found that the theoretical results
show significant differences when using different prescriptions.
For the future we need on the experimental side precise data for both the
coherent and the incoherent production on the deuteron in order to be able
to fix in a reliable manner the $S_{11}$ excitation on the neutron. On the
theoretical side, we have to improve the treatment by including
rescattering and two-body effects in order to obtain a more realistic
description of this important process.
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix: $t$-matrix contribution from $\omega$ meson exchange}
\label{kap:A2}
With the aid of the Lagrangians (\ref{lvnn})-(\ref{equ:he})
we get for the $t$-matrix contribution from vector meson exchange
\begin{equation}\label{equ:a41}
i \tau_{fi}^{V}= \frac{e \lambda}{m_{\eta}} \frac{\varepsilon_{\mu\nu
\lambda\sigma}k^{\nu}\varepsilon^{\mu}q^{\lambda}}{t-m_{V}^{2}}
\bar{u}_{f}(p^{\prime}) \left[ g_{v} \gamma^{\sigma}- \frac{g_{t}}{2M }i
\sigma^{\sigma\alpha}(q-k)_{\alpha} \right] u_{i}(p)\,.
\end{equation}
For the $\rho$ meson one has to multiply by an additional factor $\tau_{0}$.
In the CGLN representation \cite{ChG57}, one has for photoproduction
only four independent amplitudes
\begin{equation}\label{equ:a1}
i\tau_{fi}=\bar{u}_{f}(p^{\prime}) \sum_{j=1}^{4}A_{j}(s,t,u)M_{j}u_{i}
(p)\,,
\end{equation}
containing the Dirac operators
\begin{eqnarray}
M_{1}& = & -\frac{1}{2} \gamma_{5} \gamma_{\mu} \gamma_{\nu}
(\epsilon^{\mu}k^{\nu}-\epsilon^{\nu}k^{\mu})
\,, \\
M_{2} & = & \gamma_{5} (p+p^{\prime})_{\mu} \left (q_{\nu}-
\frac{1}{2}k_{\nu} \right )(\epsilon^{\mu}k^{\nu}-\epsilon^{\nu}k^{\mu}) \,, \\
M_{3} & = & - \gamma_{5} \gamma_{\mu} q_{\nu}
(\epsilon^{\mu}k^{\nu}-\epsilon^{\nu}k^{\mu})\,, \\
M_{4} & = & - \gamma_{5} \gamma_{\mu} (p+p^{\prime})_{\nu}
(\epsilon^{\mu}k^{\nu}-\epsilon^{\nu}k^{\mu})-2M M_{1}\,,
\end{eqnarray}
and the invariant amplitudes $A_j$ for which one finds from (\ref{equ:a41})
\begin{eqnarray} \label{equ:a111}
A_{1} & = & \frac{e \lambda}{m_{\eta}} \frac{g_{t}}{2M } \frac{t}{t-m_{V}
^{2}}\,, \\
A_{2} & = & -\frac{e \lambda}{m_{\eta}} \frac{g_{t}}{2M }\frac{1}{t-m_{V}
^{2}}\,,\\ \label{equ:a2}
A_{3} & = & 0\,, \\ \label{equ:a3}
A_{4} & = & -\frac{e \lambda}{m_{\eta}} g_{v}\frac{1}{t-m_{V}^{2}}\,.
\label{equ:a4}
\end{eqnarray}
Introducing the operators $\tilde M_j$ in Pauli spinor space by
$\chi^{\dag}_{f}\tilde{M_{j}}\chi_{i}:=\bar{u}_{f} M_{j} u_{i}$ and noting
that $A_{3}=0$ for vector mesons, one finds for the remaining operators
\begin{eqnarray}
\tilde{M_{1}} = N^{\prime}N \Big[ &&- k_{0} \vec{\sigma} \cdot
\vec{\varepsilon}- \frac{i \vec{\sigma} \cdot (\vec{k} \times \vec{
\varepsilon})\, \vec{\sigma} \cdot \vec{p}}{e_{p}}+\frac{i \vec{
\sigma}
\cdot \vec{p}\,^{\prime}\, \vec{\sigma} \cdot (\vec{k} \times \vec
{\varepsilon})}{e_{p^{\prime}}} \nonumber \\
& & +\frac{k_{0}}{e_{p}e_{p^{\prime}}} \vec{\sigma}
\cdot \vec{p}\,^{\prime}
(\vec{p} \cdot \vec{\varepsilon}-i \vec{\sigma} \cdot (\vec{p}
\times\vec{\varepsilon})) \Big] \,,\\
\tilde{M_{2}} = N^{\prime}N \Big[&&
\frac{\vec{\sigma} \cdot \vec{p}\,^{\prime}}{e_{p^{\prime}}}
-\frac{\vec{\sigma} \cdot \vec{p}}{e_{p}}\Big]
\Big[ J\vec{\varepsilon}\cdot (\vec{p}\,^{\prime} +\vec{p}\,)
-K \vec{\varepsilon}\cdot \vec q\,\Big]\,,\\
\tilde{M_{4}} = N^{\prime}N \Big[ &&\left (k_{0} \frac{ \vec{
\sigma} \cdot \vec{p}\,^{\prime}}{e_{p^{\prime}}}+k_{0} \frac{ \vec{
\sigma} \cdot \vec{p}}
{e_{p}}- \vec{\sigma} \cdot \vec{k}- \vec{\sigma} \cdot \vec{p}\,^{\prime}
\, \vec{\sigma} \cdot \vec{k} \,\vec{\sigma}
\cdot \vec{p} \frac{1}{e_{p}e_{p^{\prime}}}\right)
\nonumber \\
& & \times (\vec{p} \cdot
\vec{\varepsilon}+ \vec{p}\,^{\prime} \cdot \vec{\varepsilon})
- \frac{K}{e_{p^{\prime}}e_{p}} \vec{\sigma} \cdot \vec{p}\,
^{\prime}
\, \vec{\sigma} \cdot \vec{\varepsilon} \,\vec{\sigma} \cdot \vec{p}
- \vec{\sigma} \cdot \vec{\varepsilon} K\Big]
-2M \tilde{M_{1}}\,,
\end{eqnarray}
with $J = q\cdot k$, $K = (p+p^{\prime})\cdot k$,
$N^{(\prime)}=\sqrt{ \frac{e_{p^{(\prime)}} }{2M }}$ and
$e_{p^{(\prime)}} =E_{p^{(\prime)}}+M $.
We will now bring (\ref{equ:a1}) into the form
\begin{equation}
i\tau_{fi}= \frac{e \lambda}{m_{\eta}} \frac{1}{t-m_{V}^{2}}\,
\frac{N'N}{e_{p'}e_{p}}\,
\chi^{\dag}_{f}\Big(-g_{v} \tilde{M}_{v} + \frac{g_{t}}{2M }
\tilde{M}_{t}\Big)\chi_{i}\,,
\end{equation}
where we have introduced
\begin{eqnarray}
\tilde{M}_{v}&=&\frac{e_{p'}e_{p}}{N'N}\,\tilde{M}_{4}\,,\\
\tilde{M}_{t}&=&\frac{e_{p'}e_{p}}{N'N}\,(t\tilde{M}_{1}-\tilde{M}_{2})\,.
\end{eqnarray}
In terms of the operators introduced in (\ref{omoperator}), we find
explicitly
\begin{eqnarray}
\tilde{M}_{v}&=&
\frac{1}{4}\Big( (s-u + 4Mk_0)t +2Me_-(m_\eta^2-t)\Big)
\Omega^1 +(\frac{t}{2} + E_pe_+)\Omega^0_{\vec k,\,\vec p^{\,\prime}}
\nonumber\\
&& -(\frac{t}{2} + E_{p'}e_+)\Omega^0_{\vec k,\,\vec p}
+k_0e_+\Omega^0_{\vec p^{\,\prime},\,\vec p}
+(\frac{t}{2}+Me_-)\Omega^2_{\vec k,\,\vec p}
+(\frac{t}{2}-Me_-)\Omega^2_{\vec k,\,\vec p^{\,\prime}}
\nonumber\\
&&+(k_0M + \frac{1}{2}(s-M^2) )(\Omega^2_{\vec p^{\,\prime},\,\vec p^{\,\prime}}-
\Omega^2_{\vec p,\,\vec p^{\,\prime}})
+(k_0M - \frac{1}{2}(u-M^2))(\Omega^2_{\vec p,\,\vec p}-
\Omega^2_{\vec p^{\,\prime},\,\vec p})\,,
\\
\tilde{M}_{t}&=&
-\frac{t}{2}\Big( k_0t+e_{p'}(s-M^2)-e_{p}(u-M^2)\Big)\Omega^1
+e_{p'}t\Big( \Omega^0_{\vec k,\,\vec p}-\Omega^2_{\vec k,\,\vec p}\Big)
\nonumber\\
&&-e_{p}t\Big( \Omega^0_{\vec k,\,\vec p^{\,\prime}}
+\Omega^2_{\vec k,\,\vec p^{\,\prime}}\Big)
-k_0t\Omega^0_{\vec p^{\,\prime},\,\vec p}
-e_{p}(s-M^2)\Omega^2_{\vec p^{\,\prime},\,\vec p^{\,\prime}}
+e_{p'}(u-M^2)\Omega^2_{\vec p,\,\vec p}
\nonumber\\
&&+(k_0t + e'\,(s-M^2) )\Omega^2_{\vec p,\,\vec p^{\,\prime}}
+(k_0t - e\,(u-M^2))\Omega^2_{\vec p^{\,\prime},\,\vec p}\,.
\end{eqnarray}
Here, $s$, $t$ and $u$ denote the usual Mandelstam variables and
$e_{\pm}=e_{p'}\pm e_{p}$.
For the specialization to the $\gamma N$ c.m.\ system, we note the following
relations using $\vec p = -\vec k$ and $\vec p^{\,\prime} = -\vec q$
\begin{eqnarray}
k_0&=&\frac{1}{2W}(s-M^2)\,,\\
e_p&=&\frac{1}{2W}(W+M)^2\,,\\
e_{p'}&=&\frac{1}{2W}((W+M)^2-m_\eta^2)\,,
\end{eqnarray}
with $W^2=s$.
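These relations follow from two-body kinematics in the $\gamma N$ c.m.\ frame (a short verification added for convenience): with $k_0=|\vec k\,|$ and $E_p=W-k_0$ one has
\begin{equation*}
e_p = E_p+M = W-\frac{s-M^2}{2W}+M = \frac{s+M^2+2MW}{2W} = \frac{(W+M)^2}{2W}\,,
\end{equation*}
and analogously $e_{p'}=W-\omega_q+M$ with $\omega_q=(s-M^2+m_\eta^2)/(2W)$ yields the third relation.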
\section{Acknowledgements}
\bibliographystyle{abbrv}
\section{Supporting Application-Specific Optimizations in Pangolin}\label{sect:app-opt}
In this section, we describe how Pangolin's API and execution
model supports application-specific optimizations that:
(1) enable enumeration search space pruning
and (2) enable the eliding of isomorphism tests.
\subsection{Pruning Enumeration Search Space}
\textbf{Directed Acyclic Graph (DAG):}
In typical GPM applications,
the input graph is undirected.
In some vertex-induced GPM applications, a common optimization
technique is \textit{orientation} which converts the undirected
input graph into a directed acyclic graph (DAG)~\cite{Arboricity,Alon}.
Instead of enumerating candidate subgraphs in an undirected graph,
the direction significantly cuts down the combinatorial search
space. Orientation has been adopted in triangle
counting~\cite{Schank}, clique finding~\cite{KClique},
and motif counting~\cite{ESCAPE}. \cref{fig:dag} illustrates an example of the
DAG construction process. In this example, vertices are ordered
by vertex ID. Edges are directed from vertices with smaller IDs
to vertices with larger IDs. Generally, vertices can be ordered
in any total ordering, which guarantees the input graph is
converted into a DAG. In our current implementation, we establish
the order~\cite{DistTC} among the vertices based on their degrees: edges will
point towards the vertex with higher degree. When there is a tie,
the edge points to the vertex with the larger vertex ID.
Other orderings can be included in the future.
In Pangolin, the user can enable orientation by simply setting a macro.
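The degree-based orientation described above can be sketched as follows (illustrative Python, not Pangolin's actual C++ implementation; the function name is hypothetical):

```python
def orient(edges, degree):
    """Direct each undirected edge toward the endpoint with higher
    degree, breaking ties toward the larger vertex ID."""
    dag = []
    for u, v in edges:
        # the (degree, id) pair defines the total order from the text
        if (degree[u], u) < (degree[v], v):
            dag.append((u, v))   # u -> v
        else:
            dag.append((v, u))   # v -> u
    return dag

# tiny example: a triangle 0-1-2 plus the edge 2-3
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
degree = {0: 2, 1: 2, 2: 3, 3: 1}
print(orient(edges, degree))
```

Because every edge points "uphill" in a total order, the resulting directed graph is acyclic by construction.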
\textbf{Eager Pruning:}
In some applications like MC and FSM,
all vertices in an embedding may need to be extended
before determining whether the new embedding candidate
is a ({\it automorphism}) canonical embedding or a duplicate.
However, in some applications like TC and CF~\cite{KClique},
duplicate embeddings can be detected
eagerly before extending current embeddings.
In both TC and CF,
all embeddings obtained by extending vertices
except (the last) one will lead to duplicate embeddings.
Thus, as shown in Listing~\ref{lst:kcl},
only the last vertex of the current embedding needs to be extended.
This aggressive pruning can significantly reduce the search space.
The \texttt{toExtend} function in Pangolin
enables the user to specify such {\it eager pruning}.
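As an illustrative sketch (hypothetical Python mirroring the \texttt{toExtend} idiom; Pangolin's real API is in C++), eager pruning for clique finding extends only the last vertex of each embedding:

```python
def to_extend(embedding, pos):
    """Eager pruning for TC/CF: extending any vertex but the last
    one can only produce duplicate embeddings, so skip it."""
    return pos == len(embedding) - 1

def extend(embeddings, adj):
    """One level of vertex extension, honoring to_extend."""
    out = []
    for emb in embeddings:
        for pos in range(len(emb)):
            if not to_extend(emb, pos):
                continue
            for w in adj[emb[pos]]:
                if w not in emb:
                    out.append(emb + [w])
    return out

adj = {0: [1, 2], 1: [2], 2: []}   # oriented triangle 0->1, 0->2, 1->2
print(extend([[0, 1]], adj))       # only the last vertex (1) is extended
```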
\input{sections/hybrid}
\section{Background and Motivation}\label{sect:back}
We describe GPM concepts, applications,
as well as algorithmic and architectural optimizations
in state-of-the-art hand-optimized GPM solvers.
Lastly, we point out performance limitations of existing GPM frameworks.
\subsection{Graph Pattern Mining}\label{sect:gpm}
\hlc{Given an \emph{input graph} $G$ and a \emph{pattern} $P$
which is a subgraph defined by the user (e.g., triangle or clique),
the goal of GPM is to find the \emph{embeddings}, i.e.,
subgraphs in $G$ which are isomorphic to $P$.}
In the input graph in \cref{fig:example},
colors represent vertex labels, and numbers denote vertex IDs.
The 3-vertex pattern is a blue-red-green chain,
and there are four embeddings of this pattern in the input graph,
shown on the right of the figure.
In a specific GPM problem, \hlc{the user may be interested in
some statistical information (e.g., pattern frequency),
instead of listing all the embeddings.}
The measure of the frequency of $P$ in $G$,
termed \textit{support}, is also defined by the user.
For example, in triangle counting,
the support is defined as the total count of triangles.
\hlc{In some problems, the user might be interested in multiple patterns.
In this work, we focus on connected patterns only.}
\iffalse
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.37\textwidth]{figures/4-motifs.pdf}
\caption{3-vertex (top) and 4-vertex (bottom) motifs.}
\vspace{-1cm}
\label{fig:4-motifs}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/example.pdf}
\caption{An example of the GPM problem. In the input graph,
colors represent vertex labels, and numbers denote vertex IDs.
The 3-vertex pattern is a blue-red-green chain, and there are four
embeddings of this pattern in the input graph.
To calculate the minimum image-based (MNI) support, we count the number
of distinct mappings for each vertex, and the MNI support
of this pattern is $min\{3,2,1\} = 1$.}
\vspace{-0.7cm}
\label{fig:example}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.26\textwidth]{figures/system-overview.pdf}
\vspace{0cm}
\caption{Pangolin system overview. The shaded parts belong to the Pangolin framework.}
\vspace{-0.8cm}
\label{fig:overview}
\end{center}
\end{figure}
\fi
There are two types of GPM problems targeting two types of embeddings.
In a \textit{vertex-induced} embedding,
a set of vertices is given and the subgraph of interest is obtained from these
vertices and the set of edges in the input graph connecting these vertices.
Triangle counting uses vertex-induced embeddings. In an \textit{edge-induced} embedding,
a set of edges is given and the subgraph is formed by including all the endpoints of these
edges in the input graph. Frequent subgraph mining (FSM) is an edge-induced GPM problem.
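The distinction between the two embedding types can be sketched as follows (illustrative Python, not framework code):

```python
def vertex_induced(vertices, graph_edges):
    """Subgraph from a vertex set: keep every input-graph edge
    whose endpoints both lie in the set."""
    vs = set(vertices)
    return [e for e in graph_edges if e[0] in vs and e[1] in vs]

def edge_induced(edge_set):
    """Subgraph from an edge set: its vertices are exactly the
    endpoints of the chosen edges."""
    vs = set()
    for u, v in edge_set:
        vs.update((u, v))
    return sorted(vs)

G = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(vertex_induced([0, 1, 2], G))    # all three triangle edges appear
print(edge_induced([(0, 1), (1, 2)]))  # vertices {0, 1, 2}; (0, 2) not implied
```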
A GPM algorithm enumerates embeddings of the given pattern(s).
If duplicate embeddings exist ({\it automorphism}),
the algorithm chooses one of them as the {\it canonical} one
(namely canonical test) and collects statistical information
about these canonical embeddings such as the total count.
The canonical test needs to be performed on each embedding,
and can be complicated and expensive for complex problems such as FSM.
Enumeration of embeddings in a graph grows exponentially
with the embedding size (number of vertices or edges in the embedding),
which is computationally expensive and consumes a large amount of memory.
In addition, a graph isomorphism (GI) test is needed for each embedding
to determine whether it is \textit{isomorphic} to a pattern.
Unfortunately, the GI problem is not known to be solvable in polynomial time~\cite{Garey}.
It leads to compute and memory intensive algorithms~\cite{Bliss}
that are time-consuming to implement.
\hlc{Graph analytics problems typically involve
allocating and computing
labels on vertices or edges of the input graph iteratively.
On the other hand, GPM problems involve generating
embeddings of the input graph
and analyzing them.
Consequently, GPM problems require much more memory
and computation to solve.
The memory consumption is not only proportional to the graph size,
but also increases exponentially as the embedding size increases~\cite{Arabesque}.
Furthermore, GPM problems require compute-intensive operations,
such as isomorphism test and automorphism test on each embedding.
Thus, GPM algorithms are more difficult to develop,
and conventional graph analytics systems
\hlc{~\cite{GRAPE,EvoGraph,SIMD-X,Khorasani,MultiGraph,Falcon,PowerGraph,Gemini,Gluon}}
are not sufficient to provide a good trade-off between programmability and efficiency.}
\subsection{Hand-Optimized GPM Applications}\label{sect:handopt}
\input{sections/handopt}
\iffalse
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth]{figures/support.pdf}
\caption{Calculating the minimum image-based (MNI) support for the example
pattern in Figure 1. The support of this pattern is $min\{3,2,1\} = 1$.}
\vspace{-0.3cm}
\label{fig:support}
\end{center}
\end{figure}
\fi
\input{sections/motive}
\section{Conclusion}\label{sect:concl}
We present Pangolin, a high-performance, flexible
GPM system on shared-memory CPUs and GPUs.
Pangolin provides a simple
programming interface that enables the user to specify
eager enumeration search space pruning and
customized pattern classifications.
To exploit locality,
Pangolin uses an efficient structure of arrays (SoA)
for storing embeddings.
It avoids materialization of temporary embeddings
and blocks the schedule of
embedding exploration to reduce the memory usage.
It also uses inspection-execution and scalable memory allocators
to mitigate the overheads of dynamic memory allocation.
These application-specific and architectural optimizations
enable Pangolin to outperform \hlc{prior GPM frameworks,
Arabesque, RStream, and Fractal, by 49$\times$, 88$\times$, and 80$\times$}, on average,
respectively, on the same 28-core CPU. Moreover, Pangolin on
V100 GPU is 15$\times$ faster than that on the CPU on average.
Thus, Pangolin provides performance competitive with hand-optimized
implementations but with a much better programming experience.
To mine 4-cliques in a web-crawl (gsh)
with 988 million vertices and 51 billion edges,
Pangolin takes $\sim6.5$ hours on a 48-core
Intel Optane machine with 6 TB memory.
\section{Design of Pangolin Framework}\label{sect:design}
\hlc{\cref{fig:overview} illustrates an overview of the Pangolin system.
Pangolin provides a simple API (purple box) for users to write GPM applications.
The unified execution engine (orange box) follows the embedding-centric model.
Important common operations are encapsulated
and provided to the user
in the helper routines (blue box),
which are optimized for both CPU and GPU.
The embedding list data structure (green box) is also optimized for
different architectures to exploit hardware features.
Thus,
Pangolin hides most of the architecture-oriented programming complexity
and achieves high performance and high productivity simultaneously.
In this section, we describe the execution model, programming interface (i.e., API),
and example applications of Pangolin.}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/extend.pdf}
\vspace{-0cm}
\caption{\small An example of vertex extension.}
\vspace{-0.3cm}
\label{fig:extension}
\end{minipage}
\hfill
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/reduce.pdf}
\vspace{-0.3cm}
\caption{\small Reduction operation that calculates pattern
frequency using a pattern map.}
\vspace{-0.3cm}
\label{fig:reduce}
\end{minipage}
\hfill
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/automorphism.pdf}
\vspace{-0cm}
\caption{\small An example of automorphism.}
\vspace{-0.3cm}
\label{fig:automorphism}
\end{minipage}
\end{figure*}
\iffalse
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/extend.pdf}
\caption{An example of vertex extension.}
\vspace{-0.6cm}
\label{fig:extension}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.36\textwidth]{figures/reduce.pdf}
\caption{The reduction operation that calculates pattern
frequency using a pattern map.}
\vspace{-0.7cm}
\label{fig:reduce}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.36\textwidth]{figures/automorphism.pdf}
\vspace{-0cm}
\caption{An example of automorphism.}
\vspace{-0.5cm}
\label{fig:automorphism}
\end{center}
\end{figure}
\fi
\subsection{Execution Model}\label{subsect:model}
Algorithm~\ref{alg:engine} describes the execution engine in Pangolin
which illustrates our {\it extend-reduce-filter} execution model.
To begin with, a worklist of embeddings is initialized with all the
single-edge embeddings (line 4). The engine then works in an
iterative fashion (line 6). In each iteration, i.e., \textit{level},
there are three phases: \textproc{Extend} (line 8), \textproc{Reduce}
(line 10) and \textproc{Filter} (line 12).
Pangolin exposes necessary details in each phase to enable a more
flexible programming interface (\cref{subsect:apis}) than existing systems;
for example, Pangolin exposes the \textproc{Extend} phase
which is implicit in Arabesque.
The \textproc{Extend} phase takes each embedding in the input worklist
and extends it with a vertex (vertex-induced) or an edge (edge-induced).
Newly generated embeddings then form the output worklist for the next level.
The embedding size increases with $level$ until the user-defined maximum size is reached (line 14).
\cref{fig:extension} shows an example of the first iteration of vertex-based extension.
The input worklist consists of all the 2-vertex (i.e., single-edge) embeddings.
For each embedding in the worklist, one vertex is added to yield a
3-vertex embedding. For example, the first 2-vertex embedding $\{0,1\}$ is extended
to two new 3-vertex embeddings $\{0,1,2\}$ and $\{0,1,3\}$.
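For concreteness, this vertex-extension step can be sketched as follows. This is an illustrative sketch only, not Pangolin's implementation: the 4-vertex mini-graph and the naive set-based duplicate filter (standing in for the canonical test) are assumptions made for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

using Embedding = std::vector<int>;

// Hypothetical 4-vertex mini-graph (adjacency lists), for illustration only.
static const std::vector<std::vector<int>> adj = {
    {1, 2, 3}, {0, 2, 3}, {0, 1}, {0, 1}};

// One level of vertex-induced extension: each embedding is extended by
// the neighbors of each of its vertices; a naive set-based filter stands
// in for Pangolin's canonical (automorphism) test.
std::vector<Embedding> extend(const std::vector<Embedding>& in_wl) {
  std::vector<Embedding> out_wl;
  std::set<Embedding> seen;
  for (const auto& emb : in_wl)
    for (int v : emb)
      for (int u : adj[v]) {
        if (std::find(emb.begin(), emb.end(), u) != emb.end())
          continue;  // u is already in the embedding
        Embedding e = emb;
        e.push_back(u);
        std::sort(e.begin(), e.end());  // canonical order for deduplication
        if (seen.insert(e).second) out_wl.push_back(e);
      }
  return out_wl;
}
```

On this mini-graph, extending the single-edge embedding $\{0,1\}$ yields exactly the two 3-vertex embeddings $\{0,1,2\}$ and $\{0,1,3\}$.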
\makeatletter
\newcommand{\textbf{break}}{\textbf{break}}
\newcommand{\State \algorithmicbreak}{\State \textbf{break}}
\makeatother
\setlength{\textfloatsep}{2pt}
\begin{algorithm}[t]
\small
\caption{Execution Model for Mining}
\label{alg:engine}
\begin{algorithmic}[1]
\Procedure{MineEngine}{$G$($V$,$E$), MAX\_SIZE}
\State EmbeddingList $in\_wl$, $out\_wl$ \Comment{double buffering}
\State PatternMap $p\_map$
\State \Call{Init}{$in\_wl$} \Comment{insert single-edge embeddings}
\State $level \leftarrow 1$
\While{true}
\State $out\_wl \leftarrow \emptyset$ \Comment{clear the new worklist}
\State \Call{\textcolor{blue}{Extend}}{$in\_wl$, $out\_wl$}
\State $p\_map \leftarrow \emptyset$ \Comment{clear the pattern map}
\State \Call{\textcolor{blue}{Reduce}}{$out\_wl$, $p\_map$}
\State $in\_wl \leftarrow \emptyset$ \Comment{clear the old worklist}
\State \Call{\textcolor{blue}{Filter}}{$out\_wl$, $p\_map$, $in\_wl$}
\State $level \leftarrow level + 1$
\If{$level =$ MAX\_SIZE - 1}
\State \algorithmicbreak \Comment{termination condition}
\EndIf
\EndWhile
\State \Return $in\_wl$, $p\_map$
\EndProcedure
\end{algorithmic}
\end{algorithm}
After vertex/edge extension, a \textproc{Reduce} phase is used to
extract some pattern-based statistical information,
i.e., pattern frequency or {\it support}, from the embedding worklist.
The \textproc{Reduce} phase first classifies all the embeddings
in the worklist into different categories according to their patterns,
and then computes the support for each pattern category,
forming pattern-support pairs.
All the pairs together constitute a pattern map ($p\_map$ in line 10).
\cref{fig:reduce} shows an example of the reduction operation.
The three embeddings (top) can be classified into two categories,
i.e., triangle and wedge (bottom).
Within each category, this example counts the number of embeddings as the support.
As a result, we get the pattern-map as \{[triangle, 2], [wedge, 1]\}.
After reduction, a \textproc{Filter} phase may be needed to remove those
embeddings in which the user is no longer interested;
e.g., FSM removes infrequent embeddings in this phase.
Note that \textproc{Reduce} and \textproc{Filter} phases are not
necessary for all applications, and they can be disabled by the user.
If they are used, they are also executed after initializing
single-edge embeddings (line 4) \hlc{and before entering the main loop (line 6).
Thus, infrequent single-edge embeddings are filtered out
to collect only the frequent ones
before the main loop starts.
Note that this is omitted from Algorithm~\ref{alg:engine} due to lack of space.}
If \textproc{Reduce} is enabled but \textproc{Filter} is disabled,
then reduction is required and executed only for the last iteration,
as the pattern map produced by reduction is not used in earlier iterations (dead code).
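The classification step of the \textproc{Reduce} phase can be mimicked by a short sketch. This is illustrative only: the tiny graph, the string pattern keys, and counting edges as a stand-in for isomorphism-based pattern classification are all assumptions for the example.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

using Embedding = std::vector<int>;

// Hypothetical mini-graph (adjacency lists) used only for this example.
static const std::vector<std::vector<int>> adj = {
    {1, 2}, {0, 2, 3}, {0, 1}, {1}};

bool isConnected(int u, int v) {
  for (int w : adj[u])
    if (w == v) return true;
  return false;
}

// Classify each 3-vertex embedding by the number of edges among its
// vertices (3 edges -> triangle, 2 edges -> wedge), and count the number
// of embeddings per pattern, i.e., a frequency-based support.
std::map<std::string, int> reduce(const std::vector<Embedding>& wl) {
  std::map<std::string, int> p_map;
  for (const auto& e : wl) {
    int edges = isConnected(e[0], e[1]) + isConnected(e[0], e[2]) +
                isConnected(e[1], e[2]);
    p_map[edges == 3 ? "triangle" : "wedge"] += 1;
  }
  return p_map;
}
```

The resulting map of pattern-support pairs plays the role of $p\_map$ in Algorithm~\ref{alg:engine}.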
\setlength{\textfloatsep}{1pt}
\begin{algorithm}[t]
\small
\caption{Compute Phases in \textbf{Vertex-induced} Mining}
\label{alg:operators}
\begin{algorithmic}[1]
\Procedure{Extend}{$in\_wl$, $out\_wl$}
\For{each embedding $emb \in in\_wl$ \textbf{in parallel}}
\For{each vertex $v$ in $emb$}
\If{\Call{\textcolor{blue}{toExtend}}{$emb$, $v$} = $true$}
\For{each vertex $u$ in $adj(v)$}
\If{\Call{\textcolor{blue}{toAdd}}{$emb$, $u$} = $true$}
\State insert $emb \cup u$ to $out\_wl$
\EndIf
\EndFor
\EndIf
\EndFor
\EndFor
\EndProcedure
\item[]
\Procedure{Reduce}{$queue$, $p\_map$}
\For{each embedding $emb \in queue$ \textbf{in parallel}}
\State Pattern $pt$ $\leftarrow$ \Call{\textcolor{blue}{getPattern}}{$emb$}
\State Support $sp$ $\leftarrow$ \Call{\textcolor{blue}{getSupport}}{$emb$}
\State $p\_map$[$pt$] $\leftarrow$ \Call{\textcolor{blue}{Aggregate}}{$p\_map$[$pt$], $sp$}
\EndFor
\EndProcedure
\item[]
\Procedure{Filter}{$in\_wl$, $p\_map$, $out\_wl$}
\For{each embedding $emb \in in\_wl$ \textbf{in parallel}}
\If{\Call{\textcolor{blue}{toPrune}}{$emb$, $p\_map$} = $false$}
\State insert $emb$ to $out\_wl$
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\setlength{\textfloatsep}{\parsep}
\subsection{Programming Interface}\label{subsect:apis}
Pangolin exposes flexible and simple interfaces to the user
to express application-specific optimizations.
\cref{lst:api} lists user-defined functions (APIs) and
\cref{alg:operators} describes how these functions (marked in blue)
are invoked by the Pangolin execution engine.
A specific application can be created by defining these APIs.
Note that none of these functions is mandatory; each of them has
a default return value.
In the \textproc{Extend} phase, we provide two functions,
\texttt{toAdd} and \texttt{toExtend},
for the user to prune embedding candidates aggressively.
When they return false, the execution engine avoids generating
an embedding and thus the search space is reduced.
More specifically, \texttt{toExtend} checks whether a
vertex in the current embedding needs to be extended.
Extended embeddings can have duplicates due to {\it automorphism}.
\cref{fig:automorphism} illustrates {\it automorphism}: two different embeddings
$(3,5,4)$ and $(2,5,4)$ can be extended into the same embedding $(2,5,3,4)$.
Therefore, only one of them (the canonical embedding) should be kept,
and the other (the redundant one) should be removed.
\hlc{This is done by a \textit{canonical test} in \texttt{toAdd},
which checks whether the newly generated
embedding is a \emph{qualified} candidate.
An embedding is not qualified when it is a duplicate or it does not
have certain user-defined characteristics.
Only qualified embeddings are added into the next worklist.}
Application-specific knowledge can be used to specialize the two functions.
If left undefined, \texttt{toExtend} returns true
and \texttt{toAdd} does a default canonical test.
Note that the user specifies whether the embedding exploration
is vertex-induced or edge-induced.
The only difference for edge-induced extension is in lines 5 to 7:
instead of vertices adjacent to $v$, edges incident on $v$ are used.
\begin{lstlisting}[float=tp,floatplacement=b,label={lst:api},language=C++,abovecaptionskip=0pt,caption={User-defined functions in Pangolin.}]
bool toExtend(Embedding emb, Vertex v);
bool toAdd(Embedding emb, Vertex u);
bool toAdd(Embedding emb, Edge e);
Pattern getPattern(Embedding emb);
Pattern getCanonicalPattern(Pattern pt);
Support getSupport(Embedding emb);
Support Aggregate(Support s1, Support s2);
bool toPrune(Embedding emb, PatternMap map);
\end{lstlisting}
In the \textproc{Reduce} phase, \texttt{getPattern} function specifies how to obtain the pattern of an embedding.
Finding the canonical pattern of an embedding involves an expensive {\it isomorphism} test.
This can be specialized using application-specific knowledge
to avoid such tests.
If left undefined, a canonical pattern is returned by
\texttt{getPattern}.
In this case,
to reduce the overheads of invoking the {\it isomorphism} test,
embeddings in the worklist are first reduced using their {\it quick patterns}~\cite{Arabesque},
and then quick patterns are aggregated using their canonical patterns.
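This two-level reduction can be sketched as follows. The sketch is illustrative: the string keys and the stand-in \texttt{toCanonical} function are assumptions, whereas the real system invokes an isomorphism test (e.g., Bliss) once per distinct quick pattern.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

static int expensive_calls = 0;

// Stand-in for the expensive isomorphism test: here it just collapses a
// quick-pattern key to a shorter canonical key, and counts invocations.
std::string toCanonical(const std::string& quick) {
  ++expensive_calls;
  return quick.substr(0, 1);
}

// Two-level reduction: aggregate by cheap quick patterns first, then run
// the expensive test once per distinct quick pattern, not per embedding.
std::map<std::string, int> aggregate(
    const std::vector<std::string>& quick_patterns) {
  std::map<std::string, int> quick_map, canonical_map;
  for (const auto& qp : quick_patterns) quick_map[qp] += 1;
  for (const auto& kv : quick_map)
    canonical_map[toCanonical(kv.first)] += kv.second;
  return canonical_map;
}
```

With four embeddings but only three distinct quick patterns, the expensive test runs three times instead of four; on real worklists, where many embeddings share a quick pattern, the savings are much larger.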
In addition, \texttt{getSupport} and {\tt Aggregate}
functions specify the support of an embedding and the reduction
operator for the support, respectively.
Lastly, in the \textproc{Filter} stage, \texttt{toPrune} is used to
specify those embeddings the user is no longer interested in.
This depends on the support for the embedding's canonical pattern
(that is in the computed pattern map).
\label{para:complexity}
\hlc{\textbf{Complexity Analysis.}
Consider an input graph $G$ with $n$ vertices and maximum embedding size $k$.
In the \textproc{Extend} phase of the last level (which dominates the execution time and complexity),
there are up to $O(n^{k-1})$ embeddings in the input worklist.
Each embedding has up to $k-1$ vertices to extend.
Each vertex has up to $d_{max}$
neighbors (candidates).
In general,
each candidate needs to check connectivity with $k-1$ vertices,
with a complexity of $O(\log d_{max})$ (binary search).
An isomorphism test needs to be performed
for each newly generated embedding (size of $k$) to find its pattern.
The state-of-the-art algorithm to test isomorphism
has a complexity of $O(e^{\sqrt{k\log k}})$~\cite{Complexity}.
Therefore, the overall worst-case complexity is $O(n^{k-1} k^2 d_{max} \log(d_{max}) e^{\sqrt{k\log k}})$.}
\iffalse
// check if vertex in the embedding needs to be extended
// check if the new (vertex-induced) embedding needs to be added
// check if the new (edge-induced) embedding needs to be added
// get the pattern of an embedding
// by default turning it into its canonical pattern
// get the support of an embedding
// by default mapping its frequency to its pattern
// check if an embedding needs to be pruned
// print out the embeddings or patterns
Support getSupport(Pattern p)
\fi
Pangolin also provides APIs to process the embeddings or pattern
maps at the end of each phase (e.g., this is used in clique-listing,
which is a variant of clique-finding that requires listing all the cliques).
We omit this from Algorithm~\ref{alg:operators}
and \cref{lst:api} for the sake of brevity.
To implement the application-specific functions,
users are required to write C++ code for CPU and
CUDA \texttt{\_\_device\_\_} functions for GPU
(compiler support can provide a unified interface
for both CPU and GPU in the future).
\cref{lst:routine} lists the helper routines provided by Pangolin.
These routines are commonly used in GPM applications,
e.g., for checking connectivity, testing canonicality,
and computing domain support.
They are available on both CPU and GPU,
with efficient implementation on each architecture.
\textbf{Comparison With Other GPM APIs:}
Existing GPM frameworks do not expose \texttt{toExtend}
and \texttt{getPattern} to the application developer
(instead, they assume these functions always return
true and a canonical pattern, respectively).
\hlc{Note that existing embedding-centric frameworks like Arabesque
can be extended to expose the same API functions in Pangolin so as to
enable application-specific optimizations (Section~\ref{sect:app-opt}),
but this is difficult for relational model based systems
like RStream, as the table join operations
are too inflexible to allow this fine-grained control.}
\begin{lstlisting}[float=tp,floatplacement=tbp,label={lst:routine},language=C++,abovecaptionskip=0pt,caption={Helper routines provided to the user by Pangolin.}]
// connectivity checking routines
bool isConnected(Vertex u, Vertex v)
// canonical test routines
bool isAutoCanonical(Embedding emb, Vertex v)
bool isAutoCanonical(Embedding emb, Edge e)
Pattern getIsoCanonicalBliss(Embedding emb)
Pattern getIsoCanonicalEigen(Embedding emb)
// to get domain (MNI) support
Support getDomainSupport(Embedding emb)
Support mergeDomainSupport(Support s1, Support s2)
Support getPatternSupport(Embedding emb)
Support getPatternSupport(Edge e)
\end{lstlisting}
\iffalse
\begin{lstlisting}[float=tp,floatplacement=tbp,label={lst:tc},language=C++,abovecaptionskip=0pt,caption=Triangle counting (vertex induced) in Pangolin.]
bool toExtend(Embedding emb, Vertex v) {
return (emb.getLastVertex() == v);
}
bool toAdd(Embedding emb, Vertex u) {
return isConnected(emb.getFirstVertex(), u);
}
\end{lstlisting}
\fi
\begin{lstlisting}[float=tp,floatplacement=tbp,label={lst:kcl},language=C++,abovecaptionskip=0pt,caption={\small Clique finding (vertex induced) in Pangolin.}]
bool toExtend(Embedding emb, Vertex v) {
return (emb.getLastVertex() == v);
}
bool toAdd(Embedding emb, Vertex u) {
  for (auto v : emb.getVertices()) {
    if (v == emb.getLastVertex()) continue; // u is a neighbor of the last vertex
    if (!isConnected(v, u)) return false;
  }
return true;
}
\end{lstlisting}
\subsection{Applications in Pangolin}\label{subsect:apps}
TC, CF, and MC use vertex-induced embeddings, while FSM uses edge-induced embeddings.
\cref{lst:kcl,lst:motif,lst:fsm} show CF, MC, and FSM implemented in Pangolin
(we omit TC due to lack of space).
For TC, extension happens only once, i.e., for each edge $(v_0, v_1)$,
$v_1$ is extended to get a neighbor $v_2$.
We only need to check whether $v_2$ is connected to $v_0$.
If it is, this 3-vertex embedding $(v_0, v_1, v_2)$ forms a triangle.
For CF in \cref{lst:kcl}, the search space is reduced by extending only
the last vertex in the embedding instead of extending every vertex.
If the newly added vertex is connected to all the vertices in the embedding,
the new embedding forms a clique. Since cliques can only grow
from smaller cliques (e.g., 4-cliques can only be generated by extending
3-cliques), all the non-clique embeddings are implicitly pruned.
Neither TC nor CF uses the \textproc{Reduce} and \textproc{Filter} phases.
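The TC logic described above can be sketched as a standalone triangle count. This is an illustrative sketch over a plain adjacency list, not Pangolin's API; the mini-graph in the usage example is an assumption. Sorted neighbor lists are assumed, as in Pangolin's CSR input.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Count triangles: for each edge (v0, v1) with v0 < v1, extend v1 by a
// neighbor v2 > v1 and check whether v2 is also connected to v0.
// Neighbor lists must be sorted (ascending) for the binary search.
long countTriangles(const std::vector<std::vector<int>>& adj) {
  long count = 0;
  for (int v0 = 0; v0 < (int)adj.size(); ++v0)
    for (int v1 : adj[v0]) {
      if (v1 <= v0) continue;  // orient edges to avoid duplicates
      for (int v2 : adj[v1]) {
        if (v2 <= v1) continue;
        if (std::binary_search(adj[v0].begin(), adj[v0].end(), v2))
          ++count;  // (v0, v1, v2) forms a triangle
      }
    }
  return count;
}
```

Orienting the edges ($v_0 < v_1 < v_2$) counts each triangle exactly once, which is the same effect the canonical test achieves in the framework.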
\begin{lstlisting}[float=tp,floatplacement=tbp,label={lst:motif},language=C++,abovecaptionskip=0pt,caption={\small Motif counting (vertex induced) in Pangolin.}]
bool toAdd(Embedding emb, Vertex v) {
return isAutoCanonical(emb, v);
}
Support getSupport(Embedding emb) { return 1; }
Pattern getPattern(Embedding emb) {
return getIsoCanonicalBliss(emb);
}
Support Aggregate(Support s1, Support s2) {
return s1 + s2;
}
\end{lstlisting}
\cref{lst:motif} shows MC. An extended embedding is added only if it is
canonical according to automorphism test. In the \textproc{Reduce} phase,
the quick pattern of each embedding is first obtained
and then the canonical pattern is obtained using an isomorphism test.
In Section~\ref{subsect:hybrid}, we show a way to customize this pattern
classification method for MC to improve performance. \textproc{Filter}
phase is not used by MC.
\begin{lstlisting}[float=tp,floatplacement=tbp,label={lst:fsm},language=C++,abovecaptionskip=0pt,caption={\small Frequent subgraph mining (edge induced) in Pangolin.}]
bool toAdd(Embedding emb, Edge e) {
  return isAutoCanonical(emb, e);
}
Support getSupport(Embedding emb) {
return getDomainSupport(emb);
}
Pattern getCanonicalPattern(Embedding emb) {
return getIsoCanonicalBliss(emb);
}
Support Aggregate(Support s1, Support s2) {
return mergeDomainSupport(s1, s2);
}
bool toPrune(Embedding emb, PatternMap map) {
  return (getPatternSupport(emb, map) < MIN_SUPPORT);
}
\end{lstlisting}
FSM is the most complicated GPM application.
As shown in \cref{lst:fsm}, it uses the custom domain support
routines provided by Pangolin. An extended embedding is added only if
the new embedding is (automorphism) canonical. FSM uses the \textproc{Filter}
phase to remove embeddings whose patterns are not frequent from the worklist.
Despite the complexity of FSM, the Pangolin implementation is still much
simpler than hand-optimized FSM implementations~\cite{DistGraph,Scalemine,GraMi},
thanks to the Pangolin API and helper routines.
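The domain (MNI) support that FSM relies on can be sketched as follows. This is illustrative only; it assumes Pangolin's \texttt{getDomainSupport}/\texttt{mergeDomainSupport} routines behave like this computation applied to the set of embeddings of one pattern.

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <set>
#include <vector>

using Embedding = std::vector<int>;

// Minimum image-based (MNI) support of one pattern: for each pattern
// position, count the distinct graph vertices mapped to it across all
// embeddings of that pattern, then take the minimum over positions.
int mniSupport(const std::vector<Embedding>& embeddings) {
  if (embeddings.empty()) return 0;
  int support = INT_MAX;
  for (size_t pos = 0; pos < embeddings[0].size(); ++pos) {
    std::set<int> domain;
    for (const auto& e : embeddings) domain.insert(e[pos]);
    support = std::min(support, (int)domain.size());
  }
  return support;
}
```

Unlike raw frequency, MNI support is anti-monotonic, which is what allows the \textproc{Filter} phase to safely prune patterns whose support falls below the threshold.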
\section{Evaluation}
\label{sect:eval}
In this section,
we compare Pangolin with state-of-the-art GPM frameworks
and hand-optimized applications.
We also analyze Pangolin performance in more detail.
\subsection{Experimental Setup}\label{subsect:setup}
\hlc{We compare Pangolin with the state-of-the-art GPM frameworks:
Arabesque~\cite{Arabesque}, RStream~\cite{RStream},
G-Miner~\cite{G-Miner}, Kaleido~\cite{Kaleido},
Fractal~\cite{Fractal}, and AutoMine~\cite{AutoMine}.
Arabesque, G-Miner, and Fractal support distributed execution,
while the rest support out-of-core execution.
All of them support execution only on CPUs.
Kaleido and AutoMine results are reported
from their papers because they are not publicly available.
We also compare Pangolin with the state-of-the-art
hand-optimized GPM applications~\cite{GAPBS,DistTC,KClique,PGD,Rossi,DistGraph,ParFSM,GpuFSM}.}
We test the 4 GPM applications discussed in Section~\ref{subsect:apps},
i.e., TC, CF, MC, and FSM.
$k$-MC and $k$-CF terminate when subgraphs reach a size of $k$ vertices.
For $k$-FSM, we mine the frequent subgraphs with $k-1$ edges.
\cref{tab:input} lists the input graphs used in the experiments.
We assume that input graphs are symmetric, have no self-loops,
and have no duplicated edges. We represent the input graphs in
memory in a compressed sparse row (CSR) format. The neighbor
list of each vertex is sorted by ascending vertex ID.
The first 3 graphs --- \texttt{Mi}, \texttt{Pa}, and \texttt{Yo} --- have been
previously used by Arabesque, RStream, and Kaleido. We use the same
graphs to compare Pangolin with these existing frameworks.
In addition, we include larger graphs from SNAP Collection~\cite{SNAP}
(\texttt{Lj}, \texttt{Or}), Koblenz Network Collection~\cite{Konect} (\texttt{Tw}),
DistGraph~\cite{DistGraph} (\texttt{Pdb}), and a very large web-crawl~\cite{gsh2015} (\texttt{Gsh}).
Except for \texttt{Pdb}, the other larger graphs do not have
vertex labels; therefore, we use them only to test TC, CF, and MC.
\texttt{Pdb} is used only for FSM.
Unless specified otherwise, CPU experiments were conducted on a single machine with
Intel Xeon Gold 5120 CPU 2.2GHz, 4 sockets (14 cores each),
190GB memory, and 3TB SSD.
\hlc{AutoMine was evaluated using 40 threads
(with hyperthreading) on
Intel Xeon E5-2630 v4 CPU 2.2GHz, 2 sockets (10 cores each),
64GB of memory, and 2TB of SSD.}
Kaleido was tested using 56 threads (with hyperthreading) on
Intel Xeon Gold 5117 CPU 2.0GHz, 2 sockets (14 cores each), 128GB memory, and 480GB SSD.
To make our comparison fair, we restrict our experiments to
use only 2 sockets of our machine,
and we use only 28 threads without hyperthreading.
For the largest graph, \texttt{Gsh}, we used a 2 socket machine with Intel's second
generation Xeon scalable processor with 2.2 GHz and 48 cores, equipped with 6TB of
Intel Optane PMM~\cite{optane} (byte-addressable memory technology).
Our GPU platforms are NVIDIA GTX 1080Ti (11GB memory) and Tesla
V100 (32GB memory) GPUs with CUDA 9.0. Unless specified
otherwise, GPU results reported are on V100.
RStream writes its intermediate data to the SSD,
whereas other frameworks run all applications in memory.
We exclude preprocessing time and only report the computation
time (on the CPU or GPU) as an average of 3 runs.
\hlc{We also exclude the time to transfer data from CPU to GPU
as it is trivial compared to the GPU compute time.}
\input{sections/tbl_combine}
\input{sections/tbl_gsh}
\subsection{GPM Frameworks}\label{subsect:frameworks-compare}
\hlc{\cref{tab:compare} reports the execution time of
Arabesque, RStream, Kaleido, Fractal, and Pangolin.
The execution times of G-Miner and AutoMine are reported
in \cref{tab:tc} and \cref{tab:automine}, respectively
(G-Miner does not support the other applications, and AutoMine results are not available for the other datasets).
Note that Kaleido and AutoMine results on 28-core and 20-core CPU,
respectively, are reported from their papers.
We evaluate the rest on our 28-core CPU,
except that we evaluate Pangolin for {\tt gsh} on 48-core CPU.
Fractal and AutoMine use DFS-based exploration, whereas
the rest use BFS-based exploration.
Pangolin is an order-of-magnitude faster than Arabesque, RStream, Fractal, and G-Miner.
Pangolin outperforms Kaleido in all cases except 4-MC on \texttt{patent}.
Pangolin on CPU is comparable or slower than AutoMine
but outperforms it by exploiting the GPU.}
\hlc{For small inputs (e.g., TC and $3$-CF with \texttt{Mi}),
Arabesque suffers non-trivial overhead due to the startup cost of Giraph.
For large graphs, however, due to lack of algorithmic
(e.g., eager pruning and customized pattern classification)
and data structure optimizations, it is also slower than Pangolin.}
On average, Pangolin is 49$\times$ faster than Arabesque.
For RStream, the number of partitions $P$ is a key performance knob.
For each configuration, we choose $P$ to be the
best performing one among 10, 20, 50, and 100.
RStream only supports edge-induced exploration and
does not support pattern-specific optimization.
This results in extremely large search spaces for CF
and MC because there are many more edges than vertices.
In addition, RStream does not scale well because of the
intensive use of \texttt{mutex} locks for updating shared data.
Lastly, Pangolin avoids inefficient data structures and
expensive redundant computation (isomorphism test) used by RStream.
Pangolin is 88$\times$ faster than RStream on average
(Kaleido~\cite{Kaleido} also observes that RStream is slower than Arabesque).
On average, Pangolin is 2.6$\times$ faster than Kaleido
(7.4$\times$, 3.3$\times$, 2.4$\times$, and 1.6$\times$
for TC, CF, MC, and FSM respectively).
This is mainly due to DAG construction and customized pattern
classification in Pangolin.
\hlc{
Pangolin is on average 80$\times$ faster than Fractal.
Fractal is built on Spark and suffers from overheads due to it.
More importantly, some optimizations in hand-optimized DFS-based
applications like PGD~\cite{PGD} and KClist~\cite{KClique}
are not supported in Fractal, which limits its performance.
}
\hlc{
AutoMine uses a key optimization~\cite{PGD,KClique}
to remove redundant computation that
can only be enabled in DFS-based exploration.
Due to this,
when pattern size $k$ is large like in 5-CF and 4-MC,
AutoMine is faster than Pangolin.
However, since Pangolin uses BFS-based
exploration which easily enables GPU acceleration,
Pangolin on GPU is on average 5.8$\times$ faster than AutoMine.
It is not clear how to enable DFS mode for GPU efficiently,
especially when $k$ is large.
Note that for all the applications, AutoMine can only do counting
but not listing, because it has no automorphism test during extension
(instead, it uses post-processing to address the multiplicity issue).
FSM in AutoMine uses frequency (which is not anti-monotonic) instead of domain support,
and thus it is not comparable to FSM in Pangolin.}
\subsection{Hand-Optimized GPM Applications}\label{sect:hand-compare}
We compare hand-optimized implementations with Pangolin on CPU and GPU.
We report
results for the largest datasets supported on our platform
for each application.
\hlc{Note that all hand-optimized applications involve
substantially more programming effort than Pangolin ones.
Hand-optimized TC has 4$\times$ more lines of code (LoC) than
Pangolin TC.
The other hand-optimized applications have one or two orders
of magnitude more LoC than Pangolin ones.}
\hlc{In \cref{tab:tc}, we compare with GAP~\cite{GAPBS}
and DistTC~\cite{DistTC}, the state-of-the-art TC implementations on
CPU and GPU, respectively.
It is clear from \cref{tab:compare} and \cref{tab:tc} that}
TC implementations in existing GPM frameworks are orders of
magnitude slower than the hand-optimized implementation in GAP.
In contrast, \hlc{Pangolin performs similar to GAP
on the same CPU.
Pangolin is also faster than DistTC on the same GPU
due to its embedding list data structure,
which has better load balance and memory access behavior.}
\cref{tab:4-clique} compares our $4$-clique with KClist~\cite{KClique},
\hlc{the state-of-the-art CF implementation}.
Pangolin is 10 to 20$\times$ slower than KClist on the CPU, although
GPU acceleration of Pangolin significantly reduces the performance gap.
This is because KClist constructs a shrinking local graph for each edge,
which significantly reduces the search space.
This optimization can only be enabled in DFS exploration.
In \cref{tab:3-motif}, we observe the same trend for
3-MC compared with \hlc{PGD,
the state-of-the-art MC solver for multicore CPU~\cite{PGD} and GPU~\cite{Rossi}.
Note that PGD can only do counting, but not listing, as it only counts
some of the patterns and the other patterns' counts are calculated
directly using some formulas. In contrast, MC in Pangolin can do
both counting and listing. Another limitation of PGD is that it
can only handle 3-MC and 4-MC, while Pangolin handles arbitrary $k$.
As PGD for GPU (PGD-GPU)~\cite{Rossi} is not released,
we estimate PGD-GPU performance using their reported
speedup~\cite{Rossi} on Titan Black GPU.
Pangolin-GPU is 20\% to 130\% slower.}
\cref{tab:3-fsm} and \cref{tab:4-fsm} compare our 3-FSM and 4-FSM,
respectively, with DistGraph~\cite{DistGraph,ParFSM}.
DistGraph supports both shared-memory and distributed platforms.
DistGraph supports a runtime parameter $\sigma$, which specifies
the minimum support, but we had to modify it to add the maximum size $k$.
On CPU, Pangolin outperforms DistGraph for 3-FSM in all cases,
except for \texttt{Pa} with support 5K.
For graphs that fit in the GPU memory (\texttt{Mi}, \texttt{Pa}),
\hlc{Pangolin on GPU is 6.9$\times$ to 290$\times$ faster than DistGraph.
In comparison, the GPU implementation of DistGraph is only
4$\times$ to 9$\times$ faster than its CPU implementation~\cite{GpuFSM}
(we are not able to run their GPU code, and we cannot compare with their reported results as they do not evaluate the same datasets).}
For 4-FSM, Pangolin is 22\% to 240\% slower than DistGraph.
The slowdown is mainly due to the algorithmic differences:
DistGraph adopts DFS exploration and a recursive approach which reduces
computation and memory consumption, while Pangolin does BFS exploration.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width=\linewidth]{figures/scale.pdf}
\caption{\scriptsize Strong scaling using \texttt{Yo} graph.
$\sigma$=500 for FSM.}
\label{fig:scale}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width=\linewidth]{figures/rmat.pdf}
\caption{\scriptsize Execution time for RMAT graphs (log-log scale).}
\label{fig:rmat}
\end{minipage}
\vspace{-0.15cm}
\end{figure}
\iffalse
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.25\textwidth]{figures/scale.pdf}
\vspace{-0.2cm}
\caption{Strong scaling of Pangolin (using \texttt{Yo}).}
\label{fig:scale}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.25\textwidth]{figures/rmat.pdf}
\vspace{-0.2cm}
\caption{Pangolin performance as graph size increases.}
\label{fig:rmat}
\end{center}
\vspace{-0.5cm}
\end{figure}
\fi
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.92\textwidth]{figures/gpu.pdf}
\vspace{-0.1cm}
\caption{Speedup of Pangolin on GPU over Pangolin on 28-thread CPU.
Missing bars of 1080Ti are due to out of memory.}
\vspace{-0.7cm}
\label{fig:gpu}
\end{center}
\end{figure*}
\subsection{Scalability and GPU Performance}
\label{subsect:scaling}
Although Pangolin is an in-memory processing system,
it can scale to very large datasets by using large-memory systems.
To demonstrate this, we evaluate Pangolin on the Intel Optane PMM system
and mine a very large real-world web crawl, \texttt{Gsh}.
As shown in \cref{tab:gsh}, TC and 3-CF only take 2 and 11 minutes, respectively.
4-CF is much more compute and memory intensive, so it takes $\sim6.5$ hours.
To the best of our knowledge,
this is the largest graph dataset for which 4-CF has been mined.
\cref{fig:scale} illustrates how the performance of Pangolin applications
scales as the number of threads increases for different applications on \texttt{Yo}.
Pangolin achieves good scalability by utilizing
efficient, concurrent, scalable data structures and allocators.
For TC, we observe near linear speedup over single-thread execution.
In contrast, FSM's scalability suffers
due to the overheads of computing domain support.
\hlc{To test weak scaling, we use the RMAT graph generator~\cite{Khorasani}
to generate graphs with vertices $|V|$ from $2^{20}$ to $2^{25}$
and
average degree $\overline{d}=20$.
\cref{fig:rmat} reports the execution time normalized to that of \texttt{rmat20}
(log-log scale).
The execution time grows exponentially as the graph size increases
because the enumeration search space grows exponentially.}
\cref{fig:gpu} illustrates speedup of Pangolin
applications on GPU over the 28-thread CPU execution.
Note that due to the limited memory size,
GPUs fail to run some applications and inputs.
On average, 1080Ti and V100 GPUs achieve a speedup
of 6$\times$ and 15$\times$ respectively over the CPU execution.
Specifically, we observe substantial speedup on CF and MC. For example,
the V100 GPU achieves 50$\times$ speedup on 4-MC for \texttt{Yo},
demonstrating the suitability of GPUs for these compute intensive applications.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.46\textwidth]{figures/memory.pdf}
\vspace{0cm}
\caption{\small Peak memory usage in Arabesque,
RStream, and Pangolin for \texttt{Pa}
(4-MC in RStream runs out of memory).}
\vspace{-0.5cm}
\label{fig:memory}
\end{center}
\end{figure}
\subsection{Memory Consumption}\label{subsect:memory}
The peak memory consumption for Arabesque,
RStream, and Pangolin is illustrated in \cref{fig:memory}.
\hlc{All systems are evaluated on the same 28-core CPU platform.}
We observe that Arabesque always requires
the most memory because it is implemented in Java using
the Giraph~\cite{Giraph} framework, which allocates a huge amount of memory.
In contrast, Pangolin avoids this overhead
and reduces memory usage. Since Pangolin does
in-memory computation, it might be expected to consume much more memory
than RStream, which stores its embeddings on disk. However, we find
that the difference in memory usage is small because aggressive search space
pruning and customized pattern classification significantly
reduce memory usage. Since this small memory cost brings substantial
performance improvement, we believe Pangolin makes a reasonable trade-off.
For 4-MC, RStream runs out of memory due to its edge-induced
exploration (Arabesque and Pangolin are using vertex-induced exploration).
\begin{figure}[t]
\centering
\captionsetup[subfigure]{aboveskip=-1pt,belowskip=0pt}
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/prune.pdf}
\caption{\scriptsize 4-CF {(pruning)}}
\label{fig:prune}
\end{subfigure}
\begin{subfigure}[t]{0.234\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/alloc.pdf}
\caption{\scriptsize 3-FSM {(Galois allocator)}}
\label{fig:alloc}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/custom.pdf}
\caption{\scriptsize $k$-MC {(customized patterns)}}
\label{fig:custom}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.234\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/eager.pdf}
\caption{\scriptsize $k$-MC {(avoid materialization)}}
\label{fig:eager}
\end{subfigure}
\caption{\small Speedup due to various optimizations:
(\subref{fig:prune}) eager pruning and DAG;
(\subref{fig:alloc}) Galois scalable memory allocator;
(\subref{fig:custom}) customized pattern classification;
(\subref{fig:eager}) avoiding materialization.}
\vspace{-0.1cm}
\end{figure}
\begin{figure}[t]
\centering
\captionsetup[subfigure]{aboveskip=-1pt,belowskip=0pt}
\begin{subfigure}[t]{0.238\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/list-vs-queue.pdf}
\caption{}
\label{fig:list-vs-queue}
\end{subfigure}
\begin{subfigure}[t]{0.232\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/search.pdf}
\caption{}
\label{fig:search}
\end{subfigure}
\caption{\small $k$-MC speedup of (a) using embedding list (SoA+inspection-execution)
over using embedding queue (AoS) and (b) binary search over linear search.}
\vspace{-0.36cm}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/llc.pdf}
\vspace{-0cm}
\caption{LLC miss counts in the vertex extension phase
of $k$-CF using AoS and SoA for embeddings.}
\vspace{-0.5cm}
\label{fig:llc}
\end{center}
\end{figure}
\subsection{Impact of Optimizations}
We evaluate the performance improvement due to the optimizations
described in \cref{sect:app-opt} and \cref{sect:impl}.
Due to lack of space, we present these comparisons
only for the CPU implementations,
but the results on the GPU are similar.
\cref{fig:prune} shows the impact of orientation ({\em DAG})
and user-defined eager pruning ({\em Prune}) on 4-CF.
Both techniques significantly improve performance for TC (not shown) and CF.
\cref{fig:alloc} demonstrates the advantage of using Galois memory
allocators instead of \texttt{std} allocators. This is particularly important
for FSM as it requires intensive memory allocation for counting support.
\cref{fig:custom} illustrates that customized pattern classification
used in MC and FSM yields huge performance gains by eliding
expensive generic isomorphism tests.
\cref{fig:eager} shows that materialization of temporary
embeddings causes 11\% to 37\% slowdown for MC.
This overhead exists in every application of Arabesque
(and RStream), and is avoided in Pangolin.
In \cref{fig:list-vs-queue}, we evaluate the performance of
our proposed embedding list data structure with SoA layout and
inspection-execution. Compared to the straightforward embedding
queue (mimicking the AoS implementation used in Arabesque and RStream),
$k$-MC is 2.1$\times$ to 4.7$\times$ faster.
Another optimization is employing binary search for connectivity
check. \cref{fig:search} shows that binary search can achieve
up to 6.6$\times$ speedup compared to linear search.
Finally, \cref{fig:llc} illustrates the last level cache (LLC)
miss counts in the vertex extension phase of $k$-CF. We compare
two data structure schemes for the embeddings, AoS and SoA.
We observe a sharp reduction of LLC miss count by switching
from AoS to SoA. This further confirms that SoA has better
locality than AoS, due to the data reuse among embeddings.
\subsection{Eliding Isomorphism Test}\label{subsect:hybrid}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/dag.pdf}
\vspace{-0.27cm}
\caption{\small Convert an undirected graph into a DAG.}
\label{fig:dag}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/avoid-iso.pdf}
\vspace{-0.27cm}
\caption{\small Examples of eliding isomorphism test for 4-MC.}
\label{fig:avoid-iso}
\end{minipage}
\vspace{-0.1cm}
\end{figure}
\textbf{Exploiting Memoization:}
Pangolin avoids redundant computation in each stage with memoization.
Memoization is a tradeoff between computation and memory usage.
Since GPM applications are usually memory hungry,
we apply memoization only when it requires a small amount of
memory and/or dramatically reduces complexity.
For example, in the \textproc{Filter} phase of FSM, Pangolin avoids the isomorphism
test used to get the pattern of each embedding, since it has already been done in the
\textproc{Reduce} phase. This recomputation is avoided by maintaining a pattern
ID (hash value) in each embedding after the isomorphism test, and setting up
a map between the pattern ID and the pattern support. Compared to the isomorphism test,
which is extremely compute- and memory-intensive, storing the pattern
ID and a small pattern support map is relatively lightweight.
In MC, another application that finds multiple patterns,
the user can easily enable memoization of the pattern ID in each level.
In this case, when exploration proceeds to the next level, the pattern of each
embedding can be identified from its pattern ID in the previous level
with much less computation than a generic isomorphism test.
As shown in \cref{fig:avoid-iso}, to identify a 4-cycle
from a wedge or a diamond from a triangle, we only need to check whether
vertex 3 is connected to both vertices 1 and 2.
\begin{lstlisting}[float=t,floatplacement=b,label={lst:3-motif},language=C++,abovecaptionskip=0pt,belowcaptionskip=2pt,caption=Customized pattern classification for 3-MC.]
Pattern getPattern(Embedding emb) {
if (emb.size() == 3) {
if (emb.getNumEdges() == 3) return P1;
else return P0;
} else return getIsoCanonicalBliss(emb);
}
\end{lstlisting}
\textbf{Customized Pattern Classification:}
In the \textproc{Reduce} phase, embeddings are classified into different
categories based on their patterns, as shown in \cref{fig:reduce}.
To get the pattern of an embedding, a generic way is to convert
the embedding into a canonical graph that is isomorphic to it
(done in two steps, as explained in Section~\ref{subsect:apis}).
Like Arabesque and RStream, Pangolin uses the Bliss~\cite{Bliss}
library for getting the canonical graph or pattern for an embedding.
This graph isomorphism approach is applicable to embeddings of any size,
but it is very expensive as it requires frequent dynamic memory
allocation and consumes a huge amount of memory. For small embeddings,
such as 3-vertex and 4-vertex embeddings in vertex-induced applications
and 2-edge and 3-edge embeddings in edge-induced applications,
the canonical graph or pattern can be computed very efficiently.
For example, we know that there are only 2 patterns in 3-MC
(i.e., wedge and triangle in \cref{fig:4-motifs}).
The only computation needed to differentiate the two patterns is to
count the number of edges (i.e., a wedge has 2 edges and a triangle has 3),
as shown in \cref{lst:3-motif}. This specialized method significantly
reduces the computational complexity of pattern classification.
The \texttt{getPattern} function in Pangolin enables the user
to specify such \textit{customized pattern classification}.
\section{Implementation on CPU and GPU}\label{sect:impl}
The user implements application-specific optimizations
using the Pangolin API and helper functions,
and Pangolin transparently parallelizes the application.
Pangolin provides an efficient and scalable parallel
implementation on both shared-memory multicore CPU and GPU.
Its CPU implementation is built using
the Galois~\cite{Galois} library and
its GPU implementation is built using
the LonestarGPU~\cite{LonestarGPU} infrastructure.
Pangolin includes several architectural optimizations.
In this section, we briefly describe some of them:
(1) exploiting locality and fully utilizing memory
bandwidth~\cite{Analysis,Locality,Memory};
(2) reducing the memory consumption;
(3) mitigating the overhead of dynamic memory allocation;
(4) minimizing synchronization and other overheads.
\subsection{Data Structures for Embeddings}
Since the number of possible $k$-embeddings in a graph increases
exponentially with $k$, storage for embeddings grows rapidly and
easily becomes the performance bottleneck. Most existing systems
use array-of-structures (AoS) to organize the embeddings,
which leads to poor locality, especially for GPU computing.
In Pangolin, we use structure of arrays (SoA) to store embeddings in memory.
The SoA layout is particularly beneficial for parallel processing
on GPU as memory accesses to the embeddings are fully coalesced.
\cref{fig:embedding} illustrates the embedding list data structure.
On the left is the prefix-tree that illustrates the
embedding extension process in \cref{fig:extension}.
The numbers in the vertices are vertex IDs (VIDs).
Orange VIDs are in the first level $L_{1}$, and blue VIDs
belong to the second level $L_{2}$. The grey level $L_{0}$
is a dummy level which does not actually exist but is used to
explain the key ideas. On the right, we show the corresponding
storage of this prefix tree. For simplicity, we only show
the vertex-induced case. Given the maximum size $k$,
the embedding list contains $k-1$ levels. In each level,
there are two arrays, index array (\texttt{idx}) and vertex ID
array (\texttt{vid}). Elements at the same position in the two arrays
form a pair (\texttt{idx}, \texttt{vid}).
In level $L_{i}$, \texttt{idx} is the index pointing
to the vertex of the same embedding in the previous level $L_{i-1}$,
and \texttt{vid} is the $i$-th vertex ID of the embedding.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.43\textwidth]{figures/embedding.pdf}
\vspace{-0.1cm}
\caption{{\small An example of the embedding list data structure.}}
\vspace{-0.47cm}
\label{fig:embedding}
\end{center}
\end{figure}
Each embedding can be reconstructed by backtracking from the
last level lists. For example, to get the first embedding
in level $L_{2}$, which is a vertex set of $\{0, 1, 2\}$,
we use an empty vertex set at the beginning. We start from
the first entry ($0$, $2$) in $L_{2}$, which indicates the last
vertex ID is `$2$' and the previous vertex is at the position
of `$0$'. We put `$2$' into the vertex set $\{2\}$. Then we go back
to the previous level $L_{1}$, and get the $0$-th entry ($0$, $1$).
Now we put `$1$' into the vertex set $\{1, 2\}$. Since $L_{1}$ is
the lowest level and its index is the same as the vertex ID in
level $L_{0}$, we put `$0$' into the vertex set $\{0, 1, 2\}$.
For the edge-induced case, the strategy is similar but requires
one more column \texttt{his} in each level to indicate the history
information. Each entry is a triplet (\texttt{vid}, \texttt{his},
\texttt{idx}) that represents an edge instead of a vertex,
where \texttt{his} indicates at which level the source
vertex of this edge is, while \texttt{vid} is the ID of the destination
vertex. In this way we can backtrack the source vertex with \texttt{his}
and reconstruct the edge connectivity inside the embedding. Note that
we use three distinct arrays for \texttt{vid}, \texttt{his} and
\texttt{idx}, which is also an SoA layout. This data layout
can improve temporal locality with more data reuse. For example, the
first \texttt{vid} in $L_{1}$ ($v_{1}$) is connected to two vertices in
$L_{2}$ ($v_{2}$ \& $v_{3}$). Therefore $v_{1}$ will be reused. Considering
high-degree vertices in power-law graphs, there are many opportunities for reuse.
\subsection{Avoiding Materialization of Data Structures}
\label{sect:avoid-mat}
\textbf{Loop Fusion:}
Existing GPM systems first collect all the embedding candidates
into a list and then call the user-defined function (like
\texttt{toAdd}) to select embeddings from the list.
This leads to materialization of the candidate embedding list.
In contrast, Pangolin preemptively discards embedding candidates
using the \texttt{toAdd} function before adding them to the embedding
list (as shown in Algorithm~\ref{alg:operators}),
thereby avoiding the materialization of the candidate embeddings
(this is similar to {\it loop fusion} in array languages).
This significantly reduces memory allocations,
yielding lower memory usage and execution time.
\textbf{Blocking Schedule:}
Since the memory consumption increases exponentially with the
embedding size, existing systems utilize either distributed
memory or disk to hold the data. However, Pangolin is a shared
memory framework and could run out of memory for large graphs.
In order to support processing large datasets, we introduce
an \emph{edge-blocking} technique in Pangolin.
\hlc{Since an application starts expansion with single-edge
embeddings, Pangolin blocks the initial embedding list
into smaller chunks, and
processes all levels (main loop in Algorithm~\ref{alg:engine})
for each chunk one after another.
As shown in \cref{fig:blocking}, there are $n$ edges in
the initial embedding list ($e_0$ $\sim$ $e_{n-1}$).
Each chunk contains 4 edges which are assigned
to the 2 threads ($t_0$ $\sim$ $t_1$) to process.
After all levels of the current chunk are processed,
the threads move to the next chunk and
continue processing until all chunks are processed.
The chunk size $C_s$ is a parameter to tune;
$C_s$ is typically much larger than the number of threads.
Blocking will not affect parallelism
because there are a large number of edges in each chunk that
can be processed concurrently.
Note that
the FILTER phase requires strict synchronization
in each level,
so edge-blocking cannot be applied for applications
that use it. For example, we need to gather embeddings for
each pattern in FSM in order to compute the domain support.
Due to this, all embeddings need to be processed before
moving to the next level, so we disable blocking for FSM.
Currently, edge-blocking is used specifically for bounding
memory usage, but it is also potentially beneficial
for data locality with an appropriate block size.
We leave this for future work.}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/blocking.pdf}
\vspace{-0.36cm}
\caption{\small Edge blocking.}
\label{fig:blocking}
\end{minipage}
\hfill
\begin{minipage}[t]{0.53\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/insp.pdf}
\vspace{-0.36cm}
\caption{\small Inspection-execution.}
\label{fig:insp}
\end{minipage}
\vspace{-0.1cm}
\end{figure}
\subsection{Dynamic Memory Allocation}
\label{sect:mem-alloc}
\textbf{Inspection-Execution:}
Compared to graph analytics applications, GPM applications
need significantly more dynamic memory allocation,
which can become a performance bottleneck.
A major source of memory allocation is the embedding list.
As the size of embedding list increases,
we need to allocate memory for the embeddings in each round.
When generating the embedding list,
there are write conflicts as different threads write to the same
shared embedding list. In order to avoid frequent \texttt{resize}
and \texttt{insert} operations, we use an {\it inspection-execution}
technique to generate the embedding list.
Generation includes three steps. In the first step,
we only calculate the number of newly generated embeddings for each
embedding in the current embedding list. We then use parallel prefix
sum to calculate the \texttt{start} index for each current embedding,
and allocate the exact amount of memory for all the new embeddings.
Finally, we actually write the new embeddings to update the embedding list,
according to the \texttt{start} indices. In this way, each thread can
write to the shared embedding list simultaneously without conflicts.
\hlc{\cref{fig:insp} illustrates the inspection process.
At level $i$, there are 4 embeddings $e_0$, $e_1$, $e_2$, $e_3$ in the
embedding list, which will generate 1, 2, 1, 3 new embeddings respectively.
We get the \texttt{start} indices (0, 1, 3, 4) using prefix sum,
and then allocate memory for the level $i+1$ embedding list.
Next, each embedding writes generated embeddings from its \texttt{start} index in the level $i+1$ list (concurrently).
}
Although inspection-execution requires iterating over the embeddings twice,
this tradeoff is reasonable for the GPU for two reasons.
First, the recomputation is cheap on the GPU
because of its abundant compute throughput.
Second, improving the memory access pattern to better
utilize memory bandwidth is more important on the GPU.
This is also a more scalable design choice for the CPU,
as the number of cores on CPUs keeps increasing.
\textbf{Scalable Allocators:}
Pattern reduction in FSM is another case where dynamic memory allocation
is frequently invoked. To compute the domain support of each pattern,
we need to gather all the embeddings associated with the same pattern (see \cref{fig:example}).
This gathering requires resizing the vertex set of each domain.
The C++ standard {\tt std} library employs a concurrent allocator
implemented by using a global lock for each allocation,
which could seriously limit performance and scalability.
We leverage the Galois memory allocator to alleviate this overhead.
Galois provides an in-built efficient and concurrent memory allocator that
implements ideas from prior scalable allocators~\cite{Hoard,Michael,Schneider}.
The allocator uses per-thread memory pools of huge pages.
Each thread manages its own memory pool.
If a thread has no more space in its memory pool,
it uses a global lock to add another huge page to its pool.
Most allocations thus avoid locks.
Pangolin uses variants of {\tt std} data structures
provided by Galois that use the Galois memory allocator.
\hlc{For example, this is used for maintaining the pattern map.}
On the other hand, our GPU infrastructure currently lacks support
for efficient dynamic memory allocation inside CUDA kernels.
To avoid frequent \texttt{resize} operations inside kernels,
we conservatively calculate the memory space
required and pre-allocate bit vectors for kernel use.
This pre-allocation requires much more memory than is actually required,
and restricts our GPU implementation to smaller inputs for FSM.
\input{sections/tbl_inputs}
\subsection{Other Optimizations}
\label{sect:other-opt}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/binary.pdf}
\caption{{\small Binary search for connectivity check.
The current embedding is $\{2, 5, 8\}$, which
we are trying to extend by adding vertex 9.
To check which vertices in the embedding are directly
connected to vertex 9, we use `2', `5', and `8' as the keys
to search in the neighbor list of vertex 9. Using binary
search we find `2' and `5' are connected to `9'.}}
\vspace{-0.5cm}
\label{fig:binary}
\end{center}
\end{figure}
\input{sections/tbl_compare}
GPM algorithms make extensive use of connectivity operations
for determining how vertices are connected in the input graph. For
example, in $k$-cliques, we need to check whether a new vertex is
connected to all the vertices in the current embedding. Another common
connectivity operation is to determine how many vertices are connected
to given vertices $v_{0}$ and $v_{1}$, which is usually obtained by
computing the intersection of the neighbor lists of the two vertices.
A naive solution for connectivity checking is to sequentially search
for one vertex $v_{0}$ in the other vertex $v_{1}$'s neighbor list.
If it is found, the two vertices are directly connected.
To reduce complexity and improve parallel efficiency,
we employ binary search for the connectivity check.
\hlc{\cref{fig:binary} illustrates an example of
binary search for connectivity check.}
This is particularly efficient on GPU,
as it improves GPU memory efficiency~\cite{TriCore}.
We provide efficient CPU and GPU implementations of
these connectivity operations as helper routines,
such as \texttt{isConnected} (\cref{lst:routine}), which allow
the user to easily compose pruning strategies in applications.
\hlc{In summary, when no algorithmic optimization is applied,
programming in Pangolin should be as easy as previous GPM systems like Arabesque.
In this case, performance gains over Arabesque are achieved
due to the architectural optimizations (e.g., data structures) in Pangolin.
To incorporate algorithmic optimizations, the user can leverage
Pangolin API functions (e.g., \texttt{toExtend} and \texttt{toAdd}) to express
application-specific knowledge.
While this involves slightly more programming effort,
the user can get an order of magnitude performance improvement by doing so.}
\section{Introduction}\label{sect:intro}
Applications that use graph data are becoming increasingly important in
many fields such as the world wide web, advertising, social media, and biology.
Graph analytics algorithms such as PageRank and SSSP have been studied
extensively and many frameworks have been proposed to provide both high
performance and high productivity~\cite{Pregel,GraphLab,Galois,Ligra}.
Another important class of graph problems deals with \hlc{\emph{graph pattern mining} (GPM)},
which has plenty of applications in areas such as chemical engineering~\cite{chemical},
bioinformatics~\cite{Motifs1,Protein}, and social sciences~\cite{Social}.
GPM discovers relevant patterns in a given graph.
One example is \emph{triangle counting},
which is used to mine graphs in security applications~\cite{Voegele2017}.
Another example is \emph{motif counting}~\cite{Motifs2,Motif3},
which counts the frequency of certain structural patterns;
this is useful in evaluating network models or classifying vertex roles.
\cref{fig:4-motifs} illustrates the 3-vertex and 4-vertex motifs.
Compared to graph analytics, GPM algorithms
are more difficult to implement on parallel platforms;
for example, unlike graph analytics algorithms,
they usually generate enormous amounts of intermediate data.
GPM systems such as \hlc{Arabesque~\cite{Arabesque}, RStream~\cite{RStream}, and Fractal~\cite{Fractal}}
have been developed to provide abstractions for improving programmer productivity.
Instead of the vertex-centric model used in graph analytics systems~\cite{Pregel},
Arabesque proposed an \textit{embedding-centric} programming model.
In Arabesque, computation is applied on individual
embeddings (\emph{i.e.}, subgraphs) concurrently.
It provides a simple programming interface that substantially
reduces the complexity of application development.
However, existing systems suffer dramatic performance
loss compared to hand-optimized implementations.
For example, Arabesque and RStream take 98s and 39s respectively to count
3-cliques in the \texttt{Patent} graph with 2.7M vertices and 28M edges,
while a custom solver (KClist)~\cite{KClique} counts them in 0.16s.
This huge performance gap significantly limits the usability of
existing GPM frameworks in real-world applications.
The first reason for this poor performance is that existing
GPM systems provide limited support for application-specific
customization. The state-of-the-art systems focus on generality and
provide high-level abstraction to the user for ease-of-programming.
Therefore, they hide as many execution details as possible from the user,
which substantially limits the flexibility for algorithmic customization.
The complexity of GPM algorithms is primarily due
to combinatorial enumeration of embeddings and
isomorphism tests to find canonical patterns.
Hand-optimized implementations exploit application-specific
knowledge to aggressively prune the enumeration
search space, elide isomorphism tests, or both.
Mining frameworks need to support such optimizations
to match performance of hand-optimized applications.
The second reason for poor performance is inefficient implementation of
parallel operations and data structures. Programming parallel processors
requires exploring trade-offs between synchronization overhead, memory management,
load balancing, and data locality. However, the state-of-the-art
GPM systems target either distributed or out-of-core platforms,
and thus are not well optimized for shared-memory multicore/manycore architectures.
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.33\linewidth}
\includegraphics[width=\textwidth]{figures/4-motifs.pdf}
\vspace{-0.37cm}
\caption{\small 3-vertex motifs (top) and 4-vertex motifs (bottom).}
\vspace{-0.5cm}
\label{fig:4-motifs}
\end{minipage}
\hfill
\begin{minipage}[t]{0.37\linewidth}
\includegraphics[width=\textwidth]{figures/example.pdf}
\vspace{-0.37cm}
\caption{\small An example of the GPM problem.}
\vspace{-0.5cm}
\label{fig:example}
\end{minipage}
\hfill
\begin{minipage}[t]{0.23\linewidth}
\includegraphics[width=\textwidth]{figures/system-overview.pdf}
\vspace{-0.37cm}
\caption{\small System overview of Pangolin (shaded parts).}
\vspace{-0.5cm}
\label{fig:overview}
\end{minipage}
\vspace{-0.1cm}
\end{figure*}
In this paper, we present Pangolin, an efficient in-memory GPM framework.
Pangolin provides a simple yet flexible
embedding-centric programming interface,
based on the {\it extend-reduce-filter} model,
which enables application-specific customization (Section~\ref{sect:design}).
Application developers can implement aggressive pruning
strategies to reduce the enumeration search space, and
apply customized pattern classification methods to elide
generic isomorphism tests (Section~\ref{sect:app-opt}).
To make full use of parallel hardware, we optimize parallel
operations and data structures, and provide helper routines
to the users to compose higher level operations.
Pangolin is built as a lightweight layer on top of
the Galois~\cite{Galois} parallel library and
LonestarGPU~\cite{LonestarGPU} infrastructure,
targeting both shared-memory multicore CPUs and GPUs.
Pangolin includes novel optimizations that exploit locality,
reduce memory consumption, and mitigate overheads of
dynamic memory allocation and synchronization (Section~\ref{sect:impl}).
Experimental results (Section~\ref{sect:eval}) on a 28-core CPU demonstrate
that Pangolin outperforms existing \hlc{GPM frameworks,
Arabesque, RStream, and Fractal, by 49$\times$, 88$\times$, and 80$\times$ on average, respectively.}
Furthermore, Pangolin on V100 GPU outperforms
Pangolin on 28-core CPU by 15$\times$ on average.
Pangolin provides performance competitive to
state-of-the-art hand-optimized \hlc{GPM} applications,
but with much less programming effort.
To mine 4-cliques in a real-world web-crawl graph (gsh)
with 988 million vertices and 51 billion edges,
Pangolin takes $\sim6.5$ hours on a 48-core Intel Optane PMM
machine~\cite{optane} with 6 TB (byte-addressable) memory.
To the best of our knowledge, this is the largest graph
on which 4-cliques have been mined.
In summary, Pangolin makes the following contributions:
\vspace{-0.1cm}
\begin{itemize}[leftmargin=*]
\setlength\itemsep{-0.2em}
\item We investigate the performance gap between state-of-the-art
\hlc{GPM systems} and hand-optimized approaches, and point
out two key features absent in existing systems:
\emph{pruning enumeration space} and \emph{eliding isomorphism tests}.
\item We present a high-performance in-memory GPM system,
Pangolin, which enables \emph{application-specific optimizations} and
provides transparent parallelism on CPU or GPU.
To the best of our knowledge, it is \hlc{the first
GPM system that provides high-level abstractions
for GPU processing}.
\item We propose novel techniques that enable the user to aggressively
prune the enumeration search space and elide isomorphism tests.
\item We propose novel optimizations that exploit locality,
reduce memory usage, and mitigate overheads of dynamic
memory allocation and synchronization on CPU and GPU.
\item We evaluate Pangolin on a multicore CPU and a GPU
to demonstrate that Pangolin is substantially faster than
existing GPM frameworks. \hlc{Compared to hand-optimized applications,
it provides competitive performance while requiring less programming effort.}
\end{itemize}
\iffalse
The rest of this paper is organized as follows.
Section~\ref{sect:back} provides background on existing
GPM frameworks and hand-optimized implementations.
Section~\ref{sect:design} introduces the Pangolin API
and execution model.
Section~\ref{sect:app-opt} describes how Pangolin enables
application-specific optimizations using its API.
Section~\ref{sect:impl} describes Pangolin's implementation
and architectural optimizations.
Section~\ref{sect:eval} presents our evaluation of Pangolin
and state-of-the-art GPM applications and frameworks.
Related work is discussed in Section~\ref{sect:relate} and
conclusions are presented in Section~\ref{sect:concl}.
\fi
\subsection{Existing GPM Frameworks}
Existing GPM systems target
either distributed-memory~\cite{Arabesque,Fractal,ASAP} or
out-of-core~\cite{RStream,Kaleido,AutoMine} platforms,
and they make tradeoffs specific for their targeted architectures.
None of them target in-memory GPM on a multicore CPU
or a GPU.
Consequently, they do not pay much attention to
reducing the synchronization overheads among threads
within a CPU/GPU or reducing memory consumption overheads.
Due to this, naively porting these GPM systems to
run on a multicore CPU or GPU would
lead to inefficient implementations.
We first describe two of these GPM systems briefly
and then discuss their major limitations.
Arabesque~\cite{Arabesque} is a distributed GPM system.
It proposes ``think like an embedding'' (TLE) programming paradigm,
where computation is performed in an embedding-centric manner.
It defines a \textit{filter-process} computation model which
consists of two functions: (1) \textit{filter}, which indicates
whether an embedding should be processed and (2) \textit{process},
which examines an embedding and may produce some output.
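The filter-process model can be mimicked with two user-supplied callbacks driven by a generic enumeration loop. The sketch below is illustrative Python with invented names (Arabesque's real API is Java and considerably richer); it counts triangles on a toy graph, with \textit{filter} pruning embeddings that can never complete the pattern and \textit{process} emitting finished ones.

```python
from itertools import combinations

# Toy undirected graph as adjacency sets.
GRAPH = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}

def is_clique(vertices):
    return all(v in GRAPH[u] for u, v in combinations(vertices, 2))

# User-supplied callbacks in the filter-process style.
def tri_filter(embedding):
    # Prune embeddings that can never grow into a triangle.
    return len(embedding) <= 3 and is_clique(embedding)

def tri_process(embedding, output):
    # A complete 3-clique is a triangle: record it.
    if len(embedding) == 3:
        output.append(tuple(sorted(embedding)))

def run(filter_fn, process_fn):
    # Framework-style BFS driver: each round extends every surviving
    # embedding by one larger neighbor of its last vertex, so each
    # triangle is discovered exactly once.
    frontier, output = [[v] for v in GRAPH], []
    while frontier:
        next_frontier = []
        for emb in frontier:
            process_fn(emb, output)
            for v in GRAPH[emb[-1]]:
                if v > emb[-1] and filter_fn(emb + [v]):
                    next_frontier.append(emb + [v])
        frontier = next_frontier
    return sorted(set(output))

print(run(tri_filter, tri_process))  # [(0, 1, 2)]
```

The driver, not the user, owns the enumeration order; the callbacks only decide what survives and what is emitted, which is the essence of the embedding-centric abstraction.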
RStream~\cite{RStream} is an out-of-core single-machine GPM system.
Its programming model is based on relational algebra.
Users specify how to generate embeddings using relational
operations such as \texttt{\small select}, \texttt{\small join}, and \texttt{\small aggregate}.
It stores intermediate data (\emph{i.e.}, embeddings)
on disk while the input graph is kept in memory for reuse.
It streams tables from disk and applies relational operations
that may produce more intermediate data, which is stored back on disk.
\textbf{Limitations in API:}
{\emph{Most application-specific optimizations,
such as pruning the enumeration search space and avoiding isomorphism
tests, are missing in existing GPM frameworks}}, as they focus
on providing high-level abstractions but lack support for
application-specific customization.
The absence of such key optimizations in existing systems
results in a huge
performance gap when compared to hand-optimized implementations.
Moreover, some frameworks like RStream support only
edge-induced embeddings, but for applications like CF,
the enumeration search space is much smaller
with vertex-induced exploration than with edge-induced exploration.
\textbf{Data Structures for Embeddings:}
Data structures used to store embeddings in existing GPM systems are inefficient.
Both Arabesque and RStream store embeddings in an array of structures (AoS),
where each embedding structure consists of a vertex set and an edge set.
Arabesque also proposes a space-efficient data structure called the \emph{Overapproximating
Directed Acyclic Graph} (ODAG), but it requires an extra canonicality test for each embedding,
which has been demonstrated to be very expensive for large graphs~\cite{Arabesque}.
\textbf{Materialization of Data Structures:}
The list or array of intermediate embeddings is always materialized,
in memory in Arabesque and on disk in RStream.
This has significant overheads as the size of such data
grows exponentially. Such materialization may not be needed
if the embeddings can be filtered or processed immediately.
\textbf{Dynamic Memory Allocation:}
As the number of (intermediate) embeddings is not known before
executing the algorithm, memory needs to be
allocated for them dynamically. Moreover, during parallel execution,
different threads might allocate memory for embeddings they create or enumerate.
Existing systems use standard ({\tt std}) maps and sets,
which internally use a global lock to dynamically allocate memory.
This limits the performance and scalability.
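A common remedy for this contention is to let each thread fill a private buffer and merge the buffers once at the end, so no per-insertion lock is needed. The Python sketch below illustrates only the pattern (the names are ours, and Python's GIL hides the allocator contention a C++ {\tt std} container would exhibit):

```python
import threading

# Each thread appends to its own buffer; the buffers are merged once
# after all threads join. No lock guards the per-embedding inserts.

def enumerate_chunk(chunk):
    # Stand-in for per-thread embedding expansion work.
    return [x * x for x in chunk]

def parallel_enumerate(items, nthreads=4):
    buffers = [[] for _ in range(nthreads)]

    def worker(tid):
        chunk = items[tid::nthreads]          # strided work split
        buffers[tid].extend(enumerate_chunk(chunk))  # lock-free append

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    merged = [e for buf in buffers for e in buf]  # single merge step
    return sorted(merged)

print(parallel_enumerate(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The design choice is to pay one bulk merge instead of one synchronized allocation per embedding, which is what the global lock inside {\tt std} containers effectively imposes.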
\textbf{Summary:}
Existing GPM systems have limitations in their API,
execution model, and implementation.
Pangolin addresses these issues by
permitting application-specific optimizations in its API,
optimizing the execution model, and
providing an efficient, scalable implementation
on multicore CPU and GPU.
These optimizations can be applied to existing
embedding-centric systems like Arabesque.
\iffalse
To process partitions in parallel, RStream employs a producer-consumer
which takes each streaming partition as a task. Compared to
embedding tasks, this task granularity is relatively coarse-grain.
Although it ensures that partitions have similar sizes by inserting smaller
chunks into the task queue, the task processing is still coarse-grained
and difficult to balance. This is particularly problematic when given
a power-law graph as input, in which the degrees of different vertices
vary dramatically, and therefore the cost of join operation for
different tasks can be quite different.
\fi
\section{Related Work}\label{sect:relate}
\textbf{GPM Applications:}
Hand-optimized GPM applications target various platforms. For triangle counting,
Shun~\emph{et~al.}~\cite{Shun} present a parallel, cache-oblivious
TC solver on multicore CPUs that achieves good cache performance
without fine-tuning cache parameters. Load balancing is applied in
distributed TC solvers~\cite{Suri,PDTL} to evenly distribute workloads.
TriCore~\cite{TriCore} is a multi-GPU TC solver that uses binary search
to increase coalesced memory accesses, and it employs dynamic load balancing.
Chiba and Nishizeki (C\&N)~\cite{Arboricity} proposed an efficient
$k$-clique listing algorithm which computes the subgraph induced
by neighbors of each vertex, and then recurses on the subgraph.
Danisch~\emph{et~al.}~\cite{KClique} refine the
C\&N algorithm for parallelism and construct a DAG
using a core-value-based ordering to further reduce the search space.
PGD~\cite{PGD} counts 3- and
4-motifs by leveraging a number of proven combinatorial arguments
for different patterns. Some patterns (\emph{e.g.,} cliques) are counted
first, and the frequencies of other patterns are obtained in constant
time using these combinatorial arguments. Escape~\cite{ESCAPE} extends this
approach to 5-vertex subgraphs and leverages a DAG to reduce the search space.
Frequent subgraph mining (FSM)~\cite{Huan}
is one of the most important GPM applications.
gSpan~\cite{gSpan} is an efficient sequential FSM solver which implements a
depth-first search (DFS) based on a lexicographic order called minimum DFS Code.
GraMi~\cite{GraMi} proposes an approach that finds only the minimal set of
instances to satisfy the support threshold and avoids enumerating all instances.
This idea has been adopted by most other frameworks. DistGraph~\cite{DistGraph}
parallelizes gSpan for both shared-memory and distributed CPUs. Each worker thread
does the DFS walk concurrently. To balance workload, it introduces a customized
dynamic load balancing strategy which splits tasks on the fly and recomputes
the embedding list from scratch after the task is sent to a new worker.
Scalemine~\cite{Scalemine} solves FSM with a two-phase
approach, which approximates frequent subgraphs in phase-1,
and uses collected information to compute the exact solution in phase-2.
Other important GPM applications include \textit{maximal cliques}~\cite{MaximalClique},
\textit{maximum clique}~\cite{MaximumClique,Aboulnaga},
and \textit{subgraph listing}~\cite{SubgraphListing,CECI,DUALSIM,PathSampling,TurboFlux,Ma,Lai}.
They employ various optimizations to reduce
computation and improve hardware efficiency.
They inspired our work to design a flexible interface for user-defined
optimizations. However, they achieve high performance at the cost of substantial
programming effort, while Pangolin provides a unified model for ease of programming.
\textbf{GPM Frameworks:}
For ease-of-programming, GPM systems
such as Arabesque~\cite{Arabesque}, RStream~\cite{RStream},
\hlc{G-Miner~\cite{G-Miner}}, and Kaleido~\cite{Kaleido} have been proposed.
They provide a unified programming interface to the user
which simplifies application development.
However, their interfaces are not flexible enough to
enable application-specific optimizations.
Instead of the BFS exploration used in these frameworks,
Fractal~\cite{Fractal} employs a DFS strategy to enumerate subgraphs,
which reduces memory footprint.
\hlc{AutoMine~\cite{AutoMine} is a compiler-based GPM system using DFS exploration.
In contrast, Pangolin uses the BFS approach that is inherently more load-balanced,
and is better suited for GPU acceleration.}
In the future, we plan to also support DFS exploration.
\hlc{EvoGraph~\cite{EvoGraph} is a GPU framework supporting both graph analytics
and mining. However, as it is not designed specifically for GPM,
many features such as automorphism and isomorphism test are
not supported, which places a lot of burden on the programmer for complex GPM problems.}
\textbf{Approximate GPM:}
There are approximate solvers for TC~\cite{DOULION,Rahman,Tsourakakis},
CF~\cite{Mitzenmacher,Jain}, MC~\cite{Slota,Bressan1}, and FSM~\cite{Approx}.
ASAP~\cite{ASAP} is an approximate GPM framework that supports various
GPM applications. It extends graph approximation theory to general
patterns and incurs less than 5\% error. Since approximation reduces computation,
ASAP is much faster than exact frameworks like Arabesque, and scales to large graphs.
\hlc{Chen and Lui~\cite{MineOSN} propose another approximate GPM framework based on random walk.}
Compared to approximate solutions, Pangolin focuses on exact GPM and
achieves high performance without sacrificing accuracy.
\section*{\uppercase\expandafter{\romannumeral1}. Introduction}
The standard model (SM) can explain with great accuracy most of the available experimental phenomena including the newest data from the large hadron collider (LHC).
However, there are still some unexplained discrepancies and theoretical issues that the SM can not solve.
As we know, the SM is a very successful gauge theory, which can be naturally extended by adding new gauge symmetries together with new matter fields.
The measurements of Higgs-boson-mediated cross sections\cite{Aad:2012tfa} indicate that the existence of a fourth generation of chiral quarks is disfavoured \cite{Eberhardt:2012gv}. However, vector-like quarks (VLQs) are a class of interesting new particles, which mix with the SM quarks and can lead to rich phenomenology.
Unlike chiral quarks of the SM, which get the mass terms through electroweak symmetry breaking, VLQs obtain the direct gauge invariant mass term of the form $m\bar{\psi}\psi$.
This means that VLQs are not subject to the constraints from Higgs production and are not excluded by precision measurements \cite{Eberhardt:2012sb}.
Furthermore, if the Higgs boson is a pseudo-Goldstone boson of a spontaneously broken approximate symmetry group, VLQs of around TeV mass are present in many models beyond the SM, which can provide a viable option to explain the observed lightness of the Higgs\cite{Perelstein:2003wd}.
VLQs are proposed in several theoretical frameworks, such as little Higgs\cite{ArkaniHamed:2002qy} and composite Higgs\cite{Dobrescu:1997nm} models.
In these new physics models, VLQs are colored spin-1/2 fermions whose left- and right-handed components transform in the same way under the SM electroweak symmetry group\cite{delAguila:1982fs}.
VLQs appear as partners of the third generation of quarks. The electric charges of VLQs can be $Q_T= +2/3$, $Q_B= -1/3$, $Q_X= +5/3$, and $Q_Y= -4/3$. They may appear in $SU(2)$ singlets or multiplets.
These new fermions share a common feature: they are assumed to decay to a SM quark together with a SM gauge boson or a Higgs boson.
Many phenomenological studies of VLQs at existing or future colliders have been made in the literature; see, for example, \cite{Fuks:2016ftf}.
Among various new particles, VLQs play an important role in experimental searches. By now, experimental data corresponding to an integrated luminosity of
35--36 fb$^{-1}$ have been used to search for the various VLQ decay modes\cite{Sirunyan:2018omb,Aaboud:2018uek,Aaboud:2018xuw,Sirunyan:2017ynj,Sirunyan:2018fjh,Sirunyan:2018ncp,
ATLAS:2016ovj}. The most stringent bounds on the down-type VLQ (VLQ-$B$) decaying into $Zb$, $Wt$, and $hb$ are 1.24 TeV from the CMS \cite{Sirunyan:2018omb} and 1.35 TeV from the
ATLAS \cite{Aaboud:2018uek} experiments. Similarly, considering the decay modes $T \to Zt$, $W^{+}b$, $ht$ of the up-type VLQ (VLQ-$T$), the most stringent mass bounds are 1.3 TeV and 1.43 TeV, given by CMS \cite{Sirunyan:2018omb} and ATLAS \cite{Aaboud:2018xuw}, respectively.
Since pair production of VLQs is induced by the QCD interaction, these mass bounds are independent of the mixing angles with the SM quarks.
Single production of VLQs, whose rate is proportional to the mixing angles, has also been searched for by the ATLAS and CMS experiments at the 13 TeV
LHC \cite{Sirunyan:2017ynj,Sirunyan:2018fjh,Sirunyan:2018ncp,ATLAS:2016ovj}. In the case of the single production of VLQ-$B$ decaying into $hb$, the upper limits on the product of
the cross section and the branching fraction vary from 1.28 to 0.07 pb for a VLQ-$B$ mass in the range 700--1800 GeV\cite{Sirunyan:2018fjh}.
The CMS collaboration has also published a search for VLQ-$T$ in the final state $Zt$\cite{Sirunyan:2017ynj}, while a search in the final state $Wb$ is given in the 2015 ATLAS dataset\cite{ATLAS:2016ovj}. We expect that more information about VLQs can be obtained in future high energy collider experiments.
VLQs can be produced singly or in pairs at the LHC. The pair production process, induced by gluon fusion, is model independent. For heavy VLQs, the single production process becomes more important because of weaker phase-space suppression\cite{DeSimone:2012fs}. Although the exact details of the corresponding production mechanism are model dependent, one can assume that VLQs mix with the SM quarks and consider their single production in general. In this letter, we take the VLQ-$B$ with electric charge $Q_B= -1/3$ to be an $SU(2)$ singlet, consider its single production at the 14 TeV LHC in a model-independent way, and carefully simulate the signals and SM backgrounds for the subprocess $bg\to B\to Zb$.
In the 5-flavor scheme, which assumes a non-zero $b$-quark parton distribution function (PDF)\cite{Figueroa:2018chn}, production of a $Z$ boson in association with a $b$ quark mainly proceeds via the subprocess $bg \to Zb$ in the SM. Refs.~\cite{Campbell:2003dd,Figueroa:2018chn} have calculated this production cross section at leading order and with next-to-leading-order (NLO) QCD corrections. In this work, we first consider all possible single production channels of VLQ-$B$ at the LHC, assuming that the VLQ-$B$ mass is larger than the electroweak (EW) scale. Then we focus our attention on the observability of the VLQ-$B$ signals by considering its contributions to the subprocess $bg\to Zb$ at the 14 TeV LHC.
The letter is organized as follows. On the basis of the most general Lagrangian, which is related to the couplings of VLQ-$B$ with the SM particles, in Section II we consider its various decays and single productions at the LHC. The corresponding branching ratios and production cross sections are calculated. In Section III, we give a detailed analysis of the relevant signals and backgrounds for the process $pp \rightarrow B\to Zb$. Our results are summarized in Section IV.
\section*{\uppercase\expandafter{\romannumeral2}. Decays and single productions of VLQ-$B$ at the LHC}
The couplings of VLQs to the SM particles are uniquely fixed by gauge invariance \cite{Cabibbo:1983bk}. The most general Lagrangian describing the effective interaction of VLQ-$B$, can be given by \cite{20}
\begin{eqnarray}
\mathcal{L}&=&\frac{g_s}{2\Lambda}G_{\mu\nu}\overline{b}\sigma^{\mu\nu}\Big(\kappa^b_L P_L+\kappa^b_R P_R\Big)B+\frac{g_2}{\sqrt{2}}W_{\mu}^{-} \overline{t} \gamma^{\mu}\Big(Y_LP_L+Y_RP_R\Big)B \nonumber\\
&+&\frac{g_2}{2c_W}Z_{\mu}\overline{b}\gamma^{\mu}\Big(F_L P_L+F_R P_R\Big)B+\frac{m_b}{\upsilon}h\overline{b}\Big(y_L P_L+y_R P_R\Big)B+h.c.. \label{Lag1}
\end{eqnarray}
In above equation, we have abbreviated $\cos\theta_W$ as $c_W$, and $\theta_W$ is the Weinberg angle. $\Lambda$ is the cutoff scale which is set to the VLQ-$B$ mass $M_B$. $g_s$ and $g_2$ are the strong coupling constant and the $SU(2)_L$ coupling constant, respectively. $G_{\mu\nu}$ is the field strength tensor of the gluon. $m_b$ is the bottom quark mass. $P_{L,R}$ are the normal chiral projection operators with $P_{L,R} = (1\mp \gamma_5)/2$. The factors $\kappa^b_{L,R}$, $Y_{L,R}$, $F_{L,R}$ and $y_{L,R}$ parameterize the chirality of the VLQ-$B$ couplings with the SM particles. For the singlet VLQ-$B$ with electric charge of $Q_B= -1/3$, it only mixes with the left-handed bottom quark and there are $Y_R \simeq 0, F_R \simeq 0$ and $y_R \simeq \frac{M_B}{m_b} y_L$ . In this letter, we only consider this kind of VLQ-$B$ and assume $Y_L =F_L = y_{L}=s_L$ with $s_L$ being the mixing angle, $\kappa^b_{L}= \kappa^b_{R}= \kappa^b$. The current high energy experimental data can give constraints on these free parameters. In our following numerical estimation, we will take a conservative range for the parameters $s_L= {\upsilon}/{M_B}$, where $\upsilon$ =246 GeV is the EW scale, and $0\leq \kappa^b\leq 0.5$ according to Refs.~\cite{20,ycx}.
\begin{figure}[htbp]
\centering
\subfigure{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=1\textwidth]{figure1}
\end{minipage}
}
\caption{The branching ratios for the decay modes $Wt$, $Zb$, $hb$ and $bg$ as functions of the \hspace*{1.65cm} VLQ-$B$ mass $M_B$ for $\kappa^b= 0.1$.}
\label{fig1}
\end{figure}
From Eq.~(\ref{Lag1}), we can see that the possible decay modes of VLQ-$B$ are $Wt$, $Zb$, $hb$, and $bg$. The branching ratios for these decay channels are plotted as functions of the mass parameter $M_B$ in Fig.~\ref{fig1} for $\kappa^b= 0.1$. One can see from Fig.~\ref{fig1} that, for a relatively light VLQ-$B$, $B \rightarrow Zb$ is the dominant decay channel, while for $M_B>1000$~GeV the branching ratio of the decay mode $Wt$ increases to about 50\%, and $Br(B \rightarrow Zb) \simeq Br(B \rightarrow hb)$. Certainly, if we enhance the value of the parameter $\kappa^b$, the branching ratio $Br(B \rightarrow bg)$ increases. It should be noted that Ref.~\cite{20} has studied the possibility of detecting VLQ-$B$ via the decay channel $B \rightarrow Wt$ at the LHC.
Hence, in this letter, we only use the decay process $B \rightarrow Zb$ to study the possible signals of VLQ-$B$ at the LHC.
\begin{figure}[htbp]
\centering
\subfigure{
\begin{minipage}[b]{0.55\textwidth}
\includegraphics[width=1\textwidth]{figure2.pdf}
\end{minipage}
}
\caption{The single production cross sections of VLQ-$B$ as functions of $M_B$ for different values \hspace*{1.65cm}of $\kappa^b$ at the 14 TeV LHC.}
\label{fig2}
\end{figure}
From the above discussion we can see that VLQ-$B$ can be singly produced via $bg$, $Zb$, and $Wt$ fusion at the LHC. It is obvious that the production cross section from $bg$ fusion depends on the free parameters $M_B$ and $\kappa^b$, while the other production cross sections depend only on $M_B$ in the case of $s_L= {\upsilon}/{M_B}$. The relevant production cross sections are plotted in Fig.~\ref{fig2} as functions of $M_B$ for different values of $\kappa^b$ at the 14 TeV LHC. From Fig.~\ref{fig2} we can see that the production cross section of the subprocess $bg \rightarrow B$ is larger than those of the other two production channels for $\kappa^b= 0.3$ and $0.5$. For $\kappa^b= 0.1$, the production cross section induced by $Zb$ fusion is comparable to that from $bg$ fusion. However, compared with the $Zb \rightarrow B$ production process, the $bg$ channel has one fewer quark jet in the final state and is easier to analyze. We therefore study the possibility of detecting the signals of VLQ-$B$ via the subprocess $bg\to B\to Zb$ at the 14 TeV LHC.
\section*{III. Signal analysis and discovery potentiality}
After discussing all possible decays and single production channels of VLQ-$B$ in Sec.~II, in this section we investigate the signatures of VLQ-$B$ by considering its contributions to $Zb$ production at the 14 TeV LHC. Ref.~\cite{ycx} has shown that the correction of VLQ-$B$ to $Zb$ production is comparable to the NLO QCD correction in a wide range of the parameter space and might be larger than the NLO QCD correction for some special values of the free parameters. We consider prospects for observing VLQ-$B$ at the 14 TeV LHC in the process $pp\rightarrow B \rightarrow Zb$ with the $Z$ decaying into two leptons. In our simulation procedure, we first implement the model with VLQ-$B$ in the FeynRules package~\cite{fr} to generate the model files in UFO~\cite{UFO} format. The calculation of cross sections and the generation of parton-level events are performed using MadGraph5-aMC@NLO\cite{mg} with the built-in parton distribution function NNPDF23~\cite{nnpdf}. Subsequently, the events are passed through Pythia~\cite{pythia} and Delphes~\cite{delphes}, which are employed for parton showering, hadronization, and fast simulation of the ATLAS detector. Finally, MadAnalysis5~\cite{md} is used for data analysis and plotting.
\begin{figure}[htbp]
\centering
\subfigure{
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=1\textwidth]{figure3.pdf}
\end{minipage}
}
\caption{The Feynman diagram for the resonance production of VLQ-$B$ subsequently decaying \hspace*{1.65cm}to $Zb$ at the LHC.}
\label{fig3}
\end{figure}
VLQ-$B$ can be singly produced via $bg$ fusion at the LHC, and its subsequent decay to $Zb$ contributes to the process $pp\rightarrow Zb$. The relevant Feynman diagram is shown in Fig.~\ref{fig3}. The production cross section $ \sigma(pp\rightarrow B \rightarrow Zb )$ is plotted in Fig.~\ref{fig4} as a function of the mass $M_B$ for different values of the parameter $\kappa^b$ at the 14 TeV LHC. For $0\leq\kappa^b \leq0.5$ and 300 GeV $<M_B<$ 2000 GeV, the values of the cross section $ \sigma(pp\rightarrow B \rightarrow Zb )$ are in the range of $3.09\times10^{-4} \sim 276.72~ $pb. For $\kappa^b=0.1$ and $M_B$ = 800, 1000, 1200 GeV, its values are 0.10, 0.03, and 0.01 $\rm pb$, respectively. Certainly, the value of the production cross section also depends on the PDFs of the bottom quark and the gluon: the cross section benefits from the high gluon PDF but is suppressed by the bottom-quark PDF. Since in single production the VLQ-$B$ mass might reach the full center-of-mass energy, single production can be more sensitive to a heavy VLQ-$B$ than pair production.
\begin{figure}[htbp]
\centering
\subfigure{
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=1\textwidth]{figure4.pdf}
\end{minipage}
}
\caption{ The production cross section of the process $pp\rightarrow B \rightarrow Zb$ as a function of $M_B$ for \hspace*{1.65cm} different values of $\kappa^b$ at the 14 TeV LHC.}
\label{fig4}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[]{
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=1.45\textwidth]{figure5a}
\end{minipage}
}
\centering
\subfigure[]{
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=1.45\textwidth]{figure5b}
\end{minipage}
}
\centering
\subfigure[]{
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=1.45\textwidth]{figure5c}
\end{minipage}
}
\centering
\subfigure[]{
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=1.45\textwidth]{figure5d}
\end{minipage}
}
\centering
\subfigure[]{
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=1.45\textwidth]{figure5e}
\end{minipage}
}
\caption{Normalized distributions of the transverse momenta $p^{l,b}_T$, the particle separations \hspace*{1.65cm} $\Delta R(l,b)$, $\Delta R(l^+,l^-)$, and the invariant mass $M(l^+l^-b)$ for the signals and backgrounds \hspace*{1.65cm} after basic cuts at the 14 TeV LHC with an integrated luminosity of 300 $\rm fb^{-1}$. }
\label{fig5}
\end{figure}
Now, we consider the signatures of VLQ-$B$ at the 14 TeV LHC through the process $pp\rightarrow B \rightarrow Zb$ with the $Z$ subsequently decaying to a pair of leptons.
We analyze the observation potential by performing Monte Carlo simulations of the signal and background events and applying suitable selection cuts to maximize the statistical significance. We choose the following three benchmark points: $M_{B_1}$ = 800 GeV, $M_{B_2}$ = 1000 GeV, and $M_{B_3}$ = 1200 GeV with $\kappa^b$= 0.1, and then explore the sensitivity to VLQ-$B$ at the 14 TeV LHC through the channel
\begin{equation}
pp\rightarrow B \to Z (\rightarrow l^+l^-)b.
\end{equation}
The main SM backgrounds are the processes containing two leptons and one $b$ jet in the final states, such as
\begin{eqnarray}
pp\to~ Z (\to l^{+}l^{-})b\;.
\end{eqnarray}
To simplify our analysis, we assume $l^\pm= e^\pm$ or $\mu^\pm$. In this case, the total cross section of the main SM backgrounds is about 18.01 $\rm pb$ at the 14 TeV LHC.
The signal and background processes are simulated at the 14 TeV LHC with an integrated luminosity of 300 $\rm fb^{-1}$.
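For orientation, the raw event yields implied by these numbers follow from $N = \sigma \mathcal{L}$. The short script below (our own illustration, using the cross sections quoted above) gives the counts before branching fractions, detector acceptance, and cuts, which is why they exceed the post-cut yields in Table~\ref{table1}:

```python
# Raw expected yields N = sigma x L, before branching fractions,
# acceptance, and cuts. Cross sections quoted in the text:
# background sigma(Zb -> l+l- b candidates) = 18.01 pb; signal
# sigma(pp -> B -> Zb) = 0.10 pb at M_B = 800 GeV, kappa^b = 0.1.

PB_TO_FB = 1000.0  # 1 pb = 1000 fb

def expected_events(sigma_pb, lumi_ifb):
    """N = sigma [fb] x integrated luminosity [fb^-1]."""
    return sigma_pb * PB_TO_FB * lumi_ifb

LUMI = 300.0  # fb^-1
print(f"background: {expected_events(18.01, LUMI):.3g} events")  # ~5.4e6
print(f"signal:     {expected_events(0.10, LUMI):.3g} events")   # ~3e4
```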
First, we select events that contain one opposite-sign same-flavor lepton pair and one $b$ jet, imposing the following basic cuts:
\begin{eqnarray}
&& p^{l}_{T}> 10~{\rm GeV} \;,\quad p^{j}_{T}> 20~{\rm GeV} \;,\quad
\vert\eta(l)\vert<2.5 \;,\quad \vert\eta(j)\vert<5 \;,\quad
\Delta R(ll,lj)>0.4 \;.
\end{eqnarray}
Here $p^{}_T$ denotes the transverse momentum and $\eta$ the pseudo-rapidity; the particle separation $\Delta R_{ij}$ is defined as $\sqrt{ \Delta \eta_{ij}^2 + \Delta \phi_{ij}^2 }$, with $\Delta \eta_{ij}$ and $\Delta \phi_{ij}$ being the rapidity and azimuthal-angle gaps between the two particles in a pair. These basic cuts simulate the geometrical acceptance and detection thresholds of the detector.
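For concreteness, $\Delta R$ can be computed as follows; the azimuthal difference must be wrapped into $[-\pi,\pi)$ before being combined with $\Delta\eta$, a standard detail sketched here in Python:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R = sqrt(d_eta^2 + d_phi^2), with the azimuthal
    difference wrapped into [-pi, pi)."""
    d_eta = eta1 - eta2
    d_phi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(d_eta, d_phi)

# Two objects on opposite sides of the phi = +/-pi seam are close,
# not ~6 radians apart:
print(round(delta_r(0.5, 3.0, 0.5, -3.0), 3))  # 0.283
```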
In order to get some hints for further cuts to reduce the SM backgrounds, we show the normalized distributions of $p^{l,b}_T$, $\Delta R(l,b)$, $\Delta R(l^+,l^-)$, and the invariant mass $M(l^+l^-b)$ for the signals and backgrounds after the basic cuts in Fig.~\ref{fig5}, where signals 1, 2, and 3 correspond to $M_{B_1}$ = 800 GeV, $M_{B_2}$ = 1000 GeV, and $M_{B_3}$ = 1200 GeV, respectively. As shown in Fig.~\ref{fig5}, because of the large mass of VLQ-$B$, its decay products are highly boosted. Therefore, the $p^{l,b}_T$ peaks of the signals lie at larger values than those of the SM backgrounds, and the lepton pairs of the signal are much closer together.
For the distribution of the invariant mass $M(l^+l^-b)$, the signal events have peaks around the VLQ-$B$ mass, while the background events do not. According to the information of these kinematic distributions, we impose the following cuts to get a high statistical significance:
\begin{eqnarray}
\rm{Cut\!-\!1}&:& p^{b}_{T} > 150~{\rm GeV};\nonumber
\\\rm{Cut\!-\!2}&:& p^{l}_{T} > 100~{\rm GeV};\nonumber
\\\rm{Cut\!-\!3}&:& \Delta R(l^+l^-) < 1.0;\nonumber
\\\rm{Cut\!-\!4}&:& \Delta R(l,b) > 3.0;\nonumber
\\\rm{Cut\!-\!5}&:& M(l^+l^-b) > 700 ~{\rm GeV}.\nonumber
\end{eqnarray}
In order to see whether the signatures of VLQ-$B$ can be detected at the 14 TeV LHC, we further calculate the statistical significance of signal events $SS$= $S/\sqrt{S+B}$, where $S$ and $B$ denote the numbers of the signal and background events, respectively. Our numerical results are shown in Table~\ref{table1}.
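The significance values in Table~\ref{table1} can be reproduced directly from the quoted event counts after Cut-5; a short Python check, with the counts taken from the table:

```python
import math

def significance(s, b):
    """Statistical significance SS = S / sqrt(S + B)."""
    return s / math.sqrt(s + b)

B_CUT5 = 576.9  # background events surviving Cut-5
for label, s in (("M_B =  800 GeV", 294.0),
                 ("M_B = 1000 GeV", 96.9),
                 ("M_B = 1200 GeV", 24.4)):
    # Prints 9.962, 3.733, 0.995 -- matching Table 1 up to the
    # rounding of the quoted event counts.
    print(label, f"{significance(s, B_CUT5):.3f}")
```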
\begin{table*}[htbp]
\caption{Effects of individual kinematic cuts on the signals and backgrounds for $M_{B_1}$ = 800 GeV, \hspace*{1.65cm}$M_{B_2}$ = 1000 GeV, $M_{B_3}$ = 1200 GeV with $\kappa^b$=0.1.}
\hspace{15cm}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{8}{|c|}{ LHC @14~TeV, ~$\mathcal{L}$ = 300 $\rm fb^{-1}$}\\
\hline
\multicolumn{1}{|c|}{}
&\multicolumn{3}{c|}{Signals(S)}
&\multirow{2}*{Background(B)}
&\multicolumn{3}{c|}{\multirow{1}*{$S/\sqrt{S+B}$}}\\
\cline{2-4}
\cline{6-8}
&$M_{B_1}$&$M_{B_2}$&$M_{B_3}$&&$M_{B_1}$&$M_{B_2}$&$M_{B_3}$\\
\hline
Basic cuts & 607.4 &147.4&36.3 &$1.31\times10^6$&0.53&0.13&0.03 \\
\hline
$\rm{Cut\!-\!1}$ & 493.1&121.3&29.2&$2.47\times10^4$&3.11&0.77&0.19\\
\hline
$\rm{Cut\!-\!2}$& 486.6 &120.2&29.0&$1.92\times10^4$&3.46&0.86&0.21 \\
\hline
$\rm{Cut\!-\!3} $ & 449.9&114.1&27.6&6385.2&5.44&1.42&0.34 \\
\hline
$\rm{Cut\!-\!4} $&378.6&101.5&24.9 &4312.1&5.53&1.53&0.38 \\
\hline
$\rm{Cut\!-\!5} $&294.0 &96.9&24.4&576.9&9.96&3.73&0.996 \\
\hline
\end{tabular}
\label{table1}
\end{table*}
\begin{figure}[htbp]
\centering
\subfigure{
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=1\textwidth]{figure6.pdf}
\end{minipage}
}
\caption{ 3$\sigma$ and 5$\sigma$ curves in the $M_B$-$\kappa^b$ plane for the process $pp\rightarrow B \rightarrow Zb$ at the 14 \hspace*{1.65cm} TeV LHC with the integrated luminosity $\mathcal{L}$ = 300 $\rm fb^{-1}$.}
\label{fig6}
\end{figure}
Based on the above benchmark points, the exclusion limits of the free parameters $M_{B}$ and $\kappa^{b}$ are shown in Fig.~\ref{fig6}.
We can see that the values of $\kappa^b$ can reach about 0.05~(0.07), 0.10~(0.12), and~0.15~(0.20) for $M_B$=800, 1000, and 1200 GeV, respectively, at the 3$\sigma$~(5$\sigma$) level at the 14 TeV LHC with an integrated luminosity $\mathcal{L}$ = 300 $\rm fb^{-1}$. The detectable lower limits of the parameter $\kappa^b$ increase with increasing $M_B$. Thus, with upgraded energy and higher luminosity, the signals of VLQ-$B$ might be detected through the process $pp\rightarrow B \rightarrow Zb$ at the LHC in the near future.
\section*{IV. Conclusions}
VLQs are present in many new physics models beyond the SM and can be produced singly or in pairs at the LHC. When the VLQ mass is large enough, the single production processes become important; they can be used not only to detect the possible signals of a VLQ but also to test its mixing with the SM quarks. In this letter, we first discuss all possible decay modes and single production channels of the $SU(2)$ singlet VLQ-$B$ at the LHC in a model-independent way. Then we investigate its observability via the process $pp \to B \to Z~(\to l^+ l^-) b$ at the 14 TeV LHC. Our numerical results show that the production cross section of the process $pp \to B \to Z b$ depends strongly on the free parameters $\kappa^b$ and $M_B$. Its value is in the range of $3.09\times10^{-4} \sim 276.72~ \rm pb$ for $0 \leq \kappa^b \leq 0.5$ and 300 GeV $<M_B<$ 2000 GeV. We focus our attention on the analysis of the signals of VLQ-$B$ through the $l^+l^-b$ final state, where $l^\pm$ is $e^\pm$ or $\mu^\pm$. After simulating the signals as well as the SM backgrounds and applying suitable kinematic cuts on the relevant variables, we find that the signal significance $SS$ reaches 9.96, 3.73, and 0.996 for $\kappa^b=0.1$ and $M_B$=800, 1000, and 1200 GeV, respectively. Certainly, the value of $SS$ increases with $\kappa^b$ for a fixed value of $M_B$. We further study the detectable lower limits of the free parameters $M_{B}$ and $\kappa^{b}$: the values of $\kappa^b$ can reach about 0.05~(0.07), 0.10~(0.12), and~0.15~(0.20) for $M_B$=800, 1000, and 1200 GeV, respectively, at the 3$\sigma$~(5$\sigma$) level at the 14 TeV LHC with an integrated luminosity $\mathcal{L}$ = 300 $\rm fb^{-1}$.
Thus, we expect that the possible signals of the VLQ-$B$ with electric charge of $Q_B= -1/3$, which is the $SU(2)$ singlet, could be detected via the process $pp \to B \to Z~(\to l^+ l^-) b$ at the upgraded LHC.
\section*{ACKNOWLEDGEMENT}
This work was partially supported by the National Natural Science Foundation of China under Grants No. 11875157, No. 11847303, and No. 11847019.
\section{Introduction}
\label{introduction}
It is well known that a locally causal hidden-variable theory of quantum physics need not be constrained by the Bell inequalities if this theory also violates the measurement independence condition
\be
\label{mip}
\rho(\lambda| a, b)= \rho (\lambda| a', b')
\ee
for all $a, a', b, b'$. Here $\rho(\lambda| a, b)$ denotes a probability density function over some hidden-variable $\lambda$, conditional on the orientations of Alice and Bob's measuring apparatuses, represented by the points $a$ and $a'$ (for Alice), and $b$ and $b'$ (for Bob), on the unit sphere. In fact, it may only be necessary to violate (\ref{mip}) partially \cite{Hall:2010} \cite{Hall:2011}, suggesting that this might be a fruitful area to explore in order to create viable realistic causal theories of fundamental physics. However, an important objection to any proposed violation of measurement independence is that it can give rise to correlations that seemingly have no basis in physical theory, suggesting some implausible conspiracy operating in the universe at large.
Consider the example proposed by Bell himself \cite{Bell} where a measurement orientation is set by some pseudo-random number generator (PRNG). The output of the PRNG is determined by, and depends sensitively on, some input variable. For example, the value of the most significant digit of the output variable may depend on the value of the millionth digit of the input variable. In Bell's own words:
\begin{quote}
\ldots this particular piece of information is unlikely to be the vital piece for any distinctively different purpose, i.e. is otherwise rather useless.
\end{quote}
Hence, if we seek to violate measurement independence, even partially, then we must explain how the value of this millionth digit can somehow be correlated with the value of the hidden variable of the particles being measured. This becomes especially problematic when we realise that the PRNG may be run months (or indeed billions of years, see \cite{Gallicchio:2014}) before the particles being measured are actually produced as part of the experimental procedure. One proposal to explain such correlations is retrocausality \cite{Price:1997}.
Here, an alternative and novel way to understand these correlations is proposed, based on the concept of global invariant sets in state space, and fractal invariant sets in particular. Importantly, local properties of such sets cannot be determined by finite algorithm - they are formally non-computational. Whilst generic to a broad class of nonlinear dynamical systems, fractal invariant sets are not especially familiar concepts to many quantum physicists and philosophers. Hence, in Section \ref{cat}, to illustrate the power of the global invariant set in providing a new perspective on this long-studied problem, we consider a simple analogy: a gedanken experiment in which, depending on whether a radioactive atom decays, a massive object is fired into a black hole. We consider the temporal correlation between the size of the black hole at $t_0$, as determined by the radius of its event horizon $\mathscr H ^+ (t_0)$, and the (supposed) hidden variable of the radioactive atom at $t_1$. The existence of such a correlation, even when $t_0 \ll t_1$, is easy to understand (without conspiracy or retrocausality) if we take into account the fact that $\mathscr H ^+$ is a globally defined invariant subset of space-time. In particular, like the fractal invariant set, $\mathscr H^+(t_0)$'s local properties cannot be defined from the neighbouring space-time geometry near $t_0$.
Two properties of fractal attracting invariant sets in state space are discussed in Section \ref{cantor}: their non-computability, and the notion that such sets are, in a well defined sense, `large' when looked at from the inside, yet `small' when looked at from the outside. The key thesis which underpins the subsequent analysis in this paper is then proposed: that states of physical reality lie precisely on a (fractal) invariant set in the state space of the universe - the Cosmological Invariant Set Postulate. In Section \ref{CHSH} it is shown how a development of these ideas, referred to as Invariant Set Theory, can nullify the CHSH version of the Bell Theorem, determinism and causality notwithstanding. In Section \ref{conspiracy} it is shown that the proposed nullification suffers from none of the familiar objections: it is not conspiratorial, it does not need retrocausality, the theory is not superdeterministic or fine tuned, it is robust under small perturbations, and experimenters are not prevented from measuring particle spins relative to any direction they may wish to measure. Despite this, the Cosmological Invariant Set Postulate is manifestly not classical. Based on the discussion in this paper, in Section \ref{conclusions}, some remarks are made about the relevance of the complex Hilbert Space as a description of physical reality.
\section {Schr\"{o}dinger's Black Cat}
\label{cat}
Consider the following simple gedanken experiment. If a given radioactive atom decays (within a certain period of time) then a massive object - which could in principle be a cat in a box - is propelled into a black hole. If the radioactive atom does not decay, the massive object remains some distance from the black hole in question.
The mass and hence size of the black hole are therefore dependent on the decay of the radioactive atom. As shown in Fig \ref{fig:blackhole}, let the size of the black hole in the case where the radioactive atom does not decay be equal to the radius $r(t)$ of the event horizon $\mathscr H ^+$, where $t$ labels a family of spacelike hypersurfaces which intersect $\mathscr H ^+$. Fig \ref{fig:blackhole} shows a second null surface lying at a radius $r'(t)> r(t)$. If the radioactive atom does decay at time $t_1$ then whilst for $t<t_1$ it appears that this null surface will escape to future null infinity, $\mathscr I ^+$, in fact after the object has fallen into the black hole, increasing its mass, this null surface becomes trapped. The second null surface can be assumed to correspond to the event horizon of the black hole in the case where the atom decays. Hence at $t_0<t_1$ the size of the black hole is bigger if the atom decays at $t_1$ than if it doesn't. If the mass of the `cat+box' is large enough, then even at times $t_0 \ll t_1$, it is possible that $r'(t_0) \gg r(t_0)$.
\begin{figure}
\centering
\includegraphics[scale=0.3]{blackhole_cat}
\caption{A massive object is propelled into a black hole if a radioactive atom decays within a certain period of time around $t_1$. Assuming some underpinning hidden variable theory, the size of the black hole at $t_0< t_1$ depends on whether the radioactive atom decays and hence on the value of the atom's hidden variables. If the size of the black hole could be estimated from the space-time geometry in the neighbourhood of the black hole at $t_0$, then this dependency would appear to imply either some implausible conspiracy or some form of retrocausality. However, since the size of the black hole is determined by the distance of the event horizon $\mathscr H ^+$ from the centre of the black hole, and since $\mathscr H^+$ is itself defined by a global space-time condition (the boundary of null surfaces that extend to $\mathscr I ^+$), neither conspiracy nor retrocausality is needed to explain this dependency.}
\label{fig:blackhole}
\end{figure}
In the conventional language of quantum theory (assuming the whole system is unobserved), the black hole at the earlier time $t_0$ must be considered a linear superposition of black holes of size $r$ and $r'$. Here we pursue the possibility that quantum physics is indeed underpinned by some causal hidden-variable theory. Hence, let the decay of the radioactive atom at $t_1$ be determined by the value of some hidden variable $\lambda$. Then, according to the discussion above, the size of the black hole at $t_0 < t_1$ is both well-defined (i.e. not a superposition of sizes) and correlated with $\lambda$ at $t_1$. Instead of a radioactive atom decay, the release of the box could be determined by an experimenter. In this case, the size of the black hole at $t_0$ will be the result of the experimenter's decision at $t_1$.
A physicist who believed that the size of the black hole at $t_0$ can be determined from the space-time geometry in some neighbourhood of $\mathscr H^+$ at $t_0$ would be faced with a dilemma. Somehow, the black hole would have to `know' to be the right size at $t_0$ in order to anticipate the decay or non-decay of the atom, or alternatively the experimenter decision, at $t_1$. The apparent dilemma is compounded by the fact that the photons which propagate on $\mathscr H^+$ never interact with the radioactive atom (or experimenter) at $t_1$. To avoid requiring some implausible conspiracy, the physicist may conclude that there may be a retrocausal effect by which information, associated with the decay of the radioactive atom, propagates back in time causing the black hole event horizon to expand at earlier times.
However, in this case there is a simple explanation for the correlation which requires neither conspiracy nor retrocausality. The explanation lies in the fact that $\mathscr H^+$ is defined by a global property of space-time, it being the boundary of null surfaces that escape to $\mathscr I^+$. Consistent with this, the location $r(t_0)$ of $\mathscr H^+$ cannot be determined by the geometry of space time in the neighbourhood of $\mathscr H^+$ at $t_0$. Hence, the correlation is explainable without conspiracy or retrocausality.
\section{Global Invariant Sets in State Space}
\label{cantor}
Instead of the global geometric constructions in space-time, we now consider global geometric constructions in state space.
Consider a dynamical system $\mathcal D: X_{t_0} \rightarrow X_{t_0+t}$, defined, say, by a set of differential equations $\dot X=F[X]$. An attracting invariant set $I_{\mathcal D}$ is a closed subset of the state space of $\mathcal D$ with the properties (see e.g. \cite{Strogatz}):
\begin{itemize}
\item If $X_{t_0} \in {I}_{\mathcal D}$ then $X_{t_0+t} \in {I}_{\mathcal D}$ for all $t$.
\item $I_{\mathcal D}$ attracts an open set of initial conditions. That is, there is an open neighbourhood $\mathcal U$ containing ${I}_{\mathcal D}$ such that if $X_{t_0} \in \mathcal U$, then the distance from $X_{t_0+t}$ to $I _ {\mathcal D}$ tends to zero as $t \rightarrow \infty$. The largest such $\mathcal U$ is called the basin of attraction of $I_{\mathcal D}$.
\item There is no proper subset of ${I}_{\mathcal D}$ having the first two properties.
\end{itemize}
Just as $\mathscr H^+$ is a subset of space-time defined globally, so $I_{\mathcal D}$ is a subset of state space defined globally. One can readily give simple examples of dynamical systems with attracting invariant sets. Consider for example the dynamical system whose evolution equations are
\begin{equation}
\dot r=r(1-r^2) \;\;\;\; \dot \theta=1
\end{equation}
in polar coordinates $(r,\theta)$. It is easily shown that all trajectories spiral asymptotically towards a limit cycle at $r=1$. The attracting invariant set is the subset of points where $r=1$. More generally, there is a generic class of nonlinear dynamical systems where the attracting invariant sets are not such simple subsets, but are fractal, e.g. the Lorenz attractor $I_L$ \cite{Lorenz:1963}, derived from the equations
\begin{eqnarray}
\label{lorenz}
\dot X &=& -\sigma X + \sigma Y \nonumber \\
\dot Y &=& -XZ+rX-Y \nonumber \\
\dot Z &=& \;\;\;XY -bZ
\end{eqnarray}
where $\sigma, r, b >0$. For such dynamical systems there is no formula defining the invariant set. Consider then the question of determining whether a given point in state space lies in $I_L$. One may start with a state known to be in the basin of attraction of $I_L$. However, one would have to integrate the evolution equations for an infinite time before one could be sure that the state had evolved onto the invariant set. One would then integrate for a further time, comparing the evolved state with the chosen point in state space, and stopping if the two were identical. This cannot be achieved by a finite algorithm and one must therefore conclude that the fractal invariant set is not computational. More formally, Blum et al \cite{Blum} have shown that for decision processes such as this to be solved in finite time, the invariant set must have integral Hausdorff dimension. By definition, fractal attracting invariant sets do not have integral Hausdorff dimension. Dube \cite{Dube:1993} has shown that some of the classic problems in computation theory that cannot be solved by algorithm, can be recast in terms of fractal geometry. For example, the Post Correspondence Problem is equivalent to that of determining whether a given line intersects the attracting invariant set of an Iterative Function System.
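The situation can be made concrete with a purely illustrative numerical sketch (the function names and tolerances below are our own): integrating (\ref{lorenz}) with a standard Runge-Kutta stepper from a point in the basin of attraction keeps the state in a bounded region near $I_L$, yet no finite number of integration steps can certify that the state lies exactly on the fractal set.

```python
# Illustrative sketch: Euler/RK4 integration of the Lorenz system
# (sigma = 10, r = 28, b = 8/3), showing attraction towards I_L.
def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), -x * z + r * x - y, x * y - b * z)

def rk4_step(f, s, dt):
    # classical fourth-order Runge-Kutta step
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt * (a + 2 * b + 2 * c + d) / 6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)        # a point in the basin of attraction of I_L
for _ in range(20000):         # transient: the state approaches I_L ...
    state = rk4_step(lorenz, state, 0.001)
# ... and thereafter remains in a bounded region near the attractor, but no
# finite number of steps can decide whether it lies *on* the fractal set
assert all(abs(c) < 60 for c in state)
```

The assertion checks only boundedness; deciding exact membership of $I_L$ would, as discussed above, require infinite integration time.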
A fractal invariant set can locally be considered the Cartesian product $\mathbb R \times \mathcal C$ where $\mathcal C$ denotes some (multidimensional) Cantor set. The simplest example of a Cantor Set is the ternary set $\mathcal T$, based on the intersection of all iterates $\mathcal T_k$, where $\mathcal T_{k}$ comprises two copies of $\mathcal T_{k-1}$, each copy reduced by a factor of $1/3$. Generalisations of $\mathcal T$ are needed to define invariant set geometries which exhibit quantum probability structure \cite{Palmer:2014} \cite{Palmer:2015b}. Key properties shared by all Cantor sets are the following:
\begin{itemize}
\item Relative to the full measure of the Euclidean space in which they are embedded, Cantor Sets have measure zero (i.e. are `small' when looked at from the outside).
\item Despite this, Cantor sets have the cardinality of the continuum (i.e. are `large' when looked at from the inside).
\end{itemize}
For example, elements of $\mathcal T$ can be represented by the numbers $0 \le r \le 1$ whose base-3 expansion contains no digit `1'. The probability that a randomly chosen number on the interval $[0,1]$ has such a base-3 expansion is equal to zero. However, if we take the base-3 representation of a point in $\mathcal T$ and replace all occurrences of `2' with the digit `1', and then interpret that number as a real number in binary form, it can be seen that there are as many points in $\mathcal T$ as there are points in $[0,1]$. The ternary set $\mathcal T$ can be readily generalised, firstly by extending into multiple state-space dimensions (e.g. the Menger Sponge) and secondly by considering restrictions on base-$N$ rather than base-$3$ numbers, where $N>3$. In this way, for large enough $N$ it is possible to construct multidimensional fractals which, whilst still measure zero in their embedding space, do not `appear' at all gappy (or lacunar \cite{Mandelbrot}). These are relevant in the discussion below. It should also be noted that a Cantor Set is an example of what, mathematically, is referred to as a `perfect set'. That is to say, any neighbourhood (in the embedding space) of any point of the Cantor Set contains another point of the Cantor Set - i.e. the Cantor Set has no isolated points. This is relevant in the discussion about `superdeterminism' below.
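Both properties of $\mathcal T$ can be sketched with exact rational arithmetic (an illustrative sketch; the helper names are our own, and the membership test works to a finite digit depth only, so boundary points with a terminating expansion may be misclassified):

```python
from fractions import Fraction

def in_cantor(x, depth=30):
    # Check, to finite depth, whether x in [0,1] has a base-3 expansion
    # avoiding the digit 1 -- i.e. lies in the ternary set T.
    x = Fraction(x)
    for _ in range(depth):
        digit = int(x * 3)      # leading base-3 digit
        if digit == 1:
            return False
        x = x * 3 - digit       # shift to the next digit
    return True

def cantor_to_unit_interval(digits):
    # The 2 -> 1 digit replacement, read in binary: the surjection onto
    # [0,1] showing T has the cardinality of the continuum.
    return sum(Fraction(d // 2, 2 ** (i + 1)) for i, d in enumerate(digits))

assert in_cantor(Fraction(1, 4))       # 1/4 = 0.020202..._3 lies in T
assert not in_cantor(Fraction(1, 2))   # 1/2 = 0.111..._3 does not
assert cantor_to_unit_interval([2, 0, 2, 0]) == Fraction(5, 8)
```

The last assertion maps the base-3 digits $2020$ to binary $0.1010 = 5/8$, illustrating the `large from the inside' property.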
There is an important technique for reconstructing an attracting invariant set using data from a (low-dimensional) dynamical system: the Takens Embedding Theorem \cite{Takens:1981}. It is based on the notion that the dynamics in the full state space can be reconstructed, using time delays, from measurements of just a single component of the state space. This fact has important conceptual implications, discussed later.
We apply these concepts to fundamental physics through the Cosmological Invariant Set Postulate \cite{Palmer:2014}: states of physical reality lie precisely on a fractal attracting invariant set $I_U$ of measure zero in the state space of the universe $U$, considered as a dynamical system. A crucial question to ask is: What fundamental physical process could account for the attraction of neighbouring state-space trajectories onto $I_U$? Here we appeal to ideas put forward by Roger Penrose (e.g. \cite{Penrose:2010}), that the black hole no-hair theorem (and consequent quantum black-hole information loss) should imply the existence of regions of state-space trajectory convergence. The idea is illustrated schematically in Fig \ref{fig:bigbang}, which shows the trajectory of the universe in its state space over a cycle of an assumed quasi-cyclic cosmology. By the Cosmological Invariant Set Postulate, this trajectory lies precisely on $I_U$. Shown in Fig \ref{fig:bigbang} are regions of state space (containing black holes) where neighbouring trajectories are converging onto $I_U$. It should be noted that these convergent trajectories do not lie on $I_U$, leading to some important new perspectives on the arrow of time problem that will be addressed elsewhere.
\begin{figure}
\centering
\includegraphics[scale=0.4] {bigbang}
\caption{A schematic illustration of the evolving trajectory of the universe $U$ in its state space, within a quasi-cyclic cosmology. The Cosmological Invariant Set Postulate states that these trajectories are evolving on a measure-zero fractal attracting invariant subset $I_U$ which defines $U$'s dynamical evolution. In regions of state space containing black holes, the black hole no-hair theorem implies that state space trajectories are convergent (consistent with discussions by Penrose \cite{Penrose:2010}).}
\label{fig:bigbang}
\end{figure}
\section{The Bell Theorem}
\label{CHSH}
The relevance of the Cosmological Invariant Set Postulate to the Bell Theorem can now be discussed. The key conceptual issues are related to the properties in the two bullets above:
\begin{itemize}
\item $I_U$ is sufficiently small from the perspective of the embedding space that the counterfactual experiments needed to establish the Bell inequality do not lie on $I_U$ and therefore, by postulate, do not correspond to states of physical reality.
\item $I_U$ is sufficiently large that these restrictions do not pose any practical constraints on which experiments experimenters might want to perform. Related to this, the theory which arises out of the Cosmological Invariant Set Postulate cannot be said to be `superdeterministic' or fine tuned in any meaningful sense.
\end{itemize}
Before discussing these issues in detail, we give a brief summary of some ongoing technical developments to describe the geometry of $I_U$ using algebraic techniques - something referred to below as `Invariant Set Theory' (see \cite{Palmer:2014} \cite{Palmer:2015b}).
\subsection{Some Technical Preliminaries}
As discussed above, locally, $I_U$ can be written as $\mathbb{R} \times \mathcal C$ where $\mathcal C$ is a Cantor Set. That is to say, locally $I_U$ is a Cantor Set of trajectories, each trajectory representing a cosmological space-time. Using the language of symbolic dynamics \cite{Williams} and recognising Schwinger's \cite{Schwinger} symbolic algebraic approach to quantum mechanics, a theoretical framework is developed in \cite{Palmer:2015b} which defines a one-to-one mapping (which, importantly, is not a surjection) between a symbolic representation of fractal trajectory bundles on $I_U$ and the (multi-qubit) complex Hilbert Space of quantum theory.
For one qubit, this correspondence can be written
\begin{equation}
\label{correspondence}
\mathbf{E}^\alpha_\beta(000\ldots 0)^T \sim
\cos\frac{\theta}{2}\;|0\rangle + \sin\frac{\theta}{2} e^{i \phi}\;|1\rangle
\end{equation}
where $\mathbf{E}^\alpha_\beta(000\ldots 0)^T$ is a bit string of length $2^N$ comprising the symbols '0' and '1', $N$ is a large but finite integer from which the fractal dimension of $I_U$ is determined, and the superscript `T' denotes matrix transpose. From the perspective of Invariant Set Theory, the symbols `0' and `1' describe discrete attracting subsets of $I_U$; from a quantum theoretic perspective these symbols describe measurement outcomes\footnote{In \cite{Palmer:2015b} these attracting subsets are considered state-space manifestations of gravitational `clumpiness' - thus linking Invariant Set Theory to descriptions of quantum measurement as a fundamentally gravitational process}. From the perspective of Invariant Set Theory, the bit strings $\mathbf{E}^\alpha_\beta(000\ldots 0)$ describe a bundle of trajectories belonging to a particular fractal iterate, $\mathcal C_k$ of $\mathcal C$; from a quantum theoretic perspective, these bit strings define the sample spaces from which measurement probabilities are defined. In Invariant Set Theory, $\mathbf{E}^\alpha_\beta$ denotes a two parameter family of negation/permutation operators acting on bit strings, with the multiplicative property of quaternions (and hence Pauli Spin matrices). In particular, the frequency of occurrence of the symbol `0' in $\mathbf{E}^\alpha_\beta(000 \ldots 0)^T$ is equal to $|1-\alpha/2|$. Importantly, $\alpha$ and $\beta$ are only defined if their binary representations can be expressed with less than $N$ bits. In particular, $\mathbf{E}^\alpha_\beta$ is undefined if either $\alpha$ or $\beta$ is irrational.
These numbers relate to complex Hilbert Space parameters through the relationships
\begin{align}
\label{incom}
\cos^2 \theta/2&= |1-\alpha/2| \nonumber \\
\phi&=\pi\beta/2
\end{align}
Hence, the frequency of occurrence of the symbol `0' in $\mathbf{E}^\alpha_\beta(000 \ldots 0)^T$ is equal to $\cos^2\theta/2$, and, for example, the eigenstates $|0\rangle$ and $|1 \rangle$ in quantum theory correspond to the bit strings $\mathbf{E}^0_\beta (000 \ldots 0)=(000 \ldots 0)$ and $\mathbf{E}^2_\beta (000 \ldots 0)=(111 \ldots 1)$ respectively. The fractal structure of $I_U$ becomes relevant when considering multiple sequential spin measurements. Since $\alpha$ and $\beta$ must be expressible with $N$ bits, so too must $\theta$ and $\phi$. This means that, according to Invariant Set Theory, complex Hilbert Space vectors where either $\theta$ or $\phi$ is not describable by $N$ bits (e.g. is irrational) do not correspond to a sample space of trajectories on the invariant set $I_U$, and therefore, by the Cosmological Invariant Set Postulate, do not correspond to a sample space comprising elements of physical reality. Put another way, such Hilbert Space vectors describe physically unreal counterfactual experiments.
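The counting constraint behind this can be sketched as follows. This is only a toy stand-in, not the $\mathbf{E}^\alpha_\beta$ operators of \cite{Palmer:2015b}: a bit string of length $2^N$ can realise the frequency $|1-\alpha/2|$ of the symbol `0' exactly only when that frequency is a multiple of $2^{-N}$, i.e. when $\alpha$ is describable by finitely many binary digits.

```python
from fractions import Fraction

N = 10                    # the fractal parameter (illustrative value)
LENGTH = 2 ** N

def born_bit_string(alpha):
    # Toy sketch (NOT the E^alpha_beta operators of the cited paper):
    # a length-2^N string whose frequency of '0' is exactly |1 - alpha/2|.
    # This is realisable only if that frequency is a multiple of 2^{-N}.
    freq = abs(1 - Fraction(alpha) / 2)
    num_zeros = freq * LENGTH
    if num_zeros.denominator != 1:
        raise ValueError("alpha not describable by N bits")
    return '0' * int(num_zeros) + '1' * (LENGTH - int(num_zeros))

s = born_bit_string(Fraction(1, 2))   # freq of '0' is |1 - 1/4| = 3/4
assert s.count('0') == 3 * LENGTH // 4

try:
    born_bit_string(Fraction(1, 3))   # 1/3 has no finite binary expansion
except ValueError:
    pass                              # undefined, as in the text
```

The `undefined' branch mirrors the statement above that $\mathbf{E}^\alpha_\beta$ is undefined when $\alpha$ cannot be expressed with $N$ bits.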
The relationship (\ref{correspondence}) can be readily extended to $M \ge 1$ qubits, corresponding to $M$ bit strings based on multiple copies of the quaternion operators $\mathbf{E}^\alpha_\beta$. We refer the reader to reference \cite{Palmer:2015b} for details.
We can now discuss how precisely invariant set theory fails to be constrained by the Bell inequality, and the CHSH \cite{CHSH} version of it in particular (similar arguments apply to an analysis of the original Bell inequalities \cite{Palmer:2014}).
\subsection{CHSH}
Alice and Bob each have a measuring device which has two orientation settings ($a_1$ and $a_2$ for Alice, and $b_1$ and $b_2$ for Bob) and two `outputs', say $\pm1$. Let the orientations $a_1$, $a_2$, $b_1$ and $b_2$ be represented by four points on the unit sphere (see Fig \ref{fig:chsh}). A conventional causal hidden-variable theory is constrained by the CHSH inequality
\begin{equation}
\label{chsh}
|\mathrm{Corr}_{\Lambda}(a_1, b_1) - \mathrm{Corr}_{\Lambda}(a_1, b_2)|+|\mathrm{Corr}_{\Lambda}(a_2, b_1)+\mathrm{Corr}_{\Lambda}(a_2, b_2)| \le 2
\end{equation}
where each correlation is defined by
\be
\label{correlation}
\mathrm{Corr}_{\Lambda}(a_i, b_j)=\sum_{k} A(a_i, \lambda_k) B(b_j, \lambda_k)
\ee
where $A=\pm1$ and $B=\pm1$ denote deterministic (hidden-variable) functions and $\Lambda=\{\lambda_k\}$. As is well known, according to experiment and consistent with quantum theory, with $a_1 \approx 0^\circ$, $a_2 \approx 90^\circ$, $b_1 \approx 45^\circ$, $b_2 \approx 135^\circ$, the left hand side of (\ref{chsh}) sums to about $2\sqrt{2} \approx 2.8$, in clear violation of (\ref{chsh}). Does a causal hidden-variable theory constrained by the Cosmological Invariant Set Postulate necessarily obey the CHSH inequality? According to the discussion above, for Invariant Set Theory to necessarily obey the CHSH inequality, the cosine of the angular distance between any pair of the four points $a_1, a_2, b_1, b_2$ must be describable by $O(N)$ bits. If this is possible, then the sample spaces which generate the correlations $\mathrm{Corr}_{\Lambda}(a_1, b_1), \mathrm{Corr}_{\Lambda}(a_1, b_2), \mathrm{Corr}_{\Lambda}(a_2, b_1)$ and $\mathrm{Corr}_{\Lambda}(a_2, b_2)$ can be associated with bit strings of the type $\{s_1, s_2, \ldots s_{2^N}\}$, where $s_i \in \{0,1\}$; here the symbol $0$ denotes a situation where $A \times B=1$ (Alice and Bob agree) and the symbol $1$ denotes a situation where $A \times B = -1$ (Alice and Bob disagree).
\begin{figure}
\centering
\includegraphics[scale=0.7]{CHSH}
\caption{$a_1, a_2, b_1, b_2$ are four points on the sphere, representing directions associated with measurement orientations from which the left hand side of the CHSH inequality (\ref{chsh}) can be estimated. We focus on a particular $\lambda \in \Lambda$. Let the angular length between $a_i$ and $b_j$ be $\theta_{a_ib_j}$. According to Invariant Set Theory, in order that $\mathrm{Corr}_{\Lambda}(a_i, b_j)$ in (\ref{chsh}) be definable, $\cos \theta_{a_ib_j}$ must be base-2 rational. As discussed in the text, $\cos \theta_{a_1a_2}$ and $\cos \theta_{b_1b_2}$ are necessarily base-2 rational. Then, using the cosine rule for spherical triangles, it is impossible for the sides of any of the spherical triangles whose apexes are drawn from $\{a_1, a_2, b_1, b_2\}$ to all have angular lengths whose cosines are base-2 rational. Based on this, the left hand side of the CHSH inequality is not definable from a common sample space of hidden variables, and Invariant Set Theory is therefore not constrained by the CHSH inequality.}
\label{fig:chsh}
\end{figure}
However, it cannot be the case that the cosines of the angular distances between all pairs of the points $a_1, a_2, b_1, b_2$ are base-2 rational (and hence describable by $N$ bits). To see this, first of all note that $\cos \theta_{a_1a_2}$ must be describable by $N$ bits (where $\theta_{a_1a_2}$ denotes the relative orientation between $a_1$ and $a_2$). The reason for this is that it is possible for Alice to measure the spin of a particle with her apparatus oriented with respect to $a_1$, and, with the spin now prepared in the $a_1$ direction, to measure the same particle again with the apparatus oriented in the $a_2$ direction. For an experiment where a particle is prepared in the state associated with $a_1$ and measured in the state associated with $a_2$, then according to Invariant Set Theory, $\cos\theta_{a_1a_2}$ must be describable by $N$ bits. Similarly, $\cos \theta_{b_1b_2}$ must be describable by $N$ bits.
As discussed below, Alice and Bob can be considered, for all practical purposes, free agents. Therefore they can choose, without constraint, any of the following possibilities $\{a_1, b_1\}$,
$\{a_1, b_2\}$, $\{a_2, b_1\}$, $\{a_2, b_2\}$. Each of these choices can be associated with a possible invariant set $I_{U_1}$, $I_{U_2}$, $I_{U_3}$ or $I_{U_4}$ respectively. Suppose, without loss of generality, Alice and Bob (independently) choose $a_1$ and $b_1$ respectively, implying that $\cos\theta_{a_1b_1}$ is describable by $N$ bits. By the cosine rule for spherical triangles
\be
\label{triangle}
\cos \theta_{a_2b_1}=\cos \theta_{a_1a_2} \cos \theta_{a_1b_1}+\sin \theta_{a_1a_2} \sin \theta_{a_1b_1} \cos \phi
\ee
where $0<\phi< \pi/2$ is the (typically small) angle subtended by the two sides of $\triangle_{a_1a_2b_1}$ at $a_1$, and $\phi/\pi$ is describable by $N$ bits (see Fig \ref{fig:chsh}). Now the first term on the right hand side of (\ref{triangle}) is the product of two terms, each of which is describable by $N$ bits; hence the product is base-2 rational. However, by a simple number theorem, $\cos \phi$ cannot be base-2 rational. Hence the left hand side of (\ref{triangle}) cannot be base-2 rational, and in particular cannot be describable by $N$ bits, no matter how big $N$ is. The theorem is:
$\mathbf{Theorem}$\cite{Jahnel:2005}. Let $\phi/\pi \in \mathbb{Q}$. Then $\cos \phi \notin \mathbb{Q}$ except when $\cos \phi =0, \pm 1/2, \pm 1$.
$\mathbf{Proof}$. We argue by \emph{reductio ad absurdum}. Assume that $2\cos \phi = a/b$ is rational, where $a, b \in \mathbb{Z}$, $b \ne 0$, have no common factors. Using the identity $2 \cos 2\phi = (2 \cos \phi)^2-2$ we have
\be
2\cos 2\phi = \frac{a^2-2b^2}{b^2}
\ee
Now $a^2-2b^2$ and $b^2$ have no common factors, since if $p$ were a prime number dividing both, then $p|b^2 \implies p|b$ and $p|(a^2-2b^2) \implies p|a$, a contradiction. Hence if $b \ne \pm1$, then the denominators in $2 \cos \phi, 2 \cos 2\phi, 2 \cos 4\phi, 2 \cos 8\phi \dots$ get bigger and bigger without limit. On the other hand, with $0 < \phi/\pi < 1/2 \in \mathbb{Q}$, then $\phi/\pi=m/n$ where $m, n \in \mathbb{Z}$ have no common factors. This implies that the sequence $(2\cos 2^k \phi)_{k \in \mathbb{N}}$ admits at most $n$ values. Hence we have a contradiction. Hence $b=\pm 1$ and $\cos \phi =0, \pm1/2, \pm1$. QED.
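The denominator-growth step of the proof can be watched directly with exact rational arithmetic (an illustrative sketch using a hypothetical rational value of $2\cos\phi$):

```python
from fractions import Fraction

# The doubling identity behind the proof: 2cos(2*phi) = (2cos(phi))^2 - 2.
# If 2cos(phi) = a/b in lowest terms with b != +-1, the denominators of
# 2cos(2^k * phi) grow without bound, so the sequence cannot take finitely
# many values -- contradicting phi/pi rational.
x = Fraction(2, 3)            # a hypothetical rational value of 2cos(phi)
denominators = []
for _ in range(5):
    denominators.append(x.denominator)
    x = x * x - 2             # exact: Fraction keeps lowest terms
assert denominators == [3, 9, 81, 6561, 43046721]
```

Each squaring squares the denominator (no cancellation occurs, by the coprimality argument above), so the values $2\cos 2^k\phi$ are all distinct, giving the contradiction.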
Hence if in reality Alice chooses setting $a_1$ and Bob chooses setting $b_1$ when measuring the spins of a particle pair described by the hidden-variable $\lambda$, then the actual invariant set is $I_{U_1}$, and a counterfactual universe where Alice chooses setting $a_2$ and Bob chooses setting $b_1$ lies off $I_{U_1}$. Conversely, if in reality Alice chooses $a_2$ and Bob $b_1$, then the invariant set is $I_{U_3}$ and a counterfactual universe where Alice instead chooses $a_1$ and Bob $b_1$ lies off $I_{U_3}$. In general, only two of the four correlations in (\ref{chsh}) are definable for given $\lambda$, no matter what choices Alice and Bob make.
If only two of the four correlations are defined for any $\lambda$, what happens when experiments show that the inequalities are violated? The key point that distinguishes experimental procedure from the theoretical analysis above is that in an experimental test of the CHSH inequality the four correlation functions in (\ref{chsh}) are evaluated using four separate sub-experiments (implying four disjoint sets of hidden variables). According to Invariant Set Theory, each sub-experiment must be associated with a state of the universe on an invariant set $I_U$, i.e. the cosines of the relative angles must all be definable by $N$ bits. That is to say, according to Invariant Set Theory, what is actually tested experimentally is not (\ref{chsh}) but something of the form
\begin{equation}
\label{chsh2}
|\mathrm{Corr}_{\Lambda_1}(a_1, b_1) - \mathrm{Corr}_{\Lambda_2}(a'_1, b_2)|+|\mathrm{Corr}_{\Lambda_3}(a_2, b'_1)+\mathrm{Corr}_{\Lambda_4}(a'_2, b'_2)| \le 2
\end{equation}
where $\Lambda_1 \ne \Lambda_2 \ne\Lambda_3 \ne\Lambda_4$. Here $a'_1=a_1$, $a'_2=a_2$, $b'_1=b_1$ and $b'_2=b_2$ to within the necessarily finite precision of the measuring instruments, such that all of $\cos \theta_{a_1 b_1}$, $\cos \theta_{a'_1 b_2}$, $\cos \theta_{a_2 b'_1}$ and $\cos \theta_{a'_2 b'_2}$ are describable by $N$ bits. By contrast, the theoretical analysis described above, necessary to determine whether a given hidden-variable theory is constrained by the CHSH inequality, considers different measurement orientations for a given $\lambda$, a situation which can never arise in an experimental test of the CHSH inequality.
In summary, the reason Invariant Set Theory can violate the CHSH inequalities is through a violation of the measurement independence condition (\ref{mip}); in particular, the equality
\be
\label{mip2}
\rho(\lambda| \hat a, \hat b)= \rho (\lambda| \hat a, \hat b')
\ee
fails when the cosine of the angle between $\hat a$ and $\hat b$ is not describable by $N$ bits (e.g. is not dyadic rational), but the cosine of the angle between $\hat a$ and $\hat b' \approx \hat b$ is. An immediate reaction to such a conclusion might be that such a violation would not be robust to the tiniest perturbation. We discuss this in Section \ref{fine} below, showing that this is not so.
It can be considered an open question as to whether all demonstrations of quantum non-locality are nullified in Invariant Set Theory by the incommensurateness of $\phi$ and $\cos \phi$. However, given the general applicability of such incommensurateness to the realistic interpretation of a range of quantum phenomena - interference, sequential Stern-Gerlach, and the original Bell inequality (see \cite{Palmer:2015b}) - we speculate here that this question can be answered in the positive.
\section{Fine Tuning, Conspiracy, Free Will and Superdeterminism}
\label{conspiracy}
It is claimed above that the Bell Inequalities can be violated without resorting to a breakdown of realism or local causality. What are the conceivable objections to Invariant Set Theory as a description of physical reality? We list four: that it is fine-tuned, inimical to experimenter free will, conspiratorial, or superdeterministic. In this section it is shown that Invariant Set Theory is none of these.
\subsection{Fine Tuning}
\label{fine}
The analysis above might suggest that Invariant Set Theory is sensitive to tiny perturbations (experimenter hand shake, for example). However, this is not the case. We return to one of the basic properties of a Cantor Set: that it is `large' (i.e. has the cardinality of the continuum) when looked at from the inside. That is to say, the analysis above is insensitive to perturbations which keep a trajectory on the invariant set, and there is a continuum of such perturbations. Indeed, one can perform continuous analysis on $I_U$ making use of the link to p-adic numbers. For example, there is a well-known continuous 1-1 mapping between the dyadic integers $\ldots d_2d_1d_0$ and the points $0.e_0e_1e_2\ldots$ of the Cantor Ternary set, given by the relationship $e_n=2d_n$ \cite{Robert}.
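The mapping $e_n = 2d_n$ can be sketched as follows (an illustrative finite truncation; the helper name is our own):

```python
from fractions import Fraction

def dyadic_to_cantor(bits):
    # Map the dyadic integer ...d2 d1 d0 (given as [d0, d1, ...], each
    # d_n in {0,1}) to the Cantor ternary point 0.e0 e1 e2 ..._3 with
    # e_n = 2*d_n, following the correspondence cited in the text.
    return sum(Fraction(2 * d, 3 ** (n + 1)) for n, d in enumerate(bits))

# Dyadic integers agreeing in their low-order digits map to nearby Cantor
# points, illustrating why the correspondence is continuous.
x = dyadic_to_cantor([1, 0, 1])   # digits 1,0,1 -> 0.202_3
assert x == Fraction(20, 27)
```

Here $0.202_3 = 2/3 + 2/27 = 20/27$; perturbing only high-order dyadic digits changes the Cantor point by at most $3^{-n}$-sized amounts, which is the `on-set' continuity exploited in the argument.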
On the other hand, the analysis is sensitive to noise which is random with respect to the measure of the Euclidean space in which $I_U$ is embedded - such noise will surely take a state on $I_U$ to a state off it, no matter how small the noise actually is. However, since, by the Cosmological Invariant Set Postulate, only those perturbations which map points of $I_U$ to points of $I_U$ can be considered physical, such full-measure noise is not physical.
In this sense, the experimenter can certainly have a degree of hand shake. This will mean that parameters like $\theta$ in the discussion above are (epistemically) uncertain, and can be expected to vary during an experiment within some finite tolerance $\Delta \theta$. However, according to Invariant Set Theory, this uncertainty notwithstanding, the actual values of the parameter $\theta$ occurring within this finite tolerance must all be describable by $N$ bits.
\subsection{Conspiracy}
Let us return to Bell's example (see Section \ref{introduction}) where the orientation of a measurement device is set by the output of a PRNG, itself sensitive to the value of the millionth digit of some input variable. Let us suppose the PRNG is run at $t=t_0$, and the actual experiment takes place at $t=t_1$. In principle (like the black-hole experiment), let us assume that $t_0 \ll t_1$. Now let us denote by $\mathcal A$ Bell's assertion (cf Section \ref{introduction}) that the value of the millionth digit is unimportant for any distinctly different purpose. Then, if $\mathcal A$ is true, there can be no rational reason to suppose the value of the millionth digit at $t_0$ should be correlated with the hidden variables associated with the particles whose spin is subsequently measured by Alice and Bob at $t_1$ (unless one invokes retrocausality).
However, assuming the Cosmological Invariant Set Postulate, there are three related ways of seeing that $\mathcal A$ is false.
\begin{itemize}
\item
Consider an ensemble of putative states $X_U(d)$ of the Universe at $t_0$. By construction, suppose $X_U(d)$ are identical except for the value $d \in \{0,1,2, \ldots 9\}$ of the millionth decimal digit of the input variable to the PRNG. If $I_U$ is fractal, then it is impossible to determine, by any finite algorithm starting at $t_0$, the value of $d$ which ensures $X_U(d) \in I_U$. The situation is fundamentally no different to that of the black-hole gedanken experiment. Similar to the discussion in Section \ref{cantor}, the value $d$ of the millionth digit for which $X_U(d) \in I_U$ is determined by events in the future of $t_0$, one of which (crucially) will be the settings of the experiment to measure the spin of the particle pair at $t_1$.
\item
Consider the implication of the Takens Embedding Theorem referred to in Section \ref{cantor}. In principle (but not in practice!) the entire invariant set $I_U$ could be reconstructed in state space, given sufficiently many values of the millionth digit of the input to the PRNG (over multiple aeons of the universe). This directly contradicts $\mathcal A$'s assertion that the values of the millionth digits are irrelevant for all distinctively different purposes.
\item
Suppose the millionth digit was the number 8 and consider a counterfactual universe in which all degrees of freedom are kept fixed except for one, where the number 8 was perturbed to the number 9, taking $U \rightarrow U'$. Whilst $U \in I_U$, one cannot assume that $U' \in I_U$, since $I_U$ has measure zero in its embedding space. Above we have shown that at least some of the counterfactual measurement orientations needed to establish the CHSH inequality necessarily lie off $I_U$. Hence, states of the universe where the value of the millionth digit would have led to such counterfactual measurement orientations, rather than the actual measurement orientations, necessarily lie off $I_U$.
\end{itemize}
\subsection{Free Will}
Alice and Bob can be said to have free will if they can hold rational beliefs that the future `is up to them'. Holding a rational belief means that there is nothing that disproves such a belief. However, it does not mean that the rational belief is actually true. Here we take a `classical compatibilist' view \cite{kane} (whose proponents include notable philosophers such as Hobbes, Hume and Mill) that Alice and Bob are `free' when there is an absence of constraints or impediments preventing them from doing what they want to do. In the analysis of the CHSH experiment above, there are certainly no `knowable' constraints at the time when the experiment is performed which prevent them from selecting any of the combinations $\{a_1, b_1\}$, $\{a_1, b_2\}$, $\{a_2, b_1\}$, $\{a_2, b_2\}$. This is consistent with the non-computability of $I_U$. Alice and Bob will make their choices on the basis of what they want to measure, this notion of `want' being determined by the actions of the neurons in their brains, processes which themselves play out on the invariant set. Having made that choice, say $\{a_1, b_1\}$, then two of the three remaining pairs of values can, retrospectively, be said to not lie on $I_U$. Hence the counterfactual worlds in which measurements are made along these off-$I_U$ directions are not physical.
In summary, according to the classical compatibilist view, the experimenters are free agents, determinism and the Cosmological Invariant Set Postulate notwithstanding. That is to say, they can hold rational beliefs that they really do `control' the settings (in the sense that they could have done otherwise), whereas in reality they do not (and, for some alternative settings, could not have done otherwise).
\subsection{Superdeterminism}
The word `superdeterminism' is generally used to denote a class of theories in which the measurement independence condition \ref{mip} is violated. The word is generally used in a pejorative sense, with the implication that such a theory is necessarily conspiratorial and denies free will. As shown above, Invariant Set Theory is not conspiratorial and does not deny free will. In this section, we attempt a mathematical definition of the notion of superdeterminism and assess whether Invariant Set Theory can itself be considered superdeterministic.
By definition, a superdeterministic theory is more restrictive than a deterministic theory - but how? A superdeterministic theory must not only have deterministic laws of evolution (e.g. in the form $\dot X=F[X]$), it must also impose some restrictions on allowed initial states $X(t_0)$. Consider the set of state-space perturbations to $X(t_0)$, which are transverse to the (integral curve) trajectory of $\dot X=F[X]$ from $X(t_0)$ (see Fig \ref{fig:super}). If \emph{any} such perturbation to an allowed $X(t_0)$ produces an initial state forbidden by the theory, i.e. if $X(t_0)$ was a completely isolated point in all transverse neighbourhoods of $X(t_0)$, no matter how large, then of course it would be reasonable to say that such a theory is superdeterministic. However, this is an unnecessarily strong restriction and instead we will take a much weaker definition of superdeterminism: a deterministic theory is superdeterministic iff there exists a transverse neighbourhood of $X(t_0)$ (no matter how small) comprising only forbidden states.
According to this weak definition, Invariant Set Theory is not superdeterministic. By the Cosmological Invariant Set postulate, states of the universe evolve on a fractal invariant set in state space - locally the product of a Cantor set and the real line. A Cantor set is a so-called `perfect' topological set. Any neighbourhood of a point of a perfect set contains other points of the set, i.e. a perfect set contains no isolated points. Hence all transverse neighbourhoods of an allowed initial state contain allowed initial states.
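The absence of isolated points can be made concrete with a toy sketch (ours, not from the paper): flipping a single ternary digit of a Cantor-set point at arbitrarily large depth $k$ produces a distinct Cantor-set point within $3^{-k}$ of the original, so every neighbourhood of an allowed state contains other allowed states.

```python
def cantor_value(digits):
    # A Cantor ternary set point: every ternary digit is 0 or 2.
    assert all(e in (0, 2) for e in digits)
    return sum(e * 3 ** -(n + 1) for n, e in enumerate(digits))

def nearby_cantor_point(digits, k):
    """Flip the k-th ternary digit (0 <-> 2): the result is still in the
    Cantor set and lies within 3^-k of the original point."""
    flipped = list(digits)
    flipped[k] = 2 - flipped[k]
    return cantor_value(flipped)
```

Since $k$ can be taken as large as desired, the original point is not isolated; this is the sense in which a perfect set evades the weak definition of superdeterminism above.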
\begin{figure}
\centering
\includegraphics[scale=0.5]{superdeterminism}
\caption{Invariant Set Theory would be superdeterministic if for a point $p \in I_U$ there exist (sufficiently small) neighbourhoods $N$ of $p$, transverse to the state-space trajectory on $I_U$, which contain no other points of $I_U$. Because $I_U$ is locally the Cartesian product of the real line and a Cantor set, and because a Cantor set is a perfect topological set (with no isolated points), there exist (an infinity of) points where $p' \in N$ and $p' \in I_U$. Hence Invariant Set Theory is not superdeterministic.}
\label{fig:super}
\end{figure}
Superdeterministic theories are rightly seen as unacceptable by physicists and philosophers. Most importantly, they offer no theoretical framework for defining the notion of probability (which is central to quantum theory). As such, they offer no theoretical framework to understand why the world is governed by quantum probability and not classical probability. This problem can be framed in the following way: if the world is indeed causal and deterministic (as a superdeterministic world could indeed be), how can we explain that the vast majority of quantum physicists have been fooled into believing otherwise? However, as discussed, a fractal invariant set is not superdeterministic. In particular, any transverse neighbourhood of any state on $I_U$ contains an infinite number of states, also lying on $I_U$. From the statistics of these alternative states we can construct the notion of probability (and quantum probability in particular).
\section{Conclusions}
\label{conclusions}
We have presented a new approach to formulating a locally causal hidden-variable theory of quantum physics, in which the measurement independence condition is partially violated without conspiracy, superdeterminism, retrocausality, fine-tuning and without denying experimenter free will. The key ingredient in this approach is the global invariant set; more specifically the notion that globally defined invariant subsets of state space define the notion of physical reality. Such subsets are generic in nonlinear dynamical systems theory, but are less familiar to quantum physicists and philosophers. To try to make accessible the reasoning why such subsets do provide a novel solution to the problem of quantum non-locality, we considered a simple black-hole gedanken experiment - the event horizon in space-time being the analogy of the invariant set in state space. The fundamental postulate underlying the analysis in this paper is the Cosmological Invariant Set Postulate, that states of the universe $U$ are evolving precisely on a global measure-zero fractal invariant set, $I_U$ in state space.
Recently, a proposal was made to close the measurement independence `loophole' by setting measurement orientations from the light of distant quasars \cite{Gallicchio:2014}. Because the light originated so early in the universe, and because the cosmic sources have never been in causal contact with one another, it seems improbable (in line with Bell's reasoning about the PRNG) that this light could in any way be correlated with the hidden variables of the particle source in an experiment being conducted today. However, by analogy with the black-hole event horizon, the cosmological invariant set is an atemporal structure. The fact that the light was emitted billions of years before the experiment took place is really a `red herring'. It is therefore predicted that if these proposed experiments are conducted, Bell's inequalities will continue to be violated.
This experiment draws attention to the notion that if quantum theory can be replaced by a deeper causal deterministic theory, that theory must treat cosmology and quantum physics much more synergistically than is done in contemporary theories of physics. This in turn suggests that the long-sought unification of gravitational and quantum physics should also be based on the development of a more synergistic relationship between cosmology and quantum physics than exists today.
We conclude with some remarks about the implications of the discussions above on the physical nature of the central mathematical structure in quantum theory: the complex Hilbert space. A vector space is a rudimentary algebraic structure defined over a field, the field of complex numbers in the case of the complex Hilbert Space. The set of elements of a field is by definition closed over addition and multiplication. However, in terms of the correspondence (\ref{correspondence}), $\cos \theta$ and $\phi/\pi$ are restricted to rational numbers describable by fixed finite $N$ bits. Hence, the set of corresponding numbers $\cos \theta$ is not in general closed under multiplication, and the set of corresponding numbers $e^{i\phi}$ is not in general closed under addition, no matter how large is $N$. Hence the set of vectors corresponding to bit strings describable in Invariant Set Theory is not a vector space, no matter how large is $N$. Is this a problem? Whilst it is appealing to be able to describe all elements of a physical theory by rudimentary mathematical structures, as a physicist one should not be overly beguiled by the elegance of any \emph{particular} mathematical structure. In particular, the property of closure guarantees that all conceivable counterfactual worlds have equal ontological status, the antithesis of that implied by the Cosmological Invariant Set Postulate. Hence we claim that closure can be an undesirable feature of a theory if one strives for causal realism in physics. In particular, we reject the algebraic properties of the complex Hilbert Space as a description of physical reality and speculate that the unquestioning acceptance of these properties over the years has hindered the development of physical theory (the unification of quantum and gravitational physics in particular).
In terms of Invariant Set Theory, the complex Hilbert space should be treated as the singular and not the regular limit of the set of bit string representations, as the parameter $1/N$ tends to zero. As Berry has noted \cite{berry}, old physical theories generically arise as the singular limit of new physical theories.
\section*{Acknowledgement} My thanks to Harvey Brown, Huw Price, Terry Rudolph and Simon Saunders for helpful discussions.
\section{Introduction}
\noindent
A fundamental question in hadronic physics is to understand how the spin of
the nucleon is divided among the quarks and gluons that form the nucleon. In
the EMC experiment, it was found that only a small fraction of the nucleon
spin is carried by the quarks and antiquarks. Recent experiments suggest that
the intrinsic spin carried by the gluons is also small. Thus a substantial
part of the spin comes from quark and gluon orbital angular momentum (OAM). There
are issues related to gauge invariance and experimental measurability
that complicate the understanding of the OAM. However, recently it was
shown that the quantum mechanical Wigner distributions of quarks inside the
nucleon can give information on the OAM carried by the quarks. Wigner
distributions can be thought of as so-called quantum mechanical phase space
distributions which give a joint position and momentum space information
about the quarks in the nucleon. As position and momentum operators do not
commute in quantum mechanics, they cannot be simultaneously determined. As a
result, Wigner distributions are not positive definite. However, a reduced
Wigner distribution can be defined after integrating over several variables,
and these are positive definite. The Wigner distributions are related to the
generalized parton correlation functions (GPCFs) or generalized transverse
momentum dependent parton distributions (GTMDs). These are the mother
distributions of the GPDs and TMDs, both of which contain very useful
information on the 3D structure of the nucleon as well as the spin and OAM
of the quarks in the nucleon. In \cite{lorce} the authors introduced 5-D reduced
Wigner distributions in the infinite momentum frame, or in light-front
framework, which are functions of
three momentum and two position variables. Working in the light-front
formalism is useful as the transverse boosts are Galilean or do not involve
dynamics and longitudinal boosts are scale transformations. Thus it is
possible to have a boost invariant description of the Wigner distributions
in terms of light-front wave functions. In \cite{lorce} the Wigner distributions were
calculated in light-cone constituent quark model and light-cone chiral
quark-soliton model. Both these models have no gluonic degrees of freedom.
In this work \cite{us}, we calculate the Wigner functions for a dressed quark in
light-front Hamiltonian perturbation theory, which is basically a
relativistic composite spin $1/2$ state. This is a simple model with a
gluonic degree of freedom. The state is expanded in Fock space in terms of
multiparton light-front wave functions (LFWFs). The advantage is that
this approach gives a field theoretic description of deep inelastic
scattering processes and at the same time keeps close contact with the parton
model; the partons are now field theoretic: massive, non-collinear
and interacting. To obtain the non-perturbative LFWFs for a bound state like
the nucleon one needs a model light-front Hamiltonian. However, for a quark
state dressed with a gluon the two-body light-front wave function can be
obtained analytically. In the next section, we present a calculation of the
Wigner distributions in this model.
\section{Wigner Distributions}
\noindent
The Wigner distribution of quarks can be defined as
\cite{lorce,metz}
\begin{eqnarray} \label{main}
\rho^{[\Gamma]} ({b}_{\perp},{k}_{\perp},x,\sigma) = \int \frac{d^2
\Delta_{\perp}}
{(2\pi)^2} e^{-i \Delta_{\perp}\cdot b_{\perp}}\nonumber \\
W^{[\Gamma]} (\Delta_{\perp},{k}_{\perp},x,\sigma);
\end{eqnarray}
\noindent
where $\Delta_{\perp}$ is momentum transfer from the initial state to the
final state in transverse
direction and
${b}_{\perp}$ is a two-dimensional vector
in impact parameter space conjugate to $\Delta_{\perp}$. $W^{[\Gamma]}$ is
the quark-quark correlator given by
\begin{flalign} \label{qqc}
W^{[\Gamma]} ({\Delta}_{\perp},{k}_{\perp},x,\sigma) =
\frac{1}{2}\int\frac{dz^{-}d^{2} z_{\perp}}{(2\pi)^3}\nonumber\\e^{i(xp^+
z^-/2-k_{\perp}\cdot z_{\perp})}\nonumber\\
\Big{\langle } p^{+},\frac{\Delta_{\perp}}{2},\sigma \Big{|}
\overline{\psi}(-\frac{z}{2}) \Omega\Gamma \psi(\frac{z}{2}) \Big{|}
p^{+},-\frac{\Delta_{\perp}}{2},\sigma \Big{\rangle }
\Big{|}_{z^{+}=0}.
\end{flalign}
\noindent
We take the target state to be a quark dressed with a gluon. We use the symmetric frame
\cite{brodsky}
where $p^+$ and $\sigma$ define the longitudinal momentum of the target state and its
helicity respectively. $x = k^+/p^+$ is the fraction of
longitudinal momentum of the dressed quark carried by the quark.
$\Omega$ is the gauge link needed for color gauge invariance. Here, we use the
light-front gauge, $A^+=0$ and take the gauge link to be unity. In fact the
quark orbital angular momentum in this model does not depend on the gauge
link. The symbol $\Gamma$
represents the Dirac matrix structure.
The state of momentum $p$ and helicity $\sigma$,
can be expanded in Fock space using the
multi-parton light-front wave functions (LFWFs) \cite{hari}. The boost
invariant two-particle LFWFs can be calculated perturbatively \cite{hari}.
We use the two component formalism \cite{zhang}.
The quark state dressed with a gluon as we consider here mimics the bound state of
a spin-1/2 particle and a spin-1 particle. However, for such a bound state the bound state mass $M$
should be less than the sum of the masses of the constituents for stability.
In this work, we use the same mass for the bare
as well as the dressed quark in perturbation theory \cite{hari}. The
Wigner distributions can be expressed as overlaps of two-particle LFWFs. We take the
momentum transfer to be completely in the transverse direction. In this
case, the overlaps are diagonal, i.e. between two-particle LFWFs.\\
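Before turning to specific polarization combinations, the $\Delta_\perp \to b_\perp$ Fourier transform defining $\rho^{[\Gamma]}$ can be sanity-checked numerically on a toy correlator. The sketch below is our own; a Gaussian stands in for $W^{[\Gamma]}$ (it is not the model's actual LFWF overlap), and the numerical transform recovers the known Gaussian pair.

```python
import math, cmath

def rho_from_W(W, bx, by, L=8.0, n=160):
    """rho(b) = ∫ d²Δ/(2π)² exp(-i Δ·b) W(Δ), approximated by a
    midpoint Riemann sum over the square [-L, L]²."""
    h = 2 * L / n
    total = 0j
    for i in range(n):
        dx = -L + (i + 0.5) * h
        for j in range(n):
            dy = -L + (j + 0.5) * h
            total += W(dx, dy) * cmath.exp(-1j * (dx * bx + dy * by))
    return (total * h * h / (2 * math.pi) ** 2).real

# Toy correlator W(Δ) = exp(-Δ²/2); its transform is rho(b) = exp(-b²/2)/(2π).
num = rho_from_W(lambda dx, dy: math.exp(-(dx * dx + dy * dy) / 2), 0.5, 0.0)
exact = math.exp(-0.125) / (2 * math.pi)
```

The narrow Gaussian in $\Delta_\perp$ producing a broad Gaussian in $b_\perp$ (and vice versa) is the same conjugacy that makes the cut-offs $\Delta_\perp^{max}$ below matter for the shape of the plotted distributions.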
\noindent
Wigner distributions of quarks with longitudinal polarization $\lambda$ in a
target with longitudinal polarization $\Lambda$ are denoted by $
\rho_{\Lambda \lambda}(\vec{b}_\perp,\vec{k}_\perp,x)$. These can be
decomposed in terms of four Wigner functions as defined below:
\begin{eqnarray} \label{ruu}
\rho_{UU}(\vec{b}_\perp,\vec{k}_\perp,x) = \frac{1}{2}\Big[\rho^{[\gamma^+]}
(\vec{b}_\perp,\vec{k}_\perp,x,+\vec{e}_z) \nonumber\\ +
\rho^{[\gamma^+]}(\vec{b}_\perp,\vec{k}_\perp,x,-\vec{e}_z) \Big]
\end{eqnarray}
is the Wigner distribution of unpolarized quarks in unpolarized target
state.
\begin{eqnarray} \label{rlu}
\rho_{LU}(\vec{b}_\perp,\vec{k}_\perp,x)= \frac{1}{2}\Big[\rho^{[\gamma^+]}
(\vec{b}_\perp,\vec{k}_\perp,x,+\vec{e}_z) \nonumber\\ -
\rho^{[\gamma^+]}(\vec{b}_\perp,\vec{k}_\perp,x,-\vec{e}_z) \Big]
\end{eqnarray}
is the distortion due to longitudinal polarization of the target state.
\begin{eqnarray} \label{rul}
\rho_{UL}(\vec{b}_\perp,\vec{k}_\perp,x) =
\frac{1}{2}\Big[\rho^{[\gamma^+\gamma_5]}
(\vec{b}_\perp,\vec{k}_\perp,x,+\vec{e}_z)\nonumber\\ +
\rho^{[\gamma^+\gamma_5]}(\vec{b}_\perp,\vec{k}_\perp,x,-\vec{e}_z) \Big]
\end{eqnarray}
is the distortion due to the longitudinal polarization of quarks, and
\begin{eqnarray} \label{rll}
\rho_{LL}(\vec{b}_\perp,\vec{k}_\perp,x) =
\frac{1}{2}\Big[\rho^{[\gamma^+\gamma_5]}
(\vec{b}_\perp,\vec{k}_\perp,x,+\vec{e}_z)\nonumber\\-
\rho^{[\gamma^+\gamma_5]}(\vec{b}_\perp,\vec{k}_\perp,x,-\vec{e}_z) \Big]
\end{eqnarray}
is the distortion due to the correlation between the longitudinally
polarized target state and quarks. $+\vec{e_z}$ and $-\vec{e_z}$ denote the
helicity up and down of the target state, respectively.
In our model, $\rho_{LU} = \rho_{UL}$.\\
\noindent
Using the two-particle LFWF the above distributions can be calculated
analytically for a quark state dressed with a gluon \cite{us}.
In Figs. \ref{fig1} and \ref{fig2} we have shown the 3D plots for the Wigner distribution
$\rho_{UU}$. In
the numerical calculation for Eq.~(\ref{ruu}) we have upper cut-offs
$\Delta_x^{max}$ and
$\Delta_y^{max}$ for the $\Delta_{\perp}$ integration. In all plots we have
taken $m=0.33$ GeV and integrated over $x$.
We have plotted $\rho_{UU}$ in $b$ space with
$k_\perp = 0.4$ GeV such that $\vec{k_\perp} = k \hat{j}$.
The plot has a peak centered at $b_x=b_y=0$ decreasing in the
outer regions of the $b$ space.
The asymmetry in $b$ space can be understood from semi-classical arguments in a model with
confinement. As no confining potential is present in our perturbative model, the behavior
is expected to be different. In Figs. \ref{fig3} and \ref{fig4} we show the 3D plots for the
Wigner distribution $\rho_{LU}$.
This is the distortion of the Wigner distribution of unpolarized quarks due
to the longitudinal polarization of the nucleon. We
have shown $\rho_{LU}$ in $b$ space with $k_\perp = 0.4$ GeV such that
$\vec{k_\perp} = k \hat{j}$ for $\Delta_\perp^{max} = 1.0$ GeV and
$\Delta_\perp^{max} = 5.0$ GeV respectively. Similar to \cite{lorce}, we observe a dipole structure in
these plots, and the
dipole magnitude increases with $\Delta_\perp^{max}$.
\begin{figure}[!htp]
\centering
\includegraphics[width=6cm,height=6cm,clip]{fig2a}
\caption{\label{fig1}(Color online)
3D plots of the Wigner distributions $\rho_{UU}$ in
$b$ space for $\Delta_\perp^{max} = 1.0$ GeV with $k_\perp = 0.4$ GeV.
For all the plots we kept $m = 0.33$ GeV, integrated out the $x$ variable
and we took $\vec{k_\perp}
= k \hat{j}$ and $\vec{b_\perp} = b \hat{j}$. }
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=6cm,height=6cm,clip]{fig2b}
\caption{\label{fig2}(Color online)
3D plots of the Wigner distributions $\rho_{UU}$ in
$b$ space for $\Delta_\perp^{max} = 5.0$ GeV with $k_\perp = 0.4$ GeV.}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=6cm,height=6cm,clip]{fig3a}
\caption{\label{fig3}(Color online)
3D plots of the Wigner distributions $\rho_{LU}$ in
$b$ space for $\Delta_\perp^{max} = 1.0$ GeV with $k_\perp = 0.4$ GeV. }
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=6cm,height=6cm,clip]{fig3b}
\caption{\label{fig4}(Color online)
3D plots of the Wigner distributions $\rho_{LU}$ in
$b$ space for $\Delta_\perp^{max} = 5.0$ GeV with $k_\perp = 0.4$ GeV.}
\end{figure}
\section{Orbital Angular Momentum of the quarks}
\noindent
The quark-quark correlator in
Eq.(\ref{qqc}) defining the Wigner distributions
can be parameterized in terms of generalized transverse momentum dependent
parton distributions (GTMDs) \cite{metz}.
For the twist-two case we have eight GTMDs, as defined below:
\begin{eqnarray}
W_{\lambda,\lambda'}^{[\gamma^+]} =
\frac{1}{2M} \bar{u}(p',\lambda') \Bigg[
F_{1,1}-
\frac{i \sigma^{i+}k_{i\perp}}{P^+} F_{1,2}\nonumber\\ -
\frac{i\sigma^{i+}\Delta_{i \perp}}{P^+} F_{1,3} +
\frac{i\sigma^{ij}k_{i\perp}\Delta_{j\perp}}{M^2} F_{1,4}
\Bigg] u(p,\lambda);
\label{f11}\end{eqnarray} \\
\begin{eqnarray}
W_{\lambda,\lambda'}^{[\gamma^+\gamma_5]} =
\frac{\bar{u}(p',\lambda') }{2M} \Bigg[
\frac{-i \epsilon^{ij}_{\perp} k_{i\perp}\Delta_{j\perp}}{M^2}G_{1,1}\nonumber\\-
\frac{i \sigma^{i+}\gamma_{5} k_{i\perp}}{P^+}G_{1,2} -
\frac{i\sigma^{i+}\gamma_{5} \Delta_{i\perp}}{P^+} G_{1,3} \nonumber\\+
i\sigma^{+-} \gamma_5 G_{1,4}
\Bigg] u(p,\lambda).
\label{g11}\end{eqnarray} \\%
\noindent
The GTMDs can be calculated analytically in the dressed quark model. The
canonical OAM $l^{q}_z$ is related to the GTMD $F_{1,4}$ by
\cite{lorce, hatta1, lorce2}:
\begin{eqnarray}
l^{q}_z = -\int dx\, d^{2}k_{\perp} \frac{k_{\perp}^2}{m^2} F_{1,4}.
\end{eqnarray}
\noindent
The canonical OAM in this model is given by \cite{us}:
\begin{eqnarray} \label{sl}
l^{q}_{z} = - 2N \int dx (1-x^2) \Big[ I_{1} - \nonumber \\m^2(x-1)^2 I_{2}\Big ]
\end{eqnarray}%
\noindent
On the other hand, the kinetic quark OAM is given in terms of the GPDs as:
\begin{eqnarray}
L^{q}_{z} = \frac{1}{2} \int dx \Bigg\{
x \Big[ H^{q}(x,0,0) + E^{q}(x,0,0) \Big] \nonumber\\- \tilde{H^q}(x,0,0)
\Bigg\}. \nonumber
\end{eqnarray}
In the model considered here, this becomes,
\begin{eqnarray} \label{cl}
L^{q}_{z} = \frac{N}{2} \int dx \Big \{
- f(x) I_{1} \nonumber\\+
4m^2(1-x)^2 I_{2}
\Big
\};
\end{eqnarray}
where,
\begin{align}
I_{1} &= \int
\frac{d^{2} k_{\perp}}{m^2(1-x)^2 +k_{\perp}^2} \nonumber \\&= \pi
\log\Bigg[\frac{Q^2+m^2(1-x)^2}
{\mu^2+m^2(1-x)^2}\Bigg];\nonumber \\
I_{2} &= \int \frac{d^{2} k_{\perp}}{\Big(m^2(1-x)^2
+k_{\perp}^2\Big)^2} \nonumber\\&=
\frac{\pi}{m^2(1-x)^2};\nonumber\\
f(x) &=2(1+x^2).
\nonumber
\end{align}
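As a sanity check (our own sketch, not part of the paper), the angular integration reduces $I_1$ to a one-dimensional integral, $I_1 = 2\pi\int_\mu^Q k\,dk/(a+k^2)$ with $a=m^2(1-x)^2$, and a direct numerical evaluation reproduces the logarithmic closed form:

```python
import math

def I1_numeric(m, x, mu, Q, n=100000):
    # I1 = ∫ d²k⊥ / (m²(1-x)² + k⊥²) over μ ≤ |k⊥| ≤ Q; after the angular
    # integration this is 2π ∫ k dk / (a + k²) with a = m²(1-x)².
    a = (m * (1 - x)) ** 2
    h = (Q - mu) / n
    total = 0.0
    for i in range(n):
        k = mu + (i + 0.5) * h  # midpoint rule
        total += k / (a + k * k)
    return 2 * math.pi * total * h

def I1_closed(m, x, mu, Q):
    a = (m * (1 - x)) ** 2
    return math.pi * math.log((Q ** 2 + a) / (mu ** 2 + a))
```

The same substitution $u=k^2$ gives $I_2 = \pi/a$ directly when $\mu \to 0$ and $Q \to \infty$, which is why $\mu$ can be taken to zero only for non-zero quark mass.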
\noindent
$Q$ is the large scale involved in the process, which comes
from the large momentum cutoff in this approach \cite{hari}.
$\mu$ can be safely taken to be zero provided the quark mass is non-zero.
Similar qualitative behavior of $L^{q}_z$ and $l^{q}_z$ is observed, both
giving negative values.
However, the magnitudes of the two OAM differ in our model, unlike the case
in \cite{lorce}, where these were calculated in several models
without any gluonic degrees of freedom and the total
quark contribution to the OAM was equal in both cases. Thus one can see
the effect of the gluonic degree of freedom on
the OAM in the model considered here.
The contribution from the gluons in this model
has also been calculated recently in \cite{other}.
\section{Conclusion}
\noindent
We presented a recent calculation of the Wigner distributions for a quark state
dressed with a gluon using the overlap representation in terms of the LFWFs.
This is a relativistic composite spin-1/2 system which has a gluonic degree of
freedom. The Wigner distributions are calculated both for unpolarized and
longitudinally polarized target and quarks and the correlations are shown in
transverse position space. The kinetic quark OAM using the
GPD sum rule and the canonical OAM were also calculated in this model
and it was shown that these differ in
magnitude; the difference is an effect of the gluonic degree of freedom.
\begin{acknowledgement}
This work is supported by the DST project SR/S2/HEP-029/2010, Govt. of
India. AM thanks the organizers of Transversity 2014, June 9-13, 2014, Chia,
Sardinia for the invitation.
\end{acknowledgement}
\section{Introduction}
\label{sec:intro}
Sound event detection (SED) is the task of detecting the type,
onset time and offset time of sound events in an audio stream.
While some studies are satisfied with recognizing what types
of sound events are present in a recording (\emph{audio tagging}),
this paper pays special attention to the \emph{localization}
of sound events.
Modern SED systems usually take the form of neural networks,
with convolutional layers~\cite{gorin2016dcase, espi2015exploiting},
recurrent layers~\cite{Yun-ICASSP2016, parascandolo2016recurrent, adavanne2016sound, hayashi2016bidirectional},
or both~\cite{cakir2017convolutional}.
The networks predict the probability of each sound event type
frame by frame; applying a threshold to these frame-level probabilities
will then produce localized detections of sound events.
Traditionally, the training of SED models relied upon \emph{strong labeling},
which specifies the type, onset time and offset time of each sound event occurrence.
But such annotation is very tedious to obtain by hand.
In order to scale SED up, researchers have turned to SED with \emph{weak labeling},
which only specifies the types of sound events present in each training recording
but does not provide any temporal information.
In March 2017, Google released the weakly labeled Audio Set~\cite{AudioSet},
which is by far the largest corpus available for SED.
The DCASE challenge of 2017~\cite{DCASE2017} featured a task of SED
with weak labeling, which used a subset of Audio Set.
\begin{figure}[t]
\centering
\includegraphics[width=0.83\columnwidth]{mil-for-sed}
\caption{Block diagram of a MIL system for SED with weak labeling.}
\label{fig:mil-for-sed}
\end{figure}
A common framework for SED with weak labeling
is \emph{multiple instance learning} (MIL)~\cite{amores2013multiple},
as shown in Fig.~\ref{fig:mil-for-sed}.
In MIL, we do not know the ground truth label of every training instance;
instead, the instances are grouped into bags,
and we only know the label of bags.
In the case of binary classification,
the relationship between instance labels and bag labels
often obeys the \emph{standard multiple instance} (SMI) \emph{assumption}:
the bag label is positive if and only if the bag contains at least
one positive instance.
In SED, each training recording is regarded as a bag,
and its frames are regarded as instances.
Each sound event type is considered independently,
so SED becomes a binary classification problem for each sound event type.
A neural network predicts the probability of each sound event type
being active at each frame.
Then, a \emph{pooling function} aggregates the frame-level probabilities
into a recording-level probability for each sound event type.
The recording-level probabilities can be compared against the recording labels
to compute a loss function,
and the network can then be trained to minimize the loss.
The choice of the pooling function is an important decision.
The default choice is the ``max'' pooling function~\cite{su2017weakly, kumar2016audio},
which is faithful to the SMI assumption.
A previous study of ours~\cite{Yun-Interspeech2018}
has evaluated a ``noisy-or'' pooling function%
~\cite{maron1998framework, zhang2006multiple, babenko2008simultaneous},
and has shown it does not work for localization
despite its nice probabilistic interpretation under the SMI assumption.
Since the 2017 DCASE challenge, a number of other pooling functions
have been reported to perform well even though they deviate from the SMI assumption.
These include average pooling~\cite{shah2018closer},
two softmax pooling functions based on linear weighting~\cite{dang2017deep}
and exponential weighting~\cite{salamon2017dcase},
as well as an attention-based pooling function~\cite{xu2018large, kong2018audio}.
The purpose of this paper is to compare these pooling functions
against max pooling from two perspectives:
theoretically, we derive the gradients of the five pooling functions,
and check whether their signs lead the training in the right direction;
experimentally, we compare the five pooling functions on two SED corpora:
the DCASE 2017 challenge~\cite{DCASE2017} and Audio Set~\cite{AudioSet}.
Although the attention pooling function appears to be the most favored by researchers,
we demonstrate that it is the linear softmax pooling function
that works best for localization.
Our experiments also result in a convolutional and recurrent neural network (CRNN),
which is, to our knowledge, the first system that
exhibits strong performance on audio tagging and localization at the same time.
We name this network ``TALNet'', where ``TAL'' stands for
``tagging and localization''.
This network closely matches the current state-of-the-art
audio tagging performance on Audio Set,
while achieving competitive localization performance
on the DCASE 2017 challenge without any finetuning.
\section{Theoretical Comparison of the Five Pooling Functions}
\label{sec:theory}
\subsection{Definition of the Pooling Functions}
\label{sec:definition}
Let $y_i \in [0,1]$ be the predicted probability of a certain event type at the $i$-th frame,
and $y \in [0,1]$ be the aggregated recording-level probability of the same event.
We list the definitions of the five pooling functions to be compared
in Table~\ref{table:pool}.
The max pooling function simply takes the largest $y_i$ to be $y$.
If the same threshold is applied to the recording-level and
frame-level probabilities,
then the frame-level predictions and recording-level prediction
are guaranteed to be consistent with the SMI assumption.
However, the max pooling function has the defect that
only one frame in a recording can receive an error signal.
As a consequence, if an event occurs multiple times in a recording,
the occurrences that do not cover this frame may be easily missed.
The other four pooling functions all
try to alleviate this problem by assigning some weight
to smaller $y_i$'s when aggregating them to produce $y$.
The average pooling function~\cite{shah2018closer}
assigns an equal weight to all frames.
The equation appears to defy the SMI assumption,
but it is reported to perform better than the max pooling function
in~\cite{shah2018closer}.
The two softmax pooling functions compute $y$
as a weighted average of the $y_i$'s,
where larger $y_i$'s receive larger weights.
In this way, the recording-level probability is still mainly
determined by the larger frame-level probabilities,
but frames with smaller probabilities get a chance
to receive an error signal.
The linear softmax function~\cite{dang2017deep}
assigns weights equal to the frame-level probabilities $y_i$ themselves,
while the exponential softmax function~\cite{salamon2017dcase}
assigns a weight of $\exp(y_i)$ to the frame-level probability $y_i$.
Finally, in the attention pooling function~\cite{xu2018large, kong2018audio},
the weight $w_i$ for each frame is learned
with a dedicated layer in the network.
The recording-level probability $y$ is then computed using the general
weighted average formula.
The attention pooling function appears to be most favored by researchers
because of its flexibility, and variants have emerged such as
the multi-level attention in~\cite{yu2018multi}.
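For reference, the five pooling functions of Table~\ref{table:pool} can be written down directly in NumPy (a minimal sketch; for attention, the weights are taken as an input rather than produced by a learned layer):

```python
import numpy as np

def max_pool(y):
    return y.max()

def avg_pool(y):
    return y.mean()

def linear_softmax(y):
    # weights equal to the frame-level probabilities themselves
    return (y ** 2).sum() / y.sum()

def exp_softmax(y):
    # weights exp(y_i)
    w = np.exp(y)
    return (y * w).sum() / w.sum()

def attention_pool(y, w):
    # w would come from a dedicated learned layer in the network
    return (y * w).sum() / w.sum()
```

For any input, the softmax-style outputs lie between the average and the maximum, since they up-weight the larger $y_i$'s without ignoring the rest.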
\subsection{Gradient of the Pooling Functions}
\label{sec:gradient}
In this section, we analyze the gradient of the loss function
w.r.t. the frame-level probabilities $y_i$
(and, in the case of attention, also the weights $w_i$).
Let $t \in \{0,1\}$ be the recording-level ground truth.
The loss function is usually the cross entropy:
\begin{equation}
L = -t \log y - (1-t) \log (1-y)
\end{equation}
We decompose its gradient with respect to the frame-level probabilities $y_i$
(and the frame-level weights $w_i$) using the chain rule:
\begin{equation}
\frac{\partial L}{\partial y_i} = \frac{\partial L}{\partial y} \frac{\partial y}{\partial y_i}, \quad \quad
\frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial y} \frac{\partial y}{\partial w_i}
\label{eq:chain}
\end{equation}
The first term,
\begin{equation}
\frac{\partial L}{\partial y} = -\frac{t}{y} + \frac{1-t}{1-y}
\end{equation}
does not depend on the choice of the pooling function.
It is negative when the recording label is positive ($t = 1$),
and positive when the recording label is negative ($t = 0$).
The second term, $\partial y / \partial y_i$ (and $\partial y / \partial w_i$),
is calculated for each pooling function in Table~\ref{table:pool}.
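The tabulated gradients can be verified numerically with central finite differences; here is a sketch for the linear softmax case (the other functions can be checked the same way):

```python
import numpy as np

def linear_softmax(y):
    return (y ** 2).sum() / y.sum()

def linear_softmax_grad(y):
    # closed form from the table: (2*y_i - y) / sum_j y_j
    return (2 * y - linear_softmax(y)) / y.sum()

def numeric_grad(f, y, eps=1e-6):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(y)
    for i in range(len(y)):
        yp, ym = y.copy(), y.copy()
        yp[i] += eps
        ym[i] -= eps
        g[i] = (f(yp) - f(ym)) / (2 * eps)
    return g

y = np.array([0.1, 0.4, 0.7])
analytic = linear_softmax_grad(y)
numeric = numeric_grad(linear_softmax, y)
```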
\begin{table}[t]
\centering
\tabulinesep=0.2em
\resizebox{\columnwidth}{!}{
\begin{tabu}{r|l|l}
\hline
& \bf{Pooling Function} & \bf{Gradient} \\
\hline
\bf{Max pooling} & $\displaystyle y = \max_i y_i$ & $\displaystyle \frac{\partial y}{\partial y_i} = \begin{cases} 1, & \text{if } y_i = y \\ 0, & \text{otherwise} \end{cases}$ \\
\hline
\bf{Average pooling} & $\displaystyle y = \frac{1}{n} \textstyle \sum_i y_i$ & $\displaystyle \frac{\partial y}{\partial y_i} = \frac{1}{n}$ \\
\hline
\bf{Linear softmax} & $\displaystyle y = \frac{\sum_i y_i^2}{\sum_i y_i}$ & $\displaystyle \frac{\partial y}{\partial y_i} = \frac{2y_i - y}{\sum_j y_j}$ \\
\hline
\bf{Exp. softmax} & $\displaystyle y = \frac{\sum_i y_i \exp(y_i)}{\sum_i \exp(y_i)}$ & $\displaystyle \frac{\partial y}{\partial y_i} = (1 - y + y_i) \cdot \frac{\exp(y_i)}{\sum_j \exp(y_j)}$ \\
\hline
\bf{Attention} & $\displaystyle y = \frac{\sum_i y_i w_i}{\sum_i w_i}$ & $\displaystyle \frac{\partial y}{\partial y_i} = \frac{w_i}{\sum_j w_j}, \quad \frac{\partial y}{\partial w_i} = \frac{y_i - y}{\sum_j w_j}$ \\
\hline
\end{tabu}
}
\caption{The five pooling functions and their gradients.
$n$ is the number of frames in a recording.}
\label{table:pool}
\end{table}
With the max pooling function, $\partial y / \partial y_i$
equals 1 for the frame with the largest probability and 0 elsewhere.
The fact that only one frame receives a non-zero gradient
may cause many frame-level false negatives.
The gradient for this single frame, though, does have the correct sign:
when $t = 1$, the gradient $\partial L / \partial y_i$ is negative,
so the frame-level probability $y_i$ will be boosted in order to reduce the loss;
when $t = 0$, the gradient is positive, so $y_i$
will be suppressed.
With the average pooling function, $\partial y / \partial y_i$
equals $1/n$ regardless of the value of $y_i$.
This means the gradient is distributed evenly across all frames.
For negative recordings, this will suppress the probability $y_i$ of all frames,
and this is correct behavior.
For positive recordings, however, not all frames should be boosted,
and the average pooling function can produce a lot of false positive frames.
With the linear softmax pooling function,
$\partial y / \partial y_i$ is positive where $y_i > y/2$,
which gives rise to complicated and interesting behavior.
For positive recordings ($t = 1$), the gradient
is negative where $y_i > y/2$, and positive where $y_i < y/2$.
As a result, larger $y_i$'s will be boosted, while smaller $y_i$'s will be suppressed.
This is exactly the desired behavior under the SMI assumption:
the frame-level probabilities are driven to the extremes 0 and 1,
resulting in well-localized detections of sound events.
For negative recordings ($t = 0$), the gradient
is positive where $y_i > y/2$, and negative where $y_i < y/2$.
This means all frame-level probabilities will be pushed toward $y/2$.
Considering that $y$ is a weighted average of the $y_i$'s,
given enough iterations, all the $y_i$'s will converge to zero as desired.
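The extreme-driving behavior on positive recordings can be observed in a toy simulation that performs gradient descent directly on the frame-level probabilities (an idealized sketch: in a real network, the probabilities come from sigmoid outputs and are updated through the network weights):

```python
import numpy as np

def linear_softmax(y):
    return (y ** 2).sum() / y.sum()

# One positive recording (t = 1); update the frame-level probabilities
# directly by gradient descent on the cross entropy.
y = np.array([0.1, 0.5, 0.9])
lr = 0.05
for _ in range(2000):
    Y = linear_softmax(y)
    dL_dY = -1.0 / Y                  # t = 1 branch of the cross entropy
    dY_dy = (2 * y - Y) / y.sum()     # gradient of linear softmax
    y = np.clip(y - lr * dL_dY * dY_dy, 1e-6, 1 - 1e-6)
# Frames that started above y/2 are driven toward 1, and the frame
# below y/2 toward 0: well-localized, extreme predictions.
```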
With the exponential pooling function, $\partial y / \partial y_i$
is always positive, just like with the average pooling function.
As a result, the exponential pooling function also has the concern of
producing too many false positive frames.
Nevertheless, the problem will be less serious,
because smaller $y_i$'s receive smaller gradients.
\begin{figure*}[t]
\centering
\includegraphics[angle=90,height=8.7em]{dcase-structure}
\hfill
\includegraphics[angle=90,height=8.7em]{gas-structure}
\caption{Structures of the networks used in
Secs.~\ref{sec:dcase} (left) and \ref{sec:talnet} (right).
The shape is specified as ``frames * frequency bins * feature maps''
for \mbox{3-D} tensors (shaded), and
``frames * feature maps'' for \mbox{2-D} tensors.
``conv $n$*$m$'' stands for a convolutional layer with the specified
kernel size and ReLU activation; ``(*2)'' means
the layer is repeated twice.
``pool $n$*$m$'' stands for a max pooling layer with the specified stride.
``FC'' is short for ``fully connected''.
At the output end, the ``attention weights'' block is only used
with the attention pooling function.\vspace{-0.5em}}
\label{fig:structures}
\end{figure*}
With the attention pooling function, the term
$\partial y / \partial y_i$ is always positive.
Therefore, the frame-level probabilities will be boosted or suppressed
according to the recording label, with strengths proportional
to the learned weights.
This is correct behavior if frames with larger probabilities $y_i$
also get larger weights $w_i$.
However, because the weights $w_i$ are also learned,
we should also consider $\partial y / \partial w_i$,
the gradient of the loss function w.r.t. the weights:
this term is positive where $y_i > y$.
When the recording is positive, this will cause the weight $w_i$
to rise where the frame-level probability $y_i$ is large
and to shrink where the $y_i$ is small,
agreeing with the motivation that frames with larger probabilities $y_i$
should get larger weights $w_i$.
When the recording is negative, however, the opposite phenomenon will happen:
larger weights will concentrate upon frames with smaller probabilities.
This has a serious consequence:
while the recording-level probability $y$ will indeed be small,
there will be frames with large probabilities $y_i$ and small weights $w_i$.
This means the recording-level prediction and frame-level predictions
will be inconsistent with the SMI assumption,
and the frames with large probabilities will end up being false positives
for localization.
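A small numerical example makes the sign of $\partial y / \partial w_i$ concrete (an illustrative sketch with hand-picked probabilities):

```python
import numpy as np

def attention_pool(y, w):
    return (y * w).sum() / w.sum()

y = np.array([0.9, 0.1, 0.1])   # one high-probability frame
w = np.ones_like(y)             # start from uniform attention
Y = attention_pool(y, w)
dY_dw = (y - Y) / w.sum()       # gradient w.r.t. the weights

# On a negative recording, dL/dY > 0, so gradient descent decreases w_i
# where dY/dw_i > 0: the weight of the high-probability frame shrinks,
# while the weights of the low-probability frames grow -- exactly the
# inconsistency described above.
```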
\section{Experimental Comparison of the Five Pooling Functions}
\label{sec:exp}
\subsection{The DCASE 2017 Challenge}
\label{sec:dcase}
We first compared the five pooling functions on Task~4 of the
DCASE 2017 challenge~\cite{DCASE2017}.
The task involves 17~types of vehicle and warning sounds,
and evaluates both tagging and localization.
The data used in the task is a subset of Audio Set~\cite{AudioSet}.
It consists of a training set (51,172~recordings),
a public test set (488~recordings),
and a private evaluation set (1,103~recordings).
All the recordings are 10-second excerpts from YouTube videos.
The test and evaluation sets are strongly labeled so
they can be used to evaluate both audio tagging and localization,
but the training set only comes with weak labeling.
Because we did not have access to the ground truth of the evaluation set,
we report the performance of our systems on the test set.
The test and evaluation sets have balanced numbers of events,
but the training set is unbalanced.
We set aside 1,142~recordings from the training set
to make a balanced validation set, and used the remaining 50,030~recordings for training.
We implemented a convolutional and recurrent neural network (CRNN),
whose structure is shown in Fig.~\ref{fig:structures} (left).
The input is a matrix of filterbank features;
it has 400~frames and 64~frequency bins.
The convolutional and pooling layers reduce the frame rate from 40~Hz to 10~Hz.
At the output end, a fully connected layer with sigmoid activation
produces frame-level predictions, which are then aggregated across time into
recording-level predictions using any of the five pooling functions.
If the attention pooling function is used, a separate fully connected layer
with exponential activation is used to generate the weights.
The network was trained using the PyTorch toolkit~\cite{PyTorch}.
We applied data balancing so each minibatch
contained roughly equal numbers of recordings of each event type.
The performance of audio tagging was evaluated with the
micro-average $F_1$ on the recording level;
localization was evaluated with the
micro-average error rate (ER) and $F_1$ on 1-second segments.
A higher $F_1$ is better, while a lower error rate is better;
refer to~\cite{sed-eval} for detailed definitions of these evaluation metrics.
To make binary predictions on the recording level,
we thresholded the recording-level probabilities with class-specific thresholds,
which were tuned to optimize the audio tagging $F_1$ on the validation set.
To make binary predictions on 1-second segments,
we first computed segment-level probabilities
by aggregating the 10~frame-level probabilities within each segment,
then thresholded the segment-level probabilities
using the same class-specific thresholds.
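The segment-level prediction step can be sketched as follows (the within-segment aggregation function is an assumption of this sketch; the text above does not specify which one was used):

```python
import numpy as np

def segment_predictions(frame_probs, threshold, frames_per_segment=10,
                        pool=np.mean):
    # frame_probs: probabilities of one event type, shape [n_frames], at 10 Hz.
    # Group every 10 frames into a 1-second segment, aggregate within each
    # segment, and threshold the segment-level probabilities.
    n = len(frame_probs) // frames_per_segment
    segments = frame_probs[:n * frames_per_segment].reshape(n, frames_per_segment)
    return pool(segments, axis=1) > threshold

probs = np.concatenate([np.full(10, 0.9), np.full(10, 0.1)])
active = segment_predictions(probs, threshold=0.5)  # [True, False]
```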
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{r|c|c|c|c|c}
\hline
& \bf{Max Pool.} & \bf{Ave. Pool.} & \bf{Lin. Soft.} & \bf{Exp. Soft.} & \bf{Attention} \\
\hline
\multicolumn{1}{l|}{\bf{Audio Tagging} \quad} & & & & & \\
\bf{TP} & 284 & 297 & 317 & 298 & 301 \\
\bf{FN} & 322 & 309 & 289 & 308 & 305 \\
\bf{FP} & 364 & 285 & 359 & 324 & 317 \\
\bf{Precision} & 43.8 & 51.0 & 46.9 & 47.9 & 48.7 \\
\bf{Recall} & 46.9 & 49.0 & 52.3 & 49.2 & 49.7 \\
\bf{$F_1$} & \bf{45.3} & \bf{50.0} & \bf{49.5} & \bf{48.5} & \bf{49.2} \\
\hline
\multicolumn{1}{l|}{\bf{Localization} \quad} & & & & & \\
\bf{TP} & 1,206 & 2,114 & 1,832 & 2,121 & 1,926 \\
\bf{FN} & 3,154 & 2,246 & 2,528 & 2,239 & 2,434 \\
\bf{FP} & 1,253 & 3,758 & 2,187 & 3,437 & 3,309 \\
\bf{Precision} & 49.0 & 36.0 & 45.6 & 38.2 & 36.8 \\
\bf{Recall} & 27.7 & 48.5 & 42.0 & 48.6 & 44.2 \\
\bf{$F_1$} & \bf{35.4} & \bf{41.3} & \bf{43.7} & \bf{42.8} & \bf{40.1} \\
\hline
\multicolumn{1}{l|}{\bf{Localization} \quad} & & & & & \\
\bf{Sub.} & 712 & 1,385 & 1,040 & 1,292 & 1,275 \\
\bf{Del.} & 2,442 & 861 & 1,488 & 947 & 1,159 \\
\bf{Ins.} & 541 & 2,373 & 1,147 & 2,145 & 2,034 \\
\bf{Error Rate} & \bf{84.7} & \bf{105.9} & \bf{84.3} & \bf{100.6} & \bf{102.5} \\
\hline
\end{tabular}
}
\caption{Detailed performance of the five systems on Task~4 of the DCASE 2017 challenge.
Error rates and $F_1$'s are in percentages.}
\label{table:dcase-performance}
\end{table}
Table~\ref{table:dcase-performance} compares the performance of the five pooling functions.
All four of the new pooling functions outperform max pooling
in terms of $F_1$ for both audio tagging and localization.
In terms of localization error rate, however, only the linear softmax system
slightly outperforms max pooling;
the average pooling, exponential softmax and attention systems
yield error rates over 100\%.
Table~\ref{table:dcase-performance} also includes a breakdown of the error types.
All five pooling functions achieve a reasonable balance between
false negatives and false positives for audio tagging.
However, the breakdown of errors for localization reveals that
only the linear softmax pooling function maintains a good balance.
As analyzed in Sec.~\ref{sec:gradient},
the max pooling system makes too many false negatives,
which result in a low recall and a low $F_1$;
the average, exponential softmax and attention pooling functions
make too many false positives, which result in a high insertion rate
and consequently a high error rate.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{dcase-example-2}
\vspace{-1em}
\caption{The frame-level predictions of the five systems
for the \texttt{bus} event on the test recording ``\texttt{-nqm\char`_RJ2xj8}''
(unfortunately, this recording is no longer available on YouTube).
Best viewed in color.}
\label{fig:dcase-example-2}
\end{figure}
Fig.~\ref{fig:dcase-example-2} illustrates the false positives made by
the average pooling, exponential softmax and attention systems.
The recording contains speech and bicycle noise, but no bus noise.
On the recording level, all five systems correctly detect the bicycle sound
and deny the existence of bus sounds.
On the frame level, the max and linear softmax systems predict low probabilities
that safely stay below the threshold for \texttt{bus} throughout the recording.
In the average pooling and exponential softmax systems, however,
some frame-level probabilities exceed the threshold,
even though other frames with lower probabilities keep
the recording-level probability under control.
In the attention system, we see exactly what we have anticipated in Sec.~\ref{sec:gradient}:
the attention (light blue line) mostly focuses on regions
where the frame-level probabilities are low (8.2s).
This correctly produces a negative recording-level prediction,
but lets many frame-level false positives (4$\sim$6s) get away unconstrained.
The false positives shown in Fig.~\ref{fig:dcase-example-2}
are common throughout the data.
Considering the balance between false negatives and false positives for localization,
as well as the agreement between recording-level and frame-level predictions,
we recommend the linear softmax pooling function among all the pooling functions
we have studied.
\subsection{TALNet: Joint Tagging and Localization on Audio Set}
\label{sec:talnet}
We also compared the five pooling functions on the entire Audio Set~\cite{AudioSet}.
This corpus provides a training set of over 2~million recordings,
and an evaluation set of 20,371 recordings.
The recordings in both sets are 10-second YouTube video excerpts,
labeled with the presence or absence of 527~types of sound events.
We trained a CRNN with the structure shown in Fig.~\ref{fig:structures} (right).
We name the network ``TALNet'', where ``TAL'' stands for
``tagging and localization''.
The network has 10~convolutional layers, 5~pooling layers, and 1~recurrent layer.
We applied data balancing during training;
we also found it essential to apply batch normalization~\cite{batchnorm}
before the ReLU activation of each convolutional layer.
Audio Set only provides evaluation metrics for audio tagging.
These include the mean average precision (MAP),
mean area under the curve (MAUC), and d-prime ($d'$);
higher values are better for all these metrics.
To evaluate the performance of localization,
we applied TALNet to the DCASE 2017 challenge directly,
and measured the same metrics as in Sec.~\ref{sec:dcase}.
The results are listed in the top row of Table~\ref{table:talnet-performance}.
Although the linear softmax system is not the best
on every evaluation metric,
it is the only system that achieves a low error rate
and a high $F_1$ for localization.
The max pooling system falls behind on the $F_1$,
while the other three systems exhibit excessively high error rates.
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\setlength{\tabcolsep}{0.55em}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multirowcell{2}{\bf{System}} & \multirowcell{2}{\bf{\begin{tabular}{@{}c@{}}\# Train \\ Recs.\end{tabular}}} & \multicolumn{3}{c|}{\bf{Audio Set}} & \multicolumn{3}{c}{\bf{DCASE 2017}} \\
\cline{3-8}
& & \bf{MAP} & \bf{MAUC} & \bf{d'} & \bf{Tag.F1} & \bf{Loc.ER} & \bf{Loc.F1} \\
\hline
Max pooling & \multirowcell{5}{2M} & 0.351 & 0.961 & 2.497 & 52.6 & 81.5 & 42.2 \\
Average pooling & & 0.361 & \bf{0.966} & 2.574 & \bf{53.8} & 101.8 & \bf{46.8} \\
Linear softmax & & 0.359 & \bf{0.966} & \bf{2.575} & 52.3 & \bf{78.9} & 45.4 \\
Exp. softmax & & \bf{0.362} & 0.965 & 2.554 & 52.3 & 89.2 & 46.2 \\
Attention & & 0.354 & 0.963 & 2.531 & 51.4 & 92.0 & 45.5 \\
\hline
Hershey~\cite{hershey2017cnn, AudioSet} & 1M & 0.314 & 0.959 & 2.452 & & & \\
Kumar~\cite{kumar2017knowledge} & 22k & 0.213 & 0.927 & & & & \\
Shah~\cite{shah2018closer} & 22k & 0.229 & 0.927 & & & & \\
Wu~\cite{wu2017reducing} & 22k & & 0.927 & & & & \\
Kong~\cite{kong2018audio} & 2M & 0.327 & 0.965 & 2.558 & & & \\
Yu~\cite{yu2018multi} & 2M & \bf{0.360} & \bf{0.970} & \bf{2.660} & & & \\
Chen~\cite{chen2018class} & 600k & 0.316 & & & & & \\
Chou~\cite{chou2018learning} & 1M & 0.327 & 0.951 & & & & \\
\hline
\end{tabular}
}
\caption{The performance of TALNet on both Audio Set and the DCASE 2017 challenge,
compared with various systems in the literature (not all used the full training set).
Bold font indicates the best performance in each group.}
\label{table:talnet-performance}
\end{table}
In Table~\ref{table:talnet-performance} we also
list the results on Audio Set reported in all the literature we could find.
Our system closely matches the system of Yu \textit{et al.}\xspace~\cite{yu2018multi},
and outperforms all other systems by a large margin.
We would like to point out that the systems in the literature
either do not perform localization well, or do not perform localization at all.
For example, the system of Kong \textit{et al.}\xspace~\cite{kong2018audio}
uses the attention pooling function.
As we have demonstrated, this pooling function can cause
many false positives on the frame level, and suffer from a high error rate.
The system of Yu \textit{et al.}\xspace~\cite{yu2018multi} uses multi-level attention:
attention layers are built upon multiple hidden layers,
whose outputs are concatenated and further processed by a fully connected layer
to yield a recording-level prediction.
No frame-level predictions at all are made in this process.
In contrast, our TALNet is, to our knowledge, the first system that
achieves good performance for both audio tagging and localization at the same time.
\section{Conclusion and Discussion}
\label{sec:conclusion}
In this paper we have compared five pooling functions,
and shown linear softmax to be the best among the five.
The linear softmax pooling function has the following advantages:
(1) it allows the gradient to flow unobstructed;
(2) it achieves a balance between false negatives and false positives for localization;
(3) its predictions on the recording level and the frame level are relatively consistent.
Using the linear softmax pooling function, we have built TALNet,
which is the first network to achieve a strong performance
for both audio tagging and localization at the same time.
Our findings may not be limited to SED, but can apply generally to any MIL problem.
Nevertheless, linear softmax is by no means the ultimate optimal pooling function.
An adaptive pooling function has been proposed in~\cite{mcfee2018adaptive};
it gives a weight of $\exp(\alpha y_i)$ to the frame-level probability $y_i$,
and may be considered a generalization of the exponential softmax pooling function.
Along this line, we may also consider a weighting scheme of $y_i^\beta \exp(\alpha y_i)$,
which would subsume both the linear softmax and the exponential softmax pooling functions.
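Such a generalized weighting scheme is easy to prototype (a hypothetical sketch: the combined $y_i^\beta \exp(\alpha y_i)$ weighting is a suggestion of this paper and has not been evaluated here):

```python
import numpy as np

def generalized_softmax(y, alpha=0.0, beta=0.0):
    # weighted average with weights y_i^beta * exp(alpha * y_i):
    #   (alpha, beta) = (0, 0) -> average pooling
    #   (alpha, beta) = (0, 1) -> linear softmax
    #   (alpha, beta) = (1, 0) -> exponential softmax
    w = y ** beta * np.exp(alpha * y)
    return (y * w).sum() / w.sum()
```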
At the same time, the flexibility of the attention pooling function
to learn the weights on the fly is still attractive,
despite its excessive false positives on the frame level.
We have found that the false positives are caused by
the attention focusing on frames with low probabilities.
We believe the attention pooling function is still promising
if we could somehow impose a constraint of monotonicity
that frames with larger probabilities must receive larger weights.
For more details about the experiments, such as the data balancing algorithm,
how the class-specific thresholds were tuned,
and the hyperparameters for training, please refer to Chapter~3
of the first author's PhD thesis~\cite{Yun-PhD-Thesis}.
The code and acoustic features for the experiments are available at
{\footnotesize \url{https://github.com/MaigoAkisame/cmu-thesis}}.
\vfill\pagebreak
\section{REFERENCES}
\label{sec:refs}
\printbibliography[heading=none]
\end{document}
\section{Introduction and Definitions}
The theory of symmetric homology, in which the symmetric groups
$\Sigma_k^\mathrm{op}$, for $k \geq 0$, play the role that the cyclic
groups do in cyclic homology, begins with the definition of the category
$\Delta S$, containing the simplicial category $\Delta$ as subcategory.
Indeed, $\Delta S$ is an example of {\it crossed simplicial group}~\cite{FL}.
\subsection{The category $\Delta S$}
Let $\Delta S$ be the category that has as objects, the ordered sets $[n] =
\{0, 1, \ldots, n\}$ for $n \geq 0$, and as morphisms, pairs $(\phi, g)$, where
$\phi : [n] \to [m]$ is a non-decreasing map of sets (\textit{i.e.}, a morphism
in $\Delta$), and $g \in \Sigma_{n+1}^{\mathrm{op}}$ (the opposite group of the
symmetric group acting on $[n]$). The element $g$ represents an automorphism of
$[n]$, and as a set map, takes $i \in [n]$ to $g^{-1}(i)$. Equivalently, a
morphism in $\Delta S$ is a morphism in $\Delta$ together with a total ordering
of the domain $[n]$. Composition of morphisms is achieved as in~\cite{FL},
namely, $(\phi, g) \circ (\psi, h) := (\phi \cdot g^*(\psi), \psi^*(g)
\cdot h)$.
\begin{rmk}
Observe that the properties of $g^*(\phi)$ and $\phi^*(g)$ stated in Prop.~1.6
of~\cite{FL} are formally rather similar to the properties of exponents.
Indeed, the notation $g^\phi := \phi^*(g), \phi^g := g^*(\phi)$ will generally
be used in lieu of the original notation in what follows.
\end{rmk}
For each $n$, let $\tau_n$ be the $(n+1)$-cycle $(0, n, n-1, \ldots, 1) \in
\Sigma_{n+1}^\mathrm{op}$. The subgroup generated by $\tau_n$ is isomorphic to
the cyclic group $C_{n+1}^\mathrm{op} = C_{n+1}$. Indeed, we may view the
cyclic category $\Delta C$ as the subcategory of $\Delta S$ consisting of all
objects $[n]$ for $n \geq 0$, together with those morphisms of $\Delta S$ of the
form $(\phi, \tau_n^i)$ (cf.~\cite{L}). In this way, we get a natural chain of
inclusions, $\Delta \hookrightarrow \Delta C \hookrightarrow \Delta S$.
It is often helpful to represent morphisms of $\Delta S$ as diagrams of points
and lines, indicating images of set maps. Using these diagrams, we may see
clearly how $\psi^g$ and $g^\psi$ are related to $(\psi, g)$ (see
Figure~\ref{diag.morphism}).
\begin{figure}[ht]
\psset{unit=1in}
\begin{pspicture}(4,4)
\psdots[linecolor=black, dotsize=4pt]
(0.3, 2.4)(0.6, 2.7)(0.9, 3.0)(1.2, 3.3)(1.5, 3.6)
(2.7, 2.4)(3.0, 2.7)(3.3, 3.0)(3.6, 3.3)
(2.7, 0.3)(3.0, 0.6)(3.3, 0.9)(3.6, 1.2)
(0.3, 0.3)(0.6, 0.6)(0.9, 0.9)(1.2, 1.2)(1.5, 1.5)
\psline[linewidth=1pt, linecolor=black](0.3, 2.4)(0.3, 0.3)
\psline[linewidth=1pt, linecolor=black](0.6, 2.7)(0.6, 0.6)
\psline[linewidth=1pt, linecolor=black](0.9, 3.0)(1.5, 1.5)
\psline[linewidth=1pt, linecolor=black](1.2, 3.3)(0.9, 0.9)
\psline[linewidth=1pt, linecolor=black](1.5, 3.6)(1.2, 1.2)
\psline[linewidth=1pt, linecolor=black](0.3, 0.3)(2.7, 0.3)
\psline[linewidth=1pt, linecolor=black](0.6, 0.6)(2.7, 0.3)
\psline[linewidth=1pt, linecolor=black](0.9, 0.9)(3.0, 0.6)
\psline[linewidth=1pt, linecolor=black](1.2, 1.2)(3.0, 0.6)
\psline[linewidth=1pt, linecolor=black](1.5, 1.5)(3.6, 1.2)
\psline[linewidth=1pt, linecolor=black](0.3, 2.4)(2.7, 2.4)
\psline[linewidth=1pt, linecolor=black](0.6, 2.7)(2.7, 2.4)
\psline[linewidth=1pt, linecolor=black](0.9, 3.0)(3.0, 2.7)
\psline[linewidth=1pt, linecolor=black](1.2, 3.3)(3.6, 3.3)
\psline[linewidth=1pt, linecolor=black](1.5, 3.6)(3.6, 3.3)
\psline[linewidth=1pt, linecolor=black](2.7, 2.4)(2.7, 0.3)
\psline[linewidth=1pt, linecolor=black](3.0, 2.7)(3.6, 1.2)
\psline[linewidth=1pt, linecolor=black](3.3, 3.0)(3.3, 0.9)
\psline[linewidth=1pt, linecolor=black](3.6, 3.3)(3.0, 0.6)
\rput(0.2, 2.4){$0$}
\rput(0.5, 2.7){$1$}
\rput(0.8, 3.0){$2$}
\rput(1.1, 3.3){$3$}
\rput(1.4, 3.6){$4$}
\rput(0.2, 0.3){$0$}
\rput(0.5, 0.6){$1$}
\rput(0.8, 0.9){$2$}
\rput(1.1, 1.2){$3$}
\rput(1.4, 1.5){$4$}
\rput(2.6, 2.5){$0$}
\rput(2.9, 2.8){$1$}
\rput(3.2, 3.1){$2$}
\rput(3.5, 3.4){$3$}
\rput(2.6, 0.4){$0$}
\rput(2.9, 0.7){$1$}
\rput(3.2, 1.0){$2$}
\rput(3.5, 1.3){$3$}
\rput(0.1, 1.2){$g^\psi$}
\rput(2.4, 3.7){$\psi$}
\rput(3.6, 2.3){$g$}
\rput(1.8, 0.1){$\psi^g$}
\end{pspicture}
\caption[Morphisms of $\Delta S$]{Morphisms of $\Delta S$: $g \cdot \psi =
\big(g^*(\psi), \psi^*(g)\big) = \big(\psi^g, g^{\psi}\big)$}
\label{diag.morphism}
\end{figure}
An equivalent characterization of $\Delta S$ comes from Pirashvili
(cf.~\cite{P}), as the category $\mathcal{F}(\mathrm{as})$ of `non-commutative'
sets. The objects are sets $\underline{n} := \{1, 2, \ldots, n\}$ for
$n \geq 0$. By convention, $\underline{0}$ is the empty set. A morphism in
$\mathrm{Mor}_{\mathcal{F}(\mathrm{as})} (\underline{n}, \underline{m})$
consists of a map (of sets) $f : \underline{n} \to \underline{m}$ together with
a total ordering $\Pi_j$ on $f^{-1}(j)$ for all $j \in \underline{m}$. In such
a case, denote by $\Pi$ the partial order generated by all $\Pi_j$. If
$(f, \Pi) : \underline{n} \to \underline{m}$ and $(g, \Psi) : \underline{m} \to
\underline{p}$, their composition will be $(gf, \Phi)$, where $\Phi_j$ is the
total ordering on $(gf)^{-1}(j)$ (for all $j \in \underline{p}$) induced by
$\Pi$ and $\Psi$. Explicitly, for each pair $i_1, i_2 \in (gf)^{-1}(j)$, we
have $i_1 < i_2$ under $\Phi$ if and only if [$f(i_1) < f(i_2)$ under $\Psi$]
or [$f(i_1) = f(i_2)$ and $i_1 < i_2$ under $\Pi$].
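The induced ordering in this composition rule can be made concrete by representing a morphism of $\mathcal{F}(\mathrm{as})$ as its list of ordered fibers (an illustrative sketch in Python; the encoding is ours):

```python
def compose(g_fibers, f_fibers):
    # A morphism (f, Pi): n -> m is encoded as a list of m ordered fibers,
    # f_fibers[j-1] listing f^{-1}(j) in the order Pi_j (targets 1-indexed
    # as in the paper, list positions 0-indexed).  The fiber of the
    # composite g.f over k is obtained by concatenating the f-fibers of
    # the elements of g^{-1}(k), taken in the order Psi_k -- this realizes
    # the induced ordering Phi described above.
    return [[i for j in g_fiber for i in f_fibers[j - 1]]
            for g_fiber in g_fibers]

# f: 3 -> 2 with f(1) = f(3) = 1 ordered (3, 1), and f(2) = 2
f = [[3, 1], [2]]
# g: 2 -> 1 with g(1) = g(2) = 1 ordered (2, 1)
g = [[2, 1]]
print(compose(g, f))  # [[2, 3, 1]]
```

In the example, $2$ precedes $3$ and $1$ in the composite fiber because $f(2) = 2 > 1 = f(3) = f(1)$ is reversed under $\Psi$, and $3$ precedes $1$ because $3 < 1$ under $\Pi_1$.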
There is an obvious inclusion of categories, $\Delta S \hookrightarrow
\mathcal{F}(\mathrm{as})$, taking $[n]$ to $\underline{n+1}$, but there is no
object of $\Delta S$ that maps to $\underline{0}$. It will be useful to define
$\Delta S_+ \supset \Delta S$ which is isomorphic to $\mathcal{F}(\mathrm{as})$:
\begin{definition}
$\Delta S_+$ is the category consisting of all objects and morphisms of
$\Delta S$, with the additional object $[-1]$, representing the empty set,
and a unique morphism $\iota_n : [-1] \to [n]$ for each $n \geq -1$.
\end{definition}
\begin{rmk}
Pirashvili's construction is a special case of a more general construction
due to May and Thomason \cite{MT}. This construction associates to any
topological operad $\{\mathcal{C}(n)\}_{n \geq 0}$ a topological category
$\widehat{\mathcal{C}}$ together with a functor $\widehat{\mathcal{C}} \to
\mathcal{F}$, where $\mathcal{F}$ is the category of finite sets, such that
the inverse image of any function $f: \underline{m} \to \underline{n}$ is the
space $\prod_{i=1}^n \mathcal{C}(\# f^{-1}(i))$. Composition in
$\widehat{\mathcal{C}}$ is defined using the composition of the operad. May
and Thomason refer to $\widehat{\mathcal{C}}$ as the {\it category of
operators} associated to $\mathcal{C}$. They were interested in the case of an
$E_\infty$ operad, but their construction evidently works for any operad. The
category of operators associated to the discrete $A_\infty$ operad
$\mathcal{A}ss$, which parametrizes monoid structures, is precisely
Pirashvili's construction of $\mathcal{F}(\mathrm{as})$, {\it i.e.} $\Delta S_+$.
\end{rmk}
One very useful advantage of enlarging our category from $\Delta S$ to
$\Delta S_+$ is the added structure inherent in $\Delta S_+$.
\begin{prop}\label{prop.deltaSpermutative}
$\Delta S_+$ is a permutative category.
\end{prop}
\begin{proof}
Define the monoid product on objects by $[n] \odot [m] := [n+m+1]$, (disjoint
union of sets), and on morphisms $(\phi, g) : [n] \to [n']$, $(\psi, h) :[m]
\to [m']$, by $(\phi,g) \odot (\psi,h) = (\eta, k) : [n+m+1] \to [n'+m'+1]$,
where $(\eta, k)$ is just the morphism $(\phi, g)$ acting on the first $n+1$
points of $[n+m+1]$, and $(\psi, h)$ acting on the remaining points.
The unit object will be $[-1] = \emptyset$. $\odot$ is clearly associative,
and $[-1]$ acts as two-sided identity. Finally, define the transposition
transformation $\gamma_{n,m} : [n] \odot [m] \to [m] \odot [n]$ to be the
identity on objects, and on morphisms to be precomposition with the block
transposition that switches the first block of size $n+1$ with the second
block of size $m+1$.
\end{proof}
\begin{rmk}
The fact that $\Delta S_+$ is permutative shall be exploited to prove that $HS_*(A)$
admits homology operations in a forthcoming paper.
\end{rmk}
For the purposes of computation, a morphism $\alpha : [n] \to [m]$ of $\Delta S$
may be conveniently represented as a tensor product of monomials in the formal
non-commuting variables $\{x_0, x_1, \ldots,$ $x_n\}$. Let $\alpha =
(\phi, g)$, with $\phi \in \mathrm{Mor}_\Delta([n],[m])$ and $g \in
\Sigma_{n+1}^\mathrm{op}$. The tensor representation of $\alpha$ will have
$m + 1$ tensor factors. Each $x_i$ will occur exactly once, in the order
$x_{g(0)}, x_{g(1)}, \ldots, x_{g(n)}$. The $i^{th}$ tensor factor consists of
the product of $\#\phi^{-1}(i-1)$ variables, with the convention that the empty
product will be denoted $1$. Thus, the $i^{th}$ tensor factor records the total
ordering of $\phi^{-1}(i-1)$. As an example, the tensor representation of the
morphism depicted in Fig.~\ref{diag.morphism-tensor} is $x_1x_0 \otimes x_3x_4
\otimes 1 \otimes x_2$.
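The passage from a pair $(\phi, g)$ to its tensor representation is purely combinatorial, and may be sketched as follows (our own encoding, for illustration only: $\phi$ is recorded by its list of values, $g$ by the list $g(0), \ldots, g(n)$, and a monomial by its list of variable indices, with the empty list standing for the empty product $1$):

```python
def tensor_rep(phi, g, m):
    """Tensor representation of (phi, g) : [n] -> [m] in Delta S.

    phi -- list [phi(0), ..., phi(n)] of values in {0, ..., m}, nondecreasing
    g   -- list [g(0), ..., g(n)]; the variables occur in the order
           x_{g(0)}, ..., x_{g(n)}
    Returns m+1 monomials, each a list of variable indices.
    """
    order = list(g)
    rep, pos = [], 0
    for i in range(m + 1):
        size = phi.count(i)          # size of the preimage phi^{-1}(i)
        rep.append(order[pos:pos + size])
        pos += size
    return rep

# The morphism of Fig. \ref{diag.morphism-tensor}: phi = (0,0,1,1,3), g = (1,0,3,4,2)
assert tensor_rep([0, 0, 1, 1, 3], [1, 0, 3, 4, 2], 3) == [[1, 0], [3, 4], [], [2]]
```

For the morphism of the figure, this recovers the factors $x_1x_0$, $x_3x_4$, $1$, $x_2$.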
\begin{figure}[ht]
\psset{unit=1in}
\begin{pspicture}(4,4)
\psdots[linecolor=black, dotsize=4pt]
(0.3, 2.4)(0.6, 2.7)(0.9, 3.0)(1.2, 3.3)(1.5, 3.6)
(2.7, 0.3)(3.0, 0.6)(3.3, 0.9)(3.6, 1.2)
(0.3, 0.3)(0.6, 0.6)(0.9, 0.9)(1.2, 1.2)(1.5, 1.5)
\psline[linewidth=1pt, linecolor=black](0.3, 2.4)(0.6, 0.6)
\psline[linewidth=1pt, linecolor=black](0.6, 2.7)(0.3, 0.3)
\psline[linewidth=1pt, linecolor=black](0.9, 3.0)(1.5, 1.5)
\psline[linewidth=1pt, linecolor=black](1.2, 3.3)(0.9, 0.9)
\psline[linewidth=1pt, linecolor=black](1.5, 3.6)(1.2, 1.2)
\psline[linewidth=1pt, linecolor=black](0.3, 0.3)(2.7, 0.3)
\psline[linewidth=1pt, linecolor=black](0.6, 0.6)(2.7, 0.3)
\psline[linewidth=1pt, linecolor=black](0.9, 0.9)(3.0, 0.6)
\psline[linewidth=1pt, linecolor=black](1.2, 1.2)(3.0, 0.6)
\psline[linewidth=1pt, linecolor=black](1.5, 1.5)(3.6, 1.2)
\rput(0.2, 2.4){$0$}
\rput(0.5, 2.7){$1$}
\rput(0.8, 3.0){$2$}
\rput(1.1, 3.3){$3$}
\rput(1.4, 3.6){$4$}
\rput(0.2, 0.3){$0$}
\rput(0.5, 0.6){$1$}
\rput(0.8, 0.9){$2$}
\rput(1.1, 1.2){$3$}
\rput(1.4, 1.5){$4$}
\rput(2.6, 0.4){$0$}
\rput(2.9, 0.7){$1$}
\rput(3.2, 1.0){$2$}
\rput(3.5, 1.3){$3$}
\rput(2.7, 2.4){$x_1x_0 \otimes x_3x_4 \otimes 1 \otimes x_2$}
\end{pspicture}
\caption[Morphisms of $\Delta S$ as Tensors]{Morphisms of $\Delta S$ in tensor
notation}
\label{diag.morphism-tensor}
\end{figure}
With this notation, the
composition of two morphisms $\alpha = X_0 \otimes X_1 \otimes \ldots \otimes
X_m : [n] \to [m]$ and $\beta = Y_0 \otimes Y_1 \otimes \ldots \otimes Y_n : [p] \to
[n]$ is given by $\alpha \beta = Z_0 \otimes Z_1 \otimes \ldots \otimes Z_m$,
where $Z_i$ is determined by replacing each variable in the monomial $X_i =
x_{j_1} \ldots x_{j_s}$ in $\alpha$ by the corresponding monomials $Y_{j_k}$ in
$\beta$. So, $Z_i = Y_{j_1} \ldots Y_{j_s}$.
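This substitution of monomials is easy to mechanize. A minimal sketch (our own encoding: a morphism is a list of monomials, each monomial a list of variable indices):

```python
def compose(alpha, beta):
    """alpha o beta for alpha : [n] -> [m] and beta : [p] -> [n] in tensor
    notation: each variable x_j in a monomial of alpha is replaced by the
    monomial beta[j], i.e. Z_i = Y_{j_1} ... Y_{j_s}."""
    return [[v for j in mono for v in beta[j]] for mono in alpha]

# alpha = x_1 x_0 (x) x_2 : [2] -> [1],  beta = x_1 (x) x_0 x_2 (x) 1 : [2] -> [2]
# alpha beta = (x_0 x_2)(x_1) (x) 1
assert compose([[1, 0], [2]], [[1], [0, 2], []]) == [[0, 2, 1], []]
```

Associativity of this substitution reflects associativity of composition in $\Delta S$.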
\subsection{Homological Algebra of Functors}
Recall, for a category $\mathscr{C}$, a \mbox{$\mathscr{C}$-module} is a covariant
functor $F : \mathscr{C} \to \mbox{\textrm{$k$-\textbf{Mod}}}$. Similarly, a
$\mathscr{C}^\mathrm{op}$-module is a contravariant functor $G : \mathscr{C} \to
\mbox{\textrm{$k$-\textbf{Mod}}}$. Let $M$ be a
$\mathscr{C}^\mathrm{op}$-module and $N$ be a \mbox{$\mathscr{C}$-module}.
Following MacLane~\cite{ML}, define the tensor product of functors as a coend,
$M \otimes_\mathscr{C} N := \int^X (MX) \otimes (NX)$. That is,
\[
M \otimes_{\mathscr{C}} N := \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}
M(X) \otimes_k N(X) / \approx,
\]
where the equivalence $\approx$ is generated by $y \otimes f_*(x) \approx f^*(y)
\otimes x$ for $f \in \mathrm{Mor}_{\mathscr{C}}(X, Y)$, $x \in N(X)$ and $y \in
M(Y)$. The {\it trivial} $\mathscr{C}$-module, resp.
$\mathscr{C}^\mathrm{op}$-module, denoted by $\underline{k}$ (for either
variance), is the functor taking each object to $k$ and each morphism to the
identity.
Define a product on generators of $k[\mathrm{Mor}\mathscr{C}]$ by setting
$f \cdot g := f \circ g$, if the maps are composable, and $f \cdot g := 0$
otherwise. Extending the product $k$-linearly, we obtain a ring structure on
$R_{\mathscr{C}} := k[\mathrm{Mor}\mathscr{C}]$. In general $R_{\mathscr{C}}$
does not have a unit, only {\it local units}; that is, for any finite set of
elements of $R_{\mathscr{C}}$, there is an element that acts as identity element
for these elements. Observe, a $\mathscr{C}$-module $M$ is equivalent to a left
$R_{\mathscr{C}}$-module structure on $\displaystyle{\bigoplus_{X \in \mathrm{Obj}
\mathscr{C}} M(X)}$.
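A toy example may clarify the role of local units: when $\mathscr{C}$ has infinitely many objects the full sum of identities is not available, but any finite set of elements is covered by a finite partial sum of identities. The sketch below (a toy category of our own choosing, with objects $0, 1$ and one non-identity arrow $f : 0 \to 1$; morphisms are encoded as triples (name, source, target)) illustrates how a sum of identities acts as a unit:

```python
from collections import defaultdict

# Toy category: objects 0, 1; identities; one non-identity arrow f : 0 -> 1.
ID0, ID1, F = ('id', 0, 0), ('id', 1, 1), ('f', 0, 1)

def mor_product(a, b):
    """a . b := a o b when composable (source of a == target of b), else None."""
    if a[1] != b[2]:
        return None              # not composable: the product is 0
    if a[0] == 'id':
        return b
    if b[0] == 'id':
        return a
    return None                  # no composable non-identity pairs in this category

def mult(x, y):
    """Product of k-linear combinations, encoded as dicts morphism -> coefficient."""
    out = defaultdict(int)
    for a, ca in x.items():
        for b, cb in y.items():
            ab = mor_product(a, b)
            if ab is not None:
                out[ab] += ca * cb
    return {mor: c for mor, c in out.items() if c != 0}

e = {ID0: 1, ID1: 1}             # sum of identities: a (local) unit
f = {F: 1}
assert mult(e, f) == f and mult(f, e) == f
assert mult(f, f) == {}          # f . f = 0 since f is not composable with itself
```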
In either characterization, it makes sense to talk about projective
$\mathscr{C}$-modules and the torsion products $\mathrm{Tor}_i^{\mathscr{C}}(M,
N)$ for $\mathscr{C}$-module $M$ and $\mathscr{C}^\mathrm{op}$-module $N$.
(It is also possible to define $\mathrm{Tor}_i^\mathscr{C}(M, N)$ directly as
the derived functors of the categorical tensor product construction;
see~\cite{FL}.)
\begin{rmk}
Note, the existing literature based on the work of Connes, Loday and Quillen
consistently defines the categorical tensor product in the reverse sense:
$N \otimes_{\mathscr{C}} M$ is the direct sum of copies of $NX \otimes_k MX$
modded out by the equivalence $x \otimes f^*(y) \approx f_*(x) \otimes y$ for
all $\mathscr{C}$-morphisms $f : X \to Y$. In this context, $N$ is covariant,
while $M$ is contravariant. We chose to follow the convention of Pirashvili
and Richter \cite{PR} in writing tensor products as $M \otimes_{\mathscr{C}}
N$ so that the equivalence $\xi : $\mbox{$\mathscr{C}$-\textbf{Mod}} $\to$
\mbox{$k[\mathrm{Mor}\mathscr{C}]$-\textbf{Mod}} passes to tensor products in
a straightforward way: $\xi( M \otimes_{\mathscr{C}} N ) = \xi(M)
\otimes_{k[\mathrm{Mor}\mathscr{C}]} \xi(N)$.
\end{rmk}
\subsection{The Symmetric Bar Construction}
Now that we have defined the category $\Delta S$, the next step should be to
define an appropriate bar construction. Recall that the {\it cyclic bar
construction} is a functor $B^{cyc}_*A : \Delta C^{\mathrm{op}} \to
k$-\textbf{Mod}. We then take the groups $\mathrm{Tor}^{\Delta C}_i(B^{cyc}_*A,
\underline{k})$ as our definition of $HC_i(A)$ for $i \geq 0$. However the
results of~\cite{FL} show that the cyclic bar construction does not extend to a
functor $\Delta S^{\mathrm{op}} \to k$-\textbf{Mod}. Furthermore, for {\it any}
functor $F : \Delta S^{\mathrm{op}} \to k$-\textbf{Mod}, the groups
$\mathrm{Tor}^{\Delta S}_i(F, \underline{k})$ simply compute the homology of
the underlying simplicial module of $F$ (given by restricting $F$ to
$\Delta^{\mathrm{op}}$). The second author discovered~\cite{F} that there is a
natural extension of the cyclic bar construction not to a {\it contravariant}
functor on $\Delta S$, but to a {\it covariant} functor.
\begin{definition}\label{def.symbar}
Let $A$ be an associative, unital algebra over a commutative ground ring $k$.
Define a functor $B_*^{sym}A : \Delta S \to k$-\textbf{Mod} by:
\[
B_n^{sym}A := B_*^{sym}A[n] := A^{\otimes (n+1)}
\]
\[
B_*^{sym}A(\alpha) : (a_0 \otimes a_1 \otimes \ldots \otimes a_n) \mapsto
\alpha(a_0, \ldots, a_n),
\]
where $\alpha : [n] \to [m]$ is represented in tensor notation, and evaluation
at $(a_0, \ldots, a_n)$ simply amounts to substituting each $a_i$ for $x_i$
and multiplying the resulting monomials in $A$. If the pre-image
$\alpha^{-1}(i)$ is empty, then the unit of $A$ is inserted. Observe that
$B_*^{sym}A$ is natural in $A$.
\end{definition}
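As an illustration of the definition (ours, not from the sources cited here), one may let strings under concatenation stand in for basis elements of a free noncommutative algebra, with the empty string as the unit; the action of a morphism in tensor notation is then string substitution:

```python
def bsym(alpha, a):
    """Action of a Delta S morphism alpha : [n] -> [m] (tensor notation:
    m+1 monomials, each a list of variable indices) on a basis element
    a_0 (x) ... (x) a_n.  Strings model basis elements of a free algebra;
    the empty string is the unit, inserted for empty preimages."""
    return tuple(''.join(a[j] for j in mono) for mono in alpha)

# The morphism x_1 x_0 (x) x_3 x_4 (x) 1 (x) x_2 of Fig. \ref{diag.morphism-tensor}:
assert bsym([[1, 0], [3, 4], [], [2]], ('a', 'b', 'c', 'd', 'e')) == ('ba', 'de', '', 'c')
```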
\begin{rmk}
Fiedorowicz~\cite{F} defines the symmetric bar construction functor for
morphisms $\alpha = (\phi, g)$, where $\phi \in \mathrm{Mor}_\Delta([n],[m])$
and $g \in \Sigma_{n+1}^{\mathrm{op}}$, via
\begin{eqnarray*}
B_*^{sym}A(\phi)(a_0 \otimes a_1 \otimes \ldots \otimes a_n) &=&
\left( \prod_{i \in \phi^{-1}(0)} a_i \right)\otimes \ldots
\otimes \left( \prod_{i \in \phi^{-1}(m)} a_i \right)\\
B_*^{sym}A(g)(a_0 \otimes a_1 \otimes \ldots \otimes a_n) &=&
a_{g^{-1}(0)} \otimes a_{g^{-1}(1)} \otimes \ldots \otimes a_{g^{-1}(n)}
\end{eqnarray*}
However, in order to be consistent with the earlier tensor notation, we
require $B_*^{sym}A(g)$ to permute the tensor factors in the inverse
sense:
\[
B_*^{sym}A(g)(a_0 \otimes a_1 \otimes \ldots \otimes a_n) =
a_{g(0)} \otimes a_{g(1)} \otimes \ldots \otimes a_{g(n)}.
\]
\end{rmk}
$B_*^{sym}A$ may be regarded as a simplicial \mbox{$k$-module} via the chain of
functors, $\Delta^{\mathrm{op}} \hookrightarrow \Delta C^{\mathrm{op}}
\stackrel{\cong}{\to} \Delta C \hookrightarrow \Delta S$. Here, the isomorphism
$D : \Delta C^{\mathrm{op}} \to \Delta C$ is the standard duality
(see~\cite{L}). Fiedorowicz showed in~\cite{F} that $B_*^{sym}A \circ D
= B_*^{cyc}A$. Note that by duality of $\Delta C$, it is equivalent to use the
functor $B_*^{sym}A$, restricted to morphisms of $\Delta C$ in order to compute
$HC_*(A)$.
\begin{definition}
The \textbf{symmetric homology} of an associative, unital $k$-algebra $A$
is denoted $HS_*(A)$, and is defined as:
\[
HS_*(A) := \mathrm{Tor}_*^{\Delta S}\left(\underline{k},B_*^{sym}A\right)
\]
\end{definition}
\begin{rmk}
Since $\underline{k}\otimes_{\Delta S} M \cong \colim_{\Delta S}M$, for any
$\Delta S$-module $M$, we can alternatively describe symmetric homology as
derived functors of $\colim$.
\[
HS_i(A) = {\colim_{\Delta S}}^{(i)}B_*^{sym}A.
\]
(To see the relation with higher colimits, we need to tensor a projective
resolution of $B_*^{sym}A$ with $\underline{k}$).
\end{rmk}
\subsection{The Standard Resolution}\label{sub.stand-res}
Let $\mathscr{C}$ be a category. The rank $1$ free \mbox{$\mathscr{C}$-module}
is $k[\mathrm{Mor}\mathscr{C}]$, with the left action of composition of
morphisms. Now, as a $k$-module, $k[\mathrm{Mor}\mathscr{C}]$ decomposes into the
direct sum,
\[
k[\mathrm{Mor}\mathscr{C}] = \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}
\left( \bigoplus_{Y \in \mathrm{Obj}\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(X, Y)\right] \right).
\]
By abuse of notation, denote $\displaystyle{\bigoplus_{Y \in \mathrm{Obj}\mathscr{C}}}
k\left[\mathrm{Mor}_\mathscr{C}(X, Y)\right]$ by
$k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right]$.
So there is a direct sum decomposition of left \mbox{$\mathscr{C}$-modules},
\[
k[\mathrm{Mor}\mathscr{C}] = \bigoplus_{X \in \mathrm{Obj}\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right].
\]
Thus, the submodules $k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right]$ are
projective left \mbox{$\mathscr{C}$-modules}.
Similarly, $k[\mathrm{Mor}\mathscr{C}]$ is the rank $1$ free right
\mbox{$\mathscr{C}$-module}, with right action of pre-composition of morphisms,
and as such, decomposes as:
\[
k[\mathrm{Mor}\mathscr{C}] = \bigoplus_{Y \in \mathrm{Obj}\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right]
\]
Again, the notation $k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right]$ is shorthand
for $\displaystyle{\bigoplus_{X \in \mathrm{Obj}\mathscr{C}}}
k\left[\mathrm{Mor}_\mathscr{C}(X, Y)\right]$. The submodules
$k\left[\mathrm{Mor}_\mathscr{C}(-, Y)\right]$ are projective as right
\mbox{$\mathscr{C}$-modules}.
It will also be important to note that $k\left[\mathrm{Mor}_\mathscr{C}(-,
Y)\right] \otimes_{\mathscr{C}} N \cong N(Y)$ as $k$-module via the evaluation
map $f \otimes y \mapsto f_*(y)$. Similarly, $M \otimes_{\mathscr{C}}
k\left[\mathrm{Mor}_\mathscr{C}(X, -)\right]\cong M(X)$.
Recall ({\it e.g.} in Quillen~\cite{Q}, Section~1), for an object $Y \in
\mathrm{Obj}\mathscr{C}$, the category of {\it objects under $Y$}, denoted
$Y \setminus \mathscr{C}$, has as objects all pairs $(X, \phi)$ such that
$\phi \in \mathrm{Mor}_{\mathscr{C}}(Y, X)$, and as morphisms the commutative
triangles. Define a contravariant functor $( - \setminus \mathscr{C})$ from
$\mathscr{C}$ to $\textbf{Cat}$ as follows: An object $Y$ is sent to the
under category $Y \setminus \mathscr{C}$. If $\nu : Y \to Y'$ is a morphism
in $\mathscr{C}$, the functor $(\nu \setminus \mathscr{C}) : Y' \setminus
\mathscr{C} \to Y \setminus \mathscr{C}$ is defined on objects by $(X, \phi)
\mapsto (X, \phi\nu)$. Thus, $(- \setminus \mathscr{C})$ is a
$\mathscr{C}^\mathrm{op}$-category.
As noted in~\cite{FL}, the nerve of $(- \setminus \mathscr{C})$ is a simplicial
$\mathscr{C}^\mathrm{op}$-set, and the complex $L_*$, given by:
\[
L_n := k\left[ N(- \setminus \mathscr{C})_n \right]
\]
is a resolution by projective \mbox{$\mathscr{C}^\mathrm{op}$-modules} of the
trivial $\mathscr{C}^\mathrm{op}$-module, $\underline{k}$. Here, the boundary
map is $\partial := \sum_i (-1)^i d_i$, where the $d_i$'s come from the
simplicial structure of the nerve of $(- \setminus \mathscr{C})$.
For completeness, we shall provide a proof of:
\begin{prop}\label{prop.contractibility_under-category}
$L_*$ is a resolution of $\underline{k}$ by projective
\mbox{$\mathscr{C}^\mathrm{op}$-modules}.
\end{prop}
\begin{proof}
Fix $C \in \mathrm{Obj}\mathscr{C}$. Let $\epsilon : L_0(C) \to k$ be the map
defined on generators by $\epsilon( C \to A_0 ) := 1_k$. The complex
\[
k \stackrel{\epsilon}{\leftarrow} L_0(C)
\stackrel{\partial}{\leftarrow} L_1(C) \stackrel{\partial}{\leftarrow}
\ldots
\]
is chain homotopy equivalent to the zero complex via the contracting homotopy,
\[
h_{-1} : 1 \mapsto (C \stackrel{id}{\to} C)
\]
\[
h_n : (C \to A_0 \to \ldots \to A_n) \mapsto (C \stackrel{id}{\to} C \to A_0
\to \ldots \to A_n), \qquad \textrm{for $n \geq 0$}
\]
Note that each $\mathscr{C}^\mathrm{op}$-module $L_n$ is projective.
\end{proof}
Thus, we may compute $HS_*(A)$ as the homology groups of the following complex:
\begin{equation}\label{symhomcomplex}
0 \longleftarrow L_0 \otimes_{\Delta S} B_*^{sym}A \longleftarrow
L_1 \otimes_{\Delta S} B_*^{sym}A \longleftarrow L_2 \otimes_{\Delta S}
B_*^{sym}A \longleftarrow \ldots
\end{equation}
Denote the complex~(\ref{symhomcomplex}) by $\mathscr{Y}_*A$. Indeed,
$\mathscr{Y}_*A$ is a simplicial $k$-module, and the assignment $A \mapsto
\mathscr{Y}_*A$ is a functor $k$-\textbf{Alg} $ \to k$-\textbf{SimpMod}.
\begin{cor}\label{cor.SymHomComplex}
For an associative, unital $k$-algebra $A$,
$HS_*(A) = H_*\left( \mathscr{Y}_*A;\,k \right)$. That is,
\[
HS_*(A) = H_*\left( k[N(- \setminus \Delta S)] \otimes_{\Delta S}
B_*^{sym}A;\,k \right).
\]
\end{cor}
\begin{rmk}\label{rmk.HC}
By remarks above, it is clear that the related complex
$k[N(- \setminus \Delta C)] \otimes_{\Delta C} B_*^{sym}A$ computes $HC_*(A)$.
\end{rmk}
\begin{rmk}\label{rmk.uniqueChain}
Observe that every element of $L_n \otimes_{\Delta S} B^{sym}_*A$ is
equivalent to one in which the first morphism of the $L_n$ factor is an
identity:
\[
[p]\stackrel{\alpha}{\to}
[q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\, (y_0 \otimes \ldots \otimes y_p)
\]
\[
\approx [q_0]\stackrel{\mathrm{id}_{[q_0]}}{\longrightarrow}[q_0]
\stackrel{\beta_1}{\to}[q_1] \stackrel{\beta_2}{\to}\ldots
\stackrel{\beta_n}{\to}[q_n] \,\otimes\,\alpha_*(y_0, \ldots, y_p)
\]
Thus, we may consider $L_n \otimes_{\Delta S} B_*^{sym}A$ to be the
\mbox{$k$-module} generated by the elements
\[
\left\{ [q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n]
\,\otimes\, (y_0 \otimes \ldots \otimes y_{q_0}) \right\},
\]
where the tensor product is now over $k$. The face maps $d_i$ are defined on
generators by: $d_0$ omits $\beta_1$ and replaces $(y_0 \otimes \ldots
\otimes y_{q_0})$ by $\beta_1(y_0, \ldots, y_{q_0})$; for $0 < i < n$, $d_i$
composes $\beta_{i+1}$ with $\beta_i$; and $d_n$ omits $\beta_n$.
\end{rmk}
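These face maps, together with the tensor notation, give a small model that can be checked mechanically. The sketch below (our own encoding, with strings standing in for algebra elements as before) verifies the simplicial identity $d_0 d_1 = d_0 d_0$ on a sample generator; it holds precisely because evaluation respects composition:

```python
def compose(alpha, beta):
    """alpha o beta in tensor notation: substitute beta[j] for x_j."""
    return [[v for j in mono for v in beta[j]] for mono in alpha]

def evaluate(beta, a):
    """Apply beta : [p] -> [q] (tensor notation) to a tuple of strings,
    which stand in for elements of A; '' is the unit."""
    return tuple(''.join(a[j] for j in mono) for mono in beta)

def face(i, chain):
    """d_i on a generator ([beta_1, ..., beta_n], y_0 (x) ... (x) y_{q_0})."""
    betas, y = chain
    n = len(betas)
    if i == 0:                 # omit beta_1, push y forward along it
        return (betas[1:], evaluate(betas[0], y))
    if i == n:                 # omit beta_n
        return (betas[:-1], y)
    # 0 < i < n: replace beta_i, beta_{i+1} by their composite
    return (betas[:i - 1] + [compose(betas[i], betas[i - 1])] + betas[i + 1:], y)

beta1 = [[1, 0], [2]]          # [2] -> [1]
beta2 = [[0], [], [1]]         # [1] -> [2]
c = ([beta1, beta2], ('a', 'b', 'c'))
# simplicial identity d_0 d_1 = d_0 d_0: functoriality of evaluation
assert face(0, face(1, c)) == face(0, face(0, c))
```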
We now have enough tools to compute $HS_*(k)$. First, we need to show:
\begin{lemma}\label{lem.DeltaScontractible}
$N(\Delta S)$ is a contractible complex.
\end{lemma}
\begin{proof}
Define a functor $\mathscr{F} : \Delta S \to \Delta S$ by
\[
\mathscr{F} : [n] \mapsto [0] \odot [n],
\]
\[
\mathscr{F} : f \mapsto \mathrm{id}_{[0]} \odot f,
\]
using the monoid multiplication $\odot$ defined above.
There is a natural transformation $\mathrm{id}_{\Delta S} \to \mathscr{F}$
given by the following commutative diagram for each $f : [m] \to [n]$:
\[
\begin{diagram}
\node{ [m] }
\arrow{e,t}{f}
\arrow{s,l}{\delta_0}
\node{ [n] }
\arrow{s,r}{\delta_0}\\
\node{ [m+1] }
\arrow{e,t}{\mathrm{id} \odot f}
\node{ [n+1] }
\end{diagram}
\]
Here, $\delta^{(k)}_j : [k-1] \to [k]$ is the $\Delta$ morphism that misses
the point $j \in [k]$.
Consider the constant functor $\Delta S \stackrel{[0]}{\to} \Delta S$ that
sends all objects to $[0]$ and all morphisms to $\mathrm{id}_{[0]}$. There is
a natural transformation $[0] \to \mathscr{F}$ given by the following
commutative diagram for each $f : [m] \to [n]$.
\[
\begin{diagram}
\node{ [0] }
\arrow{e,t}{\mathrm{id}}
\arrow{s,l}{0_0}
\node{ [0] }
\arrow{s,r}{0_0}\\
\node{ [m+1] }
\arrow{e,t}{\mathrm{id} \odot f}
\node{ [n+1] }
\end{diagram}
\]
Here, $0^{(k)}_j : [0] \to [k]$ is the morphism that sends the point $0$ to
$j \in [k]$.
A natural transformation between functors induces a homotopy between the maps
they induce on nerves (see~\cite{Se} or Prop.~1.2 of~\cite{Q}), so in
particular, the identity map on $N(\Delta S)$ is homotopic to the constant map
that sends $N(\Delta S)$ to the nerve of the trivial category.
Thus, $N(\Delta S)$ is contractible.
\end{proof}
\begin{cor}\label{cor.HS_of_k}
The symmetric homology of the ground ring $k$ is isomorphic to $k$,
concentrated in degree $0$.
\end{cor}
\begin{proof}
$HS_*(k)$ is the homology of the chain complex generated (freely) over $k$ by
the chains
\[
\left\{ [q_0]\stackrel{\beta_1}{\to}[q_1]
\stackrel{\beta_2}{\to}\ldots\stackrel{\beta_n}{\to}[q_n] \,\otimes\,
(1 \otimes \ldots \otimes 1) \right\},
\]
where $\beta_i \in \mathrm{Mor}_{\Delta S}\left( [q_{i-1}], [q_i] \right)$.
Each chain $[q_0] \to [q_1] \to \ldots \to [q_n]\,\otimes\, (1 \otimes \ldots
\otimes 1)$ may be identified with the chain $[q_0] \to [q_1] \to \ldots \to
[q_n]$ of $N(\Delta S)$, and clearly this defines a chain isomorphism to
$N(\Delta S)$. The result now follows from Lemma~\ref{lem.DeltaScontractible}.
\end{proof}
\section{Symmetric Homology with Coefficients}\label{sec.coeff}
\subsection{Definitions and Conventions}
Following the conventions for Hochschild and cyclic homology in Loday~\cite{L},
when we need to indicate explicitly the ground ring $k$ over which we compute
symmetric homology of $A$, we shall use the notation: $HS_*(A\;|\;k)$.
Furthermore, since the notion ``$\Delta S$-module'' does not explicitly
state the ground ring, we shall use the bulkier ``$\Delta S$-module over $k$''
when the context is ambiguous.
\begin{definition}\label{def.HS-with-coeff}
If $\mathscr{C}_*$ is a complex that computes symmetric homology of the
algebra $A$ over $k$, and $M$ is a $k$-module, then the symmetric homology of
$A$ over $k$ {\it with coefficients in $M$} is defined by $HS_*(A;M) :=
H_*( \mathscr{C}_* \otimes_k M)$.
\end{definition}
\begin{rmk}
Definition~\ref{def.HS-with-coeff} is independent of the particular choice of
complex $\mathscr{C}_*$, so we shall generally use the complex $\mathscr{Y}_*A
= k[ N(- \setminus \Delta S) ] \otimes_{\Delta S} B_*^{sym}A$ of
Cor.~\ref{cor.SymHomComplex} in this section.
\end{rmk}
\begin{prop}\label{prop.M-flat}
If $M$ is flat over $k$, then $HS_*(A ; M) \cong HS_*(A) \otimes_k M$.
\end{prop}
\begin{proof}
Since $M$ is $k$-flat, the functor $ - \otimes_k M$ is exact.
\end{proof}
\begin{cor}\label{cor.HS-Q-Z_p}
For any $\mathbb{Z}$-algebra $A$, $HS_n(A ; \mathbb{Q}) \cong HS_n(A) \otimes_{\mathbb{Z}}
\mathbb{Q}$.
\end{cor}
\subsection{Universal Coefficient Theorem for Symmetric Homology}
Our goal will be the proof of the following:
\begin{theorem}\label{thm.univ.coeff.}
If $A$ is a flat $k$-algebra, and $B$ is a commutative $k$-algebra,
then there is a spectral sequence with
\[
E_2^{p,q} := \mathrm{Tor}^k_p\left( HS_q( A \;|\; k) , B \right)
\Rightarrow HS_*( A ; B).
\]
\end{theorem}
To that end, we begin with some preliminary results.
\begin{lemma}\label{lem.YA-flat}
\hfill
{\it i.} If $A$ is a flat $k$-algebra, then $\mathscr{Y}_nA$ is
flat for each $n$.
{\it ii.} If $A$ is a projective $k$-algebra, then $\mathscr{Y}_nA$
is projective for each $n$.
\end{lemma}
\begin{proof}
By Remark~\ref{rmk.uniqueChain} we may identify:
\[
\mathscr{Y}_nA \cong \bigoplus_{m \geq 0}\left(
k[ N([m] \setminus\Delta S)_{n-1} ] \otimes_k A^{\otimes(m+1)}\right).
\]
Note, $k[ N([m] \setminus\Delta S)_{n-1} ]$ is free, so if $A$ is flat
(resp. projective), then $\mathscr{Y}_nA$ is also flat (resp. projective).
\end{proof}
\begin{prop}\label{prop.HS-A-B}
If $B$ is a commutative $k$-algebra, then there is an
isomorphism $HS_*(A \otimes_k B \;|\; B) \cong HS_*(A ; B)$.
\end{prop}
\begin{proof}
Here, we are viewing $A \otimes_k B$ as a $B$-algebra via the inclusion
$B \cong 1_A \otimes_k B \hookrightarrow A \otimes_k B$. Observe, there
is an isomorphism
\[
(A \otimes_k B) \otimes_B (A \otimes_k B) \stackrel{\cong}{\longrightarrow}
A \otimes_k A \otimes_k (B \otimes_B B)\stackrel{\cong}{\longrightarrow}
(A \otimes_k A) \otimes_k B.
\]
Iterating this for $n$-fold tensors of $A \otimes_k B$,
\[
\underbrace{(A \otimes_k B) \otimes_B \ldots \otimes_B (A \otimes_k B)}_{n}
\cong \underbrace{A \otimes_k \ldots \otimes_k A}_n \otimes_k B.
\]
This shows that the $\Delta S$-module over $B$, $B_*^{sym}(A \otimes_k B)$,
is isomorphic as a $k$-module to \mbox{$(B_*^{sym}A) \otimes_k B$}, the
corresponding $\Delta S$-module over $k$.
The proposition now follows essentially by definition. Let $L_*$ be the
resolution of $\underline{k}$ by projective $\Delta S^\mathrm{op}$-modules
(over $k$) given by $L_* = k[N(- \setminus \Delta S)]$. Taking tensor
products (over $k$) with the algebra $B$, we obtain a projective resolution
of the trivial $\Delta S^\mathrm{op}$-module over $B$. Thus,
\begin{equation}\label{eq.H_AotimesB_B}
HS_*(A \otimes_k B \;|\; B) = H_*\left( (L_* \otimes_k B)
\otimes_{B[\mathrm{Mor}\Delta S]} B_*^{sym}(A \otimes_k B);\;B \right)
\end{equation}
On the chain level, there are isomorphisms:
\[
(L_* \otimes_k B) \otimes_{B[\mathrm{Mor}\Delta S]} B_*^{sym}(A \otimes_k B)
\cong (L_* \otimes_k B)\otimes_{B[\mathrm{Mor}\Delta S]}
(B_*^{sym}A \otimes_k B)
\]
\begin{equation}\label{eq.H_AotimesB_B-red}
\cong (L_* \otimes_{k[\mathrm{Mor}\Delta S]} B_*^{sym}A) \otimes_k B
\end{equation}
The complex~(\ref{eq.H_AotimesB_B-red}) computes $HS_*( A ; B)$ by definition.
\end{proof}
\begin{rmk}
Since $HS_*(A \;|\; k) = HS_*(A \otimes_k k\;|\; k)$, Prop.~\ref{prop.HS-A-B}
allows us to identify $HS_*(A \;|\; k)$ with $HS_*(A ; k)$.
\end{rmk}
The construction $HS_*(A ; -)$ is a covariant functor, as is immediately seen on
the chain level. In addition, $HS_*( - ; M)$ is a covariant functor for any
$k$-module, $M$.
\begin{prop}\label{prop.les-for-HS}
Suppose $0 \to X \to Y \to Z \to 0$ is a short exact sequence of left
$k$-modules, and suppose $A$ is a flat $k$-algebra. Then there is an induced
long exact sequence in symmetric homology:
\begin{equation}\label{eq.les_HS}
\ldots \to HS_n(A ; X) \to HS_n(A ; Y) \to HS_n(A ; Z) \to HS_{n-1}(A ; X)
\to \ldots
\end{equation}
Moreover, a map of short exact sequences, $(\alpha, \beta, \gamma)$, as in the
diagram below, induces a map of the corresponding long exact sequences
(commutative ladder)
\begin{equation}\label{eq.ses-morphism}
\begin{diagram}
\node{0}
\arrow{e}
\node{X}
\arrow{e}
\arrow{s,l}{\alpha}
\node{Y}
\arrow{e}
\arrow{s,l}{\beta}
\node{Z}
\arrow{e}
\arrow{s,l}{\gamma}
\node{0}
\\
\node{0}
\arrow{e}
\node{X'}
\arrow{e}
\node{Y'}
\arrow{e}
\node{Z'}
\arrow{e}
\node{0}
\end{diagram}
\end{equation}
\end{prop}
\begin{proof}
By Lemma~\ref{lem.YA-flat}, the hypothesis $A$ is flat implies that the
following is an exact sequence of chain complexes:
\[
0 \to \mathscr{Y}_*A \otimes_k X \to \mathscr{Y}_*A \otimes_k Y \to
\mathscr{Y}_*A \otimes_k Z \to 0.
\]
This induces a long exact sequence in homology
\[
\ldots \to H_n(\mathscr{Y}_*A \otimes_k X) \to H_n(\mathscr{Y}_*A
\otimes_k Y) \to H_n(\mathscr{Y}_*A \otimes_k Z) \to H_{n-1}(\mathscr{Y}_*A
\otimes_k X) \to \ldots
\]
as required.
Now let $(\alpha, \beta, \gamma)$ be a morphism of short exact sequences, as
in diagram~(\ref{eq.ses-morphism}). Consider the diagram,
\begin{equation}\label{eq.les-morphism}
\begin{diagram}
\node{ \vdots }
\arrow{s}
\node{ \vdots }
\arrow{s}
\\
\node{ HS_n(A ; X) }
\arrow{s}
\arrow{e,t}{ \alpha_* }
\node{ HS_n(A ; X') }
\arrow{s}
\\
\node{ HS_n(A ; Y) }
\arrow{s}
\arrow{e,t}{ \beta_* }
\node{ HS_n(A ; Y') }
\arrow{s}
\\
\node{ HS_n(A ; Z) }
\arrow{s,l}{\partial}
\arrow{e,t}{ \gamma_* }
\node{ HS_n(A ; Z') }
\arrow{s,l}{\partial'}
\\
\node{ HS_{n-1}(A ; X) }
\arrow{s}
\arrow{e,t}{ \alpha_* }
\node{ HS_{n-1}(A ; X') }
\arrow{s}
\\
\node{ \vdots }
\node{ \vdots }
\end{diagram}
\end{equation}
Since $HS_n(A ; -)$ is functorial, the upper two squares of the diagram
commute. Commutativity of the lower square follows from the naturality of the
connecting homomorphism in the snake lemma.
\end{proof}
\begin{rmk}\label{rmk.les_functors}
Any family of additive covariant functors $\{T_n\}$ between two abelian
categories is said to be a {\it long exact sequence of functors} if it takes
short exact sequences to long exact sequences such as~(\ref{eq.les_HS}) and
morphisms of short exact sequences to commutative ladders of long exact
sequences such as~(\ref{eq.les-morphism}). See~\cite{D}, Definition~1.1 and
also~\cite{Mc}, section~12.1. The content of Prop.~\ref{prop.les-for-HS} is
that for $A$ flat, $\{HS_n(A ; - )\}_{n \in \mathbb{Z}}$ is a long exact sequence of
functors.
\end{rmk}
We may now prove Theorem~\ref{thm.univ.coeff.}.
\begin{proof}
Let $T_q : k\textrm{-$\mathbf{Mod}$} \to k\textrm{-$\mathbf{Mod}$}$ be the
functor $HS_q( A ; - )$. Observe, since $A$ is flat, $\{T_q\}$ is a long
exact sequence of additive covariant functors (Rmk.~\ref{rmk.les_functors} and
Prop.~\ref{prop.les-for-HS}); $T_q = 0$ for sufficiently small $q$ (indeed,
for $q < 0$); and $T_q$ commutes with arbitrary direct sums. Hence, by the
Universal Coefficient Theorem of Dold (2.12 of~\cite{D}; see also
McCleary~\cite{Mc}, Thm.~12.11), there is a spectral sequence with
$E_2^{p,q} := \mathrm{Tor}^k_p\left( T_q(k) , B \right) \Rightarrow T_*(B)$.
\end{proof}
As an immediate consequence, we have the following result.
\begin{cor}\label{cor.iso_in_HS_with_coeff.}
If $f : A \to A'$ is a $k$-algebra map between flat algebras which induces
an isomorphism in symmetric homology, $HS_*(A) \stackrel{\cong}{\to}
HS_*(A')$, then for a commutative $k$-algebra $B$, the map $f \otimes
\mathrm{id}_B$ induces an isomorphism $HS_*(A;B) \stackrel{\cong}{\to}
HS_*(A' ; B)$.
\end{cor}
Under stronger hypotheses, the universal coefficient spectral sequence reduces
to short exact sequences. Recall some notions of ring theory (cf. the article
Homological Algebra: Categories of Modules (200:K), Vol. 1, pp. 755-757
of~\cite{I}). A commutative ring $k$ is said to have {\it global dimension
$\leq n$} if for all \mbox{$k$-modules} $X$ and $Y$, $\mathrm{Ext}_k^m(X,Y) = 0$
for $m > n$. $k$ is said to have {\it weak global dimension $\leq n$} if for
all \mbox{$k$-modules} $X$ and $Y$, $\mathrm{Tor}_m^k(X, Y) = 0$ for $m>n$.
Note, the weak global dimension of a ring is less than or equal to its global
dimension, with equality holding for Noetherian rings but not in general. A
ring is said to be {\it hereditary} if all submodules of projective modules are
projective, and this is equivalent to the global dimension of the ring being no
greater than $1$.
\begin{theorem}\label{thm.univ.coeff.ses}
If $k$ has weak global dimension $\leq 1$, then the spectral sequence of
Thm.~\ref{thm.univ.coeff.} reduces to short exact sequences,
\begin{equation}\label{eq.UCTses}
0 \longrightarrow HS_n(A\;|\;k) \otimes_k B \longrightarrow
HS_n(A ; B) \longrightarrow \mathrm{Tor}^k_1( HS_{n-1}(A \;|\;k), B)
\longrightarrow 0.
\end{equation}
Moreover, if $k$ is hereditary and $A$ is projective over $k$, then
these sequences split (unnaturally).
\end{theorem}
\begin{proof}
Assume first that $k$ has weak global dimension $\leq 1$. So
$\mathrm{Tor}_p^k(T_q(k), B) = 0$ for all $p > 1$. Following Dold's
argument (Corollary~2.13 of~\cite{D}), we obtain the required exact sequences,
\[
0 \longrightarrow T_n(k)\otimes_k B \longrightarrow T_n(B) \longrightarrow
\mathrm{Tor}^k_1( T_{n-1}(k), B ) \longrightarrow 0.
\]
Assume further that $k$ is hereditary and $A$ is projective. Then by
Lemma~\ref{lem.YA-flat}, $\mathscr{Y}_nA$ is projective for each $n$.
Theorem~8.22 of Rotman~\cite{R3} then gives us the desired splitting.
\end{proof}
\begin{rmk}
The proof given above also proves UCT for cyclic homology. A partial result
along these lines exists in Loday (\cite{L}, 2.1.16). There, he shows
$HC_*(A\;|\; k) \otimes_k K \cong HC_*(A \;|\; K)$ and \mbox{$HH_*(A\;|\; k)
\otimes_k K \cong$} \mbox{$HH_*(A \;|\; K)$} in the case that $K$ is a
localization of $k$, and $A$ is a $K$-module, flat over $k$. We are not aware
of a statement of UCT for cyclic or Hochschild homology in its full generality
in the literature.
\end{rmk}
\subsection{Integral Symmetric Homology and a Bockstein Spectral Sequence}
We shall obtain a converse to Cor.~\ref{cor.iso_in_HS_with_coeff.} in the case
$k = \mathbb{Z}$.
\begin{theorem}\label{thm.conv.HS_iso}
Let $f : A \to A'$ be an algebra map between torsion-free $\mathbb{Z}$-algebras.
Suppose for $B = \mathbb{Q}$ and $B = \mathbb{Z}/p\mathbb{Z}$ for any prime $p$, the map
$f \otimes \mathrm{id}_B$ induces an isomorphism $HS_*(A ; B) \to
HS_*(A' ; B)$. Then $f$ also induces an isomorphism $HS_*(A)
\stackrel{\cong}{\to} HS_*(A')$.
\end{theorem}
First, note that if $A$ is flat over $k$, Prop.~\ref{prop.les-for-HS} allows one
to construct the Bockstein homomorphisms $\beta_n : HS_n(A ; Z) \to
HS_{n-1}(A ; X)$ associated to a short exact sequence of $k$-modules, $0 \to
X \to Y \to Z \to 0$. These Bocksteins are natural in the following sense:
\begin{lemma}\label{lem.bockstein}
Suppose $f : A \to A'$ is a map of flat $k$-algebras, and $0 \to X \to Y \to
Z \to 0$ is a short exact sequence of $k$-modules. Then the following diagram
is commutative for each $n$:
\[
\begin{diagram}
\node{HS_n(A; Z)}
\arrow{e,t}{\beta}
\arrow{s,l}{f_*}
\node{HS_{n-1}(A; X)}
\arrow{s,l}{f_*}
\\
\node{HS_n(A'; Z)}
\arrow{e,t}{\beta'}
\node{HS_{n-1}(A'; X)}
\end{diagram}
\]
Moreover, if the induced map $f_*: HS_*(A;W) \to HS_*(A';W)$ is an isomorphism
for any two of $W=X$, $W=Y$, $W=Z$, then it is an isomorphism for the third.
\end{lemma}
\begin{proof}
$A$ and $A'$ flat imply both sequences of complexes are exact:
\[
0 \to \mathscr{Y}_*A \otimes_k X \to \mathscr{Y}_*A \otimes_k Y \to
\mathscr{Y}_*A \otimes_k Z \to 0,
\]
\[
0 \to \mathscr{Y}_*A' \otimes_k X \to \mathscr{Y}_*A' \otimes_k Y \to
\mathscr{Y}_*A' \otimes_k Z \to 0.
\]
The map $\mathscr{Y}_*A \to \mathscr{Y}_*A'$ induces a map of short exact
sequences, hence induces a commutative ladder of long exact sequences of
homology groups. In particular, the squares involving the boundary maps
(Bocksteins) must commute.
Now, assuming further that $f_*$ induces isomorphisms $HS_*(A;W) \to
HS_*(A';W)$ for any two of $W = X$, $W = Y$, $W = Z$, let $V$ be the third
module. The $5$-lemma implies isomorphisms $HS_n(A; V)
\stackrel{\cong}{\longrightarrow} HS_n(A'; V)$ for each $n$.
\end{proof}
We shall now proceed with the proof of Thm.~\ref{thm.conv.HS_iso}. All tensor
products will be over $\mathbb{Z}$ in what follows.
\begin{proof}
Let $A$ and $A'$ be torsion-free $\mathbb{Z}$-algebras. Over $\mathbb{Z}$, torsion-free
implies flat. Let $f : A \to A'$ be an algebra map inducing isomorphism in
symmetric homology with coefficients in $\mathbb{Q}$ and also in $\mathbb{Z}/p\mathbb{Z}$
for any prime $p$. For $m \geq 2$, there is a short exact sequence,
\[
0 \longrightarrow \mathbb{Z}/p^{m-1}\mathbb{Z} \stackrel{p}{\longrightarrow} \mathbb{Z}/p^m\mathbb{Z}
\longrightarrow \mathbb{Z}/p\mathbb{Z} \longrightarrow 0.
\]
Consider first the case $m = 2$. Since $HS_*(A ; \mathbb{Z}/p\mathbb{Z}) \to HS_*(A' ;
\mathbb{Z}/p\mathbb{Z})$ is an isomorphism, Lemma~\ref{lem.bockstein} implies the induced map
is an isomorphism for the middle term:
\begin{equation}\label{eq.HS_A_Zp2}
f_* : HS_*(A; \mathbb{Z}/p^2\mathbb{Z}) \stackrel{\cong}{\longrightarrow} HS_*(A'; \mathbb{Z}/p^2\mathbb{Z})
\end{equation}
(Note, all maps induced by $f$ on symmetric homology will be denoted by
$f_*$.)
For the inductive step, fix $m > 2$ and suppose $f$ induces an isomorphism in
symmetric homology, $f_* : HS_*(A; \mathbb{Z}/p^{m-1}\mathbb{Z})
\stackrel{\cong}{\longrightarrow} HS_*(A'; \mathbb{Z}/p^{m-1}\mathbb{Z})$. Again,
Lemma~\ref{lem.bockstein} implies the induced map is an isomorphism on the
middle term:
\begin{equation}\label{eq.HS_A_Zpm}
f_* : HS_*(A; \mathbb{Z}/p^{m}\mathbb{Z}) \stackrel{\cong}{\longrightarrow} HS_*(A';
\mathbb{Z}/p^{m}\mathbb{Z})
\end{equation}
Denote $\displaystyle{\mathbb{Z} / p^\infty \mathbb{Z} := \lim_{\longrightarrow} \mathbb{Z}/ p^m\mathbb{Z}}$. Note,
this is a {\it direct limit} in the sense that it is a colimit over a directed
system. The direct limit functor is exact (Prop.~5.3 of~\cite{S2}), so the
maps $HS_n(A ; \mathbb{Z} / p^\infty \mathbb{Z}) \to HS_n(A' ; \mathbb{Z} / p^\infty \mathbb{Z})$ induced
by $f$ are isomorphisms, given by the chain of isomorphisms below:
\[
HS_n(A; \mathbb{Z} / p^\infty \mathbb{Z})
\cong
H_n(\lim_{\longrightarrow} \mathscr{Y}_*A \otimes \mathbb{Z}/p^m\mathbb{Z})
\cong
\lim_{\longrightarrow}H_n(\mathscr{Y}_*A \otimes \mathbb{Z}/p^m\mathbb{Z})
\stackrel{f_*}{\longrightarrow}
\]
\[
\qquad\qquad\qquad
\lim_{\longrightarrow}H_n(\mathscr{Y}_*A' \otimes \mathbb{Z}/p^m\mathbb{Z})
\cong
H_n(\lim_{\longrightarrow} \mathscr{Y}_*A' \otimes \mathbb{Z}/p^m\mathbb{Z})
\cong
HS_n(A'; \mathbb{Z} / p^\infty \mathbb{Z})
\]
(Note, $f_*$ here stands for $\displaystyle{\lim_{\longrightarrow}H_n(\mathscr{Y}_*f
\otimes \mathrm{id})}$.)
Finally, consider the short exact sequence of abelian groups,
\[
0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q} \longrightarrow
\bigoplus_{\textrm{$p$ prime}} \mathbb{Z} / p^\infty \mathbb{Z} \longrightarrow 0.
\]
The isomorphism $f_* : HS_*(A; \mathbb{Z} / p^\infty \mathbb{Z}) \to HS_*(A'; \mathbb{Z} / p^\infty
\mathbb{Z})$ passes to direct sums, giving isomorphisms for each $n$,
\[
f_* : HS_{n}\left(A; \bigoplus_p \mathbb{Z}/p^\infty \mathbb{Z}\right) \stackrel{\cong}
{\longrightarrow} HS_{n}\left(A'; \bigoplus_p \mathbb{Z}/p^\infty \mathbb{Z}\right).
\]
Together with the assumption that $HS_*(A; \mathbb{Q}) \to HS_*(A';
\mathbb{Q})$ is an isomorphism, another appeal to Lemma~\ref{lem.bockstein}
gives the required isomorphism in symmetric homology, $f_* : HS_n(A)
\stackrel{\cong}{\longrightarrow} HS_n(A')$.
\end{proof}
\begin{rmk}
Theorem~\ref{thm.conv.HS_iso} may be useful for determining integral symmetric
homology, since rational computations are generally simpler, and computations
mod $p$ may be made easier due to the presence of additional structure.
\end{rmk}
Finally, we state a result along the lines of McCleary~\cite{Mc}, Thm.~10.3.
(This is a version of the Bockstein spectral sequence for symmetric homology.)
Denote the torsion submodule of the graded module $H_*$ by
$\tau\left(H_*\right)$.
\begin{theorem}\label{thm.bockstein_spec_seq}
Suppose $A$ is free of finite rank over $\mathbb{Z}$. Then there is a singly-graded
spectral sequence with
\[
E_*^1 := HS_*( A ; \mathbb{Z}/p\mathbb{Z}) \Rightarrow \left( HS_*(A)/\tau\left(HS_*(A)\right) \right)
\otimes \mathbb{Z}/p\mathbb{Z},
\]
with differential map $d^1 = \beta$, the standard Bockstein map associated to
$0 \to \mathbb{Z} / p\mathbb{Z} \to \mathbb{Z} / p^2\mathbb{Z} \to \mathbb{Z} / p\mathbb{Z} \to 0$. Moreover, the convergence
is strong.
\end{theorem}
The proof McCleary gives on p.~459 carries over to our case intact. All that is
required for this proof is that each $H_n(\mathscr{Y}_*A)$ be a
finitely-generated abelian group. The hypothesis that $A$ is
finitely-generated, coupled with Cor.~\ref{cor.fin-gen},
guarantees this. Note, over $\mathbb{Z}$, {\it free of
finite rank} is equivalent to {\it flat and finitely-generated}.
\section{Tensor Algebras}\label{sec.tensoralg}
For a general $k$-algebra $A$, the standard resolution is often too difficult
to work with. In order to develop workable methods to compute symmetric
homology, it will become necessary to find smaller resolutions of
$\underline{k}$. First, we develop the necessary results for the special case
of tensor algebras. Indeed, tensor algebra arguments are also key in the proof
of Fiedorowicz's Theorem (Thm.~1({\it i}) of~\cite{F}) about the symmetric
homology of group algebras.
Let $T : k$-\textbf{Alg} $\to k$-\textbf{Alg} be the functor sending an algebra
$A$ to the tensor algebra generated by $A$, $TA := k \oplus A \oplus
A^{\otimes 2} \oplus A^{\otimes 3} \oplus \ldots$. There is an algebra
homomorphism $\theta : TA \to A$, defined by multiplying tensor factors,
$\theta( a_0 \otimes a_1 \otimes \ldots \otimes a_n ) := a_0a_1 \cdots a_n$.
In fact, $\theta$ defines a natural transformation $T \to \mathrm{id}$. We
shall also make use of a $k$-module homomorphism $h$, sending the algebra $A$
identically onto the summand $A$ of $TA$. This map is a natural transformation
from the forgetful functor $U : k$-\textbf{Alg} $\to k$-\textbf{Mod} to the
functor $UT$. The resolution of $A$ by tensor algebras may be regarded as a
$k$-complex, where $T^nA$ is viewed in degree $n-1$.
\begin{equation}\label{eq.res_tensor_alg}
0 \gets A \stackrel{\theta_A}{\gets} TA \stackrel{\theta_1}{\gets} T^2A
\stackrel{\theta_2}{\gets} \ldots
\end{equation}
The maps $\theta_n$ for $n \geq 1$ are defined in terms of face maps,
$\theta_n := \sum_{i = 0}^{n} (-1)^i T^{n-i}\theta_{T^iA}$. Note that
naturality of $\theta$ implies that each $\theta_n$ is a natural transformation
$T^{n+1} \to T^n$.
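In low degrees, for example, the formula gives
\[
\theta_1 = T\theta_A - \theta_{TA}, \qquad
\theta_2 = T^2\theta_A - T\theta_{TA} + \theta_{T^2A},
\]
and one checks directly that~(\ref{eq.res_tensor_alg}) is a complex:
$\theta_A \theta_1 = \theta_A \circ T\theta_A - \theta_A \circ \theta_{TA} = 0$,
since naturality of $\theta$, applied to the map $\theta_A : TA \to A$, gives
$\theta_A \circ T\theta_A = \theta_A \circ \theta_{TA}$.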
\begin{rmk}
Note that the complex~(\ref{eq.res_tensor_alg}) is nothing more than the
complex associated to May's 2-sided bar construction $B_*(T, T, A)$ (See
chapter 9 of~\cite{M}). If we denote by $A_0$ the chain complex consisting of
$A$ in degree $0$ and $0$ in higher degrees, then there is a homotopy $h_n :
B_n(T, T, A) \to B_{n+1}(T, T, A)$ that establishes a strong deformation
retract $B_*(T, T, A) \to A_0$. In fact, the homotopy maps are given by
$h_n := h_{T^{n+1}A}$, where $h$ is the natural transformation $U \to UT$
discussed above.
\end{rmk}
For each $q \geq 0$, if we apply the functor $\mathscr{Y}_q$ to the
complex~(\ref{eq.res_tensor_alg}), we obtain the sequence below:
\begin{equation}\label{eq.ex-seq-TA}
\begin{diagram}
\node{0}
\node{\mathscr{Y}_qA}
\arrow{w}
\node{\mathscr{Y}_qTA}
\arrow{w,t}{\mathscr{Y}_q\theta_{A}}
\node{\mathscr{Y}_qT^2A}
\arrow{w,t}{\mathscr{Y}_q\theta_1}
\node{\mathscr{Y}_qT^3A}
\arrow{w,t}{\mathscr{Y}_q\theta_2}
\node{\cdots}
\arrow{w}
\end{diagram}
\end{equation}
We claim this sequence is exact. Indeed, by Remark~\ref{rmk.uniqueChain},
$\mathscr{Y}_qA \cong \bigoplus_{n \geq 0} k\left[ N([n] \setminus
\Delta S)_{q-1} \right] \otimes_k A^{\otimes(n+1)}$. For each $n$, the
$k$-module map $h$ induces a homotopy $h^{\otimes n}$ on each complex,
\begin{equation}\label{eq.ex-seq-TA-tensored}
\begin{diagram}
\node{0}
\node{A^{\otimes n}}
\arrow{w}
\node{(TA)^{\otimes n}}
\arrow{w,t}{\theta_{A}^{\otimes n}}
\node{(T^2A)^{\otimes n}}
\arrow{w,t}{\theta_1^{\otimes n}}
\node{\cdots}
\arrow{w}
\end{diagram}
\end{equation}
Each $k\left[ N([n] \setminus \Delta S)_{q-1} \right]$ is free as a
$k$-module, so tensoring with it preserves exactness.
Denote by $d_i(A)$ (or $d_i$, when the context is clear) the $i^{th}$
differential map of $\mathscr{Y}_*A$.
Consider the double complex defined by
$\mathscr{T}_{p,q} := \mathscr{Y}_q T^{p+1}A$, with vertical
differential $(-1)^pd_q$.
\begin{equation}\label{eq.scriptT}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}_2TA}
\arrow{s,l}{d_2}
\node{\mathscr{Y}_2T^2A}
\arrow{w,t}{\mathscr{Y}_2\theta_1}
\arrow{s,l}{-d_2}
\node{\mathscr{Y}_2T^3A}
\arrow{w,t}{\mathscr{Y}_2\theta_2}
\arrow{s,l}{d_2}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_1TA}
\arrow{s,l}{d_1}
\node{\mathscr{Y}_1T^2A}
\arrow{w,t}{\mathscr{Y}_1\theta_1}
\arrow{s,l}{-d_1}
\node{\mathscr{Y}_1T^3A}
\arrow{w,t}{\mathscr{Y}_1\theta_2}
\arrow{s,l}{d_1}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_0TA}
\node{\mathscr{Y}_0T^2A}
\arrow{w,t}{\mathscr{Y}_0\theta_1}
\node{\mathscr{Y}_0T^3A}
\arrow{w,t}{\mathscr{Y}_0\theta_2}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
Consider a second double complex $\mathscr{A}_{*,*}$ consisting of the complex
$\mathscr{Y}_*A$ concentrated in the $0^{th}$ column.
\begin{equation}\label{eq.scriptA}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}_2A}
\arrow{s,l}{d_2}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_1A}
\arrow{s,l}{d_1}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}_0A}
\node{0}
\arrow{w}
\node{0}
\arrow{w}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
\begin{theorem}\label{thm.doublecomplexiso}
There is a map of double complexes $\Theta_{*,*} : \mathscr{T}_{*,*} \to
\mathscr{A}_{*,*}$ inducing an isomorphism in homology, $H_*\left(\mathrm{Tot}
(\mathscr{T})\right) \to H_*\left( \mathrm{Tot}(\mathscr{A})\right)$.
\end{theorem}
\begin{proof}
The map $\Theta_{*,*}$ is defined by $\Theta_{0,q} := \mathscr{Y}_q\theta_A$,
and $\Theta_{p,q} = 0$ for positive $p$. Functoriality of $\mathscr{Y}_*$
ensures that $\mathscr{Y}_*\theta_A$ is a chain map. The isomorphism follows
from the exactness of the sequence~(\ref{eq.ex-seq-TA}).
\end{proof}
\begin{rmk}\label{rmk.HS-A-bicomplex}
Observe that $H_*\left( \mathrm{Tot}(\mathscr{A});\, k\right) = H_*\left(
\mathscr{Y}_*A;\,k\right) = HS_*(A)$. This permits the computation of
symmetric homology of any given algebra $A$ in terms of tensor algebras:
\end{rmk}
\begin{cor}\label{cor.HS_A_via-tensoralgebras}
For an associative, unital $k$-algebra $A$, $HS_*(A) \cong H_*\left(
\mathrm{Tot}(\mathscr{T});\,k \right)$, where $\mathscr{T}_{*,*}$ is the
double complex $\{ \mathscr{Y}_q T^{p+1}A \}_{p,q \geq 0}$.
\end{cor}
The following lemma shows why it is advantageous to work with tensor algebras.
\begin{lemma}\label{lem.tensor_alg_constr}
For a unital, associative $k$-algebra $A$, there is an isomorphism of
$k$-complexes:
\begin{equation}\label{eq.YA-decomp}
\mathscr{Y}_*TA \cong \bigoplus_{n\geq -1} Y_n(A),
\end{equation}
where
\[
Y_n(A) = \left\{\begin{array}{ll}
k\left[N(\Delta S)\right], &n = -1\\
k\left[N([n] \setminus \Delta S)\right]
\otimes_{k\Sigma_{n+1}^\mathrm{op}} A^{\otimes(n+1)},
&n \geq 0
\end{array}\right.
\]
Moreover, the differential respects the direct sum decomposition.
\end{lemma}
\begin{proof}
Any generator of $\mathscr{Y}_*TA$ has the form
$[p] \stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes u$,
where
\[
u = \left(\bigotimes_{a\in A_0}a\right) \otimes
\left(\bigotimes_{a\in A_1}a\right) \otimes \ldots \otimes
\left(\bigotimes_{a\in A_p}a\right),
\]
and $A_0, A_1, \ldots, A_p$ are finite ordered lists of elements of $A$.
Indeed, each $A_j$ may be thought of as an element of the set product
$A^{m_j}$ for some $m_j$. If $A_j = \emptyset$, then $m_j = 0$. We use the
convention that an empty tensor product is equal to $1_k$, and say that the
corresponding tensor factor is {\it trivial}. Now, let $m = \left(\sum m_j
\right) - 1$. Let $\pi \;:\; A^{m_0} \times A^{m_1} \times \ldots \times
A^{m_p} \longrightarrow A^{m+1}$ be the evident isomorphism. Let $A_m =
\pi( A_0, A_1, \ldots, A_p )$.
{\bf Case 1.} If $u$ is non-trivial (\textit{i.e.}, $A_m \neq \emptyset$),
then construct the element
\[
u' = \bigotimes_{a \in A_m}a
\]
Next, construct a $\Delta$-morphism $\zeta_u : [m] \to [p]$ as follows:
For each $j$, $\zeta_u$ maps the points
\[
\sum_{i=0}^{j-1} m_i, \left(\sum_{i=0}^{j-1} m_i\right)+1, \ldots,
\left(\sum_{i=0}^{j} m_i\right) - 1 \;\mapsto\; j
\]
Observe, $(\zeta_u)_*(u') = u$. Under $\Delta S$-equivalence, $[p]
\stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes u \approx
[m] \stackrel{\alpha\zeta_u}{\to} [q_0] \to \ldots \to [q_n] \otimes u'$.
The choice of $u'$ and $\zeta_u$ is well-defined with respect to the
$\Delta S$-equivalence up to isomorphism of $[m]$ (an element of
$\Sigma_{m+1}^{\mathrm{op}}$). This shows that any such non-trivial element
in $\mathscr{Y}_*TA$ may be written uniquely as an element of $\displaystyle{k\left[
N([m] \setminus \Delta S)\right] \otimes_{k\Sigma_{m+1}^\mathrm{op}}
A^{\otimes(m+1)}}$.
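As an illustration, with hypothetical elements $a_1, a_2, a_3 \in A$, take
$p = 2$ and
\[
u = (a_1 \otimes a_2) \otimes 1_k \otimes a_3,
\]
so that $m_0 = 2$, $m_1 = 0$, $m_2 = 1$, and $m = 2$. Then $u' = a_1 \otimes
a_2 \otimes a_3$, and $\zeta_u : [2] \to [2]$ sends $0, 1 \mapsto 0$ and
$2 \mapsto 2$. Pushing forward, $(\zeta_u)_*$ multiplies $a_1$ and $a_2$ in
$TA$ into the single factor $a_1 \otimes a_2$, inserts $1_k$ in the empty
fiber over $1$, and so recovers $u$.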
{\bf Case 2.} If $u$ is trivial, then $u = 1_k^{\otimes(p + 1)}$, and $[p]
\stackrel{\alpha}{\to} [q_0] \to \ldots \to [q_n] \otimes u \approx [q_0]
\stackrel{\mathrm{id}}{\to} [q_0] \to \ldots \to [q_n] \otimes
1_k^{\otimes(q_0 + 1)}$. This element can be identified uniquely with
$[q_0] \to \ldots \to [q_n] \in k\left[N(\Delta S)\right]$.
Thus, the isomorphism~(\ref{eq.YA-decomp}) is proven. Note that for
$B^{sym}_*TA$, the total number of non-trivial tensor factors is preserved
under $\Delta S$ morphisms. This shows that the differential respects the
direct sum decomposition.
\end{proof}
\section{Symmetric Homology of Monoid Algebras}\label{sec.symhommonoid}
The symmetric homology for the case of a monoid algebra $A = k[M]$ has been
studied by Fiedorowicz in~\cite{F}. In the most general formulation (Prop. 1.3
of~\cite{F}), we have:
\begin{theorem}\label{thm.HS_monoidalgebra}
$HS_*(k[M]) \cong H_*\left(B(C_{\infty}, C_1, M); k\right)$, where $C_1$ is
the little $1$-cubes monad, $C_{\infty}$ is the little $\infty$-cubes monad,
and $B(C_{\infty}, C_1, M)$ is May's functorial 2-sided bar construction.
\end{theorem}
The proof, found in~\cite{F}, makes use of a variant of the symmetric bar construction:
\begin{definition}
Let $M$ be a monoid. Define a functor $B_*^{sym}M : \Delta S \to
\textbf{Sets}$ by:
\[
B_n^{sym}M := B_*^{sym}M[n] := M^{n+1}, \; \textrm{(set product)}
\]
\[
B_*^{sym}M(\alpha) : (m_0, \ldots, m_n) \mapsto \alpha(m_0, \ldots, m_n),
\qquad \textrm{for $\alpha \in \mathrm{Mor}\Delta S$}.
\]
where $\alpha : [n] \to [q]$ is represented in tensor notation, and evaluation
at $(m_0, \ldots, m_n)$ is as in Definition~\ref{def.symbar}.
\end{definition}
\begin{definition}
For a $\mathscr{C}$-set $X$ and $\mathscr{C}^\mathrm{op}$-set $Y$, define the
$\mathscr{C}$-equivariant set product:
\[
Y \times_\mathscr{C} X := \left(\coprod_{C \in \mathrm{Obj}\mathscr{C}}
Y(C) \times X(C)\right) / \approx,
\]
where the equivalence $\approx$ is generated by the following: For every
morphism $f \in \mathrm{Mor}_{\mathscr{C}}(C, D)$, and every $x \in X(C)$ and
$y \in Y(D)$, we have $\big(y, f_*(x)\big) \approx \big(f^*(y), x\big)$.
\end{definition}
Note that $B_*^{sym}M$ is a $\Delta S$-set, and also a simplicial set, via the
chain of functors $\Delta^{\mathrm{op}} \hookrightarrow \Delta C^{\mathrm{op}}
\stackrel{\cong}{\to} \Delta C \hookrightarrow \Delta S$. Let $\mathscr{X}_*
:= N(- \setminus \Delta S) \times_{\Delta S} B^{sym}_*M$.
\begin{prop}\label{prop.SymHomComplexMonoid}
$\mathscr{X}_*$ is a simplicial set whose homology computes $HS_*(k[M])$.
\end{prop}
\begin{proof}
Since $M$ is a $k$-basis for $k[M]$, $B^{sym}_*M$ acts as a $k$-basis for
$B^{sym}_*k[M]$. Then, observe that
\[
k[ N( - \setminus \Delta S) \times_{\Delta S} B_*^{sym}M ] = k[N(-\setminus
\Delta S)] \otimes_{\Delta S} B_*^{sym}k[M].
\]
\end{proof}
If $M = JX_+$ is a free monoid on a generating set $X$, then $k[M] = T(X)$, the
(free) tensor algebra over $k$ on the set $X$. In this case, we have the
following:
\begin{lemma}\label{lem.HS_tensoralg}
\[
HS_*\left(T(X)\right) \cong H_*\left( \coprod_{n\geq -1} \widetilde{X}_n;
k\right),
\]
where
\[
\widetilde{X}_n = \left\{\begin{array}{ll}
N(\Delta S), &n = -1\\
N\left([n] \setminus \Delta S\right)
\times_{\Sigma_{n+1}^\mathrm{op}} X^{n+1},
&n \geq 0
\end{array}\right.
\]
\end{lemma}
\begin{proof}
This is essentially a special case of Lemma~\ref{lem.tensor_alg_constr}
when the tensor algebra is free, generated by $X = \{ x_i \;|\; i \in
\mathscr{I} \}$.
\end{proof}
This proves Thm.~\ref{thm.HS_monoidalgebra} in the special case that $M$ is a
free monoid. Indeed, if $A = k[JX_+]$, we have
\[
HS_*(A) \cong H_*\left(N(\Delta S)\right) \oplus H_*\left( \coprod_{n \geq 0} N\left(
[n] \setminus \Delta S\right)\times_{\Sigma_{n+1}^\mathrm{op}} X^{n+1} \right)
\]
Now, the set $\{ N\left([n] \setminus \Delta S\right) \}_{n \geq 0}$ has the structure
of a $C_{\infty}$-operad, in the sense of May~\cite{M-Op}, since the augmented chain complex
$k \gets k\left[N\left([n] \setminus \Delta S\right)\right]$ is acyclic
(Prop.~\ref{prop.contractibility_under-category}), and the action of
the symmetric group on $N\left([n] \setminus \Delta S\right)$ is free. Let the associated
monad be denoted $\mathscr{N}^{\Delta S}$. Thus we have by definition,
\[
\mathscr{N}^{\Delta S}X = \coprod_{n \geq 0} N\left( [n] \setminus \Delta S\right)
\times_{\Sigma_{n+1}^\mathrm{op}} X^{n+1}
\]
We obtain a chain of equivalences,
\[
\mathscr{N}^{\Delta S}X \simeq B( \mathscr{N}^{\Delta S}, J, JX) \simeq
B( C_{\infty}, J, JX) \simeq B( C_{\infty}, C_1, JX ).
\]
The first equivalence arises by Prop.~9.9 of May~\cite{M}, and the last equivalence
arises from a weak homotopy equivalence mentioned in Cor.~6.2 of~\cite{M}: the James
construction $J$ is equivalent to the little $1$-cubes monad $C_1$.
If $M$ is a group, $\Gamma$, then Fiedorowicz~\cite{F} found:
\begin{theorem}\label{thm.HS_group}
$\displaystyle{HS_*(k[\Gamma]) \cong H_*\left(\Omega\Omega^{\infty}S^{\infty}(B\Gamma);
k\right)}$.
\end{theorem}
This final formulation shows in particular that $HS_*$ is a non-trivial theory.
While it is true that $H_*(\Omega^{\infty}S^{\infty}(X)) = H_*(QX)$ is well
understood, the same cannot be said of the homology of $\Omega\Omega^{\infty}
S^{\infty}X$. Indeed, May states that $H_*(QX)$ may be regarded as the free
allowable Hopf algebra with conjugation over the Dyer-Lashof algebra and dual of
the Steenrod algebra (See~\cite{CLM}, preface to chapter 1, and also
Lemma~4.10). Cohen and Peterson~\cite{CP} found the homology of $\Omega
\Omega^{\infty}S^{\infty}X$, where $X = S^0$, the zero-sphere, but there is
little hope of extending this result to arbitrary $X$ using the same methods.
We shall have more to say about $HS_1(k[\Gamma])$ in Section~\ref{sub.2-torsion}.
\section{Symmetric Homology Using $\Delta S_+$}\label{sec.deltas_plus}
In this section, we shall show that replacing $\Delta S$ by $\Delta S_+$ in an
appropriate way does not affect the computation of $HS_*$. First, we extend
the symmetric bar construction over $\Delta S_+$.
\begin{definition}\label{def.symbar_plus}
For an associative, unital algebra, $A$, over a commutative ground ring $k$,
define a functor $B_*^{sym_+}A : \Delta S_+ \to k$-\textbf{Mod} by:
\[
\left\{
\begin{array}{lll}
B_n^{sym_+}A &:=& B_*^{sym_+}A[n] := A^{\otimes (n+1)} \\
B_{-1}^{sym_+}A &:=& k,
\end{array}
\right.
\]
\[
\left\{
\begin{array}{ll}
B_*^{sym_+}A(\alpha) : (a_0 \otimes a_1 \otimes \ldots \otimes a_n)
\mapsto \alpha(a_0, a_1, \ldots, a_n), \;&\textrm{for $\alpha \in
\mathrm{Mor}\Delta S$},\\
B_*^{sym_+}A(\iota_n) : \lambda \mapsto \lambda(1_A \otimes \ldots \otimes
1_A), \;&(\lambda \in k).
\end{array}
\right.
\]
\end{definition}
Consider the functor $\mathscr{Y}_*^+ : k$-\textbf{Alg}$ \to k$-\textbf{SimpMod}
given by
\begin{equation}\label{eq.deltasplus}
\mathscr{Y}_*^+A \;:=\; k[ N(- \setminus \Delta S_+) ]\otimes_{\Delta S_+}
B_*^{sym_+}A.
\end{equation}
\[
\mathscr{Y}_*^+f \;:=\; \mathrm{id} \otimes B_*^{sym_+}f.
\]
Our goal is to prove the following:
\begin{theorem}\label{thm.SymHom_plusComplex}
For an associative, unital $k$-algebra $A$, $HS_*(A) = H_*\left(
\mathscr{Y}_*^+A;\,k \right)$.
\end{theorem}
As the preliminary step, we shall prove the theorem in the special case of
tensor algebras. We shall need an analogue of Lemma~\ref{lem.tensor_alg_constr}
for $\Delta S_+$.
\begin{lemma}\label{lem.tensor_alg_constr-plus}
For a unital, associative $k$-algebra $A$, there is an isomorphism of
$k$-complexes:
\begin{equation}\label{eq.YplusA-decomp}
\mathscr{Y}^+_*TA \cong \bigoplus_{n\geq -1} Y^+_n,
\end{equation}
where
\[
Y^+_n = \left\{\begin{array}{ll}
k\left[N(\Delta S_+)\right], &n = -1\\
k\left[N([n] \setminus \Delta S_+)\right]
\otimes_{k\Sigma_{n+1}} A^{\otimes(n+1)},
&n \geq 0
\end{array}\right.
\]
Moreover, the differential respects the direct sum decomposition.
\end{lemma}
\begin{proof}
The proof follows verbatim as the proof of Lemma~\ref{lem.tensor_alg_constr},
only with $\Delta S$ replaced with $\Delta S_+$ throughout.
\end{proof}
\begin{lemma}\label{lem.Y-to-Y-plus}
There is a chain map $J_A : \mathscr{Y}_*A \to \mathscr{Y}_*^+A$, which is
natural in $A$.
\end{lemma}
\begin{proof}
First observe that the inclusion $\Delta S \hookrightarrow \Delta S_+$
induces an inclusion of nerves, $N(-\setminus \Delta S) \hookrightarrow
N(-\setminus \Delta S_+)$, which in turn induces the chain map:
\begin{equation}\label{eq.YtoY-plusChainMap1}
k\left[ N(-\setminus \Delta S) \right] \otimes_{\Delta S} B_*^{sym}A
\longrightarrow
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym}A.
\end{equation}
Now, $k\left[ N(-\setminus \Delta S_+) \right]$ is both a right
$\Delta S$-module and a right $\Delta S_+$-module. Similarly, $B_*^{sym_+}A$
is both a left $\Delta S$-module and a left $\Delta S_+$-module. There is
a natural transformation \mbox{$B_*^{sym}A \to B_*^{sym_+}A$}, again induced
by inclusion of categories, hence there is a chain map
\begin{equation}\label{eq.YtoY-plusChainMap2}
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym}A
\longrightarrow
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym_+}A.
\end{equation}
Finally, pass to tensors over $\Delta S_+$:
\begin{equation}\label{eq.YtoY-plusChainMap3}
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S} B_*^{sym_+}A
\longrightarrow
k\left[ N(-\setminus \Delta S_+) \right] \otimes_{\Delta S_+} B_*^{sym_+}A.
\end{equation}
The composition of maps (\ref{eq.YtoY-plusChainMap1}),
(\ref{eq.YtoY-plusChainMap2}) and (\ref{eq.YtoY-plusChainMap3})
defines a chain map $J_A : \mathscr{Y}_*A \to \mathscr{Y}_*^+A$.
It is straightforward to verify that $J$ is natural in $A$.
\end{proof}
We shall show that $J_A$ induces an isomorphism on homology $H_*\left(
\mathscr{Y}_*A;\,k\right) \stackrel{\cong}{\longrightarrow} H_*\left(
\mathscr{Y}^+_*A;\,k\right)$ by examining the case of tensor algebras.
\begin{lemma}\label{lem.SymHom_plusComplex-tensalg}
For a unital, associative $k$-algebra $A$, the chain map $J_{TA} :
\mathscr{Y}_*TA \longrightarrow \mathscr{Y}_*^+TA$ is a homotopy equivalence,
hence $HS_*(TA) = H_*\left( \mathscr{Y}_*^+TA;\,k \right).$
\end{lemma}
\begin{proof}
There is a commutative square of complexes:
\[
\begin{diagram}
\node{\mathscr{Y}_*TA}
\arrow{e,t}{J_{TA}}
\node{\mathscr{Y}_*^+TA}\\
\node{\bigoplus_{n\geq -1} Y_n}
\arrow{n,l}{\cong}
\arrow{e,t}{\displaystyle{\bigoplus_{n \geq -1} j_n}}
\node{\bigoplus_{n \geq -1} Y_n^+}
\arrow{n,l}{\cong}
\end{diagram}
\]
The isomorphisms on the left and right follow from
Lemmas~\ref{lem.tensor_alg_constr} and~\ref{lem.tensor_alg_constr-plus}. The
maps $j_n$ are defined by inclusion of categories $\Delta S \hookrightarrow
\Delta S_+$:
\begin{equation}\label{eq.def-of-j}
j_{-1} : k\left[N(\Delta S)\right] \to k\left[N(\Delta S_+)\right]
\end{equation}
\begin{equation}\label{eq.def-of-jn}
j_n : k\left[N([n] \setminus \Delta S)\right]
\otimes_{k\Sigma_{n+1}} A^{\otimes(n+1)} \longrightarrow
k\left[N([n] \setminus \Delta S_+)\right]
\otimes_{k\Sigma_{n+1}} A^{\otimes(n+1)}, \qquad n \geq 0.
\end{equation}
Observe that $N(\Delta S_+)$ is contractible, since $[-1] \in \mathrm{Obj}
(\Delta S_+)$ is initial. Thus by Lemma~\ref{lem.DeltaScontractible}, the map
$j_{-1}$ is a homotopy equivalence on the $(-1)$-component. Now, for $n \geq
0$, there is equality $N([n] \setminus \Delta S_+) = N([n] \setminus
\Delta S)$, since there are no morphisms $[n] \to [-1]$ for $n \geq 0$, so
$j_n$ is a homotopy equivalence. Therefore $\bigoplus j_n$, and hence
$J_{TA}$, is a homotopy equivalence.
\end{proof}
\begin{rmk}\label{rmk.cyclic-departure}
Observe, this lemma provides our first major departure from the theory of
cyclic homology. The proof above would not work over the categories
$\Delta C$ and $\Delta C_+$, as $N(\Delta C)$ is not contractible.
\end{rmk}
Consider a double complex $\mathscr{T}^+_{*,*}(A)$, the analogue of
complex~(\ref{eq.scriptT}) for $\Delta S_+$.
\begin{equation}\label{eq.scriptT-plus}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}^+_2TA}
\arrow{s,l}{d_2}
\node{\mathscr{Y}^+_2T^2A}
\arrow{w,t}{\mathscr{Y}^+_2\theta_1}
\arrow{s,l}{-d_2}
\node{\mathscr{Y}^+_2T^3A}
\arrow{w,t}{\mathscr{Y}^+_2\theta_2}
\arrow{s,l}{d_2}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_1TA}
\arrow{s,l}{d_1}
\node{\mathscr{Y}^+_1T^2A}
\arrow{w,t}{\mathscr{Y}^+_1\theta_1}
\arrow{s,l}{-d_1}
\node{\mathscr{Y}^+_1T^3A}
\arrow{w,t}{\mathscr{Y}^+_1\theta_2}
\arrow{s,l}{d_1}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_0TA}
\node{\mathscr{Y}^+_0T^2A}
\arrow{w,t}{\mathscr{Y}^+_0\theta_1}
\node{\mathscr{Y}^+_0T^3A}
\arrow{w,t}{\mathscr{Y}^+_0\theta_2}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
Consider a second double complex, $\mathscr{A}^+_{*,*}(A)$, the analogue of
complex~(\ref{eq.scriptA}) for $\Delta S_+$. It consists of the complex
$\mathscr{Y}^+_*A$ as the $0^{th}$ column, and $0$ in all positive columns.
\begin{equation}\label{eq.scriptA-plus}
\begin{diagram}
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\node{\vdots}
\arrow{s,l}{ }
\\
\node{\mathscr{Y}^+_2A}
\arrow{s,l}{d_2}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_1A}
\arrow{s,l}{d_1}
\node{0}
\arrow{w}
\arrow{s}
\node{0}
\arrow{w}
\arrow{s}
\node{\cdots}
\arrow{w,t}{ }
\\
\node{\mathscr{Y}^+_0A}
\node{0}
\arrow{w}
\node{0}
\arrow{w}
\node{\cdots}
\arrow{w,t}{}
\end{diagram}
\end{equation}
We may think of each double complex construction, $\mathscr{A}_{*,*},
\mathscr{T}_{*,*}, \mathscr{A}^+_{*,*}$ and $\mathscr{T}^+_{*,*}$, as a functor
from $k$-\textbf{Alg} to the category of double $k$-complexes. Each functor
takes unital morphisms of algebras to maps of double complexes in the obvious
way -- for example if $f : A \to B$, then the induced map $\mathscr{T}_{*,*}(A)
\to \mathscr{T}_{*,*}(B)$ is defined on the $(p,q)$-component by the map
$\mathscr{Y}_qT^{p+1}f$. The induced map commutes with vertical differentials
of $\mathscr{A}_{*,*}$ and $\mathscr{T}_{*,*}$ (resp., $\mathscr{A}^+_{*,*}$ and
$\mathscr{T}^+_{*,*}$) by naturality of $\mathscr{Y}_*$ (resp.
$\mathscr{Y}_*^+$), and it commutes with the horizontal differentials of
$\mathscr{T}_{*,*}$ and $\mathscr{T}^+_{*,*}$ by naturality of $\theta_n$. The
map $J$ induces a natural transformation \mbox{$J_{*,*} : \mathscr{A}_{*,*} \to
\mathscr{A}^+_{*,*}$}, defined by
\[
J_{p,*}(A) = \left\{\begin{array}{ll}
J_A : \mathscr{Y}_*A \to \mathscr{Y}^+_*A, & p = 0\\
0, & p > 0 \end{array}\right.
\]
Define a map of bigraded modules, \mbox{$K_{*,*}(A) : \mathscr{T}_{*,*}(A)
\to \mathscr{T}^+_{*,*}(A)$} by: $K_{p,*}(A) = J_{T^{p+1}A} : \mathscr{Y}_*
T^{p+1}A \to \mathscr{Y}^+_*T^{p+1}A$. Now, $K_{*,*}(A)$ commutes with the
vertical differentials because each $J_{T^{p+1}A}$ is a chain map. $K_{*,*}(A)$
commutes with the horizontal differentials because of naturality of $J$.
Finally, $K_{*,*}$ defines a natural transformation $\mathscr{T}_{*,*} \to
\mathscr{T}^+_{*,*}$, again by naturality of $J$, as the following diagram
illustrates:
\multiply \dgARROWLENGTH by2
\[
\begin{diagram}
\node{A}
\arrow{s,l}{f}
\node{\mathscr{Y}_qT^{p+1}A}
\arrow{s,l}{\mathscr{Y}_qT^{p+1}f}
\arrow{e,tb}{K_{p,q}(A)}{= J_{T^{p+1}A}}
\node{\mathscr{Y}^+_qT^{p+1}A}
\arrow{s,r}{\mathscr{Y}^+_qT^{p+1}f}\\
\node{B}
\node{\mathscr{Y}_qT^{p+1}B}
\arrow{e,tb}{K_{p,q}(B)}{= J_{T^{p+1}B}}
\node{\mathscr{Y}^+_qT^{p+1}B}
\end{diagram}
\]
\divide \dgARROWLENGTH by2
Recall that by Thm.~\ref{thm.doublecomplexiso}, there is a map of double complexes,
$\Theta_{*,*}(A) : \mathscr{T}_{*,*}(A) \longrightarrow \mathscr{A}_{*,*}(A)$
inducing an isomorphism in homology of the total complexes. We shall need the
analogous statement for the double complexes $\mathscr{T}^+_{*,*}$ and
$\mathscr{A}^+_{*,*}$.
\begin{theorem}\label{thm.doublecomplexiso-plus}
For any unital associative algebra, $A$, there is a map of double complexes,
$\Theta^+_{*,*}(A) : \mathscr{T}^+_{*,*}(A) \to \mathscr{A}^+_{*,*}(A)$
inducing an isomorphism in homology $H_*\left( \mathrm{Tot}(\mathscr{T}^+(A));
\,k\right) \to H_*\left( \mathrm{Tot}(\mathscr{A}^+(A));\, k\right)$.
Moreover, $\Theta^+_{*,*}$ provides a natural transformation
$\mathscr{T}^+_{*,*} \to \mathscr{A}^+_{*,*}$.
\end{theorem}
\begin{proof}
The map $\Theta^+_{*,*}(A)$ is defined as:
\[
\Theta^+_{p,q}(A) := \left\{\begin{array}{ll}
\mathscr{Y}^+_q\theta_A, & p = 0 \\
0, & p > 0
\end{array}\right.
\]
This map is a map of double complexes by functoriality of $\mathscr{Y}^+_*$,
and the isomorphism follows from the exactness of the $\Delta S_+$-analogue of
sequence~(\ref{eq.ex-seq-TA}), which holds by the same homotopy argument.
Naturality of $\Theta^+_{*,*}$ follows from naturality of $\theta$.
\end{proof}
\begin{lemma}\label{lem.comm-diag-functors}
The following diagram of functors and transformations is commutative.
\begin{equation}\label{eq.comm-diag-functors}
\begin{diagram}
\node{\mathscr{T}_{*,*}}
\arrow{s,l}{K_{*,*}}
\arrow{e,t}{\Theta_{*,*}}
\node{\mathscr{A}_{*,*}}
\arrow{s,r}{J_{*,*}}\\
\node{\mathscr{T}^+_{*,*}}
\arrow{e,t}{\Theta^+_{*,*}}
\node{\mathscr{A}^+_{*,*}}
\end{diagram}
\end{equation}
\end{lemma}
\begin{proof}
It suffices to fix an algebra $A$ and examine only the $(p,q)$-components.
Note, if $p>0$, then the right hand side of the diagram is trivial, so we may
assume $p=0$:
\begin{equation}\label{eq.comm-diag-A0q}
\begin{diagram}
\node{\mathscr{Y}_qTA}
\arrow{s,l}{(J_{TA})_q}
\arrow{e,t}{\mathscr{Y}_q\theta_A}
\node{\mathscr{Y}_qA}
\arrow{s,r}{(J_A)_q}\\
\node{\mathscr{Y}^+_qTA}
\arrow{e,t}{\mathscr{Y}^+_q\theta_A}
\node{\mathscr{Y}^+_qA}
\end{diagram}
\end{equation}
This diagram commutes by naturality of $J$.
\end{proof}
To any double complex $\mathscr{B}_{*,*}$ over $k$, we may associate two
spectral sequences: $(E_{I}\mathscr{B})_{*,*}$, obtained by first taking
vertical homology, then horizontal; and $(E_{II}\mathscr{B})_{*,*}$, obtained by
first taking horizontal homology, then vertical. In the case that
$\mathscr{B}_{*,*}$ lies entirely within the first quadrant, both spectral
sequences converge to $H_*\left( \mathrm{Tot}(\mathscr{B});\,k \right)$
(See~\cite{Mc}, Section~2.4). Maps of double complexes induce maps of spectral
sequences, $E_{I}$ and $E_{II}$, respectively.
Fix the algebra $A$, and consider the commutative diagram of spectral sequences
induced by diagram~(\ref{eq.comm-diag-functors}). The induced maps will be
indicated by an overline, and explicit mention of the algebra $A$ is suppressed
for brevity of notation.
\begin{equation}\label{eq.comm-diag-SpSeqII}
\begin{diagram}
\node{E_{II} \mathscr{T}}
\arrow{s,l}{\overline{K}}
\arrow{e,t}{\overline{\Theta}}
\node{E_{II} \mathscr{A}}
\arrow{s,r}{\overline{J}}\\
\node{E_{II} \mathscr{T}^+}
\arrow{e,t}{\overline{\Theta^+}}
\node{E_{II} \mathscr{A}^+}
\end{diagram}
\end{equation}
Now, by Thm.~\ref{thm.doublecomplexiso} and
Thm.~\ref{thm.doublecomplexiso-plus}, we know that $\Theta_{*,*}$ and
$\Theta^+_{*,*}$ induce isomorphisms on total homology, so $\overline{\Theta}$
and $\overline{\Theta^+}$ also induce isomorphisms on the limit term of the
spectral sequences. In fact, $\overline{\Theta}^r : (E_{II} \mathscr{T})^r \to
(E_{II} \mathscr{A})^r$ and $\overline{\Theta^+}^r : (E_{II} \mathscr{T}^+)^r
\to (E_{II} \mathscr{A}^+)^r$ are isomorphisms for $r \geq 1$. This is because
taking horizontal homology of
$\mathscr{T}_{*,*}$ (resp. $\mathscr{T}^+_{*,*}$) kills all components in
positive columns, leaving only the $0^{th}$ column, which is chain-isomorphic to
the $0^{th}$ column of $\mathscr{A}_{*,*}$ (resp. $\mathscr{A}^+_{*,*}$).
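Concretely, exactness of each row~(\ref{eq.ex-seq-TA}) gives, via
$\mathscr{Y}_q\theta_A$,
\[
(E_{II}\mathscr{T})^1_{p,q} \cong \left\{\begin{array}{ll}
\mathscr{Y}_qA, & p = 0\\
0, & p > 0,
\end{array}\right.
\]
which agrees term-by-term with the horizontal homology of $\mathscr{A}_{*,*}$;
the same computation applies to $\mathscr{T}^+_{*,*}$ and $\mathscr{A}^+_{*,*}$,
using the $\Delta S_+$-analogue of that sequence.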
Consider a second diagram of spectral sequences, with induced maps indicated by
a hat.
\begin{equation}\label{eq.comm-diag-SpSeqI}
\begin{diagram}
\node{E_{I} \mathscr{T}}
\arrow{s,l}{\widehat{K}}
\arrow{e,t}{\widehat{\Theta}}
\node{E_{I} \mathscr{A}}
\arrow{s,r}{\widehat{J}}\\
\node{E_{I} \mathscr{T}^+}
\arrow{e,t}{\widehat{\Theta^+}}
\node{E_{I} \mathscr{A}^+}
\end{diagram}
\end{equation}
Now the map $\widehat{K}$ induces an isomorphism on the limit terms of the
sequences $E_{I} \mathscr{T}$ and $E_{I} \mathscr{T}^+$ as a result of
Lemma~\ref{lem.SymHom_plusComplex-tensalg}. As before, $\widehat{K}^r$ is
an isomorphism for $r \geq 1$.
Now, since $H_*\left(\mathrm{Tot}(\mathscr{A});\,k\right) = H_*\left(
\mathscr{Y}_*A;\,k\right)$ and $H_*\left(\mathrm{Tot}(\mathscr{A}^+);\,k\right)
= H_*\left( \mathscr{Y}_*^+A;\,k \right)$, we can put together a chain of
isomorphisms
\[
\begin{diagram
\node{H_*\left(\mathscr{Y}_*A;\,k\right) \cong \left(E_{II} \mathscr{A}
\right)^{\infty}_*}
\node{\left(E_{II} \mathscr{T}\right)^{\infty}_*
\cong \left(E_{I} \mathscr{T}\right)^{\infty}_*}
\arrow{w,tb}{\overline{\Theta}^{\infty}}{\cong}
\arrow{e,tb}{\widehat{K}^{\infty}_*}{\cong}
\node{\left(E_{I} \mathscr{T}^+\right)^{\infty}_*}
\end{diagram}
\]
\begin{equation}\label{eq.long-iso-YtoYplus}
\begin{diagram
\node{\cong \left(E_{II} \mathscr{T}^+\right)^{\infty}_*}
\arrow{e,tb}{(\overline{\Theta^+})^{\infty}_*}{\cong}
\node{\left(E_{II} \mathscr{A}^+\right)^{\infty}_*
\cong H_*\left( \mathscr{Y}_*^+A;\,k \right)}
\end{diagram}
\end{equation}
Commutativity of Diagram~(\ref{eq.comm-diag-functors}) ensures that the
composition of maps in Diagram~(\ref{eq.long-iso-YtoYplus}) is the map induced
by $J_A$, hence proving:
\begin{theorem}\label{thm.J-iso}
For a unital, associative $k$-algebra $A$, the chain map $J_A : \mathscr{Y}_*A
\to \mathscr{Y}^+_*A$ induces an isomorphism on homology, $H_*\left(
\mathscr{Y}_*A;\,k\right) \stackrel{\cong}{\longrightarrow} H_*\left(
\mathscr{Y}^+_*A;\,k\right)$.
\end{theorem}
As a direct consequence, $HS_*(A) \cong H_*\left( \mathscr{Y}_*^+A;\,k \right)$,
proving Thm.~\ref{thm.SymHom_plusComplex}.
\section{The Category $\mathrm{Epi}\Delta S$ and a Smaller Resolution}
\label{sec.epideltas}
The complex~(\ref{symhomcomplex}) is extremely large and unwieldy for
computation. Fortunately, when the algebra $A$ is equipped with an
augmentation, $\epsilon : A \to k$, complex~(\ref{symhomcomplex}) is chain
homotopy equivalent to a much smaller subcomplex.
\subsection{Basic and Reduced Tensors}
Recall, if $A$ has an augmentation $\epsilon$, then there is an augmentation
ideal $I$ and the sequence $0 \to I \to A \stackrel{\epsilon}{\to} k \to 0$ is
split exact as $k$-modules. So $A \cong I \oplus k$, and every $x \in A$ can be
decomposed uniquely as $x = a + \lambda$ for some $a \in I$, $\lambda \in k$.
\begin{definition}\label{def.B_JA}
Define $B_{-1,\emptyset}A = k$. For $n \geq 0$, if $J \subseteq [n]$, define
\[
B_{n,J}A := B^J_0 \otimes B^J_1 \otimes \ldots \otimes B^J_n, \quad
\textrm{where}\;\;
B^J_j = \left\{\begin{array}{ll}
I & \textrm{if $j \in J$}\\
k & \textrm{if $j \notin J$}
\end{array}\right.
\]
\end{definition}
\begin{lemma}\label{lem.I-decomp}
For each $n \geq -1$, there is a direct sum decomposition of $k$-modules
$\displaystyle{B_n^{sym_+}A \cong \bigoplus_{J \subseteq [n]} B_{n,J}A}$.
\end{lemma}
\begin{proof}
For $n = -1$, $B_{-1}^{sym_+}A = k = B_{-1,\emptyset}A$.
For $n \geq 0$, $\displaystyle{B_n^{sym_+}A = (I \oplus k)^{\otimes(n+1)}
\cong \bigoplus_{J \subseteq [n]} B_{n,J}A}$.
\end{proof}
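When $I$ happens to be free of finite rank, Lemma~\ref{lem.I-decomp} can be sanity-checked at the level of dimensions. A minimal Python sketch, assuming $\dim_k I = d$ (the function names are ours):

```python
from itertools import combinations

def dim_total(d, n):
    # dim of B_n^{sym+}A = dim (I + k)^{tensor (n+1)}, assuming dim_k I = d
    return (d + 1) ** (n + 1)

def dim_sum_over_J(d, n):
    # dim of the direct sum of B_{n,J}A over all subsets J of [n] = {0,...,n};
    # B_{n,J}A contributes a factor d for each j in J and a factor 1 otherwise
    return sum(d ** len(J)
               for r in range(n + 2)
               for J in combinations(range(n + 1), r))
```

The two counts agree by the binomial theorem, e.g. `dim_total(2, 3) == dim_sum_over_J(2, 3) == 81`.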
\begin{definition}\label{def.basictensors}
A {\it basic tensor} is any tensor $w_0 \otimes w_1 \otimes \ldots \otimes
w_n$, where each $w_j$ is in $I$ or is equal to the unit of $A$. Call a
tensor factor $w_j$ \textit{trivial} if it is the unit of $A$. If all factors
of a basic tensor are trivial, then the tensor is called {\it trivial}, and if
no factors are trivial, the tensor is called {\it reduced}.
\end{definition}
It will be convenient to include the object $[-1] = \emptyset$ in $\Delta$.
Denote the enlarged category by $\Delta_+$. For a basic tensor $Y \in
B_n^{sym_+}A$, we shall define a map $\delta_Y \in \mathrm{Mor}\Delta_+$ as
follows: If $Y$ is trivial, let $\delta_Y = \iota_n$. Otherwise, $Y$ has
$\overline{n} + 1$ non-trivial factors for some $\overline{n} \geq 0$. Define
$\delta_Y : [\overline{n}] \to [n]$ to be the unique injective map that sends
each point $0, 1, \ldots, \overline{n}$ to a point $p \in [n]$ such that $Y$ is
non-trivial at factor~$p$. Let $\overline{Y}$ be the tensor obtained from $Y$
by omitting all trivial factors if such exist, or $\overline{Y} := 1_k$ if $Y$
is trivial. Note, $\overline{Y}$ is the unique basic tensor such that
$(\delta_Y)_*(\overline{Y}) = Y$.
\begin{prop}\label{prop.BsymI}
Any chain $[q]\to [q_0] \to \ldots \to [q_n] \otimes Y \;\in\; k[N(-\setminus
\Delta S_+)] \otimes_{\Delta S_+} B_*^{sym_+}A$, where $Y$ is a basic tensor,
is equivalent to a chain $[\overline{q}] \to [q_0] \to \ldots \to [q_n]
\otimes \overline{Y}$, where either $\overline{Y}$ is reduced or $\overline{Y}
= 1_k$ and $\overline{q} = -1$.
\end{prop}
\begin{proof}
Let $\delta_Y$ and $\overline{Y}$ be defined as above, and let
$[\overline{q}]$ be the source of $\delta_Y$. Then $Y =
(\delta_Y)_*(\overline{Y})$, and
\[
[q] \stackrel{\phi}{\to} [q_0] \to \ldots \to [q_n] \otimes Y \; \approx \;
[\overline{q}] \stackrel{\phi\delta_Y}{\to} [q_0] \to \ldots \to [q_n]
\otimes \overline{Y}.
\]
\end{proof}
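The passage from a basic tensor $Y$ to the pair $(\delta_Y, \overline{Y})$ used in the proof is easy to mechanize. A hedged Python sketch, encoding a basic tensor as a list whose entries are the unit $1$ or labels standing for elements of $I$ (the encoding and names are ours):

```python
def reduce_basic_tensor(Y):
    """Given a basic tensor encoded as a list whose entries are either the
    unit 1 (a trivial factor) or a label standing for an element of I, return
    (delta_Y, Ybar): delta_Y as the ordered list of positions of non-trivial
    factors, and the reduced tensor Ybar."""
    delta = [p for p, w in enumerate(Y) if w != 1]
    Ybar = [Y[p] for p in delta]
    return delta, Ybar

def push_forward(delta, Ybar, n):
    """(delta_Y)_*(Ybar): reinsert trivial factors to recover Y in B_n^{sym+}A."""
    Y = [1] * (n + 1)
    for i, p in enumerate(delta):
        Y[p] = Ybar[i]
    return Y
```

For example, $Y = 1 \otimes a \otimes 1 \otimes b$ reduces to $\overline{Y} = a \otimes b$ with $\delta_Y$ hitting positions $1$ and $3$, and pushing forward recovers $Y$.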
\subsection{Reducing to Epimorphisms}
We now turn our attention to the morphisms in the chains. Our goal is to reduce
to those chains that involve only epimorphisms.
\begin{definition}
Let $\mathscr{C}$ be a category. The category $\mathrm{Epi}\mathscr{C}$
(resp. $\mathrm{Mono}\mathscr{C}$) is the subcategory of $\mathscr{C}$
consisting of the same objects as $\mathscr{C}$ and only those morphisms
$f \in \mathrm{Mor}\mathscr{C}$ that are epic (resp. monic).
The set of morphisms of $\mathrm{Epi}\mathscr{C}$ from $X$ to $Y$ may be
denoted $\mathrm{Epi}_{\mathscr{C}}(X, Y)$. Similarly, the set of morphisms
of $\mathrm{Mono}\mathscr{C}$ from $X$ to $Y$ may be denoted
$\mathrm{Mono}_{\mathscr{C}}(X, Y)$.
\end{definition}
\begin{rmk}
A morphism $\alpha = (\phi, g) \in \mathrm{Mor}\Delta S_+$ is epic (resp.
monic) if and only if $\phi$ is epic (resp. monic) as morphism in $\Delta_+$.
\end{rmk}
\begin{prop}\label{prop.decomp}
Any morphism $\alpha \in \mathrm{Mor}\Delta S_+$ decomposes uniquely as
$(\eta, \mathrm{id}) \circ \gamma$, where $\gamma \in \mathrm{Mor}(
\mathrm{Epi}\Delta S_+)$ and $\eta \in \mathrm{Mor}(\mathrm{Mono}\Delta_+)$.
\end{prop}
\begin{proof}
Suppose $\alpha$ has source $[-1]$ and target $[n]$. Then $\alpha = \iota_n$
is the only possibility, and this decomposes as $\iota_n \circ
\mathrm{id}_{[-1]}$. Now suppose the source of $\alpha$ is $[p]$ for some
$p \geq 0$. Write $\alpha = (\phi, g)$, with $\phi \in \mathrm{Mor}\Delta$
and $g \in \Sigma_{p+1}^{\mathrm{op}}$. We shall decompose $\phi$ as follows:
For $\phi : [p] \to [r]$, suppose $\phi$ hits $q+1$ distinct points of
$[r]$. Let $\pi : [p] \to [q]$ be the epimorphism induced by $\phi$,
preserving the order of the points hit, and let $\eta : [q] \to [r]$ be the
unique order-preserving monomorphism satisfying $\eta \pi = \phi$ in
$\Delta$. To get the required decomposition in $\Delta S$, use: $\alpha =
(\eta, \mathrm{id}) \circ (\pi, g)$.
Now, if $(\xi, \mathrm{id})\circ (\psi, h)$ is also a decomposition of
$\alpha$, with $\xi$ monic and $\psi$ epic, then $(\xi, \mathrm{id}) \circ
(\psi, h) = (\eta, \mathrm{id}) \circ (\pi, g)$ implies $(\xi \psi, g^{-1}h)
= (\eta\pi, \mathrm{id})$, proving $g = h$. Uniqueness will then follow from
uniqueness of such decompositions entirely within the category $\Delta$. The
latter follows from Theorem~B.2 of~\cite{L}, since any monomorphism (resp.
epimorphism) of $\Delta$ can be factored uniquely as compositions of
$\delta_{i}$ (resp. $\sigma_{i}$).
\end{proof}
This decomposition will be written: $[p] \to [r] \;=\; [p] \twoheadrightarrow
\mathrm{im}([p] \to [r]) \hookrightarrow [r]$.
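In $\Delta$, the epi-mono factorization of Prop.~\ref{prop.decomp} is completely explicit. A small Python sketch, encoding a monotone map $\phi : [p] \to [r]$ as the list of its values (encoding and names ours):

```python
def epi_mono_factor(phi):
    """Factor a monotone map phi: [p] -> [r] as eta . pi, where
    pi: [p] ->> [q] is an order-preserving surjection and
    eta: [q] >-> [r] is an order-preserving injection, with q+1 the
    number of distinct points hit by phi."""
    image = sorted(set(phi))             # points of [r] hit by phi
    index = {v: i for i, v in enumerate(image)}
    pi = [index[v] for v in phi]         # surjection onto [q]
    eta = image                          # injection [q] -> [r], i |-> image[i]
    return pi, eta
```

For $\phi = (0,0,2,3,3)$ this returns $\pi = (0,0,1,2,2)$ and $\eta = (0,2,3)$, with $\eta \circ \pi = \phi$.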
\begin{prop}\label{prop.epiconstruction-functorial}
The epimorphism construction is a functor $\mathscr{E}_p : [p] \setminus
\Delta S_+ \to [p] \setminus \mathrm{Epi}\Delta S_+$.
\end{prop}
\begin{proof}
Fix $p \geq -1$. If $\beta$ is an object of $[p] \setminus \Delta S_+$ ({\it
i.e.}, a morphism $[p] \to [r_1]$), then $\mathscr{E}_p(\beta)$ is the
epimorphism $[p] \twoheadrightarrow \mathrm{im}([p] \to [r_1])$. If $[p]
\stackrel{\beta}{\to} [r_1] \stackrel{\alpha}{\to} [r_2]$, then there is an
induced map $\mathrm{im}([p] \to [r_1]) \stackrel{\overline{\alpha}}{\to}
\mathrm{im}([p] \to [r_2])$ making the diagram commute:
\begin{equation}\label{eq.epidiagram}
\begin{diagram
\node{ [r_1] }
\arrow[2]{e,t}{ \alpha }
\node[2]{ [r_2] }
\\
\node[2]{ [p] }
\arrow{nw,b}{ \beta }
\arrow{ne,b}{ \alpha\beta }
\arrow{sw,t,A}{ \pi_1 }
\arrow{se,t,A}{ \pi_2 }
\\
\node{ \mathrm{im}([p] \to [r_1]) }
\arrow[2]{n,l,J}{ \eta_1 }
\arrow[2]{e,b,A}{ \overline{\alpha} }
\node[2]{ \mathrm{im}([p] \to [r_2]) }
\arrow[2]{n,r,J}{ \eta_2 }
\end{diagram}
\end{equation}
$\overline{\alpha}$ is the epimorphism induced from the map $\alpha \eta_1$.
Furthermore, for morphisms $[p] \to [r_1] \stackrel{\alpha_1}{\to} [r_2]
\stackrel{\alpha_2}{\to} [r_3]$, we have: $\overline{\alpha_2 \alpha_1}
= \overline{\alpha_2} \circ \overline{\alpha_1}.$
\end{proof}
\begin{rmk}
Note that if $\alpha : [p] \to [r]$ is an epimorphism of $\Delta S_+$, then
$\mathscr{E}(\alpha) = \alpha$.
\end{rmk}
Define a variant of the symmetric bar construction:
\begin{definition}
$B_*^{sym_+}I : \mathrm{Epi}\Delta S_+ \to k$-\textbf{Mod} is the functor
defined by:
\[
\left\{
\begin{array}{lll}
B_n^{sym_+}I &:=& I^{\otimes (n+1)}, \quad n \geq 0, \\
B_{-1}^{sym_+}I &:=& k,
\end{array}
\right.
\]
\[
B_*^{sym_+}I(\alpha) : (a_0 \otimes a_1 \otimes \ldots \otimes a_n) \mapsto
\alpha(a_0, \ldots, a_n), \;\textrm{for $\alpha \in
\mathrm{Mor}(\mathrm{Epi}\Delta S_+)$}
\]
\end{definition}
This definition makes sense since $I$ is an ideal, and $\alpha$ is required to
be epimorphic. Note, the simple tensors $w_0 \otimes \ldots \otimes w_n$ in
$B_n^{sym_+}I$ are by definition reduced. Consider the simplicial $k$-module:
\begin{equation}\label{epiDeltaS_complex}
\mathscr{Y}^{epi}_*A := k[ N(- \setminus \mathrm{Epi}\Delta S_+) ]
\otimes_{\mathrm{Epi}\Delta S_+} B_*^{sym_+}I
\end{equation}
There is an obvious inclusion, $f : \mathscr{Y}^{epi}_*A \longrightarrow
\mathscr{Y}^+_*A$. Define a chain map $g$ in the opposite direction as follows.
First, by Prop.~\ref{prop.BsymI} and observations above, we only need to
define $g$ on the chains $u = [q] \to [q_0] \to \ldots \to [q_n] \otimes Y$
where $Y$ is reduced (or $Y = 1_k$). In this case, define component maps
$g(q) := N(\mathscr{E}_q) \otimes \mathrm{id}$. A priori, this definition is
well-defined only when the tensor product is over $k$. We would like to
assemble the maps $g(q)$ into a chain map $g$. In order to do this, we must
show that the maps are compatible under $\Delta S_+$-equivalence.
Suppose $v = [p] \stackrel{\phi\psi}{\longrightarrow} [p_0] \to \ldots \to
[p_n] \otimes Z$ and $v' = [q] \stackrel{\phi}{\to} [p_0] \to \ldots \to [p_n]
\otimes \psi_*(Z)$, where $\psi \in \mathrm{Mor}_{\Delta S_+}\left([p],
[q]\right)$. If $Z$ is a basic tensor, then so is $\psi_*(Z)$. In order to
apply $g$ to $v$ or $v'$, each must first be put into a reduced form.
{\bf Case 1} Suppose $Z$ is trivial. Then $v$ and $v'$ both reduce to
$[-1] \to [p_0] \to \ldots \to [p_n] \otimes 1$, hence $g(v) = g(v')$.
{\bf Case 2} Suppose $Z$ is non-trivial. For the sake of clean notation, let
$W = \psi_*(Z)$. Construct $\delta_Z$, $\overline{Z}$, $\delta_W$ and
$\overline{W}$ such that $Z = (\delta_Z)_*(\overline{Z})$ and $W =
(\delta_W)_*(\overline{W})$, as in Prop.~\ref{prop.BsymI}, and reduce both
chains:
\begin{equation}\label{eq.epi-reduction}
\begin{diagram
\node{ [p] \stackrel{\phi\psi}{\longrightarrow} [p_0] \to \ldots \to [p_n]
\otimes Z }
\arrow{e,t,!}{\approx}
\arrow{s,lr}{reduce}{\approx}
\node{ [q] \stackrel{\phi}{\to} [p_0] \to \ldots \to [p_n] \otimes W }
\arrow{s,lr}{reduce}{\approx}
\\
\node{ [\overline{p}] \stackrel{\phi\psi\delta_Z}{\longrightarrow} [p_0] \to
\ldots \to [p_n] \otimes \overline{Z} }
\node{ [\overline{q}] \stackrel{\phi\delta_W}{\longrightarrow} [p_0] \to
\ldots \to [p_n] \otimes \overline{W} }
\end{diagram}
\end{equation}
Observe that the number of distinct points hit by $\psi\delta_Z$ is exactly
$\overline{q} + 1$; indeed, $W=\psi_*(Z)$ has $\overline{q} + 1$ non-trivial
factors. Thus, $[\overline{q}] = \mathrm{im}([\overline{p}] \to [q])$.
Now, Prop.~\ref{prop.decomp} implies that there is precisely one
$\Delta S$-epimorphism $\gamma : [\overline{p}] \to [\overline{q}]$ making
Diagram~(\ref{eq.epi-decomp}) commute.
\begin{equation}\label{eq.epi-decomp}
\begin{diagram
\node{ [p] }
\arrow{e,t}{\psi}
\node{ [q] }
\\
\node{ [\overline{p}] }
\arrow{n,l}{\delta_Z}
\arrow{e,t,.}{}
\arrow{see,b,A}{\gamma}
\arrow{ne,t}{\psi\delta_Z}
\node{ [\overline{q}] }
\arrow{n,l}{\delta_W}
\arrow{se,t,=}{}
\\
\node[3]{ \mathrm{im}([\overline{p}] \to [q]) }
\arrow{nnw,t,L}{}
\end{diagram}
\end{equation}
That is to say, there exists an epimorphism $\gamma$ such that
$\gamma_*(\overline{Z}) = \overline{W}$ and $\psi\delta_Z = \delta_W\gamma$. So
we may replace the first morphism of the chain in the lower left of
Diagram~(\ref{eq.epi-reduction}) with $\phi\delta_W\gamma$. Then when we apply
$g$ to the chain, the first morphism becomes $\mathscr{E}_{\overline{p}}
( \phi\delta_W\gamma ) = \mathscr{E}_{\overline{p}}( \phi\delta_W) \circ
\gamma$, since $\gamma$ is epic. Let $\pi := \mathscr{E}_{\overline{p}}
( \phi\delta_W)$. Then the result of applying $g$ to each side of
Diagram~(\ref{eq.epi-reduction}) is shown below:
\begin{equation}\label{eq.applying-g}
\begin{diagram
\node{ [\overline{p}] \stackrel{\phi\psi\delta_Z}{\longrightarrow} [p_0] \to
\ldots \to [p_n] \otimes \overline{Z} }
\arrow{s,l}{ g(\overline{p}) }
\arrow{e,t,!}{}
\node{ [\overline{q}] \stackrel{\phi\delta_W}{\longrightarrow} [p_0] \to
\ldots \to [p_n] \otimes \overline{W} }
\arrow{s,r}{ g(\overline{q}) }
\\
\node{ [\overline{p}] \stackrel{\pi\gamma}{\to}
\mathrm{im}([\overline{p}] \to [p_0]) \to \ldots \to
\mathrm{im}([\overline{p}] \to [p_n]) \otimes \overline{Z} }
\arrow{e,t,!}{}
\node{ [\overline{q}] \stackrel{\pi}{\to}
\mathrm{im}([\overline{q}] \to [p_0]) \to \ldots \to
\mathrm{im}([\overline{q}] \to [p_n]) \otimes \overline{W} }
\end{diagram}
\end{equation}
Observe that there is equality of objects and morphisms, $\mathrm{im}(
[\overline{p}] \to [p_0]) \to \ldots \to \mathrm{im}([\overline{p}] \to [p_n]) =
\mathrm{im}([\overline{q}] \to [p_0]) \to \ldots \to \mathrm{im}([\overline{q}]
\to [p_n])$. Finally, since $\gamma$ is epic, the $\mathrm{Epi}
\Delta S_+$-equivalence allows us to identify $g(v) \approx g(v')$. This shows
that $g$ is well-defined.
Now clearly $gf = \mathrm{id}$.
\begin{prop}
$fg \simeq \mathrm{id}.$
\end{prop}
\begin{proof}
In what follows, we assume $Y$ is a basic tensor in $B_q^{sym_+}I$.
Define maps $h_j^{(n)}$ as follows:
\[
h_j^{(n)}\left([q] \to [q_0] \to \ldots \to [q_n] \otimes Y\right) \;:=
\]
\[
[q] \twoheadrightarrow \mathrm{im}([q] \to [q_0]) \twoheadrightarrow \ldots
\twoheadrightarrow \mathrm{im}([q] \to [q_j]) \hookrightarrow [q_j] \to
\ldots \to [q_n]
\otimes Y.
\]
$h_j^{(n)}$ is well-defined by the functorial properties of the
epimorphism construction, and a routine, but tedious, verification shows that $h$
defines a presimplicial homotopy from $fg$ to $\mathrm{id}$.
\end{proof}
\begin{prop}\label{prop.epi}
If $A$ has augmentation ideal $I$, then
\[
HS_*(A) = H_*\left(\mathscr{Y}^{epi}_*A;\,k\right) = H_*\left(k[ N(-
\setminus \mathrm{Epi}\Delta S_+) ] \otimes_{\mathrm{Epi}\Delta S_+}
B_*^{sym_+}I;\,k \right).
\]
\end{prop}
\begin{proof}
The complex~(\ref{epiDeltaS_complex}) has been shown to be chain homotopy
equivalent to the complex $\mathscr{Y}^+_*A$, which by
Thm.~\ref{thm.SymHom_plusComplex}, computes $HS_*(A)$.
\end{proof}
\begin{rmk}
The condition that $A$ have an augmentation ideal may be lifted (as Richter
conjectures), if it can be shown that $N(\mathrm{Epi}\Delta S)$ is
contractible. As partial progress along these lines, it can be shown that
$N(\mathrm{Epi}\Delta S)$ is simply-connected.
\end{rmk}
\section{A spectral sequence for $HS_*(A)$}\label{sec.specseq}
Fix a unital associative algebra $A$ over commutative ground ring $k$, and
assume $A$ has an augmentation, with augmentation ideal $I$. Let
$\mathscr{Y}_* = \mathscr{Y}^{epi}_*A$ be the complex~(\ref{epiDeltaS_complex}).
We shall find a useful spectral sequence that computes $HS_*(A)$ based on a
filtration of $\mathscr{Y}_*$.
\subsection{Filtering by Number of Strict Epimorphisms}
Appealing to Remark~\ref{rmk.uniqueChain}, we shall represent a $q$-chain of
$\mathscr{Y}_*$ as $[m_0] \to [m_1] \to \ldots \to [m_q] \otimes Y$, where the
tensor product is over $k$. Consider a filtration of $\mathscr{Y}_*$ by number
of strict epimorphisms, or \textit{jumps} in such chains: $\mathscr{F}_p
\mathscr{Y}_q$ is generated by the chains $[m_0] \to [m_1] \to \ldots \to [m_q]
\otimes Y$, where $m_{i-1} > m_{i}$ for no more than $p$ distinct values of $i$.
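The filtration degree of a chain depends only on the sequence $(m_0, \ldots, m_q)$ of objects; a minimal sketch (names ours):

```python
def jumps(ms):
    """Number of strict epimorphisms ('jumps') in a chain
    [m_0] -> [m_1] -> ... -> [m_q], given as the list of the m_i.
    The chain generates F_p Y_q exactly when this count is <= p."""
    return sum(1 for a, b in zip(ms, ms[1:]) if a > b)
```

For instance, `jumps([3, 3, 1, 1, 0])` is `2`, so that chain lies in $\mathscr{F}_2$ (and in every $\mathscr{F}_p$ with $p \geq 2$) but not in $\mathscr{F}_1$.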
The face maps of $\mathscr{Y}_*$ only compose or delete morphisms, so
$\mathscr{F}_p$ is compatible with the differential of $\mathscr{Y}_*$. The
filtration quotients are easily described: $E^0_{p,q} := \mathscr{F}_p
\mathscr{Y}_q / \mathscr{F}_{p-1}\mathscr{Y}_q$ is generated by chains $[m_0]
\to [m_1] \to \ldots \to [m_q] \otimes Y$, where $m_{i-1} > m_{i}$ for exactly
$p$ distinct values of $i$. Consider the spectral sequence with $E^1_{p,q} =
H_{p+q}( E^0_{p,*} )$.
\begin{lemma}\label{lem.E1_term}
There are chain maps (one for each $p$):
\[
E^0_{p,*} \to \bigoplus_{m_0 > \ldots > m_p} \bigg( I^{\otimes(m_0+1)}
\otimes k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}],
[m_i]\big)\Big] \otimes_{k\Sigma_{m_p + 1}} E_*\Sigma_{m_p + 1} \bigg),
\]
inducing isomorphisms in homology:
\[
E^1_{p,q} \cong \bigoplus_{m_0 > \ldots > m_p} H_q\bigg(
\Sigma_{m_p+1}^{\mathrm{op}}\; ; \; I^{\otimes(m_0+1)} \otimes k\Big[
\prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}],[m_i]\big)\Big]\bigg).
\]
\end{lemma}
We use the convention that $I^{\otimes 0} = k$, and $\Sigma_0 \cong 1$, the
trivial group.
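The coefficient groups in Lemma~\ref{lem.E1_term} involve the finite sets $\mathrm{Epi}_{\Delta_+}([m], [n])$; order-preserving surjections $[m] \twoheadrightarrow [n]$ are counted by $\binom{m}{n}$. A brute-force enumeration in Python (names ours):

```python
from itertools import product
from math import comb

def monotone_surjections(m, n):
    """All order-preserving surjections [m] -> [n], where [m] = {0, ..., m},
    each encoded as the tuple of its values."""
    return [f for f in product(range(n + 1), repeat=m + 1)
            if all(f[i] <= f[i + 1] for i in range(m))
            and len(set(f)) == n + 1]
```

For all small $m \geq n \geq 0$, `len(monotone_surjections(m, n))` agrees with `comb(m, n)`.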
\begin{rmk}
Here, we are using the resolution $E_*G$ of $k$ by free $kG$-modules,
$E_nG = k\big[ \prod^{n+1}G \big]$, with $G$-action $g . (g_0, g_1, \ldots,
g_n) = (gg_0, g_1, \ldots, g_n)$, and differential,
\[
\partial^{EG}(g_0, g_1, \ldots, g_n) = \left[\sum_{i=0}^{n-1} (-1)^i (g_0,
\ldots, g_ig_{i+1}, \ldots, g_n)\right] + (-1)^n (g_0, g_1, \ldots, g_{n-1}).
\]
\end{rmk}
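As a sanity check, the differential of $E_*G$, taken with the standard alternating signs $\sum_i (-1)^i d_i$ (where $d_i$ merges $g_ig_{i+1}$ for $i < n$ and $d_n$ drops the last entry), squares to zero. A Python sketch over the cyclic group $\mathbb{Z}/3$, written additively (all names are ours):

```python
from collections import defaultdict

N = 3  # the cyclic group Z/3, written additively

def mult(a, b):
    return (a + b) % N

def boundary(chain):
    """d(g_0,...,g_n) = sum_{i<n} (-1)^i (g_0,...,g_i g_{i+1},...,g_n)
                        + (-1)^n (g_0,...,g_{n-1}),
    extended linearly; a chain is a dict mapping tuples to coefficients.
    Degree-0 generators map to 0 (the augmentation to k is not included)."""
    out = defaultdict(int)
    for tup, c in chain.items():
        n = len(tup) - 1
        for i in range(n):
            merged = tup[:i] + (mult(tup[i], tup[i + 1]),) + tup[i + 2:]
            out[merged] += (-1) ** i * c
        if n >= 1:
            out[tup[:-1]] += (-1) ** n * c
    return {t: c for t, c in out.items() if c != 0}
```

Applying `boundary` twice to any generator of $E_2G$ or $E_1G$ returns the zero chain, which is exactly the simplicial identity $\partial^{EG} \circ \partial^{EG} = 0$.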
\subsection{Proof of Lemma~\ref{lem.E1_term}}
The proof will be broken down into a number of steps. Begin by defining two
related chain complexes $\mathscr{B}_*^{(m_0, \ldots, m_p)}$ and
$\mathscr{M}_*^{(m_0, \ldots, m_p)}$.
For $m_0 > m_1 > \ldots > m_p$, define:
\[
\mathscr{B}_*^{(m_0, \ldots, m_p)} := k\left[ \coprod \Big( [m_0]
\stackrel{\cong}{\to} \ldots \stackrel{\cong}{\to} [m_{0}] \twoheadrightarrow
[m_{1}] \stackrel{\cong}{\to} \ldots \stackrel{\cong}{\to} [m_{p-1}]
\twoheadrightarrow [m_p] \right] \otimes I^{\otimes(m_0+1)},
\]
where the coproduct extends over all such chains that begin with $0$ or more
isomorphisms of $[m_0]$, followed by a strict epimorphism
\mbox{$[m_0] \twoheadrightarrow [m_1]$}, followed by $0$ or more isomorphisms of
$[m_1]$, followed by a strict epimorphism \mbox{$[m_1] \twoheadrightarrow
[m_2]$}, etc., and the last morphism must be a strict epimorphism
\mbox{$[m_{p-1}] \twoheadrightarrow [m_p]$}. $\mathscr{B}_*^{(m_0, \ldots,
m_p)}$ is a subcomplex of $E^0_{p,*}$ with the same induced differential,
which we will denote by $\partial^B$.
Denote by $\mathscr{M}_*^{(m_0,\ldots, m_p)}$, the chain complex consisting of
$0$ in degrees different from $p$, and
\[
\mathscr{M}_p^{(m_0, \ldots, m_p)} \;:=\; I^{\otimes(m_0+1)} \otimes
k\Big[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big].
\]
This is the coefficient group that shows up in Lemma~\ref{lem.E1_term}. Its
differential is trivial. Now, $\mathscr{B}_p^{(m_0, \ldots, m_p)}$ is
generated by elements of the form $[m_0] \twoheadrightarrow [m_1]
\twoheadrightarrow \ldots \twoheadrightarrow [m_p] \otimes Y$, where the chain
consists entirely of strict epimorphisms of $\Delta S_+$. Observe that
\[
\mathscr{B}_p^{(m_0, \ldots, m_p)} =
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{0}], [m_1] ) \big] \otimes \ldots \otimes
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big]
\otimes
I^{\otimes(m_0+1)}.
\]
For convenience, the factor $I^{\otimes(m_0 + 1)}$ should be put on the left:
\begin{equation}\label{eq.B_p}
\mathscr{B}_p^{(m_0, \ldots, m_p)} \cong
I^{\otimes(m_0+1)} \otimes
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{0}], [m_1] ) \big] \otimes \ldots \otimes
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big]
\end{equation}
Now, each $k\big[\mathrm{Epi}_{\Delta S_+}( [m], [n] ) \big]$ is a
\mbox{$(k\Sigma_{m+1})$-$(\Sigma_{n+1})$-bimodule} via the action of symmetric
group elements as automorphisms of $[m]$ and $[n]$. Explicitly, for an element
$(\psi, g)$ of $\mathrm{Epi}_{\Delta S_+}([m],[n])$, and group elements $\tau
\in \Sigma_{m+1}$ and $\sigma \in \Sigma_{n+1}$, view $\tau$ as an
automorphism $t \in \Sigma_{m+1}^\mathrm{op}$ and $\sigma$ as an
automorphism $s \in \Sigma_{n+1}^\mathrm{op}$. Then $(\psi, g).\sigma \;:=\;
(\mathrm{id}, s) \circ (\psi, g) \;=\; (\psi^{s}, gs^\psi)$, and $\tau.(\psi,
g) \;:=\; (\psi, g) \circ (\mathrm{id}, t) \;=\; (\psi, tg)$. Moreover, view
$I^{\otimes(m_0+1)}$ as a right \mbox{$(k\Sigma_{m_0+1})$-module} via the
identification $I^{\otimes(m_0+1)} = B_{m_0}^{sym_+}I$. With this in
mind,~(\ref{eq.B_p}) becomes a (left) \mbox{$(k\Sigma^\mathrm{op}_
{m_p + 1})$-module}, where the action is the right action of $k\Sigma_{m_p+1}$
on the last tensor factor.
We claim that there is a $k$-module isomorphism,
\begin{equation}\label{eq.M_alt}
\mathscr{M}_p^{(m_0, \ldots, m_p)} \;\cong\; I^{\otimes(m_0+1)} \otimes_{kG_0}
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{0}], [m_1] ) \big]
\otimes_{kG_{1}} \ldots \otimes_{kG_{p-1}}
k\big[\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big],
\end{equation}
where $G_i$ is the group $\Sigma_{m_i + 1}$. The isomorphism follows from
the following observation. Any element $Y \otimes (\psi_1, g_1) \otimes \ldots
\otimes (\psi_p, g_p)$ in the module on the right in~(\ref{eq.M_alt}) is
equivalent to one in which all $g_i$ are identities by first writing $(\psi_p,
g_p) = g_p.(\psi_p, \mathrm{id})$ then commuting $g_p$ over the tensor to its
left and iterating this process to the leftmost tensor factor. Thus, we may
write the element uniquely as $Z \otimes \phi_1 \otimes \ldots \otimes \phi_p$,
where all tensors are now over $k$, and all morphisms are in $\mathrm{Epi}
\Delta S_+$, that is, $Z \otimes \phi_1 \otimes \ldots \otimes \phi_p \in
\mathscr{M}_p^{(m_0, \ldots, m_p)}$.
\begin{prop}\label{prop.gamma_iso}
There is a $\Sigma_{m_p + 1}^\mathrm{op}$-equivariant chain map,
$\gamma_* : \mathscr{B}_*^{(m_0,\ldots, m_p)} \to \mathscr{M}_*^{(m_0,\ldots,
m_p)}$, inducing an isomorphism on homology, $H_*\left( \mathscr{B}_*^{(m_0,
\ldots, m_p)} \right) \stackrel{\gamma_*}{\cong} H_*\left(
\mathscr{M}_*^{(m_0,\ldots, m_p)}\right)$.
\end{prop}
\begin{proof}
$\gamma_*$ is defined to be $0$ in degrees different from $p$. In degree $p$,
use the isomorphisms~(\ref{eq.B_p}) and~(\ref{eq.M_alt}) to define $\gamma_p$
as the canonical map,
\[
\begin{diagram
\node{ I^{\otimes(m_0+1)} \otimes k\big[\mathrm{Epi}_{\Delta S_+}( [m_0],
[m_1] ) \big] \otimes \ldots \otimes k\big[\mathrm{Epi}_{\Delta S_+}(
[m_{p-1}], [m_p] ) \big] }
\arrow{s}\\
\node{ I^{\otimes(m_0+1)} \otimes_{kG_0} k\big[\mathrm{Epi}_{\Delta S_+}(
[m_0], [m_1] ) \big] \otimes_{kG_1} \ldots \otimes_{kG_{p-1}} k\big[
\mathrm{Epi}_{\Delta S_+}( [m_{p-1}], [m_p] ) \big] }
\end{diagram}
\]
We shall prove that $\gamma_*$ induces an isomorphism in homology by induction
on $p$. First, if $p=0$, then $\mathscr{B}_*^{(m_0)} = I^{\otimes(m_0+1)}$,
and concentrated in degree $0$. Moreover, $\gamma_0$ is the identity
$\mathscr{B}_0^{(m_0)} \to \mathscr{M}_0^{(m_0)}$.
Next assume $\gamma_* : \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \to
\mathscr{M}_*^{(m_0, \ldots, m_{p-1})}$ induces an isomorphism in homology for
any string of $p$ numbers $m_0 > m_1 > \ldots > m_{p-1}$. Now assume $m_p <
m_{p-1}$. Let $G = \Sigma_{m_{p-1}+1}$. As graded $k$-modules, there is
a degree-preserving isomorphism:
\begin{equation}\label{eq.B_tensor_iso}
\theta_* \;:\; \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G
\otimes k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]
\longrightarrow \mathscr{B}_*^{(m_0, \ldots, m_{p-1}, m_p)}
\end{equation}
where the degree of an element $u \otimes (g_0, \ldots, g_n) \otimes (g,
\phi)$ is defined recursively: $\deg\left(u \otimes (g_0, \ldots, g_n) \otimes
(g, \phi)\right) := \deg(u) + n + 1$, and $\deg(u_0) = 0$ for any $u_0 \in
\mathscr{B}_0^{(m_0)}$. $\theta_*$ is defined on generators,
$u \otimes (g_0, g_1, \ldots, g_n) \otimes (g, \phi)$, by letting $g_0$
act on the right of $u$, then appending the remaining group elements $g_1,
\ldots, g_n$ onto the chain as automorphisms, and finally appending the
morphism $(g, \phi)$ to the end. Explicitly,
\[
\theta_*\left( u \otimes (g_0, g_1, \ldots, g_n) \otimes (g, \phi) \right)
:= (u.g_0) \,\ast\, [m_{p-1}] \stackrel{ g_1 }{\longrightarrow} [m_{p-1}]
\stackrel{ g_2 }{\longrightarrow} \ldots \stackrel{ g_n }
{\longrightarrow} [m_{p-1}] \stackrel { (\phi, g) }{\longrightarrow} [m_p],
\]
where $v \ast w$ is the concatenation of chains $v$ and $w$ (the final target
of $v$ must agree with the initial source of $w$).
If we define a right action of $\Sigma_{m_p+1}$ on $k\big[G \times
\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]$ via $(g, \phi) . h := \left(
gh^{\phi}, \phi^h \right)$, then $\theta_*$ becomes a map of right
$(k\Sigma_{m_p+1})$-modules, since the action defined above simply amounts to
post-composition of the morphism $(\phi, g)$ with $(\mathrm{id}, h)$.
Observe that $\mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]$ is a chain
complex, with differential defined on generators by,
\[
\partial\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big) \;=\;
\partial^B(u) \otimes (g_0, \ldots, g_n) \otimes (g, \phi) + (-1)^{\deg(u)} u
\otimes \partial^{EG}\big((g_0, \ldots, g_n) \otimes (g, \phi)\big).
\]
Note, the $n^{th}$ face map of $E_nG \otimes k\big[G \times
\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big]$ is defined by:
$\partial_n\big( (g_0, \ldots, g_n) \otimes (g, \phi) \big)
= (g_0, \ldots, g_{n-1}) \otimes ( g_ng, \phi)$.
It is straightforward but tedious to verify that $\theta_*$ is a chain map
with respect to this differential.
$\theta_*$ has a two-sided
$\Sigma_{m_p+1}^{\mathrm{op}}$-equivariant inverse, defined by sending
\[
u \ast [m_{p-1}] \stackrel{g_1}{\to} [m_{p-1}] \stackrel{g_2}{\to} \ldots
\stackrel{g_n}{\to} [m_{p-1}] \stackrel{ (\phi, g) }{\to} [m_p] \; \mapsto
\; u \otimes (\mathrm{id}, g_1, g_2, \ldots, g_n) \otimes (g, \phi).
\]
Thus, $\theta_*$ is a chain isomorphism.
The next step in this proof is to prove a chain homotopy equivalence,
\[
\begin{diagram
\node{ \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes
k\big[G \times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] }
\arrow{s,l}{\simeq}
\\
\node{ \mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes k\big[
\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] }
\end{diagram}
\]
To that end, we shall define chain maps $F_*$ and $G_*$ between the two
complexes. Let $\mathscr{U}_* := \mathscr{B}_*^{(m_0, \ldots, m_{p-1})}$, and
$S := \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])$, and define maps
\begin{equation}\label{eq.F-map}
F_* \;:\; \mathscr{U}_* \otimes_{kG} E_*G \otimes k[ G \times S ]
\longrightarrow \mathscr{U}_* \otimes k[S],
\end{equation}
\[
F_*\big( u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \big) \;:=\;
\left\{\begin{array}{ll}
u.(g_0g) \otimes \phi, \quad &\textrm{if $n = 0$} \\
0, \quad &\textrm{if $n > 0$}.
\end{array}\right.
\]
\begin{equation}\label{eq.G-map}
G_* \;:\; \mathscr{U}_* \otimes k[S] \to \mathscr{U}_* \otimes_{kG} E_*G
\otimes k[ G \times S ]
\end{equation}
is the composite, $\mathscr{U}_* \otimes k[S] \,\stackrel{\cong}
{\longrightarrow}\, \mathscr{U}_* \otimes_{kG} G \otimes k[S] \,=\,
\mathscr{U}_* \otimes_{kG} E_0G \otimes k[S] \hookrightarrow \mathscr{U}_*
\otimes_{kG} E_*G \otimes k[ G \times S ]$.
The last map is induced by inclusions $E_0G \hookrightarrow E_*G$ and
$k[S] \hookrightarrow k[ G \times S ]$.
Now, $F_*G_*$ is the identity, and $G_*F_* \simeq \mathrm{id}$ via the
homotopy, $h_* : u \otimes (g_0, \ldots, g_n) \otimes (g, \phi) \mapsto
(-1)^{\deg(u) + n}u \otimes (g_0, \ldots, g_n, g) \otimes (\mathrm{id}, \phi)$.
The verification of this claim is tedious but routine.
Since $\mathscr{M}_*^{(m_0, \ldots, m_{p-1})} \otimes k\big[
\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\big] \cong \mathscr{M}_*^{(m_0,
\ldots, m_p)}$, the proof of Prop~\ref{prop.gamma_iso} follows from
the fact that $\gamma_*$ decomposes as the following chain of isomorphisms
and homotopy equivalences (each of which is also
$\Sigma_{m_p+1}^\mathrm{op}$-equivariant):
\[
\mathscr{B}_*^{(m_0, \ldots, m_p)}\stackrel{\theta_*^{-1}}{\longrightarrow}
\mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes_{kG} E_*G \otimes k\left[G
\times \mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\right] \stackrel{F_*}
{\longrightarrow}
\]
\[
\mathscr{B}_*^{(m_0, \ldots, m_{p-1})} \otimes k\left[
\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\right] \stackrel{\gamma_* \otimes
\mathrm{id}}{\longrightarrow} \mathscr{M}_*^{(m_0, \ldots, m_{p-1})} \otimes
k\left[\mathrm{Epi}_{\Delta_+}([m_{p-1}], [m_p])\right] \stackrel{\cong}
{\longrightarrow}\mathscr{M}_*^{(m_0, \ldots, m_p)}
\]
\end{proof}
Now, we may prove Lemma~\ref{lem.E1_term}. Let $G=\Sigma_{m_p + 1}$. Observe,
\[
E^0_{p,q} \cong \bigoplus_{m_0 > \ldots > m_p}
\bigoplus_{s+t = q} \mathscr{B}_s^{(m_0, \ldots, m_p)}
\otimes_{kG} E_tG,
\]
with differential corresponding exactly to the vertical differential defined for
$E^0$. Note, the outer direct sum respects the differential $d^0$, so the $E^1$
term is given by:
\begin{equation}\label{eq.E1_expression}
E^1_{p,q} = H_{p+q}(E^0_{p,*}) \cong
\bigoplus_{m_0 > \ldots > m_p} H_{p+q}\left(
\mathscr{B}_*^{(m_0, \ldots, m_p)}
\otimes_{kG} E_*G\right),
\end{equation}
where we view $\mathscr{B}_*^{(m_0, \ldots, m_p)} \otimes_{kG} E_*G$ as a double
complex. In what follows, let $(m_0, \ldots, m_p)$ be fixed.
In order to take the homology of the double complex, we set up another spectral
sequence. From the discussion above, the total differential is given by
$\partial^{total} = d^{v} + d^{h}$, where $d^v\left(u \otimes (g_0, \ldots,
g_t)\right) := \partial^{B}(u) \otimes (g_0, \ldots, g_t)$, and $d^h\left(u
\otimes (g_0, \ldots, g_t)\right) := (-1)^{\deg(u)} u \otimes \partial^{EG}(g_0,
\ldots, g_t)$. Thus, there is a spectral sequence $\{\widetilde{E}^r_{*,*},
d^r\}$ with $\widetilde{E}^2 \cong H_{*,*}\left( H_*\left( \mathscr{B}_*^{(m_0,
\ldots, m_p)} \otimes_{kG} E_*G,\, d^h \right),\, d^v \right) \Rightarrow
H_*\left(\mathscr{B}_*^{(m_0, \ldots, m_p)}\otimes_{kG} E_*G\right)$.
Let us examine the $\widetilde{E}^2$ term more closely. Let $t$ be fixed, and
take the horizontal differential of the original double complex. We obtain
$\widetilde{E}^1_{*,t} = H_*\left( \mathscr{B}_*^{(m_0, \ldots, m_p)}
\otimes_{kG} E_tG,\, d^h \right) \cong H_*\left( \mathscr{B}_*^{(m_0, \ldots,
m_p)}\right) \otimes_{kG} E_tG$, since $E_tG$ is flat as left $kG$-module (in
fact, $E_tG$ is free). Then, by Prop.~\ref{prop.gamma_iso},
$\widetilde{E}^1_{*,t} \cong H_*( \mathscr{M}_*^{(m_0, \ldots, m_p)})
\otimes_{kG} E_tG$
\[
= \left\{\begin{array}{ll}
I^{\otimes (m_0+1)} \otimes
k\left[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\left([m_{i-1}], [m_i]\right)
\right] \otimes_{kG} E_tG, & \textrm{in degree $p$}\\
0, & \textrm{in degrees different from $p$}
\end{array}\right.
\]
So, the only groups that survive are concentrated in column $p$. Taking the
vertical differential now amounts to computing the $G^\mathrm{op}$-equivariant
homology of $I^{\otimes (m_0+1)} \otimes k\Big[ \prod_{i=1}^p
\mathrm{Epi}_{\Delta_+}\big([m_{i-1}], [m_i]\big)\Big]$, so
\[
\widetilde{E}^2_{p,t} \cong H_t\left( G^{\mathrm{op}}; \; I^{\otimes (m_0+1)}
\otimes k\left[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\left([m_{i-1}], [m_i]
\right)\right]\right).
\]
Since $\widetilde{E}^2_{s,t} = 0$ for $s \neq p$, the sequence collapses here.
Thus,
\[
H_{p+q}\left( \mathscr{B}_*^{(m_0, \ldots, m_p)} \otimes_{kG} E_*G\right)
\cong H_q\left( G^{\mathrm{op}}; \; I^{\otimes (m_0+1)} \otimes k\left[
\prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\left([m_{i-1}], [m_i]\right)\right]
\right).
\]
Putting this information back into eq.~(\ref{eq.E1_expression}), we obtain the
desired isomorphism:
\[
E^1_{p,q} \cong \bigoplus_{m_0 > \ldots > m_p} H_q\left( G^{\mathrm{op}} \; ;
\; I^{\otimes (m_0+1)} \otimes k\left[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}
\left([m_{i-1}], [m_i]\right)\right] \right).
\]
A final piece of information needed in order to use Lemma~\ref{lem.E1_term} for
computation is a description of the horizontal differential $d^1_{p,q}$ on
$E^1_{p,q}$. This map is induced from the differential $d$ on $\mathscr{Y}_*$,
and reduces the filtration degree by $1$. Thus, it is the sum of face maps
that combine strict epimorphisms. Let
\[
[u] \in \bigoplus_{m_0 > \ldots > m_p} H_q\left( \Sigma_{m_p+1}^{\mathrm{op}}; \;
I^{\otimes (m_0+1)} \otimes k\left[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}
\left([m_{i-1}], [m_i]\right)\right] \right)
\]
be represented by a chain, $u = Y \otimes (\phi_1, \phi_2, \ldots, \phi_p)
\otimes (g_0, \ldots, g_q)$. Then, the face maps $d_i$ of $d^1_{p,q}$ are
defined for $0 \leq i < p$ by:
\[
d_i(u) := \left\{\begin{array}{ll}
(\phi_1)_*(Y) \otimes (\phi_2, \ldots, \phi_p) \otimes (g_0, \ldots, g_q),
\quad &\textrm{for $i = 0$}\\
Y \otimes (\phi_1, \ldots, \phi_{i+1}\phi_i, \ldots, \phi_p) \otimes (g_0,
\ldots, g_q), \quad &\textrm{for $0 < i < p$}.
\end{array}\right.
\]
The last face map $d_p$ has the effect of removing the morphism $\phi_p$ by
iteratively commuting it past any group elements to the right of it: $d_p(u) =
Y \otimes (\phi_1, \ldots, \phi_{p-1}) \otimes (g'_0,\ldots, g'_q)$, where
$g'_i = g_i^{\phi_p^{g_0g_1\ldots g_{i-1}}}$. Note that $d_p$ involves a
change of groups, from $\Sigma_{m_p+1}$ to $\Sigma_{m_{p-1}+1}$.
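To make the two kinds of face maps concrete, the following small illustration (ours, not part of the argument above) spells out the case $p = 1$, assuming the usual alternating-sum convention $d^1 = d_0 - d_1$.

```latex
% Illustration for p = 1 (assuming d^1 = d_0 - d_1).
% Let u = Y \otimes (\phi_1) \otimes (g_0, \ldots, g_q), where
% \phi_1 : [m_0] \twoheadrightarrow [m_1] is strictly epic. Then
\[
d_0(u) = (\phi_1)_*(Y) \otimes (\,) \otimes (g_0, \ldots, g_q),
\qquad
d_1(u) = Y \otimes (\,) \otimes
\left(g_0^{\phi_1},\, g_1^{\phi_1^{g_0}},\, \ldots,\,
g_q^{\phi_1^{g_0 g_1 \cdots g_{q-1}}}\right).
\]
% The face d_0 lands in the summand indexed by (m_1), over the group
% \Sigma_{m_1+1}; the face d_1 lands in the summand indexed by (m_0)
% and changes the group to \Sigma_{m_0+1}.
```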
\begin{prop}
The spectral sequence $E^r_{p,q}$ above collapses at $r = 2$.
\end{prop}
\begin{proof}
This proof relies on the fact that the differential $d$ on $\mathscr{Y}_*$
cannot reduce the filtration degree by more than $1$. Explicitly, we shall
show that $d^r \,:\, E^r_{p,q} \to E^r_{p-r,q+r-1}$ is trivial for $r \geq 2$.
The map $d^r$ is induced by $d$ in the following way. Let $Z_p^r = \{ x \in
\mathscr{F}_p \mathscr{Y}_* \,|\, d(x) \in \mathscr{F}_{p-r}\mathscr{Y}_* \}$.
Then $E^r_{p,*} = Z_p^r/(Z_{p-1}^{r-1} + dZ_{p+r-1}^{r-1})$. Now, $d$ maps
$Z_p^r \to Z_{p-r}^r$ and $Z_{p-1}^{r-1} + dZ_{p+r-1}^{r-1} \to
dZ_{p-1}^{r-1}$. Hence, there is an induced map $\overline{d}$ making the
square below commute. $d^r$ is obtained as the composition of $\overline{d}$
with a projection onto $E^r_{p-r,*}$.
\[
\begin{diagram}
\node{Z_p^r}
\arrow{r,t}{d}
\arrow{s,l,A}{\pi_1}
\node{Z_{p-r}^r}
\arrow{s,r,A}{\pi_2}\\
\node{Z_p^r/(Z_{p-1}^{r-1} + dZ_{p+r-1}^{r-1})}
\arrow{s,=}
\arrow{e,t}{\overline{d}}
\node{Z_{p-r}^r/dZ_{p-1}^{r-1}}
\arrow{s,r,A}{\pi'}\\
\node{E_{p,*}^r}
\arrow{se,t}{d^r}
\node{Z_{p-r}^r/(Z_{p-r-1}^{r-1} + dZ_{p-1}^{r-1})}
\arrow{s,=}\\
\node[2]{E_{p-r,*}^r}
\end{diagram}
\]
In our case, $x \in Z_p^r$ is a sum of the form $x = \sum_{q \geq 0}
a_i\left( Y \otimes (f_1, f_2, \ldots, f_q) \right)$, where $a_i \neq 0$ for
only finitely many $i$, and the sum extends over all symbols $Y \otimes (f_1,
f_2, \ldots, f_q)$ with $Y \in B_*^{sym_+}I$, $f_j \in \mathrm{Epi}\Delta S_+$
composable maps, and at most $p$ of the $f_j$ maps are strict epimorphisms.
The image of $x$ under $\pi_1$ looks like $\pi_1(x) = \sum_{q \geq 0} a_i
\left[ Y \otimes (f_1, f_2, \ldots, f_q) \right]$, where exactly $p$ of the
$f_j$ maps are strictly epic. There are, of course, other relations present
as well -- those arising from modding out by $dZ_{p+r-1}^{r-1}$. Consider,
$\overline{d}\pi_1(x)$. This should be the result of lifting $\pi_1(x)$ to a
representative in $Z_p^r$, then applying $\pi_2 d$. One such
representative is $y = \sum_{q \geq 0} a_i\left( Y \otimes (f_1, f_2, \ldots,
f_q) \right)$, in which each symbol $Y \otimes (f_1, f_2, \ldots, f_q)$ has
exactly $p$ strict epimorphisms. Now, $d(y)$ is the sum $\sum_{q \geq 0}
b_i\left( Z \otimes ( g_1, g_2, \ldots, g_{q-1}) \right)$, where each symbol
$Z \otimes ( g_1, g_2, \ldots, g_{q-1})$ has either $p$ or $p-1$ strict
epimorphisms. Thus, if $r \geq 2$, then $d(y) \in Z_{p-r}^r$ implies $d(y)
= 0$. But then, $\overline{d}\pi_1(x) = \pi_2d(y) = 0$, and $d^r = \pi'
\overline{d}$ is trivial.
\end{proof}
\subsection{Implications in Characteristic 0}
If $k$ is a field of characteristic 0, then for any finite group $G$ and
$kG$-module $M$, $H_q( G, M ) = 0$ for all $q > 0$ (see~\cite{B}, for example).
Thus, by Lemma~\ref{lem.E1_term}, the $E^1$ term of the spectral sequence would
be concentrated in row $0$, and
\[
E^1_{p,0} \cong \bigoplus_{m_0 > \ldots > m_p} \left( I^{\otimes (m_0+1)}
\otimes k\left[ \prod_{i=1}^p \mathrm{Epi}_{\Delta_+}\left([m_{i-1}], [m_i]
\right)\right] \right)/ \Sigma_{m_p+1}^{\mathrm{op}},
\]
that is, the group of co-invariants of the coefficient group, under the
right-action of $\Sigma_{m_p+1}$.
Since $E^1$ is concentrated on a row, the spectral sequence collapses at this
term. Hence for the $k$-algebra $A$, with augmentation ideal $I$,
\begin{equation}\label{eq.symhom-char0}
HS_*(A) = H_*\left( \bigoplus_{p\geq 0} \bigoplus_{m_0 > \ldots > m_p}
\left(I^{\otimes (m_0+1)} \otimes k\left[ \prod_{i=1}^p\mathrm{Epi}_{\Delta_+}
\left([m_{i-1}], [m_i]\right)\right]\right) / \Sigma_{m_p+1}^{\mathrm{op}},\;
d^1\right).
\end{equation}
This complex is still rather unwieldy as the $E^1$ term is infinitely generated
in each degree. In the next chapter, we shall see another spectral sequence that
is more computationally useful.
\section{A Second Spectral Sequence}\label{chap.spec_seq2}
\subsection{Reduced Symmetric Homology}
Again, we shall assume $A$ is a $k$-algebra equipped with an augmentation, and
whose augmentation ideal is $I$. Assume further that $I$ is free as $k$-module,
with countable basis $X$. Denote by $B_*^{sym}I$, the restriction of
$B_*^{sym_+}I$ to $\Delta S$. That is, $B_n^{sym}I := I^{\otimes(n+1)}$ for
all $n \geq 0$. Let $\widetilde{\mathscr{Y}}_* := k[ N(- \setminus \mathrm{Epi}
\Delta S) ] \otimes_{\mathrm{Epi}\Delta S} B_*^{sym}I$. Observe that there is a
splitting $\mathscr{Y}^{epi}_*A \cong \widetilde{\mathscr{Y}}_* \oplus
k[ N(\ast) ]$, where $\mathscr{Y}_*^{epi}A$ is the
complex~(\ref{epiDeltaS_complex}) of Section~\ref{sec.epideltas}, and $\ast$ is
the trivial subcategory of $\mathrm{Epi}{\Delta S_+}$ consisting of the object
$[-1]$ and morphism $\mathrm{id}_{[-1]}$. The fact that $I$ is an ideal ensures
that this splitting passes to homology. Hence, we have $HS_*(A) \cong
H_*(\widetilde{\mathscr{Y}}_*) \oplus k_0$, where $k_0$ is the graded $k$-module
consisting of $k$ concentrated in degree $0$.
\begin{definition}\label{def.reducedHS}
The {\it reduced symmetric homology} of $A$ is defined,
$\displaystyle{\widetilde{H}S_*(A) := H_*(\widetilde{\mathscr{Y}}_*)}$.
\end{definition}
Now, since $I = k[X]$ as \mbox{$k$-module} and $B_n^{sym}I =
k[X]^{\otimes(n+1)}$, $\widetilde{\mathscr{Y}}_*$ is generated, as a
\mbox{$k$-module}, by elements of the form $[m_0] \twoheadrightarrow [m_1]
\twoheadrightarrow \ldots \twoheadrightarrow [m_p] \otimes (x_0 \otimes x_1
\otimes \ldots \otimes x_{m_0})$, with $x_i \in X$. Analogous to
Eq.~(\ref{eq.B_p}), denote
\[
\widetilde{\mathscr{B}}^{(m_0, m_1, \ldots, m_q)} :=
k[X]^{\otimes(m_0+1)} \otimes \bigotimes_{i=1}^q k\left[
\mathrm{Epi}_{\Delta S}\left([m_{i-1}], [m_i]\right) \right]
\]
Then we may observe, $\widetilde{\mathscr{Y}}_q \cong \bigoplus_{m_0 \geq \ldots
\geq m_q} \widetilde{\mathscr{B}}^{(m_0, m_1, \ldots, m_q)}$.
The face maps are given on generators as follows:
\[
d_i( Y \otimes f_1 \otimes f_2 \otimes \ldots \otimes f_q )
= \left\{\begin{array}{ll}
f_1(Y) \otimes f_2 \otimes \ldots \otimes f_q, \quad &\textrm{for $i =
0$}\\
Y \otimes f_1 \otimes \ldots \otimes (f_{i+1}f_i) \otimes \ldots \otimes
f_q, \quad &\textrm{for $1 \leq i < q$}\\
Y \otimes f_1 \otimes \ldots \otimes f_{q-1} \quad &\textrm{for $i = q$}.
\end{array}\right.
\]
\subsection{Filtering by Degree}
Consider a filtration $\mathscr{G}_*$ of $\widetilde{\mathscr{Y}}_*$ given by,
\[
\mathscr{G}_p\widetilde{\mathscr{Y}}_q = \bigoplus_{p \geq m_0 \geq \ldots
\geq m_q} \widetilde{\mathscr{B}}^{(m_0, m_1, \ldots, m_q)}.
\]
Observe that $\mathscr{G}_*$ filters the complex by degree of $Y \in
B^{sym}_*I$. The only face map that can potentially change the degree of $Y$ is
$d_0$, and since all morphisms are epic, $d_0$ can only reduce the degree.
Thus, $\mathscr{G}_*$ is compatible with the differential of
$\widetilde{\mathscr{Y}}_*$. The filtration quotients are easily described:
\[
E^0_{p,q} = \bigoplus_{p = m_0 \geq \ldots \geq m_q}
\widetilde{\mathscr{B}}^{(m_0, m_1, \ldots, m_q)}.
\]
$E^0$ splits into a direct sum based on the product of $x_i$'s in $(x_0, \ldots,
x_p) \in X^{p+1}$. For $u \in X^{p+1}$, let $P_u$ be the set of all
distinct permutations of $u$. Then,
\[
E^0_{p,q} = \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}} \left( \bigoplus_{p = m_0
\geq \ldots \geq m_q} \bigoplus_{w \in P_u} w \otimes k\left[\prod_{i=1}^q
\mathrm{Epi}_{\Delta S}\left([m_{i-1}], [m_i]\right)\right] \right).
\]
\subsection{The Categories $\widetilde{\mathcal{S}}_p$,
$\widetilde{\mathcal{S}}'_p$, $\mathcal{S}_p$ and $\mathcal{S}'_p$}
Before proceeding with the main theorem of this section, we must define four
related categories. In the definitions that follow, let $\{z_0, z_1, z_2,
\ldots, z_p\}$ be a set of formal non-commuting indeterminates.
\begin{definition}
$\widetilde{\mathcal{S}}_p$ is the category with objects formal tensor
products $Z_0 \otimes \ldots \otimes Z_s$, where each $Z_j$ is a non-empty
product of the indeterminates $z_i$, and every one of $z_0, z_1, \ldots, z_p$ occurs exactly
once in the tensor product. There is a unique morphism $Z_0 \otimes \ldots
\otimes Z_s \to Z'_0 \otimes \ldots \otimes Z'_t$, if and only if the tensor
factors of the latter are products of the factors of the former in some order.
In such a case, there is a unique $\beta \in \mathrm{Epi}\Delta S$ so that
$\beta_*(Z_0 \otimes \ldots \otimes Z_s) = Z'_0 \otimes \ldots \otimes Z'_t$.
\end{definition}
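To illustrate the definition (with an example of ours), take $p = 2$:

```latex
% In \widetilde{\mathcal{S}}_2, there is a unique morphism
\[
z_0 \otimes z_1 \otimes z_2 \;\longrightarrow\; z_1 z_0 \otimes z_2,
\]
% since each tensor factor of the target (namely z_1 z_0 and z_2) is a
% product of factors of the source. By contrast, there is no morphism
% z_1 z_0 \otimes z_2 \to z_0 z_1 z_2, since the only products of the
% factors z_1 z_0 and z_2 are z_1 z_0 z_2 and z_2 z_1 z_0.
```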
$\widetilde{\mathcal{S}}_p$ has initial objects $\sigma_*(z_0 \otimes z_1
\otimes \ldots \otimes z_p)$, for $\sigma \in \Sigma^{\mathrm{op}}_{p+1}$, so
$N\widetilde{\mathcal{S}}_p$ is a contractible complex. Let
$\widetilde{\mathcal{S}}'_p$ be the full subcategory of
$\widetilde{\mathcal{S}}_p$ with all objects $\sigma_*(z_0 \otimes \ldots
\otimes z_p)$ deleted.
Let $\mathcal{S}_p$ be a skeletal category equivalent to
$\widetilde{\mathcal{S}}_p$. In fact, we may make $\mathcal{S}_p$ the quotient
category, identifying each object $Z_0 \otimes \ldots \otimes Z_s$ with any
permutation of its tensor factors, and identifying morphisms $\phi$ and $\psi$
if their source and target are equivalent. This category has nerve
$N\mathcal{S}_p$ homotopy-equivalent to $N\widetilde{\mathcal{S}}_p$. Now,
$\mathcal{S}_p$ is a poset with unique initial object, $z_0 \otimes \ldots
\otimes z_p$. Let $\mathcal{S}'_p$ be the full subcategory (subposet) of
$\mathcal{S}_p$ obtained by deleting the object $z_0 \otimes \ldots \otimes
z_p$. Clearly, $\mathcal{S}'_p$ is a skeletal category equivalent to
$\widetilde{\mathcal{S}}'_p$.
\subsection{Main Theorem}
\begin{theorem}\label{thm.E1_NS}
There is a spectral sequence converging (weakly) to $\widetilde{H}S_*(A)$ with
\[
E^1_{p,q} \cong \bigoplus_{u\in X^{p+1}/\Sigma_{p+1}} H_{p+q}
\left(EG_{u}\ltimes_{G_u} |N\mathcal{S}_p/N\mathcal{S}'_p|;k\right),
\]
where $G_{u}$ is the isotropy subgroup for the chosen representative of $u \in
X^{p+1}/ \Sigma_{p+1}$.
\end{theorem}
Recall, for a group $G$, right $G$-space $X$, and left $G$-space $Y$, $X \ltimes_G
Y$ denotes the \textit{equivariant half-smash product}. If $\ast$ is a chosen
basepoint for $Y$ having trivial $G$-action, then $X \ltimes_G Y := (X \times_G
Y)/(X \times_G \ast) = (X \times Y)/\approx$, with equivalence relation defined
by $(x.g, y) \approx (x, g.y)$ and $(x, \ast) \approx (x', \ast)$ for all $x,
x' \in X$, $y \in Y$ and $g \in G$ (cf.~\cite{M4}). In our case, $X$ is of the
form $EG$, with canonical underlying complex $E_*G$, equipped with a right
$G$-action, $(g_0, g_1, \ldots, g_n).g = (g_0, g_1, \ldots, g_ng)$.
Observe, both $N\widetilde{\mathcal{S}}_p$ and $N\widetilde{\mathcal{S}}'_p$
carry a left $\Sigma_{p+1}$-action (hence also a $G_u$-action). The action is
defined on $0$-chains $Z_0 \otimes \ldots \otimes Z_s$ by permutation of
the individual indeterminates, $z_0, z_1, \ldots, z_p$. This action extends
to $n$-chains in the straightforward manner.
Define for each $u \in X^{p+1}/\Sigma_{p+1}$, the following subcomplex of
$E^0_{p,q}$:
\[
\mathscr{M}_u \;:=\;\bigoplus_{p = m_0 \geq \ldots \geq m_q}
\bigoplus_{w \in P_u} w \otimes k\left[ \prod_{i=1}^q \mathrm{Epi}_{\Delta S}
\left([m_{i-1}], [m_i]\right)\right]
\]
\begin{lemma}\label{lem.G_u-identification}
There is a chain-isomorphism, $\left(N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p\right)/G_u \stackrel{\cong}{\to} \mathscr{M}_u$.
\end{lemma}
\begin{proof}
Let $C_j$ denote objects of $\widetilde{\mathcal{S}}_p$. As above, we may
view each $C_j$ as a morphism of $\Delta S$. By abuse of notation, let
$C_j$ also represent a $0$-cell of $N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p$. Denote the chosen representative of $u$ again
by $u$ (We view $u =(x_{i_0}, x_{i_1}, \ldots, x_{i_p}) \in X^{p+1}/
\Sigma_{p+1}$ as represented by a $(p+1)$-tuple whose indices are in
non-decreasing order).
First define a map $N\widetilde{\mathcal{S}}_p \longrightarrow \mathscr{M}_u$
on $n$-cells by:
\begin{equation}\label{eq.forward-map}
\alpha_*:\left(C_0 \stackrel{\phi_1}{\to} \ldots \stackrel{\phi_q}{\to} C_q
\right) \mapsto \left(C_0(u) \otimes \phi_1 \otimes \ldots \otimes \phi_q
\right).
\end{equation}
(Note, the notation $C_0(u)$ is used in place of the more correct $(C_0)_*(u)$
in order to avoid clutter of notation.) I claim $\alpha_*$ factors through
$N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p$. Indeed, if
$\left(C_0 \stackrel{\phi_1}{\to} \ldots \stackrel{\phi_q}{\to} C_q\right)
\in N\widetilde{\mathcal{S}}'_p$, then we
cannot have $C_0 = \sigma(z_0 \otimes \ldots \otimes z_p)$ for any symmetric
group element $\sigma$. That is, $C_0$, viewed as a morphism, must be
strictly epic. Then the degree of $C_0(u)$ is strictly less than the
degree of $u$, which would make $\left(C_0(u) \otimes \phi_1 \otimes \ldots
\otimes \phi_q \right)$ trivial in $E^0_{p,q}$.
The map $\alpha_*$ then factors through $\left(N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p \right)/G_u$, since if $\gamma \in G_u$, then
$\gamma$ corresponds to an automorphism
$g \in \Sigma_{p+1}^{\mathrm{op}}$, and by definition, we have:
$\gamma.\left(C_0 \stackrel{\phi_1}{\to} \ldots \stackrel{\phi_q}{\to} C_q
\right) = \left(C_0 \circ g \stackrel{\phi_1}{\to} \ldots \stackrel{\phi_q}
{\to}C_q \circ g\right) \mapsto \left(C_0(g(u)) \otimes \phi_1 \otimes \ldots
\otimes \phi_q\right) = \left(C_0(u) \otimes \phi_1 \otimes \ldots \otimes
\phi_q\right)$. (Note, $g(u) = u$ follows from the fact that $\gamma \in
G_u$, the isotropy subgroup for $u$).
For the opposite direction, we begin with a generator of the form,
$w \otimes \phi_1 \otimes \ldots \otimes \phi_q$ for some $w \in P_u$.
Let $\tau \in \Sigma_{p+1}$ so that $w = t(u)$, where $t \in
\Sigma_{p+1}^{\mathrm{op}}$ corresponds to $\tau$. Define a map sending,
\begin{equation}\label{eq.backward-map}
\beta_* : \left(w \otimes \phi_1 \otimes \ldots \otimes \phi_q\right)\mapsto
\left(t \stackrel{\phi_1}{\to} \phi_1t \stackrel{\phi_2}{\to} \ldots
\stackrel{\phi_q}{\to} \phi_q\cdots\phi_1t \right).
\end{equation}
We must check that the definition of $\beta_*$ does not depend on choice of
$\tau$. Indeed, if $w = s(u)$ also, then $u = s^{-1}t(u)$, hence $s^{-1}t \in
G_u^{\mathrm{op}}$.
Thus,
\[
\left(s \stackrel{\phi_1}{\to} \phi_1s \stackrel{\phi_2}{\to} \ldots \stackrel
{\phi_q}{\to} \phi_q\cdots\phi_1s\right) \;\approx\; s^{-1}t \,.\, \left(s
\stackrel{\phi_1}{\to} \phi_1s \stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}
{\to} \phi_q \cdots\phi_1s\right) = \left(t \stackrel{\phi_1}{\to} \phi_1t
\stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to} \phi_q\cdots\phi_1t
\right).
\]
The maps $\alpha_*$ and $\beta_*$ are clearly inverse to one another. All
that remains is to verify that they are chain maps. We need only check
compatibility with the zeroth face maps in either case, since the $i^{th}$
face maps (for $i>0$) simply compose the morphisms $\phi_{i+1}$ and $\phi_i$
in either chain complex. (The zeroth face maps of
either complex will be denoted $d_0$.)
First consider the map $\alpha_*$.
\[
\begin{diagram}
\node{ \left(C_0 \stackrel{\phi_1}{\to} C_1 \stackrel{\phi_2}{\to} \ldots
\stackrel{\phi_q}{\to} C_q\right) }
\arrow[2]{e,t,T}{d_0}
\arrow{s,l,T}{\alpha_*}
\node[2]{ \left(C_1 \stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
C_q \right)}
\arrow{s,l,T}{\alpha_*}
\\
\node{ \left(C_0(u) \otimes \phi_1 \otimes \phi_2 \otimes \ldots \otimes
\phi_q\right) }
\arrow{e,t,T}{d_0}
\node{ \left(\phi_1C_0(u) \otimes \phi_2 \otimes \ldots \otimes \phi_q
\right) }
\arrow{e,t,=}{}
\node{ \left(C_1(u) \otimes \phi_2 \otimes \ldots \otimes \phi_q\right) }
\end{diagram}
\]
The equality in the lower right of the diagram is simply a restatement that
$C_0 \stackrel{\phi_1}{\to} C_1$ is a morphism of $\widetilde{\mathcal{S}}_p$.
For the reverse direction, assume $w = t(u)$ as above:
\[
\begin{diagram}
\node{ \left(w \otimes \phi_1 \otimes \ldots \otimes \phi_q\right) }
\arrow{e,t,T}{d_0}
\arrow{s,l,T}{\beta_*}
\node{ \left(\phi_1(w) \otimes \phi_2 \ldots \otimes \phi_q\right) }
\arrow{s,l,T}{\beta_*}
\\
\node{ \left(t \stackrel{\phi}{\to} \phi_1t \stackrel{\phi_2}{\to} \ldots
\stackrel{\phi_q}{\to} \phi_q\cdots\phi_1t \right) }
\arrow{e,t,T}{d_0}
\node{ \left(\phi_1t \stackrel{\phi_2}{\to} \ldots \stackrel{\phi_q}{\to}
\phi_q\cdots\phi_1t\right) }
\end{diagram}
\]
\end{proof}
Using this lemma, we identify $\mathscr{M}_u$ with the orbit complex
$\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big)/G_u$. Now,
the complex $N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p$ is a
free $G_u$-complex, so we have an isomorphism: $H_*\left(\left(
N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\right)/G_u\right)\cong
H^{G_u}_*\big(N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p\big)$.
(\textit{i.e.}, $G_u$-equivariant homology. See~\cite{B} for details).
Then, by definition, $H^{G_u}_*\left(N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p\right) = H_*\left(G_u, N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p\right)$, which may be computed using the free
resolution, $E_*G_u$ of $k$ as right $G_u$-module. The resulting complex
$k[E_*G_u] \otimes_{kG_u} k\left[N\widetilde{\mathcal{S}}_p\right]/k\left[
N\widetilde{\mathcal{S}}'_p\right]$ is a double complex isomorphic to the
quotient of two double complexes, namely:
\[
\left(k[E_*G_u] \otimes_{kG_u}k\left[N\widetilde{\mathcal{S}}_p\right]\right)/
\left(k[E_*G_u] \otimes_{kG_u}k\left[N\widetilde{\mathcal{S}}'_p\right]\right)
\;\cong\; k\left[ \left(E_*G_u \times_{G_u} N\widetilde{\mathcal{S}}_p\right)/
\left(E_*G_u \times_{G_u} N\widetilde{\mathcal{S}}'_p\right)\right].
\]
This last complex may be identified with the simplicial complex of the space,
\[
\left(EG_u \times_{G_u} |N\widetilde{\mathcal{S}}_p|\right)/\left(EG_u
\times_{G_u} |N\widetilde{\mathcal{S}}'_p|\right) \cong EG_u \ltimes_{G_u}
|N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p|.
\]
The last piece of the puzzle involves simplifying the spaces
$|N\widetilde{\mathcal{S}}_p/N\widetilde{\mathcal{S}}'_p|$. Since $\mathcal{S}$
is a skeletal subcategory of $\widetilde{\mathcal{S}}$, there is an equivalence
of categories $\widetilde{\mathcal{S}} \simeq \mathcal{S}$, inducing a homotopy
equivalence of complexes (hence also of spaces) $|N\widetilde{\mathcal{S}}|
\simeq |N\mathcal{S}|$. Note that $N\mathcal{S}$ inherits a $G_u$-action from
$N\widetilde{\mathcal{S}}$, and the map $\widetilde{\mathcal{S}} \to
\mathcal{S}$ is $G_u$-equivariant.
\begin{prop}\label{prop.weak-eq}
There are weak equivalences, $EG_u \times_{G_u} |N\widetilde{\mathcal{S}}_p|
\to EG_u \times_{G_u} |N\mathcal{S}_p|$ and $EG_u \times_{G_u}
|N\widetilde{\mathcal{S}}'_p| \to EG_u \times_{G_u} |N\mathcal{S}'_p|$,
inducing a weak equivalence $EG_u \ltimes_{G_u} |N\widetilde{\mathcal{S}}_p/
N\widetilde{\mathcal{S}}'_p| \to EG_u \ltimes_{G_u}|N\mathcal{S}_p/
N\mathcal{S}'_p|$.
\end{prop}
\begin{proof}
The case $p > 2$ will be handled first. Consider the fibration $X \to EG
\times_{G} X \to BG$ associated to a group $G$ and path-connected $G$-space
$X$. The resulting homotopy sequence breaks up into isomorphisms $0 \to
\pi_i(X) \stackrel{\cong}{\to} \pi_i(EG \times_G X) \to 0$ for $i \geq 2$ and
a short exact sequence $0 \to \pi_1(X) \to \pi_1(EG\times_G X) \to G \to 0$.
If there is a $G$-equivariant homotopy-equivalence \mbox{$f : X \to Y$} for a
path-connected $G$-space $Y$, then for $i \geq 2$, we have isomorphisms
$\pi_i(EG \times_{G} X) \gets \pi_i(X) \stackrel{f_*}{\to} \pi_i(Y) \to
\pi_i(EG \times_G Y)$, and a diagram corresponding to $i = 1$:
\[
\begin{diagram}
\node{0}
\arrow{e}
\arrow{s,=}
\node{\pi_1(X)}
\arrow{s,l}{f_*}
\arrow{e}
\node{\pi_1(EG \times_G X)}
\arrow{s,r}{(\mathrm{id} \times f)_*}
\arrow{e}
\node{G}
\arrow{s,=}
\arrow{e}
\node{0}
\arrow{s,=}
\\
\node{0}
\arrow{e}
\node{\pi_1(Y)}
\arrow{e}
\node{\pi_1(EG \times_G Y)}
\arrow{e}
\node{G}
\arrow{e}
\node{0}
\end{diagram}
\]
Thus, there is a weak equivalence $EG \times_G X \to EG \times_G Y$.
So in our case the desired result will be proved if the spaces
$|N\widetilde{\mathcal{S}}'_p|$ and $|N\mathcal{S}'_p|$ are path-connected.
(Note, $|N\widetilde{\mathcal{S}}_p|$ and $|N\mathcal{S}_p|$ are
path-connected because they are contractible). In fact, we need only check
$|N\mathcal{S}'_p|$, since this space is homotopy-equivalent to
$|N\widetilde{\mathcal{S}}'_p|$.
Let $W_0 := z_0z_1 \otimes z_2 \otimes \ldots \otimes z_p$. This represents
a vertex of $N\mathcal{S}'_p$. Suppose $W = Z_0 \otimes \ldots \otimes
Z_i'z_0z_1Z_i'' \otimes \ldots \otimes Z_s$. Then there is a morphism $W_0
\to W$, hence an edge between $W_0$ and $W$.
Next, suppose $W = Z_0 \otimes \ldots \otimes Z_i'z_0Z_i''z_1Z_i''' \otimes
\ldots \otimes Z_s$. There is a path:
\[
\begin{diagram}
\node{Z_0 \otimes \ldots \otimes Z_i'z_0Z_i''z_1Z_i''' \otimes \ldots
\otimes Z_s}
\arrow{s}\\
\node{Z_0Z_1 \ldots Z_i'z_0Z_i''z_1Z_i''' \ldots Z_s}\\
\node{Z_0Z_1 \ldots Z_i' \otimes z_0 \otimes Z_i''z_1Z_i''' \ldots Z_s}
\arrow{n}
\arrow{s}\\
\node{z_0 \otimes Z_0Z_1 \ldots Z_i'Z_i''z_1Z_i''' \ldots Z_s}\\
\node{z_0 \otimes Z_0Z_1 \ldots Z_i'Z_i'' \otimes z_1Z_i''' \ldots Z_s}
\arrow{n}
\arrow{s}\\
\node{z_0z_1Z_i''' \ldots Z_s \otimes Z_0Z_1 \ldots Z_i'Z_i''}\\
\node{W_0}
\arrow{n}
\end{diagram}
\]
Similarly, if $W = Z_0 \otimes \ldots \otimes Z_i'z_1Z_i''z_0Z_i''' \otimes
\ldots \otimes Z_s$, there is a path to $W_0$. Finally, if
$W = Z_0 \otimes \ldots \otimes Z_s$ with $z_0$ occurring in $Z_i$ and
$z_1$ occurring in $Z_j$ for $i \neq j$, there is an edge to some
$W'$ in which $Z_iZ_j$ occurs, and thus a path to $W_0$.
The cases $p = 0, 1$ and $2$ are handled individually:
Observe that $|N\widetilde{\mathcal{S}}'_0|$ and $|N\mathcal{S}'_0|$ are empty
spaces, since $\widetilde{\mathcal{S}}'_0$ has no objects. Hence, $EG_u
\times_{G_u} |N\widetilde{\mathcal{S}}'_0| = EG_u \times_{G_u}
|N\mathcal{S}'_0| = \emptyset$. Furthermore, any group $G_u$ must be trivial.
Thus there is a chain of homotopy equivalences, $EG_u \ltimes_{G_u}
|N\widetilde{\mathcal{S}}_0/N\widetilde{\mathcal{S}}'_0| \simeq
|N\widetilde{\mathcal{S}}_0| \simeq |N\mathcal{S}_0| \simeq EG_u \ltimes_{G_u}
|N\mathcal{S}_0/N\mathcal{S}'_0|$.
Next, since $|N\widetilde{\mathcal{S}}'_1|$ is homeomorphic to
$|N\mathcal{S}'_1|$, each space consisting of the two discrete points
$z_0z_1$ and $z_1z_0$ with the same group action, the proposition is true for
$p = 1$ as well.
For $p=2$, observe that $|N\widetilde{\mathcal{S}}'_2|$ has two connected
components, $\widetilde{U}_1$ and $\widetilde{U}_2$ that are interchanged by
any odd permutation $\sigma \in \Sigma_3$. Similarly, $|N\mathcal{S}'_2|$
consists of two connected components, $U_1$ and $U_2$, interchanged by any odd
permutation of $\Sigma_3$. Now, restricted to the alternating group, $A_3$, we
certainly have weak equivalences for any subgroup $H_u \subseteq A_3$,
$EH_u \times_{H_u} \widetilde{U}_1 \stackrel{\simeq}{\longrightarrow} EH_u
\times_{H_u} U_1$ and $EH_u \times_{H_u} \widetilde{U}_2 \stackrel{\simeq}
{\longrightarrow} EH_u \times_{H_u} U_2$. The action of an odd permutation
induces equivariant homeomorphisms $\widetilde{U}_1 \stackrel{\cong}
{\longrightarrow} \widetilde{U}_2$ and $U_1 \stackrel{\cong}{\longrightarrow}
U_2$, and so if we have a subgroup $G_u \subseteq \Sigma_3$ generated by $H_u
\subseteq A_3$ and a transposition, then the two connected components are
identified in a \mbox{$\Sigma_3$-equivariant} manner. Thus, if $G_u$
contains a transposition, $EG_u \times_{G_u} |N\widetilde{\mathcal{S}}'_2|
\cong EH_u \times_{H_u} \widetilde{U}_1 \simeq EH_u \times_{H_u} U_1 \cong
EG_u \times_{G_u} |N\mathcal{S}'_2|$. This completes the case $p=2$ and the
proof of Prop.~\ref{prop.weak-eq}.
\end{proof}
Prop.~\ref{prop.weak-eq} coupled with Lemma~\ref{lem.G_u-identification}
produces the required isomorphism in homology, hence proving
Thm.~\ref{thm.E1_NS}:
\[
E^1_{p,q} \;=\; \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}} H_{p+q}(\mathscr{M}_u)
\;\cong\; \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}H_{p+q}\left( EG_u
\ltimes_{G_u} |N\widetilde{\mathcal{S}}_p / N\widetilde{\mathcal{S}}'_p|; k
\right)
\]
\[
\cong\; \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}} H_{p+q}\left(
EG_u \ltimes_{G_u} |N\mathcal{S}_p/N\mathcal{S}'_p|; k\right).
\]
\begin{cor}\label{cor.square-zero}
If the augmentation ideal of $A$ satisfies $I^2 = 0$, then
\[
HS_n(A) \cong \bigoplus_{p \geq 0} \bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}
H_{n}(EG_u \ltimes_{G_u} N\mathcal{S}_p/N\mathcal{S}'_p; k).
\]
\end{cor}
\begin{proof}
This follows from consideration of the original $E^0$ term of the spectral
sequence. $E^0$ is generated by chains $Y \otimes \phi_0 \otimes \ldots
\otimes \phi_n$, with induced differential $d^0$, agreeing with the
differential $d$ of $\widetilde{\mathscr{Y}}_*$ when $\phi_0$ is an
isomorphism. When $\phi_0$ is a strict epimorphism, however, the zeroth
face map of $d^0$ maps the generator to: $(\phi_0)_*(Y) \otimes \phi_1 \otimes
\ldots \otimes \phi_n = 0$, since $(\phi_0)_*(Y)$ would have at least one
tensor factor that is the product of two or more elements of $I$. Thus, $d^0$
also agrees with $d$ in the case that $\phi_0$ is strictly epic. Hence, the
spectral sequence collapses at level 1.
\end{proof}
\subsection{The complex $Sym_*^{(p)}$}
Note, for $p > 0$, there are homotopy equivalences $|N\mathcal{S}_p/
N\mathcal{S}'_p| \simeq |N\mathcal{S}_p| \vee S|N\mathcal{S}'_p| \simeq
S|N\mathcal{S}'_p|$, since $|N\mathcal{S}_p|$ is contractible.
$|N\mathcal{S}_p|$ is a disjoint union of $(p+1)!$ $p$-cubes, identified along
certain faces. Geometric analysis of $S|N\mathcal{S}'_p|$, however, seems quite
difficult. Fortunately, there is an even smaller chain complex, chain-homotopy
equivalent to $N\mathcal{S}_p/N\mathcal{S}'_p$.
\begin{definition}\label{def.sym_complex}
Let $p \geq 0$ and impose an equivalence relation on $k\left[
\mathrm{Epi}_{\Delta S} ([p], [q])\right]$ generated by:
\[
Z_0 \otimes \ldots \otimes Z_i \otimes Z_{i+1} \otimes \ldots \otimes Z_q
\approx (-1)^{ab} Z_0 \otimes \ldots \otimes Z_{i+1} \otimes Z_{i} \otimes
\ldots \otimes Z_q,
\]
where $Z_0 \otimes \ldots \otimes Z_q$ is a morphism expressed in tensor
notation, and $a = deg(Z_i) := |Z_i| - 1$, $b = deg(Z_{i+1}) := |Z_{i+1}|
- 1$. Here, $deg(Z)$ is one less than the number of factors of the monomial
$Z$. Indeed, if $Z = z_{i_0}z_{i_1} \ldots z_{i_s}$, then $deg(Z) = s$.
The complex $Sym_*^{(p)}$ is then defined by $Sym_i^{(p)} \;:=\; k\left[
\mathrm{Epi}_{\Delta S}([p], [p-i])\right]/\approx$. The face maps will be
defined recursively. On monomials,
\[
d_i(z_{j_0} \ldots z_{j_s}) = \left\{\begin{array}{ll}
0, & i < 0,\\
z_{j_0} \ldots z_{j_i} \otimes z_{j_{i+1}} \ldots z_{j_s}, \quad & 0 \leq i < s,\\
0, & i \geq s.
\end{array}\right.
\]
Then, extend $d_i$ to tensor products via:
\begin{equation}\label{eq.face_i-tensor}
d_i(W \otimes V) = d_i(W) \otimes V + W \otimes d_{i-deg(W)}(V),
\end{equation}
where $W$ and $V$ are formal tensors in $k\left[\mathrm{Epi}_{\Delta S}([p],
[q])\right]$, and $deg(W) = deg(W_0 \otimes \ldots \otimes W_t) \;:=\;
\sum_{k=0}^t deg(W_k)$. The boundary map $Sym_n^{(p)} \to Sym_{n-1}^{(p)}$
is then $d = \sum_{i=0}^n (-1)^i d_i = \sum_{i=0}^{n-1} (-1)^i d_i$.
\end{definition}
\begin{rmk}
Applying $d_i$ to any formal tensor product yields a single formal tensor
product, since in Eq.~(\ref{eq.face_i-tensor}), at most one of the two terms
will be non-zero.
\end{rmk}
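As a small sanity check (a worked example of ours, not from the text), the differential squares to zero on the generator $z_0 z_1 z_2 \in Sym_2^{(2)}$:

```latex
% On Sym_2^{(2)}, the boundary is d = d_0 - d_1:
\[
d(z_0 z_1 z_2) = z_0 \otimes z_1 z_2 \;-\; z_0 z_1 \otimes z_2.
\]
% Applying d once more, via Eq. (eq.face_i-tensor) with
% deg(z_0) = deg(z_2) = 0 and deg(z_0 z_1) = deg(z_1 z_2) = 1:
\[
d(z_0 \otimes z_1 z_2) = z_0 \otimes z_1 \otimes z_2
= d(z_0 z_1 \otimes z_2),
\]
% so d^2(z_0 z_1 z_2) = 0, as required.
```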
\begin{rmk}\label{rmk.action}
There is an action $\Sigma_{p+1} \times Sym_i^{(p)} \to Sym_i^{(p)}$, given by
permuting the formal indeterminates $z_i$. Furthermore, this action is
compatible with the differential.
\end{rmk}
\begin{lemma}\label{lem.NS-Sym-homotopy}
$Sym_*^{(p)}$ is chain-homotopy equivalent to $k[N\mathcal{S}_p]/
k[N\mathcal{S}'_p]$.
\end{lemma}
\begin{proof}
Let $v_0$ represent the common initial vertex of the $p$-cubes making up
$N\mathcal{S}_p$. Then, as cell-complex, $N\mathcal{S}_p$ consists of $v_0$
together with all corners of the various $p$-cubes and $i$-cells for each
$i$-face of the cubes. Thus, $N\mathcal{S}_p$ consists of $(p+1)!$ $p$-cells
with attaching maps \mbox{$\partial I^p \to (N\mathcal{S}_p)^{p-1}$} defined
according to the face maps for $N\mathcal{S}_p$ given above. Note that a
chain of $N\mathcal{S}_p$ is non-trivial in $N\mathcal{S}_p/N\mathcal{S}'_p$
if and only if the initial vertex $v_0$ is included. Thus, any (cubical)
$k$-cell is uniquely determined by the label of the vertex opposite $v_0$.
Label each top-dimensional cell with the permutation induced on the set $\{0,
1, \ldots, p\}$ by the order of indeterminates in the final vertex, $z_{i_0}
z_{i_1}\ldots z_{i_p}$. On a given $p$-cell, for each vertex $Z_0 \otimes
\ldots \otimes Z_s$, there is an ordering of the tensor factors so that $Z_0
\otimes \ldots \otimes Z_s \to z_{i_0}z_{i_1}\ldots z_{i_p}$ preserves the
order of formal indeterminates $z_i$. Rewrite each vertex of this $p$-cell in
this order. Now, any $p$-chain $(z_{i_0} \otimes z_{i_1} \otimes \ldots
\otimes z_{i_p}) \to \ldots \to z_{i_0}z_{i_1}\ldots z_{i_p}$ is obtained by
choosing the order in which to combine the factors. In fact, the $p$-chains
for this cube are in bijection with the elements of the symmetric group
$S_{p}$, as in the standard decomposition of a $p$-cube into $p!$ simplices.
A given permutation $\{1,2, \ldots, p\}\mapsto \{ j_1, j_2, \ldots, j_p \}$
will represent the chain obtained by first erasing the tensor product symbol
between $z_{i_{j_1-1}}$ and $z_{i_{j_1}}$, then the symbol between
$z_{i_{j_2-1}}$ and $z_{i_{j_2}}$, and so on. In effect, we ``erase'' the
tensor product symbols in the order given by the list $j_1, j_2, \ldots, j_p$.
We shall declare that the {\it natural} order of combining the factors will be
the one that always combines the last two: $(z_{i_0} \otimes \ldots \otimes
z_{i_{p-1}} \otimes z_{i_p}) \to (z_{i_0} \otimes \ldots \otimes z_{i_{p-1}}
z_{i_p}) \to (z_{i_0} \otimes \ldots \otimes z_{i_{p-2}}z_{i_{p-1}}z_{i_p})
\to \ldots \to (z_{i_0} \ldots z_{i_p})$. This corresponds to a permutation
$\rho := \{1,\ldots, p\} \mapsto \{p, p-1, \ldots, 2, 1\}$, and this chain
will be regarded as {\it positive}. A chain $C_\sigma$, corresponding to
another permutation, $\sigma$, will be regarded as positive or negative
depending on the sign of the permutation $\sigma\rho^{-1}$. Finally, the
entire $p$-cell should be identified with the sum $\sum_{\sigma \in S_p}
sign(\sigma\rho^{-1}) C_\sigma$. It is this sign convention that permits the
inner faces of the cube to cancel appropriately in the boundary maps. Thus we
have a map on the top-dimensional chains:
\begin{equation}\label{eq.theta_p-map}
\theta_p : Sym_p^{(p)} \to \big(k[N\mathcal{S}_p]/k[N\mathcal{S}'_p]\big)_p.
\end{equation}
Extend the definition of $\theta_*$ to arbitrary $k$-cells by sending the
$k$-chain $Z_0 \otimes \ldots \otimes Z_{p-k}$ to the sum of $k$-length chains
with source $z_0 \otimes \ldots \otimes z_p$ and target $Z_0 \otimes \ldots
\otimes Z_{p-k}$ with signs determined by the natural order of erasing tensor
product symbols of $z_0 \otimes \ldots \otimes z_p$, excluding those tensor
product symbols that never get erased. The following example should clarify
the point. Let $W = z_3z_0 \otimes z_1 \otimes z_2z_4$. This is a $2$-cell of
$Sym_*^{(4)}$. $W$ is obtained from $z_0 \otimes z_1 \otimes z_2 \otimes z_3
\otimes z_4=z_3 \otimes z_0 \otimes z_1 \otimes z_2 \otimes z_4$ by combining
factors in some order. There are only $2$ erasable tensor product symbols in
this example. The natural order (last to first) corresponds to the chain,
$z_3 \otimes z_0 \otimes z_1 \otimes z_2 \otimes z_4 \to z_3 \otimes z_0
\otimes z_1 \otimes z_2z_4 \to z_3z_0 \otimes z_1 \otimes z_2z_4$. So, this
chain shows up in $\theta_*(W)$ with positive sign, whereas the chain $z_3
\otimes z_0 \otimes z_1 \otimes z_2 \otimes z_4 \to z_3z_0 \otimes z_1 \otimes
z_2 \otimes z_4 \to z_3z_0 \otimes z_1 \otimes z_2z_4$ shows up with a
negative sign.
Now, $\theta_*$ is easily seen to be a chain map $Sym_*^{(p)} \to
k[N\mathcal{S}_p]/k[N\mathcal{S}'_p]$. Geometrically, $\theta_*$ has
the effect of subdividing a cell-complex (defined with cubical cells) into
a simplicial space, so $\theta_*$ is a homotopy-equivalence.
\end{proof}
\begin{rmk}
As an example, consider $|N\mathcal{S}_2|$. There are $6$ \mbox{$2$-cells},
each represented by a copy of $I^2$. The \mbox{$2$-cell} labelled by the
permutation \mbox{$\{0,1,2\} \mapsto \{1,0,2\}$} consists of the chains $z_1
\otimes z_0 \otimes z_2 \to z_1 \otimes z_0z_2 \to z_1z_0z_2$ and $-(z_1
\otimes z_0 \otimes z_2 \to z_1z_0 \otimes z_2 \to z_1z_0z_2)$. Hence, the
boundary is the sum of \mbox{$1$-chains}: $[(z_1 \otimes z_0z_2 \to
z_1z_0z_2) - (z_1 \otimes z_0 \otimes z_2 \to z_1z_0z_2) + (z_1 \otimes z_0
\otimes z_2 \to z_1 \otimes z_0z_2)] - [(z_1z_0 \otimes z_2 \to z_1z_0z_2)
- (z_1 \otimes z_0 \otimes z_2 \to z_1z_0z_2) + (z_1 \otimes z_0 \otimes
z_2 \to z_1z_0 \otimes z_2)] = (z_1 \otimes z_0z_2
\to z_1z_0z_2) + (z_1 \otimes z_0 \otimes z_2 \to z_1 \otimes z_0z_2) -
(z_1z_0 \otimes z_2 \to z_1z_0z_2) - (z_1 \otimes z_0 \otimes z_2 \to z_1z_0
\otimes z_2)$. This \mbox{$1$-chains} correspond to
the $4$ edges of the square. Thus, in our example this $2$-cell of
$|N\mathcal{S}_p|$ will correspond to \mbox{$z_1z_0z_2 \in Sym_2^{(2)}$}, and
its boundary in $|N\mathcal{S}_p/N\mathcal{S}'_p|$ will consist of the two
edges adjacent to the vertex labeled$z_0 \otimes z_1 \otimes z_2$, with
appropriate signs: $(z_0 \otimes z_1 \otimes z_2 \to z_1 \otimes z_0z_2)
- (z_0 \otimes z_1 \otimes z_2 \to z_1z_0 \otimes z_2)$. The corresponding
boundary in $Sym_1^{(2)}$ will be $(z_1 \otimes z_0z_2) - (z_1z_0
\otimes z_2)$, matching the boundary map already defined on $Sym_*^{(p)}$.
See Figs.~\ref{fig.NS_2} and~\ref{fig.sym2}.
\end{rmk}
\begin{figure}[ht]
\psset{unit=.75in}
\begin{pspicture}(6,3.5)
\psdots[linecolor=black, dotsize=4pt]
(2, 2)(2, 3.5)(3.3, 2.75)(3.3, 1.25)(2, .5)
(.7, 1.25)(.7, 2.75)
\psline[linewidth=1pt, linecolor=black](2, 2)(2, 3.5)
\psline[linewidth=1pt, linecolor=black](2, 2)(3.3, 1.25)
\psline[linewidth=1pt, linecolor=black](2, 2)(.7, 1.25)
\psline[linewidth=1pt, linecolor=black](2, 3.5)(3.3, 2.75)
\psline[linewidth=1pt, linecolor=black](3.3, 2.75)(3.3, 1.25)
\psline[linewidth=1pt, linecolor=black](3.3, 1.25)(2, .5)
\psline[linewidth=1pt, linecolor=black](2, .5)(.7, 1.25)
\psline[linewidth=1pt, linecolor=black](.7, 1.25)(.7, 2.75)
\psline[linewidth=1pt, linecolor=black](.7, 2.75)(2, 3.5)
\rput(1.9, 2.12){$z_0 \otimes z_1 \otimes z_2$}
\rput(2, 3.62){$z_0z_1 \otimes z_2$}
\rput(3.28, 1.16){$z_0 \otimes z_1z_2$}
\rput(.9, 1.16){$z_1 \otimes z_2z_0$}
\rput(3.5, 2.87){$z_0z_1z_2$}
\rput(2, .38){$z_1z_2z_0$}
\rput(.6, 2.9){$z_2z_0z_1$}
\psdots[linecolor=black, dotsize=4pt]
(6, 2)(6, 3.5)(7.3, 2.75)(7.3, 1.25)(6, .5)
(4.7, 1.25)(4.7, 2.75)
\psline[linewidth=1pt, linecolor=black](6, 2)(6, 3.5)
\psline[linewidth=1pt, linecolor=black](6, 2)(7.3, 1.25)
\psline[linewidth=1pt, linecolor=black](6, 2)(4.7, 1.25)
\psline[linewidth=1pt, linecolor=black](6, 3.5)(7.3, 2.75)
\psline[linewidth=1pt, linecolor=black](7.3, 2.75)(7.3, 1.25)
\psline[linewidth=1pt, linecolor=black](7.3, 1.25)(6, .5)
\psline[linewidth=1pt, linecolor=black](6, .5)(4.7, 1.25)
\psline[linewidth=1pt, linecolor=black](4.7, 1.25)(4.7, 2.75)
\psline[linewidth=1pt, linecolor=black](4.7, 2.75)(6, 3.5)
\rput(5.9, 2.12){$z_0 \otimes z_1 \otimes z_2$}
\rput(6, 3.62){$z_1z_0 \otimes z_2$}
\rput(7.28, 1.16){$z_1 \otimes z_0z_2$}
\rput(4.9, 1.16){$z_0 \otimes z_2z_1$}
\rput(7.5, 2.87){$z_1z_0z_2$}
\rput(6, .38){$z_0z_2z_1$}
\rput(4.6, 2.9){$z_2z_1z_0$}
\end{pspicture}
\caption[$|N\mathcal{S}_2|$]{$|N\mathcal{S}_2|$ consists of six squares, grouped
into two hexagons that share a common center vertex}
\label{fig.NS_2}
\end{figure}
\begin{figure}[ht]
\psset{unit=.75in}
\begin{pspicture}(6,3.5)
\psdots[linecolor=gray, dotsize=4pt]
(2, 3.5)(3.3, 2.75)(3.3, 1.25)(2, .5)
(.7, 1.25)(.7, 2.75)
\psline[linewidth=1pt, linecolor=gray](2, 2)(2, 3.5)
\psline[linewidth=1pt, linecolor=gray](2, 2)(3.3, 1.25)
\psline[linewidth=1pt, linecolor=gray](2, 2)(.7, 1.25)
\psline[linewidth=1pt, linecolor=gray](2, 3.5)(3.3, 2.75)
\psline[linewidth=1pt, linecolor=gray](3.3, 2.75)(3.3, 1.25)
\psline[linewidth=1pt, linecolor=gray](3.3, 1.25)(2, .5)
\psline[linewidth=1pt, linecolor=gray](2, .5)(.7, 1.25)
\psline[linewidth=1pt, linecolor=gray](.7, 1.25)(.7, 2.75)
\psline[linewidth=1pt, linecolor=gray](.7, 2.75)(2, 3.5)
\psdots[linecolor=black, dotsize=4pt]
(2, 2)(2, 2.75)(2.65, 1.625)(1.35, 1.625)
(2, 1.25)(1.35, 2.375)(2.65, 2.375)
\psline[linewidth=1pt, linecolor=black]{->}(2, 1.25)(2.65, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(2, 1.25)(1.35, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(1.35, 2.375)(1.35, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(1.35, 2.375)(2, 2.75)
\psline[linewidth=1pt, linecolor=black]{->}(2.65, 2.375)(2, 2.75)
\psline[linewidth=1pt, linecolor=black]{->}(2.65, 2.375)(2.65, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(2.65, 2.375)(2, 2)
\psline[linewidth=1pt, linecolor=black]{->}(2, 1.25)(2, 2)
\psline[linewidth=1pt, linecolor=black]{->}(1.35, 2.375)(2, 2)
\psline[linewidth=1pt, linecolor=black]{->}(2, 2.75)(2, 2)
\psline[linewidth=1pt, linecolor=black]{->}(2.65, 1.625)(2, 2)
\psline[linewidth=1pt, linecolor=black]{->}(1.35, 1.625)(2, 2)
\rput(2, 1.1){$z_1z_2z_0$}
\rput(1.1, 2.5){$z_2z_0z_1$}
\rput(2.8, 2.5){$z_0z_1z_2$}
\rput(3, 1.5){$z_0 \otimes z_1z_2$}
\rput(1.1, 1.5){$z_1 \otimes z_2z_0$}
\rput(2, 2.9){$z_0z_1 \otimes z_2$}
\psdots[linecolor=gray, dotsize=4pt]
(6, 3.5)(7.3, 2.75)(7.3, 1.25)(6, .5)
(4.7, 1.25)(4.7, 2.75)
\psline[linewidth=1pt, linecolor=gray](6, 2)(6, 3.5)
\psline[linewidth=1pt, linecolor=gray](6, 2)(7.3, 1.25)
\psline[linewidth=1pt, linecolor=gray](6, 2)(4.7, 1.25)
\psline[linewidth=1pt, linecolor=gray](6, 3.5)(7.3, 2.75)
\psline[linewidth=1pt, linecolor=gray](7.3, 2.75)(7.3, 1.25)
\psline[linewidth=1pt, linecolor=gray](7.3, 1.25)(6, .5)
\psline[linewidth=1pt, linecolor=gray](6, .5)(4.7, 1.25)
\psline[linewidth=1pt, linecolor=gray](4.7, 1.25)(4.7, 2.75)
\psline[linewidth=1pt, linecolor=gray](4.7, 2.75)(6, 3.5)
\psdots[linecolor=black, dotsize=4pt]
(6, 2)(6, 2.75)(6.65, 1.625)(5.35, 1.625)
(6, 1.25)(5.35, 2.375)(6.65, 2.375)
\psline[linewidth=1pt, linecolor=black]{->}(6, 1.25)(6.65, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(6, 1.25)(5.35, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(5.35, 2.375)(5.35, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(5.35, 2.375)(6, 2.75)
\psline[linewidth=1pt, linecolor=black]{->}(6.65, 2.375)(6, 2.75)
\psline[linewidth=1pt, linecolor=black]{->}(6.65, 2.375)(6.65, 1.625)
\psline[linewidth=1pt, linecolor=black]{->}(6.65, 2.375)(6, 2)
\psline[linewidth=1pt, linecolor=black]{->}(6, 1.25)(6, 2)
\psline[linewidth=1pt, linecolor=black]{->}(5.35, 2.375)(6, 2)
\psline[linewidth=1pt, linecolor=black]{->}(6, 2.75)(6, 2)
\psline[linewidth=1pt, linecolor=black]{->}(6.65, 1.625)(6, 2)
\psline[linewidth=1pt, linecolor=black]{->}(5.35, 1.625)(6, 2)
\rput(6, 1.1){$z_0z_2z_1$}
\rput(5.1, 2.5){$z_2z_1z_0$}
\rput(6.8, 2.5){$z_1z_0z_2$}
\rput(7, 1.5){$z_1 \otimes z_0z_2$}
\rput(5.1, 1.5){$z_0 \otimes z_2z_1$}
\rput(6, 2.9){$z_1z_0 \otimes z_2$}
\end{pspicture}
\caption[$Sym^{(2)} \simeq N\mathcal{S}_2/N\mathcal{S}'_2$]
{$Sym^{(2)} \simeq N\mathcal{S}_2/N\mathcal{S}'_2$. The center of each hexagon
is $z_0\otimes z_1 \otimes z_2$.}
\label{fig.sym2}
\end{figure}
Now, with one piece of new notation, we may re-interpret Thm.~\ref{thm.E1_NS}.
\begin{definition}\label{def.ltimescirc}
Let $G$ be a group. Let $k_0$ be the chain complex consisting of $k$
concentrated in degree $0$, with trivial $G$-action. If $X_*$ is a right
$G$-complex, $Y_*$ is a left $G$-complex with $k_0 \hookrightarrow Y_*$ as a
$G$-subcomplex, then define the \textit{equivariant half-smash tensor product}
of the two complexes:
\[
X_* \textrm{\textcircled{$\ltimes$}}_G Y_* := \left(X_* \otimes_{kG}
Y_*\right)/\left(X_* \otimes_{kG} k_0\right)
\]
\end{definition}
\begin{cor}\label{cor.E1_Sym}
There is a spectral sequence converging (weakly) to $\widetilde{H}S_*(A)$ with
\[
E^1_{p,q} \cong
\bigoplus_{u \in X^{p+1}/\Sigma_{p+1}}
H_{p+q}\left(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p)}; k\right),
\]
where $G_{u}$ is the isotropy subgroup for the chosen representative of $u \in
X^{p+1}/ \Sigma_{p+1}$.
\end{cor}
\section{Properties of the Complex $Sym_*^{(p)}$}\label{sec.sym_alg}
\subsection{Algebra Structure of $Sym_*$}
We may consider $Sym_* := \bigoplus_{p \geq 0} Sym_*^{(p)}$ as a bigraded
differential algebra, where $bideg(W) = (p+1, i)$ for $W \in Sym_i^{(p)}$. The
product $\boxtimes : Sym_i^{(p)} \otimes Sym_j^{(q)} \to Sym_{i+j}^{(p+q+1)}$
is defined by: $W \boxtimes V := W \otimes V'$, where $V'$ is obtained from $V$
by replacing each formal indeterminate $z_r$ by $z_{r+p+1}$ for $0 \leq r \leq
q$. Eq.~\ref{eq.face_i-tensor} then implies:
\begin{equation}\label{eq.partialboxtimes}
d( W \boxtimes V ) = d(W) \boxtimes V + (-1)^{bideg(W)_2}W \boxtimes d(V),
\end{equation}
where $bideg(W)_2$ is the second component of $bideg(W)$.
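As a quick check of Eq.~(\ref{eq.partialboxtimes}) in the smallest non-trivial
case, take $W = z_0z_1 \in Sym_1^{(1)}$ and $V = z_0 \in Sym_0^{(0)}$, so that
$W \boxtimes V = z_0z_1 \otimes z_2 \in Sym_1^{(2)}$. Since $d(V) = 0$ and
$bideg(W) = (2,1)$,
\[
d(W \boxtimes V) = z_0 \otimes z_1 \otimes z_2
= (z_0 \otimes z_1) \boxtimes z_0
= d(W) \boxtimes V + (-1)^{1} W \boxtimes d(V),
\]
as expected.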
\begin{prop}\label{prop.boxtimes}
The product $\boxtimes$ is defined on the level of homology. Furthermore,
this product (on both the chain level and homology level) is skew commutative
in a twisted sense: $W \boxtimes V = (-1)^{ij}\tau(V \boxtimes W)$, where
$bideg(W) = (p+1, i)$, $bideg(V) = (q+1, j)$, and $\tau$ is the permutation
sending $\{0, 1, \ldots, q, q+1, q+2, \ldots, p+q, p+q+1\} \mapsto \{p+1, p+2,
\ldots, p+q+1, 0, 1, \ldots, p-1, p \}$.
\end{prop}
\begin{proof}
Eq.~(\ref{eq.partialboxtimes}) implies the product passes to homology classes.
Now, suppose $W = Y_0 \otimes Y_1 \otimes \ldots \otimes Y_{p-i} \in
Sym_i^{(p)}$ and $V = Z_0 \otimes Z_1 \otimes \ldots \otimes Z_{q-j} \in
Sym_j^{(q)}$.
\begin{equation}\label{eq.boxtimesformula}
V \boxtimes W = V \otimes W' = (-1)^\alpha W' \otimes V,
\end{equation}
where $W'$ is related to $W$ by replacing each $z_r$ by $z_{r+q+1}$. The
exponent $\alpha= deg(V)deg(W) = ij$ arises from the relations in
$Sym_{i+j}^{(p+q+1)}$. (The fact that $deg(V) = j$ and $deg(W) = i$ may be
made clear by observing that the degree of a formal tensor product in
$Sym_*^{(s)}$ is equal to the number of {\it cut points}, that is, the number
of places where a tensor product symbol may be inserted.)
Next, apply the block transformation $\tau$ to Eq.~(\ref{eq.boxtimesformula})
to obtain $\tau(V \boxtimes W) = (-1)^\alpha \tau(W' \otimes V) = (-1)^\alpha
W \otimes V' = (-1)^\alpha W \boxtimes V$, where $V'$ is obtained by replacing
$z_r$ by $z_{r+p+1}$ in $V$.
\end{proof}
\subsection{Computer Calculations}
In principle, the homology of $Sym_*^{(p)}$ may be found by using a computer.
In fact, we have the following results up to $p = 7$:
\begin{theorem}\label{thm.poincare_sym_complex}
For $0 \leq p \leq 7$, the groups $H_*(Sym_*^{(p)})$ are free abelian and have
Poincar\'e polynomials $P_p(t) := P\left(H_*(Sym_*^{(p)}); t\right)$:
\[
P_0(t) = 1,
\]
\[
P_1(t) = t,
\]
\[
P_2(t) = t + 2t^2,
\]
\[
P_3(t) = 7t^2 + 6t^3,
\]
\[
P_4(t) = 43t^3 + 24t^4,
\]
\[
P_5(t) = t^3 + 272t^4 + 120t^5,
\]
\[
P_6(t) = 36t^4 + 1847t^5 + 720t^6,
\]
\[
P_7(t) = 829t^5 + 13710t^6 + 5040t^7.
\]
\end{theorem}
\begin{proof}
These computations were performed using scripts written for the computer
algebra systems \verb|GAP|~\cite{GAP4} and \verb|Octave|~\cite{E}.
\end{proof}
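For small $p$, these ranks can be reproduced independently. The following Python sketch (our own re-implementation over $\mathbb{Q}$, not the \verb|GAP| scripts above) builds a basis of $Sym_*^{(p)}$ using the relation $W = (-1)^{deg(Y_i)deg(Y_{i+1})}(\ldots \otimes Y_{i+1} \otimes Y_i \otimes \ldots)$ and the differential that splits each monomial at every cut point with sign $(-1)^{a+\ell}$, then computes Betti numbers as ranks of boundary matrices:

```python
from fractions import Fraction
from itertools import permutations, combinations

def canonical(blocks):
    # Sort tensor factors, tracking a sign of (-1)^(deg * deg) per adjacent
    # swap, per the relation identifying reorderings of the factors.
    blocks, sign = list(blocks), 1
    for i in range(len(blocks)):
        for j in range(len(blocks) - 1 - i):
            if blocks[j] > blocks[j + 1]:
                if (len(blocks[j]) - 1) % 2 and (len(blocks[j + 1]) - 1) % 2:
                    sign = -sign
                blocks[j], blocks[j + 1] = blocks[j + 1], blocks[j]
    return tuple(blocks), sign

def boundary(blocks):
    # Each d_j inserts one tensor symbol into a monomial, with sign
    # (-1)^(a + l), where a is the total degree of the preceding factors.
    result, a = {}, 0
    for i, Y in enumerate(blocks):
        m = len(Y) - 1
        for l in range(m):
            g, s = canonical(blocks[:i] + (Y[:l + 1], Y[l + 1:]) + blocks[i + 1:])
            result[g] = result.get(g, 0) + s * (-1) ** (a + l)
        a += m
    return result

def basis(p, i):
    # Degree-i generators: orderings of z_0..z_p cut into p-i+1 non-empty words.
    gens = set()
    for perm in permutations(range(p + 1)):
        for cuts in combinations(range(1, p + 1), p - i):
            bounds = (0,) + cuts + (p + 1,)
            blocks = tuple(tuple(perm[a:b]) for a, b in zip(bounds, bounds[1:]))
            gens.add(canonical(blocks)[0])
    return sorted(gens)

def matrix_rank(rows):
    # Gaussian elimination over the rationals.
    rows = [[Fraction(x) for x in r] for r in rows if any(r)]
    rank = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((r for r in rows if r[c] != 0), None)
        if piv is None:
            continue
        rows.remove(piv)
        rows = [[x - (r[c] / piv[c]) * y for x, y in zip(r, piv)] for r in rows]
        rows = [r for r in rows if any(r)]
        rank += 1
    return rank

def betti(p):
    B = {i: basis(p, i) for i in range(p + 1)}
    idx = {i: {g: j for j, g in enumerate(B[i])} for i in B}
    rk = {}
    for i in range(1, p + 1):
        rows = []
        for g in B[i]:
            row = [0] * len(B[i - 1])
            for h, c in boundary(g).items():
                row[idx[i - 1][h]] += c
            rows.append(row)
        rk[i] = matrix_rank(rows)
    return [len(B[i]) - rk.get(i, 0) - rk.get(i + 1, 0) for i in range(p + 1)]
```

For example, `betti(3)` recovers the coefficient list of $P_3(t) = 7t^2 + 6t^3$.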
We conjecture that $H_*(Sym_*^{(p)})$ is always free abelian.
\subsection{Representation Theory of $H_*(Sym_*^{(p)})$}\label{sec.rep_sym}
By remark~\ref{rmk.action}, the groups $H_i(Sym_*^{(p)}; k)$ are
$k\Sigma_{p+1}$-modules, so it seems natural to investigate the irreducible
representations comprising these modules.
\begin{prop}
Let $C_{p+1} \hookrightarrow \Sigma_{p+1}$ be the cyclic group of order $p+1$,
embedded into the symmetric group as the subgroup generated by the permutation
$\tau_p := (0, p, p-1, \ldots, 1)$. Then there is a
$\Sigma_{p+1}$-isomorphism: $H_p(Sym_*^{(p)}) \cong AC_{p+1} \uparrow
\Sigma_{p+1}$, {\it i.e.}, the alternating representation of the cyclic group,
induced up to the symmetric group. Note, for $p$ even, $AC_{p+1}$ coincides
with the trivial representation $IC_{p+1}$.
Moreover, $H_p(Sym_*^{(p)})$ is generated by the elements $\sigma(b_p)$, for
the distinct cosets $\sigma C_{p+1}$, where $b_p := \sum_{j = 0}^p (-1)^{jp}
\tau_p^j(z_0z_1 \ldots z_p)$.
\end{prop}
\begin{proof}
Let $w$ be a general element of $Sym_p^{(p)}$, $w = \sum_{\sigma \in
\Sigma_{p+1}} c_\sigma \sigma(z_0z_1 \ldots z_p)$, where $c_\sigma$ are
constants in $k$. $H_p(Sym_*^{(p)})$ consists of those $w$ such that
$d(w) = 0$. That is,
\begin{equation}\label{eq.sum_sigma_zero}
0 = \sum_{\sigma \in \Sigma_{p+1}} \sum_{i=0}^{p-1}
(-1)^i c_\sigma \sigma(z_0 \ldots z_i \otimes z_{i+1} \ldots z_p).
\end{equation}
Now for each $\sigma$, the terms corresponding to $\sigma(z_0 \ldots z_i
\otimes z_{i+1} \ldots z_p)$ occur in pairs in the above formula. The obvious
term of the pair is $(-1)^i c_{\sigma} \sigma(z_0 \ldots z_i \otimes z_{i+1}
\ldots z_p)$. Not so obviously, the second term of the pair is
$(-1)^{(p-i-1)i}(-1)^{p-i-1} c_{\rho} \rho(z_0 \ldots z_{p-i-1} \otimes
z_{p-i} \ldots z_p)$, where $\rho = \sigma \tau_p^{p-i}$. Thus, if $d(w) = 0$,
then we must have $(-1)^i c_\sigma + (-1)^{(p-i-1)(i+1)}c_\rho = 0$, or
$c_\rho = (-1)^{(p-i)(i+1)}c_\sigma$.
Set $j = p-i$, so that $c_\rho = (-1)^{j(p-j+1)}c_\sigma = (-1)^{jp}c_\sigma$.
This shows that the only restrictions on the
coefficients $c_\sigma$ are that the absolute values of coefficients
corresponding to $\sigma, \sigma \tau_p, \sigma \tau_p^2, \ldots$ must be the
same, and their corresponding signs in $w$ alternate if and only if $p$ is
odd; otherwise, they have the same signs. Clearly, the elements $\sigma(b_p)$
for distinct cosets $\sigma C_{p+1}$ represent an independent set of
generators over $k$ for $H_p(Sym_*^{(p)})$.
Observe that $b_p$ is invariant under the action of $sign(\tau_p)\tau_p$, and
so $b_p$ generates an alternating representation $A C_{p+1}$ over $k$.
Induced up to $\Sigma_{p+1}$, we obtain the representation $AC_{p+1} \uparrow
\Sigma_{p+1}$ of dimension $(p+1)!/(p+1) = p!$, generated by the elements
$\sigma(b_p)$ as in the proposition.
\end{proof}
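For example, when $p = 2$, we have $\tau_2 = (0,2,1)$ and all signs $(-1)^{jp}$
equal $+1$, so
\[
b_2 = z_0z_1z_2 + z_2z_0z_1 + z_1z_2z_0,
\]
and one may verify directly that $d(b_2) = 0$: for instance, the term $z_0
\otimes z_1z_2$ of $d(z_0z_1z_2)$ cancels with the term $-z_1z_2 \otimes z_0 =
-z_0 \otimes z_1z_2$ of $d(z_1z_2z_0)$, and the remaining four terms cancel in
pairs in the same way.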
\begin{definition}
For a given proper partition $\lambda = [\lambda_0, \lambda_1, \lambda_2,
\ldots, \lambda_s]$ of the $p+1$ integers $\{0, 1, \ldots, p\}$, an element
$W$ of $Sym_*^{(p)}$ will be designated as {\it type $\lambda$} if it is equivalent
to $\pm(Y_0 \otimes Y_1 \otimes Y_2 \otimes \ldots \otimes Y_s)$ with
$deg(Y_i) = \lambda_i - 1$. That is, each $Y_i$ has $\lambda_i$ factors.
The notation $Sym_\lambda^{(p)}$ or $Sym_\lambda$ will denote the
$k$-submodule of $Sym_{p-s}^{(p)}$ generated by all elements of type
$\lambda$.
\end{definition}
In what follows, $|\lambda|$ will refer to the number of components of
$\lambda$. The action of $\Sigma_{p+1}$ leaves $Sym_\lambda$ invariant for any
given $\lambda$, so there is a decomposition as $k\Sigma_{p+1}$-module:
\[
Sym_{p-s}^{(p)} = \bigoplus_{\lambda \vdash (p+1), |\lambda| = s+1}
Sym_{\lambda}.
\]
\begin{prop}\label{prop.alt-and-trivial-reps-in-sym}
For a given proper partition $\lambda \vdash (p+1)$,
{\it (a)} $Sym_\lambda$ contains exactly one
alternating representation $A\Sigma_{p+1}$ iff $\lambda$ contains no repeated
components.
{\it (b)} $Sym_\lambda$ contains exactly one trivial representation
$I\Sigma_{p+1}$ iff $\lambda$ contains no repeated even components.
\end{prop}
\begin{proof}
$Sym_\lambda$ is a quotient of the regular representation, since it is the
image of the $\Sigma_{p+1}$-map $\pi_\lambda \;:\; k\Sigma_{p+1} \to
Sym_\lambda,$ $\sigma \mapsto \psi_\lambda s$, where $s \in
\Sigma_{p+1}^{\mathrm{op}}$ is the $\Delta S$-automorphism of $[p]$
corresponding to $\sigma$ and $\psi_\lambda$ is a $\Delta$-morphism $[p] \to
[ \,|\lambda|\, ]$ that sends the points $0, \ldots, \lambda_0-1$ to $0$, the
points $\lambda_0, \ldots, \lambda_0 + \lambda_1 -1$ to $1$, and so on.
Hence, there can be at most $1$ copy of $A\Sigma_{p+1}$ and at most $1$ copy
of $I\Sigma_{p+1}$ in $Sym_\lambda$.
Let $W$ be the ``standard'' element of $Sym_\lambda$. That is, the
indeterminates $z_i$ occur in $W$ in numerical order and the degrees of
monomials of $W$ are in decreasing order. $A\Sigma_{p+1}$ exists
in $Sym_\lambda$ iff the element $V = \sum_{\sigma \in \Sigma_{p+1}}
sign(\sigma)\sigma(W)$ is non-zero. Suppose that some component of $\lambda$
is repeated, say $\lambda_i = \lambda_{i+1} = \ell$. If $W = Y_0 \otimes Y_1
\otimes \ldots \otimes Y_s$, then $deg(Y_i) = deg(Y_{i+1}) = \ell-1$. Now, we
know that $W = (-1)^{deg(Y_i)deg(Y_{i+1})} Y_0 \otimes \ldots \otimes Y_{i+1}
\otimes Y_i \otimes \ldots \otimes Y_s = -(-1)^{\ell} \alpha(W)$, for the permutation
$\alpha \in \Sigma_{p+1}$ that exchanges the indices of indeterminates in
$Y_i$ with those in $Y_{i+1}$ in an order-preserving way. In $V$, the term
$\alpha(W)$ shows up with sign $sign(\alpha) = (-1)^\ell$, thus cancelling
with $W$. Hence, $V = 0$, and no alternating representation exists.
If, on the other hand, no component of $\lambda$ is repeated, then no term
$\pm \alpha(W)$ can be equivalent to $W$ for $\alpha \neq \mathrm{id}$, so $V$
survives as the generator of $A\Sigma_{p+1}$ in $Sym_\lambda$.
A similar analysis applies for trivial representations. This time, we examine
$U = \sum_{\sigma \in \Sigma_{p+1}} \sigma(W)$, which would be a generator for
$I\Sigma_{p+1}$ if it were non-zero. As before, if there is a repeated
component, $\lambda_i=\lambda_{i+1} = \ell$, then $W=(-1)^{\ell-1}\alpha(W)$.
However, this time, $W$ cancels with $\alpha(W)$ only if $\ell - 1$ is odd,
that is, only if $\ell = \lambda_i = \lambda_{i+1}$ is even. If $\ell - 1$ is
even, or if all $\lambda_i$ are distinct, then the element $U$ must be non-zero.
\end{proof}
\begin{prop}\label{prop.alternating_reps}
$H_i(Sym_*^{(p)})$ contains an alternating representation for each partition
$\lambda \vdash (p+1)$ with \mbox{$|\lambda| = p-i$} such that no component of
$\lambda$ is repeated.
\end{prop}
\begin{proof}
This proposition will follow from the fact that $d(V) = 0$ for any generator
$V$ of an alternating representation in $Sym_\lambda$. Then, by Schur's
Lemma, the alternating representation must survive at the level of homology.
Let $V= \sum_{\sigma \in \Sigma_{p+1}} sign(\sigma)\sigma(W)$ be the generator
mentioned in Prop.~\ref{prop.alt-and-trivial-reps-in-sym}. $d(V)$ consists of
individual terms $d_j(\sigma(W)) = \sigma(d_j(W))$ along with appropriate
signs. For a given $j$, $d_j(W)$ is identical to $W$ except at some monomial
$Y_i$, where a tensor product symbol is inserted. We will
introduce some notation to make the argument a little cleaner. If
$Y = z_{i_0}z_{i_1} \ldots z_{i_r}$ is a monomial, then the notation $Y\{s,
\ldots, t\}$ refers to the monomial $z_{i_s}z_{i_{s+1}} \ldots z_{i_t}$,
assuming $0 \leq s \leq t \leq r$. Now, we write
\begin{equation}\label{eq.d_jW}
d_j(W) = (-1)^{a + \ell} Y_0 \otimes \ldots \otimes Y_i\{0, \ldots, \ell\}
\otimes Y_i\{\ell+1, \ldots, m\} \otimes \ldots \otimes Y_s,
\end{equation}
where $a = deg(Y_0) + \ldots + deg(Y_{i-1})$. Use the relations in $Sym_*$ to
rewrite Eq.~(\ref{eq.d_jW}):
\begin{equation}\label{eq.d_jW2}
(-1)^{(a + \ell) + \ell(m - \ell - 1)}Y_0 \otimes \ldots \otimes Y_i\{
\ell+1, \ldots, m\}\otimes Y_i\{0,\ldots,\ell\} \otimes \ldots \otimes Y_s.
\end{equation}
Let $\alpha$ be the permutation that relabels indices in such a way that
$Y_i\{0, \ldots, m-\ell-1\} \mapsto Y_i\{\ell+1, \ldots, m\}$ and $Y_i\{m-
\ell, \ldots, m\} \mapsto Y_i\{0,\ldots, \ell\}$, so that the following
is equivalent to Eq.~(\ref{eq.d_jW2}).
\begin{equation}\label{eq.d_jW-alpha}
(-1)^{a + m\ell - \ell^2}\alpha\left(Y_0 \otimes \ldots \otimes Y_i\{0,
\ldots, m-\ell-1\}\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes
Y_s\right)
\end{equation}
Now, Eq.~(\ref{eq.d_jW-alpha}) also occurs in $d_{j'}\left(sign(\alpha)
\alpha(W) \right)$ for some $j'$. This term looks like:
\begin{equation}
sign(\alpha)(-1)^{a + m-\ell-1}
\alpha\big(Y_0 \otimes \ldots \otimes
Y_i\{0, \ldots, m-\ell-1\}
\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes Y_s\big)
\end{equation}
\begin{equation}\label{eq.alpha-d_jprime}
= (-1)^{m\ell - \ell^2 + a - 1}
\alpha\big(Y_0 \otimes \ldots \otimes
Y_i\{0, \ldots, m-\ell-1\}
\otimes Y_i\{m-\ell, \ldots, m\} \otimes \ldots \otimes Y_s\big)
\end{equation}
Comparing the signs of Eq.~(\ref{eq.alpha-d_jprime}) and
Eq.~(\ref{eq.d_jW-alpha}), we verify that the two terms cancel each other out
in the sum $d(V)$.
\end{proof}
By Proposition~\ref{prop.alternating_reps}, it is clear that if $p+1$ is a
triangular number -- {\it i.e.}, $p+1$ is of the form $r(r+1)/2$ for some
positive integer $r$, then the lowest dimension in which an alternating
representation may occur is $p + 1 - r$, corresponding to the partition
$\lambda = [r, r-1, \ldots, 2, 1]$. A little algebra yields the following
statement for any $p$.
\begin{cor}\label{cor.lowest_alternating_reps}
$H_{p+1-r}(Sym_*^{(p)})$ contains an alternating representation,
where $r = \lfloor \sqrt{2p + 9/4} - 1/2 \rfloor$.
Moreover, there are no alternating representations present in
$H_i(Sym_*^{(p)})$ for $i \leq p-r$.
\end{cor}
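For example, when $p = 9$, $p+1 = 10 = 4 \cdot 5/2$ is triangular with $r = 4$,
and indeed $\lfloor \sqrt{2 \cdot 9 + 9/4} - 1/2 \rfloor = \lfloor \sqrt{81/4}
- 1/2 \rfloor = 4$, so the lowest alternating representation in
$H_*(Sym_*^{(9)})$ occurs in degree $10 - 4 = 6$, corresponding to the
partition $\lambda = [4,3,2,1]$.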
Not much is known about the other irreducible representations occurring in
the homology groups of $Sym_*^{(p)}$; however, computational evidence shows that
$H_i(Sym_*^{(p)})$ contains no trivial representation, $I\Sigma_{p+1}$, for
$i \leq p-r$ (with $r$ as in Cor.~\ref{cor.lowest_alternating_reps}) up to $p = 50$.
\subsection{Connectivity of $Sym_*^{(p)}$}
Quite recently, Vre\'cica and \v{Z}ivaljevi\'c~\cite{VZ} observed that the
complex $Sym_*^{(p)}$ is isomorphic to the suspension of the cycle-free
chessboard complex $\Omega_{p+1}$ (in fact, the isomorphism takes the form
$k\left[S\Omega_{p+1}^+\right] \to Sym_*^{(p)}$, where $\Omega_{p+1}^+$ is the
augmented complex).
The $m$-chains of the complex $\Omega_n$ are generated by ordered lists, $L =
\{ (i_0, j_0), (i_1, j_1), \ldots, (i_m, j_m) \}$, where $1 \leq i_0 < i_1 <
\ldots < i_m \leq n$, all $1 \leq j_s \leq n$ are distinct integers, and the
list $L$ is {\it cycle-free}. It may be easier to say what it means for $L$ not
to be cycle-free: $L$ is not cycle-free if there exists a subset $L_c \subseteq
L$ and an ordering of $L_c$ so that $L_c = \{ (\ell_0, \ell_1), (\ell_1, \ell_2),
\ldots, (\ell_{t-1}, \ell_t), (\ell_t, \ell_0) \}$.
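Since the row entries $i_s$ are distinct and the column entries $j_s$ are distinct, the pairs of $L$ determine a partial injection $i \mapsto j$, and $L$ fails to be cycle-free exactly when iterating this map returns to a starting point. A minimal Python sketch of this test (the function name is ours, not from~\cite{VZ}):

```python
def is_cycle_free(L):
    """Decide whether a list of chessboard pairs (i, j) is cycle-free.

    Rows and columns are assumed distinct, so dict(L) is a well-defined
    partial map; a cycle exists iff iterating it returns to a start point.
    """
    nxt = dict(L)
    for start in nxt:
        cur = start
        for _ in range(len(L)):  # a cycle must close within len(L) steps
            cur = nxt.get(cur)
            if cur is None:
                break
            if cur == start:
                return False
    return True
```

For instance, `[(1, 2), (2, 3)]` is cycle-free, while `[(1, 2), (2, 3), (3, 1)]` contains the closed chain forbidden by the definition.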
The differential of $\Omega_n$ is defined on generators by:
\[
d\left( \{ (i_0, j_0), \ldots, (i_m, j_m) \} \right)
:= \sum_{s = 0}^{m} (-1)^s \{ (i_0, j_0), \ldots, (i_{s-1}, j_{s-1}),
(i_{s+1}, j_{s+1}), \ldots, (i_m, j_m) \}.
\]
For completeness, an explicit isomorphism shall be provided:
\begin{prop}\label{prop.iso_omega_sym}
Let $\Omega^+_n$ denote the augmented cycle-free $(n \times n)$-chessboard
complex, where the unique $(-1)$-chain is represented by the empty $n
\times n$ chessboard, and the boundary map on $0$-chains takes a vertex to the
unique $(-1)$-chain. For each $p \geq 0$, there is a chain isomorphism,
$\omega_* : k\left[S\Omega^+_{p+1}\right] \to Sym_*^{(p)}$.
\end{prop}
\begin{proof}
Note that we may define the generating $m$-chains of $k\left[\Omega_{p+1}
\right]$ as cycle-free lists $L$, with no requirement on the order of $L$,
under the equivalence relation: $\sigma. L := \{ (i_{\sigma^{-1}(0)},
j_{\sigma^{-1}(0)}), \ldots, (i_{\sigma^{-1}(m)}, j_{\sigma^{-1}(m)}) \}
\approx sign(\sigma)L$, for $\sigma \in \Sigma_{m+1}$. Suppose $L$ is an
$(m+1)$-chain of $S\Omega^+_{p+1}$ ({\it i.e.} an $m$-chain of
$\Omega^+_{p+1}$). Call a subset $L' \subseteq L$ a {\it queue} if there is a
reordering of $L'$ such that $L' = \{(\ell_0, \ell_1), (\ell_1, \ell_2),
\ldots, (\ell_{t-1}, \ell_t)\}$. $L'$ is called a {\it maximal queue} if it
is not properly contained in any other queue. Since $L$ is supposed to be
cycle-free, we can partition $L$ into some number of maximal queues, $L_1',
L_2', \ldots, L_q'$. Let $\sigma$ be a permutation representing the
reordering of $L$ into maximal ordered queues.
Now, each maximal ordered queue $L_s'$ will correspond to a monomial of
formal indeterminates as follows.
\begin{equation}\label{eq.monomial_correspondence}
L_s' := \{ (\ell_0, \ell_1), (\ell_1, \ell_2), \ldots, (\ell_{t-1},
\ell_{t}) \} \mapsto z_{\ell_0-1}z_{\ell_1-1}\cdots z_{\ell_{t}-1}.
\end{equation}
For each maximal ordered queue, $L'_s$, denote the monomial obtained by
formula~(\ref{eq.monomial_correspondence}) by $Z_s$. Let $k_1, k_2, \ldots,
k_u$ be the numbers in $\{0, 1, 2, \ldots, p\}$ such that $k_{r} + 1$ does
not appear in any pair $(i_s, j_s) \in L$. Now we may define $\omega_*$ on
$L = L'_1 \cup L'_2 \cup \ldots \cup L'_q$.
\begin{equation}\label{eq.omega-def}
\omega_{m+1}(L) := Z_1 \otimes Z_2 \otimes \ldots \otimes Z_q \otimes
z_{k_1} \otimes z_{k_2} \otimes \ldots \otimes z_{k_u}.
\end{equation}
Observe, if $L = \emptyset$ is the $(-1)$-chain of $\Omega^+_{p+1}$, then
there are no maximal queues in $L$, and so $\omega_0(\emptyset) = z_0 \otimes
z_1 \otimes \ldots \otimes z_p$.
$\omega_*$ is a (well-defined) chain map with inverse given by essentially
reversing the process. To each monomial $Z = z_{i_0}z_{i_1}\cdots z_{i_t}$
with $t > 0$, there is an associated ordered queue $L' = \{ (i_0+1, i_1+1),
(i_1+1, i_2+1), \ldots (i_{t-1} +1, i_t + 1) \}$. If the monomial is a
singleton, $Z = z_{i_0}$, the associated ordered queue will be the empty set.
Now, given a generator $Z_1 \otimes Z_2 \otimes \ldots \otimes Z_q \in
Sym_*^{(p)}$, map it to the list $L := L'_1 \cup L'_2 \cup \ldots \cup L'_q$,
preserving the original order of indices.
\end{proof}
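To illustrate, take $p = 2$ and let $L = \{(1,3), (3,2)\}$, an $m = 1$ chain of
$\Omega^+_3$. Here $L$ is a single maximal queue, corresponding
by~(\ref{eq.monomial_correspondence}) to the monomial $z_0z_2z_1$; each of $1,
2, 3$ appears in some pair, so there are no leftover singletons and
$\omega_2(L) = z_0z_2z_1 \in Sym_2^{(2)}$. By contrast, $L = \{(1,3)\}$ yields
the queue monomial $z_0z_2$ with $k_1 = 1$ left over, so $\omega_1(L) = z_0z_2
\otimes z_1 \in Sym_1^{(2)}$.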
\begin{theorem}\label{thm.connectivity}
$Sym_*^{(p)}$ is $\lfloor\frac{2}{3}(p-1)\rfloor$-connected.
\end{theorem}
\begin{proof}
See Thm.~10 of~\cite{VZ}.
\end{proof}
This remarkable fact yields the following useful corollaries:
\begin{cor}\label{cor.finitely-generated}
The spectral sequences of Thm.~\ref{thm.E1_NS} and Cor.~\ref{cor.E1_Sym}
converge strongly to $\widetilde{H}S_*(A)$.
\end{cor}
\begin{proof}
This relies on the fact that the connectivity of the complexes $Sym_*^{(p)}$
is a non-decreasing function of $p$. Fix $n \geq 0$, and consider
$\bigoplus_{u}H_n\left(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p)}\right)$, the
component of $E^1$ residing at position $p, q$ for $p + q = n$. A
priori, the induced differentials whose sources are $E^1_{p,q}, E^2_{p,q},
E^3_{p,q}, \ldots$ will have as targets certain subquotients of $\bigoplus_{u}
H_{n-1}\left(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+1)}\right)$, $\bigoplus_{u}
H_{n-1}\left(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+2)}\right)$, $\bigoplus_{u}
H_{n-1}\left(E_*G_u \textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+3)}\right)$, etc. Now, if
$n-1 < \lfloor (2/3)
(p+k-1)\rfloor$ for some $k \geq 0$, then for $K > k$, we have $H_{n-1}\left(
Sym_*^{(p + K)}\right) = 0$, hence also, $H_{n-1}\left(E_*G_{u}
\textrm{\textcircled{$\ltimes$}}_{G_u} Sym_*^{(p+K)}\right) = 0$, using the fibration mentioned in
the proof of Thm.~\ref{thm.E1_NS} and the Hurewicz Theorem. Thus, the induced
differential $d^K$ is zero for all $K > k$.
On the other hand, the induced differentials whose targets are $E^1_{p,q},
E^2_{p,q}, E^3_{p,q}, \ldots$ must be zero after stage $p$, since there are
no non-zero components with $p < 0$.
\end{proof}
\begin{cor}\label{cor.trunc-isomorphism}
For each $i \geq 0$, there is a positive integer $N_i$ so that if
$p \geq N_i$, there is an isomorphism $H_i\left(\mathscr{G}_p
\widetilde{\mathscr{Y}}_*\right) \cong \widetilde{H}S_i(A)$.
\end{cor}
\begin{cor}\label{cor.fin-gen}
If $A$ is finitely-generated over a Noetherian ground ring $k$, then
$HS_*(A)$ is finitely-generated over $k$ in each degree.
\end{cor}
\begin{proof}
Examination of the $E^1$ term shows that the $n^{th}$ reduced symmetric
homology group of $A$ is a subquotient of $\bigoplus_{p \geq 0}
\bigoplus_{u\in X^{p+1}/\Sigma_{p+1}} H_n\left(E_*G_{u}\textrm{\textcircled{$\ltimes$}}_{G_u}
Sym_*^{(p)} ; k\right)$. Each $H_n\left(E_*G_{u}\textrm{\textcircled{$\ltimes$}}_{G_u}
Sym_*^{(p)} ; k\right)$ is a finitely-generated $k$-module. The inner sum is
finite as long as $X$ is finite. Thm.~\ref{thm.connectivity} shows the outer
sum is finite as well.
\end{proof}
The bounds on connectivity are conjectured to be tight. This is certainly true
for $p \equiv 1$ (mod $3$), based on Thm.~16 of~\cite{VZ}. Corollary~12 of the
same paper establishes that either $H_{2k}\left(
Sym_*^{(3k-1)}\right) \neq 0$ or $H_{2k}\left(Sym_*^{(3k)}\right) \neq 0$. For
$k \leq 2$, both statements are true. When the latter condition is true, this
gives a tight bound on connectivity for $p \equiv 0$ (mod $3$). When the former
is true, there is not enough information for a tight bound: what we would need
is that $H_{2k-1}\left(Sym_*^{(3k-1)}\right)$ is non-zero. This holds for
$k = 1, 2$, as we have computed the integral homology: $H_1\left(
Sym_*^{(2)}\right) = \mathbb{Z}$ and $H_3\left(Sym_*^{(5)}\right) = \mathbb{Z}$.
\subsection{Filtering $Sym_*^{(p)}$ by partition types}
In~\ref{sec.rep_sym}, we saw that $Sym_*^{(n)}$ decomposes over $k\Sigma_{n+1}$
as a direct sum of the submodules $Sym_\lambda$ for partitions $\lambda \vdash
(n+1)$. We may filter $Sym_*^{(n)}$ by the size of the largest component of the
partition,
\[
\mathscr{F}_pSym_q^{(n)} := \bigoplus_{\lambda \vdash (n+1),
|\lambda| = n+1-(p+q), \lambda_0 \leq p+1} Sym_{\lambda},
\]
where $\lambda = [\lambda_0, \lambda_1, \ldots, \lambda_{n-q}]$ is written in
non-increasing order. The differential of $Sym_*^{(n)}$ respects this
filtering, since it can only reduce the size of partition components. With
respect to this filtering, we have an $E^0$ term for a spectral sequence:
\[
E_{p,q}^0 \cong \bigoplus_{\lambda \vdash (n+1), |\lambda| = n+1-(p+q),
\lambda_0 = p+1} Sym_{\lambda}.
\]
The vertical differential $d^0$ is induced from $d$ by keeping only those
terms of $d(W)$ that share largest component size with $W$.
\section{A Partial Resolution}\label{sec.partres}
As before, $k$ is a commutative ground ring. In this section, we find an
explicit partial resolution of the trivial $\Delta S^{\mathrm{op}}$-module
$\underline{k}$ by projective modules, allowing the computation of $HS_0(A)$ and
$HS_1(A)$ for a unital associative $k$-algebra $A$. The resolution will be
constructed through a number of technical lemmas. Recall
from~\ref{sub.stand-res}, the modules $k\left[ \mathrm{Mor}_{\Delta S}\left( -,
[q] \right)\right]$ are projective as $\Delta S^{\mathrm{op}}$-modules. In
proving exactness, it suffices to examine the individual sub-$k$-modules,
$k\left[ \mathrm{Mor}_{\Delta S}\left( [n], [q] \right)\right]$.
\begin{lemma}\label{lem.0-stage}
For each $n \geq 0$, the sequence, $0 \gets k \stackrel{\epsilon}{\gets}
k\left[\mathrm{Mor}_{\Delta S}\left([n], [0]\right)\right]\stackrel{\rho}
{\gets} k\left[\mathrm{Mor}_{\Delta S}\left([n], [2]\right)\right]$
is exact, where $\epsilon$ is defined by $\epsilon(\phi) = 1$ for any morphism
$\phi : [n] \to [0]$, and $\rho$ is defined by $\rho(\psi) = (x_0x_1x_2)\circ
\psi - (x_2x_1x_0)\circ\psi$ for any morphism $\psi : [n] \to [2]$. Note,
$x_0x_1x_2$ and $x_2x_1x_0$ are $\Delta S$ morphisms $[2] \to [0]$ written in
tensor notation.
\end{lemma}
\begin{proof}
Clearly, $\epsilon$ is surjective. Now, $\epsilon\rho = 0$, since
$\rho(\psi)$ consists of two morphisms with opposite signs. Let $\phi_0 = x_0
x_1 \ldots x_n : [n] \to [0]$. The kernel of $\epsilon$ is spanned by
elements $\phi - \phi_0$ for $\phi \in \mathrm{Mor}_{\Delta S}([n],[0])$. So,
it suffices to show that the submodule of $k\left[\mathrm{Mor}_{\Delta S}
\left([n], [0]\right)\right]$ generated by $(x_0x_1x_2)\psi - (x_2x_1x_0)\psi$
for $\psi : [n] \to [2]$ contains all of the elements $\phi - \phi_0$. In
other words, it suffices to find a sequence $\phi =: \phi_k, \phi_{k-1},
\ldots, \phi_2, \phi_1, \phi_0$ so that each $\phi_i$ is obtained from
$\phi_{i+1}$ by reversing the order of 3 (possibly empty) blocks, $XYZ \to
ZYX$. Let $\phi = x_{i_0}x_{i_1}\ldots x_{i_n}$. If $\phi = \phi_0$, we may
stop here. Otherwise, we may produce a sequence ending in $\phi_0$ by way of
a certain family of rearrangements:
\textit{$k$-rearrangement}: $x_{i_0}x_{i_1} \ldots x_{i_{k-1}}x_{i_k}x_{k+1}
\ldots x_n \leadsto x_{k+1}\ldots x_n x_{i_k}x_{i_0}x_{i_1} \ldots
x_{i_{k-1}}$, where $i_k \neq k$. That is, a $k$-rearrangement only applies
to those monomials that agree with $\phi_0$ in the final $n-k$ indeterminates,
but not in the final $n-k+1$ indeterminates. If $k = n$, then this
rearrangement reduces to the cyclic rearrangement, $x_{i_0}x_{i_1} \ldots
x_{i_n} \leadsto x_{i_n} x_{i_0}x_{i_1} \ldots x_{i_{n-1}}$.
Beginning with $\phi$, perform $n$-rearrangements until the final
indeterminate is $x_n$. For convenience of notation, let this new monomial be
$x_{j_0}x_{j_1} \ldots x_{j_n}$. (Of course, $j_n = n$.) If $j_k = k$ for
all $k = 0, 1, \ldots, n$, then we are done. Otherwise, there will be a
number $k$ such that $j_k \neq k$ but $j_{k+1} = k + 1, \ldots, j_n = n$.
Perform a $k$-rearrangement followed by enough $n$-rearrangements so that
the final indeterminate is again $x_n$. The net result of these rearrangements
is that the ending block $x_{k+1}x_{k+2}\ldots x_n$ remains fixed while the
beginning block $x_{j_0}x_{j_1}\ldots x_{j_{k}}$ becomes cyclically permuted
to $x_{j_k}x_{j_0} \ldots x_{j_{k-1}}$. It is clear that applying this
combination of rearrangements repeatedly will finally obtain a monomial
$x_{\ell_0}x_{\ell_1} \ldots x_{\ell_{k-1}} x_k x_{k+1} \ldots x_n$. Now
repeat the process, until after a finite number of steps, we finally obtain
$\phi_0$.
\end{proof}
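The rearrangement procedure in the proof above is effectively an algorithm on words. As an illustration only (the function name and the encoding of monomials as tuples of indices are ours), the following Python sketch greedily applies rearrangements, always choosing the largest $k$ at which the word disagrees with $\phi_0$; for $k = n$ the move is the cyclic rearrangement, and the argument in the proof guarantees termination.

```python
def reduce_to_phi0(w):
    """Greedy version of the proof's procedure: repeatedly find the
    largest k such that the word agrees with phi_0 = (0,1,...,n) on
    positions k+1..n but has w[k] != k, then apply the k-rearrangement
    (i_0..i_k, k+1..n)  ->  (k+1..n, i_k, i_0..i_{k-1}).
    Returns the number of rearrangements used."""
    n = len(w) - 1
    w = tuple(w)
    identity = tuple(range(n + 1))
    steps = 0
    while w != identity:
        k = n
        while w[k] == k:
            k -= 1
        # valid k-rearrangement: the tail k+1..n agrees with phi_0
        # and w[k] != k by choice of k
        w = w[k + 1:] + (w[k],) + w[:k]
        steps += 1
    return steps
```

Each pass through the loop performs exactly one of the moves used in the proof, so the sequence of intermediate words realizes the chain $\phi = \phi_k, \ldots, \phi_1, \phi_0$.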
Let $\mathscr{B}_n := \{ x_{i_0}x_{i_1}\ldots x_{i_{k-1}} \otimes x_{i_k}
\otimes x_{k+1}x_{k+2} \ldots x_{n} \;:\; 1\leq k \leq n, i_k \neq k \}$.
$k[\mathscr{B}_n]$ is a free submodule of $k\left[\mathrm{Mor}_{\Delta S}\left(
[n], [2]\right)\right]$ of rank $(n+1)! - 1$.
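The count of $\mathscr{B}_n$ telescopes: $\sum_{k=1}^{n}\bigl((k+1)! - k!\bigr) = (n+1)! - 1$. A brute-force check (ours, illustrative only) confirms this for small $n$:

```python
from itertools import permutations
from math import factorial

def size_B(n):
    """Count elements of B_n directly: for each k = 1..n the indices
    i_0,...,i_k form a permutation of {0,...,k} with i_k != k, while
    the final block x_{k+1}...x_n is determined."""
    return sum(
        sum(1 for p in permutations(range(k + 1)) if p[k] != k)
        for k in range(1, n + 1)
    )

for n in range(1, 6):
    assert size_B(n) == factorial(n + 1) - 1
```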
\begin{cor}\label{cor.B_n}
When restricted to $k[\mathscr{B}_n]$, the map $\rho$ of
Lemma~\ref{lem.0-stage} is surjective onto the kernel of $\epsilon$.
\end{cor}
\begin{proof}
In the proof of Lemma~\ref{lem.0-stage}, the $n$-rearrangements correspond to
the image of elements $x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1$,
with $i_n \neq n$. For $k < n$, $k$-rearrangements correspond to the image of
elements $x_{i_0} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes x_{k+1} \ldots
x_{n}$, with $i_k \neq k$.
\end{proof}
\begin{lemma}\label{lem.rank}
$k\left[\mathrm{Mor}_{\Delta S}\left([n], [m]\right)\right]$ is a free
$k$-module of rank $(m+n+1)!/m!$.
\end{lemma}
\begin{proof}
A morphism $\phi : [n] \to [m]$ of $\Delta S$ is nothing more than an
assignment of $n+1$ objects into $m+1$ compartments, along with a total
ordering of the original $n+1$ objects, hence $\#\mathrm{Mor}_{\Delta S}
\left([n], [m]\right) = \binom{m+n+1}{m}(n+1)! = \frac{(m+n+1)!}{m!}$.
\end{proof}
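The count in Lemma~\ref{lem.rank} can be verified by brute force for small $n$ and $m$. The sketch below (ours, not part of the text) enumerates a total ordering of the $n+1$ objects together with $m$ divider positions chosen with repetition among the $n+2$ gaps, which splits the ordered sequence into $m+1$ possibly empty blocks:

```python
from itertools import combinations_with_replacement, permutations
from math import factorial

def count_morphisms(n, m):
    """#Mor_{Delta S}([n], [m]): a total ordering of the n+1 objects
    plus a split of the ordered sequence into m+1 (possibly empty)
    compartments, encoded by m dividers among the n+2 gaps."""
    total = 0
    for _ in permutations(range(n + 1)):
        total += sum(1 for _ in combinations_with_replacement(range(n + 2), m))
    return total

for n in range(3):
    for m in range(3):
        assert count_morphisms(n, m) == factorial(m + n + 1) // factorial(m)
```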
\begin{lemma}\label{lem.rho-iso}
$\rho|_{k[\mathscr{B}_n]}$ is an isomorphism $k[\mathscr{B}_n] \cong
\mathrm{ker}\,\epsilon$.
\end{lemma}
\begin{proof}
Since the rank of $k\left[\mathrm{Mor}_{\Delta S}\left([n], [0]\right)\right]$
is $(n+1)!$, the rank of $\mathrm{ker}\,\epsilon$ is $(n+1)! - 1$. The
isomorphism then follows from Corollary~\ref{cor.B_n}.
\end{proof}
\begin{lemma}\label{lem.4-term-relation}
The relations of the form:
\begin{equation}\label{eq.4-term}
XY \otimes Z \otimes W + W \otimes ZX \otimes Y + YZX \otimes 1 \otimes W
+ W \otimes YZ \otimes X \approx 0
\end{equation}
\begin{equation}\label{eq.1-term}
\qquad \mathrm{and} \qquad
1 \otimes X \otimes 1 \approx 0
\end{equation}
collapse $k\big[\mathrm{Mor}_{\Delta S}([n], [2])\big]$ onto
$k[\mathscr{B}_n]$.
\end{lemma}
\begin{proof}
This proof proceeds in multiple steps.
{\bf Step 1. [Degeneracy Relations]} $X \otimes Y \otimes 1 \approx X \otimes
1 \otimes Y \approx 1 \otimes X \otimes Y$.
First, observe that letting $X = Y = W = 1$ in Eq.~(\ref{eq.4-term}) yields
$Z \otimes 1 \otimes 1 \approx 0$, since $1 \otimes Z \otimes 1 \approx 0$.
Then, letting $X = Z = W = 1$ in Eq.~(\ref{eq.4-term}) produces $1 \otimes 1
\otimes Y \approx 0$. Thus, any formal tensor with two trivial factors
is equivalent to $0$.
Next, let $Z = W = 1$ in Eq.~(\ref{eq.4-term}). Then using the above
observation, we obtain $1 \otimes X \otimes Y + 1 \otimes Y \otimes X \approx
0$, that is, $1 \otimes X \otimes Y \approx -(1 \otimes Y \otimes X)$. Then,
if we let $X = W = 1$, we obtain $Y \otimes Z \otimes 1 + 1 \otimes Z
\otimes Y \approx 0$, which is equivalent to $Y \otimes Z \otimes 1 - 1
\otimes Y \otimes Z \approx 0$.
Finally, let $X = Y = 1$ in Eq.~(\ref{eq.4-term}). The expression reduces
to $Z \otimes 1 \otimes W - 1 \otimes Z \otimes W \approx 0$.
{\bf Step 2. [Sign Relation]} $X \otimes Y \otimes Z \approx -(Z \otimes Y
\otimes X)$.
Let $Y = 1$ in Eq.~(\ref{eq.4-term}), and use the degeneracy relations
to rewrite the result as $X \otimes Z \otimes W + 1 \otimes W \otimes ZX +
1 \otimes ZX \otimes W + W \otimes Z \otimes X \approx 0$. Since
$1 \otimes ZX \otimes W \approx -(1 \otimes W \otimes ZX)$, the desired
result follows: $X \otimes Z \otimes W + W \otimes Z \otimes X \approx 0$.
{\bf Step 3. [Hochschild Relation]} $XY \otimes Z \otimes 1 - X \otimes YZ
\otimes 1 + ZX \otimes Y \otimes 1 \approx 0$. This relation is named
after the similar relation, $XY \otimes Z - X \otimes YZ + ZX \otimes Y$,
that arises in the Hochschild complex.
Let $W = 1$ in Eq.~(\ref{eq.4-term}), and use the degeneracy and sign
relations to obtain the desired result.
{\bf Step 4. [Cyclic Relation]} $\displaystyle{\sum_{j = 0}^n \tau_n^j\left(x_{i_0}
x_{i_1} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1\right) \approx 0}$,
where $\tau_n \in \Sigma_{n+1}$ is the $(n+1)$-cycle $(0,n,n-1,\ldots,2,1)$,
which acts by permuting the indices.
For $n = 0$, there are no such relations
(indeed, no relations at all). For $n=1$, the cyclic relation takes the form
$x_0 \otimes x_1 \otimes 1 + x_1 \otimes x_0 \otimes 1 \approx 0$, which
follows from degeneracy and sign relations.
Assume now that $n \geq 2$. For each $k = 1, 2, \ldots, n-1$, define:
\[
\left\{\begin{array}{ll}
A_k := & x_{i_0}x_{i_1}\ldots x_{i_{k-1}},\\
B_k := & x_{i_k},\\
C_k := & x_{i_{k+1}} \ldots x_{i_n}.
\end{array}\right.
\]
By the Hochschild relation, $0 \approx \sum_{k=1}^{n-1}
(A_kB_k \otimes C_k \otimes 1 - A_k \otimes B_kC_k \otimes 1 + C_kA_k \otimes
B_k \otimes 1)$. But for $k \leq n-2$, $A_kB_k \otimes C_k \otimes 1 =
A_{k+1} \otimes B_{k+1}C_{k+1} \otimes 1$.
Thus, after some cancellation,
\begin{equation}\label{eq.cyclic-relation}
0 \approx - A_1 \otimes B_1 C_1 \otimes 1 + A_{n-1}B_{n-1} \otimes C_{n-1}
\otimes 1 + \sum_{k=1}^{n-1} C_kA_k \otimes B_k \otimes 1.
\end{equation}
Now observe that sign and degeneracy relations imply
that $- A_1 \otimes B_1 C_1 \otimes 1 \approx x_{i_1} \ldots x_{i_n} \otimes
x_{i_0} \otimes 1$. The term $A_{n-1}B_{n-1} \otimes C_{n-1} \otimes 1$ is
equal to $x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1$, and for
$1 \leq k \leq n-1$, $C_kA_k \otimes B_k \otimes 1 = x_{i_{k+1}} \ldots
x_{i_n}x_{i_0} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes 1$. Thus,
Eq.~(\ref{eq.cyclic-relation}) can be rewritten as the cyclic relation,
\[
0 \approx (x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1)
+ \sum_{k=0}^{n-1} x_{i_{k+1}} \ldots x_{i_n}x_{i_0} \ldots x_{i_{k-1}}
\otimes x_{i_k} \otimes 1.
\]
{\bf Step 5.} Every element of the form $X \otimes Y \otimes 1$ is equivalent
to a linear combination of elements of $\mathscr{B}_n$.
To prove this, we shall induct on the size of $Y$. Suppose $Y$ consists of
a single indeterminate. That is, $X \otimes Y \otimes 1 =
x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1$. Now, if $i_n \neq n$,
we are done. Otherwise, we use the cyclic relation to write
$x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1 \approx -\sum_{j = 1}^n
\tau_n^j\left(x_{i_0} \ldots x_{i_{n-1}} \otimes x_{i_n} \otimes 1\right)$.
Now suppose $k \geq 1$ and any element $Z \otimes W \otimes 1$ with $|W| = k$
is equivalent to an element of $k[\mathscr{B}_n]$. Consider $X \otimes Y
\otimes 1 = x_{i_0} \ldots x_{i_{n-k-1}} \otimes x_{i_{n-k}} \ldots x_{i_n}
\otimes 1$.
Let
\[
\left\{\begin{array}{ll}
A := & x_{i_0}x_{i_1}\ldots x_{i_{n-k-1}},\\
B := & x_{i_{n-k}} \ldots x_{i_{n-1}},\\
C := & x_{i_n}.
\end{array}\right.
\]
Then, by the Hochschild relation, $X \otimes Y \otimes 1 = A \otimes BC
\otimes 1 \approx AB \otimes C \otimes 1 + CA \otimes B \otimes 1$.
But since $C$ has one indeterminate and $B$ has $k$ indeterminates, this last
expression is equivalent to an element of $k[\mathscr{B}_n]$ by inductive
hypothesis.
{\bf Step 6. [Modified Hochschild Relation]} $XY \otimes Z \otimes W - X
\otimes YZ \otimes W + ZX \otimes Y \otimes W \approx 0$, modulo $k\left[
\mathscr{B}_n\right]$.
First, we show that $X \otimes Y \otimes W + Y \otimes X \otimes W \approx 0
\pmod{k\left[\mathscr{B}_n\right]}$. Indeed, if we let $Z = 1$ in
Eq.~(\ref{eq.4-term}), then sign and degeneracy relations yield:
$ X \otimes Y \otimes W + Y \otimes X \otimes W \approx XY \otimes W \otimes
1 + YX \otimes W \otimes 1$, {\it i.e.}, by step 5,
\begin{equation}\label{eq.step8a}
X \otimes Y \otimes W \approx -(Y \otimes X \otimes W)
\pmod{k\left[\mathscr{B}_n\right]}.
\end{equation}
Now, using sign and degeneracy relations, Eq.~(\ref{eq.4-term}) can be
re-expressed:
\[
XY \otimes Z \otimes W - Y \otimes ZX \otimes W
+ YZX \otimes W \otimes 1 - X \otimes YZ \otimes W \approx 0.
\]
Using Eq.~(\ref{eq.step8a}), we then arrive at the modified Hochschild
relation,
\begin{equation}\label{eq.step8b}
XY \otimes Z \otimes W - X \otimes YZ \otimes W + ZX \otimes Y \otimes W
\approx 0 \pmod{k\left[\mathscr{B}_n\right]}.
\end{equation}
{\bf Step 7. [Modified Cyclic Relation]} $\displaystyle{\sum_{j = 0}^k \tau_k^j\left(
x_{i_0}x_{i_1} \ldots x_{i_{k-1}} \otimes x_{i_k} \otimes x_{i_{k+1}}\ldots
x_{i_n}\right)} \approx 0$, modulo $k\left[\mathscr{B}_n\right]$. Note, the
$(k+1)$-cycle $\tau_k$ permutes the indices $i_0, i_1, \ldots, i_k$, and fixes
the rest.
Eq.~(\ref{eq.step8a}) proves the modified cyclic relations for $k=1$.
The modified cyclic relation for $k \geq 2$ follows from the modified
Hochschild relation in the same manner as in step 4. Of course, this time
all equivalences are taken modulo $k\left[\mathscr{B}_n\right]$.
{\bf Step 8.} Every element of the form $X \otimes Y \otimes x_n$ is
equivalent to an element of $k[\mathscr{B}_n]$.
We shall use the modified cyclic and modified Hochschild relations in a
similar way as cyclic and Hochschild relations were used in step 5. Again we
induct on the size of $Y$. If $|Y| = 1$, then $X \otimes Y \otimes x_n =
x_{i_0} \ldots x_{i_{n-2}} \otimes x_{i_{n-1}} \otimes x_{n}$. If $i_{n-1}
\neq n-1$, then we are done. Otherwise, use the modified cyclic relation to
re-express $X \otimes Y \otimes x_n$ as a sum of elements of $k\left[
\mathscr{B}_n\right]$.
Next, suppose $k \geq 1$ and any element $Z \otimes W \otimes x_n$ with $|W| =
k$ is equivalent to an element of $k[\mathscr{B}_n]$. Consider an element
$X \otimes Y \otimes x_n$ with $|Y| = k + 1$. Write $Y = BC$ with
$|B| = k$ and $|C| = 1$ and use the modified Hochschild relation to rewrite
$X \otimes BC \otimes x_n$ in terms of two elements whose middle tensor
factors are either $B$ or $C$ (modulo $k\left[\mathscr{B}_n\right]$). By
inductive hypothesis, the rewritten expression must lie in $k[\mathscr{B}_n]$.
{\bf Step 9.} Every element of $k\left[\mathrm{Mor}_{\Delta S}\left([n], [2]
\right)\right]$ is equivalent to a linear combination of elements from the
following set:
\begin{equation}\label{eq.step10}
\mathscr{C}_n := \{X \otimes x_{i_n} \otimes 1 \;|\; i_n \neq n\} \cup
\{X \otimes x_{i_{n-1}} \otimes x_n \;|\; i_{n-1} \neq n-1\} \cup
\{X \otimes Y \otimes Zx_n \;|\; |Z| \geq 1\}
\end{equation}
Note, the $k$-module generated by $\mathscr{C}_n$ contains $k\left[
\mathscr{B}_n\right]$.
Let $X \otimes Y \otimes Z$ be an arbitrary element of $k\left[
\mathrm{Mor}_{\Delta S}\left([n], [2]\right)\right]$. If $|X| = 0$, $|Y|=0$,
or $|Z| = 0$, then the degeneracy relations and step 5 imply that $X \otimes Y
\otimes Z$ is equivalent to an element of $k\left[\mathscr{B}_n\right]$.
Suppose now that $|X|, |Y|, |Z| \geq 1$. If $x_n$ occurs in $X$, use the
relation $X \otimes Y \otimes W \approx -(Y \otimes X \otimes W) \pmod{k\left[
\mathscr{B}_n\right]}$ to ensure that $x_n$ occurs in the middle factor. If
$x_n$ occurs in $Z$, use the sign relation and the above relation to put
$x_n$ into the middle factor. In any case, it suffices to assume our element
has the form: $X \otimes Ux_nV \otimes Z$. Using the modified Hochschild
relation, $X \otimes Ux_nV \otimes Z$ $\approx -(Z \otimes V \otimes XUx_n) +
Z \otimes VX \otimes Ux_n,$ $\pmod{k\left[\mathscr{B}_n\right]}$. The first
term is certainly in $k[\mathscr{C}_n]$, since $|X| \geq 1$. If $|U| > 0$,
the second term also lies in $k[\mathscr{C}_n]$. If, on the other hand, $|U|
= 0$, then step 8 implies that $Z \otimes VX \otimes x_n$ is an element
of $k[\mathscr{B}_n]$.
Observe that Step 9 proves Lemma~\ref{lem.4-term-relation} for $n = 0, 1, 2$,
since in these cases, any elements that fall within the set
$\{X \otimes Y \otimes Zx_n \;|\; |Z| \geq 1\}$ must have either $|X| = 0$
or $|Y| = 0$, hence are equivalent via the degeneracy relation to elements of
$k\left[\{X \otimes x_{i_n} \otimes 1 \;|\; i_n \neq n\}\right]$.
In what follows, assume $n \geq 3$.
{\bf Step 10.} Every element of $k\left[\mathrm{Mor}_{\Delta S}\left([n], [2]
\right)\right]$ is equivalent, modulo $k\left[\mathscr{B}_n\right]$, to a
linear combination of elements from the following set:
\begin{equation}\label{eq.step10a}
\mathscr{D}_n := \{X \otimes x_{i_{n-2}} \otimes x_{n-1}x_{n} \;|\; i_{n-2}
\neq n-2\} \cup \{X \otimes Y \otimes Zx_{n-1}x_n \;|\; |Z| \geq 1\}.
\end{equation}
First, we require a relation that transports $x_n$ from the end of a tensor:
\begin{equation}\label{eq.step10b}
W \otimes Z \otimes Xx_n \approx W \otimes x_nZ \otimes X
\pmod{k\left[\mathscr{B}_n\right]}.
\end{equation}
Letting $Y = x_n$ in Eq.~(\ref{eq.4-term}), and making use of the sign
relation, we have: $W \otimes Z \otimes Xx_n \approx W \otimes ZX \otimes x_n
+ x_nZX \otimes W \otimes 1 + W \otimes x_nZ \otimes X$. By steps 5 and 8,
$W \otimes Z \otimes Xx_n \approx W \otimes x_nZ \otimes X$, modulo elements
of $k\left[\mathscr{B}_n\right]$.
Now, let $X \otimes Y \otimes Z$ be an arbitrary element of $k\left[
\mathrm{Mor}_{\Delta S}\left([n], [2]\right)\right]$. Locate $x_{n-1}$ and
use the techniques of Step 9 to re-express $X \otimes Y \otimes Z$ as a linear
combination of terms of the form: $X_j \otimes Y_j \otimes Z_jx_{n-1}$,
modulo $k\left[\mathscr{B}_n\right]$. Our goal is to re-express each term
as a linear combination of vectors in which $x_n$ occurs only in the second
tensor factor.
If $x_n$ occurs in $X_j$, then observe $X_j \otimes Y_j \otimes Z_jx_{n-1}
\approx -(Y_j \otimes X_j \otimes Z_jx_{n-1})$, $\pmod{k\left[\mathscr{B}_n
\right]}$.
If $x_n$ occurs in $Z_j$, then first substitute $Y = x_{n-1}$ into
Eq.~(\ref{eq.4-term}), obtaining the relation:
\[
Xx_{n-1} \otimes Z \otimes W + W \otimes ZX \otimes x_{n-1}
+ x_{n-1}ZX \otimes 1 \otimes W + W \otimes x_{n-1}Z \otimes X \approx 0
\]
\[
\Rightarrow \; W \otimes Z \otimes Xx_{n-1} \approx W \otimes ZX \otimes
x_{n-1} + W \otimes x_{n-1}Z \otimes X \pmod{k\left[\mathscr{B}_n\right]}
\]
By the modified Hochschild relation, $W \otimes x_{n-1}Z \otimes X \approx
Wx_{n-1} \otimes Z \otimes X + ZW \otimes x_{n-1} \otimes X$, $\pmod{k
\left[\mathscr{B}_n\right]}$, then using sign relations etc., we obtain:
\[
W \otimes Z \otimes Xx_{n-1} \approx W \otimes ZX \otimes x_{n-1}
+ Z \otimes X \otimes Wx_{n-1} - ZW \otimes X \otimes x_{n-1},
\pmod{k\left[\mathscr{B}_n\right]}.
\]
Thus, we can express our original element $X \otimes Y \otimes Z$ as a linear
combination of elements of the form $X' \otimes U'x_nV' \otimes Z'x_{n-1}$,
$\pmod{k\left[\mathscr{B}_n\right]}$. Then using modified Hochschild etc.,
rewrite each such term as follows:
\[
X' \otimes U'x_nV' \otimes Z'x_{n-1} \approx X'U' \otimes x_nV' \otimes
Z'x_{n-1} - U' \otimes x_nV'X' \otimes Z'x_{n-1}, \pmod{k\left[
\mathscr{B}_n\right]}.
\]
By Eq.~(\ref{eq.step10b}), we transport $x_n$ to the end of each term, so
$X' \otimes U'x_nV' \otimes Z'x_{n-1} \approx X'U' \otimes V' \otimes Z'
x_{n-1}x_n - U' \otimes V'X' \otimes Z'x_{n-1}x_n$, modulo elements of
$k\left[\mathscr{B}_n\right]$. If $|Z'| \geq 1$, then we are done. Otherwise,
we have some elements of the form $X'' \otimes Y'' \otimes x_{n-1}x_n$. Use
an induction argument analogous to that in step 8 to re-express this type of
element as a linear combination of elements of the form $U \otimes x_{i_{n-2}}
\otimes x_{n-1}x_n$ such that $i_{n-2} \neq n-2$, modulo elements of $k\left[
\mathscr{B}_n\right]$.
{\bf Step 11.} Every element of $k\left[\mathrm{Mor}_{\Delta S}\left([n],
[2]\right)\right]$ is equivalent to an element of $k\left[\mathscr{B}_n
\right]$.
We shall use an iterative re-writing procedure. First of all, define sets:
\[
\mathscr{B}_n^{j} := \{ A \otimes x_{i_{n-j}} \otimes x_{n-j+1} \ldots
x_n \;|\; i_{n-j} \neq n-j\},
\]
\[
\mathscr{C}_n^{j} := \{ A \otimes B \otimes Cx_{n-j+1} \ldots
x_n \;|\; |C| \geq 1\}.
\]
Now clearly, $\mathscr{B}_n = \bigcup_{j=0}^{n-1} \mathscr{B}_n^{j}$.
In what follows, `reduced' will always mean reduced modulo elements of
$k\left[\mathscr{B}_n\right]$. By steps 9 and 10, we can reduce an arbitrary
element $X \otimes Y \otimes Z$ to linear combinations of elements in
$\mathscr{B}_n^0 \cup \mathscr{B}_n^1 \cup \mathscr{B}_n^2 \cup
\mathscr{C}_n^2$. Suppose now that we have reduced elements to linear
combinations of elements from the set $\mathscr{B}_n^0 \cup \mathscr{B}_n^1
\cup \ldots \cup \mathscr{B}_n^j \cup \mathscr{C}_n^j$, for some $j \geq 2$.
I claim any element of $\mathscr{C}_n^j$ can be re-expressed as a linear
combination of elements from the set $\mathscr{B}_n^0 \cup \mathscr{B}_n^1
\cup \ldots \cup \mathscr{B}_n^{j+1} \cup \mathscr{C}_n^{j+1}$. Indeed,
let $X \otimes Y \otimes Zx_{n-j+1} \ldots x_n$, with $|Z| \geq 1$. Let
$w := x_{n-j+1} \ldots x_n$. We may now think of $X \otimes Y \otimes Zw$
as consisting of the `indeterminates' $x_0, x_1, \ldots, x_{n-j}, w$, hence,
by step 10, we may reduce this element to a linear combination of elements
from the set $\{ X \otimes x_{i_{n-j-1}} \otimes x_{n-j}w \;|\; i_{n-j-1}
\neq n-j-1\} \cup \{ X \otimes Y \otimes Zx_{n-j}w \;|\; |Z| \geq 1 \}$.
This implies the element may be written as a linear combination
of elements from the set $\mathscr{B}_n^{j+1} \cup \mathscr{C}_n^{j+1}$,
modulo elements of the form $A \otimes B \otimes 1$ and $A \otimes B \otimes
x_{n-j+1}\ldots x_n$. Since $\{A \otimes B \otimes x_{n-j+1}x_{n-j+2}\ldots
x_n\} \subseteq \mathscr{C}_n^{j-1}$, the inductive hypothesis ensures that
there is a set containment $\{A \otimes B \otimes x_{n-j+1}\ldots
x_n\} \subseteq \mathscr{B}_n^0 \cup \ldots \cup \mathscr{B}_n^{j}$. This
completes the inductive step.
After a finite number of iterations, then, we can re-express any element
$X \otimes Y \otimes Z$ as a linear combination from the set
$\mathscr{B}_n^0 \cup \ldots \cup \mathscr{B}_n^{n-1} \cup \mathscr{C}_n^{n-1}
= \mathscr{B}_n \cup \mathscr{C}_n^{n-1}$. But $\mathscr{C}_n^{n-1} =
\{ A \otimes B \otimes Cx_{2} \ldots x_n \;|\; |C| \geq 1\}$. Any element
from this set has either $|A| = 0$ or $|B| = 0$, therefore is equivalent to an
element of $k[\mathscr{B}_n]$ already.
\end{proof}
\begin{cor}\label{cor.k-contains-one-half}
If $\frac{1}{2} \in k$, then the four-term relation $XY \otimes Z \otimes W +
W \otimes ZX \otimes Y + YZX \otimes 1 \otimes W + W \otimes YZ \otimes X
\approx 0$ is sufficient to collapse $k\left[\mathrm{Mor}_{\Delta S}\left([n],
[2]\right)\right]$ onto $k\left[\mathscr{B}_n\right]$.
\end{cor}
\begin{proof}
We only need to modify step 1 of the previous proof. We will establish
that $X \otimes 1 \otimes 1 \approx 1 \otimes X \otimes 1 \approx 1 \otimes 1
\otimes X \approx 0$.
Setting three variables at a time equal to $1$ in Eq.~(\ref{eq.4-term})
we obtain,
\begin{equation}\label{eq.step1_1}
2(W \otimes 1 \otimes 1) + 2(1 \otimes 1 \otimes W) \approx 0,
\quad \textrm{when $X = Y = Z = 1$.}
\end{equation}
\begin{equation}\label{eq.step1_2}
Z \otimes 1 \otimes 1 + 3(1 \otimes Z \otimes 1) \approx 0,
\quad \textrm{when $X = Y = W = 1$.}
\end{equation}
\begin{equation}\label{eq.step1_3}
2(Y \otimes 1 \otimes 1) + 1 \otimes Y \otimes 1 + 1 \otimes 1 \otimes Y
\approx 0, \quad \textrm{when $X = Z = W = 1$.}
\end{equation}
Equivalently, we have a system of linear equations,
\[
\left[\begin{array}{ccc}
2 & 0 & 2 \\
1 & 3 & 0 \\
2 & 1 & 1
\end{array}\right]
\left[ \begin{array}{c}
z_1 \\
z_2 \\
z_3
\end{array}\right]
= 0,
\]
where $z_1 := X \otimes 1 \otimes 1$, $z_2 := 1 \otimes X \otimes 1$, and
$z_3 := 1 \otimes 1 \otimes X$. Since the determinant of the coefficient
matrix is $-4$, the matrix is invertible in the ring $k$ as long as $1/2 \in
k$.
\end{proof}
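The determinant claim is a one-line check; the following Python snippet (illustrative only) expands along the first row:

```python
def det3(A):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

# coefficient matrix of the system in z_1, z_2, z_3
assert det3([[2, 0, 2], [1, 3, 0], [2, 1, 1]]) == -4
```

Since $-4 = -2^2$, the matrix is invertible over $k$ exactly when $2$ is invertible, as used in the proof.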
Lemma~\ref{lem.4-term-relation} together with Lemmas~\ref{lem.rho-iso}
and~\ref{lem.0-stage} show the following sequence of $k$-modules is exact:
\begin{equation}\label{eq.part_res-n}
0 \gets k \stackrel{\epsilon}{\gets} k\left[\mathrm{Mor}_{\Delta S}\left([n],
[0]\right)\right] \stackrel{\rho}{\gets} k\left[\mathrm{Mor}_{\Delta S}\left(
[n], [2]\right)\right]
\stackrel{(\alpha, \beta)}{\longleftarrow} k\left[
\mathrm{Mor}_{\Delta S}\left([n], [3]\right)\right] \oplus k\left[
\mathrm{Mor}_{\Delta S}\left([n], [0]\right)\right],
\end{equation}
where $\alpha : k\left[\mathrm{Mor}_{\Delta S}\left([n], [3]\right)\right]
\to k\left[\mathrm{Mor}_{\Delta S}\left([n], [2]\right)\right]$ is given by
composition with the formal sum of $\Delta S$ morphisms, $x_0x_1 \otimes x_2 \otimes x_3 + x_3
\otimes x_2x_0 \otimes x_1 + x_1x_2x_0 \otimes 1 \otimes x_3 + x_3 \otimes
x_1x_2 \otimes x_0$, and $\beta : k\left[\mathrm{Mor}_{\Delta S}\left([n], [0]
\right)\right] \to k\left[\mathrm{Mor}_{\Delta S}\left([n], [2]\right)\right]$
is induced by $1 \otimes x_0 \otimes 1$. This holds for all $n \geq 0$, so
we have constructed a partial resolution of $\underline{k}$ by projective
$\Delta S^\mathrm{op}$-modules:
\begin{equation}\label{eq.part_res}
0 \gets k \stackrel{\epsilon}{\gets} k\left[\mathrm{Mor}_{\Delta S}\left(-,
[0]\right)\right] \stackrel{\rho}{\gets} k\left[\mathrm{Mor}_{\Delta S}
\left(-, [2]\right)\right] \stackrel{(\alpha, \beta)}{\longleftarrow}
k\left[\mathrm{Mor}_{\Delta S}\left(-, [3]\right)\right] \oplus k\left[
\mathrm{Mor}_{\Delta S}\left(-, [0]\right)\right]
\end{equation}
\section{Using the Partial Resolution for Low Degree Computations}
\label{sec.lowdeg}
Let $A$ be a unital associative algebra over $k$. Since Eq.~(\ref{eq.part_res})
is a partial resolution of $\underline{k}$, it can be used to find $HS_i(A)$
for $i = 0, 1$.
\subsection{Main Theorem}
\begin{theorem}\label{thm.partial_resolution}
$HS_i(A)$ for $i=0,1$ may be computed as the degree $0$ and degree $1$
homology groups of the following (partial) chain complex:
\begin{equation}\label{eq.partial_complex}
0\longleftarrow A \stackrel{\partial_1}{\longleftarrow} A\otimes A\otimes A
\stackrel{\partial_2}{\longleftarrow}(A\otimes A\otimes A\otimes A)\oplus A,
\end{equation}
where
\[
\partial_1 : a\otimes b\otimes c \mapsto abc - cba,
\]
\[
\partial_2 : \left\{\begin{array}{lll}
a\otimes b\otimes c\otimes d &\mapsto& ab\otimes c\otimes d +
d\otimes ca\otimes b + bca\otimes 1\otimes d + d\otimes bc\otimes a,\\
a &\mapsto& 1\otimes a\otimes 1.
\end{array}\right.
\]
\end{theorem}
\begin{proof}
Tensoring the complex~(\ref{eq.part_res}) with $B_*^{sym}A$ over $\Delta S$,
we obtain a complex that computes $HS_0(A)$ and $HS_1(A)$. The statement
of the theorem then follows from isomorphisms induced by the evaluation
map, $k\left[\mathrm{Mor}_{\Delta S}\left(-, [p]\right)\right]
\otimes_{\Delta S} B_*^{sym}A \stackrel{\cong}{\longrightarrow} B_p^{sym}A$.
\end{proof}
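As a sanity check on the signs of $\partial_1$ and $\partial_2$ above, one can verify numerically that $\partial_1\partial_2 = 0$ in a noncommutative algebra. The sketch below (ours) evaluates both composites on random $2\times 2$ integer matrices:

```python
import random

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def sub(A, B):
    return tuple(tuple(A[i][j] - B[i][j] for j in range(2)) for i in range(2))

def prod(*Ms):
    R = ((1, 0), (0, 1))
    for M in Ms:
        R = mul(R, M)
    return R

def d1(x, y, z):
    # partial_1(x (x) y (x) z) = xyz - zyx
    return sub(prod(x, y, z), prod(z, y, x))

I = ((1, 0), (0, 1))
rand = lambda: tuple(tuple(random.randint(-5, 5) for _ in range(2))
                     for _ in range(2))
a, b, c, d = rand(), rand(), rand(), rand()
# partial_1(partial_2(a (x) b (x) c (x) d)), term by term
total = add(add(d1(mul(a, b), c, d), d1(d, mul(c, a), b)),
            add(d1(prod(b, c, a), I, d), d1(d, mul(b, c), a)))
assert total == ((0, 0), (0, 0))
# partial_1(partial_2(a)) = partial_1(1 (x) a (x) 1) = a - a = 0
assert d1(I, a, I) == ((0, 0), (0, 0))
```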
\subsection{Degree $0$ Symmetric Homology}
\begin{theorem}\label{thm.HS_0}
For a unital associative algebra $A$ over commutative ground ring $k$,
$HS_0(A) \cong A/([A,A])$, where $([A,A])$ is the ideal generated by the
commutator submodule $[A,A]$.
\end{theorem}
\begin{proof}
By Thm.~\ref{thm.partial_resolution}, $HS_0(A) \cong A/k\left[\{abc-cba\}
\right]$
as $k$-module. But $k\big[\{abc-cba\}\big]$ is an ideal of $A$. Now
clearly $[A,A] \subseteq k\left[\{abc-cba\}\right]$. On the other hand,
$k\left[\{abc-cba\}\right] \subseteq ([A,A])$ since $abc - cba = a(bc-cb) +
a(cb) - (cb)a$.
\end{proof}
\begin{cor}
If $A$ is commutative, then $HS_0(A) \cong A$.
\end{cor}
\begin{rmk}
Theorem~\ref{thm.HS_0} implies that symmetric homology does not preserve
Morita equivalence, since for $n>1$, $HS_0\left(M_n(A)\right) = M_n(A)/\left(
[M_n(A),M_n(A)]\right) = 0$, while in general $HS_0(A) = A/([A,A]) \neq 0$.
\end{rmk}
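To see concretely why $\left([M_n(A),M_n(A)]\right) = M_n(A)$ for $n > 1$, note that the elementary matrices $E_{12} = [E_{11}, E_{12}]$ and $E_{21} = [E_{21}, E_{11}]$ are themselves commutators, and their products recover the identity. A small check (ours) over $\mathbb{Z}$:

```python
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def comm(A, B):  # [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return tuple(tuple(AB[i][j] - BA[i][j] for j in range(2)) for i in range(2))

E11 = ((1, 0), (0, 0))
E12 = ((0, 1), (0, 0))
E21 = ((0, 0), (1, 0))

assert comm(E11, E12) == E12          # E12 is a commutator
assert comm(E21, E11) == E21          # so is E21
# E12*E21 + E21*E12 = E11 + E22 = identity, so the ideal is everything
P, Q = mul(E12, E21), mul(E21, E12)
assert tuple(tuple(P[i][j] + Q[i][j] for j in range(2))
             for i in range(2)) == ((1, 0), (0, 1))
```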
\subsection{Degree $1$ Symmetric Homology}\label{sub.deg-1-HS}
Using \verb|GAP|, we have made the following explicit computations of degree 1
integral symmetric homology. See section~\ref{sec.gap-fermat} for a discussion
of how computer algebra systems were used in symmetric homology computations.
\begin{center}
\begin{tabular}{l|l}
$A$ & $HS_1(A \;|\; \mathbb{Z})$ \\
\hline
$\mathbb{Z}[t]/(t^2)$ & $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ \\
$\mathbb{Z}[t]/(t^3)$ & $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ \\
$\mathbb{Z}[t]/(t^4)$ & $(\mathbb{Z}/2\mathbb{Z})^4$ \\
$\mathbb{Z}[t]/(t^5)$ & $(\mathbb{Z}/2\mathbb{Z})^4$ \\
$\mathbb{Z}[t]/(t^6)$ & $(\mathbb{Z}/2\mathbb{Z})^6$ \\
\hline
$\mathbb{Z}[C_2]$ & $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ \\
$\mathbb{Z}[C_3]$ & $0$ \\
$\mathbb{Z}[C_4]$ & $(\mathbb{Z}/2\mathbb{Z})^4$ \\
$\mathbb{Z}[C_5]$ & $0$ \\
$\mathbb{Z}[C_6]$ & $(\mathbb{Z}/2\mathbb{Z})^6$ \\
\hline
\end{tabular}
\end{center}
Based on these calculations, we conjecture:
\begin{conj}
\[
HS_1\big(k[t]/(t^n)\big) = \left\{\begin{array}{ll}
(k/2k)^n, & \textrm{if $n \geq 0$ is even,}\\
(k/2k)^{n-1}, & \textrm{if $n \geq 1$ is odd.}
\end{array}\right.
\]
\end{conj}
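The conjectured exponents agree with the table above for $k = \mathbb{Z}$; a trivial check (ours):

```python
# exponents e in HS_1(Z[t]/(t^n)) = (Z/2Z)^e, read off the GAP table above
observed = {2: 2, 3: 2, 4: 4, 5: 4, 6: 6}

def conjectured(n):
    """Exponent predicted by the conjecture: n if n is even, n-1 if odd."""
    return n if n % 2 == 0 else n - 1

assert all(conjectured(n) == e for n, e in observed.items())
```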
\begin{rmk}
The computations of $HS_1\big(\mathbb{Z}[C_n]\big)$ are consistent with those of
Brown and Loday~\cite{BL}. See~\ref{sub.2-torsion} for a more
detailed treatment of $HS_1$ for group rings.
\end{rmk}
Additionally, $HS_1$ has been computed for the following examples. Some of
these computations were done using \verb|GAP| alone; in other cases ({\it e.g.}
when the algebra has dimension greater than $6$ over $\mathbb{Z}$),
\verb|Fermat|~\cite{LEW} computations on sparse matrices were used in
conjunction with the \verb|GAP| scripts.
\begin{center}
\begin{tabular}{l|l}
$A$ & $HS_1(A \;|\; \mathbb{Z})$\\
\hline
$\mathbb{Z}[t,u]/(t^2, u^2)$ & $\mathbb{Z} \oplus (\mathbb{Z}/2\mathbb{Z})^{11}$\\
$\mathbb{Z}[t,u]/(t^3, u^2)$ & $\mathbb{Z}^2 \oplus (\mathbb{Z}/2\mathbb{Z})^{11} \oplus \mathbb{Z}/6\mathbb{Z}$\\
$\mathbb{Z}[t,u]/(t^3, u^2, t^2u)$ & $\mathbb{Z}^2 \oplus (\mathbb{Z}/2\mathbb{Z})^{10}$\\
$\mathbb{Z}[t,u]/(t^3, u^3)$ & $\mathbb{Z}^4 \oplus (\mathbb{Z}/2\mathbb{Z})^7 \oplus (\mathbb{Z}/6\mathbb{Z})^5$\\
$\mathbb{Z}[t,u]/(t^2, u^4)$ & $\mathbb{Z}^3 \oplus (\mathbb{Z}/2\mathbb{Z})^{20} \oplus \mathbb{Z}/4\mathbb{Z}$\\
$\mathbb{Z}[t,u,v]/(t^2, u^2, v^2)$ & $\mathbb{Z}^6 \oplus (\mathbb{Z}/2\mathbb{Z})^{42}$\\
$\mathbb{Z}[t,u]/(t^4, u^3)$ & $\mathbb{Z}^6 \oplus (\mathbb{Z}/2\mathbb{Z})^{19} \oplus \mathbb{Z}/6\mathbb{Z} \oplus
(\mathbb{Z}/12\mathbb{Z})^2$\\
$\mathbb{Z}[t,u,v]/(t^2, u^2, v^3)$ & $\mathbb{Z}^{11} \oplus (\mathbb{Z}/2\mathbb{Z})^{45} \oplus
(\mathbb{Z}/6\mathbb{Z})^4$\\
$\mathbb{Z}[i,j,k], i^2=j^2=k^2=ijk=-1$ & $(\mathbb{Z}/2\mathbb{Z})^8$\\
$\mathbb{Z}[C_2 \times C_2]$ & $(\mathbb{Z}/2\mathbb{Z})^{12}$\\
$\mathbb{Z}[C_3 \times C_2]$ & $(\mathbb{Z}/2\mathbb{Z})^{6}$\\
$\mathbb{Z}[C_3 \times C_3]$ & $(\mathbb{Z}/3\mathbb{Z})^{9}$\\
$\mathbb{Z}[S_3]$ & $(\mathbb{Z}/2\mathbb{Z})^2$\\
\hline
\end{tabular}
\end{center}
\subsection{Splittings of the Partial Complex}\label{sub.splittings}
Under certain circumstances, the partial complex in
Thm.~\ref{thm.partial_resolution}
splits as a direct sum of smaller complexes. This observation becomes
increasingly important as the dimension of the algebra increases. Indeed, some
of the computations of the previous section were done using splittings.
\begin{definition}
For a commutative $k$-algebra $A$ and $u \in A$, define the $k$-modules
spanned by the sets:
\[
\left(A^{\otimes n}\right)_u := \{ a_1 \otimes a_2 \otimes \ldots \otimes
a_n \in A^{\otimes n} \;|\; a_1a_2\cdot \ldots \cdot a_n = u \}
\]
\end{definition}
\begin{prop}
If $A = k[M]$ for a commutative monoid $M$, then
complex~(\ref{eq.partial_complex}) splits as a direct sum of complexes
\begin{equation}\label{eq.u-homology}
0\longleftarrow (A)_u \stackrel{\partial_1}{\longleftarrow}
(A\otimes A\otimes A)_u
\stackrel{\partial_2}{\longleftarrow}
(A\otimes A\otimes A\otimes A)_u\oplus (A)_u,
\end{equation}
where $u$ ranges over the elements of $M$. Thus, for $i = 0,1$, we have
$HS_i(A) \cong \bigoplus_{u \in M} HS_i(A)_u$.
\end{prop}
\begin{proof}
Since $M$ is a commutative monoid, there is a direct sum decomposition of
$k$-modules: $A^{\otimes n} = \bigoplus_{u \in M} \left( A^{\otimes n}
\right)_u$. The boundary maps $\partial_1$ and $\partial_2$ preserve the
products of tensor factors, so the inclusions $\left(A^{\otimes n}\right)_u
\hookrightarrow A^{\otimes n}$ induce maps of complexes, hence the complex
itself splits as a direct sum.
\end{proof}
\begin{definition}
For each $u$, the homology groups of complex~(\ref{eq.u-homology}) will be
called the {\it $u$-layered symmetric homology} of $A$, denoted $HS_i(A)_u$.
\end{definition}
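For a small commutative monoid the decomposition of $A^{\otimes n}$ into layers can be made completely explicit. The following Python sketch (an illustration, not part of the \verb+GAP+ scripts; the choice of monoid $C_2$ and tensor degree $3$ is arbitrary) groups the basis tensors by the product of their factors:

```python
from itertools import product

# The monoid C_2 = {0, 1} under addition mod 2 (0 is the identity, 1 plays g).
def c2_mult(a, b):
    return (a + b) % 2

# Group the basis of A^{(x)n} -- the simple tensors of monoid elements --
# by the product u of their factors.  These index sets are the u-layers.
def layers(n):
    result = {0: [], 1: []}
    for tensor in product([0, 1], repeat=n):
        u = 0
        for x in tensor:
            u = c2_mult(u, x)
        result[u].append(tensor)
    return result

L = layers(3)
# A^{(x)3} has 2^3 = 8 basis tensors, split evenly between the two layers.
# The boundary maps only multiply and permute tensor factors, so they
# preserve u -- this is exactly why the complex splits.
assert len(L[0]) == len(L[1]) == 4
```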
We may use layers to investigate the symmetric homology of $k[t]$. This algebra
is a monoid algebra, generated by the monoid $\{1, t, t^2, t^3, \ldots \}$. Now, the
$t^m$-layer symmetric homology of $k[t]$ will be the same as the $t^m$-layer
symmetric homology of $k\left[M^{m+2}_{m+1}\right]$, where $M^p_q$ denotes the
cyclic monoid generated by an indeterminate $s$ with the property that $s^p =
s^q$. Using this observation and subsequent computation, we conjecture:
\begin{conj}\label{conj.HS_1freemonoid}
\[
HS_1\left(k[t]\right)_{t^m} = \left\{\begin{array}{ll}
0, & m = 0, 1\\
k/2k, & m \geq 2
\end{array}\right.
\]
\end{conj}
This conjecture has been verified up to $m = 18$, in the case $k = \mathbb{Z}$.
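The reduction to the cyclic monoid $M^p_q$ can be sketched in a few lines of Python (an illustration of the observation above; the exponent arithmetic is the standard normal form for cyclic monoids):

```python
def reduce_exp(e, p, q):
    """Normal form of s^e in the cyclic monoid M^p_q, where s^p = s^q (q < p)."""
    if e < p:
        return e
    return q + (e - q) % (p - q)

# In M^{m+2}_{m+1}, every exponent above m+1 collapses to m+1, so products of
# total degree m are the same as in the free monoid {1, t, t^2, ...} -- which
# is why the t^m-layers of k[t] and k[M^{m+2}_{m+1}] agree.
m = 5
p, q = m + 2, m + 1
assert reduce_exp(m, p, q) == m            # degree-m products are untouched
assert reduce_exp(10 * m, p, q) == m + 1   # everything above m+1 collapses
```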
\subsection{2-torsion in $HS_1$}\label{sub.2-torsion}
The occurrence of 2-torsion in $HS_1(A)$ for the examples considered
in Sections~\ref{sub.deg-1-HS} and~\ref{sub.splittings} comes as no surprise,
based on Thm.~\ref{thm.HS_group}. First consider the following chain
of isomorphisms: $\pi_{2}^s(B\Gamma) = \pi_2\left(\Omega^{\infty}S^{\infty}
(B\Gamma)\right) \cong \pi_{1}\left(\Omega\Omega^{\infty}S^{\infty}(B\Gamma)
\right) \cong \pi_{1}\left(\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)\right)
\stackrel{h}{\to} H_1\left(\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)\right)$.
Here, $\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)$ denotes the component of the
constant loop, and $h$ is the Hurewicz homomorphism, which is an isomorphism
since $\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)$ is path-connected and $\pi_1$
is abelian (since it is actually $\pi_2$ of a space).
On the other hand, by Thm.~\ref{thm.HS_group}, $HS_1(k[\Gamma]) \cong H_1
\left(\Omega\Omega^{\infty}S^{\infty}(B\Gamma); k\right) \cong H_1\left(
\Omega\Omega^{\infty}S^{\infty}(B\Gamma)\right) \otimes k$. Note, all tensor
products will be over $\mathbb{Z}$ in this section. Now $\Omega\Omega^{\infty}
S^{\infty}(B\Gamma)$ consists of disjoint homeomorphic copies of $\Omega_0
\Omega^{\infty} S^{\infty}(B\Gamma)$, one for each element of $\Gamma/[\Gamma,
\Gamma]$, (where $[\Gamma, \Gamma]$ is the commutator subgroup of $\Gamma$), so
we may write $H_1\left(\Omega\Omega^{\infty}S^{\infty}(B\Gamma)\right) \otimes k
\cong H_1\left(\Omega_0\Omega^{\infty}S^{\infty}(B\Gamma)\right) \otimes k
\left[\Gamma/[\Gamma, \Gamma]\right]$ and obtain the following result:
\begin{prop}\label{prop.stablegrouphomotopy}
If $\Gamma$ is a group, then $HS_1(k[\Gamma]) \cong \pi_{2}^s(B\Gamma) \otimes
k\left[ \Gamma/[\Gamma, \Gamma] \right]$.
\end{prop}
As an immediate corollary, if $\Gamma$ is abelian, then $HS_1(k[\Gamma])
\cong \pi_2^s(B\Gamma) \otimes k[\Gamma]$. Moreover, by results of Brown and
Loday~\cite{BL}, if $\Gamma$ is abelian, then $\pi_2^s(B\Gamma)$ is the reduced
tensor square, $\Gamma\, \widetilde{\wedge} \,\Gamma = \left(\Gamma \otimes
\Gamma\right)/\approx$, where $g \otimes h \approx - h \otimes g$ for all $g, h
\in \Gamma$. (This construction is notated with multiplicative group action
in~\cite{BL}, since they deal with the more general case of non-abelian groups.)
\begin{prop}\label{prop.HS_1-C_n}
\[
HS_1(k[C_n]) = \left\{\begin{array}{ll}
(\mathbb{Z}/2\mathbb{Z})^n & \textrm{$n$ even.}\\
0 & \textrm{$n$ odd.}
\end{array}\right.
\]
\end{prop}
\begin{proof}
$\pi_2^s(BC_n) = \mathbb{Z}/2\mathbb{Z}$ if $n$ is even, and $0$ if $n$ is odd.
The result then follows from Prop.~\ref{prop.stablegrouphomotopy}, since
$k\left[ C_n/[C_n, C_n] \right] \cong k[C_n] \cong k^n$ as a $k$-module.
\end{proof}
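Prop.~\ref{prop.stablegrouphomotopy} also recovers the table entry for $\mathbb{Z}[C_2 \times C_2]$; the following computation is a sketch using the Brown--Loday identification quoted above. For $\Gamma = C_2 \times C_2$ with generators $e_1, e_2$, the reduced tensor square $\Gamma\, \widetilde{\wedge}\, \Gamma$ is generated by $e_1 \otimes e_1$, $e_2 \otimes e_2$, and $e_1 \otimes e_2$ (since $e_2 \otimes e_1 \approx -e_1 \otimes e_2 = e_1 \otimes e_2$ in the presence of $2$-torsion), so $\pi_2^s(B\Gamma) \cong (\mathbb{Z}/2\mathbb{Z})^3$. Hence
\[
HS_1(\mathbb{Z}[C_2 \times C_2]) \cong (\mathbb{Z}/2\mathbb{Z})^3 \otimes
\mathbb{Z}\left[C_2 \times C_2\right] \cong (\mathbb{Z}/2\mathbb{Z})^{12},
\]
in agreement with the table of Section~\ref{sub.deg-1-HS}.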
\section{Relations to Cyclic Homology}\label{sec.cyc-homology}
The relation between the symmetric bar construction and the cyclic bar
construction arising from the inclusions $\Delta C \hookrightarrow \Delta S$
gives rise to a natural map $HC_*(A) \to HS_*(A)$.
Indeed, by Remark~\ref{rmk.HC}, we may define cyclic homology thus:
$HC_*(A) = \mathrm{Tor}_*^{\Delta C}( \underline{k}, B_*^{sym}A )$, where
we understand $B_*^{sym}A$ as the restriction of the functor to $\Delta C$.
Using the partial complex of Thm.~\ref{thm.partial_resolution}, and an
analogous one for computing cyclic homology (cf.~\cite{L}, p.~59), the
map $HC_*(A) \to HS_*(A)$ for degrees $0$ and $1$ is induced by the following
partial chain map:
\[
\begin{diagram}
\node{ 0 }
\node{ A }
\arrow{w}
\arrow{s,r}{ \gamma_0 = \mathrm{id} }
\node{ A \otimes A }
\arrow{w,t}{ \partial_1^C }
\arrow{s,r}{ \gamma_1 }
\node{ A^{\otimes 3} \oplus A }
\arrow{w,t}{ \partial_2^C }
\arrow{s,r}{ \gamma_2 }
\\
\node{ 0 }
\node{ A }
\arrow{w}
\node{ A^{\otimes 3} }
\arrow{w,t}{ \partial_1^S }
\node{ A^{\otimes 4} \oplus A }
\arrow{w,t}{ \partial_2^S }
\end{diagram}
\]
In this diagram, the boundary maps in the upper row are defined as follows:
\[
\partial_1^C : a \otimes b \mapsto ab - ba
\]
\[
\partial_2^C : \left\{\begin{array}{ll}
a \otimes b \otimes c &\mapsto ab \otimes c - a \otimes bc + ca \otimes b\\
a &\mapsto 1 \otimes a - a \otimes 1
\end{array}\right.
\]
The boundary maps in the lower row are defined as in Thm.~\ref{thm.partial_resolution}.
\[
\partial_1^S : a \otimes b \otimes c \mapsto abc - cba
\]
\[
\partial_2^S : \left\{\begin{array}{ll}
a \otimes b \otimes c \otimes d &\mapsto ab \otimes c \otimes d
- d \otimes ca \otimes b + bca \otimes 1 \otimes d
+ d \otimes bc \otimes a\\
a &\mapsto 1 \otimes a \otimes 1
\end{array}\right.
\]
The partial chain map is given in degree 1 by $\gamma_1(a \otimes b) := a \otimes b
\otimes 1$. In degree 2, $\gamma_2$ is defined on the summand $A^{\otimes 3}$ via
\[
a \otimes b \otimes c \mapsto (a \otimes b \otimes c \otimes 1
- 1 \otimes a \otimes bc \otimes 1
+ 1 \otimes ca \otimes b \otimes 1
+ 1 \otimes 1 \otimes abc \otimes 1
- b \otimes ca \otimes 1 \otimes 1)
- 2abc - cab,
\]
and on the summand $A$ via
\[
a \mapsto (-1 \otimes 1 \otimes a \otimes 1) + (4a).
\]
\subsection{Examples}
To provide some examples, consider the maps $\gamma_1 : HC_1(\mathbb{Z}[t]/(t^n)) \to
HS_1(\mathbb{Z}[t]/(t^n))$.
It can be shown ({\it e.g.} by direct computation) that $HC_1(\mathbb{Z}[t]/(t^2)) \cong
\mathbb{Z} / 2\mathbb{Z}$ is generated by the $1$-chain $t \otimes t$. Its image $\gamma_1(t \otimes t) = t \otimes t
\otimes 1 \in HS_1(\mathbb{Z}[t]/(t^2))$ is a non-trivial element of $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ (which may
be verified by direct computation as well).
The map $HC_1(\mathbb{Z}[t]/(t^3)) \to HS_1(\mathbb{Z}[t]/(t^3))$ may be similarly analyzed. Here, the chain
$t\otimes t + t \otimes t^2$ is a generator of $HC_1(\mathbb{Z}[t]/(t^3)) \cong \mathbb{Z}/6\mathbb{Z}$, which gets sent by
$\gamma_1$ to $t\otimes t\otimes 1 + t \otimes t^2 \otimes 1$, a non-trivial element of
$HS_1(\mathbb{Z}[t]/(t^3)) \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$.
The case $n=4$ is a bit more interesting. Here, $HC_1(\mathbb{Z}[t]/(t^4)) \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/12\mathbb{Z}$,
generated by $t \otimes t$ and $t\otimes t^2 + t \otimes t^3$, respectively. The image
of the map $\gamma_1$ in $HS_1(\mathbb{Z}[t]/(t^4))$ is $(\mathbb{Z}/2\mathbb{Z})^2 \subseteq (\mathbb{Z}/2\mathbb{Z})^4$.
\section{Using Computer Algebra Systems for Computing Symmetric Homology}
\label{sec.gap-fermat}
The computer algebra systems \verb+GAP+, \verb+Octave+ and \verb+Fermat+
were used to verify proposed theorems and also to obtain some concrete
computations of symmetric homology for some small algebras. A tar-file of the
scripts that were created and used for this work is available at
\url{http://arxiv.org/e-print/0807.4521v1/}. This tar-file contains the
following files:
\begin{itemize}
\item \verb+Basic.g+ \quad - Some elementary functions, necessary for some
functions in \verb+DeltaS.g+
\item \verb+HomAlg.g+ \quad - Homological Algebra functions, such as
computation of homology groups for chain complexes.
\item \verb+Fermat.g+ \quad - Functions necessary to invoke \verb+Fermat+
for fast sparse matrix computations.
\item \verb+fermattogap+, \verb+gaptofermat+ \quad - Auxiliary text files for
use when invoking \verb+Fermat+ from \verb+GAP+.
\item \verb+DeltaS.g+ \quad - This is the main repository of scripts used to
compute various quantities associated with the category $\Delta S$, including
$HS_1(A)$ for finite-dimensional algebras $A$.
\end{itemize}
In order to use the functions of \verb+DeltaS.g+, simply copy the above files
into the working directory (such as \verb+~/gap/+), invoke \verb+GAP+, then read
in \verb+DeltaS.g+ at the prompt. The dependent modules will automatically be
loaded (hence they must be present in the same directory as \verb+DeltaS.g+).
Note, most of the computations involving homology require substantial memory
to run. I recommend calling \verb+GAP+ with the command line option
``\verb+-o +{\it mem}'', where {\it mem} is the amount of memory to be allocated
to this instance of \verb+GAP+. All computations done in this dissertation can
be accomplished by allocating 20 gigabytes of memory. The following provides a
few examples of using the functions of \verb+DeltaS.g+.
\begin{verbatim}
[ault@math gap]$ gap -o 20g
gap> Read("DeltaS.g");
gap>
gap> ## Number of morphisms [6] --> [4]
gap> SizeDeltaS( 6, 4 );
1663200
gap>
gap> ## Generate the set of morphisms of Delta S, [2] --> [2]
gap> EnumerateDeltaS( 2, 2 );
[ [ [ 0, 1, 2 ], [ ], [ ] ], [ [ 0, 2, 1 ], [ ], [ ] ],
[ [ 1, 0, 2 ], [ ], [ ] ], [ [ 1, 2, 0 ], [ ], [ ] ],
[ [ 2, 0, 1 ], [ ], [ ] ], [ [ 2, 1, 0 ], [ ], [ ] ],
[ [ 0, 1 ], [ 2 ], [ ] ], [ [ 0, 2 ], [ 1 ], [ ] ],
[ [ 1, 0 ], [ 2 ], [ ] ], [ [ 1, 2 ], [ 0 ], [ ] ],
[ [ 2, 0 ], [ 1 ], [ ] ], [ [ 2, 1 ], [ 0 ], [ ] ],
[ [ 0, 1 ], [ ], [ 2 ] ], [ [ 0, 2 ], [ ], [ 1 ] ],
[ [ 1, 0 ], [ ], [ 2 ] ], [ [ 1, 2 ], [ ], [ 0 ] ],
[ [ 2, 0 ], [ ], [ 1 ] ], [ [ 2, 1 ], [ ], [ 0 ] ],
[ [ 0 ], [ 1, 2 ], [ ] ], [ [ 0 ], [ 2, 1 ], [ ] ],
[ [ 1 ], [ 0, 2 ], [ ] ], [ [ 1 ], [ 2, 0 ], [ ] ],
[ [ 2 ], [ 0, 1 ], [ ] ], [ [ 2 ], [ 1, 0 ], [ ] ],
[ [ 0 ], [ 1 ], [ 2 ] ], [ [ 0 ], [ 2 ], [ 1 ] ], [ [ 1 ], [ 0 ], [ 2 ] ],
[ [ 1 ], [ 2 ], [ 0 ] ], [ [ 2 ], [ 0 ], [ 1 ] ], [ [ 2 ], [ 1 ], [ 0 ] ],
[ [ 0 ], [ ], [ 1, 2 ] ], [ [ 0 ], [ ], [ 2, 1 ] ],
[ [ 1 ], [ ], [ 0, 2 ] ], [ [ 1 ], [ ], [ 2, 0 ] ],
[ [ 2 ], [ ], [ 0, 1 ] ], [ [ 2 ], [ ], [ 1, 0 ] ],
[ [ ], [ 0, 1, 2 ], [ ] ], [ [ ], [ 0, 2, 1 ], [ ] ],
[ [ ], [ 1, 0, 2 ], [ ] ], [ [ ], [ 1, 2, 0 ], [ ] ],
[ [ ], [ 2, 0, 1 ], [ ] ], [ [ ], [ 2, 1, 0 ], [ ] ],
[ [ ], [ 0, 1 ], [ 2 ] ], [ [ ], [ 0, 2 ], [ 1 ] ],
[ [ ], [ 1, 0 ], [ 2 ] ], [ [ ], [ 1, 2 ], [ 0 ] ],
[ [ ], [ 2, 0 ], [ 1 ] ], [ [ ], [ 2, 1 ], [ 0 ] ],
[ [ ], [ 0 ], [ 1, 2 ] ], [ [ ], [ 0 ], [ 2, 1 ] ],
[ [ ], [ 1 ], [ 0, 2 ] ], [ [ ], [ 1 ], [ 2, 0 ] ],
[ [ ], [ 2 ], [ 0, 1 ] ], [ [ ], [ 2 ], [ 1, 0 ] ],
[ [ ], [ ], [ 0, 1, 2 ] ], [ [ ], [ ], [ 0, 2, 1 ] ],
[ [ ], [ ], [ 1, 0, 2 ] ], [ [ ], [ ], [ 1, 2, 0 ] ],
[ [ ], [ ], [ 2, 0, 1 ] ], [ [ ], [ ], [ 2, 1, 0 ] ] ]
gap>
gap> ## Generate only the epimorphisms [2] -->> [2]
gap> EnumerateDeltaS( 2, 2 : epi );
[ [ [ 0 ], [ 1 ], [ 2 ] ], [ [ 0 ], [ 2 ], [ 1 ] ],
[ [ 1 ], [ 0 ], [ 2 ] ], [ [ 1 ], [ 2 ], [ 0 ] ],
[ [ 2 ], [ 0 ], [ 1 ] ], [ [ 2 ], [ 1 ], [ 0 ] ] ]
gap>
gap> ## Compose two morphisms of Delta S.
gap> a := Random(EnumerateDeltaS(4,3));
[ [ 0 ], [ 2, 4, 1 ], [ ], [ 3 ] ]
gap> b := Random(EnumerateDeltaS(3,2));
[ [ ], [ 3, 0, 2 ], [ 1 ] ]
gap> MultDeltaS(b, a);
[ [ ], [ 3, 0 ], [ 2, 4, 1 ] ]
gap> MultDeltaS(a, b);
Maps incomposeable
[ ]
gap>
gap> ## Examples of using morphisms of Delta S to act on simple tensors
gap> A := TruncPolyAlg([3,2]);
<algebra of dimension 6 over Rationals>
gap> ## TruncPolyAlg is defined in Basic.g
gap> ## TruncPolyAlg([i_1, i_2, ..., i_n]) is generated by
gap> ## x_1, x_2, ..., x_n, under the relation (x_j)^(i_j) = 0.
gap> g := GeneratorsOfLeftModule(A);
[ X^[ 0, 0 ], X^[ 0, 1 ], X^[ 1, 0 ], X^[ 1, 1 ], X^[ 2, 0 ], X^[ 2, 1 ] ]
gap> x := g[2]; y := g[3];
X^[ 0, 1 ]
X^[ 1, 0 ]
gap> v := [ x*y, 1, y^2 ];
gap> ## v represents the simple tensor xy \otimes 1 \otimes y^2.
[ X^[ 1, 1 ], 1, X^[ 2, 0 ] ]
gap> ActByDeltaS( v, [[2], [], [0], [1]] );
[ X^[ 2, 0 ], 1, X^[ 1, 1 ], 1 ]
gap> ActByDeltaS( v, [[2], [0,1]] );
[ X^[ 2, 0 ], X^[ 1, 1 ] ]
gap> ActByDeltaS( v, [[2,0], [1]] );
[ 0*X^[ 0, 0 ], 1 ]
gap>
gap> ## Symmetric monoidal product on DeltaS_+
gap> a := Random(EnumerateDeltaS(4,2));
[ [ ], [ 2, 1, 0 ], [ 3, 4 ] ]
gap> b := Random(EnumerateDeltaS(3,3));
[ [ ], [ ], [ ], [ 1, 3, 2, 0 ] ]
gap> MonoidProductDeltaS(a, b);
[ [ ], [ 2, 1, 0 ], [ 3, 4 ], [ ], [ ], [ ], [ 6, 8, 7, 5 ] ]
gap> MonoidProductDeltaS(b, a);
[ [ ], [ ], [ ], [ 1, 3, 2, 0 ], [ ], [ 6, 5, 4 ], [ 7, 8 ] ]
gap> MonoidProductDeltaS(a, []);
[ [ ], [ 2, 1, 0 ], [ 3, 4 ] ]
gap>
gap> ## Symmetric Homology of the algebra A, in degrees 0 and 1.
gap> SymHomUnitalAlg(A);
[ [ 0, 0, 0, 0, 0, 0 ], [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 0, 0 ] ]
gap> ## '0' represents a factor of Z, while a non-zero p represents
gap> ## a factor of Z/pZ.
gap>
gap> ## Using layers to compute symmetric homology
gap> C2 := CyclicGroup(2);
<pc group of size 2 with 1 generators>
gap> A := GroupRing(Rationals, DirectProduct(C2, C2));
<algebra-with-one over Rationals, with 2 generators>
gap> ## First, a direct computation without layers:
gap> SymHomUnitalAlg(A);
[ [ 0, 0, 0, 0 ], [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ] ]
gap> ## Next, compute HS_0(A)_u and HS_1(A)_u for each generator u.
gap> g := GeneratorsOfLeftModule(A);
[ (1)*<identity> of ..., (1)*f2, (1)*f1, (1)*f1*f2 ]
gap> SymHomUnitalAlgLayered(A, g[1]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> SymHomUnitalAlgLayered(A, g[2]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> SymHomUnitalAlgLayered(A, g[3]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> SymHomUnitalAlgLayered(A, g[4]);
[ [ 0 ], [ 2, 2, 2 ] ]
gap> ## Computing HS_1( Z[t] ) by layers:
gap> SymHomFreeMonoid(0,10);
HS_1(k[t])_{t^0} : [ ]
HS_1(k[t])_{t^1} : [ ]
HS_1(k[t])_{t^2} : [ 2 ]
HS_1(k[t])_{t^3} : [ 2 ]
HS_1(k[t])_{t^4} : [ 2 ]
HS_1(k[t])_{t^5} : [ 2 ]
HS_1(k[t])_{t^6} : [ 2 ]
HS_1(k[t])_{t^7} : [ 2 ]
HS_1(k[t])_{t^8} : [ 2 ]
HS_1(k[t])_{t^9} : [ 2 ]
HS_1(k[t])_{t^10} : [ 2 ]
gap> ## Poincare polynomial of Sym_*^{(p)} for small p.
gap> ## There is a check for torsion, using a call to Fermat
gap> ## to find Smith Normal Form of the differential matrices.
gap> PoincarePolynomialSymComplex(2);
C_0 Dimension: 1
C_1 Dimension: 6
C_2 Dimension: 6
D_1
SNF(D_1)
D_2
SNF(D_2)
2*t^2+t
gap> PoincarePolynomialSymComplex(5);
C_0 Dimension: 1
C_1 Dimension: 30
C_2 Dimension: 300
C_3 Dimension: 1200
C_4 Dimension: 1800
C_5 Dimension: 720
D_1
SNF(D_1)
D_2
SNF(D_2)
D_3
SNF(D_3)
D_4
SNF(D_4)
D_5
SNF(D_5)
120*t^5+272*t^4+t^3
\end{verbatim}
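In the encoding above, a morphism $[n] \to [m]$ of $\Delta S$ amounts to distributing the $n+1$ labels $0, \ldots, n$ into $m+1$ ordered blocks, giving $(m+1)(m+2)\cdots(m+n+1) = (n+m+1)!/m!$ morphisms in total. The following Python sketch (my own cross-check, not part of the \verb+GAP+ scripts) reproduces the counts returned by \verb+SizeDeltaS+ and the enumerations above:

```python
from math import factorial

def size_delta_s(n, m):
    # Morphisms [n] -> [m] in Delta S: place the n+1 labels 0..n
    # into m+1 ordered blocks, i.e. (n+m+1)!/m! choices.
    return factorial(n + m + 1) // factorial(m)

assert size_delta_s(6, 4) == 1663200   # matches SizeDeltaS(6, 4)
assert size_delta_s(2, 2) == 60        # matches the enumeration of [2] --> [2]
# Epimorphisms [n] ->> [n]: every block must be a singleton, giving (n+1)!.
assert factorial(2 + 1) == 6           # matches EnumerateDeltaS(2, 2 : epi)
```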
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
\IEEEPARstart{W}{ith} the recent and rapid advancements in the area of machine learning, Neural Networks (NNs) have become the driving force for
embedded devices advancing a variety of domains, such as object detection, speech recognition and more~\cite{jouppi2017datacenter}. However, such devices are generally characterized by limited computing capabilities and they are also operating under strict power budgets due to tight temperature constraints~\cite{amrouch2020npu}. Furthermore,
NNs, and specifically Deep NNs (DNNs), are continuously evolving, becoming more and more computationally intensive in order to accommodate the latest research and industry accuracy requirements. During the inference phase of NNs, the dominant arithmetic operation, performed mainly on the convolution and fully connected layers, is the multiply-accumulate (MAC) operation. Accordingly, embedded devices integrate DNN accelerators as a solution to the throughput/latency requirements. Such accelerators comprise large amounts of MAC units, for instance, the Google Edge TPU comprises 4K MAC units~\cite{cass2019taking} and the Samsung embedded NPU integrates 6K MAC units~\cite{park20219}.
Even though hardware accelerators might be a solution towards addressing the computing limitations of embedded devices, integrating thousands of MAC units in order to keep up with the computational demands results in increased energy consumption~\cite{amrouch2020npu}. Interestingly, previous research works~\cite{sarwar2018energy,mrazek2019alwann,zervakis2021control,tasoulas2020weight,zervakis2020design,VenkataramaniJPROC2020} have shown that a great amount of these computations can tolerate at least some degree of approximation, thus reducing energy consumption without sacrificing the NN inference accuracy. Thus, exploiting the principle of approximate computing, we can trade off the system's energy efficiency against the NN accuracy. This has led to the design and development of approximate circuits, and particularly multipliers~\cite{sarwar2018energy}. As shown in~\cite{VenkataramaniJPROC2020}, approximately 90\% of the DNN computations are general matrix multiplication operations. Hence, employing approximate multipliers delivers considerable energy reduction with respect to the entire accelerator.
Current techniques for designing approximate multipliers mainly focus on introducing low error, also applying retraining to recover the accuracy loss~\cite{sarwar2018energy}. However, retraining is not always feasible due to proprietary datasets and models~\cite{mrazek2019alwann,zervakis2021control}. Moreover, retraining requires simulating the approximate multiplier, which precludes the exploitation of hardware optimizations (e.g., AVX instructions) and exponentially increases the training time, especially for deeper networks~\cite{mrazek2019alwann}.
\textbf{In this work}, we target DNN inference and apply approximation to maximize energy gains without significant losses in accuracy. Particularly,
we present a \textit{reconfigurable multiplier, that follows a different approach than state-of-the-art, by comprising one exact mode of operation and two synergistic approximate modes, one positive and one negative, that aim to balance the introduced error.}
Additionally, we present an approximation strategy that assigns NN operations to specific approximation modes based on the respective layer, filter, and weight value of the operation.
The proposed approach does not require retraining and aims to minimize the convolution error by introducing, in a directed manner, positive and negative approximation in the performed multiplications.
The contributions of our work are summarized in the following points:\\
\begin{inparaenum}[(1)]
\item We design an approximate multiplier that approximates the generation of the partial products and comprises three operation modes: the Zero Error (ZE) mode (i.e., exact), the Positive Error (PE) mode, and the Negative Error (NE) mode.\\
\item By approximating more or less partial products, we control the applied approximation and tune the accuracy-energy reduction trade-off as required.\\
\item We present a methodology where for each NN filter, we exploit the two synergistic modes of the presented multiplier, and group all the weight values into two balanced summation sets, with the final goal being the convolution error converging to zero and consequently achieving high inference accuracy.
\end{inparaenum}
\section{Related works}\label{sec:Related}
Approximate DNN accelerators have attracted a vast research interest.
A significant fraction of DNN operations (about 90\%) is attributed to GEMM operations, i.e., convolution and matrix multiplications~\cite{VenkataramaniJPROC2020}.
Such computations rely on MAC units and the state of the art approximates the multipliers to boost the energy efficiency of the overall accelerator.
Cartesian Genetic Programming is used in~\cite{mrazek2016design,ansari2019improving} to generate fixed approximate multipliers and replace the accurate ones, achieving high hardware gains for a minimal accuracy loss or even accuracy improvement.
\cite{sarwar2018energy} introduced a multiplier-less artificial neuron that is based on additions only.
Nevertheless, \cite{mrazek2016design,sarwar2018energy,ansari2019improving} require retraining, which as aforementioned is not always feasible.
In~\cite{mrazek2019alwann} the authors avoid retraining and use layer-based approximation
in which a different fixed approximate multiplier from~\cite{mrazek2017evoapproxsb} is used per layer.
In addition, a weight tuning method is employed targeting to reduce the introduced multiplication error.
The work in~\cite{mrazek2020using} extends the approximate multipliers of~\cite{mrazek2017evoapproxsb} and shows that, in simple DNNs, high energy gains and minimal accuracy loss can be achieved, even without retraining.
However, for more complex DNNs the energy gains are not maintained.
Acknowledging the need for runtime reconfiguration, \cite{zervakis2020design} generates approximate multipliers with dynamically reconfigurable accuracy and uses them to apply layer-wise approximation in DNNs by changing the multiplier's accuracy mode per layer.
The work in~\cite{tasoulas2020weight} uses~\cite{zervakis2020design} to generate low variance approximate reconfigurable multipliers, and proposes a weight-oriented approximation for DNN inference.
\cite{hanif2019cann} employs a curable approximation in which the MAC's adder is split into low and high parts and the carry of the low part is accumulated by the neighboring MAC unit.
The carry of the last MAC unit is not corrected.
However, \cite{hanif2019cann} is evaluated on the LeNet architecture, a very shallow network that cannot provide the amount of operations that recent DNNs require.
The work in~\cite{riaz2020caxcnn} uses the Canonic Sign Digit Encoding to represent the weights and employs truncation as an approximation method.
The architecture proposed in~\cite{hammad9cnn} uses Dynamic and Static Segmented approximate multipliers that support high and low precision for the size of the segment.
A trainable input classifier is used to select the required precision per segment.
\cite{park2021design} targets energy consumption minimization of MAC-based signal processing algorithms,
utilizing different fixed approximate multipliers in an interleaved manner to compensate errors during the accumulate operations.
Nevertheless,~\cite{hammad9cnn,park2021design} consider $16$-bit inference and can be deemed inapplicable in modern DNN accelerators that use mainly $8$-bit precision~\cite{jouppi2017datacenter}.
Finally,~\cite{zervakis2021control} introduces a control variate approximation to recover the accuracy loss due to approximate multiplications.
The overall convolution error is estimated at runtime and it is finally accumulated in the output result. However, for the error accumulation, it requires an additional column of MAC units.
\textbf{Our work differentiates} from the state of the art since it does not require retraining and it employs a reconfigurable multiplier that supports positive/negative error as well as accurate execution.
In addition, we propose a filter-oriented weight mapping methodology to map DNNs to the modes of the approximate multiplier so that given accuracy loss constraints are satisfied.
\section{Proposed Methodology}\label{sec:Methodology}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{resources/methodology-overview.pdf}
\caption{An overview of the proposed methodology}
\label{fig:methodology_overview}
\vspace{-15pt}
\end{figure}
An overview of our approach is depicted in Fig.~\ref{fig:methodology_overview}.
First, we present our positive/negative approximate multiplier and show a rigorous error analysis that is exploited in the error optimization of our mapping methodology (Section~\ref{sec:perfmult}). Then, given a trained DNN, we quantize weights and activations to 8-bit (in the range [0, 255]~\cite{jacob2018quantization}) and we apply our mapping methodology, responsible for assigning the weights to the modes of our multiplier (Section~\ref{sec:mapping}).
\subsection{Positive/Negative Approximate Multiplier in NNs}\label{sec:perfmult}
The most complex computation in the CNN inference phase is the convolution operation. The latter is expressed as:
\begin{equation}\label{eq:conv}
G=B+\sum_{i=1}^{k}{W_i\cdot A_i},
\end{equation}
where $W_i$ are the weights, $A_i$ are the input activations, and $B$ is the bias of the neuron.
We assume a microarchitecture similar to Google TPU that comprises a big
systolic MAC array~\cite{jouppi2017datacenter,cass2019taking}.
In addition, we consider a weight-stationary mapping and we replace the exact multipliers with approximate ones.
Denoting $\epsilon_i$ the error of the approximate multiplication:
\begin{equation}\label{eq:multerror}
\epsilon_i = W_i\cdot A_i - W_i\cdot A_i|_{approx}
\end{equation}
the error $\epsilon_G $ of the approximate convolution is given by:
\begin{equation}\label{eq:converror}
\epsilon_G = G - G|_{approx} = \sum_{i=1}^{k}{\epsilon_i}.
\end{equation}
In~\cite{ZervakisTVLSI2016}, the authors proposed an approximate multiplier with predictable (known a priori) error.
The multiplier of~\cite{ZervakisTVLSI2016} eliminates the generation of some partial products (they are set to zero) and thus less partial products need to be accumulated.
The technique in \cite{ZervakisTVLSI2016} always leads to positive error as the approximate product is always smaller than the exact one.
Consequently, when approximating (eliminating) the $z$ least partial products, the error $\epsilon_i$ is given by:
\begin{equation}\label{eq:peerr}
\begin{split}
\epsilon_i &= W_i\cdot A_i - W_i\cdot (A_i-A_i\text{ mod }2^z) \\
&= W_i\cdot r_i, \text{ with } r_i = A_i\text{ mod }2^z.
\end{split}
\end{equation}
Hence, the average multiplication error and the error variance $\forall A_i$ of~\cite{ZervakisTVLSI2016} are given by:
\begin{equation}\label{eq:pestats}
\begin{split}
\mathrm{E}[\epsilon_i] &=\frac{2^z-1}{2}W_i\\
\mathrm{Var}(\epsilon_i) &=\frac{2^{2z}-1}{12}W_i^2.
\end{split}
\end{equation}
The authors in~\cite{LeonMICRO2018} proposed to use switches and control, at run-time, the number of partial products that will be approximated (i.e., set the $z$ value at run-time).
Hence,~\cite{LeonMICRO2018} supports also exact multiplications (i.e., when $z=0$).
In our work, we extend~\cite{LeonMICRO2018} to support three different modes: \emph{Zero Error (ZE), Positive Error (PE), and Negative Error (NE)}.
The ZE mode refers to the exact operation, in which no error is introduced in the multiplication.
In the PE mode, the $z$ least partial products are perforated and thus positive error is obtained.
In the NE mode, we force the generation and accumulation of the $z$ least partial products and thus negative error is obtained.
Fig.~\ref{fig:mux} presents how the three operating modes (ZE, PE, and NE) can be configured at run-time.
Considering the weight-stationary mapping, in both NE and PE modes, the $z$ partial products remain fixed at run-time (for several cycles)
leading to reduced switching activity and thus high power gains.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{resources/mux.pdf}
\caption{Run-time approximate generation of the \textit{n}-th partial product. ZE, PE, and NE modes are supported. Simple partial product generation is used as an illustrative example.}
\label{fig:mux}
\end{figure}
Since, in the NE mode, we force the generation of the partial products, the multiplication error $\epsilon_i$ is given by:
\begin{equation}\label{eq:neerr}
\begin{split}
\epsilon_i &= W_i\cdot A_i - W_i\cdot \big(A_i+(2^z-1-A_i\text{ mod }2^z)\big)\\
&= -W_i\cdot(2^z-r_i-1), \text{ with } r_i = A_i\text{ mod }2^z.
\end{split}
\end{equation}
Thus, in the NE mode, the average multiplication error and the error variance $\forall A_i$ are given by:
\begin{equation}\label{eq:nestats}
\begin{split}
\mathrm{E}[\epsilon_i] &=-\frac{2^z-1}{2}W_i\\
\mathrm{Var}(\epsilon_i) &=\frac{2^{2z}-1}{12}W_i^2.
\end{split}
\end{equation}
As a result, from~\eqref{eq:pestats} and~\eqref{eq:nestats}, the average error and the error variance, $\forall A_i$, of our approximate multiplier are given by:
\begin{equation}\label{eq:axmult}
\begin{split}
\mathrm{E}[\epsilon_i] &=s\frac{2^z-1}{2}W_i\\
\mathrm{Var}(\epsilon_i) &=\frac{2^{2z}-1}{12}W_i^2,
\end{split}
\end{equation}
where $s=1$ in the PE mode, $s=-1$ in the NE mode, and $z=0$ in the ZE mode.
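These error statistics can be checked exhaustively for $8$-bit activations. The following Python sketch is a behavioral model of the three modes (not the hardware implementation); the weight $W=10$ and $z=3$ are illustrative choices:

```python
# Behavioral model of the three modes for a fixed weight W and 8-bit activations.
def approx_mult(w, a, mode, z):
    r = a % (1 << z)
    if mode == "ZE":                 # exact multiplication
        return w * a
    if mode == "PE":                 # perforate the z least partial products
        return w * (a - r)
    if mode == "NE":                 # force-generate them: low z bits act as 1s
        return w * (a + ((1 << z) - 1 - r))

def error_stats(w, mode, z):
    # Exact population mean/variance of the error over all activations 0..255.
    errs = [w * a - approx_mult(w, a, mode, z) for a in range(256)]
    mean = sum(errs) / 256
    var = sum(e * e for e in errs) / 256 - mean ** 2
    return mean, var

w, z = 10, 3
mean_pe, var_pe = error_stats(w, "PE", z)
mean_ne, var_ne = error_stats(w, "NE", z)
# E[eps] = s*(2^z - 1)/2 * W; the variance scales with W^2: (2^{2z} - 1)/12 * W^2.
assert mean_pe == (2**z - 1) / 2 * w == 35.0
assert mean_ne == -mean_pe
assert var_pe == var_ne == (2**(2 * z) - 1) / 12 * w**2 == 525.0
```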
Without any loss of generality, each multiplier in the DNN accelerator can be configured to a different mode, i.e., each multiplier features different $s$ and $z$ values, named $s_i$ and $z_i$ respectively.
Therefore, using~\eqref{eq:converror} and~\eqref{eq:axmult}, the average convolution error $\mathrm{E}[\epsilon_G]$ is given by:
\begin{equation}\label{eq:avgconv}
\mathrm{E}[\epsilon_G] = \sum_{i=1}^{k}{\mathrm{E}[\epsilon_i]} = \sum_{i=1}^{k}{s_i\frac{2^{z_i}-1}{2}W_i}
\end{equation}
and the convolution error variance $\mathrm{Var}(\epsilon_{G})$ is given by:
\begin{equation}\label{eq:varcov}
\begin{split}
\mathrm{Var}(\epsilon_{G}) & = \sum_{i=1}^{k}{\mathrm{Var}(\epsilon_i)}+2\sum_{\mathclap{1\leq i < j \leq k}}{\mathrm{Cov}(\epsilon_i, \epsilon_j)} \\
& = \sum_{i=1}^{k}{\frac{2^{2z_i}-1}{12}W_i^2} + 2\sum_{\mathclap{1\leq i < j \leq k}}{W_iW_j\cancelto{0}{\mathrm{Cov}(r_i,r_j)}} \\
& = \sum_{i=1}^{k}{\frac{2^{2z_i}-1}{12}W_i^2}.
\end{split}
\end{equation}\noindent
in which $r_i$ and $r_j$ are uncorrelated and thus their covariance $\mathrm{Cov}(r_i,r_j)$ is zero.
Exploiting that \eqref{eq:avgconv} and \eqref{eq:varcov} depend only on the weights and leveraging the fact that the weight values are obtained after training and quantization,
we can minimize the convolution error (i.e., minimize~\eqref{eq:avgconv} and \eqref{eq:varcov}) by carefully setting the approximation parameters of each multiplier (i.e., $s_i$ and $z_i$).
Finally, Table~\ref{tab:energy} shows the achieved energy reduction of the PE and NE modes for different $z$ values.
As it can be seen, the energy gains increase as the value of $z$ increases.
However, the magnitude of the multiplication error, both in PE and in NE, becomes larger as well, as calculated by~\eqref{eq:axmult}.
Therefore, in Section~\ref{sec:mapping} we present a method to map the weights to specific modes in order to keep the overall inference accuracy loss low.
The experimental setup and tool flow used to obtain the values reported in Table~\ref{tab:energy} are described in Section~\ref{sec:Evaluations}.
\begin{table}[]
\centering
\caption{Energy gains of our $8$-bit Positive/Negative Approximate Multiplier}
\label{tab:energy}
\begin{tabular}{c|ccc}
Mode & $z=1$ & $z=2$ & $z=3$ \\
\midrule
PE & 8.3\% & 20.23\% & 36.6\% \\
NE & 5.5\% & 16.17\% & 31.8\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Filter- and weight-based mapping}\label{sec:mapping}
In this section, we present a filter-oriented method for mapping the weights of an NN to the three aforementioned modes of the approximate multiplier as well as deciding the value of $z$ for each one of them.
For our analysis, the available values for $z$ are $1$, $2$, and $3$. We also tested values greater than $3$, but the introduced error was very large and violated our tight accuracy thresholds. Our five-step mapping procedure aims to satisfy a given accuracy drop threshold while maximizing the number of weights that are assigned to high $z$ values. Fig.~\ref{fig:methodology_eg} depicts an illustrative example of the steps.
\begin{figure}
\centering
\resizebox{0.93\columnwidth}{!}{\includegraphics{resources/methodology_eg_resilience_2.pdf}}
\caption{Illustrative example of the proposed five-step methodology.}
\label{fig:methodology_eg}
\end{figure}
\underline{\textit{Step 1 - Layer resilience:}} The goal of this step is to identify how error-resilient each convolution layer of the targeted NN is (Fig.~\ref{fig:methodology_eg}~\circled{1}). Initially, we consider that all weights are assigned to the exact mode (i.e., ZE mode). Then, for each layer of the NN separately, starting from the first one, we count the occurrences of each weight value \emph{per filter}.
We define $w_{i,f,l}$ as the number of times that weight value $i$ occurs in filter $f$ of layer $l$.
We take advantage of the positive/negative architecture of the proposed multiplier and assign half of these occurrences ($w_{i,f,l}/2$) to the PE mode and the other half to the NE mode in order to cancel out the introduced error (see \eqref{eq:avgconv}).
If $w_{i,f,l} \% 2 \neq 0$, we map $\left \lfloor{w_{i,f,l}/2}\right \rfloor$ occurrences of weight $i$ to each of the PE and NE modes and the remaining occurrence to the ZE mode, also keeping it in a residue list, unique to each filter, to be used in the last step for further tuning. We call this concept \emph{filter-oriented error balancing.}
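The filter-oriented error balancing described above can be sketched as follows (an illustrative Python sketch; function and variable names are ours, not taken from our implementation):

```python
from collections import Counter

def balance_filter(weights):
    """Filter-oriented error balancing (sketch).

    For each distinct weight value in one filter, half of its
    occurrences go to the PE mode and half to the NE mode so their
    errors cancel; an odd leftover occurrence goes to ZE and is
    recorded in the filter's residue list for Step 5."""
    counts = Counter(weights)
    seen = Counter()
    modes, residue = [], []
    for w in weights:
        seen[w] += 1
        half = counts[w] // 2
        if seen[w] <= half:
            modes.append('PE')
        elif seen[w] <= 2 * half:
            modes.append('NE')
        else:                      # the odd occurrence, if any
            modes.append('ZE')
            residue.append(w)
    return modes, residue
```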
For this particular step, we set $z=3$ (for all weights) for the PE and NE modes, since, compared to $z=2$ and $z=1$, it introduces the highest error but also achieves the highest energy gains.
Since the procedure described above is performed for each layer separately, we record the accuracy of the network each time and determine which layers are more sensitive to approximate multiplications and which show small or no drop in the final network accuracy. Once we have obtained this layer resilience information, the output of this step is a list of the convolution layers of the network sorted by error resilience (i.e., from highest to lowest inference accuracy).
At this point, although the weights assigned to the PE and NE modes (positive and negative error) are balanced, the convolution error is still defined by its variance, as \eqref{eq:varcov} shows.
The latter depends exponentially on $z$.
Hence, this step sorts the layers with respect to their $z$ tolerance.
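The resilience ranking of Step 1 can be sketched as follows; `accuracy_with_layer_approximated` is a hypothetical stand-in for a full inference run with only a single layer approximated ($z=3$, filter-balanced PE/NE):

```python
def rank_layers_by_resilience(layers, accuracy_with_layer_approximated):
    """Step 1 sketch: approximate each layer in isolation, record the
    resulting inference accuracy, and sort layers from most to least
    error-resilient (highest accuracy first)."""
    acc = {l: accuracy_with_layer_approximated(l) for l in layers}
    return sorted(layers, key=lambda l: acc[l], reverse=True)
```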
\underline{\textit{Step 2 - Accumulative inference accuracy:}} In this step, our goal is to find how many layers can be mapped to high approximation (high $z$) simultaneously, using the filter-oriented error balancing method presented in Step 1 (Fig.~\ref{fig:methodology_eg}~\circled{2}). Thus, starting from the most error-resilient layer of the network towards the least resilient one, we map the convolution layers to the PE and NE modes following the procedure described previously, but this time in an accumulative way, still using $z=3$. We stop this step once we have reached the accuracy drop threshold.
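This accumulative procedure can be sketched as below; `accuracy_of` is a hypothetical stand-in for evaluating the network with a given subset of layers approximated (filter-balanced, $z=3$):

```python
def accumulate_layers(ranked_layers, accuracy_of, baseline, threshold):
    """Step 2 sketch: approximate layers cumulatively at z=3, most
    resilient first, stopping at the accuracy-drop threshold."""
    mapped = []
    for layer in ranked_layers:
        # Stop once adding this layer would exceed the allowed drop.
        if baseline - accuracy_of(mapped + [layer]) > threshold:
            break
        mapped.append(layer)
    return mapped
```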
\underline{\textit{Step 3 - Exploring lower approximation:}}
In this step, we repeat the procedure described in Steps 1 and 2, but now with $z=2$ (Fig.~\ref{fig:methodology_eg}~\circled{3}). However, we do not perform the layer resilience and accumulative accuracy process on all convolution layers of the network, but only on those left unmapped after Step 2. When this step is finished, in most cases a portion of the network's convolution layers is mapped to PE/NE using $z=3$, some layers are mapped using $z=2$, and the remainder are mapped to ZE.
\underline{\textit{Step 4 - Fine-grain exploration:}}
Up to this point, the actions described in Steps 1-3 have reduced the search space. Particularly, we explored the error resilience of the NN layers for $z=3$ and $z=2$, but there are still layers entirely mapped to the ZE mode. These layers, which cannot tolerate approximation for $z=3$ and $z=2$, will be mapped to the PE/NE modes with $z=1$. However, this new mapping to $z=1$ can severely impact NN accuracy, violating the accuracy threshold. Hence, in this step we perform a fine-grain exploration of different mapping combinations among the different $z$ values in order to balance this newly introduced error (Fig.~\ref{fig:methodology_eg}~\circled{4}). Additionally, this step lets us perform a more thorough search for further valid mappings and create a Pareto front.
The exploration is performed in three parts.
First, we move the layers mapped to $z=3$ to $z=2$ one by one, starting from the layer that was mapped to $z=3$ last, and we keep the solutions that satisfy the accuracy drop threshold requirement.
Second, we follow the same concept for the layers mapped to approximate modes with $z=2$, this time moving them to $z=1$, while still keeping each mapping combination that satisfies the accuracy drop threshold. This part is a step towards maximizing energy savings, since all the layers mapped to $z=3$ remain this way, but the layers mapped to $z=2$ are being moved to $z=1$ in order to reduce the accuracy drop.
Finally, all layers initially mapped to PE/NE with $z=3$ are moved to $z=1$ approximate modes, while keeping each mapping combination along the way that does not violate the accuracy threshold requirement. This is another way to drastically reduce the introduced error, as the approximation under $z=3$ is more aggressive, mostly relying on the layers mapped to $z=2$ for energy gains. Overall, the output of this step is a list of valid mappings, with varying energy savings, utilized in Step 5 for final tuning.
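The three parts above share the same pattern, sketched below; `accuracy_drop` is a hypothetical stand-in for a full accuracy evaluation of a candidate mapping:

```python
def demote(mapping, layers, z_from, z_to, accuracy_drop, threshold):
    """One part of Step 4's exploration (sketch): move layers one by
    one from z=z_from to z=z_to, last-mapped layer first, keeping
    every intermediate mapping that still meets the threshold."""
    valid, mapping = [], dict(mapping)
    for layer in reversed(layers):
        if mapping.get(layer) == z_from:
            mapping[layer] = z_to
            if accuracy_drop(mapping) <= threshold:
                valid.append(dict(mapping))
    return valid

# The three parts then chain the same helper:
#   part 1: demote(m, z3_layers, 3, 2, ...)
#   part 2: demote(m, z2_layers, 2, 1, ...)
#   part 3: demote(m, z3_layers, 3, 1, ...)
```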
\underline{\textit{Step 5 - Addressing the residue weights:}}
So far, in all previous steps, the weights included in the residue lists of each filter, described in Step 1, have been mapped to the ZE mode. Thus, in this step we take all the mapping configurations found so far that satisfy the accuracy threshold and map these residue weights to either the PE or NE mode (Fig.~\ref{fig:methodology_eg}~\circled{5}). Specifically, for each filter we partition the residue list into \emph{two balanced summation sets} using the Largest Differencing Method~\cite{karmarkar1982efficient} (LDM) algorithm. Then, we map all weight values in the first set to the PE mode and the weight values in the second set to the NE mode. Again, in this step we keep all the solutions that satisfy the accuracy requirement. For all the solutions, the residue weights are mapped to approximate modes starting with $z=1$, then $z=2$, and finally $z=3$, in an attempt to push the approximation further for better energy results.
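The LDM partitioning used here can be sketched with the classic Karmarkar--Karp differencing heuristic (an illustrative implementation, not our production code):

```python
import heapq

def ldm_partition(values):
    """Largest Differencing Method (Karmarkar-Karp), sketch.

    Heuristically splits `values` into two subsets with near-equal
    sums; in Step 5 one subset goes to PE and the other to NE so the
    residual errors largely cancel."""
    if not values:
        return [], []
    # Max-heap of (difference, subset_a, subset_b); heapq is a
    # min-heap, so differences are stored negated.
    heap = [(-v, [v], []) for v in values]
    heapq.heapify(heap)
    while len(heap) > 1:
        d1, a1, b1 = heapq.heappop(heap)      # largest difference
        d2, a2, b2 = heapq.heappop(heap)      # second largest
        # Oppose the two larger sides so their difference shrinks.
        heapq.heappush(heap, (d1 - d2, a1 + b2, b1 + a2))
    _, set_a, set_b = heap[0]
    return set_a, set_b
```

Note that, as discussed below, LDM only approximates a balanced split; the leftover difference is why \eqref{eq:avgconv} may deviate slightly from zero after this step.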
Overall, targeting high energy gains, our mapping methodology aims at assigning each weight to either PE or NE with a high $z$ value (see Table~\ref{tab:energy}).
Steps 1-4 perform an exploration in which entire layers are approximated (see mapping procedure in Step 1) using a greedy procedure that tries to find the highest $z$ value per layer.
After Step 4, the focus shifts to the residue weights, which up to that point are mapped to ZE.
Considering~\eqref{eq:avgconv}, and since up to now the positive and negative error weights are completely balanced, the average convolution error in Steps 1-4 is zero.
Therefore, only the convolution error variance~\eqref{eq:varcov} affects the accuracy.
Finally, in Step 5 we focus on assigning residue weights to non-ZE modes (i.e., on further boosting the energy gains).
Note that LDM aims to create subsets whose sums are as equal as possible, but it does not guarantee a perfectly balanced final partitioning.
Thus, after Step 5, \eqref{eq:avgconv} is close to zero (as we discuss later) but might not be exactly zero.
For this reason, applying LDM from the beginning (Step 1) would lead to sub-optimal solutions.
Using LDM for all filter weights would result in a biased error (non-zero~\eqref{eq:avgconv}); thus, during the $z$ optimization both \eqref{eq:avgconv} and~\eqref{eq:varcov} would contribute to the accuracy loss, resulting in smaller $z$ values per layer and/or a more complex $z$ allocation procedure.
Considering \eqref{eq:avgconv}, the efficiency of the error balancing (i.e., how close \eqref{eq:avgconv} will be to zero) depends on the weight values.
Weight values close to each other increase the probability of error cancellation when employing our positive/negative approximation.
Fig.~\ref{fig:weight_act_distr} shows the weight value distributions for two different NNs: GoogleNet~\cite{szegedy2015going} and ResNet20~\cite{he2016deep} on the CIFAR-10 dataset.
As shown, for both NNs, the weight distributions are close to normal and weight values feature low dispersion.
Finally, setting the mode of operation is seamlessly performed at run-time, as the mapping decision is stored with the weight values (i.e., $3$ bits per weight to encode $z$, ZE, and NE). As described in~\cite{riaz2020caxcnn}, targeting recent batch processing DNN accelerators, the storage requirements for similar methods are low since the required memory space is averaged over the entire batch.
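Three bits suffice because there are only seven distinct configurations (ZE, plus PE and NE for each $z \in \{1,2,3\}$). One possible encoding (the concrete code assignment below is illustrative, not taken from our implementation):

```python
def encode(mode, z=0):
    """Pack a (mode, z) configuration into a 3-bit code:
    0 -> ZE, 1..3 -> PE with z=code, 4..6 -> NE with z=code-3."""
    if mode == 'ZE':
        return 0
    return z if mode == 'PE' else 3 + z

def decode(code):
    """Inverse of encode()."""
    if code == 0:
        return ('ZE', 0)
    return ('PE', code) if code <= 3 else ('NE', code - 3)
```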
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{resources/weights_distributions.pdf}
\caption{Distributions of weight values for all layers of the GoogleNet and ResNet20 NNs on the CIFAR-10 dataset under 8-bit quantization.}
\label{fig:weight_act_distr}
\end{figure}
\section{Results and Evaluation}\label{sec:Evaluations}
In this section, we provide the experimental evaluation of our proposed method in terms of energy savings and accuracy loss.
As MAC operations consume a very significant portion of total energy cost, we evaluate the energy reduction w.r.t. the MAC operations.
Note that MAC units are the basic building blocks of any DNN accelerator.
Additionally, we present comparative results against a variety of state-of-the-art techniques.
For the accuracy evaluation we consider seven DNNs of varying size and characteristics: ResNet20~\cite{he2016deep}, ResNet32~\cite{he2016deep}, ResNet44~\cite{he2016deep}, ResNet56~\cite{he2016deep}, MobileNetv2~\cite{sandler2018mobilenetv2}, GoogleNet~\cite{szegedy2015going}, and ShuffleNet~\cite{zhang2018shufflenet}.
The DNNs were trained on four different datasets: CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, GTSRB~\cite{stallkamp2012man} and LISA~\cite{mogelmose2012vision}.
Overall, $28$ models are considered in our analysis.
In all experiments, 8-bit post-training quantization is used~\cite{jacob2018quantization}.
\subsection{Overview of Methods under Comparison }\label{sec:methods}
To evaluate our method, we conducted experiments comparing against other state-of-the-art methods that employ approximate computing techniques, such as fixed approximation across all layers of an NN or similar fine-grain weight-based approximation mapping. Specifically, we chose the following methods for comparison:
\underline{Exact}: This method uses exact 8-bit multipliers and is therefore the baseline for our experiments.
\underline{ALWANN}~\cite{mrazek2019alwann}: This method utilizes approximate multipliers from the library in~\cite{mrazek2017evoapproxsb} and employs weight-tuning to minimize the error that the approximate multiplications incur. Note that all multipliers used in this method are fixed and do not comprise different modes of operation. Additionally, this method utilizes non-uniform approximate architectures across the network (i.e., a different approximate multiplier per layer), limiting flexibility and applicability to other networks and datasets when implemented in hardware. For this reason, and for fair comparisons, we consider a homogeneous architecture for~\cite{mrazek2019alwann}. In our evaluation, for each use case, we considered all of the Pareto-optimal approximate multipliers described in~\cite{mrazek2017evoapproxsb}, as different NNs might require a different approximate multiplier from~\cite{mrazek2017evoapproxsb} to satisfy the accuracy loss threshold.
\underline{LVRM}~\cite{tasoulas2020weight}: This is a more fine-grain weight mapping approach that employs a low-variance reconfigurable multiplier and additionally applies a constant error correction term by modifying the biases of the filters.
\underline{ConVar}~\cite{zervakis2021control}: This work uses fixed approximation enhanced with a run-time error correction method. \cite{zervakis2021control} induces high approximation at the multiplier level to achieve high energy gains and relies on the error correction to achieve high accuracy at the convolution and, consequently, inference levels.
\underline{Filter Balanced Sets (FBS)}: In this method we use the proposed positive/negative multiplier and we employ the concept of LDM on all the weights of all the layers to create two balanced summation sets per filter, instead of applying this concept only to the residue weights as we do in our methodology. By comparing with this method, we want to showcase that just creating balanced sets per layer (from Step 1) leads to a biased error and suboptimal results. For a fair comparison, we tried all $z$ combinations and selected the one that satisfies the accuracy thresholds and yields the highest energy gains.
Like our methodology, none of the aforementioned methods requires retraining.
In addition, they enable us to evaluate our work against the state of the art, i.e., fixed approximation with statistical error correction~\cite{mrazek2019alwann} and~\cite{zervakis2021control}, but also against more fine-grain run-time reconfigurable approximation~\cite{tasoulas2020weight}.
\emph{We additionally evaluated the methods presented in~\cite{sarwar2018energy,ansari2019improving,mrazek2020using} and~\cite{hammad9cnn}}; however, they are not included in our analysis for the following reasons. The works in~\cite{sarwar2018energy,ansari2019improving} require retraining of the considered NNs. On the other hand, our proposed method, ALWANN~\cite{mrazek2019alwann}, LVRM~\cite{tasoulas2020weight}, and ConVar~\cite{zervakis2021control} do not require retraining, also eliminating the associated time overhead. By bypassing NN retraining, the accuracy delivered by~\cite{sarwar2018energy,ansari2019improving} was poor and did not satisfy any of the considered accuracy thresholds. Furthermore, the work in~\cite{hammad9cnn} is based on 16-bit inference and also produced very poor accuracy when considering 8-bit quantization, as in our analysis. Additionally, although the work in~\cite{mrazek2020using} provided acceptable results for the CIFAR-10 dataset, it did not result in admissible accuracy losses for the CIFAR-100 dataset. The latter is in compliance with the authors' conclusion that for simple models retraining can be avoided when using approximate multipliers, but for complex ones the accuracy drops significantly without retraining. Consequently, we did not include the aforementioned works in our evaluation, since we aim for strict accuracy loss constraints that~\cite{sarwar2018energy,ansari2019improving,mrazek2020using,hammad9cnn} almost always failed to satisfy.
\subsection{Experimental Setup}
As we target high accuracy, we consider the following accuracy drop thresholds: $0.5\%$, $0.75\%$, and $1\%$. All the aforementioned NNs are trained on each dataset described above.
Specifically, all NNs are trained using the Tensorflow machine learning library~\cite{tensorflow2015-whitepaper}, and are then frozen and quantized to 8-bit. The accuracy evaluations are performed by describing in \texttt{C} all the approximate multipliers and using the approximate extension of Tensorflow proposed in~\cite{VaverkaDATE2020}.
Accuracy loss is calculated w.r.t. the accuracy achieved by the 8-bit quantized model with exact multiplications.
Regarding the energy gains, we describe all the examined MAC units in Verilog RTL and industry-strength tools are used for the hardware analysis.
All the MAC units are synthesized using Synopsys Design Compiler and are mapped to a $14$nm technology library calibrated with Intel data~\cite{amrouch2020npu}.
The \texttt{compile\_ultra} command is used for synthesis targeting the maximum frequency that the exact design achieves.
We run post-synthesis timing simulations using Mentor Questasim and $1$ million randomly generated inputs to capture the switching activity of the MAC units.
The switching activity is fed to Synopsys PrimeTime to calculate the power consumption.
In each MAC unit we replace the multiplier with the respective approximate one.
To be in compliance with~\cite{mrazek2019alwann} and~\cite{tasoulas2020weight}, in order to implement our approximate multiplier we used the exact multiplier (1JFF) from~\cite{mrazek2017evoapproxsb} as baseline.
Similarly, 1JFF is used in the exact MAC unit.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{resources/cifar10_BS.pdf}
\caption{CIFAR-10 normalized MAC operation energy savings.}
\label{fig:cifar10_results}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{resources/cifar100_BS.pdf}
\caption{CIFAR-100 normalized MAC operation energy savings. For the $0.5\%$ and $0.75\%$ thresholds for the ResNet44 case, ConVar~\cite{zervakis2021control} could not produce acceptable solutions, while ALWANN~\cite{mrazek2019alwann} resulted in nearly $0\%$ gains in energy for ResNet44 and ResNet56.}
\label{fig:cifar100_results}
\end{figure*}
\subsection{Results}
For each of the four datasets we considered, Figs.~\ref{fig:cifar10_results}--\ref{fig:lisa_results} show the respective results for all NNs and all three accuracy thresholds ($0.5\%$, $0.75\%$, and $1\%$). Specifically, Figs.~\ref{fig:cifar10_results}--\ref{fig:lisa_results}
report the energy reduction achieved by our method as well as by the state of the art. Energy reduction is calculated w.r.t. the energy consumption of the exact design.
As mentioned in Section~\ref{sec:methods}, for ALWANN~\cite{mrazek2019alwann} we considered all the approximate multipliers from the library in~\cite{mrazek2017evoapproxsb} and show the results from the multiplier that yielded the highest energy gains for each accuracy threshold. Additionally, we evaluated FBS for $z\in[1,3]$ and included the results that yielded the highest energy gains.
Fig.~\ref{fig:cifar10_results} shows the energy savings for the CIFAR-10 dataset.
Overall, across all $7$ NNs our approach achieves an average of $17.43\%$ in energy gains for all the considered accuracy thresholds when compared to the exact mode of operation.
Specifically, our method sustains an $18\%$ energy reduction for the ResNet20 and ResNet32, $22\%$ for the ResNet44 and ResNet56, and $12\%$ for the ShuffleNet. Some variation in energy gains is observed for the MobileNetv2 (from $11\%$ to $13\%$), and the greatest variation is observed for the GoogleNet, where our method achieves from $14\%$ up to $20\%$ energy gains. Furthermore, we observe that the energy reduction gains increase as the NNs become deeper with more layers (e.g., ResNet56).
ALWANN~\cite{mrazek2019alwann} and LVRM~\cite{tasoulas2020weight} achieve on average $3.77\%$ and $11.57\%$ gain in energy respectively.
The multiple modes of approximation that LVRM~\cite{tasoulas2020weight} introduces can adapt to various accuracy thresholds, and in this case manages to slightly surpass our proposed method's gains by $1\%$ for the $0.5\%$ threshold for MobileNetv2. The ConVar~\cite{zervakis2021control} method sustains a $19.2\%$ gain in energy, surpassing our proposed method's results by $1.77\%$, while FBS resulted in an average of $7.8\%$ in energy gains.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{resources/GTSRB_BS.pdf}
\caption{GTSRB normalized MAC operation energy savings. For the $0.5\%$ threshold for the ShuffleNet case, ConVar~\cite{zervakis2021control} could not produce acceptable solutions, while ALWANN~\cite{mrazek2019alwann} resulted in nearly $0\%$ gains in energy for ResNet44.}
\label{fig:gtsrb_results}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{resources/LISA_BS.pdf}
\caption{LISA normalized MAC operation energy savings. For the $0.5\%$ threshold for the MobileNetv2 case, ConVar~\cite{zervakis2021control} could not produce acceptable solutions, while ALWANN~\cite{mrazek2019alwann} resulted in nearly $0\%$ gains in energy for ResNet32.}
\label{fig:lisa_results}
\end{figure*}
Fig.~\ref{fig:cifar100_results} shows the energy savings for the CIFAR-100 dataset. For this dataset, the achieved gains are the lowest we observed in our evaluation since it is a more challenging dataset~\cite{mrazek2020libraries}.
However, our proposed method still attained an average gain of approximately $15.33\%$ across all NNs and accuracy thresholds.
The lowest gains on this dataset were observed for the ResNet20 ($8\%$) and for the MobileNetv2 ($12\%$). However, our method's maximum achievable gains in energy for this dataset are still high, reaching up to $22\%$ for the ResNet56 and $21\%$ for the ResNet32. For the Mobilenetv2 our method achieves similar energy gains to LVRM~\cite{tasoulas2020weight} (from $11\%$ to $13\%$), and for the ResNet20 LVRM~\cite{tasoulas2020weight} surpasses our method's gains by an average of $4.5\%$ for the three accuracy thresholds. Overall, LVRM~\cite{tasoulas2020weight} and ALWANN~\cite{mrazek2019alwann} achieve an average of $11.71\%$ and $2.88\%$ in energy gains respectively, i.e., $1.31$x and $5.32$x lower than our method.
Again, ALWANN~\cite{mrazek2019alwann} does not result in significant energy gains for this dataset, while LVRM~\cite{tasoulas2020weight} maintains energy gains above a minimum $11\%$, without however achieving energy gains above $13\%$ across all NNs.
In this dataset, ConVar~\cite{zervakis2021control} again reached $19.2\%$ in energy gains, however not across all considered NNs. Specifically, the considered multiplier \emph{could not satisfy} either of the $0.5\%$ and $0.75\%$ accuracy thresholds for ResNet44, yielding no results in these cases and ending up with an average of $17.37\%$ in energy gains for the CIFAR-100 dataset. Similarly, FBS did not produce acceptable results for any of the accuracy thresholds for GoogleNet and ResNet20 and only satisfied the $1\%$ threshold for the MobileNetv2 and ResNet32 NNs, while also failing to satisfy the $0.5\%$ threshold for ResNet56, ending up with an average energy gain of just $2.11\%$. This behavior validates our argument that the proposed filter-oriented error balancing method produces better results.
Fig.~\ref{fig:gtsrb_results} depicts the energy savings for the GTSRB dataset. Our proposed method achieves an average of $18.71\%$ in energy savings across all NNs and accuracy thresholds, while LVRM~\cite{tasoulas2020weight} and ALWANN~\cite{mrazek2019alwann} achieve $11.81\%$ and $3.44\%$, respectively. The lowest observed value in the attained energy savings of our method is approximately $12\%$ for the MobileNetv2, while the highest is $23\%$ for GoogleNet for the $1\%$ threshold. Excluding MobileNetv2, our method shows energy gains of at least $17\%$ for the rest of the NNs while respecting all accuracy requirements. Again, ConVar~\cite{zervakis2021control} \emph{could not produce an acceptable result} for the $0.5\%$ threshold for ShuffleNet, ending up with an average of $18.29\%$ in energy savings. FBS also failed to produce acceptable results, specifically for ShuffleNet for all the considered thresholds, resulting in $9.98\%$ energy savings on average across the dataset.
Finally the corresponding results for the LISA dataset are shown in Fig.~\ref{fig:lisa_results}. For this dataset, our method achieves the highest energy gains observed throughout our evaluation experiments, being $21.86\%$ on average (across all NNs and thresholds). In this dataset, LVRM~\cite{tasoulas2020weight} and ALWANN~\cite{mrazek2019alwann} achieve $12.57\%$ and $7\%$ respectively.
Specifically, the minimum observed energy gain of our proposed method was $18\%$ for MobileNetv2. Additionally, our method achieved $20\%$ in energy gains for ResNet20 and $23\%$ for the rest of the NNs for all the accuracy thresholds. For this dataset, the average energy gain achieved by ALWANN~\cite{mrazek2019alwann} was double that of the other datasets. For the MobileNetv2, ShuffleNet, and GoogleNet NNs, the gains of LVRM~\cite{tasoulas2020weight} were similar to, and in some threshold cases lower than, those of ALWANN~\cite{mrazek2019alwann}. For the $0.5\%$ threshold for MobileNetv2, ConVar~\cite{zervakis2021control} \emph{again failed to meet the requirement} and did not result in acceptable accuracy, ending up with an average of $18.29\%$ in energy savings for this dataset as well. Likewise, FBS did not produce acceptable results for any of the considered thresholds for MobileNetv2, ending up with an average of $9.94\%$ in energy gains.
The NNs that we considered in our evaluation (shown in Fig.~\ref{fig:cifar10_results}~-~\ref{fig:lisa_results}) were trained on $4$ different datasets, and the methods we included in our comparison did not require retraining. Our method resulted in solution mappings that respected all considered accuracy thresholds for all NNs, yielding high gains in energy for every case.
Our methodology achieves overall higher energy gains than the corresponding reconfigurable weight-oriented method presented in LVRM~\cite{tasoulas2020weight}, surpassing it in some cases by as much as $12\%$. On average, our method achieved $18.33\%$, while ConVar~\cite{zervakis2021control} yielded $18.29\%$, LVRM~\cite{tasoulas2020weight} $11.9\%$, FBS $7.46\%$, and ALWANN~\cite{mrazek2019alwann} $4.27\%$ in terms of energy savings. ConVar~\cite{zervakis2021control} was the method that reached and maintained energy savings similar to our proposed method's, in some cases surpassing our results by up to $11\%$ (ResNet20 for CIFAR-100). However, ConVar~\cite{zervakis2021control} failed \emph{repeatedly} to satisfy the given accuracy thresholds, as opposed to our technique, which always satisfied the accuracy constraints. By pushing the approximation further, we were able to counteract smaller energy gains in some cases with greater gains in others, reaching average energy savings similar to ConVar~\cite{zervakis2021control}.
FBS also failed to produce acceptable solutions on multiple occasions, justifying our choice to only employ LDM on smaller sets of weight values in the final step of our mapping methodology.
\section{Conclusion}\label{sec:Conclusion}
In this work, we present an approximate multiplier that can be configured to generate positive, negative, or no error.
Our mathematical analysis demonstrates that by leveraging the known weight values to carefully set the modes of our approximate multiplier per weight, we can minimize the convolution error and thus attain high inference accuracy.
To achieve this, we propose a filter-oriented mapping methodology that aims to satisfy a given accuracy drop threshold while maximizing the applied approximation, targeting high energy efficiency.
Our extensive experimental evaluation shows that our filter-oriented approximation with our positive/negative multiplier significantly outperforms the state of the art.
It is noteworthy that our proposed technique does not require DNN retraining.
\section*{Acknowledgement}
This work is supported in part by the German Research Foundation (DFG) through the project ``ACCROSS: Approximate Computing aCROss the System Stack''.
\newpage
\balance
\section{Introduction}
An indication that certain evolution equations might have remarkable
mathematical properties came with the discovery of an infinite number
of conservation laws for the Korteweg-de Vries (KdV) equation,
$u_t + u u_x + u_{3x} = 0 $.
The conserved quantities $u$ and $u^2$, corresponding to conservation of
momentum and energy, respectively, were long known, and Whitham (1974)
had found a third one, $u^3 - 3 u_x^2,$ which corresponds to Boussinesq's
moment of instability.
Zabusky and Kruskal found a fourth and fifth. However, the search for
additional conserved densities was halted due to a mistake in
their computations.
Miura eventually continued the search and, beyond the missing sixth,
found an additional three conserved densities (Newell, 1983).
It became clear that the KdV equation had an infinite sequence of
conservation laws, later proven to be true.
The existence of an infinity of conserved densities was an important link
in the discovery of other special properties of the KdV equation
(Zakharov, 1990).
It led, for example, to the construction of the Miura transformation,
which connects the solutions of the KdV and modified KdV equations.
Consequently, the famous Lax pair was found, which associates a couple of
linear equations to the KdV equation.
From that, the inverse scattering technique (IST) for direct linearization of
integrable equations was developed, and it was then shown that the KdV
equation, and many other integrable equations, admit bi-Hamiltonian structures.
There are several motives to construct conserved densities of
partial differential equations (PDEs) explicitly.
The first few conservation laws have a physical interpretation.
Additional ones may facilitate the study of both quantitative and
qualitative properties of solutions (Scott {\em et al.\/}, 1973).
Furthermore, the existence of a sequence of conserved densities
(perhaps with gaps) predicts integrability.
Yet, the non-existence of conserved quantities does not preclude integrability.
There are indeed equations, viz. dissipative equations, with only
{\em one} conserved density, which can be directly integrated.
The most notable is the Burgers equation, which can be transformed
into the {\it linear} heat equation via the Cole-Hopf transformation
(Zakharov, 1990).
Yet another compelling argument to explicitly construct conserved
densities relates to the numerical solution of PDEs.
It is desirable that the semi-discretization conserves the discrete
analogues of the continuous conserved quantities.
In particular, the conservation of a positive definite quadratic quantity
may prevent the occurrence of nonlinear instabilities in the numerical
scheme.
The use of conservation laws in solving the Boussinesq equation
numerically has been illustrated in Hickernell (1983).
Sanz-Serna (1982) describes a scheme for the integration in time of PDEs,
which is explicit and capable of conserving discretized quadratic functionals.
Since $u$ and $u^2$ are conserved densities for the KdV equation,
a discrete scheme should have conservation of
momentum, $\sum_{j} U_{j}^{n},$ and energy $\sum_{j} {[U_{j}^{n}]}^2.$
In Sanz-Serna (1982), an explicit self-adaptive conservative scheme with
conservation of energy
and momentum is given.
Conservation of energy implies boundedness of the solutions, and therefore
obviates the occurrence of any blowup phenomena.
For more details about numerical applications of conservation laws see
LeVeque (1992).
We present a new symbolic algorithm for computing closed-form
polynomial-type conservation laws for systems of nonlinear
evolution equations. Our algorithm is also applicable to wave equations,
viz. the Boussinesq equation, provided that the PDE (of higher-order in time)
can be written as a system of evolution equations (first order in time).
In contrast to fairly complicated algorithms designed by Bocharov (1991),
Gerdt and Zharkov (1990), and Gerdt (1993), we introduce an algorithm that
is based in part on ideas presented in
Hereman and Zhuang (1995), Ito (1994), and Ito and Kako (1985),
Kruskal {\em et al.\/} (1970), Kodama (1985),
Miura {\em et al.\/} (1968), Verheest and Hereman (1995), and
Willox {\em et al.\/} (1995).
Our algorithm has the advantage that it is fairly straightforward to
implement in any symbolic language.
We also present a software package {\bf condens.m}, written in
{\it Mathematica} syntax, which automates the tedious computation
of closed-form expressions for conserved densities and fluxes.
In Section 2, we give the definitions of conservation law, density and flux.
We also state a theorem from calculus of variations about the Euler-Lagrange
equations, which will play a role in our algorithm.
The remainder of the section is devoted to a detailed exposition of the
algorithm, which was implemented in {\it Mathematica}, and successfully
tested on many well-known evolution systems from soliton theory.
In Section 3, we address applications of our program {\bf condens.m}.
In particular, we show how it could be used in a search for integrable
fifth-order evolution equations of KdV type, where we retrieved all the
known integrable cases.
We carried out a similar computer search for a parameterized class of
seventh-order evolution equations.
More examples and test results are presented in Section 4.
In Section 5, we describe the usage of our code, and indicate
its capabilities and limitations.
In Section 6, a comparison with other programs is given.
Also, several ongoing projects are briefly addressed.
\section{Computation of Conserved Densities}
\subsection{Definitions}
For simplicity, consider a single PDE,
\begin{equation}\label{pde}
\Delta (x,t,u(x,t)) = 0,
\end{equation}
where $t \in I\!\!R $ denotes time, $x \in I\!\!R$ is the spatial
variable, and $u(x,t) \in I\!\!R$ is the dependent variable.
A {\em conservation law} is of the form
\begin{equation}\label{conslaw}
{\rm D}_{t} \rho + {\rm D}_{x} J = 0,
\end{equation}
which is satisfied for all solutions of (\ref{pde}).
The functional $\rho(x,t)$ is the {\em conserved density\/}, and
$J(x,t)$ is the associated {\em flux\/}.
Both are, in general, functions of $x$, $t$, $u$, and the partial
derivatives of $u$ with respect to $x.$
Furthermore, ${\rm D}_{t}$ denotes the total derivative with respect to $t$;
${\rm D}_{x}$ the total derivative with respect to $x$
(Ablowitz and Clarkson, 1991).
Specifically, $\rho$ is a {\it local} conserved density if $\rho$ is a
local functional of $u$, i.e. if the value of $\rho$ at any $x$ depends
only on the values of $u$ in an arbitrarily small neighborhood of $x.$
If $J$ is also local, then (\ref{conslaw}) is a {\it local conservation law}.
In particular, if $\rho$ is a polynomial in $u$ and its $x$ derivatives,
and does not depend explicitly on $x$ or $t$, then $\rho$ is called
a {\it polynomial} conserved density.
If $J$ is also such a polynomial, then (\ref{conslaw}) is
called a {\it polynomial conservation law}.
There is a close relationship between constants of motion and
conservation laws.
Indeed, for polynomial-type $\rho$ and $J$, integration of
(\ref{conslaw}) yields
\begin{equation} \label{const}
P = \int_{-\infty}^{+\infty} \rho \; dx = {\rm constant},
\end{equation}
provided that $J$ vanishes at infinity.
For ordinary differential equations, the $P\/$'s are constants of motion.
\begin{example}
{\rm The most famous evolution equation from soliton theory, the
KdV equation (Miura, 1968),
\begin{equation} \label{KdVeq}
u_t + u u_x + u_{3x} = 0 ,
\end{equation}
is known to have infinitely many polynomial conservation laws.
The first three are
\begin{eqnarray}
& &(u)_t + \left ( {1 \over 2} u^2 + u_{2x} \right )_x = 0,\\
& & \left (u^2 \right)_t +
\left ( {2 \over 3} u^3 - {u_x}^2 + 2 u u_{2x} \right)_x = 0,\\
& &\left ( u^3 - 3 {u_x}^2 \right )_t +
\left ( {3 \over 4} u^4 - 6 u {u_x}^2 + 3 u^2 u_{2x} + 3 {u_{2x}}^2 -
6 u_x u_{3x} \right)_x = 0.
\end{eqnarray}
}
\end{example}
The first two express conservation of momentum and energy, respectively.
They are easy to compute by hand.
The third one, less obvious and requiring more work, corresponds
to Boussinesq's moment of instability (Newell, 1983).
Observe that the KdV equation and its densities $\rho = u, u^2 $ and
$u^3 - 3 {u_x}^2 $ are all invariant under the scaling symmetry
\[
(x,t,u) \rightarrow (\lambda x, \lambda^3 t, \lambda^{-2} u),
\]
where $\lambda$ is a parameter.
Stated differently, $u$ carries the weight of two derivatives with respect
to $x$, denoted symbolically by
\[ u \sim {\partial^2 \over \partial x^2}.
\]
Scaling invariance, which is a special Lie-point symmetry, is an intrinsic
property of many integrable nonlinear PDEs.
Our algorithm exploits this idea in the construction of conserved densities.
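This weight bookkeeping is easy to mechanize. The following minimal sketch (our own illustration, not part of {\bf condens.m}) checks uniformity in rank for the KdV equation under the scaling above, where $u$ carries weight 2 and $\frac{\partial}{\partial t}$ carries weight 3:

```python
# A minimal sketch (our own illustration, not part of condens.m):
# verify that every term of the KdV equation u_t + u u_x + u_{3x} = 0
# has the same rank under u ~ d^2/dx^2 and d/dt ~ d^3/dx^3.
W_U, W_DT = 2, 3  # weights implied by the scaling symmetry

def rank(u_power, x_derivs, t_derivs):
    """Total weight of a term u^a (d/dx)^b (d/dt)^c."""
    return u_power * W_U + x_derivs + t_derivs * W_DT

ranks = [rank(1, 0, 1),  # u_t
         rank(2, 1, 0),  # u u_x
         rank(1, 3, 0)]  # u_{3x}
print(ranks)  # -> [5, 5, 5]: the equation is uniform in rank
assert len(set(ranks)) == 1
```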
\subsection{Euler Operator}
We introduce a tool from calculus of variations: the Euler operator, which is
very useful for testing if an expression is a total derivative
(Ito and Kako, 1985; Olver, 1986), without having to carry out any
integrations by parts.
\begin{theorem}\label{thm5}
If $f = f(x,y_1,\ldots,y_1^{(n)},\ldots,y_{N},\ldots,y_{N}^{(n)})$,
then ${\cal L\/}_{\vec{y}}(f) \equiv \vec{0},$
if and only if
\vskip 3pt
\noindent
${\displaystyle f = \frac{d}{dx}\, g}$,
where $ g = g(x,y_1,\ldots,y_1^{(n-1)},\ldots,
y_N,\ldots,y_N^{(n-1)}) $.
\end{theorem}
\noindent
In this theorem, for which a proof can be found in
Olver (1986, p. 252),
$\vec{y} = [y_1,\ldots,y_N]^{T},$
\vskip 4pt
\noindent
$ {\cal L\/}_{\vec{y}}(f) = [{\cal L\/}_{y_1}(f),\ldots,
{\cal L\/}_{y_N}(f)]^T, \; $
$ \vec{0} = [0,\ldots,0]^{T}, $ with $T$ for transpose, and where
\[{\cal L\/}_{y_i} = \frac{\partial }{\partial{y_i}} -
\frac{d}{dx}( \frac{\partial }{\partial{{y_i}'}})+
\frac{d^2}{dx^2}( \frac{\partial }{\partial{{y_i}''}})+\cdots+
(-1)^n \frac{d^n}{dx^n} (\frac{\partial }{\partial{{y_i}^{(n)}}}),
\]
is the {\em Euler operator} (or variational derivative).
We will use this theorem in our algorithm.
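To make the theorem concrete, here is a small sketch (our own Python illustration, under the simplifying assumptions of one dependent variable $u(x)$ and no explicit $x$ dependence) of the Euler operator acting on differential polynomials; as the theorem states, it annihilates exactly the total $x-$derivatives:

```python
from fractions import Fraction
from collections import defaultdict

# A differential polynomial in u, u_x, u_{2x}, ... is a dict mapping an
# exponent tuple (e0, e1, ..., eK) to its coefficient, where e_k is the
# power of the k-th x-derivative of u.

def norm(m):
    """Strip trailing zero exponents so equal monomials compare equal."""
    m = list(m)
    while m and m[-1] == 0:
        m.pop()
    return tuple(m)

def add(p, q):
    r = defaultdict(Fraction, p)
    for m, c in q.items():
        r[m] += c
    return {m: c for m, c in r.items() if c}

def pdiff(p, k):
    """Partial derivative with respect to u_{kx}."""
    r = defaultdict(Fraction)
    for m, c in p.items():
        if k < len(m) and m[k] > 0:
            mm = list(m)
            mm[k] -= 1
            r[norm(mm)] += c * m[k]
    return dict(r)

def total_x(p):
    """Total derivative D_x (chain rule: u_{kx} contributes u_{(k+1)x})."""
    r = {}
    for k in range(max((len(m) for m in p), default=0)):
        for m, c in pdiff(p, k).items():
            mm = list(m) + [0] * (k + 2 - len(m))
            mm[k + 1] += 1
            r = add(r, {norm(mm): c})
    return r

def euler(p):
    """Euler operator L_u(p) = sum_k (-1)^k D_x^k (dp/du_{kx})."""
    r = {}
    for k in range(max((len(m) for m in p), default=0)):
        term = pdiff(p, k)
        for _ in range(k):
            term = total_x(term)
        r = add(r, {m: c if k % 2 == 0 else -c for m, c in term.items()})
    return r

# g = u^3 + u u_{2x}; L_u must annihilate its total x-derivative.
g = {(3,): Fraction(1), (1, 0, 1): Fraction(1)}
assert euler(total_x(g)) == {}            # in the kernel of L_u
assert euler({(2,): Fraction(1)}) == {(1,): Fraction(2)}  # L_u(u^2) = 2u
```

The asserts confirm both directions of the test in practice: a total derivative is sent to zero, while $u^2$, which is not a total derivative, is not.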
\subsection{Algorithm}
We now describe our algorithm to compute polynomial conservation laws
(\ref{conslaw}) for systems of nonlinear evolution equations of
order $n,$
\begin{equation}\label{multisys}
{\vec u}_t +
{\vec {\cal F}} ({\vec u}(x,t),{\vec u~}'(x,t),...,{\vec u}^{(n)}(x,t))
= \vec{0},
\end{equation}
where ${\vec u} = [u_1,\ldots,u_N]^T\/$, or, component-wise,
\begin{equation} \label{givensystem}
u_{i,t} + {\cal F}_i (u_j, u_{j}^{'}, u_{j}^{''}, \ldots, u_{j}^{(n)} ) = 0,
\quad i=1,2,\ldots,N,\quad j=1,2,\ldots,N,
\end{equation}
where
\[{\displaystyle u_{i,t} \buildrel \rm def \over =
\frac{\partial{u_i}}{\partial{t}} ,\; \quad \;
{u_j}^{(n)} = u_{j,nx} \buildrel \rm def \over =
\frac{\partial^n{(u_j)}}{\partial{x^n}},}
\]
and all components of ${\vec u} $ depend on $x$ and $t.$
\vskip 4pt
\noindent
Our goal is to compute the densities
$\rho({\vec u},...,{\vec u}^{(m_1)}) $ and the fluxes
$ J({\vec u},...,{\vec u}^{(m_2)}) $ of order $m_1$ and $m_2$, respectively.
There will be no restriction on the order $n$ of the system (\ref{multisys}),
or on its degree of nonlinearity. But, all the components of ${\vec {\cal F}}$
must be polynomial in their arguments.
Furthermore, we only consider systems of evolution equations in $t$ with
one spatial variable $x$.
In our algorithm, we tacitly assume that we have an evolution equation
for every dependent variable $u_i$.
In cases where there are more dependent variables than equations,
one can always add trivial evolution equations $u_{i,t} = 0.$
From here on, we use the notation $u_{j,nx}$ instead of ${u_j}^{(n)}$ since
it is closer to the notation in our code.
\vfill
\newpage
\noindent
{\bf Step 1$\;\;\;$Determine the weights of variables and parameters}
\vskip 4pt
We define the {\em weight} of a variable as the number of partial
derivatives with respect to $x$ the variable carries, and the
{\em rank} of a term as the total weight in terms of partial
derivatives with respect to $x$ (Kruskal {\em et al.\/}, 1970;
Miura {\em et al.\/}, 1968).
The rank is assumed to be nonnegative and rational.
For the system (\ref{givensystem}), we first try to determine
the weights (scaling properties) of all the variables.
We assume that all terms in a particular equation have the same rank.
We call this property {\it uniformity in rank}.
Different equations in (\ref{givensystem}) can have different ranks.
Having defined the weight of a variable in terms of $x-$derivatives,
we set
$ w(\frac{\partial{}}{\partial{x}}) = 1,\ldots,
w(\frac{\partial^n{}}{\partial{x^n}}) = n, $
where $w\/$ returns the weight of its argument.
For simple systems, in particular those with uniform rank,
only the variables $u_i$ and $\frac{\partial{}}{\partial{t}}$ have weights.
However, to be able to handle more general systems, we allow for
constant parameters to be introduced and also for these
parameters to carry weights.
The trick of introducing parameters with weights allows one to handle
equations without uniform rank.
Let us assume that there are $P$ such parameters in the system,
denoted by $p_i, \; i=1,2,...,P.$
Thus, the extended list of {\em variables} that carry weights is
$\{\frac{\partial{}}{\partial{t}},u_1,u_2,\ldots,u_N,p_1,p_2,\ldots,p_P\}$.
Weights must be nonnegative and rational, except for
$w(\frac{\partial}{\partial t}),$ which may be positive, zero, or negative.
More precisely, the weight of at least one $u_i$ must be positive;
the weights of the parameters $p_i$ must be nonnegative.
We proceed with the following steps:
\begin{enumerate}
\item[(a)] Take the $i^{th}$ equation in (\ref{givensystem}).
Denote the number of terms in the equation by $K_i.$
\item[(b)] Compute the rank $r_{i,k}\/$ of the $k^{th}$ term in
the $i^{th}$ equation as follows:
\newline
\noindent
$
\displaystyle{ r_{i,k} = d(x) + d(t) \; w(\frac{\partial{}}{\partial{t}}) +
\sum_{j=1}^{N} g(u_j) \; w(u_j)+ \sum_{j=1}^{P}
g(p_j) \; w(p_j),\quad k=1,2,\ldots,K_i, }
$
\newline
\noindent
where $g\/$ returns the degree of nonlinearity of its argument, and
$d\/$ returns the number of partial derivatives with respect to its
argument. For evolution equations, $d(t)$ is either zero or one.
\item[(c)] Assuming uniformity in rank in the $i^{th}$ equation,
form the linear system
\[ A_i = \{r_{i,1} = r_{i,2} = \cdots = r_{i,K_i} \}. \]
\item[(d)] Repeat steps (a) through (c) for all of the equations
in (\ref{givensystem}).
\item[(e)] Gather the equations $A_i$ to form the global linear system
$\displaystyle {\cal A} = \bigcup_{i=1}^{N} A_i. $
\item[(f)] Solve $\cal A$ for the $N+P+1$ unknowns $w(u_j)$, $w(p_j)$
and $w(\frac{\partial{}}{\partial{t}})$.
\end{enumerate}
If the solution of the system $\cal A$ still has free weights, consider
two cases:
\begin{enumerate}
\item If two or more weights are undetermined, prompt the user to enter
choices.
\item If only one weight is free, say $w(u_k),$ take the equations obtained
in (f), set their left-hand sides equal to one, and solve them piece by
piece for $w(u_k).$ Include the choice $w(u_k) = 1$.
For all choices for $w(u_k)$ test:
(i) if $w(u_k)$ is negative, increment it until it is positive;
(ii) reject $w(u_k)$ if any other weight is negative.
Continue with the smallest integer value for $w(u_k)$ if present, else
take the smallest fractional value.
This produces at most {\it one} positive value for the free weight, out of
possibly infinitely many choices.
If the algorithm fails, prompt the user to enter a value for $w(u_k).$
\end{enumerate}
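Steps (a) through (f) amount to solving a small linear system over the rationals. The sketch below (our own Python illustration; the package itself is written in {\it Mathematica}) carries this out for the KdV equation $u_t + u u_x + u_{3x} = 0$, whose uniformity-in-rank conditions determine $w(u)$ and $w(\frac{\partial}{\partial t})$:

```python
from fractions import Fraction

# Sketch of Step 1 (our own illustration, not the package code).
# Uniformity in rank for u_t + u u_x + u_{3x} = 0 gives
#   r1 = r2:  w(u) + w(dt) = 2 w(u) + 1
#   r2 = r3:  2 w(u) + 1   = w(u) + 3
# in the unknowns w(u), w(dt).

def solve(A, b):
    """Gaussian elimination over exact rationals; A is assumed
    square and nonsingular."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# columns: [w(u), w(dt)]
A = [[-1, 1],   # r1 - r2 = 0  ->  -w(u) + w(dt) = 1
     [ 1, 0]]   # r2 - r3 = 0  ->   w(u)         = 2
b = [1, 2]
w_u, w_dt = solve(A, b)
print(w_u, w_dt)  # -> 2 3
```

The result $w(u) = 2$, $w(\frac{\partial}{\partial t}) = 3$ reproduces the KdV scaling quoted in Section 2.1.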
\noindent
{\bf Step 2$\;\;\;$Construct the form of the density}
\vskip 4pt
The second step involves the construction of the polynomial
density with a prescribed rank $R.$
All the terms in the density $\rho$ must have that same rank $R$.
Since we may introduce parameters with weights, the fact that
the density will be a sum of monomials of uniform rank does not
necessarily imply that the density must be uniform in rank with respect to
the dependent variables.
Note that the rank $R$ can differ from any of the ranks of the
equations in (\ref{givensystem}). The rank $R$ must be positive and
rational.
Let ${\cal V}=\{v_1,v_2,\ldots,v_Q \}$ be the sorted list of all the
variables with positive weights, including the parameters $p_i,$
but excluding $\frac{\partial{}}{\partial{t}}.$
The variables are ordered according to descending weights: $w(v_1)$ is
the largest weight, $w(v_Q)$ is the smallest.
The following procedure is used to determine the form of the density
of rank $R:$
\begin{enumerate}
\item[(a)] Form all monomials of rank $R$ or less by taking combinations
of the variables in $\cal V.$
Recursively, form sets consisting of ordered pairs. In each pair,
the first component has a specific combination of different powers of the
variables, the second component has the weight of the first component.
\vskip 2pt
\noindent
Set ${\cal B}_{0} = \{(1;0)\}$ and proceed as follows:
\vskip 2pt
\noindent
{\bf For} $q=1$ through $Q$ {\bf do}
\begin{itemize}
\item[] {\bf For} $m=0$ through $M-1$ {\bf do}
\begin{itemize}
\item[] Form $\displaystyle B_{q,m} = \bigcup_{s=0}^{b_{q,m}}
\{ (T_{q,s}; W_{q,s}) \}, \; $
where $M$ is the number of pairs in ${\cal B}_{q-1}, \;$
\vskip 2pt
\noindent
$ T_{q,s} = T_{q-1,m} \; {v_q}^s, \;$
$ W_{q,s} = W_{q-1,m} + s \; w(v_q), \;$
$(T_{q-1,m};W_{q-1,m})$ is the
\vskip 3pt
\noindent
${(m+1)}^{st}$ ordered pair
in ${\cal B}_{q-1}, \;$ and $ b_{q,m} = \lbrack\!\lbrack
\frac{R- W_{q-1,m}}{w(v_q)} \rbrack\!\rbrack $
\vskip 3pt
\noindent
is the maximum allowable power of $v_q.$
\end{itemize}
\item[] Set $\displaystyle {\cal B}_{q} = \bigcup_{m=0}^{M-1} B_{q,m}.$
\end{itemize}
\item[(b)] Set ${\cal G} = {\cal B}_{Q}.$
Note that ${\cal G}$ has all possible combinations of powers of the
variables that produce rank $R$ or less.
\vskip 1pt
\noindent
\item[(c)] Introduce partial derivatives with respect to $x$.
For each pair $( T_{Q,s}; W_{Q,s})$ in ${\cal G}$, apply
$\displaystyle \frac{\partial^{\ell}{}}{\partial{x^{\ell}}}$
to the term $T_{Q,s}$, provided $\displaystyle \ell=R- W_{Q,s} $ is an
integer.
This introduces just enough partial derivatives with respect to $x$, so that
all the pairs retain weight $R.$
Gather in a set ${\cal H}$ all the terms that result from computing the
various $\displaystyle
\frac{\partial^{\ell}{(T_{Q,s})}}{{\partial{x^{\ell}}}}.$
\item[(d)] Remove those terms from ${\cal H}$ that can be written as a total
derivative with respect to $x$, or as a total $x-$derivative up to terms
already kept in the set. Call the resulting set ${\cal I},$ which consists of
the {\em building blocks\/} of the density $\rho$ with desired rank $R.$
\vskip 1pt
\noindent
\item[(e)] If ${\cal I}$ has $I$ elements, then their linear combination
will produce the polynomial density of rank $R.$ Therefore,
\[ \displaystyle{
\rho = \sum_{i=1}^{I} c_i \; {\cal I}(i),
}
\]
where ${\cal I}(i)$ denotes the $i^{th}$ element in ${\cal I},$ and
$c_i$ are numerical coefficients, still to be determined.
\end{enumerate}
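The recursion in steps (a) and (b) can be paraphrased as follows (our own Python sketch, not the package code); for illustration we use the Hirota-Satsuma scaling $u_1 \sim u_2 \sim \frac{\partial^2}{\partial x^2}$, treated in a later example, with rank $R = 4$:

```python
# Sketch of Step 2(a)-(b) (our own paraphrase of the B_{q,m}
# construction in the text): build all ordered pairs (monomial; weight)
# of rank <= R from variables with given weights.

def build(weights, R):
    """weights: list of (name, weight) pairs, sorted by descending
    weight."""
    pairs = [({}, 0)]                      # B_0 = {(1; 0)}
    for name, w in weights:
        nxt = []
        for mono, wt in pairs:
            s = 0                          # power of the current variable
            while wt + s * w <= R:         # s runs up to [[(R - W)/w(v_q)]]
                m = dict(mono)
                if s:
                    m[name] = s
                nxt.append((m, wt + s * w))
                s += 1
        pairs = nxt
    return pairs

# Hirota-Satsuma scaling: w(u1) = w(u2) = 2, density of rank R = 4.
G = build([("u2", 2), ("u1", 2)], 4)
for mono, wt in G:
    print(mono, wt)
assert len(G) == 6   # 1, u1, u1^2, u2, u1 u2, u2^2
```

Applying $\ell = R - W$ derivatives to each first component (step (c)) and discarding total derivatives (step (d)) would then yield the building blocks ${u_1}^2$, $u_1 u_2$, ${u_2}^2$ of the rank-4 density.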
\vfill
\newpage
\noindent
{\bf Step 3$\;\;\;$Determine the unknown coefficients in the density}
\vskip 6pt
Recall that a conservation law is of the form
$\displaystyle{ D_t \rho + D_x J = 0, }$ or
$ \displaystyle{ D_t \rho = -D_x J, }$
which means that $D_t \rho$ must be the negative of the
$x-$derivative of a functional $J.$
After computation of $D_t \rho,$ remove all ${(u_{i,t})}^{(j)},\;
j=0,1,2,...$ from the expression, using the evolution equations in
(\ref{givensystem}).
The resulting expression for $D_t \rho$ must be a total $x-$derivative
of some expression.
To verify this we use Theorem \ref{thm5} and apply the Euler operator.
We require that the resulting Euler-Lagrange equations vanish identically
by the appropriate choice of the coefficients $c_i.$
This leads to a {\it linear} system for the ${c_i}.$
The system must be analyzed and solved for the unknown $c_i.$
In general, the procedure is as follows:
\begin{enumerate}
\item[(a)] Compute $D_t \rho$ and replace all ${(u_{i,t})}^{(j)},
i=1,2,\ldots,N$ and $j=0,1,2,\ldots$ using the evolution equations in
(\ref{givensystem}).
\vskip 4pt
\noindent
\item[(b)] The resulting expression, called $E$, must equal $D_x (-J)$
for some functional $J.$
\noindent
Apply to $E$ the Euler operator ${\cal L}$ from Theorem \ref{thm5}.
If $E$ is a total $x-$derivative, no terms will be left,
i.e. $ {\cal L\/}_{\vec{u}}(E) \equiv {\vec 0}$.
If terms remain, set them equal to zero and form the linear
system for the ${c_i}$, denoted by ${\cal S}.$
\vskip 4pt
\noindent
\item[(c)] Depending on whether or not there are parameters in
${\cal S},$ two cases occur:
\begin{itemize}
\item[(i)] If the only unknowns in ${\cal S}$ are $c_i$, solve
${\cal S}$ for the $c_i$.
Substitute the (non-empty) solution into the expression of $\rho$ to obtain
its final form.
\vskip 4pt
\noindent
\item[(ii)] If in addition to the coefficients $c_i$ there are parameters
$p_i$ in ${\cal S},$ then determine the conditions on these parameters,
so that a density in the given rank exists for at least some $c_i$ nonzero.
These {\em compatibility conditions} assure that the system ${\cal S}$
has non-trivial solutions.
Obviously, all $c_i$ equal to zero would give the trivial (zero) density.
Solving the compatibility conditions may lead to different densities of
the same rank, corresponding to different choices of the parameters.
Thus, generating the compatibility conditions enables one to filter out all
the cases for which there exists a nontrivial density of given rank.
Let ${\cal C}$ $=$ $\{ c_1,c_2,\ldots,c_I \}$ be the set of all the
coefficients that appear in the density.
In order to determine all possible compatibility conditions,
proceed as follows:
\vskip 4pt
\noindent
Under the assumption that no $p_i$ in $\cal S$ is {\it zero\/},
analyze the system $\cal S.$ First determine the coefficients $c_i$
that always must be zero. Exclude these $c_i$ from $\cal C$
and set $i = i'$, where $i'$ is the smallest index of the $c_j$ that
remain in $\cal C$.
\vskip 4pt
\noindent
{\bf While} ${\cal C} \neq \{ \}$ {\bf do}:
\begin{itemize}
\item[] For the building block ${\cal I}(i)$ with coefficient $c_i$
to appear in $\rho$, one needs $c_i \not= 0 .$ Therefore, set $c_i = 1$ and
eliminate all the other $c_j$ from ${\cal S}.$
This gives compatibility conditions consistent with the presence of
the term $c_i {\cal I}(i)$ in $\rho.$
\vskip 3pt
\noindent
{\bf If} $\cal S$ becomes inconsistent, or compatibility conditions
require some of the parameters to be zero,
\vskip 5pt
\noindent
{\bf then}:
\begin{itemize}
\item[] $c_i$ must be zero.
Hence, set
$ {\cal C} = {\cal C} \backslash \{c_i\},$ and $i=i',$ where $i'$ is the
smallest index of the ${c_j}$ that remain in ${\cal C},$
\end{itemize}
{\bf else}:
\begin{itemize}
\item[] Analyze the compatibility conditions and, for each resulting branch,
solve the system ${\cal S}$ for $c_j$, and substitute the solution into
the expression of $\rho$ to obtain its final form.
Then, collect those ${c_j}$ which are zero for {\it all} of the branches
into a set ${\cal Z}.$
Since the $c_i$ in ${\cal Z}$ might not have occurred in any of the
densities yet, set
$ {\cal C} = {\cal C} \cap {\cal Z},$ and $i=i',$ where
$i'$ is the smallest index of the $c_j$ that are still in ${\cal C}.$
\end{itemize}
\end{itemize}
\end{itemize}
\item[(d)] Compute $ J = - \int E \; dx, $ via integration by parts.
\end{enumerate}
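As a toy illustration of step (c)(ii) (our own sketch, with the branch analysis hard-coded rather than derived automatically), consider the two-equation system ${\cal S} = \{ (1+\alpha) c_2 = 0, \; 2 c_1 + c_3 = 0 \}$ that arises for the Hirota-Satsuma density of rank 4 treated below:

```python
from fractions import Fraction

# Toy sketch (our own illustration): branch analysis for the system
#   S = { (1 + alpha) c2 = 0,  2 c1 + c3 = 0 }.

def solve_S(alpha, fixed):
    """Solve S with some coefficients pinned; return a solution dict,
    or None if the pinning is inconsistent with S."""
    c = dict(fixed)
    if alpha != -1:              # (1 + alpha) c2 = 0 forces c2 = 0
        if c.get("c2", 0) != 0:
            return None
        c["c2"] = 0
    if "c1" in c:                # 2 c1 + c3 = 0
        c["c3"] = -2 * c["c1"]
    elif "c3" in c:
        c["c1"] = Fraction(-1, 2) * c["c3"]
    return c

generic = solve_S(alpha=Fraction(1, 2), fixed={"c1": 1})
print(generic)   # c2 forced to 0, c3 = -2: rho = u1^2 - 2 u2^2

branch = solve_S(alpha=Fraction(1, 2), fixed={"c2": 1})
print(branch)    # None: keeping c2 requires the condition alpha = -1

special = solve_S(alpha=-1, fixed={"c2": 1, "c3": 0})
print(special)   # with alpha = -1, the density u1 u2 survives
```

The generic branch exists for every $\alpha$, while pinning $c_2 = 1$ succeeds only under the compatibility condition $\alpha = -1$, mirroring the two densities found in the second example below.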
The three examples below illustrate the algorithm. In the first example,
we show Steps 1 and 2 in detail. The second and third examples illustrate
details of Step 3. Examples 4.1 and 4.2 illustrate what happens if
there are free weights.
\vskip 3pt
\noindent
\begin{example}
{\rm
The wave equation,
\begin{equation} \label{Bouseq}
u_{tt} - u_{2x} + 3 u u_{2x} + 3 {u_x}^2 + \alpha u_{4x} = 0,
\end{equation}
was proposed by Boussinesq to describe surface water waves whose horizontal
scale is much larger than the depth of the water
(Ablowitz and Clarkson, 1991; Hickernell, 1983).
Conservation laws play a key role in the study of (\ref{Bouseq})
for they can be used to prove that solutions are bounded for certain sets
of initial conditions
(Hickernell, 1983),
or, conversely, to prove that solutions fail to exist after a finite time.
For computing conservation laws, we rewrite (\ref{Bouseq}) as a system of
first-order equations,
\begin{eqnarray} \label{Boussys1}
u_t + v_x &=& 0, \nonumber \\
v_t + u_x - 3 u u_x - \alpha u_{3x} &=& 0,
\end{eqnarray}
where $v$ is an auxiliary dependent variable.
It is easy to verify that the terms $u_x$ and $\alpha u_{3x}$ in the second
equation do not allow for uniformity in rank.
To circumvent the problem we use a trick: we introduce an
auxiliary parameter $\beta$ with
(unknown) weight, and replace (\ref{Boussys1}) by
\begin{eqnarray} \label{Boussys2}
u_t + v_x &=& 0,\nonumber \\
v_t + \beta u_x - 3 u u_x - \alpha u_{3x} &=& 0,
\end{eqnarray}
or, in our notation
\begin{eqnarray} \label{prgboussys1}
u_{1,t} + u_{2,x} &=& 0, \nonumber \\
u_{2,t} + \beta \; u_{1,x} - 3 u_1 u_{1,x} - \alpha \; u_{1,3x} &=& 0.
\end{eqnarray}
Using the procedure in Step 1, we obtain the weights by first computing
\[
\begin{array}{rclcrcl}
r_{1,1} &=& 1 \; w(\frac{\partial{}}{\partial{t}}) + 1
\; w(u_1),& \quad\quad &
r_{1,2} &=& 1+ 1 \; w(u_2), \\[.6mm]
r_{2,1} &=& 1 \; w(\frac{\partial{}}{\partial{t}}) + 1
\; w(u_2),& \quad\quad &
r_{2,2} &=& 1+ 1 \; w(u_1) + 1 \; w(\beta), \\[.6mm]
r_{2,3} &=& 1+ 2 w(u_1),& \quad\quad &
r_{2,4} &=& 3+ 1 \; w(u_1),
\end{array}
\]
then forming the systems
\[
\begin{array}{rclcrclcrcl}
A_1 &=& \{ r_{1,1} = r_{1,2} \},&\!\!\!\!\!&
A_2 &=& \{ r_{2,1} = r_{2,2} = r_{2,3}=r_{2,4} \}, &\!\!\!\!\!\!&
\; {\rm and} \;\;\; {\cal A} &=& A_1 \cup A_2,
\end{array}
\]
and, finally, solving
$\cal A$ for $w(u_1),w(u_2),w(\frac{\partial{}}{\partial{t}})$ and
$w(\beta).$ This yields
\[ w(u_1) = 2, \; w(u_2) = 3, \; w(\beta) = 2, \;\; {\rm and} \;
w(\frac{\partial{}}{\partial{t}}) = 2. \]
Hence, the scaling properties of (\ref{prgboussys1}) are such that
\[
u_1 \sim \beta \sim {\partial^2 \over \partial x^2}, \quad
u_2 \sim {\partial^3 \over \partial x^3}, \quad
\frac{\partial{}}{\partial{t}} \sim \frac{\partial^2{}}{\partial{x^2}},
\]
which expresses that (\ref{prgboussys1}) is invariant under the
scaling symmetry
\[
(x, t, u_1, u_2, \beta)
\rightarrow
(\lambda x, \lambda^2 t, \lambda^{-2} u_1,
\lambda^{-3} u_2, \lambda^{-2} \beta).
\]
Let us construct the form of the density with rank $R=6$.
Here, ${\cal V} = \{u_2, u_1, \beta \},$ hence, $v_1 = u_2, v_2 = u_1 $ and
$v_3 = \beta$ and, obviously, $Q = 3.$
We follow the procedure outlined in
Step 2:
\begin{enumerate}
\item[(a)] For $\boldmath q=1,m=0$:
\vskip 4pt
\noindent
$ b_{1,0} = \lbrack\!\lbrack \frac{6}{3} \rbrack\!\rbrack = 2. $
Thus, with
$ T_{1,s} = {u_2}^s, \; {\rm and} \; W_{1,s} = 3 s, \; $ where $s=0,1,2,$
we obtain
\[ {\cal B}_{1} = B_{1,0} = \{ (1;0),(u_2;3),({u_2}^2;6) \} . \]
\vskip 6pt
\noindent
For $\boldmath q=2,m=0$:
\vskip 4pt
\noindent
$ b_{2,0} = \lbrack\!\lbrack \frac{6-0}{2} \rbrack\!\rbrack =3.$
So, with
$ T_{2,s} = {u_1}^s, \; {\rm and} \; W_{2,s} = 2 s, $ with $ s = 0,1,2,3,$
we obtain
\[ B_{2,0} = \{ (1;0),(u_1;2),({u_1}^2;4),({u_1}^3;6) \}.\]
\vskip 6pt
\noindent
For $\boldmath q=2,m=1$:
\vskip 4pt
\noindent
we obtain
\[ B_{2,1} = \{ (u_2;3), (u_1 u_2;5) \}, \]
since
$ b_{2,1} = \lbrack\!\lbrack \frac{6-3}{2} \rbrack\!\rbrack =1, \;$
$ T_{2,s} =
u_2 \; {u_1}^s, \; {\rm and} \; W_{2,s} = 3 + 2 s, \;\;{\rm and}\;\; s=0,1. $
\vskip 10pt
\noindent
For $\boldmath q=2,m=2$:
\vskip 3pt
\noindent
$ b_{2,2} = \lbrack\!\lbrack \frac{6-6}{2} \rbrack\!\rbrack =0.$
Therefore,
$\displaystyle B_{2,2} = \{ ({u_2}^2;6) \}.$
Hence,
\[ {\cal B}_{2} =
\{ (1;0),(u_1;2),({u_1}^2;4),({u_1}^3;6),(u_2;3), (u_1 u_2;5),
({u_2}^2;6) \}. \]
For $\boldmath q=3$:
\vskip 3pt
\noindent
we introduce possible powers of $\beta.$ So,
\[
\begin{array}{lclclcl}
B_{3,0} &=& \{ (1;0),(\beta;2),(\beta^2;4),(\beta^3;6) \},&\quad&
B_{3,4} &=& \{ (u_2;3),(\beta u_2;5) \},\\
B_{3,1} &=& \{ (u_1;2),(\beta u_1;4),(\beta^2 u_1;6) \},&\quad&
B_{3,5} &=& \{ (u_1 u_2;5) \}, \\
B_{3,2} &=& \{ ({u_1}^2;4),(\beta {u_1}^2;6) \},&\quad&
B_{3,6} &=& \{ ({u_2}^2;6) \}, \\
B_{3,3} &=& \{ ({u_1}^3;6) \},
\end{array}
\]
and
\begin{eqnarray*}
{\cal B}_{3} &=&
\{ (1;0), (\beta;2), (\beta^2;4), (\beta^3;6), (u_1;2), (\beta u_1;4),
(\beta^2 u_1;6), ({u_1}^2;4),\\
& &\; (\beta {u_1}^2;6), ({u_1}^3;6), (u_2;3), (\beta u_2;5),
(u_1 u_2;5), ({u_2}^2;6) \}.
\end{eqnarray*}
\item[(b)] Set ${\cal G} = {\cal B}_{3}$.
\item[(c)] Next, we apply derivatives to the first components of the
pairs in ${\cal G}$.
\vskip 5pt
\noindent
Computation of $\ell$ for each pair of ${\cal G}$ gives
\[
\ell = 6,4,2,0,4,2,0,2,0,0,3,1,1,\; {\rm and} \; 0.
\]
Note that in this case all values for $\ell$ are integers.
Gathering the terms that come from applying the indicated number,
$\ell,$ of partial derivatives with respect to $x$, gives
\[
\!{\cal H}\!=\!\{0, \beta^3, u_{1,4x},
\beta u_{1,2x}, \beta^2 u_1, {u^2_{1,x}}, u_1 u_{1,2x},
\beta u_1^2, u_1^3, u_{2,3x}, \beta u_{2,x}, u_1 u_{2,x},
u_{1,x} u_2, u_2^2 \}.
\]
\item[(d)] Removing from ${\cal H}$ the constant terms, and the terms
that can be written as an $x-$derivative, or as an $x-$derivative up to
terms retained earlier in the set ${\cal I}$, yields
\[ {\cal I} = \{ \beta^2 u_1, \beta {u_1}^2, {u_1}^3, {u_2}^2, u_{1,x} {u_2},
{u_{1,x}}^2 \}. \]
\item[(e)] Combining these building blocks, the form of the density with
rank 6 follows:
\[ \rho = c_1 \; \beta^2 u_1 + c_2 \; \beta {u_1}^2 + c_3 \; {u_1}^3 +
c_4 \; {u_2}^2 +c_5 \; u_{1,x} {u_2} + c_6 \; {u_{1,x}}^2 . \]
\end{enumerate}
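As a quick cross-check of the enumeration above (our own sketch, not part of {\bf condens.m}), one can count the monomials $ {u_2}^a \, {u_1}^b \, \beta^c $ of weight at most 6 directly:

```python
# Count all monomials u2^a u1^b beta^c of weight at most 6, with
# w(u2) = 3 and w(u1) = w(beta) = 2; this reproduces the 14 pairs
# of cal B_3 above.
R = 6
pairs = [((a, b, c), 3 * a + 2 * b + 2 * c)
         for a in range(R // 3 + 1)
         for b in range(R // 2 + 1)
         for c in range(R // 2 + 1)
         if 3 * a + 2 * b + 2 * c <= R]
print(len(pairs))  # -> 14
```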
\noindent
Using Step 3 of the algorithm (illustrated in detail in the next example)
we compute the density of rank 6 in the original variables:
$$ \rho = \beta u^2 - u^3 + v^2 + \alpha {u_x}^2. $$
\vskip 6pt
\noindent
Analogously, we computed densities of rank $R \le 6.$
The first four densities of (\ref{Boussys2}) are:
\[
\begin{array}{rclcrcl}
\rho_1 &=& u,& \quad\quad &
\rho_2 &=& v,\\
\rho_3 &=& u v,& \quad\quad &
\rho_4 &=& \beta u^2 - u^3 + v^2 + \alpha {u_x}^2.
\end{array}
\]
After substitution of $\beta = 1$ into the densities above, one gets
the densities of
(\ref{Boussys1}) even though initially this system was not uniform in rank.
This trick, which involves the introduction of one or more
extra parameters with weights, can always be attempted if any equation
in (\ref{givensystem}) lacks uniformity in rank.
}
\end{example}
\vskip 5pt
\noindent
\begin{example}
{\rm
Hirota and Satsuma (1981) proposed a coupled system of KdV equations,
\begin{eqnarray}\label{HirSatpar}
u_t - 6 \alpha u u_x + 6 v v_x - \alpha u_{3x} &=& 0, \nonumber \\
v_t + 3 u v_x + v_{3x} &=& 0,
\end{eqnarray}
where $\alpha$ is a nonzero parameter.
System (\ref{HirSatpar}) describes interactions of two long waves with
different dispersion relations. It is known to be completely integrable for
$\alpha=\frac{1}{2}.$
In our notation, (\ref{HirSatpar}) can be rewritten as
\begin{eqnarray}\label{prghirsatpar}
u_{1,t} - 6 \alpha {u_1} u_{1,x} + 6 u_2 u_{2,x} -
\alpha u_{1,3x} &=& 0,
\nonumber \\
u_{2,t} + 3 u_1 u_{2,x} + u_{2,3x} &=& 0,
\end{eqnarray}
which has scaling properties,
$ u_1 \sim u_2 \sim \frac{\partial^2{}}{\partial{x^2}}, \;
\frac{\partial{}}{\partial{t}} \sim \frac{\partial^3{}}{\partial{x^3}}, $
and a density of rank 4:
\[ \rho = c_1 \; {u_1}^2 + c_2 \; u_1 u_2 + c_3 \; {u_2}^2.\]
Thus far we used Steps 1 and 2 of the algorithm.
We now illustrate Step 3, which fixes the undetermined
coefficients $c_i.$
\begin{enumerate}
\item[(a)] Compute $D_t \rho$ and replace all the mixed derivatives
${(u_{i,t})}^{(j)}$ by using (\ref{prghirsatpar}). Then,
\begin{eqnarray*}
\!\!\!\!\!\!\!\!E \!\!\!\!&=&\!\!\!\!
2 c_1 u_1 \!\left( 6 \alpha u_1 u_{1,x} -
6 u_2 u_{2,x} + \alpha u_{1,3x} \right) \! +
c_2 u_2 \! \left( 6 \alpha u_1 u_{1,x} -
6 u_2 u_{2,x} + \alpha u_{1,3x} \right) \\
& & - c_2 u_1 \left( 3 u_1 u_{2,x} + u_{2,3x} \right)-
2 c_3 u_2 \left( 3 u_1 u_{2,x} + u_{2,3x} \right) .
\end{eqnarray*}
\item[(b)] Apply the Euler
operator to get the linear system for the coefficients $c_1,c_2$ and $c_3$:
\[ {\cal S} = \{ (1+\alpha) c_2 = 0,\; 2 \; c_1+c_3 = 0 \}. \]
\vskip 2pt
\noindent
\item[(c)] Obviously, ${\cal C} = \{ c_1,c_2,c_3 \}$ and $\cal S$
has one parameter, $\alpha.$
Thus, we search for compatibility conditions:
\begin{itemize}
\item Set $\boldmath c_1 = 1, $ which gives
\[ \{ c_1 = 1, \; c_2 = 0, \; c_3 = -2 \} \]
as one of the solutions without any constraint on the parameter $\alpha$.
Since only $c_2 = 0,$ one has ${\cal Z} = \{c_2\}$ and
$ {\cal C} = {\cal C} \cap {\cal Z} = \{ c_2 \},$ with $i' = 2$.
\item Set $\boldmath c_2 = 1.$ This leads to the compatibility
condition $\alpha = -1$ and the solution
\[ \{ c_1 = - \frac{1}{2}\; c_3, \;\; c_2 = 1 \}. \]
Since ${\cal Z} = \{ \} $ and, consequently,
${\cal C} = {\cal C} \cap {\cal Z} = \{ \},$ the procedure ends.
\end{itemize}
\end{enumerate}
Therefore, one gets {\it two densities} of rank 4, one without
any constraint on $\alpha,$ one with a constraint.
In summary:
\begin{itemize}
\item $\rho = {u_1}^2 - 2 \; {u_2}^2, \;$ and
\vskip 4pt
\noindent
\item $\rho = - \frac{1}{2} c_3 {u_1}^2 + u_1 u_2 + c_3 {u_2}^2, \;$ with
compatibility condition $\alpha = -1.$
\end{itemize}
\vskip 2pt
\noindent
A search for densities for (\ref{HirSatpar}) of rank $R \le 8$ resulted in:
\vskip 6pt
\noindent
{\bf Rank 2:} There is no condition on $\alpha.$
One always has the density $ \rho = u. $
\vskip 10pt
\noindent
{\bf Rank 4:} At this level, two branches emerge:
\begin{enumerate}
\item Without condition on $\alpha,$ one obtains $ \rho = u^2 - 2 v^2.$
\item For $\alpha=-1$ one has
$\rho = u v - \frac{c}{2}\; (u^2 - 2 v^2),$ where $c$ is free.
\vskip 2pt
\noindent
Hence, for $\alpha=-1,$ there is a second independent conserved
density: $\rho = u v.$
\end{enumerate}
{\bf Rank 6:} There is no condition on $\alpha.$ One obtains
\[ \rho= (1 + \alpha ) u^3 - 3 u v^2 - \frac{1}{2}(1+\alpha) {u_x}^2
+ 3 {v_x}^2. \]
{\bf Rank 8:} The system has conserved density
\[ \rho=u^4 - \frac{12}{5} u^2 v^2 + \frac{12}{5} v^4
- 2 u {u_x}^2 + \frac{1}{5} {u_{2x}}^2 + \frac{8}{5} v u_{x} v_{x}
-\frac{24}{5} u {v_x}^2 + \frac{8}{5} {v_{2x}}^2, \]
provided that $\alpha = \frac{1}{2}.$\\
Therefore, $\alpha = \frac{1}{2}$ appears again as one tries to compute the
density of rank 8.
\newline\noindent
This value of $\alpha$ leads to the only integrable
case in the parameterized system (\ref{HirSatpar}).
\vskip 4pt
\noindent
For $\alpha=\frac{1}{2},$ we also computed the density of {\bf Rank 10}:
\begin{eqnarray*}
\rho &\!=\!& \!u^5 -\frac{20}{7} u^3 v^2 +\frac{20}{7} u v^4
- 5 u^2 {u_x}^2
+ \frac{10}{7} v^2 {u_x}^2 + u {u_{2x}}^2
- \frac{1}{14} {u_{3x}}^2 + \frac{40}{7} u v u_{x} v_{x} \\
&& - \frac{20}{7} u^2 {v_x}^2
-\frac{80}{7} v^2 {v_x}^2 - \frac{24}{7} u_{2x} {v_{x}}^2
- \frac{4}{7} v u_{2x} v_{2x} + \!\frac{40}{7} u {v_{2x}}^2 -
\frac{8}{7} {v_{3x}}^2.
\end{eqnarray*}
}
\end{example}
\vskip 1pt
\noindent
\begin{example}
{\rm
Ito (1982) proposed the system of coupled nonlinear wave equations
(Ablowitz and Clarkson, 1991):
\begin{eqnarray}\label{Itosys}
u_t + 6 u u_x + 2 v v_x + u_{3x} &=& 0, \nonumber\\
v_t + 2 v u_x + 2 u v_x &=& 0,
\end{eqnarray}
which differs from the Hirota-Satsuma system in the interaction and
dispersion terms for $v.$
In the absence of $v,$ system (\ref{Itosys}) reduces to the KdV equation.
It is a Hamiltonian system with infinitely many conservation laws.
The scaling properties of the system (\ref{Itosys}) are
\[ u \sim v \sim \frac{\partial^2{}}{\partial{x^2}},\quad
\frac{\partial{}}{\partial{t}} \sim \frac{\partial^3{}}{\partial{x^3}},
\]
and the first five densities are:
\[
\begin{array}{rclcrcl}
\!\!\!\!\!\rho_1 &=& c_1 u + c_2 v, &\;\quad & \!\rho_2 &=& u^2 + v^2, \\
\\
\!\!\!\!\!\rho_3 &=& u^3 + u v^2 -\frac{1}{2} {u_x}^2, & \;\quad &
\!\rho_4 &=& u^4 +\!\frac{6}{5} u^2 v^2 +\!\frac{1}{5} v^4 -\!2 u {u_x}^2
+ \!\frac{1}{5} {u_{2x}}^2 - \!\frac{4}{5} v u_{x} v_{x},
\end{array}
\]
\begin{eqnarray*}
\rho_5 &=& \!u^5 + \!\frac{10}{7} u^3 v^2 + \!\frac{3}{7} u v^4
- \!5 u^2 {u_x}^2 - \!\frac{5}{7} v^2 {u_x}^2 + \!u {u_{2x}}^2
- \!\frac{1}{14} {u_{3x}}^2 - \!\frac{20}{7} u v u_x v_x
- \!\frac{2}{7} v^2 {v_x}^2 \\
& & + \frac{2}{7} u_{2x} {v_{x}}^2 + \frac{2}{7} v u_{2x} v_{2x}.
\end{eqnarray*}
\noindent
To illustrate Step 3 of the algorithm in more detail,
we consider,
\begin{eqnarray}\label{Itosyspar}
u_t + 6 u u_x + 2 v v_x + u_{3x} &=& 0, \nonumber\\
v_t + \alpha ( u_x v + u v_x ) &=& 0,
\end{eqnarray}
which is a parameterized version of (\ref{Itosys}).
The form of the density of rank 6 is
\[ \rho = c_1 \; {u}^3 + c_2 \; {u}^2 v + c_3 \; u {v}^2 +
c_4 \; {v}^3 + c_5 \; {u_x}^2 + c_6 \; u_x v_x + c_7 \; {v_x}^2.
\]
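This candidate form can be generated by listing all monomials of the prescribed rank and discarding those that are equivalent modulo total $x$-derivatives. The Python sketch below is a simplified illustration (not the actual construction in {\bf condens.m}): the two integration-by-parts rules coded here happen to suffice at rank 6.

```python
from itertools import combinations_with_replacement

# Factors are pairs (field, x-derivative order); u and v both scale
# like d^2/dx^2, so a factor of order d has weight 2 + d.
factors = [(f, d) for f in ("u", "v") for d in range(5)]
weight = lambda f: 2 + f[1]

def reduce_by_parts(mono):
    """Normalize a monomial modulo total x-derivatives.

    A single differentiated factor is itself a total derivative (drop it);
    in a two-factor monomial a_{mx} b_{nx} with n >= m + 2 one may
    integrate by parts until the orders differ by at most one.  (These
    two rules suffice up to rank 6; the full algorithm is more general.)
    """
    if len(mono) == 1:
        return None if mono[0][1] > 0 else mono
    if len(mono) == 2:
        (a, m), (b, n) = sorted(mono, key=lambda f: f[1])
        while n - m >= 2:
            m, n = m + 1, n - 1
        return tuple(sorted(((a, m), (b, n))))
    return tuple(sorted(mono))

rank = 6
basis = set()
for r in range(1, rank // 2 + 1):          # each factor has weight >= 2
    for combo in combinations_with_replacement(factors, r):
        if sum(weight(f) for f in combo) == rank:
            reduced = reduce_by_parts(combo)
            if reduced is not None:
                basis.add(reduced)

for mono in sorted(basis):
    print(mono)
```

The seven surviving monomials are exactly $u^3$, $u^2 v$, $u v^2$, $v^3$, ${u_x}^2$, $u_x v_x$ and ${v_x}^2$; e.g. $u\,u_{2x}$ is absorbed into $-{u_x}^2$ by parts, and $u_{4x}$ alone is a total derivative.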
\noindent
We continue with Part (b) of Step 3. After applying the Euler operator,
the linear system $\cal S$ for the coefficients $c_1$ through $c_7$ is
\begin{eqnarray*}
{\cal S} &=& \{ (6-\alpha) \; c_2 = 0,\; c_1 - c_3 = 0, \;
2 \; c_2 - 3 \; \alpha \; c_4 = 0, \;
c_1 + 2 \; c_5 = 0, \; c_3 + 2 \; c_5 = 0, \\
& & c_6 = 0, \; c_3 + 2 \; c_5 - \alpha \; c_7 = 0, \;
\alpha \; c_7 = 0, \; 2 \; c_2 + \alpha \; c_6 = 0, \;
2 \; c_2 - 6 \; c_6 + \alpha \; c_6 = 0 \}.
\end{eqnarray*}
Obviously, ${\cal C} = \bigcup_{i=1}^{7} \{c_i\}$ and $\cal S$ has one
parameter, $\alpha.$ Thus, we start the search for compatibility conditions.
From the sixth and eighth equations in $\cal S$ we conclude
that $ c_6 = c_7 = 0 $. Therefore, replace ${\cal C} $ by
$ {\cal C} \backslash \{ c_6, \; c_7 \} = \bigcup_{i=1}^{5} \{c_i\} $.
\begin{itemize}
\item Set $c_1 = 1$ and solve $\cal S$, which gives
\[ \{ c_1 = 1, \; c_2 = 0, \; c_3 = 1, \; c_4 = 0, \; c_5 = - \frac{1}{2}, \;
c_6 = 0, \; c_7 = 0 \}, \]
without any constraint on the parameter $\alpha$.
Now, ${\cal Z} = \{c_2, \; c_4, \; c_6, \; c_7 \}$ and
$ {\cal C} $ is replaced by
$ {\cal C} \cap {\cal Z} = \{ c_2, \; c_4 \},$ with $i' = 2$.
\item Set $c_2 = 1.$ This leads to an inconsistent
system. Thus, ${\cal C} = {\cal C} \backslash \{ c_2 \} = \{ c_4 \},$ with
$i' = 4$.
\item Set $c_4 = 1.$ This implies that $\alpha = 0$.
Hence, ${\cal C} = {\cal C} \backslash \{ c_4 \} = \{ \},$
and the procedure ends.
\end{itemize}
In summary:
$
\displaystyle{ \rho = {u}^3 + u \; {v}^2 - \frac{1}{2} {u_x}^2 },
$
without any constraint on $\alpha$.
This is the same density as for (\ref{Itosys}).
Computation of the density of rank 8 for (\ref{Itosyspar}) leads
to the condition $\alpha = 2 $ and density $\rho_4$ listed above.
For $\alpha = 2,$ system (\ref{Itosyspar}) reduces to the
integrable case (\ref{Itosys}).
}
\end{example}
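The branching procedure of the previous example can be mimicked with exact rational linear algebra: normalize one undetermined coefficient to 1, solve $\cal S$, and record whether the augmented system is consistent. The following Python sketch (our own, not the implementation inside {\bf condens.m}) encodes $\cal S$ for (\ref{Itosyspar}) and probes the three branches at sample values of $\alpha$:

```python
from fractions import Fraction

def gauss(A, n):
    """Row-reduce [coeffs | rhs]; return a particular solution
    (free unknowns set to 0), or None if the system is inconsistent."""
    A = [row[:] for row in A]
    piv, r = [], 0
    for col in range(n):
        p = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][col] for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                f = A[i][col]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        piv.append(col)
        r += 1
    for i in range(r, len(A)):
        if A[i][n] != 0:
            return None            # a row 0 = nonzero: incompatible
    sol = [Fraction(0)] * n
    for i, c in enumerate(piv):
        sol[c] = A[i][n]
    return sol

def branch(alpha, pivot_c):
    """Solve S with c[pivot_c] normalized to 1 (pivot_c is 1-based)."""
    a = Fraction(alpha)
    S = [[0, 6 - a, 0,  0,     0, 0,     0,  0],   # (6-alpha) c2 = 0
         [1, 0,    -1,  0,     0, 0,     0,  0],   # c1 - c3 = 0
         [0, 2,     0, -3 * a, 0, 0,     0,  0],   # 2 c2 - 3 alpha c4 = 0
         [1, 0,     0,  0,     2, 0,     0,  0],   # c1 + 2 c5 = 0
         [0, 0,     1,  0,     2, 0,     0,  0],   # c3 + 2 c5 = 0
         [0, 0,     0,  0,     0, 1,     0,  0],   # c6 = 0
         [0, 0,     1,  0,     2, 0,    -a,  0],   # c3 + 2 c5 - alpha c7 = 0
         [0, 0,     0,  0,     0, 0,     a,  0],   # alpha c7 = 0
         [0, 2,     0,  0,     0, a,     0,  0],   # 2 c2 + alpha c6 = 0
         [0, 2,     0,  0,     0, a - 6, 0,  0]]   # 2 c2 + (alpha-6) c6 = 0
    S = [[Fraction(x) for x in row] for row in S]
    norm = [Fraction(0)] * 8
    norm[pivot_c - 1] = Fraction(1)
    norm[7] = Fraction(1)              # the normalization c_pivot = 1
    return gauss(S + [norm], 7)

print(branch(5, 1))   # consistent for generic alpha: rho = u^3 + u v^2 - u_x^2/2
print(branch(5, 2))   # inconsistent
print(branch(5, 4))   # inconsistent for generic alpha ...
print(branch(0, 4))   # ... but consistent once alpha = 0
```

For generic $\alpha$ (here $\alpha = 5$), the branch $c_1 = 1$ yields the coefficients of the rank-6 density with no compatibility condition, the branch $c_2 = 1$ is inconsistent, and the branch $c_4 = 1$ becomes consistent only when $\alpha = 0$, in agreement with the example.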
\section{Applications}
For systems with parameters, the algorithm described in Section 2 can be
used to find the necessary conditions on the parameters so that the systems
might have densities of fixed rank.
If for certain parameters a system has a large number of conserved
densities, it is a candidate for complete integrability.
This is the major application of the algorithm, and also of our Mathematica
program {\bf condens.m}.
The next examples illustrate the computer analysis of equations with
parameters.
\subsection{Fifth-Order Korteweg-de Vries Equations}
Consider the family of fifth-order KdV equations,
\begin{equation} \label{KdV5par}
u_t + \alpha u^2 u_{x} + \beta u_x u_{2x} + \gamma u u_{3x} + u_{5x} = 0,
\end{equation}
where $\alpha,\beta,\gamma$ are nonzero parameters.
Special cases of (\ref{KdV5par}) are well known in the literature
(Fordy and Gibbons, 1980; Hirota and Ito, 1983; Kupershmidt and Wilson, 1981;
Satsuma and Kaup, 1977).
Indeed, for $\alpha = 30, \beta = 20, \gamma = 10,$ equation (\ref{KdV5par})
reduces to the Lax (1968) equation.
The SK equation, due to Sawada and Kotera (1974), and also Dodd and
Gibbon (1977), is obtained for $\alpha = 5, \beta = 5, \gamma = 5.$
The KK equation, due to Kaup (1980) and Kupershmidt, corresponds to
$\alpha = 20, \beta = 25, \gamma = 10,$ and the equation due to Ito (1980)
arises for $\alpha = 2, \beta = 6, \gamma = 3$.
The scaling properties of (\ref{KdV5par}) are such that
\[ u \sim \frac{\partial^2}{\partial{x^2}},\quad
\frac{\partial}{\partial{t}} \sim \frac{\partial^5}{\partial{x^5}}. \]
Using our algorithm, one easily computes the {\em compatibility conditions}
for the parameters $\alpha, \beta$ and $\gamma,$ so that (\ref{KdV5par})
admits a polynomial conserved density of fixed rank.
\vskip 3pt
\noindent
The results are:
\vskip 5pt
\noindent
{\bf Rank 2:} There are no conditions on the parameters.
Indeed, equation (\ref{KdV5par}) can be written as a
conservation law with density $\rho = u.$
\vskip 5pt
\noindent
{\bf Rank 4:} The equation has density $\rho = u^2$ provided that
\begin{equation}\label{KdV5con4}
\beta = 2 \gamma.
\end{equation}
Only the Lax and Ito equations have this density.
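The rank-4 condition can also be checked numerically: evaluating $\frac{d}{dt}\int \frac{1}{2}u^2\,dx = \int u\,u_t\,dx$ on a periodic grid, with $u_t$ substituted from (\ref{KdV5par}), should give (up to discretization error) zero exactly when $\beta = 2\gamma$. A rough Python sketch of this check, using centered differences and an arbitrary smooth test profile of our own choosing:

```python
import math

# d/dt of the integral of u^2/2 at t = 0 for
# u_t = -(alpha u^2 u_x + beta u_x u_2x + gamma u u_3x + u_5x),
# on a periodic grid with centered differences.  The "conserved"
# value vanishes only up to the O(h^2) discretization error.
N = 256
h = 2 * math.pi / N
u = [math.sin(i * h) + 0.5 * math.sin(2 * i * h) for i in range(N)]

def D(f):
    return [(f[(i + 1) % N] - f[(i - 1) % N]) / (2 * h) for i in range(N)]

def drho_dt(alpha, beta, gamma):
    ux = D(u); u2x = D(ux); u3x = D(u2x); u5x = D(D(u3x))
    ut = [-(alpha * u[i] ** 2 * ux[i] + beta * ux[i] * u2x[i]
            + gamma * u[i] * u3x[i] + u5x[i]) for i in range(N)]
    return h * sum(u[i] * ut[i] for i in range(N))

conserved = drho_dt(30, 20, 10)   # Lax parameters: beta = 2 gamma
violated = drho_dt(5, 5, 5)       # SK parameters: beta != 2 gamma
print(conserved, violated)
```

With the Lax parameters the rate of change is small (pure discretization error), while with the SK parameters it is of order one, reflecting that $\rho = u^2$ is not conserved when $\beta \neq 2\gamma$.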
\vskip 5pt
\noindent
{\bf Rank 6:} The condition
\begin{equation}\label{KdV5con6}
\alpha = - \frac{1}{10} (2 {\beta}^2 - 7 \beta \gamma + 3 {\gamma}^2)
\end{equation}
guarantees the existence of the density of rank 6:
\[ \rho = (2 \beta-\gamma) u^3 - 15 {u_x}^2, \quad
\gamma \neq 2 \beta. \]
The Lax, SK and KK equations have this density.
\vskip 5pt
\noindent
{\bf Rank 8:} There are two branches:
\begin{enumerate}
\item If the condition (\ref{KdV5con4}) holds then
$$
\rho = \alpha u^4 - 6 \gamma u {u_x}^2 + 6 {u_{2x}}^2.
$$
This branch covers the Lax and Ito cases.
\item If the condition
\begin{equation}\label{KdV5con82}
\alpha = - \frac{1}{45} (2 {\beta}^2 - 7 \beta \gamma - 4 {\gamma}^2 )
\end{equation}
holds, one has
\[
\rho = {(2 \beta + \gamma )}^2 u^4 - 135 (2 \beta + \gamma) u {u_x}^2 +
675 {u_{2x}}^2, \quad \gamma \neq -2 \beta.
\]
This branch covers the SK, KK and Ito equations.
\end{enumerate}
{\bf Rank 10:} The conditions
\begin{equation}\label{KdV5con10}
\beta = 2 \gamma \;\;{\rm and}\;\; \alpha = \frac{3}{10} {\gamma}^2
\end{equation}
must hold in order to have the density
\[
\rho = \gamma^3 u^5 - 50 \gamma^2 u^2 {u_x}^2 + 100 \gamma u {u_{2x}}^2
- \frac{500}{7} {u_{3x}}^2.
\]
Only the Lax equation has this density.
Naturally, the following question arises:
What are the necessary conditions for the parameters
$\alpha, \beta $ and $\gamma $ so that (\ref{KdV5par}) might have
infinitely many polynomial conservation laws?
\vskip 5pt
\noindent
{\bf Lax Case:} The Lax equation admits densities of rank 4 and 6.
Combining the conditions (\ref{KdV5con4}) and (\ref{KdV5con6}) leads
to
\begin{equation} \label{Lax5con}
\alpha = \frac{3}{10} \gamma^2 {\rm \;\;and\;\;} \beta = 2 \gamma,
\end{equation}
fixing $\alpha$ and $\beta$ in terms of $\gamma$ (the latter parameter can
always be scaled to 1).
The conditions in (\ref{Lax5con}) lead to the Lax equation, which is known to
be completely integrable and has infinitely many conserved densities.
\vskip 5pt
\noindent
{\bf SK-KK Cases:} The SK and KK equations admit densities of rank 6 and 8.
Combining (\ref{KdV5con6}) and (\ref{KdV5con82}) gives
\begin{eqnarray}
\alpha = \frac{1}{5} \gamma^2 &{\rm and}& \beta = \gamma,
\label{SK5con} \\
\alpha = \frac{1}{5} \gamma^2 &{\rm and}& \beta = \frac{5}{2} \gamma.
\label{KK5con}
\end{eqnarray}
The conditions in (\ref{SK5con}) lead to the SK equation,
and those in (\ref{KK5con}) to the KK equation. Both equations are
indeed integrable and have an infinite sequence of conserved densities.
\vskip 5pt
\noindent
{\bf Ito Case:} The Ito equation admits only three densities.
Combining (\ref{KdV5con4}) and (\ref{KdV5con82}) gives
\begin{equation} \label{Ito5con}
\alpha = \frac{2}{9} \gamma^2 {\rm \;\;and\;\;} \beta = 2 \gamma.
\end{equation}
These conditions reduce (\ref{KdV5par}) to the Ito equation, which is not
integrable.
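The combined conditions quoted for the Lax, SK, KK and Ito cases are straightforward to verify with exact rational arithmetic. A small Python check (our own, not part of {\bf condens.m}):

```python
from fractions import Fraction as F

# The rank-6 and rank-8 (second branch) compatibility conditions,
# written as functions giving alpha in terms of beta and gamma:
cond6  = lambda b, g: -F(1, 10) * (2 * b**2 - 7 * b * g + 3 * g**2)
cond82 = lambda b, g: -F(1, 45) * (2 * b**2 - 7 * b * g - 4 * g**2)

g = F(1)   # gamma can always be scaled to 1

# Lax: beta = 2 gamma, alpha = 3 gamma^2 / 10  (rank-4 + rank-6 conditions)
assert cond6(2 * g, g) == F(3, 10) * g**2
# SK: beta = gamma, and KK: beta = 5 gamma / 2, both with alpha = gamma^2 / 5
assert cond6(g, g) == cond82(g, g) == F(1, 5) * g**2
assert cond6(F(5, 2) * g, g) == cond82(F(5, 2) * g, g) == F(1, 5) * g**2
# Ito: beta = 2 gamma, alpha = 2 gamma^2 / 9  (rank-4 + rank-8 conditions)
assert cond82(2 * g, g) == F(2, 9) * g**2
print("all four special cases check out")
```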
\vskip 2pt
\noindent
Using our code we were able to retrieve all known integrable cases
in the family (\ref{KdV5par}), as given in e.g. Mikhailov {\it et al.\/}
(1987).
The Lax equation comes as no surprise since it is the fifth-order symmetry
of the KdV equation (\ref{KdVeq}).
Our program confirms that, apart from the KdV equation, there
are only two non-trivial integrable equations in the family
(\ref{KdV5par}), namely the KK and SK equations.
Some of the conserved densities are listed in Tables 1 and 2.
A proof for the gaps (with period three) between conservation laws for the
SK and KK equations is given in Sanders and Wang (1996).
Mathematically rigorous proofs for the existence of conservation laws of
such evolution equations are given in Sanders and Wang (1995b, 1996).
\subsection{Seventh-Order Korteweg-de Vries Equations}
Consider the family of seventh-order KdV equations,
\begin{equation}\label{KdV7par}
u_t+a u^3 u_x+b {u_x}^3 +c u u_x u_{2x}+d u^2 u_{3x} +e u_{2x} u_{3x}+
f u_x u_{4x} + g u u_{5x} + u_{7x} = 0,
\end{equation}
where $a,b,c,d,e,f$ and $g$ are nonzero parameters. Special cases of
(\ref{KdV7par}) are known in the literature.
For $a = 252, b = 63, c = 378, d = 126, e = 63, f = 42, g = 21$, equation
(\ref{KdV7par}) reduces to the SK-Ito equation, due to Sawada and Kotera,
and Ito (1980).
For $a = 140, b = 70, c = 280,d = 70, e = 70, f = 42, g = 14,$ equation
(\ref{KdV7par}) belongs to the KdV hierarchy studied by Lax (1968).
For $a = 2016, b = 630, c = 2268,d = 504,$
$e = 252, f = 147, g = 42,$
the equation belongs to the Kaup-Kupershmidt hierarchy
(Bilge, 1992; Gerdt, 1993).
The scaling properties of (\ref{KdV7par}) are
$ \displaystyle {
u \sim \frac{\partial^2{}}{\partial{x^2}},\quad
\frac{\partial{}}{\partial{t}} \sim \frac{\partial^7{}}{\partial{x^7}}.
}
$
\vskip 1pt
\noindent
With {\bf condens.m}, we computed the {\em compatibility conditions}
for the parameters, so that conserved densities of fixed rank exist.
The results are:
\vskip 3pt
\noindent
{\bf Rank 2:} $ \rho = u $ if $b = \frac{1}{2}(c-2 d)$.
This condition holds for both the Lax and SK-Ito equations.
\vskip 5pt
\noindent
{\bf Rank 4:} One has $ \rho = u^2 $ if
$b = c - 3 d {\rm \;\;and\;\;} e = 5 (f - 2 g);$
which holds only for the Lax equation.
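The rank-2 and rank-4 conditions can be tested directly against the parameter sets quoted at the start of this subsection. A small Python check of our own (note that the KK parameter set also satisfies the rank-2 condition, consistent with $\rho_1 = u$ appearing in its density list further below):

```python
from fractions import Fraction as F

# Parameter sets quoted in the text for the seventh-order family:
cases = {
    "SK-Ito": dict(a=252, b=63, c=378, d=126, e=63, f=42, g=21),
    "Lax":    dict(a=140, b=70, c=280, d=70, e=70, f=42, g=14),
    "KK":     dict(a=2016, b=630, c=2268, d=504, e=252, f=147, g=42),
}

def rank2(p):   # rho = u exists iff b = (c - 2 d) / 2
    return F(p["b"]) == F(p["c"] - 2 * p["d"], 2)

def rank4(p):   # rho = u^2 exists iff b = c - 3 d and e = 5 (f - 2 g)
    return (p["b"] == p["c"] - 3 * p["d"]
            and p["e"] == 5 * (p["f"] - 2 * p["g"]))

for name, p in cases.items():
    print(name, "rank 2:", rank2(p), " rank 4:", rank4(p))
```

The rank-4 pair of conditions singles out the Lax case, while the rank-2 condition is met by all three parameter sets.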
\vskip 4pt
\noindent
{\bf Rank 6:} There are three branches.
If one of the following three sets of conditions holds:
\begin{eqnarray*}
\!\!\!\!\!\!\!\!\!
a & = & \frac{1}{294} (28 b f -42 c f +120 f^3 -42 b g + 63 c g-720 f^2 g +
1350 f g^2-810 g^3), \\
d & = & \frac{5}{14} (2 f^2-9 f g +9 g^2) {\rm \;\;and\;\;}
e = \frac{14 c}{2 f-3 g}, \\
{\rm or } \\
a & = & \frac{1}{294} (28 b f -42 c f -40 f^3 -42 b g + 63 c g+240 f^2 g -
450 f g^2+270 g^3), \\
d & = & - \frac{5}{42} (2 f^2-9 f g +9 g^2) {\rm \;\;and\;\;}
e = \frac{2}{3} \frac{(21 c+20 f^2-90 f g+90 g^2)}{2 f-3 g}, \\
{\rm or } \\
a & = & \frac{1}{42} (4 b f -6 c f+24 d f -6 b g +9 c g -36 d g), \\
e & = & \frac{14 c -14 d+10 f^2-45 f g+45 g^2}{2 f -3 g},
\end{eqnarray*}
then
$$
\rho = (2 f - 3 g) u^3 - 21 {u_x}^2, \quad f \neq \frac{3}{2} g.
$$
{\bf Rank 8:}
\begin{eqnarray*}
& & \rho =
(49 c+10 f^2-45 f g+20 g^2) u^4 - 252 (2 f-g) u {u_x}^2 + 1764 {u_{2x}}^2, \\
& & c \neq -\frac{1}{49} (10 f^2 - 45 f g + 20 g^2),
\end{eqnarray*}
provided that
\begin{eqnarray*}
a & = & -\frac{1}{882}(49 c f+10 f^3-196 c g-85 f^2 g+200 f g^2-80 g^3 ),
\nonumber \\
b & = & \frac{1}{42}(14 c+2 f^2-9 f g+4 g^2), \quad
d = \frac{1}{42}(7 c-2 f^2+9 f g-4 g^2), {\rm \;\;and\;\;} \\
e & = & 2 f-g .
\end{eqnarray*}
\vskip 2pt
\noindent
Combining the conditions for {\bf Rank 2} through {\bf Rank 8} yields
\begin{eqnarray}
& & a= \frac{4}{147} g^3,\;\; b = \frac{1}{7} g^2,\;\; c = \frac{6}{7} g^2,
\;\; d = \frac{2}{7} g^2,\;\; e = 3 g,\; f = 2 g, \label{SKIto7con} \\
{\rm or} \nonumber \\
& & a=\frac{5}{98} g^3,\;\; b=\frac{5}{14} g^2,\;\; c=\frac{10}{7} g^2,\;\;
d = \frac{5}{14} g^2,\;\; e= 5 g,\;\; f = 3 g ,\label{Lax7con} \\
{\rm or} \nonumber \\
& & a=\frac{4}{147} g^3,\;\; b=\frac{5}{14} g^2,\;\; c=\frac{9}{7} g^2,\;\;
d = \frac{2}{7} g^2,\;\; e= 6 g,\;\; f = \frac{7}{2} g. \label{Third7con}
\end{eqnarray}
In each case all of the parameters are fixed in terms of $g,$
which could be scaled to 1.
The equations (\ref{SKIto7con}), (\ref{Lax7con}), and (\ref{Third7con})
belong to the SK-Ito, Lax and KK hierarchies, respectively, and they are
all integrable (Bilge, 1992).
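That the three families of conditions reproduce the literature parameter sets quoted at the beginning of this subsection is a quick rational-arithmetic check; a small Python verification of our own:

```python
from fractions import Fraction as F

def family(a3, b2, c2, d2, e1, f1, g):
    """Evaluate (a, b, c, d, e, f) from the coefficients of g^3, g^2
    and g appearing in one of the three combined-condition branches."""
    g = F(g)
    return (a3 * g**3, b2 * g**2, c2 * g**2, d2 * g**2, e1 * g, f1 * g)

# (SKIto7con) with g = 21, (Lax7con) with g = 14, (Third7con) with g = 42:
assert family(F(4, 147), F(1, 7), F(6, 7), F(2, 7), 3, 2, 21) \
    == (252, 63, 378, 126, 63, 42)
assert family(F(5, 98), F(5, 14), F(10, 7), F(5, 14), 5, 3, 14) \
    == (140, 70, 280, 70, 70, 42)
assert family(F(4, 147), F(5, 14), F(9, 7), F(2, 7), 6, F(7, 2), 42) \
    == (2016, 630, 2268, 504, 252, 147)
print("all three hierarchies match the quoted parameter sets")
```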
Some of the conserved densities for the SK-Ito and Lax equations are listed
in Table 3 for the choices $g=21$ and $g=14,$ respectively.
The conditions in (\ref{Third7con}) are satisfied for
$a = 2016, b = 630,$
$c = 2268, d = 504,$
$e = 252, f = 147, g = 42,$ and the first
five conserved densities of (\ref{KdV7par}),
corresponding to the KK-hierarchy, then are:
\begin{eqnarray*}
\rho_1 &=& u,
\quad\quad\quad\quad
\rho_2 = u^3 - \frac{1}{8} {u_x}^2,
\quad\quad\quad\quad
\rho_3 = u^4 -\frac{3}{4} u {u_x}^2 + \frac{1}{48} {u_{2x}}^2, \\
\\
\rho_4 &=& u^6 - \frac{35}{8} u^3 {u_x}^2 - \frac{31}{384} {u_x}^4 +
\frac{17}{32} u^2 {u_{2x}}^2 +\frac{37}{1152} {u_{2x}}^3
- \frac{5}{192} u {u_{3x}}^2 + \frac{1}{2304} {u_{4x}}^2, \\
\\
\rho_5 &=& u^7 - \frac{63}{8} u^4 {u_x}^2 - \frac{287}{320} u {u_x}^4
+ \frac{161}{120} u^3 {u_{2x}}^2 + \frac{97}{640} {u_x}^2 {u_{2x}}^2
+ \frac{737}{2880} u {u_{2x}}^3 \\
\\
& & - \frac{53}{480} u^2 {u_{3x}}^2 - \frac{133}{5760} u_{2x} {u_{3x}}^2
+ \frac{1}{240} u {u_{4x}}^2 - \frac{1}{17280} {u_{5x}}^2.
\end{eqnarray*}
Our results merely confirm the computer analysis of (\ref{KdV7par})
carried out with REDUCE by Bilge (1992) and Gerdt (1993).
\begin{table}[htbp]
\caption{Conserved Densities for Sawada-Kotera and Lax 5th-order Equations}
\label{table1}
\begin{center}
\begin{tabular}{||c|l|l||}
\hline
& & \\
Density
& $\;\;\;\;\;\;$ Sawada-Kotera equation
& $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ Lax equation \\
& & \\ \hline
& & \\
$\rho_1$ & $u$ & $u$ \\
& & \\ \hline
& & \\
$ \rho_2$ & ---& $\frac{1}{2} u^2$ \\
& & \\ \hline
& & \\
$\rho_3 $ & $\frac{1}{3} u^3- {u_x}^2 $
& $\frac{1}{3} u^3 -\frac{1}{6} {u_x}^2 $ \\
& & \\ \hline
& & \\
$ \rho_4 $ & $ \frac{1}{4} u^4- \frac{9}{4}u {u_x}^2 +\frac{3}{4} {u_{2x}}^2$
& $ \frac{1}{4} u^4- \frac{1}{2}u {u_x}^2 +\frac{1}{20} {u_{2x}}^2$ \\
& & \\ \hline
& & \\
$\rho_5$ & --- & $\frac{1}{5} u^5
- u^2 {u_x}^2
+ \frac{1}{5} u {u_{2x}}^2
- \frac{1}{70} {u_{3x}}^2 $ \\
& & \\ \hline
& & \\
$\rho_6$ & $\frac{1}{6} u^6
- \frac{25}{4} u^3 {u_x}^2
- \frac{17}{8} {u_x}^4
+ 6 u^2 {u_{2x}}^2$
& $\frac{1}{6} u^6
- \frac{5}{3} u^3 {u_x}^2
- \frac{5}{36} {u_x}^4
+ \frac{1}{2} u^2 {u_{2x}}^2 $ \\
& & \\
&$+ 2 {u_{2x}}^3 - \frac{21}{8} u {u_{3x}}^2
+ \frac{3}{8} {u_{4x}}^2 $
& $+ \frac{5}{63} {u_{2x}}^3 - \frac{1}{14} u {u_{3x}}^2
+ \frac{1}{252} {u_{4x}}^2 $ \\
& & \\ \hline
& & \\
$\rho_7$ & $\frac{1}{7} u^7
- 9 u^4 {u_x}^2
- \frac{54}{5} u {u_x}^4
+ \frac{57}{5} u^3 {u_{2x}}^2$
& $\frac{1}{7} u^7
- \frac{5}{2} u^4 {u_x}^2
- \frac{5}{6} u {u_x}^4
+ u^3 {u_{2x}}^2 $ \\
& & \\
&$+ \frac{648}{35} {u_x}^2 {u_{2x}}^2 + \frac{489}{35} u {u_{2x}}^3
- \frac{261}{35} u^2 {u_{3x}}^2 $
& $+ \frac{1}{2} {u_x}^2 {u_{2x}}^2 + \frac{10}{21} u {u_{2x}}^3
- \frac{3}{14} u^2 {u_{3x}}^2 $ \\
& & \\
&$- \frac{288}{35} u_{2x} {u_{3x}}^2 + \frac{81}{35} u {u_{4x}}^2
- \frac{9}{35} {u_{5x}}^2$
& $- \frac{5}{42} u_{2x} {u_{3x}}^2 + \frac{1}{42} u {u_{4x}}^2
- \frac{1}{924} {u_{5x}}^2$ \\
& & \\ \hline
& & \\
$\rho_8$ & --- & $\frac{1}{8} u^8
- \frac{7}{2} u^5 {u_x}^2
- \frac{35}{12} u^2 {u_x}^4
+ \frac{7}{4} u^4 {u_{2x}}^2 $ \\
& & \\
& & $+ \frac{7}{2} u {u_{x}}^2 {u_{2x}}^2 + \frac{5}{3} u^2 {u_{2x}}^3
+ \frac{7}{24} {u_{2x}}^4 $ \\
& & \\
& & $ + \frac{1}{2} u^3 {u_{3x}}^2- \frac{1}{4} {u_{x}}^2 {u_{3x}}^2
- \frac{5}{6} u u_{2x} {u_{3x}}^2 $ \\
& & \\
& &$+ \frac{1}{12} u^2 {u_{4x}}^2 + \frac{7}{132} u_{2x} {u_{4x}}^2
- \frac{1}{132} u {u_{5x}}^2$ \\
& & \\
& & $ + \frac{1}{3432} {u_{6x}}^2 $ \\
& & \\ \hline
\end{tabular}
\end{center}
\end{table}
\noindent
\begin{table}[htbp]
\caption{Conserved Densities for Kaup-Kupershmidt and Ito 5th-order Equations}
\label{table2}
\begin{center}
\begin{tabular}{||c|l|l||}
\hline
& & \\
Density
& $\;\;\;\;\;\;$ Kaup-Kupershmidt equation
& $\;\;\;\;\;\;\;$ Ito equation \\
& & \\ \hline
& & \\
$\rho_1$ & $u$ & $u$ \\
& & \\ \hline
& & \\
$ \rho_2$ & ---& $\frac{1}{2} u^2$ \\
& & \\ \hline
& & \\
$\rho_3 $ & $\frac{1}{3} u^3 -\frac{1}{8} {u_x}^2 $
& --- \\
& & \\ \hline
& & \\
$\rho_4 $ & $\frac{1}{4} u^4-\frac{9}{16}u {u_x}^2 +\frac{3}{64} {u_{2x}}^2$
& $ \frac{1}{4} u^4- \frac{9}{4}u {u_x}^2 +\frac{3}{4} {u_{2x}}^2$ \\
& & \\ \hline
& & \\
$\rho_5$ & --- &--- \\
& & \\ \hline
& & \\
$\rho_6$ & $\frac{1}{6} u^6
- \frac{35}{16} u^3 {u_x}^2
- \frac{31}{256} {u_x}^4
+ \frac{51}{64} u^2 {u_{2x}}^2$
& --- \\
& & \\
&$+\frac{37}{256} {u_{2x}}^3 - \frac{15}{128} u {u_{3x}}^2
+ \frac{3}{512} {u_{4x}}^2 $
& \\
& & \\ \hline
& & \\
$\rho_7$ & $\frac{1}{7} u^7
- \frac{27}{8} u^4 {u_x}^2
- \frac{369}{320} u {u_x}^4
+ \frac{69}{40} u^3 {u_{2x}}^2$
& --- \\
& & \\
&$+ \frac{2619}{4480} {u_x}^2 {u_{2x}}^2 + \frac{2211}{2240} u {u_{2x}}^3
- \frac{477}{1120} u^2 {u_{3x}}^2 $
& \\
& & \\
&$- \frac{171}{640} u_{2x} {u_{3x}}^2 + \frac{27}{560} u {u_{4x}}^2
- \frac{9}{4480} {u_{5x}}^2$
& \\
& & \\ \hline
& & \\
$\rho_8$ & --- & --- \\
& & \\ \hline
$ \rho_9$ & $\frac{1}{9} u^9
- \frac{13}{2} u^6 {u_x}^2
- \frac{427}{32} u^3 {u_x}^4
- \frac{10431}{8960} {u_x}^6 $
& --- \\
& & \\
& $ + \frac{21}{4} u^5 {u_{2x}}^2
+ \frac{12555}{448} u^2 {u_x}^2 {u_{2x}}^2
+ \frac{2413}{224} u^3 {u_{2x}}^3 $
& \\
& & \\
& $ + \frac{16461}{1792} {u_x}^2 {u_{2x}}^3
+ \frac{1641}{256} u {u_{2x}}^4
- \frac{267}{112} u^4 {u_{3x}}^2 $
& \\
& & \\
& $ - \frac{3699}{896} u {u_x}^2 {u_{3x}}^2
- \frac{4383}{448} u^2 u_{2x} {u_{3x}}^2
- \frac{76635}{19712} {u_{2x}}^2 {u_{3x}}^2 $
& \\
& & \\
& $ -\frac{18891}{19712} u_x {u_{3x}}^3
+ \frac{141}{224} u^3 {u_{4x}}^2
+ \frac{8649}{39424} {u_x}^2 {u_{4x}}^2 $
& \\
& & \\
& $ + \frac{27639}{19712} u u_{2x} {u_{4x}}^2
+ \frac{2715}{39424} {u_{4x}}^3
- \frac{927}{9856} u^2 {u_{5x}}^2 $
& \\
& & \\
& $ - \frac{2943}{39424} u_{2x} {u_{5x}}^2
+ \frac{9}{1232} u {u_{6x}}^2
- \frac{9}{39424} {u_{7x}}^2 $
& \\
& & \\ \hline
\end{tabular}
\end{center}
\end{table}
\noindent
\begin{table}[htbp]
\caption{Conserved Densities for Sawada-Kotera-Ito and
Lax 7th-order Equations}
\label{table3}
\begin{center}
\begin{tabular}{||c|l|l||}
\hline
& & \\
Density
& $\;\;\;\;\;\;$ Sawada-Kotera-Ito equation
& $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ Lax equation \\
& & \\ \hline
& & \\
$\rho_1$ & $u$ & $u$ \\
& & \\ \hline
& & \\
$ \rho_2$ & --- & $ \frac{1}{2} u^2$ \\
& & \\ \hline
& & \\
$\rho_3 $ & $\frac{1}{3} u^3 - \frac{1}{3} {u_x}^2 $
& $ \frac{1}{3} u^3 - \frac{1}{6} {u_x}^2 $ \\
& & \\ \hline
& & \\
$ \rho_4 $ & $\frac{1}{4} u^4 -\frac{3}{4} u {u_x}^2 +\frac{1}{12} {u_{2x}}^2$
& $ \frac{1}{4} u^4 - \frac{1}{2} u {u_x}^2 + \frac{1}{20}{u_{2x}}^2$ \\
& & \\ \hline
& & \\
$\rho_5$ & --- & $\frac{1}{5} u^5 - u^2 {u_x}^2 + \frac{1}{5} u {u_{2x}}^2
- \frac{1}{70} {u_{3x}}^2 $ \\
& & \\ \hline
& & \\
$\rho_6$ & $ \frac{1}{6} u^6 - \frac{25}{12} u^3 {u_x}^2
- \frac{17}{72} {u_x}^4 + \frac{2}{3} u^2 {u_{2x}}^2 $
& $ \frac{1}{6} u^6 - \frac{5}{3} u^3 {u_x}^2 - \frac{5}{36} {u_x}^4
+ \frac{1}{2} u^2 {u_{2x}}^2 $ \\
& & \\
& $
+\frac{2}{27} {u_{2x}}^3 - \frac{7}{72} u {u_{3x}}^2+\frac{1}{216} {u_{4x}}^2$
& $+\frac{5}{63} {u_{2x}}^3 -\frac{1}{14} u {u_{3x}}^2 +
\frac{1}{252} {u_{4x}}^2 $ \\
& & \\ \hline
& & \\
$\rho_7$ & $\frac{1}{7} u^7 - 3 u^4 {u_x}^2 - \frac{6}{5} u {u_x}^4
+ \frac{19}{15} u^3 {u_{2x}}^2 $
& $ \frac{1}{7} u^7 - \frac{5}{2} u^4 {u_x}^2 - \frac{5}{6} u {u_x}^4
+ u^3 {u_{2x}}^2 $ \\
& & \\
& $ + \frac{24}{35} {u_x}^2 {u_{2x}}^2 + \frac{163}{315} u {u_{2x}}^3
- \frac{29}{105} u^2 {u_{3x}}^2$
& $ +\frac{1}{2} {u_x}^2 {u_{2x}}^2 + \frac{10}{21} u {u_{2x}}^3
- \frac{3}{14} u^2 {u_{3x}}^2 $ \\
& & \\
& $-\frac{32}{315} u_{2x} {u_{3x}}^2 + \frac{1}{35} u {u_{4x}}^2
- \frac{1}{945} {u_{5x}}^2$
& $ -\frac{5}{42} u_{2x} {u_{3x}}^2 + \frac{1}{42} u {u_{4x}}^2
- \frac{1}{924} {u_{5x}}^2 $ \\
& & \\ \hline
& & \\
$\rho_8$ & --- & $ \frac{1}{8} u^8 - \frac{7}{2} u^5 {u_x}^2
-\frac{35}{12} u^2 {u_x}^4 + \frac{7}{4} u^4 {u_{2x}}^2 $ \\
& & \\
& & $ + \frac{7}{2} u {u_x}^2 {u_{2x}}^2 + \frac{5}{3} u^2 {u_{2x}}^3 +
\frac{7}{24} {u_{2x}}^4 $ \\
& & \\
& & $ - \frac{1}{2} u^3 {u_{3x}}^2 -\frac{1}{4} {u_x}^2 {u_{3x}}^2
-\frac{5}{6} u u_{2x} {u_{3x}}^2 $ \\
& & \\
& & $ + \frac{1}{12} u^2 {u_{4x}}^2 + \frac{7}{132} u_{2x} {u_{4x}}^2
- \frac{1}{132} u {u_{5x}}^2 $ \\
& & \\
& & $ + \frac{1}{3432} {u_{6x}}^2$ \\
& & \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{More Examples}
\subsection{Nonlinear Schr\"{o}dinger Equation}
The nonlinear Schr\"{o}dinger (NLS) equation (Ablowitz and Clarkson, 1991),
\begin{equation}\label{NLSeq}
i q_t - q_{2x} + 2 {|q|}^2 q =0,
\end{equation}
arises as an asymptotic limit of a slowly varying dispersive wave envelope
in a nonlinear medium, and as such has significant applications in
nonlinear optics, water waves, and plasma physics.
Equation (\ref{NLSeq}) is known to be completely integrable,
and together with the ubiquitous KdV equation (\ref{KdVeq}),
is one of the most studied
soliton equations.
\newline
\noindent
There are two different ways to compute conserved densities for (\ref{NLSeq}).
\vskip 3pt
\noindent
{\bf Method 1}$\;\;$ One can replace (\ref{NLSeq}) by
\begin{eqnarray}\label{NLSsys}
u_t - v_{2x}+2 v (u^2+v^2) &=& 0, \nonumber \\
v_t + u_{2x}-2 u (u^2+v^2) &=& 0,
\end{eqnarray}
where $q = u + i v.$ The scaling properties are such that
\[u \sim v \sim \frac{\partial{}}{\partial{x}},\quad
\frac{\partial{}}{\partial{t}} \sim \frac{\partial^2{}}{\partial{x^2}}. \]
We computed seven conserved densities with our program, of which the first
six are:
\[
\begin{array}{rclcrcl}
\!\!\!\!\!\!\rho_1 &=& u^2+v^2,
&\;\;\quad\quad &
\rho_2 &=& v u_x, \\
\!\!\!\!\!\!\rho_3 &=& u^4+2 u^2 v^2+v^4+{u_x}^2+{v_x}^2,
&\;\;\quad\quad &
\rho_4 &=& u^2 v u_x + \frac{1}{3} v^3 u_x + \frac{1}{6} v_x u_{2x},
\end{array}
\]
\begin{eqnarray*}
\rho_5 &=& u^6 + 3 u^4 v^2 + 3 u^2 v^4 + v^6 + 5 u^2 {u_x}^2 + 3 v^2 {u_x}^2
+ 3 u^2 {v_x}^2 + 5 v^2 {v_x}^2 + 4 \;u v u_x v_x \\
& & + \frac{1}{2} {u_{2x}}^2 + \frac{1}{2} {v_{2x}}^2,\\
\\
\rho_6 &=& u^4 v u_x + \frac{2}{3} u^2 v^3 u_x + \frac{1}{5} v^5 u_x
+ \frac{1}{3} u {u_x}^2 v_x + \frac{1}{3} u^2 u_{2x} v_x
+ \frac{1}{3} v^2 u_{2x} {v_x} + \;\frac{1}{3} v u_x {v_x}^2 \\
& & + \frac{1}{30} u_{3x} v_{2x}.
\end{eqnarray*}
\vskip 1pt
\noindent
{\bf Method 2}$\;\;$ One could consider $q$ and $q^{*}$ as independent
variables and add the complex conjugate of (\ref{NLSeq}) to the NLS equation.
Absorbing $i$ in the scale of $t$ then yields:
\begin{eqnarray}\label{NLSsys2}
q_t - q_{2x} + 2 q^2 q^{*} &=& 0, \nonumber \\
q_t^{*} + q_{2x}^{*} - 2 q^{*2} q &=& 0.
\end{eqnarray}
According to the procedure described at the end of Step 1 in Section 2.3,
our program computes the constraints
\[
w(q) = 2 - w(q^{*}), \quad\quad w(q^{*}) = w(q^{*}),
\quad w(\frac{\partial{}}{\partial{t}}) = 2,
\]
and sets the left hand sides of the first two equal to one.
Then, it solves the equations
\[
1 = 2 - w(q^{*}), \quad\quad 1 = w(q^{*}),
\]
piece by piece. Both lead to the solution $w(q^{*}) = 1.$
Hence, the program continues with $w(q) = w(q^{*}) = 1.$
\newline
\noindent
For rank 2 through rank 6
our program produces the conserved densities:
\[
\begin{array}{rclcrcl}
\!\!\!\rho_1 &=& q q^{*}, & \quad\quad & \rho_2 &=& q^{*} q_x,\\
\\
\!\!\!\rho_3 &\!=\!& q^2 q^{*2} + q_x q_x^{*}, & \quad\quad &
\rho_4 &\!=\!& q q^{*2} q_x + \frac{1}{3} q_{2x} q_x^{*},
\end{array}
\]
\[
\!\!\!\!\!\!\!\;\;\rho_5 \;\;=\;\; q^3 q^{*3} + \frac{1}{2} q^{*2} q_x^2
+ 4 q q^{*} q_x q_x^{*} + \frac{1}{2} q^2 q_x^{*2}
+ \frac{1}{2} q_{2x} q_{2x}^{*}.
\]
Obviously, these two sets of conservation laws are connected
(but not piece by piece) via a simple change of variables:
$ u = \frac{1}{2} (q + q^{*}), v = \frac{1}{2i} (q - q^{*}). $
The second method has the advantage that the conserved densities are
expressed in the original variable $q$ and its conjugate $q^{*}.$
On the other hand, the conserved densities from Method 2 may not be
independent.
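The correspondence between the two sets of densities is easy to spot-check numerically; for instance, $\rho_1$ and $\rho_3$ of Method 1 happen to agree term-for-term with their Method-2 counterparts under $q = u + iv$ (the remaining densities mix). A small Python check with sampled values, treating $q$ and $q_x$ as independent algebraic quantities:

```python
import random

# Spot-check that the Method-1 densities in (u, v) equal the
# Method-2 densities in (q, q*) under q = u + i v.  The identity is
# purely algebraic, so arbitrary sample values suffice.
random.seed(1)
q = complex(random.uniform(-1, 1), random.uniform(-1, 1))
qx = complex(random.uniform(-1, 1), random.uniform(-1, 1))

u, v = q.real, q.imag        # q = u + i v
ux, vx = qx.real, qx.imag

rho1_uv = u**2 + v**2
rho1_q = (q * q.conjugate()).real
rho3_uv = u**4 + 2 * u**2 * v**2 + v**4 + ux**2 + vx**2
rho3_q = (q**2 * q.conjugate()**2 + qx * qx.conjugate()).real

assert abs(rho1_uv - rho1_q) < 1e-12
assert abs(rho3_uv - rho3_q) < 1e-12
print("rho1 and rho3 agree in both sets of variables")
```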
\subsection{Non-dispersive Long Wave System}
The non-dispersive long wave equations (Kupershmidt, 1985a)
\begin{eqnarray}\label{Longwavesys}
u_t + v u_x + u v_x &=& 0, \nonumber \\
v_t + u_x + v v_x &=& 0,
\end{eqnarray}
form another example of an integrable system.
We use this example to illustrate how our code determines a free weight.
Indeed, for (\ref{Longwavesys}),
$ w(u) = 2 w(v), \; w(v)= w(v), \;
w(\frac{\partial{}}{\partial{t}}) = 1 + w(v), $
with $w(v)$ as the only free weight.
As described in Step 1 of the algorithm, the program sorts the right hand
sides of these constraints, sets their left hand sides equal to one,
and proceeds with solving
\[
1 = w(v), \quad\quad
1 = 2 w(v), \quad\quad
1 = w(v) + 1,
\]
one by one. That leads to the choices $w(v) = 1, w(v) = \frac{1}{2}, $ and
$w(v) = 0.$
Since the latter is zero it is incremented by one to get $w(v) = 1.$
\vskip 2pt
\noindent
The first choice, $w(v) =1,$ gives
$w(u) = w(\frac{\partial{}}{\partial{t}}) = 2 .$
The second choice, $w(v) =\frac{1}{2},$ gives
$w(u) = 1, w(\frac{\partial{}}{\partial{t}}) = \frac{3}{2}.$
Another valid choice (not considered by our program) would be
$w(v) =\frac{1}{4},$ $w(u) = \frac{1}{2},$ and
$w(\frac{\partial{}}{\partial{t}}) = \frac{5}{4}.$ Obviously, there
are infinitely many fractional choices for $w(v). $
Recall that fractional weights and ranks are indeed allowed.
The program continues automatically with the smallest integer choice,
$w(v) =1.$ Hence, $ w(u) = 2, $ or, symbolically,
\[
u \sim {\partial^2 \over \partial x^2} \;\;\; {\rm and} \;\;\;
v \sim {\partial \over \partial x}.
\]
For rank one through eight we obtained the following densities:
\[
\begin{array}{rclcrcl}
\rho_1 & = & v, & \quad\quad &
\rho_2 & = & u, \\
\rho_3 & = & u v, & \quad\quad &
\rho_4 & = & u^2 + u v^2,\\
\rho_5 & = & u^2 v + \frac{1}{3} u v^3 ,& \quad\quad &
\rho_6 & = & u^3 + 3 u^2 v^2 + \frac{1}{2} u v^4, \\
\rho_7 & = & u^3 v + u^2 v^3 + \frac{1}{10} u v^5,& \quad\quad &
\rho_8 & = & u^4 + 6 u^3 v^2 + 3 u^2 v^4 + \frac{1}{5} u v^6.
\end{array}
\]
This set of densities remains the same for any valid choice of
$w(v)$. Indeed, we could have computed the conserved density
$ \rho = u v $ with different choices of the free weight $w(v).$
This $\rho$ has rank $R = w(u) + w(v) = 3 w(v).$
If we choose $w(v) = \frac{1}{2}$ then $ \rho = u v$ has rank
$R = \frac{3}{2}.$
On the other hand, for the choice $w(v) = \frac{1}{4},$ this density
has rank $R = \frac{3}{4}.$
In conclusion, choosing $w(v)$ differently does not affect the form of the
desired densities, provided the rank of $\rho$ is also adjusted appropriately.
Due to the superposition principle, the same argument can be made if
$\rho$ is the sum of many terms, involving $x$-derivatives of the
dependent variables.
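The conservation of these densities can also be observed in a crude direct simulation of (\ref{Longwavesys}). The Python sketch below (forward Euler with centered differences, and initial data of our own choosing) monitors $\int u\,dx$ and $\int uv\,dx$ on a periodic grid:

```python
import math

# Evolve u_t = -(u v)_x, v_t = -(u + v^2/2)_x on a periodic grid.
# The integral of u is conserved to roundoff (centered differences
# telescope on a periodic grid); the integral of u v drifts only at
# the level of the discretization and time-stepping errors.
N = 256
h = 2 * math.pi / N
dt = 1e-3
u = [2.0 + 0.1 * math.sin(i * h) for i in range(N)]
v = [0.1 + 0.05 * math.cos(i * h) for i in range(N)]

def Dx(f):
    return [(f[(i + 1) % N] - f[(i - 1) % N]) / (2 * h) for i in range(N)]

def total(rho):
    return h * sum(rho)

m0 = total(u)
p0 = total([ui * vi for ui, vi in zip(u, v)])
for _ in range(100):
    du = Dx([ui * vi for ui, vi in zip(u, v)])
    dv = Dx([ui + 0.5 * vi**2 for ui, vi in zip(u, v)])
    u = [ui - dt * d for ui, d in zip(u, du)]
    v = [vi - dt * d for vi, d in zip(v, dv)]

print(abs(total(u) - m0))                                  # ~ roundoff
print(abs(total([ui * vi for ui, vi in zip(u, v)]) - p0))  # small drift
```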
\subsection{Three-Component Korteweg-de Vries Equation}
Consider the 3-component extension of the KdV equation (Kupershmidt, 1985b),
\begin{eqnarray}\label{3cKdV}
u_t -6 u u_x +2 v v_x+2 w w_x-u_{3x} &=& 0, \nonumber \\
v_t-2 v u_x-2 u v_x &=& 0, \\
w_t -2 w u_x-2 u w_x &=& 0, \nonumber
\end{eqnarray}
which can be written as a bi-Hamiltonian system with
infinitely many conservation laws.
The scaling properties indicate that
\[
u \sim v \sim w \sim {\partial^2 \over \partial x^2}, \quad
\frac{\partial{}}{\partial{t}} \sim \frac{\partial^3{}}{\partial{x^3}},
\]
and the first four densities for (\ref{3cKdV}) are:
\begin{eqnarray*}
\rho_1 &=& c_1 u +c_2 v +c_3 w, \quad
\rho_2 = u^2-v^2-w^2, \quad
\rho_3 = u^3 - u v^2 - u w^2 - \frac{1}{2}{u_x}^2, \\
\\
\rho_4 &=& u^4 - \frac{6}{5} u^2 v^2 +\!\frac{1}{5} v^4
- \!\frac{6}{5} u^2 w^2 +\!\frac{2}{5} v^2 w^2 + \! \frac{1}{5} w^4
- \! 2 u {u_x}^2 \! + \!\frac{1}{5} u_{2x}^2 \!
+ \!\frac{4}{5} v u_x v_x \!+\!\frac{4}{5} w u_x w_x.
\end{eqnarray*}
Obviously, from $\rho_1$ we can see that $u, v$ and $w$ are independent
conserved densities.
\section{Using the Program condens.m}
We now describe the features, scope and limitations of our program
{\bf condens.m}, which is written in {\it Mathematica} syntax (Wolfram, 1996).
The program {\bf condens.m} has its own {\it menu\/} interface with
30 sample data files.
Users are assumed to have access to {\it Mathematica} (version 2.0 or higher).
The code {\bf condens.m} and the data files should be put in one directory.
\subsection{The Menu Interface}
\noindent
After {\it Mathematica\/} comes up with `In[1]:=', type
\begin{verbatim}
In[1]:= <<condens.m
\end{verbatim}
to read in the code {\bf condens.m} and start the program.
Via its menu interface, the program will automatically
prompt you for answers.
\begin{example}
{\rm Let us compute the density of rank 4 (provided it exists)
for the Drinfel'd-Sokolov system (Ablowitz and Clarkson, 1991):
\begin{eqnarray} \label{drinsoko}
& & u_t + 3 v v_x = 0, \nonumber \\
& & v_t + 2 v_{3x} + 2 u v_x + u_x v = 0.
\end{eqnarray}
Since this example is in the menu, start the program and pick entry 25
from the menu.}
\begin{verbatim}
*** MENU INTERFACE *** (page: 3)
-------------------------------------------
21) MVDNLS System (d_mvdnls.m)
22) Kaup System-parameterized (d_pkaup.m)
23) Kaup System (d_kaup.m)
24) Kaup-Broer System (d_broer.m)
25) Drinfel'd-Sokolov System (d_soko.m)
26) Dispersive Long Wave System (d_disper.m)
27) Non-dispersive Long Wave System (d_nodisp.m)
28) 3-Component KdV System (d_3ckdv.m)
29) 2-Component Nonlinear Schrodinger Equation (d_2cnls.m)
30) Boussinesq System (d_bous.m)
nn) Next Page
tt) Your System
qq) Exit the Program
------------------------------------------
ENTER YOUR CHOICE: 25
Enter the rank of rho: 4
Enter the name of the output file: drisokr4.o
*********************************************************
WELCOME TO THE MATHEMATICA PROGRAM
by UNAL GOKTAS and WILLY HEREMAN
FOR THE COMPUTATION OF CONSERVED DENSITIES.
Version 3.0 released on February 24, 1997
Copyright 1996
*****************************************************************
.
.
*****************************************************************
2
The normalized density rho[x,t] is : u
2
******************************************************************
2 2
The corresponding flux j[x,t] is: 2 u u - 2 u + 4 u u
1 2 2,x 2 2,xx
******************************************************************
Result of explicit verification (rho_t + J_x) = 0
******************************************************************
\end{verbatim}
\noindent
{\rm
At the end of computation, the normalized density and flux are available.
In the absence of parameters, both are normalized according to the
coefficient of the first term in the density.
If there are parameters, the common denominators that have been multiplied
through are shown. To see the density and flux in standard
Mathematica notation, type \verb|rho[x,t]| or \verb|j[x,t]|.}
{\rm However, type \verb|pdeform[rho[x,t]]| or \verb|pdeform[j[x,t]]|
to see them in subscript notation.}
{\rm Note that the form of the densities $ \rho $ is not unique.
Densities can always be integrated by parts to obtain {\it equivalent}
forms, modulo total derivatives.
In {\it Mathematica} version 2.2, equivalent forms can be obtained via the
command \verb|Integrate[rho[x,t],x]|.}
\end{example}
\subsection{Preparing Data Files}
To test systems that are not in the menu, prepare a data file in
the format of our data files.
Of course, the name for a new data file should not coincide with any name
already listed in the menu, unless you intend to modify those data files.
\begin{example}
{\rm
For the parameterized Hirota-Satsuma system (\ref{HirSatpar}) the data
file reads:
}
\begin{verbatim}
(* start of data file d_phrsat.m *)
debug = False;
(* Hirota-Satsuma System with one parameter *)
eq[1][x,t]=D[u[1][x,t],t]-aa*D[u[1][x,t],{x,3}]-
6*aa*u[1][x,t]*D[u[1][x,t],x]+6*u[2][x,t]*D[u[2][x,t],x];
eq[2][x,t]=D[u[2][x,t],t]+D[u[2][x,t],{x,3}]+3*u[1][x,t]*D[u[2][x,t],x];
noeqs = 2;
name = "Hirota-Satsuma System (parameterized)";
parameters = {aa};
weightpars = {};
(* user can supply the rank of rho and a name for the output file *)
(* rhorank = 6; *)
(* myfile = "phrsatr6.o"; *)
(* user can give the weights of u[1], u[2] and partial t *)
(* weightu[1]=2; weightu[2]=2; weight[t]=3; *)
(* user can give the form of rho. Here, for density of rank 6: *)
(* formrho[x,t]={c[1]*u[1][x,t]^3+c[2]*u[1][x,t]*u[2][x,t]^2+
c[3]*D[u[1][x,t],x]^2+c[4]*D[u[2][x,t],x]^2}; *)
formrho[x,t] = {};
(* end of data file d_phrsat.m *)
\end{verbatim}
\end{example}
\vskip 2pt
\noindent
{\rm
Explanation of the lines in the data file:
\vskip 4pt
\noindent
\verb|debug = False;|
\vskip 3pt
\noindent
No trace of internal computations is shown; to see one, set it to {\it True}.
\vskip 6pt
\noindent
\verb|eq[k][x,t] = ...;|
\vskip 3pt
\noindent
Give the $k^{\rm th}$ equation of the system in {\it Mathematica} notation.
Note that there is no \verb|== 0| at the end.
\vskip 6pt
\noindent
\verb|noeqs = 2;|
\vskip 3pt
\noindent
Specify the number of equations in the system.
\vskip 6pt
\noindent
\verb|name = "Hirota-Satsuma System (parameterized)";|
\vskip 3pt
\noindent
Pick a short name for the system. The quotes are essential.
\vskip 6pt
\noindent
\verb|parameters = {aa};|
\vskip 3pt
\noindent
Give the list of the parameters in the system.
If there are none, set {\it parameters = \{ \}}.
\vskip 5pt
\noindent
\verb|weightpars = {};|
\vskip 3pt
\noindent
Give the list of the parameters that are assumed to have weights.
Note that weighted parameters are {\it not} listed in {\it parameters},
which is the list of parameters without weight.
\vskip 6pt
\noindent
\verb|(* rhorank = 6; *)|
\vskip 3pt
\noindent
Optional; give the desired rank of the density. Useful if you want
to work with the program less interactively (in batch mode).
\vskip 6pt
\noindent
\verb|(* myfile = "phrsatr6.o"; *)|
\vskip 3pt
\noindent
Optional; give a name of the output file. Useful for less
interactive use of the program.
\vskip 6pt
\noindent
\verb|(* weightu[1]=2; weightu[2]=2; weight[t]=3; *)|
\vskip 3pt
\noindent
Optional; specify a choice for {\it some or all} of the weights.
The program then skips the computation of the weights, but
still checks for consistency. Particularly useful if there are several
free weights and non-interactive use is preferred.
\vskip 6pt
\noindent
\verb|formrho[x,t] = {};|
\vskip 3pt
\noindent
The program will {\it compute} the form of $ \rho $ of the given rank.
\vskip 1pt
\noindent
\begin{verbatim}
formrho[x,t]={c[1]*u[1][x,t]^3+c[2]*u[1][x,t]*u[2][x,t]^2+
c[3]*D[u[1][x,t],x]^2+c[4]*D[u[2][x,t],x]^2};
\end{verbatim}
\vskip 1pt
\noindent
Alternatively, one could give a form of $ \rho $ (here for rank 6).
The density must be given in expanded form and with coefficients c[i].
The braces are essential.
If $ \rho $ is given, the program skips both the computation
of the weights and the form of the density.
Instead, the code uses what is given and computes the coefficients c[i].
This option allows one, for example, to test densities from the literature.
\vskip 6pt
\noindent
Anything within \verb|(*| and \verb|*)| is a comment and is ignored by
{\it Mathematica}.
\vskip 6pt
\noindent
Once the data file is ready, run it via the choice ``\verb|tt) Your System|"
in the menu.
}
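When \verb|formrho| is supplied, the program's remaining task is linear algebra: substitute the ansatz, demand that $D_t\rho$ be a total $x$-derivative, and solve for the coefficients c[i]. This step can be imitated on a small scale (a sketch in Python with SymPy, independent of {\it condens.m}; the jet-space setup and names are ours). It uses the variational (Euler) derivative, which vanishes exactly on total $x$-derivatives:

```python
import sympy as sp

N = 9                                 # enough jet variables for this example
u = sp.symbols(f'u0:{N}')             # u0 = u, u1 = u_x, u2 = u_xx, ...
c = sp.Symbol('c')

def Dx(f):
    # total x-derivative on the jet space
    return sum(sp.diff(f, u[i]) * u[i + 1] for i in range(N - 1))

def Dxn(f, n):
    for _ in range(n):
        f = Dx(f)
    return f

ut = 6*u[0]*u[1] + u[3]               # KdV: u_t = 6 u u_x + u_xxx

def Dt(f):
    # t-derivative, with u_t replaced via the evolution equation
    return sum(Dxn(ut, i) * sp.diff(f, u[i]) for i in range(N))

def euler(g):
    # variational derivative; vanishes iff g is a total x-derivative
    return sum((-1)**i * Dxn(sp.diff(g, u[i]), i) for i in range(N))

rho = u[0]**3 + c*u[1]**2             # rank-6 ansatz with undetermined c
eqs = sp.Poly(sp.expand(euler(Dt(rho))), *u).coeffs()
sol, = sp.linsolve(eqs, c)
assert sol[0] == sp.Rational(-1, 2)   # rho = u^3 - u_x^2/2
```

For this ansatz the single compatibility condition is linear in $c$; when the PDE itself carries parameters, the analogous conditions on those parameters can become nonlinear, which is why the program only solves the linear ones automatically.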
\subsection{Scope and Limitations}
Our program can handle PDEs that can be written as systems of
evolution equations. The evolution equations must be polynomials in the
dependent variables (no integral terms). Only two independent variables
($x$ and $t$) are allowed.
No terms in the evolution equations should {\it explicitly} depend on
$x$ and $t.$
Theoretically, there is no limit on the number of evolution equations.
In practice, for large systems, the computations may take a long time or
require a lot of memory.
The computational speed depends primarily on the amount of memory.
The program only computes polynomial conserved densities in the
dependent variables and their derivatives, without explicit dependencies
on $x$ and/or $t.$
By design, the program constructs only densities that are uniform in rank.
The uniform rank assumption for the monomials in $\rho$ allows one to
compute {\it independent} conserved densities piece by piece,
without having to split linear combinations of conserved densities.
Due to the superposition principle, a linear combination of conserved
densities of unequal rank is still a conserved density.
This situation arises frequently when parameters with weight are
introduced in the PDEs.
The input systems may have one or more parameters, which are assumed to
be nonzero. If a system has parameters, the program will attempt to
compute the compatibility conditions for these parameters such that
densities (of a given rank) exist.
The assumption that all parameters in the given evolution equation must
be nonzero is necessary.
As a result of setting parameters to zero in a given system of evolution
equations, the weights and therefore the rank of $\rho$ might change.
In general, the compatibility conditions for the parameters could be
highly nonlinear, and there is no general algorithm to solve them.
The program automatically generates the compatibility conditions,
and solves them for parameters that occur linearly (see Section 3.2).
Gr\"obner basis techniques could be used to reduce complicated
nonlinear systems into equivalent, yet simpler, nonlinear systems.
For PDEs with parameters and when the system for the coefficients $c_i$
is complicated, the program saves that system and its coefficient matrix,
etc., in the file {\it worklog.m}.
Independently of the program, the worklog files
can later be analyzed with {\it Mathematica} functions.
The assumption that the evolution equations are uniform in rank is not
very restrictive. If the uniform rank condition is violated, the user
can introduce one or more parameters with weights.
This also allows for some flexibility in the form of the densities.
Although built up with terms that are uniform in rank, the densities do not
have to be uniform in rank with respect to the dependent variables
{\it alone}. This is illustrated in Example 2.2.
Conversely, when the uniform rank condition {\it is} fulfilled, the
introduction of extra parameters (with weights) in the given PDE
leads to a linear combination of conservation laws, not to new ones.
In cases where it is not clear whether or not parameters with weight
should be introduced, one should start with parameters without weight.
If this causes incompatibilities in the assignment of weights (due to
non-uniformity), the program may provide a suggestion. Quite often,
it recommends that one or more parameters be moved from the list of
{\it parameters} into the list {\it weightpars} of weighted parameters.
For systems with two or more free weights, the user will
be prompted to enter values for the free weights. If only one
weight is free, the program will automatically compute some choices for
the free weight, and eventually continue with the lowest integer or
fractional value (see Examples 4.1 and 4.2).
The program selects this value for the free weight; it is just one choice
out of possibly infinitely many.
If the algorithm fails to determine a suitable value, the user will be
prompted to enter a value for the free weight.
Negative weights are not allowed, except for $w(\frac{\partial}{\partial t}).$
If $w(\frac{\partial}{\partial t}) < 0, $ the program will give a warning,
but continue with the computations.
Zero weights are allowed, but at least one of the dependent variables
must have positive weight. The code checks these conditions, and if they
are violated the computations are aborted.
Note that {\it fractional weights} and densities of {\it fractional rank}
are permitted.
Our program is a tool in the search for the first half-dozen
conservation laws. An existence proof (showing that there are
indeed infinitely many conservation laws) must be done analytically,
e.g. by explicitly constructing the recursion operator
(Kodama, 1985; Sanders and Wang, 1995b, 1996) that connects conserved
densities, or by computing high-order symmetries with Lie symmetry
software (Hereman, 1996).
If our program succeeds in finding a large set of independent conservation laws,
there is a good chance that the system of PDEs has infinitely many
conserved densities and that the recursion operator could be
constructed explicitly.
If the number of conservation laws is three or fewer, most likely the PDEs are
not integrable, at least not in that coordinate representation.
\section{Other Software Packages}
This section gives a review of other symbolic software for the computation
of conserved densities, together with recent developments in this area.
\subsection{SYMCD}
The program {\bf SYMCD}, written by Ito (1994), is an improved version of
{\bf CONSD} by Ito and Kako (1985). Both programs are in REDUCE.
Similar to our program, {\bf SYMCD} uses scaling properties to
compute polynomial conserved densities for systems of any number
of evolution equations with uniform rank.
{\bf CONSD} had a limit on the number of evolution equations.
This limitation was removed in {\bf SYMCD}.
Evolution equations must be polynomial in the dependent variables and
their derivatives, and variables with negative weight are not allowed.
With REDUCE 3.5 on an IBM Risc 6000 workstation, we tested the version
of {\bf SYMCD} released in 1996.
Our test cases included equations (\ref{KdVeq}), (\ref{Boussys2}),
(\ref{HirSatpar}), (\ref{KdV5par}) and (\ref{3cKdV}).
For systems with or without parameters, {\bf SYMCD} gives the same conserved
densities as our program (up to terms that can be converted via
integration by parts).
However, {\bf SYMCD} does not properly handle systems with parameters.
It stops after generating the necessary conditions on the parameters,
which must be analyzed separately.
Such analyses revealed that the conditions do not always lead to a density of
fixed rank. Indeed, in solving for the undetermined coefficients,
{\bf SYMCD} considers all possible branches in the solution,
irrespective of whether or not these branches lead to a conserved density,
as confirmed by Ito (1996).
Another major difference is that parameters with weights are not allowed
in {\bf SYMCD}, which restricts the scope of {\bf SYMCD} to systems with
uniform rank.
In conclusion, our code is more sophisticated than {\bf SYMCD}
in handling systems with parameters and systems of non-uniform rank.
For more information contact Ito at
[email protected].
\subsection{DELiA}
The PC package {\bf DELiA} for {\it Differential Equations with Lie Approach},
developed in the period 1989--1991, is an outgrowth of the
{\bf SCoLar} project by Bocharov and Bronstein.
{\bf DELiA}, written in Turbo PASCAL by Bocharov (1991) and coworkers,
is a commercial computer algebra system for investigating differential
equations using Lie's approach.
The program deals with symmetries, conservation laws, integrability and
equivalence problems.
It has a special routine for systems of evolution equations, which we used
for computing conserved densities.
We tested {\bf DELiA 1.5.1} on a PC with Intel 486 processor and 16 MB
of RAM.
Our test cases included equations (\ref{KdVeq}), (\ref{Boussys2}),
(\ref{HirSatpar}),
(\ref{HirSatpar}) with $\alpha = 1/2$,
(\ref{KdV5par}) with arbitrary $\alpha,$ $\beta = 20$ and $\gamma = 10\/$,
(\ref{Longwavesys}) and (\ref{3cKdV}).
For (\ref{KdVeq}) and (\ref{HirSatpar}) with $\alpha = 1/2,$
{\bf DELiA} returned the same densities as the ones listed in this paper,
up to terms that differ via integration by parts.
With {\bf DELiA}, one cannot compute densities for (\ref{Boussys2}),
(\ref{Longwavesys}), and (\ref{3cKdV}). These systems are out of its class:
the program requires second or higher-order spatial derivative terms in
all equations.
For systems with parameters, {\bf DELiA} does not automatically compute the
densities corresponding to the (necessary) conditions on the parameters.
One has to use {\bf DELiA}'s integrability test first,
which determines conditions based on the existence of formal symmetries.
Since these integrability conditions are neither necessary nor sufficient
for the existence of conserved densities, one must further analyze the
conditions manually.
Once the parameters are fixed, one can compute the densities.
For further information we refer to Bocharov at [email protected].
\subsection{FS}
The REDUCE program {\bf FS} for ``formal symmetries'' was written
by Gerdt and Zharkov (1990).
The code {\bf FS} can be applied to polynomial nonlinear PDEs of
evolution type, which are linear with respect to the highest-order
spatial derivatives and with non-degenerated, diagonal coefficient matrix
for the highest derivatives. The algorithm in {\bf FS} requires that
the evolution equations be of order two or higher in the
spatial variable.
However, the formal symmetry approach does not require that the evolution
equations are uniform in rank.
We tested {\bf FS} with REDUCE 3.5 on an IBM Risc 6000 workstation for
equations (\ref{KdVeq}), (\ref{Boussys2}), (\ref{KdV5par}),
(\ref{Longwavesys}) and (\ref{3cKdV}).
We were unable to compute the densities for systems (\ref{Boussys2}),
(\ref{Longwavesys}), and (\ref{3cKdV}), since {\bf FS} is not applicable
to such systems.
For (\ref{KdVeq}), {\bf FS} gave the same densities as our program,
up to terms that differ via integration by parts.
Like {\bf DELiA}, applied to equations with parameters, {\bf FS} computes
the conditions on the parameters using the symmetry approach.
For (\ref{KdV5par}), {\bf FS} and {\bf condens.m} gave equivalent
sets of compatibility conditions on the parameters.
More information can be obtained from Gerdt at [email protected].
\subsection{Software under Development}
In addition to the available software, several research groups are
currently developing software for conserved densities.
Sanders and Wang at the Free University of Amsterdam are
developing a software package in Maple and FORM to compute both conserved
densities and recursion operators
(Sanders and Roelofs, 1994; Sanders and Wang, 1995a).
Their approach is more abstract and relies on the implicit function theorem.
In contrast to our algorithm, their code makes no assumptions about the
form of the conserved density.
Further information can be obtained from Sanders at [email protected].
Wolf (1995) at the University of London is designing a package in REDUCE for
the computation of conserved densities, which implements two methods.
If the density is of fixed order, then the program constructs it directly.
This method is currently limited to two independent variables.
In the second method the characteristic
function of the conservation law is constructed first, and the density and
flux are recovered via integration by parts. There is no limitation on
the number of independent variables for this method.
Both approaches use Wolf's program CRACK for solving overdetermined
systems of PDEs.
See Hereman (1996) for a review of CRACK and for additional references
to Wolf's work.
Wolf's algorithm is particularly efficient for showing the non-existence
of conservation laws of high order. In contrast to our program, it also
allows one to compute non-polynomial conserved densities.
For further information contact Wolf at [email protected].
Ahner, Tschantz, and Cook at Vanderbilt University are working on a
similar project in {\it Mathematica} (Ahner, 1995).
We have no details about their algorithm or implementation.
Contact Ahner at [email protected] for further information.
Hickman (1996) at the University of Canterbury, Christchurch, New Zealand,
has implemented a slight variation of our algorithm in Maple. Instead of
computing the differential monomials in the density by repeated
differentiation,
Hickman uses a tree structure combining the appropriately weighted building
blocks.
For further information, send email to [email protected].
\section{Conclusions}
A prototype of {\bf condens.m} was used to compute conserved densities of
a generalized KdV equation due to Schamel (Verheest and Hereman, 1995), and
a modified vector derivative NLS equation (Willox {\em et al.\/}, 1995).
Based on the results obtained with the software, integrability questions for
these equations could be answered.
We offer the scientific community a {\it Mathematica} package to carry out
the tedious calculations of conserved densities for systems of nonlinear
evolution equations.
Details about the algorithm and its implementation can be found in
G\"{o}kta\c{s} (1996a), and G\"{o}kta\c{s} and Hereman (1997).
The code {\bf condens.m}, together with several data and output files,
is available via anonymous FTP (G\"{o}kta\c{s}, 1996b).
Extensions of the algorithm presented in this paper towards PDEs with
more than one spatial variable, dynamical systems, and systems of
differential-difference equations are under development.
Further information about extensions can be obtained via email to
[email protected] or [email protected].
\section*{Acknowledgements}
The authors are grateful to F. Verheest for many valuable comments and
suggestions.
The Belgian National Fund for Scientific Research is thanked for a
special research grant which made possible the stay of W.H. at the
University of Ghent, where this work was started.
\"{U}G appreciates the generous financial aid from the Turkish Government
which supported his studies in part.
V. Gerdt, M. Hickman, M. Ito, J. Sanders and T. Wolf are thanked for
sharing information about their packages.
\section{Introduction}
The idea of determining the $S$-matrix of a theory from the
structure of its singularities was intensively studied in the 60's
\cite{Weinberg:1964ev, Weinberg:1964ew,Weinberg:1965rz,Olive:1964,
Chew:1966, Olive:1967}. The main idea was to consider the $S$-matrix
as an analytic function of the kinematical invariants with
singularities determined by physical input. Most efforts were
directed towards the study of hadronic interactions. Nowadays we
know that understanding analytically the strong interactions is a
very hard problem.
A more modern version, similar in spirit, was developed in the 90's,
where extensive use of branch cut singularities was shown to be a
powerful tool in the computation of scattering amplitudes in
Yang-Mills theories. These techniques came to be known as the
unitarity based method
\cite{Bern:1994zx,Bern:1994cg,Bern:1995db,Bern:1996je,Bern:1996ja,Bern:1997sc,Bern:2004cz}.
More recently \cite{Britto:2004nc,Buchbinder:2005wp,Cachazo:2008dx},
the highest codimension singularities, called leading
singularities already in the 60's, were shown to be a powerful tool
in computing amplitudes. At one-loop, the problem of computing
amplitudes is reduced to that of computing tree amplitudes for
theories where no triangles or bubbles appear. One such theory is
${\cal N}=4$ super Yang-Mills (SYM) and it has been hypothesized
that so too is ${\cal N}=8$ supergravity
\cite{Bern:1998sv,Bern:2005bb,BjerrumBohr:2005xx,BjerrumBohr:2006yw}.
In this paper we show that at higher loop orders the power of the
leading singularity has only been partially
unleashed~\cite{Buchbinder:2005wp,Cachazo:2008dx}. By carefully
studying all leading singularities we propose a method which reduces
the computation of multi-loop and multi-particle amplitudes in
${\cal N}=4$ SYM to the computation of residues (which end up being
related to tree amplitudes) and to the solution of {\it linear}
equations.
The discontinuity across the leading singularity at one loop is
computed by collecting all Feynman diagrams that share four given
propagators and then cutting the propagators to turn the sum into
simply the product of four tree-level amplitudes. Cutting means
removing the principal value part of a Feynman propagator, {\it
i.e.} $1/(\ell^2+i\epsilon)\rightarrow \delta^{(+)}(\ell^2)$. Thus
the integral over the four-vector $\ell$ is completely localized by
the four delta functions. The value of the integral for each
solution $\ell_*$ is given by the jacobian of the change of
variables from $\ell$ to the argument of the delta functions
evaluated at $\ell_*$. The final answer is then the sum of all
contributions. In massless theories there are two solutions
$\ell_*^{(1)}$ and $\ell_*^{(2)}$. In general, the product of the
four tree-level amplitudes gives different answers at the different
points $\{ \ell_*^{(1)},\ell_*^{(2)}\}$. An important exception is
when the number of particles is four. In this case, the product of
amplitudes is equal to $A^{\rm tree}st$ for both values of $\ell$.
In general, the support of the delta functions is outside the region
where $\ell$ is a real vector. This means that the computation of
the leading singularity\footnote{Here and throughout the paper we
will abuse terminology and refer to the discontinuity across a given
leading singularity as the leading singularity itself.} is more
naturally interpreted in terms of a contour integral in $\Bbb{C}^4$
where $\ell$ is now a complex vector. There are two distinct leading
singularities and the prescription of~\cite{Britto:2004nc} is
equivalent to choosing a contour that picks up both residues. A
natural question is what the role of each isolated leading
singularity is. Naively, one could try to expand the one-loop
amplitude in scalar boxes and then compute their coefficients by
matching the residues at a given leading singularity. If one did
that one would find different answers for the same coefficient which
would be a contradiction.
In this paper we show that the resolution to this puzzle is that at
one-loop, reduction formulas that express {\it e.g.} pentagons in
terms of boxes, derived in
\cite{Passarino:1978jh,vanNeerven:1983vr,Bern:1993kr}, are not valid for generic complex
integration contours. This means that in order to write an
expression in terms of scalar integrals which reproduces {\it all}
singularities of Feynman diagrams correctly one has to allow for
higher point integrals. This explains why four particles is special
and why the first non-trivial case is that of five particles, where
one should allow for a pentagon.
At higher loops, the original approach of~\cite{Buchbinder:2005wp}
is to sum over all solutions. In~\cite{Cachazo:2008dx}, it was
stated that one should be able to work with individual
singularities, but this was in the context of four-particles where
there is no distinction.
In this paper we show that requiring agreement between Feynman
diagrams and scalar (or generalized scalar) integrals on each
individual leading singularity provides enough linear equations to
determine higher-loop and multi-particle amplitudes.
In a nutshell, one starts by collecting all Feynman diagrams with
chosen propagators that give rise to topologies with only boxes. By
writing an ansatz of scalar integrals ({\it i.e.}, with numerator
one) one requires agreement at all leading singularities
independently. This gives a set of linear equations. Sometimes the
system of equations does not have solutions which means that
generalized scalar integrals ({\it i.e.}, with numerators that are
propagators with negative power) must be added. The numerators serve
as zeroes to cancel poles so that the new integrals have zero
residue on leading singularities where they are not needed.
We explain the method via examples. Our main example is the
five-particle two-loop amplitude. This amplitude was first computed
in \cite{Bern:2006vw}. In this case, our computation only requires
solving linear equations in two variables!
An interesting new feature already found in \cite{Bern:1997nh}, as
compared to the four-particle amplitude, is that after normalizing
by the corresponding tree-level amplitude one finds terms that are
parity even and some that are parity odd. Using our technique, it
becomes very transparent why the answer does not have definite
parity. The reason is that leading singularities come in pairs. For
real external momenta, one of them would be located at the complex
conjugate value of the other. For four particles, the value of
Feynman diagrams on each leading singularity is the same and hence
one gets a parity even answer. For five or more particles, the
values differ and one gets complex solutions.
It is important to mention that the actual helicity structure of the
amplitude only enters as the inhomogeneous part of the linear
equations that determine the coefficients of the integrals. In this
sense, the homogeneous part of the linear system of equations is
universal. Since for five particles, all non-trivial helicity
configurations are MHV (or ${\overline{\rm MHV}}$) the feature just
described is not very surprising. This is why we present the linear
equations which determine a large class of terms contributing to
six-particle MHV and next-to-MHV two-loop amplitudes. Solving the
equations could allow a comparison to the recent computation of the
parity even part of the six-particle MHV amplitude
in~\cite{Bern:2008ap}.
This paper is organized as follows. In section II we establish our
conventions. In section III we give a detailed explanation of the
leading singularity technique at one-loop. The discussion of how the
scalar basis must be extended is given for five particles since the
results there can be directly applied to the two-loop case. In
section IV we introduce the leading singularity technique at two
loops in the simplest case, {\it i.e.}, the four-particle amplitude.
In section V we present the main example of the paper, the
five-particle two-loop amplitude, in complete detail. In section VI
we present the linear equations for coefficients in MHV and
next-to-MHV six-particle two-loop amplitudes. In section VII we give
conclusions and future directions.
\section{Preliminaries}
\label{sec:prelims}
Scattering amplitudes of on-shell particles in ${\cal N}=4$ SYM with
gauge group $U(N)$ can be written as a sum over color-stripped
partial amplitudes using the color
decomposition~\cite{Bern:1996je,Mangano:1990by,Dixon:1996wi}. Each
partial amplitude admits a large $N$ expansion. More explicitly,
\begin{eqnarray}
{\cal A}_n(1,2,\ldots , n) &=& \delta^{(4)}(p_1+p_2+\ldots + p_n)\
{\rm
Tr}(T^{a_1}T^{a_2}\ldots T^{a_n})\ A_n(1,2,\ldots, n) \nonumber\\
&&\qquad\qquad + \ {\rm permutations}\ +\ \ldots
\end{eqnarray}
where the sum is over non-cyclic permutations of the external states
(cyclic ones being symmetries of the trace) and the ellipsis
represents terms with double and higher numbers of traces. $A_n$ may
be expanded in perturbation theory and we denote the $L$-loop planar
partial amplitude by $A_n^{(L)}$. We also use $A_n^{(0)}=A_n^{\rm
tree}$.
Our conventions are:
\begin{itemize}
\item A tree-level MHV amplitude has two particles of negative
helicity and the rest of positive helicity.
\item A null vector $p_\mu$ is written as a bispinor as $p_{a\dot
a}=\lambda_a\tilde\lambda_{\dot a}$.
\item The Lorentz invariant inner product of two null vectors $p_\mu$ and $q_\mu$ is given by $2p\cdot q =
\langle \lambda_p, \lambda_q\rangle
[\tilde\lambda_p,\tilde\lambda_q]$. We also use $\langle \lambda_p,
\lambda_q\rangle = \langle p,q\rangle$ and $
[\tilde\lambda_p,\tilde\lambda_q] = [p,q]$.
\item All external momenta in an amplitude are outgoing and usually
denoted by $k_i$.
\item Some useful Lorentz invariant combinations are
$s_{ij}=(k_i+k_j)^2$, $t_{ijl}=(k_i+k_j+k_l)^2$, and
$[a|b+c|d\rangle = [a,b]\langle b,d\rangle+[a,c]\langle c,d\rangle$.
\end{itemize}
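These conventions can be exercised numerically. The sketch below (our own illustration in Python, with one consistent sign choice for the brackets, not taken from any library) checks that a bispinor $p_{a\dot a}=\lambda_a\tilde\lambda_{\dot a}$ is automatically null and that $2p\cdot q$ factorizes into an angle and a square bracket:

```python
import numpy as np

rng = np.random.default_rng(1)

def spinor():
    return rng.normal(size=2) + 1j*rng.normal(size=2)

def br(a, b):                       # <a b> or [a b]: a1 b2 - a2 b1
    return a[0]*b[1] - a[1]*b[0]

lp, ltp, lq, ltq = spinor(), spinor(), spinor(), spinor()
P = np.outer(lp, ltp)               # p_{a adot} = lambda_a lambdatilde_adot
Q = np.outer(lq, ltq)

# a bispinor is automatically null: p^2 = det(P) = 0
assert abs(np.linalg.det(P)) < 1e-10
# 2 p.q = det(P + Q) for null p and q, and factorizes into brackets
assert abs(np.linalg.det(P + Q) - br(lp, lq)*br(ltp, ltq)) < 1e-10
```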
\section{Leading Singularity at One-Loop}
\label{sec:1loop}
Any one-loop amplitude in ${\cal N}=4$ SYM may be expressed both as
a sum over Feynman diagrams, and also in terms of scalar box
integrals \cite{Bern:1994zx}:
\begin{equation}
A_n^{(1)}= \sum\left\{\hbox{1-loop Feynman diagrams}\right\} =
\sum_{\cal I} B_{\cal I}\times I(K_1^{\cal I},K_2^{\cal I},K_3^{\cal
I},K_4^{\cal I}) \label{eq:quadruple}
\end{equation}
where the second sum is over all partitions ${\cal I}$ of
$\{1,2,\ldots, n\}$ into four non-empty sets, $K_i^{\cal I}$ equals
the sum of the momenta in the $i^{\rm th}$ subset of partition
${\cal I}$ and the $B_{\cal I}$ are coefficients to be determined.
Since we are working with color ordered Feynman diagrams, one
considers only partitions that respect the color ordering.
Scalar box integrals are of the form
\begin{equation}
I(K_1,K_2,K_3,K_4) := \int\frac{d^4\ell}{(2\pi)^4}
\frac{1}{\ell^2(\ell-K_1)^2(\ell-K_1-K_2)^2(\ell+K_4)^2}\ .
\label{eq:box}
\end{equation}
Since scalar integrals are known explicitly (see {\it e.g.}
\cite{smirnov}), the problem of computing any one-loop amplitude is
reduced to that of determining the coefficients $B_{\cal I}$.
The amplitude, defined in terms of Feynman diagrams, possesses many
singularities. Some of them are branch cut singularities and
comparing them on both sides is a way of obtaining information about
the coefficients and in many cases it allows their determination
like for MHV amplitudes \cite{Bern:1994zx}. The main difficulty is
that a given branch cut is generically shared by several scalar
integrals and therefore several coefficients appear at the same time
and the information must be disentangled.
\begin{figure}
\includegraphics[scale=0.50]{Quadruple.eps}
\caption{Computation of a coefficient using the leading singularity
of a box. The lines circling the propagators represent the $T^4$
contour of integration. The left hand side of the figure represents
the sum of all 1-loop Feynman diagrams - note that only those
Feynman diagrams that contain the displayed propagators actually
contribute to this particular contour integral.}
\label{fig:quadruple}
\end{figure}
It turns out that the highest codimension singularities of Feynman
diagrams, called leading singularities, receive contributions from a
{\it single} scalar box integral. Thus, using leading singularities
of Feynman diagrams one can determine all coefficients, $B_{\cal
I}$, one by one \cite{Britto:2004nc}.
If we let $f_i(\ell)$ with $i=1,\ldots, 4$ correspond to the four
factors in the denominator of~(\ref{eq:box}), then the leading
singularity is computed by replacing $1/f_i(\ell)$ by
$\delta(f_i(\ell))$. Applying this to both sides
of~(\ref{eq:quadruple}) one finds that
\begin{equation}
\sum_{\ell_*} \left.\det\left(\frac{\partial
f_i}{\partial\ell_\mu}\right)^{-1}\sum_{\rm Multiplet}\prod_{i=1}^4
A^{{\rm tree}\ (i)}\right|_{\ell=\ell_*} = \left.B_{\cal J}\times
\sum_{\ell_*}\det\left(\frac{\partial
f_i}{\partial\ell_\mu}\right)^{-1}\right|_{\ell=\ell_*}\ ,
\label{eq:wipi}
\end{equation}
where the sum over $\ell_*$ means a sum over all solutions to the
equations $f_i(\ell)=0$ for $i=1,\ldots,4$. In general there are two
solutions. The second sum on the l.h.s. is over all members of the
${\cal N}=4$ supermultiplet as choices of internal particles.
When the number of external particles is four, {\it i.e.} $n=4$,
both solutions give a jacobian factor equal to $1/(st)$ and the sum
over the product of tree-level amplitudes equals $A^{\rm tree}st$.
This implies that $B=A^{\rm tree}st$. For five or more particles the
sum over tree amplitudes is not the same for both solutions. In
fact, for five particles the product of tree amplitudes vanishes on
one solution and gives $A^{\rm tree}st$ on the other. Here
$s=(K_1+K_2)^2$ and $t=(K_2+K_3)^2$ with $K_i$'s as
in~(\ref{eq:box}). Using this in~(\ref{eq:wipi}) one finds that $B
=A^{\rm tree}st/2$ which is the correct answer~\cite{Dixon:1996wi}.
Note the factor of half coming from the fact that on the r.h.s.
of~(\ref{eq:wipi}) one gets a factor of two.
\subsection{Sharpening The Leading Singularity}
The solutions, $\ell_*$, to the equations $f_i(\ell)=0$ are complex
in general. This makes the mathematical interpretation of the
leading singularity more transparent if defined as a contour
integral in $\Bbb{C}^4$. The contour integral is obtained by simply
taking~(\ref{eq:box}) and allowing $\ell$ to be a complex vector.
The contour of integration has the topology of a four-torus,
$T^4\cong (S^1)^4$, around each isolated singularity, and it is
given by $\Gamma = \left\{\ell : |f_i(\ell)|=\epsilon_i\right\}$
with $\epsilon_i$ some small positive number. The integral is
computed by a generalization of Cauchy's theorem to higher
dimensions (see for example section 5.1 of \cite{GH}). This can be
seen by a change of variables to local coordinates $z_i$'s in terms
of which the singular point is located at the origin, {\it i.e.}
$z_i= f_i(\ell)=0$. Now it is clear that the contour of integration,
$\Gamma = \left\{\ell : |z_i|=\epsilon_i\right\}$, when projected on
each $z_i$-plane is just a circle of radius $\epsilon_i$. The
integral becomes\footnote{Here and throughout the paper we use the
convention that the symbol $\oint$ contains a factor of $1/(2\pi
i)$.}
\begin{equation}
I = \left(\prod_{i=1}^4\oint_{z_i=0} \frac{dz_i}{z_i}\right)
\det\left(\frac{\partial z_i}{\partial\ell_\mu}\right)^{-1}.
\end{equation}
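In the local coordinates $z_i$ the contour factorizes into nested circles, each of which can be evaluated numerically to machine precision. The following Python sketch (our own illustration; it keeps the $1/(2\pi i)$-per-integration convention used above) evaluates the two-dimensional analogue on a $T^2$ contour:

```python
import numpy as np

def t2_residue(f, eps=0.1, n=400):
    """(1/(2 pi i))^2 * oint oint f(z1, z2) dz1 dz2 over |z1| = |z2| = eps."""
    th = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    z = eps*np.exp(1j*th)           # points on each circle
    dz = 1j*z*(2*np.pi/n)           # dz = i z dtheta
    Z1, Z2 = np.meshgrid(z, z, indexing='ij')
    W = np.outer(dz, dz)
    return np.sum(f(Z1, Z2)*W) / (2j*np.pi)**2

# f = 1/(z1 z2) picks up residue 1 at the origin
assert abs(t2_residue(lambda z1, z2: 1.0/(z1*z2)) - 1.0) < 1e-10
# a function regular at z1 = 0 has vanishing residue on this contour
assert abs(t2_residue(lambda z1, z2: z1/z2)) < 1e-10
```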
A natural question that arises from this formalism is how to compare
the sum over Feynman diagrams and the sum over scalar integrals when
evaluated on each leading singularity independently. In other words,
we would like to make sense of~(\ref{eq:wipi}) term by term in the
sum over solutions. The reader might anticipate a problem from the
fact that the sum over tree amplitudes gives different answers for
$n>4$. Before giving the resolution to the puzzle explicitly for
$n=5$ let us discuss in detail the four-particle case.
\subsubsection{Four Particles}
For four particles there is a single planar configuration with no
triangles or bubbles. This is depicted in figure~\ref{fig:Lead5}A.
The two leading singularities are located at $\ell_{a\dot a} =
\alpha \lambda^{(1)}_a\tilde\lambda^{(4)}_{\dot a}$ and $\ell_{a\dot
a} = \tilde{\alpha} \lambda^{(4)}_a\tilde\lambda^{(1)}_{\dot a}$
with $\alpha = [1,2]/[4,2]$ and $\tilde\alpha = \langle
1,2\rangle/\langle 4,2\rangle$.
It is straightforward to compute the product of the four
three-particle tree-level amplitudes on each of the solutions. In
each case we find the same answer $A^{\rm tree}st$. In order to
reproduce this with scalar integrals we start with a single box and
a coefficient to be determined.
Comparing both sides evaluated on each of the two leading
singularities we find the same condition:
\begin{equation}
B = A^{\rm tree} st.
\end{equation}
This indeed gives the correct answer. As we will see, the fact that
we got the same condition by requiring the two sides to match on
each leading singularity independently is only a peculiarity of
$n=4$ and it will not be true for any $n>4$.
\begin{figure}
\includegraphics[scale=0.50]{Lead5.eps}
\caption{Sum over Feynman diagrams with no triangles or bubbles. (A)
Unique four-particle configuration. (B) Unique five-particle
topology with some choice of external labels. For this particular
choice of external helicities there is a single configuration of
internal helicities.} \label{fig:Lead5}
\end{figure}
\subsubsection{Five Particles}
Five particles is the first non-trivial example where the conditions
coming from the two leading singularities are different. At this
point there seems to be a puzzle: requiring agreement on each
singularity independently leads to distinct coefficients for the
same scalar box integral, which would be a contradiction.
The resolution to this puzzle is simple but subtle. It has to do
with the fact that when momenta are complex, the integrand of loop
amplitudes is invariant under the complexification of the (double
cover) of the Lorentz group, {\it i.e.} $SL(2,\Bbb{C})\times
SL(2,\Bbb{C})$. However, the integral might be invariant under
different subgroups of $SL(2,\Bbb{C})\times SL(2,\Bbb{C})$ depending
on the choice of contour\footnote{A similar situation was
encountered in the derivation of MHV diagrams from twistor string
theory in \cite{Cachazo:2004kj}.}. The physical choice of contour
requires the loop momentum to be a real vector and hence it breaks
the group $SL(2,\Bbb{C})\times SL(2,\Bbb{C})$ down to the diagonal
$SL(2,\Bbb{C})_D$ which is a double cover of the physical Lorentz
group $SO(3,1)$. In particular, on the physical contour, scalar
integrals are parity invariant. This is what fails on the individual
$T^4$ contours.
Let us illustrate this by computing a five-particle MHV amplitude at
one-loop.
Once again there is a single topology of a planar configuration of
Feynman diagrams with only boxes. This is shown in
figure~\ref{fig:Lead5}B with a particular choice of labels. In this
case, the two leading singularities are located at
\begin{equation}
q^{(1)}_{a\dot a} = \lambda^{(1)}_a\left( \tilde\lambda^{(1)}_{\dot
a}+\frac{\langle 2,3\rangle}{\langle
1,3\rangle}\tilde\lambda^{(2)}_{\dot a}\right), \qquad
q^{(2)}_{a\dot a} = \left(
\lambda^{(1)}_{a}+\frac{[2,3]}{[1,3]}\lambda^{(2)}_{a}\right)\tilde\lambda^{(1)}_{\dot
a}
\end{equation}
Evaluating the product of the four tree-level amplitudes is again
straightforward and gives
\begin{equation}
\left.\sum_{\rm Multiplet}\prod_{i=1}^4 A^{{\rm tree}\
(i)}\right|_{q=q^{(1)}} = A^{\rm tree}_5 s_{12}s_{23}, \qquad
\left.\sum_{\rm Multiplet}\prod_{i=1}^4 A^{{\rm tree}\
(i)}\right|_{q=q^{(2)}} = 0. \label{eq:solvi}
\end{equation}
\begin{figure}
\includegraphics[scale=0.50]{Lead4.eps}
\caption{(A) Scalar box integral for five particles. The particular
choice of labels corresponds to the integral $I^{(a):1}$ in the
text. (B) Scalar pentagon integral. This integral is denoted by
$I^{(b)}$. As in the rest of the paper, external momenta are taken
to be outgoing.} \label{fig:Lead4}
\end{figure}
We want to reproduce this behavior using scalar integrals. The
natural candidate is the scalar box integral in
figure~\ref{fig:Lead4}A. It is easy to see that on each of the
leading singularities the scalar box integral gives the same
residue, {\it i.e.}, $1/(s_{12}s_{23})$. If this box were the only
contribution on the scalar integral side we would find that by
comparing to the first equation in~(\ref{eq:solvi}) the coefficient
would be $B=A^{\rm tree}_5 s_{12}s_{23}$, which is twice the known
answer. If, instead, we use the second equation in~(\ref{eq:solvi})
we would find $B=0$ which is also wrong.
This is the puzzle mentioned earlier. In order to discover the
resolution, let us assume we did not know that ${\cal N}=4$ SYM
amplitudes can be written purely in terms of scalar
boxes\footnote{In dimensional regularization this is true only to
order $\epsilon^0$, except for $n=4$ when it is true to all orders in
$\epsilon$.}. The natural starting point to reproduce the behavior
of the collection of the Feynman diagrams is the scalar box integral
in figure~\ref{fig:Lead4}A. However, as we have seen, this is not
enough. Therefore we need to expand the basis. The only other scalar
integral that can contribute to the same leading singularities, but
with different residues, is a pentagon (see
figure~\ref{fig:Lead4}B).
Denoting the coefficient of the pentagon by $C$ we find that
reproducing~(\ref{eq:solvi}) means
\begin{equation}
B + \frac{C}{(q^{(1)}+k_5)^2} = A^{\rm tree}s_{12}s_{23}, \qquad B
+ \frac{C}{(q^{(2)}+k_5)^2} = 0. \label{eq:juno}
\end{equation}
Here $(q+k_5)^2$ is the only propagator of the pentagon not shared
by the box.
These equations can easily be solved. A convenient way to express
the solution is obtained by making the following definitions
\begin{equation}
\beta_{i} := \left(1+ \frac{\langle i+2,i+3\rangle [i+2,i] }{\langle
i+1,i+3\rangle [i+1,i]} \right)^{-1}, \qquad \tilde\beta_{i} :=
\left(1+ \frac{\langle i+2,i\rangle [i+2,i+3] }{\langle i+1,i\rangle
[i+1,i+3]} \right)^{-1}.
\end{equation}
Note that for real momenta, $\tilde\beta_i$ is the complex conjugate
of $\beta_i$. This will be important when identifying odd and even
pieces under parity.
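The conjugation property can be checked numerically by imposing the real-momentum condition directly on random spinors as $\tilde\lambda_i = \lambda_i^*$ (a common convention; in this sketch the labels are 0-indexed and taken mod 5, and momentum conservation is not needed for this particular identity):

```python
import numpy as np

# Check that beta_tilde_i is the complex conjugate of beta_i for real
# momenta, implemented as lambda_tilde = conjugate(lambda).  Under this
# condition [i,j] is the conjugate of <i,j>, and the conjugation property
# follows term by term from the definitions of beta and beta_tilde.
rng = np.random.default_rng(7)
lam = rng.normal(size=(5, 2)) + 1j * rng.normal(size=(5, 2))
lamt = lam.conj()   # real-momentum condition in this convention

def ang(i, j):      # <i,j>
    return lam[i, 0] * lam[j, 1] - lam[i, 1] * lam[j, 0]

def sq(i, j):       # [i,j]
    return lamt[i, 0] * lamt[j, 1] - lamt[i, 1] * lamt[j, 0]

def beta(i):
    i1, i2, i3 = (i + 1) % 5, (i + 2) % 5, (i + 3) % 5
    return 1.0 / (1.0 + ang(i2, i3) * sq(i2, i) / (ang(i1, i3) * sq(i1, i)))

def beta_tilde(i):
    i1, i2, i3 = (i + 1) % 5, (i + 2) % 5, (i + 3) % 5
    return 1.0 / (1.0 + ang(i2, i) * sq(i2, i3) / (ang(i1, i) * sq(i1, i3)))

ok = all(abs(beta_tilde(i) - beta(i).conjugate()) < 1e-9 * (1 + abs(beta(i)))
         for i in range(5))
```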
The solution to the equations in~(\ref{eq:juno}) is
\begin{equation}
B = -A^{\rm tree}
\frac{s_{12}s_{23}\tilde\beta_5}{\beta_5-\tilde\beta_5}, \qquad C =
A^{\rm tree}\frac{s_{51}s_{12}s_{23}}{\beta_5-\tilde\beta_5}.
\label{eq:upsi}
\end{equation}
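Viewed abstractly, the two matching conditions form a $2\times 2$ linear system for $B$ and $C$; a minimal sketch of its solution, with hypothetical numbers standing in for the kinematic quantities (all values below are illustrative placeholders, not actual kinematics):

```python
# Structure of eq. (juno): B + C*r1 = rhs and B + C*r2 = 0, where r_i
# stands for 1/(q^(i)+k_5)^2 and rhs for A^tree s12 s23.
r1, r2 = 0.7, -1.3   # hypothetical residue values
rhs = 5.0            # stands in for A^tree s12 s23

C = rhs / (r1 - r2)  # pentagon coefficient
B = -C * r2          # box coefficient

# both leading-singularity conditions are now satisfied:
print(B + C * r1, B + C * r2)   # prints: 5.0 0.0
```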
The attentive reader might anticipate another possible
contradiction: If we have fixed the coefficient of the pentagon by
using the leading singularity in figure~\ref{fig:Lead5}B, it is hard
to believe that it will simultaneously solve the equations coming
from figure~\ref{fig:Lead5}B after a cyclic permutation of the
labels\footnote{Note that the pentagon is invariant under a cyclic
permutation of the labels.}. For this to be true the same
coefficient of the pentagon should work in all cases, {\it i.e.},
$C/A^{\rm tree}$ in~(\ref{eq:upsi}) must be invariant under cyclic
permutations of the labels of the external particles!
Indeed, an explicit computation reveals that
\begin{equation}
{\cal C}:=\frac{C}{A^{\rm tree}} =
\frac{s_{i,i+1}s_{i+1,i+2}s_{i+2,i+3}}{\beta_i-\tilde\beta_i}
\label{eq:surprise}
\end{equation}
is the same quantity for all $i\in \{1,2,3,4,5\}$. How to write
${\cal C}$ in a manifestly invariant form will become clear in the
next section.
We are ready to write down the final answer for the amplitude. In
order to compare with the known result it is convenient to separate
contributions into parity even and parity odd pieces. Note that the
coefficient of the pentagon has definite parity; it is parity odd.
The coefficient of the box does not have definite parity. In order
to decompose it, we write, in the numerator of $B$, $\tilde\beta_5 =
(\tilde\beta_5 -\beta_5)/2+(\tilde\beta_5+\beta_5)/2$. The amplitude
is then given as
\begin{equation}
\frac{A^{(1)\rm MHV}_5}{A^{\rm tree\ MHV}_5} = \sum_{\rm cyclic}\left(
\frac{1}{2}s_{12}s_{23}I^{(a):1} -
\frac{1}{2}\left(\frac{\beta_5+\tilde\beta_5}{\beta_5-\tilde\beta_5}\right)s_{12}s_{23}I^{(a):1}\right)
+ {\cal C}\; I^{(b)}. \label{eq:wewe}
\end{equation}
Here $I^{(a):i}$ is a scalar box integral with three massless legs
given by the momenta of the $i^{\rm th}$, $(i+1)^{\rm th}$ and
$(i+2)^{\rm th}$ particles, while $I^{(b)}$ is a pentagon with all
massless legs, as shown in figure~\ref{fig:Lead4}B.
We claim that this is the correct answer. In order to prove it we
have to go back to the usual contour of integration where $\ell$ is
a real vector and where integrals must be dimensionally regulated.
In that case we have to show that~(\ref{eq:wewe}) reduces to
\begin{equation}
\left.\frac{A^{(1)\rm MHV}_5}{A^{\rm tree\ MHV}_5}\right|_{\rm
Dim.Reg.} = \sum_{\rm cyclic}\left( \frac{1}{2}s_{12}s_{23}I^{(a):1}
\right).
\end{equation}
Comparing our answer~(\ref{eq:wewe}) with the answer for real $\ell$
and in dimensional regularization we conclude that the scalar box
and pentagon integrals when dimensionally regulated must satisfy the
following identity
\begin{equation}
I^{(b)} = \frac{1}{2}\sum_{i=1}^5\left(
\frac{\beta_{i-1}+\tilde\beta_{i-1}}{s_{i-1,i}}\right)I^{(a):i}.
\label{eq:lilo}
\end{equation}
In other words, a pentagon can be written as a sum of five boxes
with particular coefficients.
This is an example of what is known as a reduction formula
\cite{Passarino:1978jh,van Neerven:1983vr,Bern:1993kr}. Indeed, one
can check that the coefficients in~(\ref{eq:lilo}) agree with those
obtained in \cite{Bern:1993kr} by using differential equations, {\it
i.e.},
\begin{equation}
\frac{\beta_{i}+\tilde\beta_{i}}{s_{i,i+1}} =
(\alpha_{i-2}-\alpha_{i-1}+\alpha_i-\alpha_{i+1}+\alpha_{i+2})\alpha_1
\alpha_2\alpha_3\alpha_4\alpha_5 s_{i+1,i+2}s_{i+2,i+3}
\end{equation}
with $\alpha_i = s_{i+1,i+2}s_{i+2,i+3}/\Xi$ and $\Xi =
\sqrt{-s_{12}s_{23}s_{34}s_{45}s_{51}}$. If we had known the
existence of reduction formulas but not their form, then the leading
singularities would have given an elementary way of finding
them\footnote{In comparing with \cite{Bern:1993kr}, we had to shift
the index $i$ because what we call $I^{(a):i}$ is called
$I^{(a):i-1}$ in \cite{Bern:1993kr}.}.
Let us conclude this section by explaining why reduction formulas do
not hold on the $T^4$ contours used to compute independent leading
singularities. First observe that the choice of one such contour
breaks the $\Bbb{Z}_2$ symmetry that exchanges the two factors in
$SL(2,\Bbb{C})\times SL(2,\Bbb{C})$ which is the symmetry of the
integrand. This is nothing but a parity transformation. The reason
this affects the reduction formulas is that their derivation relies
on the identity~\cite{van Neerven:1983vr}
\begin{equation}
\int_\Gamma d^4\ell \frac{\epsilon_{\mu\nu\rho\sigma}\ell^\mu
R_1^\nu R_2^\rho
R_3^\sigma}{\ell^2(\ell+R_1)^2(\ell+R_2)^2(\ell+R_3)^2} = 0
\label{eq:redu}
\end{equation}
with $\Gamma$ a real contour\footnote{If any of the $R_i^2$ vanishes
this integral is divergent. The analysis in dimensional
regularization was performed in \cite{Bern:1993kr}. Our $T^4$
contour renders all integrals finite and this is why we do not need
any regulators.}.
A simple way to prove this is by noting that
\begin{equation}
I^\mu = \int_{\Gamma} d^4\ell
\frac{\ell^\mu}{\ell^2(\ell+R_1)^2(\ell+R_2)^2(\ell+R_3)^2} = A_1
R_1^\mu +A_2 R_2^\mu + A_3 R_3^\mu \label{eq:loren}
\end{equation}
for some scalar functions $A_i$. This is obviously true by Lorentz
invariance, including parity invariance. In general, any four-vector,
and in particular $I^\mu$, can be expanded in a basis of vectors given
by $R_1^\mu$, $R_2^\mu$, $R_3^\mu$ and
$\epsilon^{\mu\nu\rho\sigma}R_{1\nu} R_{2\rho} R_{3\sigma}$. The
latter does not contribute in (\ref{eq:loren}) because it is not
parity invariant.
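The step from~(\ref{eq:loren}) to~(\ref{eq:redu}) uses the elementary fact that the epsilon tensor contracted with a repeated vector vanishes, so substituting the expansion of $I^\mu$ into the numerator of~(\ref{eq:redu}) gives zero term by term. A quick numeric illustration with arbitrary momenta:

```python
import numpy as np

# eps_{mu nu rho sigma} R_i^mu R_1^nu R_2^rho R_3^sigma is the
# determinant of a 4x4 matrix with a repeated row, hence zero.
# Equivalently, v_mu = eps_{mu nu rho sigma} R_1^nu R_2^rho R_3^sigma
# is annihilated by contraction with each R_i.
rng = np.random.default_rng(3)
R = rng.normal(size=(3, 4))           # three arbitrary external momenta

# build v_mu from determinants: eps contracted with e_mu, R_1, R_2, R_3
basis = np.eye(4)
v = np.array([np.linalg.det(np.vstack([basis[mu], R])) for mu in range(4)])

contractions = [v @ R[i] for i in range(3)]   # all vanish (up to rounding)
```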
Now it is clear that on a given $T^4$ contour corresponding to a
single leading singularity the integral is not parity invariant and
(\ref{eq:loren}) does not hold due to the presence of the extra
vector $\epsilon^{\mu\nu\rho\sigma}R_{1\nu} R_{2\rho} R_{3\sigma}$
in the expansion of $I^\mu$. This also shows why in the original
quadruple cut technique introduced in \cite{Britto:2004nc}, which is
equivalent to summing over both contributions, reduction formulas
are valid. Summing over both contours preserves parity since a
parity transformation corresponds to exchanging the two $T^4$'s.
\subsubsection{Higher Point Amplitudes}
Repeating the procedure we applied to the five-particle case to
higher point amplitudes, one should find reduction formulas for
higher point scalar integrals, since we know that in dimensional
regularization all $n$-particle amplitudes are given in terms of
boxes. A case of special interest in the following is $n=6$. We
postpone the discussion to section~\ref{sec:peek}, where several
coefficients of two-loop six-particle amplitudes are studied.
\section{Leading Singularity at Two Loops}
The physical meaning of leading singularities at higher loops was
given in \cite{Cachazo:2008dx}. It was shown that Feynman diagrams
arrange themselves so that a meaningful $T^{4L}\cong (S^1)^{4L}$
contour can be defined at $L$ loops. A generalization of what was
done at one-loop would require interpreting the integral over $L$
loop momenta as a contour integral in $\Bbb{C}^{4L}$. However,
combining Feynman diagrams that share the topology of a graph with
only boxes generically gives rise to less than $4L$ propagators.
Therefore defining a $T^{4L}$ contour is more subtle than in the
one-loop case. As shown in \cite{Buchbinder:2005wp}, once a partial
integration is done on one of the loop variables, new
propagator-like singularities appear which can be used to define the
$T^{4L}$ contour iteratively. The physical meaning given in
\cite{Cachazo:2008dx} is that after the partial integration, one
produces a set of Feynman diagrams at one loop order less than the
original ones and which possesses new tree-level factorization
channels. Those channels are the hidden singularities needed to
define the $T^{4L}$ contour. In this section we restrict our
attention to $L=2$. We start with the $n=4$ case in order to clarify
the concepts just explained.
\subsection{Four Particles}
We will illustrate the use of the leading singularity technique on a
well-known case: four-particle two-loop amplitudes. These were first
computed in \cite{Bern:1997nh,Bern:1998ug} and are given by
\begin{equation}
A^{(2)}_4 = A^{\rm tree}_4\left( st^2 I^{(1)} + s^2t I^{(2)}
\right) \label{eq:koki}
\end{equation}
where the integrals $I^{(1)}$ and $I^{(2)}$ are shown in
figure~\ref{fig:Lead1}.
\begin{figure}
\includegraphics[scale=0.50]{Lead1.eps}
\caption{Basis of scalar two-loop integrals for four-particle
amplitudes in ${\cal N}=4$ SYM.} \label{fig:Lead1}
\end{figure}
We now proceed to reproduce this result using the leading
singularities.
In ${\cal N}=4$ SYM, if we sum over Feynman diagrams and consider
contours of integration where all legs attached to a one-loop
subdiagram are on-shell, such a subdiagram does not contain
triangles or bubbles. This means that in order to study all leading
singularities, we only need to consider sums of Feynman diagrams
that contain only boxes. In the case at hand, these are Feynman
diagrams with the topology of a 2-loop ladder.
A particular choice of external particles is shown in
figure~\ref{fig:Lead2}. Performing the integration over the $p$
momentum we find that the product over the four three-particle
tree-level amplitudes, including the jacobian, gives rise to a
four-particle tree-level amplitude \cite{Cachazo:2008dx}. Therefore,
we are left with the sum over one-loop Feynman diagrams shown on the
upper right of figure~\ref{fig:Lead2}. Note that we have used that
for $n=4$ the product of amplitudes gives the same answer when
evaluated on the two solutions $p^{(1)}$ and $p^{(2)}$.
The tree-level four particle amplitude has two factorization
channels. One of them is in the limit when $(k_1+k_2)^2\rightarrow
0$ while the other is in the limit when $(q-k_1)^2\rightarrow 0$. It
must now be clear how to define the remaining $T^4$ in order to
perform the $q$ integration; one uses the original three propagators
together with the new propagator $1/(q-k_1)^2$. On this contour the
tree amplitude factorizes and gives rise to the diagram in the
bottom left of figure~\ref{fig:Lead2}. This is identical to the
one-loop case. Once again there are two solutions $q^{(1)}$ and
$q^{(2)}$. On both solutions the diagrams evaluate to\footnote{The
computation also involves the jacobian. In other words, we are
computing the full residue.} $A^{\rm tree}$.
On the scalar integral side, we start with an integral of the form
$I^{(2)}$ shown in figure~\ref{fig:Lead1}. After performing the
integration over $p$ one finds that the jacobian gives rise to a
factor of $1/(s(q-k_1)^2)$. This means that the scalar integral has
a non-zero residue on the same $T^8$ used for the evaluation of the
Feynman diagrams. One finds that the total integration gives
$1/(s^2t)$.
Comparing both sides we conclude that the coefficient of $I^{(2)}$
is $s^2t A^{\rm tree}_4$.
\begin{figure}
\includegraphics[scale=0.50]{Lead2.eps}
\caption{Computation of the amplitude using the leading singularity
of Feynman diagrams. The residue of the $p$ integral on any of the
two $T^4$'s represented by the lines circling the propagators is a
tree-level four particle amplitude. Choosing a $T^4$ in the $q$
variable which induces a factorization in the $(q-k_1)^2$ channel of
the tree amplitude one finds the diagram enclosed by dashed lines.
The residue on the final $T^4$ is again a four-particle tree-level
amplitude.} \label{fig:Lead2}
\end{figure}
By using a cyclic permutation of the labels we find that the
coefficient of the integral $I^{(1)}$ must be $st^2 A^{\rm tree}_4$.
Combining the results we reproduce the known answer~(\ref{eq:koki}).
\section{Two-Loop Five-Particle Amplitude}
Five-particle MHV two-loop amplitudes in ${\cal N}=4$ SYM were
computed in \cite{Bern:2006vw} using the unitarity based method.
Here we write the known answer and then show how to re-derive it
using the leading singularity technique. The answer we find comes
out in a strikingly different form. It is important to make a
distinction between MHV and ${\overline{\rm MHV}}$ as the two
amplitudes, when normalized by $A^{\rm tree}_5$, are different.
The expression given in \cite{Bern:2006vw} is
\begin{equation}
\begin{array}{ccl}
A^{(2){\rm MHV}}_5 & = & \frac{1}{8} A^{\rm tree, MHV}_5\sum_{\rm
cyclic}\left(
s_{12}^2s_{23}I^{(a)}(\epsilon)+s_{12}^2s_{15}I^{(b)}(\epsilon)+s_{12}s_{34}s_{45}I^{(c)}(\epsilon)
\right.\\
& & \left. + R\left[ 2I^{(d)}(\epsilon) -2s_{12}I^{(e)}(\epsilon) +
\frac{s_{12}}{s_{34}s_{45}}\left(
\frac{\delta_{-++}}{s_{23}}I^{(b)}(\epsilon)-\frac{\delta_{-+-}}{s_{51}}I^{(a)}(\epsilon)\right)
+ \frac{\delta_{+-+}}{s_{23}s_{51}}I^{(c)}(\epsilon) \right]\right)
\end{array}
\label{eq:theirs}
\end{equation}
where
\begin{equation}
\begin{array}{ccl}
R & = & \epsilon_{1234}s_{12}s_{23}s_{34}s_{45}s_{51}/G_{1234},\\
\delta_{abc} & = &
s_{51}s_{12}+as_{12}s_{23}+bs_{23}s_{34}-s_{45}s_{51}+cs_{34}s_{45},\\
\epsilon_{1234} & = & 4i\varepsilon_{\mu\nu\rho\sigma}k_1^\mu
k_2^\nu k_3^\rho k_4^\sigma = {\rm tr}\left[\gamma_5
k\!\!\!\slash_1k\!\!\!\slash_2k\!\!\!\slash_3k\!\!\!\slash_4 \right],
\end{array}
\label{eq:fivebrane}
\end{equation}
and
\begin{equation}
G_{1234} = {\rm det}\left( \begin{array}{cccc} 0 & s_{12} & s_{13} & s_{14}\\
s_{12} & 0 & s_{23} & s_{24} \\ s_{13} & s_{23} & 0 & s_{34} \\
s_{14} & s_{24} & s_{34} & 0
\end{array}\right).
\end{equation}
All integrals $I^{(a)}$, $I^{(b)}$, $I^{(c)}$, $I^{(d)}$ and
$I^{(e)}$ are defined for a particular choice of external particles
as shown in figure~\ref{fig:Lead3}.
\begin{figure}
\includegraphics[scale=0.50]{Lead3.eps}
\caption{Basis of two-loop integrals for five-particle amplitudes in
${\cal N}=4$ SYM. Only one choice of external labels is shown. The
full basis is obtained by considering all cyclic permutations of
labels.} \label{fig:Lead3}
\end{figure}
\subsection{Computation Using The Leading Singularity Technique}
We start by following the same steps as in the four-particle case.
There are three different topologies of diagrams with only boxes. We
have to analyze all of them and build an expression in terms of
scalar integrals which reproduces the behavior of Feynman diagrams
when evaluated in all possible leading singularities.
The three topologies are depicted in figure~\ref{fig:Lead7} where a
particular choice of external labels was made. The total set of
configurations is then obtained by cyclic permutations of the
labels.
\begin{figure}
\includegraphics[scale=0.50]{Lead7.eps}
\caption{All inequivalent topologies of sums of Feynman diagrams
with only boxes for five particles at two loops. A particular choice
of external labels is shown. All cyclic permutations of the labels
must also be considered.} \label{fig:Lead7}
\end{figure}
\subsubsection{First Topology}
Consider the set of Feynman diagrams in figure~\ref{fig:Lead7}A.
This is the first of the three different topologies that contain
only boxes for five particles.
Carrying out the integration over a $T^4$ contour in the $p$
variables we find that for any of the two solutions we get a
four-particle tree-level amplitude with $1$ and $2$ as external legs
(see figure~\ref{fig:Lead6}). In order to continue, we choose a
$T^4$ contour in the $q$ variables that induces a factorization of
the four-particle amplitude that contains the external particles $1$
and $2$. Note that in this case there is a second four-particle
amplitude; the one that contains external particles $4$ and $5$. A
$T^4$ which induces a factorization of the second four-particle
amplitude must also be considered and we postpone its analysis to
section~\ref{sec:adi}. Note that this second possibility was not
present in the four-particle case.
\begin{figure}
\includegraphics[scale=0.50]{Lead6.eps}
\caption{Evaluation of the sum over Feynman diagrams on two
different $T^8$ contours. The residue of the $p$ integral on any of
the two $T^4$'s represented by the lines circling the propagators is
a tree-level four particle amplitude. For $n=5$, as opposed to
$n=4$, there are two choices for the $T^4$ contour in the $q$
variable. Here we choose the one that induces a factorization in the
$(q-k_1)^2$ channel of the tree amplitude. The other $T^4$ is also
important and is considered in figure \ref{fig:Lead9}. The residue
on the final $T^4$ is the five-particle tree-level amplitude on
$q^{(1)}$ and zero on $q^{(2)}$.} \label{fig:Lead6}
\end{figure}
From the bottom left of figure~\ref{fig:Lead6} we find that the
problem at hand is identical to the one-loop five-particle amplitude
discussed in the previous section. The difference between the two
cases comes in analyzing the scalar integrals. By analogy with the
one loop case, we start by assuming that the basis contains
integrals $I^{(a)}$ and $I^{(e)}$ in figure~\ref{fig:Lead3}. After
carrying out the integration over the first $T^4$ in the scalar
integrals, we find that the problem also reduces to that of the
one-loop case except that the coefficients are multiplied by an
extra factor of $1/s_{12}$. In other words, if we denote the
coefficients of the two-loop integrals $I^{(a)}$ and $I^{(e)}$ by
$B^{\rm 2-loop}$ and $C^{\rm 2-loop}$ respectively, then the
equations are
\begin{equation}
\frac{B^{\rm 2\hbox{-} loop}}{s_{12}} + \frac{C^{\rm 2\hbox{-}
loop}}{s_{12}(q^{(1)}+k_5)^2} = A^{\rm tree}s_{12}s_{23}, \qquad
\frac{B^{\rm 2\hbox{-} loop}}{s_{12}} + \frac{C^{\rm 2\hbox{-}
loop}}{s_{12}(q^{(2)}+k_5)^2} = 0. \label{eq:popi}
\end{equation}
The solution is obtained by making the substitution $B \rightarrow
B^{\rm 2\hbox{-} loop}/s_{12}$ and $C \rightarrow C^{\rm 2\hbox{-} loop}/s_{12}$
in~(\ref{eq:upsi}), {\it i.e.},
\begin{equation}
B^{\rm 2\hbox{-} loop} = - A^{\rm tree}
\frac{s_{12}^2s_{23}\tilde\beta_5}{\beta_5-\tilde\beta_5}, \qquad
C^{\rm 2\hbox{-} loop} = A^{\rm
tree}\frac{s_{51}s_{12}^2s_{23}}{\beta_5-\tilde\beta_5}.
\label{eq:lala}
\end{equation}
Recall that the coefficient of the pentagon $C$ at one loop was
invariant under cyclic permutations of the labels. Here the symmetry
has been explicitly broken by the extra factor of $s_{12}$ and hence
the integral $I^{(e)}$ must be inside the sum over cyclic
permutations.
\subsubsection{Second Topology}
The next topology of Feynman diagrams is shown in
figure~\ref{fig:Lead7}B. The contributing integrals are clearly
$I^{(b)}$ and $I^{(e)}$ in figure~\ref{fig:Lead3}. After performing
the integration over $p$ on a $T^4$ contour and choosing the
remaining $T^4$ in the $q$ variables to induce a factorization on
the four-particle amplitude with $1$ and $2$, we go back to a
one-loop calculation related to the one done in the previous case by
a cyclic permutation.
The solution for the coefficients is obtained by performing a cyclic
permutation on the one-loop coefficients and {\it then} multiplying
by the $1/s_{12}$ factor to convert them into two-loop coefficients.
The explicit form of the coefficients is
\begin{equation}
B^{\rm 2\hbox{-} loop} = - A^{\rm tree}
\frac{s_{51}s_{12}^2\tilde\beta_4}{\beta_4-\tilde\beta_4}, \qquad
C^{\rm 2\hbox{-} loop} = A^{\rm
tree}\frac{s_{45}s_{51}s_{12}^2}{\beta_4-\tilde\beta_4}.
\label{eq:polis}
\end{equation}
Note that we have computed the coefficient of the pentagon-box
integral, $I^{(e)}$, once again and the answer looks very different.
Using the same identity~(\ref{eq:surprise}) as in the previous
section it is clear that the two formulas are the same since all we
have done is to multiply them by the same $s_{12}$ factor.
\subsubsection{Third Topology}
Let us consider the final topology shown in figure~\ref{fig:Lead7}C.
The possible scalar diagrams that contribute to the $T^8$ which
computes the natural leading singularity of this topology are shown
in figure~\ref{fig:Lead8}. Let us denote their coefficients by $B$, $C_5$
and $C_3$ respectively. The coefficients $C_3$ and $C_5$ have
already been computed. So in practice one would need a single
equation to determine $B$. Let us however consider the two equations
that arise and check that they can be consistently solved.
\begin{figure}
\includegraphics[scale=0.45]{Lead8.eps}
\caption{Scalar integrals contributing to the third kind of
topology. In the computation of the residues the propagators not
used as poles remain and must be evaluated at the location of the
singularity. There is one propagator left in each pentagon-box
integral and they are enclosed by dashed lines in the figure.}
\label{fig:Lead8}
\end{figure}
The situation here is different from the situation in the previous
cases since the sum over Feynman diagrams has zero residue on the
leading singularities we will consider. From that point of view, we
can think of the integral in figure~\ref{fig:Lead8}A as canceling
unphysical singularities in the integrals in
figures~\ref{fig:Lead8}B and~\ref{fig:Lead8}C, which were already
shown to be present in the amplitude.
The location of the unphysical leading singularities is given by
\begin{equation}
p^{(1)} = \frac{[1,2]}{[5,2]}\lambda_1\tilde\lambda_5,\quad q^{(1)}
= -\frac{\langle 5,4\rangle}{\langle
1,4\rangle}\lambda_1\tilde\lambda_5, \label{eq:oli}
\end{equation}
and
\begin{equation}
p^{(2)} = \frac{\langle 1,2\rangle}{\langle
5,2\rangle}\lambda_5\tilde\lambda_1,\quad q^{(2)} = -\frac{[ 5,4]}{[
1,4]}\lambda_5\tilde\lambda_1. \label{eq:symi}
\end{equation}
In order to understand how these arise, note that the middle
propagator, $1/(q-p)^2$, in figure~\ref{fig:Lead8}A has two poles
on the locus where $p^2=q^2=0$. These are $\langle p,q\rangle =0$
and $[p,q]=0$. Using both of them at the same time gives the $T^8$
used to obtain (\ref{eq:oli}) and (\ref{eq:symi}).
Since $p$ and $q$ are proportional to each other in both solutions
it is clear that the sum over Feynman diagrams shown in
figure~\ref{fig:Lead7}C vanishes.
The equations we have to solve take the form
\begin{equation}
B + \frac{C_5}{(q^{(i)}-k_1-k_2)^2} +
\frac{C_3}{(p^{(i)}-k_2-k_3)^2} = 0
\end{equation}
for $i=1,2$. The consistency condition that the known coefficients
$C_3$ and $C_5$ must satisfy for these equations to have a solution
is
\begin{equation}
\frac{C_5}{(q^{(1)}-k_1-k_2)^2} + \frac{C_3}{(p^{(1)}-k_2-k_3)^2}
=\frac{C_5}{(q^{(2)}-k_1-k_2)^2} + \frac{C_3}{(p^{(2)}-k_2-k_3)^2}.
\label{eq:colli}
\end{equation}
Using the explicit form of the coefficients
\begin{equation}
C_3 = A^{\rm
tree}\frac{s_{34}s_{45}^2s_{51}}{\beta_3-\tilde\beta_3}, \qquad C_5
=A^{\rm tree}\frac{s_{51}s_{12}^2s_{23}}{\beta_5-\tilde\beta_5}
\end{equation}
one can check that~(\ref{eq:colli}) is indeed satisfied. Therefore
either of the two equations determines $B$. Choosing $i=1$ gives
\begin{equation}
B = - \frac{C_5}{(q^{(1)}-k_1-k_2)^2} -
\frac{C_3}{(p^{(1)}-k_2-k_3)^2}.
\end{equation}
This expression can be dramatically simplified if~(\ref{eq:colli})
is used to solve for $C_3$ in terms of $C_5$. The answer turns out
to be
\begin{equation}
B=-\frac{C_5}{s_{12}}.
\end{equation}
This is the coefficient of integral $I^{(d)}$ in figure
\ref{fig:Lead3}.
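The consistency mechanism at work here can be mimicked in a toy numeric form: choose hypothetical residue values, impose the analogue of~(\ref{eq:colli}) to fix the last one, and check that both equations then determine the same $B$ (all numbers below are illustrative placeholders, not actual kinematics):

```python
# x_i stands in for 1/(q^(i)-k1-k2)^2 and y_i for 1/(p^(i)-k2-k3)^2.
C5, C3 = 3.0, 4.0
x1, x2 = 2.0, -0.5
y1 = 1.5
# impose the consistency condition C5*x1 + C3*y1 = C5*x2 + C3*y2 on y2:
y2 = (C5 * (x1 - x2) + C3 * y1) / C3

B1 = -(C5 * x1 + C3 * y1)   # B from the i=1 equation
B2 = -(C5 * x2 + C3 * y2)   # B from the i=2 equation
print(B1, B2)               # prints: -12.0 -12.0
```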
\subsubsection{Additional Leading Singularities}
\label{sec:adi}
As mentioned in the discussion of the first kind of topology, once
the integration over the $T^4$ corresponding to the $p$ variables is
carried out, one is left with a sum over Feynman diagrams that
possesses two four-particle amplitudes (see figure~\ref{fig:Lead6});
one which contains particles $1$ and $2$ and another which contains
particles $4$ and $5$. Previously, we chose the remaining $T^4$
contour such that the amplitude with $1$ and $2$ factorizes. Now we
have to check that the leading singularity corresponding to $4$ and
$5$ factorizing also works.
Here we get four equations corresponding to any combination of
$q^{(1)}$, $q^{(2)}$ and $p^{(1)}$, $p^{(2)}$. Note that none of the
scalar integrals with the topology of two boxes, {\it i.e.}
$I^{(a)}$, $I^{(b)}$, $I^{(d)}$ contributes to these leading
singularities. At this point, we have a single integral that
contributes, {\it i.e.}, $I^{(e)}$. Not surprisingly, it is not
possible to solve all the equations with the coefficient of a single
integral. This means that at least one more integral is missing.
The requirements for the new integral are:
\begin{itemize}
\item It must be non-zero on all the new $T^8$ contours.
\item It must vanish on all the previous $T^8$ contours since they have
already been accounted for.
\end{itemize}
The only possibility is to start with an integral of the form
$I^{(e)}$, which ensures the first requirement, and then introduce a
zero in the numerator which removes the pole $(q-k_1)^2$ entering
the definition of all other $T^8$'s already studied. Removing the
pole at $(q-k_1)^2$ guarantees that the new integral has zero
residue on all previous leading singularities, thus fulfilling the
second requirement.
The new integral has the topology of $I^{(e)}$ and a numerator
factor $(q-k_1)^2$. This integral is precisely $I^{(c)}$ in
figure~\ref{fig:Lead3}.
Let us show that by adding $I^{(c)}$ to the basis all remaining
leading singularities can be accounted for. Let its coefficient be
$D$.
The four leading singularities are located at
\begin{equation}
p^{(1)}(q) =\frac{[1,2]}{[q,2]}\lambda_1\tilde\lambda_q, \quad
p^{(2)}(q) =\frac{\langle1,2\rangle}{\langle
q,2\rangle}\lambda_q\tilde\lambda_1,
\end{equation}
and
\begin{equation}
q^{(1)} = -\lambda_5\left(\tilde\lambda_5+\frac{\langle
3,4\rangle}{\langle 3,5\rangle}\tilde\lambda_4\right), \qquad
q^{(2)} = -\left(\lambda_5+\frac{[3,4]}{[
3,5]}\lambda_4\right)\tilde\lambda_5.
\end{equation}
\begin{figure}
\includegraphics[scale=0.50]{Lead9.eps}
\caption{Evaluation of the sum over Feynman diagrams over the new
$T^8$ contours. In figure \ref{fig:Lead6} we chose the second $T^4$
to induce a factorization in the $(q-k_1)^2$ channel, here we choose
it to induce a factorization of the second four-particle amplitude
in the $(q+k_5)^2$ channel.} \label{fig:Lead9}
\end{figure}
The value of $p$ does not enter the computation, so we only have
to consider the two equations coming from the two values of $q$.
The equations are obtained in a completely analogous way from the
previous calculations following the steps in figure~\ref{fig:Lead9}.
The two equations are
\begin{equation}
\frac{C}{s_{12}s_{34}s_{45}(q^{(1)}-k_1)^2}
+\frac{D}{s_{34}s_{45}s_{12}}= A^{\rm tree}, \qquad
\frac{C}{(q^{(2)}-k_1)^2} +D= 0.
\end{equation}
The coefficient $C$ has already been computed, so in practice one
would simply use the second equation to determine $D$. Let us
nevertheless solve the equations for both $D$ and $C$.
Defining
\begin{equation}
\gamma := \left(1+\frac{\langle 3,4\rangle [4,1]}{\langle 3,5\rangle
[5,1]} \right)^{-1}, \qquad \tilde\gamma := \left(1+\frac{\langle
4,1\rangle [3,4]}{\langle 5,1\rangle [3,5]} \right)^{-1}
\end{equation}
the solution is given as follows
\begin{equation}
C = A^{\rm
tree}\frac{s_{34}s_{45}s_{51}s_{12}}{(\gamma-\tilde\gamma)}, \qquad
D = -A^{\rm tree}s_{12}s_{34}s_{45}\frac{\tilde\gamma}{\gamma -
\tilde\gamma}.
\end{equation}
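The $2\times2$ linear system above can also be solved mechanically. A minimal sympy sketch, with the propagator factors $(q^{(1)}-k_1)^2$ and $(q^{(2)}-k_1)^2$ replaced by placeholder symbols {\tt X1} and {\tt X2} (hypothetical names, not from the text):

```python
import sympy as sp

# Unknown coefficients and the tree amplitude
C, D, A = sp.symbols('C D A_tree')
# Mandelstam invariants and placeholder propagator factors
s12, s34, s45, X1, X2 = sp.symbols('s12 s34 s45 X1 X2', nonzero=True)

# The two leading-singularity equations, with X1 = (q1-k1)^2, X2 = (q2-k1)^2
eq1 = sp.Eq(C/(s12*s34*s45*X1) + D/(s34*s45*s12), A)
eq2 = sp.Eq(C/X2 + D, 0)

sol = sp.solve([eq1, eq2], [C, D])
# C comes out proportional to A_tree*s12*s34*s45*X1*X2/(X2 - X1),
# and D = -C/X2, matching the structure of the closed-form solution.
```

The closed-form coefficients quoted above then correspond to evaluating {\tt X1} and {\tt X2} on the explicit solutions for $q$.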
An explicit computation reveals that the new expression for $C$
agrees with that of $C^{2\hbox{-} \rm loop}$ found earlier.
\subsubsection{Final Result}
We now collect all the results obtained by requiring that all
leading singularities are reproduced correctly. Using the scalar
integrals defined in figure~\ref{fig:Lead3} the two-loop amplitude
is
\begin{equation}
\frac{A^{(2)}_5}{A^{(0)}_5} =\!\! \sum_{cyclic}\! s_{12}\left( \!
-\frac{s_{12}s_{23}\tilde\beta_5}{\beta_5-\tilde\beta_5}I^{(a)}
-\frac{s_{51}s_{12}\tilde\beta_4}{\beta_4-\tilde\beta_4}I^{(b)}-
\frac{s_{34}s_{45}\tilde\gamma}{\gamma-\tilde\gamma}I^{(c)}-
\frac{s_{51}s_{23}}{\beta_5-\tilde\beta_5}I^{(d)}
+\frac{s_{51}s_{12}s_{23}}{\beta_5-\tilde\beta_5}I^{(e)}\! \right)
\label{eq:ours}
\end{equation}
where
\begin{equation}
\beta_{i} := \left(1+ \frac{\langle i+2,i+3\rangle [i+2,i] }{\langle
i+1,i+3\rangle [i+1,i]} \right)^{-1}, \qquad \gamma :=
\left(1+\frac{\langle 3,4\rangle [4,1]}{\langle 3,5\rangle [5,1]}
\right)^{-1},
\end{equation}
and $\tilde\beta_i$ and $\tilde\gamma$ are the parity conjugated
expressions of $\beta_i$ and $\gamma$, respectively.
An even simpler expression can be obtained if one uses the different
identities found in the previous subsections to write
\begin{equation}
\frac{1}{\gamma -\tilde\gamma} =
\frac{s_{12}s_{23}}{s_{34}s_{45}}\frac{1}{(\beta_5-\tilde\beta_5)},
\qquad \frac{1}{\beta_4 -\tilde\beta_4} =
\frac{s_{23}}{s_{45}}\frac{1}{(\beta_5-\tilde\beta_5)}.
\label{eq:simp}
\end{equation}
Using (\ref{eq:simp}) in (\ref{eq:ours}) one finds
\begin{equation}
\frac{A^{(2)}_5}{A^{(0)}_5} =\! \sum_{cyclic}\;
\frac{s_{12}s_{23}}{\beta_5-\tilde\beta_5}\left( \!
-s_{12}\tilde\beta_5I^{(a)}
-\frac{s_{51}s_{12}\tilde\beta_4}{s_{45}}I^{(b)}- s_{12}\tilde\gamma
I^{(c)}- s_{51}I^{(d)} +s_{51}s_{12}I^{(e)}\! \right).
\end{equation}
In order to compare with the known answer it is convenient to split
the coefficients into parity even and odd terms. Note that using our
technique both pieces are computed simultaneously and equally
straightforwardly. The separation into definite parity terms is
easily done by recalling that for real momenta, $\tilde\beta$,
$\tilde\gamma$ are the complex conjugate to $\beta$ and $\gamma$
respectively. This means that we can write, for example,
\begin{equation}
-\frac{s_{12}^2s_{23}\tilde\beta_5}{\beta_5-\tilde\beta_5}I^{(a)} =
\frac{1}{2}s_{12}^2s_{23}\left( 1-
\frac{\beta_5+\tilde\beta_5}{\beta_5-\tilde\beta_5}\right)I^{(a)}.
\end{equation}
Note that some coefficients, like those of $I^{(e)}$ and $I^{(d)}$,
are naturally parity odd.
It is easy to check analytically using a symbolic manipulation
program like {\tt Mathematica} that, up to an overall normalization
of $1/4$, (\ref{eq:ours}) is exactly equal to~(\ref{eq:theirs}).
\section{A Peek At Two-Loop Six-Particle Amplitudes: MHV and Next-to-MHV}
\label{sec:peek}
One of the advantages of our technique is that the homogeneous part
of the linear equations that determine the coefficients of the
integrals is helicity independent! All the helicity information
enters in the inhomogeneous part.
In order to illustrate this feature we consider the first
non-trivial case, that of six particles. This is the first case where next-to-MHV
configurations are possible. In this section we choose a particular
subset of Feynman diagrams contributing to two-loop six-particle
amplitudes. Studying the leading singularities one can write down
linear equations that determine the {\it complete} coefficient of
all pentagon-pentagon integrals and of a certain class of
pentagon-box integrals. The determination of the complete set of
linear equations which gives the full amplitude is outside the scope
of this paper and we leave it for future work.
The set of Feynman diagrams we consider generates the five
topologies shown in figure \ref{fig:LeadSix3}. We write down
explicitly the equations coming from $(A)$ and $(B)$, as the
remaining three, $(C)$, $(D)$, and $(E)$, are completely analogous
to $(B)$.
\begin{figure}
\includegraphics[scale=0.40]{LeadSix3.eps}
\caption{Topologies of sums over Feynman diagrams which can be used
to determine the coefficients of all pentagon-pentagon and some
class of pentagon-box integrals in a six-particle two-loop
amplitude. Some choice of external labels has been made. No
helicities have been assigned since all choices, MHV and
next-to-MHV, can be treated simultaneously.} \label{fig:LeadSix3}
\end{figure}
\subsection{Topology $(A)$}
The integrals contributing to the first kind of topology are shown
in figure \ref{fig:LeadSix2}. If we were to follow the same steps as
in the previous section we would start by taking only the first two
integrals in the figure and then realize that it is not possible to
solve the four equations that arise by comparing all leading
singularities. From the experience with the five-particle case, we
start by adding eight integrals with numerators such that they will
contribute to the four leading singularities under consideration but
they will not contribute to other singularities where they are not
needed, just like in the case of $I^{(c)}$ for five particles.
Using the labels in the figure, the four leading singularities are
found by choosing any combination of $p_*$'s and $q_*$'s from
\begin{equation}
p^{(1)} = \frac{\langle 2,3\rangle}{\langle
1,3\rangle}\lambda_1\tilde\lambda_2,\quad p^{(2)} =
\frac{[2,3]}{[1,3]}\lambda_2\tilde\lambda_1,\quad q^{(1)} =
-\frac{\langle 5,6\rangle}{\langle
4,6\rangle}\lambda_4\tilde\lambda_5,\quad q^{(2)} =-\frac{[ 5,6 ]}{[
4,6]}\lambda_5\tilde\lambda_4.
\end{equation}
\begin{figure}
\includegraphics[scale=0.35]{LeadSix2.eps}
\caption{Scalar and generalized scalar integrals that can contribute
to the first kind of topology. Their coefficients are
$B,C,E_1,E_2,E_3,E_4,D_1,D_2,D_3,D_4$ respectively. In order to
avoid cluttering of the figure we have added the minimal amount of
information needed to determine the labels. In the double box
diagram, other external legs are labeled following the color
ordering. All pentagon-pentagon diagrams naturally inherit their
labeling from the double-box diagram.} \label{fig:LeadSix2}
\end{figure}
Let us denote the evaluation of the sum over Feynman diagrams (see
figure \ref{fig:LeadSix3}A) in a particular solution $(p, q)$ by
$F_{p,q}$. Then the equations are
\begin{equation}
\begin{array}{ccl}
s_{12}s_{23}s_{45}s_{56}\; F_{p,q} & = & \left[ B +
\frac{1}{(p+q-k_{234})^2}\left(
C+(E_1+D_1(q-k_{234})^2)(p+k_{61})^2+ \right. \right.
\\ & & \left.\left. (E_3+D_2(p-k_{234})^2)(q-k_{234})^2+
(E_4+D_3(p+k_{61})^2)(q-k_{34})^2+ \right. \right.\\ & &
\left.\left. (E_2+D_4(q-k_{34})^2)(p-k_{234})^2\right)\right]
\end{array}
\label{eq:first}
\end{equation}
The labeling of the coefficients is explained in the caption of
figure \ref{fig:LeadSix2}.
The system at hand involves four equations and ten unknown
coefficients. In order to find a set of equations sufficient to
completely determine the coefficients we have to consider the other
topologies in figure \ref{fig:LeadSix3}.
Before going to the next topology, let us compute $F(p,q)$ in some
cases in order to illustrate the procedure, which turns out to be
fairly simple since the computation reduces to a one-loop one.
All the steps are shown in figure \ref{fig:LeadSix4}. Consider the
lower box in the two-loop diagram. It corresponds to the Feynman
diagrams of a one-loop five-particle amplitude where two of the
external momenta are actually internal legs of the two-loop diagram.
From the
one-loop discussion in the previous section we know that depending
on the helicities, one solution, $p_*$, gives zero while the other
gives $A^{\rm tree}_5$. Plugging this into the two-loop diagram one
gets, in the non-zero case, a one-loop six-particle diagram. If the
amplitude we are trying to compute is MHV or $\overline{\rm MHV}$
then the answer is either zero or $A^{\rm tree}_6$.
\begin{figure}
\includegraphics[scale=0.40]{LeadSix4.eps}
\caption{Computation of functions $F_{p,q}$ in two cases: MHV with
helicity configuration $\{1^-,2^-,3^+,4^+,5^+,6^+\}$ and next-to-MHV
(split), {\it i.e.}, $\{1^-,2^-,3^-,4^+,5^+,6^+\}$.
} \label{fig:LeadSix4}
\end{figure}
As an explicit example, take the helicities to be
$\{1^-,2^-,3^+,4^+,5^+,6^+\}$. Then we find
\begin{equation}
F(p^{(1)},q^{(1)}) = A^{\rm tree}_6,\quad F(p^{(1)},q^{(2)})
=F(p^{(2)},q^{(1)}) =F(p^{(2)},q^{(2)}) =0.
\end{equation}
If the amplitude is next-to-MHV then the answer is more interesting
as it corresponds to the coefficient of a particular one-mass
integral in a six-particle one-loop amplitude. All six-particle
one-loop next-to-MHV amplitudes were computed in the
early 1990s in \cite{Bern:1994cg}. They are all given in terms of a
quantity defined in eq. 6.13 of \cite{Bern:1994cg} (we have parity
conjugated the expression in \cite{Bern:1994cg})
\begin{equation}
B_0 := \frac{\langle 1|2+3|4]\langle 3|1+2|6]
t^3_{123}}{[1,2][2,3]\langle 4,5\rangle\langle
5,6\rangle(t_{123}t_{345}-s_{12}s_{45})(t_{123}t_{234}-s_{23}s_{56})}.
\end{equation}
As an explicit example, take the helicities to be
$\{1^-,2^-,3^-,4^+,5^+,6^+\}$. Then we find
\begin{equation}
F(p^{(1)},q^{(1)}) =F(p^{(2)},q^{(1)}) =F(p^{(2)},q^{(2)}) =0, \quad
F(p^{(1)},q^{(2)}) = B_0.
\end{equation}
It should be clear that any other helicity configurations can be
treated analogously.
\subsection{Topology $(B)$}
The scalar integrals that contribute to these leading singularities
are shown in figure \ref{fig:LeadSix1}.
The location of the leading singularities is slightly more involved
as $p$ is determined as a function of $q$. Let us give the two
solutions for $q$ and then give the two $q$-dependent solutions for
$p$.
\begin{equation}
q^{(1)} = -\lambda_1\left(\tilde\lambda_1+\frac{\langle
2,3\rangle}{\langle 1,3\rangle}\tilde\lambda_2\right), \qquad
q^{(2)} = -\left(\lambda_1+\frac{[ 2,3]}{[
1,3]}\lambda_2\right)\tilde\lambda_1.
\end{equation}
The solutions for $p$ are given as $p^{(1)}(q)=(\alpha
\lambda_5+\beta \lambda_6)\tilde\lambda_q$ and $p^{(2)}(q)=\lambda_q
(\tilde\alpha \tilde\lambda_5+\tilde\beta \tilde\lambda_6)$ with
\begin{equation}
\alpha = \frac{\langle
6,4\rangle[5,6][4,q]+(s_{45}+s_{46})[5,q]}{[q,4][q|5+6|4\rangle},
\quad \beta =\frac{\langle
5,4\rangle[5,6][q,4]+(s_{45}+s_{46})[q,6]}{[q,4][q|5+6|4\rangle}
\end{equation}
and $\tilde\alpha, \tilde\beta$ the parity conjugate of $\alpha,
\beta$.
\begin{figure}
\includegraphics[scale=0.30]{LeadSix1.eps}
\caption{Scalar and generalized scalar integrals that can contribute
to the second kind of topology. Their corresponding coefficients are
$F,G,C,E_1,E_2,E_3,E_4,D_1,D_2,D_3,D_4$ respectively. Just as in
figure \ref{fig:LeadSix2}, the labels in the diagrams are determined
from those of the first integral.} \label{fig:LeadSix1}
\end{figure}
Once again we can easily write down the linear equations coming from
imposing the correct behavior at the four leading singularities. Let
us denote by $H_{p,q}$ the value of the sum over Feynman diagrams on
one of the four solutions $(p_*,q_*)$. Then the equations are
\begin{equation}
\begin{array}{ccl}
t_{456}s_{12}s_{23}(q-k_{56})^2 H_{p,q} & = & F + G(q-k_{56})^2 +
\frac{1}{(p+k_6)^2}\left(C+(E_1+D_1(p-k_1)^2)(q-k_6)^2+ \right. \\
& & \left.
(E_3+D_2(q-k_{56})^2)(p-k_1)^2+(E_4+D_3(q-k_6)^2)(p-k_{12})^2+\right.
\\ & & \left. (E_2+D_4(p-k_{12})^2)(q-k_{56})^2 \right).
\end{array}
\label{eq:second}
\end{equation}
The labels of the coefficients are explained in the caption of
figure \ref{fig:LeadSix1}.
Combining the two systems of equations, (\ref{eq:first}) and
(\ref{eq:second}), one finds eight equations.
Repeating the same procedure for the three remaining topologies in
figure \ref{fig:LeadSix3} which are completely analogous to topology
$(B)$ one finds four equations in each case. This gives a total of
twenty equations for seventeen coefficients, {\it i.e.}, eight
pentagon-box integrals and nine pentagon-pentagon integrals.
Using seventeen equations to determine the coefficients leaves three
more as consistency checks.
We end this section by mentioning that the next natural set of
equations to consider comes from double-box topologies where one of
the boxes is only a four-particle one-loop amplitude. For a given
choice of the two external legs to the four-particle amplitude, say
$\{1,2\}$, there are three topologies. These correspond to the
possible ways of distributing the remaining external legs; in our
example we would find $\{3,4,5;6\}, \{3,4;5,6\},\{3;4,5,6\}$. By
looking at the leading singularity which uses the propagator coming
from the Jacobian of the four-particle box, these give rise to only
two equations each. Hence we find six equations in total. There are
however seven scalar integrals that contribute. One of them is a
hexagon-box.
Following this line of ideas it would be interesting to continue and
write down the complete set of linear equations which determines all
two-loop six-particle amplitudes.
Already with the results presented here, it would be interesting to
compare the parity even part of the coefficient of all
pentagon-pentagon integrals to those obtained recently
in~\cite{Bern:2008ap} for MHV amplitudes.
\section{Conclusions}
The program of determining the S-matrix of physical theories by
studying the structure of its singularities when analytically
continued into complex values of kinematical invariants was very
ambitious. Now we know that one of the main goals of the program,
understanding the strong interactions, has proven to be a
formidable problem.
In the past 20 years we have learned that ${\cal N}=4$ super
Yang-Mills (SYM) can serve as a laboratory where new techniques and
ideas can be tested. Based on the results presented in this paper we
would like to propose that perhaps ${\cal N}=4$ SYM, at least in the
large $N$ limit, can be used to realize the basic ideas of the
$S$-matrix program. Of course, ${\cal N}=4$ SYM in four dimensions
is a conformal theory and no $S$-matrix can be defined. However, if
IR divergencies are regulated using dimensional regularization then
a sensible $S$-matrix can be defined. It is very intriguing that on
the contours which compute the discontinuity across leading
singularities, all integrals are finite and hence make perfect sense
in four dimensions. It is natural to expect that the amplitudes on
the torus contours have physical meaning. It would be interesting to
explore this further.
In general, Feynman diagrams possess many singularities: from poles,
related to resonances, to branch cuts, related to unitarity. When a
given scattering amplitude in theories with spin is considered as a
function of independent complex variables $\lambda_a^{(i)}$ and
$\tilde\lambda_{\dot a}^{(i)}$, the problem of reconstructing it from
the structure of its singularities is certainly out of reach. At
tree-level we learned a few years ago
\cite{Britto:2005fq,Britto:2004ap} that one can solve a simpler
problem by concentrating on a single complex variable deformation of
the amplitude. The main simplification comes from the fact that
matching only a very small subset of all poles is enough to
determine the amplitude in terms of smaller ones. This idea led to
recursion relations between on-shell scattering amplitudes.
At loop level, Feynman diagrams possess an intricate structure of
nested branch cuts. It turns out that the discontinuity across a
given branch cut possesses branch cuts itself. The nested structure
gets more and more complicated the higher the loop order. The problem
of finding functions which reproduce all such discontinuities is
clearly very hard. Out of all these singularities, the special class
studied in this paper, called leading singularities, are the ones
which have the highest codimension. We have argued that the
discontinuities associated to them are determined by simple
computations of residues. We have shown that, quite remarkably, the
problem of finding a function which reproduces all leading
singularities, which only requires solving linear equations,
is enough to completely determine the amplitude in ${\cal N}=4$ SYM.
Summarizing, computing multi-loop, multi-particle amplitudes in
${\cal N}=4$ SYM can be reduced to the computation of residues and
the solution of systems of linear equations. The residues only
involve products of tree-level amplitudes. These possess all the
helicity information and only enter in the inhomogeneous part of the
linear equations. The homogeneous part is universal. If we think
about the amplitude and the linear equations as being objects with
the same information, then it might be that the matrix which
determines the homogeneous part of the equations is the right object
to study properties like strong coupling expansions, collinear
limits, IR consistency equations, etc.
A natural question that arises is whether this property is unique
to ${\cal N}=4$ SYM. The most promising theories are those for which
one-loop amplitudes can be written in terms of only boxes, {\it
i.e.}, bubbles and triangles are absent. ${\cal N}=8$ supergravity
has been hypothesized to have such a
property~\cite{Bern:1998sv,Bern:2005bb,BjerrumBohr:2005xx,BjerrumBohr:2006yw}.
This means that one could try to apply the technique presented here
to that case. Already, an important step in this direction was given
in \cite{Cachazo:2008dx}. It would be very interesting to compute
more examples.
In \cite{Cachazo:2008dx}, higher loop four-particle amplitudes in
${\cal N}=4$ SYM were considered. In cases where four-particle
one-loop amplitudes were present as subdiagrams it was shown that
one could relate the computation to that at one lower loop order. It
would be interesting to explore the consequences that matching all
leading singularities would impose on the generalized scalar
integrals. Perhaps a formal derivation of the idea of corrections
introduced in \cite{Cachazo:2008dx} can be found.
As a final note one can say that the power of matching each
individual leading singularity comes from the fact that the number
of conditions, which are naturally linear, grows much faster with the
number of loops than in previous approaches.
\begin{acknowledgments}
It is a pleasure to acknowledge very useful discussions with D.
Skinner. We have also benefited from discussions with P. Benincasa,
E. Buchbinder, L. Dixon and M. Spradlin. This research is supported
by the Government of Canada through Industry Canada and by the
Province of Ontario through the Ministry of Research \& Innovation.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Since the revolutionary papers of Bekenstein and Hawking, black hole thermodynamics has been one of the most important subjects in the scientific community \cite{Bekenstein1972,Bekenstein1973,Bardeen1973,Hawking1974,Bekenstein1974,Hawking1975}. Endowing black holes with temperature and entropy opens new opportunities to explore many interesting thermodynamic phenomena. Furthermore, black hole thermodynamics offers the first hints of quantum gravity and provides fundamental links between general relativity, thermodynamics and quantum mechanics. One may naturally ask whether a black hole, as a thermodynamic system, shares any similarities with ordinary thermodynamic systems. These similarities become clearer for black holes in anti-de Sitter (AdS) spacetime.
Black holes in AdS spacetime have been studied widely in the literature since the pioneering paper of Hawking and Page \cite{Hawking1983}. They found a first order phase transition between the Schwarzschild AdS black hole and thermal AdS space. Interestingly, when the Schwarzschild AdS black hole is generalized to the charged or rotating case, it exhibits van der Waals (vdW) fluid-like behaviour. The authors of \cite{Chamblin1999a,Chamblin1999b} studied the thermodynamics of charged AdS black holes and found a vdW-like first order small$-$large black hole phase transition. This type of phase transition becomes clearer in the extended phase space, where the cosmological constant is considered as a thermodynamic pressure. Treating the cosmological constant as a thermodynamic pressure,
\begin{equation}
\label{pressure}
P=-\frac{\Lambda}{8\pi}\,,
\end{equation}
naturally gives its conjugate quantity as a thermodynamic volume\footnote{Based on this idea, Kastor et al. \cite{Kastor2009} showed that the mass of an AdS black hole is identified with the enthalpy of spacetime.}
\begin{equation}
\label{thermoVolume}
V=\left(\frac{\partial M}{\partial P}\right)_{S,Q,J}\,.
\end{equation}
The charged AdS black hole thermodynamics and phase transition were studied by Kubiznak and Mann \cite{Kubiznak2012}. They showed that the charged AdS first order small$-$large black hole phase transition has the same characteristic behaviour as vdW fluids. They also obtained critical exponents which coincide with those of vdW fluids. Their study has since been extended to various black hole solutions in AdS spacetime \cite{Gunasekaran2012,Spallucci2013,Zhao2013,Behlaj2013,Cai2013,Mo2014,Xu2014,Li2014,Ma2014,Belhaj2015,Kubiznak2015,Hennigar2015,Caceres2015,Wei2016,Pourhassan2016,Hendi2016,Momeni2017,Ovgun2018,Sun2018,Jamil2018,Nam2018a,Nam2018b,Kuang2018,Zhang2018,Okcu2017,Okcu2018,Mo2018,Zhao2018,Yekta2019}\footnote{One can refer to the recent comprehensive reviews \cite{Altamirano2014,Kubiznak2017} and references therein.}.
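For orientation, the critical point at which the vdW first order transition terminates follows from requiring an inflection point of the isotherm, $\partial_v P=\partial_v^2 P=0$, applied to $P=T/(v-b)-a/v^2$. A minimal sympy sketch of this standard textbook computation (not taken from any one reference above):

```python
import sympy as sp

T, v, a, b = sp.symbols('T v a b', positive=True)
P = T/(v - b) - a/v**2  # vdW equation of state

# Critical point: the critical isotherm has an inflection point there
sols = sp.solve([sp.diff(P, v), sp.diff(P, v, 2)], [v, T], dict=True)
vc, Tc = sols[0][v], sols[0][T]
Pc = P.subs({v: vc, T: Tc})
# vc = 3b, Tc = 8a/(27b), Pc = a/(27b^2), so Pc*vc/Tc = 3/8,
# the same universal ratio found for the charged AdS black hole.
```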
Based on the above facts, Rajagopal et al. \cite{Rajagopal2014} obtained a vdW black hole solution which has the same thermodynamics as vdW fluids\footnote{See \cite{Pradhan2016,Hu2017} for the thermodynamics of vdW black holes.}. In their interesting paper, they also discussed the corresponding stress energy tensor for their solution. They found that the stress energy tensor obeys the energy conditions for a certain range of the metric parameter, and showed that their solution can be interpreted as a near horizon metric. Following the methods in \cite{Rajagopal2014}, a few AdS black hole solutions which match the thermodynamics of certain equations of state were proposed in the literature. In \cite{Delsate2015}, Delsate and Mann generalized the vdW solution to higher dimensions. In \cite{Setare2015}, Setare and Adami obtained a polytropic black hole solution which has thermodynamics identical to that of the polytropic gas. Interestingly, in the small effective pressure limit, Abchouyeh et al. \cite{Abchouyeh2017} obtained an Anyon black hole solution which corresponds to the thermodynamics of Anyon vdW fluids. Anyons are particles that obey intermediate statistics between Fermi-Dirac and Bose-Einstein statistics. An exact Anyon black hole solution was then obtained by Xu \cite{Xu2018}. Finally, Debnath constructed a black hole solution whose thermodynamics matches that of the modified Chaplygin gas \cite{Debnath2018}.
Furthermore, black hole thermodynamics has been considered in the context of quantum gravity, since quantum gravity effects are no longer negligible near the Planck scale. Accordingly, black hole thermodynamics has been modified in various quantum gravity approaches \cite{Govindarajan2001,Mann1998,Sen2013,Das2002,Feng2017,Feng2019}. On the other hand, black holes as gravitational systems may give us information about the nature of quantum gravity. Motivated by this fact, Upadhyay and Pourhassan \cite{Upadhyay2019} studied the modification of the higher dimensional vdW black hole due to thermal fluctuations, interpreted as quantum effects, and investigated the effects of these fluctuations on its thermodynamics.
It is well known that the generalized uncertainty principle (GUP) is one of the phenomenological quantum gravity models and is considered as a modification of the standard uncertainty principle \cite{Maggiore1993,Kempf1995,Nozari2012a}. Therefore, it is possible to modify the thermodynamic properties of black holes by taking into account GUP effects near the Planck scale \cite{Adler2001,Nozari2005,Nozari2008,Nowakowski2009a,Nowakowski2009b,Banerjee2010,Nozari2012b,Ali2012,Gangopadhyay2014,Abbasvandi2016,Feng2016,Luciano2019,Villalpando2019,Sakalli2016,Kanzi2019,Jusufi2020,Bosso2020,Gecim2020,Xiang2009}.
To the best of our knowledge, the GUP has never been applied to the thermodynamics of four-dimensional vdW black holes. In this study, we explore GUP effects for four-dimensional vdW black holes.
The paper is arranged as follows: We first review the heuristic derivation of the GUP modified Hawking temperature proposed by Xiang and Wen \cite{Xiang2009}. Next, we modify the vdW solution by using the modified Hawking temperature and then check the energy conditions for the stress energy tensor in Sect. \ref{GUPBH}. In Sect. \ref{GUPBHT}, we investigate the GUP corrected thermodynamic quantities and the phase transition. Finally, we discuss our results in Sect. \ref{Concl.}. (We use the units $G_{N}=k_{B}=c=L_{pl}=1$.)
\section{GUP-Corrected Black Hole Temperature}
\label{GUPR}
Here, we briefly review a generic GUP correction approach to the semi-classical Hawking temperature \cite{Xiang2009}. The simplest form of the GUP is given by \cite{Maggiore1993}
\begin{equation}
\Delta x\Delta p\geq\hbar+\frac{\alpha}{\hbar}\Delta p^{2}\,,
\label{GUP}
\end{equation}
where $\alpha$ is a positive constant. To get the correction to the black hole thermodynamics, we need to solve this inequality for the momentum uncertainty $\Delta p$. The solution of (\ref{GUP}) is given by
\begin{equation}
\label{deltaP1}
\frac{\hbar}{2\alpha}\left(\Delta x+\sqrt{\Delta x^{2}-4\alpha}\right)\geq\Delta p\geq\frac{\hbar}{2\alpha}\left(\Delta x-\sqrt{\Delta x^{2}-4\alpha}\right)\,,
\end{equation}
where we choose the lower bound of the inequality since it recovers the standard uncertainty principle in the limit $\alpha\rightarrow0$. A series expansion of the lower bound in (\ref{deltaP1}) yields
\begin{equation}
\label{expandedDeltaP}
\Delta p\geq\frac{\hbar}{\Delta x}+\frac{\hbar\alpha}{\Delta x^{3}}+\mathcal{O}(\alpha^{2})\,.
\end{equation}
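The truncated expansion can be checked symbolically; a minimal sympy sketch of the lower bound in (\ref{deltaP1}), with $\hbar=1$:

```python
import sympy as sp

dx, alpha = sp.symbols('Delta_x alpha', positive=True)

# Lower bound of the GUP inequality (hbar = 1)
dp = (dx - sp.sqrt(dx**2 - 4*alpha)) / (2*alpha)

# Expand around alpha = 0, keeping terms up to first order in alpha
expansion = sp.series(dp, alpha, 0, 2).removeO()
# expansion equals 1/Delta_x + alpha/Delta_x**3, the displayed result
```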
Therefore, by using the above statement we can write $\Delta x\Delta p$ as
\begin{equation}
\label{effectivePlanck}
\Delta x\Delta p\geq\hbar\left(1+\frac{\alpha}{\Delta x^{2}}+\mathcal{O}(\alpha^{2})\right)=\hbar_{eff}\,,
\end{equation}
where the rhs of the inequality can be considered as an effective Planck constant $\hbar_{eff}$. On the other hand, the smallest increase in the area of a black hole that absorbs a particle is given by
\begin{equation}
\label{areaChange}
\Delta A\geq\Delta x\Delta p\,.
\end{equation}
Taking the position uncertainty $\Delta x\approx2r_{h}$ and using (\ref{effectivePlanck}) with (\ref{areaChange}), one can obtain the increase of the area as
\begin{equation}
\label{areaChange2}
\Delta A\geq\gamma\hbar_{eff}=\gamma\hbar\left(1+\frac{\alpha}{4r_{h}^{2}}+\mathcal{O}(\alpha^{2})\right)\, ,
\end{equation}
where $r_{h}$ stands for the event horizon of the black hole, and $\gamma$ is a calibration factor which will be fixed in the limit $\alpha\rightarrow0$. Moreover, when the particle is absorbed, the minimum increase in the black hole entropy is given by $(\Delta S)_{min}=\ln2$. So we can obtain
\begin{equation}
\label{dA/dS}
\frac{dA}{dS}\simeq\frac{(\Delta A)_{min}}{(\Delta S)_{min}}=\frac{\hbar\gamma}{\ln2}\left(1+\frac{\alpha}{4r_{h}^{2}}+\mathcal{O}(\alpha^{2})\right)\,.
\end{equation}
Using the temperature of black hole $T=\frac{dA}{dS}\times\frac{\kappa}{8\pi}$ with (\ref{dA/dS}), we finally find the GUP corrected temperature
\begin{equation}
\label{GUPTemp}
T=\frac{\hbar\gamma}{\ln2}\left(1+\frac{\alpha}{4r_{h}^{2}}+\mathcal{O}(\alpha^{2})\right)\times\frac{\kappa}{8\pi}\,,
\end{equation}
where $\kappa=f'(r_{h})/2$ is the surface gravity of the black hole, and the prime denotes the derivative with respect to $r$. In order to find $\gamma$, we check the GUP-modified temperature in the limit $\alpha\rightarrow0$: (\ref{GUPTemp}) should give the standard result $T=\frac{\hbar\kappa}{2\pi}$ when $\alpha$ goes to zero. As a result, we find the calibration factor $\gamma=4\ln2$, and the GUP modified temperature becomes
\begin{equation}
\label{GUPTemp2}
T=\frac{\hbar_{eff}\kappa}{2\pi}=\frac{\hbar\kappa}{2\pi}\left(1+\frac{\alpha}{4r_{h}^{2}}\right)\,,
\end{equation}
for static and spherically symmetric black holes.
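The fixing of the calibration factor can be reproduced symbolically; a minimal sympy sketch:

```python
import sympy as sp

hbar, kappa, alpha, rh, gamma = sp.symbols('hbar kappa alpha r_h gamma',
                                           positive=True)

# GUP-corrected temperature before fixing the calibration factor gamma
T = (hbar*gamma/sp.log(2)) * (1 + alpha/(4*rh**2)) * kappa/(8*sp.pi)

# Demand that the alpha -> 0 limit reproduce T = hbar*kappa/(2*pi)
sol = sp.solve(sp.Eq(sp.limit(T, alpha, 0), hbar*kappa/(2*sp.pi)), gamma)
# sol[0] is 4*log(2), i.e. gamma = 4 ln 2
```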
In this section, we have briefly reviewed the generic GUP correction to the black hole temperature. In the next section, we will use (\ref{GUPTemp2}) to modify the vdW black hole solution. For the clarity of the following discussions, we choose $\hbar=1$ in the rest of the paper.
\section{GUP-Corrected vdW Black Holes}
\label{GUPBH}
It is well known that the vdW equation of state is a generalized version of the ideal gas equation. It is given by \cite{Johnston2014}
\begin{equation}\label{vdW}
P=\frac{T}{v-b}-\frac{a}{v^{2}} \,,
\end{equation}
where $v=V/N$ is the specific volume, the constant $a>0$ is a measure of the attraction between the particles, and $b>0$ is a measure of the particle volume. One can use the vdW equation to describe the liquid$-$gas phase transition. In order to construct a black hole solution whose thermodynamics matches that of (\ref{vdW}), we start with the following spherically symmetric ansatz for the metric
\begin{equation}\label{metric1}
ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega^{2} \,,
\end{equation}
\begin{equation}\label{metric2}
f=\frac{r^{2}}{l^{2}}-\frac{2M}{r}-h(r,P) \,,
\end{equation}
where $l$ is the AdS radius, and the function $h(r,P)$ can be obtained from the GUP-corrected black hole temperature. Now, we assume that the given metric is a solution of the Einstein field equations, $G_{\mu\nu}+\Lambda g_{\mu\nu}=8\pi T_{\mu\nu}$. In order to proceed, we choose the stress energy tensor $T^{\mu\nu}$ to be an anisotropic fluid source of the following form
\begin{equation}\label{stressEnergy}
T^{\mu\nu}=\rho e_{0}^{\mu}e_{0}^{\nu}+\sum_{i}p_{i}e_{i}^{\mu}e_{i}^{\nu} \,,
\end{equation}
where $e_{i}^{\mu}$, $\rho$ and $p_{i}$ denote the components of the vielbein ($i=1,2,3$), the energy density and the principal pressures, respectively. We need a physically meaningful stress energy source for our metric ansatz. Therefore, we require that the corresponding stress energy tensor satisfy certain energy conditions, such as the weak, strong and dominant energy conditions. We will consider these conditions on the energy density $\rho$ and the principal pressures $p_{i}$ after determining the metric. Using the Einstein field equations with the stress energy tensor in (\ref{stressEnergy}), $\rho$ and $p_{i}$ are given by
\begin{equation}\label{energyDensityWithp1}
\rho=-p_{1}=\frac{1-f-rf'}{8\pi r^{2}}+P \,,
\end{equation}
\begin{equation}\label{principalPressure23}
p_{2}=p_{3}=\frac{rf''+2f'}{16\pi r}-P\,,
\end{equation}
where the relation between thermodynamic pressure and the AdS radius is defined by
\begin{equation}
\label{pressure2}
P=-\frac{\Lambda}{8\pi}=\frac{3}{8\pi l^{2}}\, .
\end{equation}
On the other hand, the mass of the black hole can be obtained from the horizon condition $f(r_{h})=0$:
\begin{equation}\label{mass}
M=\frac{4}{3}\pi r_{h}^{3}P-\frac{h(r_{h},P)r_{h}}{2}\,.
\end{equation}
At this point, we can give the GUP-corrected thermodynamic quantities:
\begin{equation}\label{temperature}
T=\frac{\hbar_{eff}\kappa}{2\pi}=\left(1+\frac{\alpha}{4r_{h}^{2}}\right)\left(2Pr_{h}-\frac{h(r_{h},P)}{4\pi r_{h}}-\frac{1}{4\pi}\frac{\partial h(r_{h},P)}{\partial r_{h}}\right)\,,
\end{equation}
\begin{equation}\label{entropy}
S=\int\frac{dM}{T}=\pi r_{h}^{2}-\frac{\alpha\pi}{4}\ln\left(\frac{4r_{h}^{2}+\alpha}{\alpha}\right)\,,
\end{equation}
\begin{equation}\label{volume}
V=\left(\frac{\partial M}{\partial P}\right)=\frac{4}{3}\pi r_{h}^{3}-\frac{r_{h}}{2}\frac{\partial h(r_{h},P)}{\partial P}\,,
\end{equation}
\begin{equation}
\label{heatCapacity}
C_{P}=\frac{\partial M/\partial r_{h}}{\partial T/\partial r_{h}}=\frac{8\pi r_{h}^{4}\left(-h+r_{h}\left(\frac{3r_{h}}{l^{2}}-\frac{\partial h}{\partial r_{h}}\right)\right)}{(4r_{h}^{2}+3\alpha)h+r_{h}\left((4r_{h}^{2}-\alpha)\left(\frac{3r_{h}}{l^{2}}-\frac{\partial h}{\partial r_{h}}\right)-r_{h}(4r_{h}^{2}+\alpha)\frac{\partial^{2}h}{\partial r_{h}^{2}}\right)}\,,
\end{equation}
where we choose the integration constant as $\frac{\alpha\pi}{4}\ln(\alpha)$ so that the logarithmic term in (\ref{entropy}) is dimensionless. One can also define the specific volume of the black hole by \cite{Altamirano2014}
\begin{equation}
\label{specificVolume}
v=\frac{V}{N}\,,
\end{equation}
where $N=\frac{4(d-1)}{d-2}\frac{\mathcal{A}}{L^{2}_{pl}}$ is proportional to the horizon area $\mathcal{A}$ of the black hole. In $d=4$ dimensions, the specific volume is given by
\begin{equation}
\label{specificVolume2}
v=\frac{3}{2\pi r_{h}^{2}}\left[\frac{4}{3}\pi r_{h}^{3}-\frac{r_{h}}{2}\frac{\partial h(r_{h},P)}{\partial P}\right]\,.
\end{equation}
In order to construct a metric solution whose thermodynamics is identical to that of the vdW fluid, we assume that $h(r,P)=A(r)-PB(r)$. First, we obtain the temperature from (\ref{vdW}),
\begin{equation}\label{vdW2}
T=\left(P+\frac{a}{v^{2}}\right)(v-b) \,,
\end{equation}
and using the equality between (\ref{temperature}) and (\ref{vdW2}), we can write
\begin{equation}
\label{equation}
\left(1+\frac{\alpha}{4r_{h}^{2}}\right)\left(2Pr_{h}-\frac{h}{4\pi r_{h}}-\frac{h'}{4\pi}\right)=\left(P+\frac{a}{v^{2}}\right)(v-b)\,,
\end{equation}
where $v=2r_{h}+\frac{3B}{4\pi r_{h}}$. Equation (\ref{equation}) can be organized in the form $F_{1}(r)+F_{2}(r)P=0$, where the functions $F_{1}(r)$ and $F_{2}(r)$ depend on $A(r)$ and $B(r)$ and their derivatives. With the ansatz for $h$, we thus obtain two ordinary differential equations from (\ref{equation}); in other words, we must independently set the functions $F_{1}(r)$ and $F_{2}(r)$ equal to zero:
\begin{equation}
\label{F1}
F_{1}(r)=-\frac{1}{4\pi}\left(1+\frac{\alpha}{4r^{2}}\right)\left(A'+\frac{A}{r}\right)-\frac{16\pi^{2}ar^{2}}{(8\pi r^{2}+3B)^{2}}\left(2r+\frac{3B}{4\pi r}-b\right)=0\,,
\end{equation}
\begin{equation}
\label{F2}
F_{2}(r)=\frac{1}{4\pi}\left(1+\frac{\alpha}{4r^{2}}\right)\left(B'+\frac{B}{r}\right)-\frac{3B}{4\pi r}+\frac{\alpha}{2r}+b=0\,.
\end{equation}
\begin{figure}
\centerline{\includegraphics[width=10cm]{energyCon.eps}}
\vspace*{8pt}
\caption{(Colour online). The energy density $\rho$ (black dashed line), $\rho+p_{2}$ (red dotted line) and $\rho-p_{2}$ (blue solid line) for sufficiently small pressure. The thick green vertical line marks the event horizon. We set $a=\frac{1}{2\pi}$, $b=1$, $P=0.001$ and $M=0.16$.\label{f1}}
\end{figure}
The coefficient $B(r)$ is obtained from (\ref{F2}):
\begin{equation}
\label{Br}
B(r)=4\pi br-\frac{8\pi r^{2}}{3}+8\epsilon r^{2}+\left(\frac{2\pi b}{3r}+3\epsilon\right)\alpha+\mathcal{O}(\alpha^{2})\,.
\end{equation}
For simplicity, we choose the integration constant as $\epsilon=\pi/3$. Hence, we obtain
\begin{equation}
\label{Br2}
B(r)=4\pi br+\left(1+\frac{2b}{3r}\right)\pi\alpha\,.
\end{equation}
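As a quick consistency check, one can verify with a computer algebra system that (\ref{Br2}) solves (\ref{F2}) through first order in $\alpha$, the residual being purely $\mathcal{O}(\alpha^{2})$. A SymPy sketch (the symbol names are ours):

```python
import sympy as sp

r, b, alpha = sp.symbols('r b alpha', positive=True)
B = 4*sp.pi*b*r + (1 + 2*b/(3*r))*sp.pi*alpha                 # eq. (Br2)
F2 = ((1 + alpha/(4*r**2))*(sp.diff(B, r) + B/r)/(4*sp.pi)
      - 3*B/(4*sp.pi*r) + alpha/(2*r) + b)                    # eq. (F2)
resid = sp.simplify(F2)
# the residual is purely second order in alpha
assert sp.simplify(resid - alpha**2/(16*r**3)) == 0
```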
Substituting (\ref{Br2}) into (\ref{F1}) and expanding $F_{1}(r)$ up to second order in $\alpha$, we find
\begin{equation}
\label{F1new}
F_{1}(r)=-\frac{1}{4\pi}\left(1+\frac{\alpha}{4r^{2}}\right)\left(A'+\frac{A}{r}\right)-\frac{2a(b+r)}{(3b+2r)^{2}}+\frac{a(b+2r)(2b+3r)}{4r^{2}(3b+2r)^{3}}\alpha=0\,,
\end{equation}
and this equation yields the solution
\begin{eqnarray}
\label{Ar}
&A(r)=-2\pi a+\frac{\pi ab^{2}(243b^{4}(3b+2r)-9b^{2}(18b+17r)\alpha-(57b+43r)\alpha^{2})}{r(3b+2r)^{2}(9b^{2}+\alpha)^{2}}\nonumber\\+&\frac{a\pi\alpha^{3/2}(-27b^{4}+2\alpha b^{2}+5\alpha^{2})\arctan(2r/\sqrt{\alpha})}{2r(9b^{2}+\alpha)^{3}}+\frac{2\pi ab(1458b^{6}+378\alpha b^{4}+9\alpha^{2}b^{2}-5\alpha^{3})}{r(9b^{2}+\alpha)^{3}}\ln\left(\frac{3b+2r}{2b}\right)\nonumber\\&+\frac{\pi\alpha ab(108b^{4}+45\alpha b^{2}+7\alpha^{2})}{r(9b^{2}+\alpha)^{3}}\ln\left(\frac{4r^{2}+\alpha}{4b^{2}}\right)\,,
\end{eqnarray}
where we choose suitable integration constants to obtain dimensionless logarithmic terms. Using the solutions in (\ref{Br2}) and (\ref{Ar}), we determine the modified $h(r,P)$ function and thus obtain the GUP-corrected vdW black hole solution. As can be seen from (\ref{Br2}) and (\ref{Ar}), the results of \cite{Rajagopal2014} are recovered in the limit $\alpha\rightarrow0$.
\begin{figure}
\centerline{\includegraphics[width=10cm]{entropy.eps}}
\vspace*{8pt}
\caption{(Colour online). Semi-classical (black solid line) and GUP-corrected (red dashed line) entropies as functions of $r_{h}$. We set $a=\frac{1}{2\pi}$, $b=1$.\label{f2}}
\end{figure}
For a valid physical solution, the stress-energy tensor should satisfy certain energy conditions. Therefore, the energy conditions must be checked to find a physically meaningful solution \cite{Poisson2004}:
\begin{eqnarray}
Weak:&\qquad \rho \geq 0\,, \qquad \rho+p_{i} \geq 0\,,\\
Strong:&\qquad \rho+\sum_{i}p_{i}\geq 0\,, \qquad \rho+p_{i} \geq 0\,,\\
Dominant:&\qquad \rho \geq 0\,, \qquad \rho \geq |p_{i}|\,.
\end{eqnarray}
In Fig.~(\ref{f1}), we check the energy conditions. All energy conditions can be satisfied just outside the horizon for sufficiently small pressure. Since $p_{2}$ is positive for sufficiently large $r$, it is not displayed in the figure. Our solution therefore appears to be physically valid near the horizon.
In the next section, we will investigate the thermodynamics of GUP-corrected vdW black holes.
\section{Thermodynamics and Phase Transition}
\label{GUPBHT}
In the previous section, we obtained the GUP-corrected $h(r_{h},P)$ function. Therefore, we can explore the GUP effects on the thermodynamic quantities of the vdW black hole. In Fig.~(\ref{f2}), we show both the semi-classical and the GUP-corrected entropies of the vdW black hole. The GUP-corrected entropy is clearly always smaller than the semi-classical entropy. Moreover, the modified entropy is a monotonically increasing function of $r_{h}$ on $0<r_{h}<\infty$, since
\begin{equation}
\label{derivativeOfEntropy}
\frac{dS}{dr_{h}}=\frac{8\pi r_{h}^{3}}{\alpha+4r_{h}^{2}}\,.
\end{equation}
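The derivative (\ref{derivativeOfEntropy}) follows directly from (\ref{entropy}); a short SymPy check (our notation) confirms it:

```python
import sympy as sp

r, alpha = sp.symbols('r_h alpha', positive=True)
S = sp.pi*r**2 - (alpha*sp.pi/4)*sp.log((4*r**2 + alpha)/alpha)  # eq. (entropy)
dS = sp.simplify(sp.diff(S, r))
assert sp.simplify(dS - 8*sp.pi*r**3/(alpha + 4*r**2)) == 0      # eq. (derivativeOfEntropy)
```

Since $r_{h}$ and $\alpha$ are positive, $dS/dr_{h}>0$ everywhere, confirming the monotonicity claimed above.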
The corrected temperature may be larger or smaller than the original temperature. In Fig.~(\ref{f3}), the semi-classical and modified temperatures are plotted in terms of $r_{h}$ for suitable choices of parameters. In Fig.~(\ref{f3}a), the GUP-corrected temperature has an unstable branch for small black holes, while for larger event horizons it shows the same characteristic behaviour as the semi-classical temperature, since quantum gravity effects become negligible there. The unstable branch corresponds to the negative specific heat region; in this case, the vdW black hole is thermodynamically unstable at smaller event horizons in the presence of GUP effects. It is also clear that the GUP correction increases the temperature. In Fig.~(\ref{f3}b), the corrected temperature again shows an unstable branch for smaller event horizons and becomes negative, and hence ill-defined, for some event horizons. The corrected temperature is smaller than the original temperature for certain event horizons, and again shows the same characteristic behaviour for larger event horizons due to negligible quantum gravity effects. One may quantify the difference between the corrected and original temperatures by
\begin{equation}
\label{tempDiff}
\Delta T=\frac{\alpha(12r_{h}^{2}+8br_{h}+\alpha)}{16r_{h}^{3}}-\frac{\alpha ar_{h}(b+2r_{h})(2b+3r_{h})}{4r_{h}^{3}(3b+2r_{h})^{3}}\,.
\end{equation}
When $\Delta T$ is positive (negative), the corrected temperature is larger (smaller) than the original temperature. The sign of (\ref{tempDiff}) is determined by the competition between the first and second terms. For example, the first term dominates for sufficiently large event horizons, so the corrected temperature may be larger than the original one.
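As a numerical illustration of this competition, one can evaluate (\ref{tempDiff}) for the parameters of Fig.~(\ref{f3}a), taking $\alpha=1$ as an illustrative value (the figures do not quote $\alpha$; the helper name is ours):

```python
import math

def delta_T(r, a, b, alpha):
    """Temperature difference Delta T of eq. (tempDiff)."""
    first = alpha*(12*r**2 + 8*b*r + alpha)/(16*r**3)
    second = alpha*a*r*(b + 2*r)*(2*b + 3*r)/(4*r**3*(3*b + 2*r)**3)
    return first - second

# parameters of Fig. 3a, with alpha = 1 as an illustrative value
a, b, alpha = 1/(2*math.pi), 1.0, 1.0
assert delta_T(1.0, a, b, alpha) > 0   # corrected temperature above the original
```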
Now, we investigate the behaviour of the heat capacity, probing the thermodynamic stability and possible phase transitions of the vdW black hole, in Fig.~(\ref{f4}). Both the semi-classical and corrected heat capacities may show stable--unstable phase transitions. In Fig.~(\ref{f4}a), we observe that the corrected heat capacity diverges for $a=1/2\pi$ and $b=l=1$, signalling a stable--unstable black hole phase transition in the presence of the GUP modification. Pradhan reported a similar phase transition for the vdW black hole in \cite{Pradhan2016}, but there the phase transition occurs at negative values of the event horizon radius for $b=l=1$ and $a=1/2\pi$, which is not physically acceptable. In Fig.~(\ref{f4}b), the two heat capacities exchange roles for $a=l=1$ and $b=0.1$: while the semi-classical heat capacity shows a stable--unstable phase transition, the corrected heat capacity does not show any phase transition.
We investigate the behaviour of the thermodynamic volume and the mass in Figs.~(\ref{f5}) and (\ref{f6}), respectively. From (\ref{volume}), the volume is given by
\begin{equation}
\label{volume2}
V=\frac{4}{3}\pi r_{h}^{3}+2\pi br_{h}^{2}+\frac{\pi b\alpha}{3}+\frac{\pi\alpha r_{h}}{2}\,.
\end{equation}
Since the correction terms are always positive, the modified volume is always larger than the original volume. As can be seen from Fig.~(\ref{f6}), the corrected mass may be larger or smaller than the original mass for a suitable choice of $a$ and $b$.
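Equation (\ref{volume2}) can be checked directly: with $h=A(r)-PB(r)$ and $M$ from (\ref{mass}), one has $V=\partial M/\partial P=\frac{4}{3}\pi r_{h}^{3}+\frac{r_{h}}{2}B(r_{h})$, which reproduces (\ref{volume2}) upon inserting (\ref{Br2}). In SymPy (our notation):

```python
import sympy as sp

r, b, alpha = sp.symbols('r_h b alpha', positive=True)
B = 4*sp.pi*b*r + (1 + 2*b/(3*r))*sp.pi*alpha                # eq. (Br2)
# V = dM/dP with M = (4/3) pi r^3 P - h r / 2 and h = A(r) - P B(r)
V = sp.Rational(4, 3)*sp.pi*r**3 + r*B/2
V_paper = (sp.Rational(4, 3)*sp.pi*r**3 + 2*sp.pi*b*r**2
           + sp.pi*b*alpha/3 + sp.pi*alpha*r/2)              # eq. (volume2)
assert sp.expand(V - V_paper) == 0
```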
\begin{figure}
\includegraphics[width=0.5\linewidth]{temp.eps}\hfil
\includegraphics[width=0.5\linewidth]{temp2.eps}
\caption{(Colour online). Semi-classical (black solid line) and GUP-corrected (red dashed line) temperatures as functions of $r_{h}$. We set (a) $a=\frac{1}{2\pi}$, $b=l=1$. (b) $b=0.1$, $a=l=1$.}
\label{f3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{heatCa.eps}\hfil
\includegraphics[width=0.5\linewidth]{heatCa2.eps}
\caption{(Colour online). Semi-classical (black solid line) and GUP-corrected (red dashed line) heat capacities as functions of $r_{h}$. We set (a) $a=\frac{1}{2\pi}$, $b=l=1$. (b) $b=0.1$, $a=l=1$.}
\label{f4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{Volume.eps}}
\vspace*{8pt}
\caption{Semi-classical (black solid line) and GUP-corrected (red dashed line) thermodynamic volumes as functions of $r_{h}$. We set $b=1$.\label{f5}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{mass.eps}\hfil
\includegraphics[width=0.5\linewidth]{mass2.eps}
\caption{(Colour online). Semi-classical (black solid line) and GUP-corrected (red dashed line) masses as functions of $r_{h}$. We set (a) $a=\frac{1}{2\pi}$, $b=l=1$. (b) $b=0.1$, $a=l=1$.}
\label{f6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{PV.eps}\hfil
\includegraphics[width=0.5\linewidth]{PV2.eps}
\caption{(Colour online). The $P-r_{h}$ diagram. The temperatures of the isotherms decrease from top to bottom, corresponding to $1.3T_{c}$ (blue dotted line), $T_{c}$ (red dashed line) and $0.7T_{c}$ (green solid line). We set $a=\frac{1}{2\pi}$, $b=1$. (a) GUP-corrected equation of state. (b) Original equation of state.}
\label{f7}
\end{figure}
Needless to say, it is no surprise to find critical points, since the black hole solution shares its thermodynamics with the vdW fluid. From (\ref{temperature}) and the corrected $h(r,P)$ function, one can obtain the equation of state
\begin{equation}
\label{Eos}
P=\frac{16r_{h}^{3}(2r_{h}+3b)((2r_{h}+3b)^{2}T-2a(r_{h}+b))+4ar_{h}(2r_{h}+b)(3r_{h}+2b)\alpha}{(2r_{h}+3b)^{3}(4r_{h}^{2}+\alpha)(8r_{h}(r_{h}+b)+\alpha)}\,.
\end{equation}
In order to obtain the critical points, we should solve the following two equations:
\begin{equation}
\frac{\partial P}{\partial r_{h}}=\frac{\partial^{2}P}{\partial r_{h}^{2}}=0\,.
\label{CP}
\end{equation}
Since (\ref{Eos}) is very complicated, we have to find the critical points numerically. Setting $\alpha=b=1$ and $a=1/2\pi$, we find $T_{c}=0.03279$, $r_{c}=1.52439$ and $P_{c}=0.00223$. In Fig.~(\ref{f7}a), we show the small--large black hole phase transition for the modified equation of state. Except for the unstable left branch, the phase diagram largely shows the well-known behaviour of a liquid--gas system. Similar behaviour for the GUP-corrected phase transition of the charged AdS black hole was shown by Sun and Ma \cite{Sun2018}. Although the original vdW black hole equation of state also exhibits critical points, its phase transition seems physically unreasonable. One can obtain the critical points from (\ref{CP}):
\begin{equation}
T_{c}=\frac{8a}{27b},\qquad r_{c}=0\,,\qquad P_{c}=\frac{a}{27b^{2}}\,.
\label{CP2}
\end{equation}
It is possible to define a positive critical specific volume, but the small black hole and phase transition branches correspond to negative event horizons. Therefore, the critical points do not correspond to a physical phase transition in the absence of the GUP modification.
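Both critical-point statements can be checked with a computer algebra system: the semi-classical values $T_{c}=8a/27b$ and $P_{c}=a/27b^{2}$ follow symbolically from applying (\ref{CP}) to (\ref{vdW}), and the corrected critical point can be located numerically from (\ref{Eos}). A sketch (our notation; the numerical tolerances and initial guess are ours):

```python
import sympy as sp

# (i) semi-classical vdW critical point from dP/dv = d2P/dv2 = 0
v, b, a, T = sp.symbols('v b a T', positive=True)
P_vdw = T/(v - b) - a/v**2                                   # eq. (vdW)
T_of_v = sp.solve(sp.diff(P_vdw, v), T)[0]                   # dP/dv = 0
v_c = sp.solve(sp.diff(P_vdw, v, 2).subs(T, T_of_v), v)[0]   # d2P/dv2 = 0
T_c = sp.simplify(T_of_v.subs(v, v_c))
P_c = sp.simplify(P_vdw.subs({v: v_c, T: T_c}))
assert sp.simplify(v_c - 3*b) == 0
assert sp.simplify(T_c - sp.Rational(8, 27)*a/b) == 0
assert sp.simplify(P_c - a/(27*b**2)) == 0

# (ii) corrected equation of state, eq. (Eos), with alpha = b = 1, a = 1/(2 pi)
r, Tg = sp.symbols('r T_g', positive=True)
av, bv, al = 1/(2*sp.pi), 1, 1
P_gup = (16*r**3*(2*r + 3*bv)*((2*r + 3*bv)**2*Tg - 2*av*(r + bv))
         + 4*av*r*(2*r + bv)*(3*r + 2*bv)*al) / (
        (2*r + 3*bv)**3*(4*r**2 + al)*(8*r*(r + bv) + al))
rc, Tc_num = sp.nsolve([sp.diff(P_gup, r), sp.diff(P_gup, r, 2)],
                       [r, Tg], [1.5, 0.033])
Pc_num = P_gup.subs({r: rc, Tg: Tc_num})
# values quoted in the text: T_c = 0.03279, r_c = 1.52439, P_c = 0.00223
assert abs(float(rc) - 1.52439) < 1e-2
assert abs(float(Tc_num) - 0.03279) < 1e-3
assert abs(float(Pc_num) - 0.00223) < 1e-3
```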
\section{Conclusions}
\label{Concl.}
In this paper, we have obtained the GUP-corrected vdW black hole solution and investigated its thermodynamic quantities and phase transitions. Black hole thermodynamics should be modified once quantum gravity effects are taken into account near the Planck scale; therefore, we have considered the GUP modification of vdW black holes. To obtain the solution, we have matched the GUP-corrected black hole equation of state with that of the vdW fluid, and thereby found the modified $h(r_{h},P)$ function.
Moreover, we have found the modified thermodynamic quantities such as the entropy, temperature, heat capacity and thermodynamic volume, and shown that the GUP affects the thermodynamic behaviour of vdW black holes. The GUP-corrected entropy is always smaller than the semi-classical entropy. The modified temperature may be larger or smaller than the semi-classical temperature for suitable choices of metric parameters, and the corrected mass may likewise be larger or smaller than the original mass. We have also found that the corrected volume is always larger than the semi-classical volume. To investigate phase transitions and thermodynamic stability, we have analysed the specific heat: one may observe a stable--unstable phase transition for both the modified and the semi-classical heat capacities. Finally, we have presented the $P-V$ criticality of GUP-corrected vdW black holes. Although one finds critical points for the semi-classical equation of state, the corresponding phase transition is not physically reasonable due to negative event horizons.
In summary, we have revealed some properties of the thermodynamics and phase transitions of vdW black holes under GUP corrections. It would also be interesting to study higher-dimensional vdW black holes with GUP modifications, given the importance of particular dimensions \cite{Delsate2015}; our study can likewise be generalized to the black hole solutions of \cite{Setare2015,Abchouyeh2017,Xu2018,Debnath2018}.
\section*{Acknowledgement}
The authors thank the anonymous reviewers for their helpful and constructive comments.
\section*{Appendix}
In this appendix, we give the field equations, from which the corresponding stress-energy tensor is constructed. From (\ref{metric1}), the components of the Ricci tensor $R_{\mu\nu}$ are given by
\begin{eqnarray}
\label{a1}
R_{00}=f\left(\frac{f''}{2}+\frac{f'}{r}\right) ,\qquad R_{11}=-\frac{1}{f}\left(\frac{f''}{2}+\frac{f'}{r}\right)\,,
\end{eqnarray}
\begin{eqnarray}
\label{a2}
R_{22}=1-f'r-f ,\qquad R_{33}=R_{22}\sin^{2}\theta\,,
\end{eqnarray}
and the Ricci scalar $R$ is given by
\begin{equation}
\label{a3}
R=-f''-\frac{4f'}{r}-\frac{2f}{r^{2}}+\frac{2}{r^{2}}\,.
\end{equation}
From $R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+g_{\mu\nu}\Lambda=8\pi T_{\mu\nu}$, we get the components of stress energy tensor,
\begin{eqnarray}
\label{a4}
T_{00}=\frac{f-f^{2}-ff'r}{8\pi r^{2}}+Pf,\qquad T_{11}=\frac{f+f'r-1}{8\pi fr^{2}}-\frac{P}{f}\,,
\end{eqnarray}
\begin{eqnarray}
\label{a5}
T_{22}=\frac{2f'r+f''r^{2}}{16\pi}-Pr^{2},\qquad T_{33}=T_{22}\sin^{2}\theta\,,
\end{eqnarray}
where we have used $\Lambda=-8\pi P$. Taking $T^{\mu\nu}$ as the anisotropic fluid source of (\ref{stressEnergy}), one easily finds (\ref{energyDensityWithp1}) and (\ref{principalPressure23}).
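The appendix formulas can be verified mechanically: computing the Christoffel symbols, Ricci tensor and Ricci scalar for the metric (\ref{metric1}) and building $T_{\mu\nu}$ from the field equations reproduces (\ref{a3}), (\ref{energyDensityWithp1}) and (\ref{principalPressure23}). A SymPy sketch (our notation, generic $f(r)$):

```python
import sympy as sp

t, r, th, ph, P = sp.symbols('t r theta phi P')
f = sp.Function('f')(r)
x = [t, r, th, ph]
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^i_{jk}
Gamma = [[[sum(ginv[i, m]*(sp.diff(g[m, j], x[k]) + sp.diff(g[m, k], x[j])
               - sp.diff(g[j, k], x[m])) for m in range(n))/2
           for k in range(n)] for j in range(n)] for i in range(n)]

# Ricci tensor R_{ij} = d_k Gamma^k_{ij} - d_i Gamma^k_{kj} + Gamma-Gamma terms
def ricci(i, j):
    expr = 0
    for k in range(n):
        expr += sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][k][j], x[i])
        for m in range(n):
            expr += Gamma[k][k][m]*Gamma[m][i][j] - Gamma[k][i][m]*Gamma[m][k][j]
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, ricci)
R = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(n) for j in range(n)))

fp, fpp = sp.diff(f, r), sp.diff(f, r, 2)
assert sp.simplify(R - (-fpp - 4*fp/r - 2*f/r**2 + 2/r**2)) == 0     # eq. (a3)

# T_{mu nu} from R_{mu nu} - (1/2) g_{mu nu} R + Lambda g_{mu nu} = 8 pi T_{mu nu},
# with Lambda = -8 pi P
Tmn = (Ric - sp.Rational(1, 2)*g*R - 8*sp.pi*P*g)/(8*sp.pi)
rho = sp.simplify(Tmn[0, 0]/f)   # rho = T_{mu nu} u^mu u^nu, u^mu = (1/sqrt(f),0,0,0)
p1 = sp.simplify(f*Tmn[1, 1])
p2 = sp.simplify(Tmn[2, 2]/r**2)
assert sp.simplify(rho - ((1 - f - r*fp)/(8*sp.pi*r**2) + P)) == 0   # eq. (energyDensityWithp1)
assert sp.simplify(rho + p1) == 0                                    # rho = -p_1
assert sp.simplify(p2 - ((r*fpp + 2*fp)/(16*sp.pi*r) - P)) == 0      # eq. (principalPressure23)
```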
\title{Aspects of lattice $\ensuremath{\mathcal N} = 4$ supersymmetric Yang--Mills}
\ShortTitle{Aspects of lattice $\ensuremath{\mathcal N} = 4$ supersymmetric Yang--Mills}
\author{\speaker{David Schaich} \\
Department of Physics, Syracuse University, Syracuse, NY 13244, United States\thanks{Permanent address} \\
\mbox{Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, United States} \\
Institut f\"ur Physik, Humboldt-Universit\"at zu Berlin, 12489 Berlin, Germany \\
E-mail: \email{[email protected]}
}
\abstract{
Non-perturbative investigations of $\ensuremath{\mathcal N} = 4$ supersymmetric Yang--Mills theory formulated on a space-time lattice have advanced rapidly in recent years.
Large-scale numerical calculations are currently being carried out based on a construction that exactly preserves a single supersymmetry at non-zero lattice spacing.
A recent development is the creation of an improved lattice action through a new procedure to regulate flat directions in a manner compatible with this supersymmetry, by modifying the moduli equations.
In this proceedings I briefly summarize this new procedure and discuss the parameter space of the resulting improved action that is now being employed in numerical calculations.
}
\FullConference{The 33rd International Symposium on Lattice Field Theory \\ 14--18 July 2015 \\ Kobe, Japan}
\begin{document}
\setlength{\abovedisplayskip}{6 pt}
\setlength{\belowdisplayskip}{6 pt}
$\ensuremath{\mathcal N} = 4$ supersymmetric Yang--Mills (SYM) is a particularly interesting gauge theory that plays important roles in holographic approaches to quantum gravity, investigations of the structure of scattering amplitudes, and the conformal bootstrap program.
It is also the only known four-dimensional theory admitting a lattice regularization that exactly preserves a subset of the supersymmetry algebra at non-zero lattice spacing~\cite{Kaplan:2005ta, Unsal:2006qp, Catterall:2007kn, Damgaard:2008pa, Catterall:2009it}.
This lattice construction provides a promising foundation for large-scale numerical investigations of $\ensuremath{\mathcal N} = 4$ SYM that can in principle access non-perturbative couplings for arbitrary numbers of colors $N$.
(Other approaches to studying $\ensuremath{\mathcal N} = 4$ SYM numerically include Refs.~\cite{Ishii:2008ib, Ishiki:2008te, Ishiki:2009sg, Hanada:2010kt, Honda:2011qk, Honda:2013nfa, Hanada:2013rga}.)
Even though the field of lattice $\ensuremath{\mathcal N} = 4$ SYM is in its infancy, recent computations have provided the first ab~initio numerical results for quantities such as the static potential, to be confronted with perturbative and holographic predictions~\cite{Catterall:2014vka, Catterall:2014vga}.
An exciting new development is the introduction of a procedure to regulate flat directions by modifying the moduli equations in a way that preserves the single exact supersymmetry at non-zero lattice spacing~\cite{Catterall:2015ira, Schaich:2015daa}.
This procedure results in an improved lattice action that exhibits dramatically reduced violations of supersymmetric Ward identities and much more rapid approach to the continuum limit.
This improved action has been implemented in our parallel software for lattice $\ensuremath{\mathcal N} = 4$ SYM~\cite{Schaich:2014pda}, which we make publicly available to encourage independent investigations and the development of a lattice $\ensuremath{\mathcal N} = 4$ SYM community.\footnote{{\tt\href{http://github.com/daschaich/susy}{http://github.com/daschaich/susy}}}
In this proceedings I briefly summarize the new procedure to regulate flat directions without breaking the exact supersymmetry, and discuss the parameter space of the resulting improved lattice action.
Preliminary results for the static potential and pfaffian phase from ongoing numerical calculations using the improved action were presented in \refcite{Schaich:2015daa}.
These investigations, as well as finite-size scaling and Monte Carlo renormalization group~\cite{Catterall:2014mha} analyses of the scaling dimensions of simple conformal operators, will soon be reported in future work.
\section*{Improved lattice action for $\ensuremath{\mathcal N} = 4$ SYM}
The underlying lattice action for $\ensuremath{\mathcal N} = 4$ SYM is the direct discretization of the continuum action produced by topological twisting~\cite{Marcus:1995mq, Kapustin:2006pk},
\begin{align}
S_{\rm formal} = \frac{N}{2\ensuremath{\lambda_{\rm lat}} } \sum_n \Tr{\ensuremath{\mathcal Q} \left(\chi_{ab}(n)\ensuremath{\mathcal D} _a^{(+)}\ensuremath{\mathcal U} _b(n) + \eta(n) \ensuremath{\overline{\mathcal D}} _a^{(-)}\ensuremath{\mathcal U} _a(n) - \frac{1}{2}\eta(n) d(n) \right)} \label{eq:S0} \\
-\frac{N}{8\ensuremath{\lambda_{\rm lat}} } \sum_n \Tr{\ensuremath{\epsilon} _{abcde}\ \chi_{de}(n + \ensuremath{\widehat \mu} _a + \ensuremath{\widehat \mu} _b + \ensuremath{\widehat \mu} _c) \ensuremath{\overline{\mathcal D}} _c^{(-)} \chi_{ab}(n)}, \nonumber
\end{align}
with repeated indices summed and continuum gauge-covariant derivatives replaced by lattice finite-difference operators $\ensuremath{\mathcal D} _a$~\cite{Catterall:2007kn, Damgaard:2008pa}.
All indices run from 1 through 5, corresponding to a discretization on the $A_4^*$ lattice of five linearly dependent basis vectors symmetrically spanning four space-time dimensions~\cite{Unsal:2006qp, Catterall:2014vga}.
At each lattice site $n$ the five complexified gauge links $\ensuremath{\mathcal U} _a(n)$ contain both the gauge and scalar fields.
This results in an enlarged $\ensuremath{\mbox{U(}N\mbox{)}} = \ensuremath{\mbox{SU(}N\mbox{)}} \otimes \ensuremath{\mbox{U(1)}} $ gauge invariance, where $N$ is the number of colors.
In the continuum the U(1) sector decouples from observables in the SU($N$) sector, but this is not true at non-zero lattice spacing $a$.
\ensuremath{\mathcal Q} is the twisted-scalar supersymmetry, whose closed subalgebra $\ensuremath{\mathcal Q} ^2 = 0$ is exactly preserved even for $a > 0$.
The equations of motion for the bosonic auxiliary field $d(n) = \ensuremath{\overline{\mathcal D}} _a^{(-)}\ensuremath{\mathcal U} _a(n)$ determine the moduli space of the system.
The moduli space of the lattice theory matches that of the continuum theory~\cite{Catterall:2011pd}, and in particular possesses flat directions and exact zero modes that destabilize numerical computations.
These flat directions must be regulated in both the SU($N$) and U(1) sectors, generically requiring two deformations of the lattice action.
In Refs.~\cite{Catterall:2014vka, Schaich:2014pda, Catterall:2014vga}, these were added directly to \eq{eq:S0} to produce the unimproved action $S_{\rm unimp} = S_{\rm formal} + S_{\rm soft}$, where
\begin{equation}
\label{eq:Ssoft}
S_{\rm soft} = \frac{N}{2\ensuremath{\lambda_{\rm lat}} } \mu^2 \sum_n \sum_a \left(\frac{1}{N} \Tr{\ensuremath{\mathcal U} _a(n) \ensuremath{\overline{\mathcal U}} _a(n)} - 1\right)^2 + \ensuremath{\kappa} \sum_n \sum_{a < b} \left|\det \ensuremath{\mathcal P} _{ab}(n) - 1\right|^2
\end{equation}
with two tunable auxiliary parameters $\mu$ and $\ensuremath{\kappa} $.
The first ($\mu$) term is a scalar potential that regulates the SU($N$) flat directions and constrains $\vev{\Im\det\ensuremath{\mathcal P} _{ab}}$, where $\ensuremath{\mathcal P} _{ab} = \ensuremath{\mathcal P} _{ba}^*$ is the oriented plaquette in the $a$--$b$ plane.
The U(1) phase of the links cancels out of $\Tr{\ensuremath{\mathcal U} _a(n) \ensuremath{\overline{\mathcal U}} _a(n)}$ in this scalar potential, requiring the addition of the second ($\ensuremath{\kappa} $) term to further constrain the plaquette determinant.
Both terms in \eq{eq:Ssoft} softly break the \ensuremath{\mathcal Q} supersymmetry preserved by the underlying \eq{eq:S0}.
In the relevant region of parameter space the soft-breaking effects of the \ensuremath{\kappa} term are much larger than those of the $\mu$ term~\cite{Catterall:2014vka}.
This motivated our recent development of a general method that can be applied to regulate flat directions in a manner compatible with the \ensuremath{\mathcal Q} supersymmetry~\cite{Catterall:2015ira}.
The procedure produces modified auxiliary field equations of motion
\begin{equation}
\label{eq:EoM}
d(n) = \ensuremath{\overline{\mathcal D}} _a^{(-)}\ensuremath{\mathcal U} _a(n) + G \ensuremath{\mathcal O} (n)\ensuremath{\mathbb I} _N,
\end{equation}
where $\ensuremath{\mathcal O} (n)$ can be any gauge-invariant local bosonic operator and $G$ is a new auxiliary parameter.
The modified equations of motion are obtained by replacing
\begin{equation}
\ensuremath{\mathcal Q} \; \Tr{\eta(n) \left(\ensuremath{\overline{\mathcal D}} _a^{(-)}\ensuremath{\mathcal U} _a(n)\right)} \longrightarrow \ensuremath{\mathcal Q} \; \Tr{\eta(n) \left(\ensuremath{\overline{\mathcal D}} _a^{(-)}\ensuremath{\mathcal U} _a(n) + G\ensuremath{\mathcal O} (n)\ensuremath{\mathbb I} _N\right)},
\end{equation}
in \eq{eq:S0}, introducing a manifestly $\ensuremath{\mathcal Q} $-exact deformation.
The operator $\ensuremath{\mathcal O} (n)$ is now constrained by a \ensuremath{\mathcal Q} Ward identity,
\begin{equation}
\label{eq:Ward}
\vev{\sum_n \Tr{\ensuremath{\mathcal Q} \; \eta(n)}} = \vev{\sum_n \Tr{d(n)}} = NG \vev{\sum_n \ensuremath{\mathcal O} (n)} = 0,
\end{equation}
since the $\Tr{\ensuremath{\overline{\mathcal D}} _a^{(-)} \ensuremath{\mathcal U} _a(n)}$ term in \eq{eq:EoM} vanishes upon summing over all lattice sites $n$.
We take $\ensuremath{\mathcal O} (n) = \sum_{a \neq b} \left(\det\ensuremath{\mathcal P} _{ab}(n) - 1\right) = 2\Re\sum_{a < b}\left(\det\ensuremath{\mathcal P} _{ab}(n) - 1\right)$ in order to replace the \ensuremath{\kappa} term in \eq{eq:Ssoft} that is responsible for the bulk of the soft \ensuremath{\mathcal Q} supersymmetry breaking.
The corresponding Ward identity in \eq{eq:Ward} becomes $\sum_n \sum_{a \neq b} \vev{\det\ensuremath{\mathcal P} _{ab}(n) - 1} = 0$, implying $\vev{\Re\det\ensuremath{\mathcal P} _{ab}} = 1$.
We still need to retain the $\mu$ term in \eq{eq:Ssoft}, to regulate flat directions in the SU($N$) sector and constrain $\vev{\Im\det\ensuremath{\mathcal P} _{ab}}$.
This combination of a supersymmetric plaquette determinant deformation and a soft $\ensuremath{\mathcal Q} $-breaking scalar potential defines the improved lattice action $S_{\rm imp}$ that we are now using in large-scale numerical calculations.
It may be possible to implement both of the necessary deformations in the $\ensuremath{\mathcal Q} $-exact manner enabled by the new procedure summarized above.
In order to do so, however, we would have to make all the scalar potential and plaquette determinant terms non-negative at each lattice site so that they cannot cancel each other out and hence fail to regulate the flat directions.
We explored one way of accomplishing this in \refcite{Catterall:2015ira}, implementing the moduli equations
\begin{equation}
\label{eq:overC}
d(n) = \ensuremath{\overline{\mathcal D}} _a^{(-)}\ensuremath{\mathcal U} _a(n) + B^2 \sum_a \left(\frac{1}{N} \Tr{\ensuremath{\mathcal U} _a(n) \ensuremath{\overline{\mathcal U}} _a(n)} - 1\right)^2 \ensuremath{\mathbb I} _N + G\sum_{a \neq b} \left|\det\ensuremath{\mathcal P} _{ab}(n) - 1\right|^2 \ensuremath{\mathbb I} _N,
\end{equation}
with auxiliary parameters $B$ and $G$.
This choice turned out not to work, most likely because it imposes $15V$ non-trivial constraints for lattice volume $V$, rather than the single constraint of \eq{eq:Ward}.
Thus, although this over-constrained action $S_{\rm over}$ is exactly supersymmetric, lattice calculations employing it are generally unable to reach a supersymmetric vacuum.
Even so, it may be worthwhile to continue searching for more clever ways to further reduce soft supersymmetry breaking in numerical calculations.
\section*{Ward identities and the lattice parameter space}
Our publicly available parallel software has been updated so that the user may employ any of the lattice actions discussed above ($S_{\rm formal}$, $S_{\rm unimp}$, $S_{\rm imp}$ or $S_{\rm over}$) by using appropriate values for the auxiliary parameters $\mu$, $\ensuremath{\kappa} $, $G$ and $B$ in Eqs.~\ref{eq:Ssoft}, \ref{eq:EoM} and \ref{eq:overC}.\footnote{A compilation flag switches between the two different $G$ terms in \eq{eq:EoM} vs.\ \eq{eq:overC}.}
All of our ongoing calculations use the improved action by fixing $\ensuremath{\kappa} = 0$ and $B = 0$ with non-zero values for $\mu$ and $G$.
These two couplings, in combination with the number of colors $N$, the lattice volume $L^4$ or $L^3\ensuremath{\!\times\!} N_t$ and the lattice 't~Hooft coupling $\ensuremath{\lambda_{\rm lat}} $, produce a somewhat complicated parameter space that takes some time to explore.
Here I summarize some results of that exploration and our resulting choices for the auxiliary parameters we are currently using.
To begin, recall that in the end we want to remove the two deformations in the improved action, in order to recover the full $\ensuremath{\mathcal N} = 4$ SYM theory in the continuum limit $(a / L) \to 0$.
This motivates working with the smallest acceptable values of $\mu$ and $G$, extrapolating at least ${\mu \to 0}$.
If these parameters are made too small, however, the lattice calculations will exhibit instabilities, either confinement via U(1) monopole condensation or an excursion along a nearly flat direction (cf.~Figs.~1 and 5 in \refcite{Catterall:2014vka}).
The onset of these instabilities is quite sensitive to both the lattice volume and the 't~Hooft coupling.\footnote{We also observe some preliminary signs of sensitivity to $N$, but this appears less significant.}
As \ensuremath{\lambda_{\rm lat}} increases, either or both of $\mu$ and $G$ must be increased.
Conversely, these parameters can be decreased as $L$ increases towards the continuum limit.
At present we proceed by keeping $G$ fixed and scaling $\mu$ to produce constant $(\mu L)^2 / \ensuremath{\lambda_{\rm lat}} $.
The $\mu \propto 1 / L$ dependence can be motivated by thinking of $\mu$ as providing an effective mass for the scalar U(1) modes.
Scaling $\mu^2 \propto \ensuremath{\lambda_{\rm lat}} $ follows from the form of the scalar potential in \eq{eq:Ssoft}.
Finally, we fix $G$ both for simplicity and because the U(1) sector which it affects decouples from the SU($N$) target theory in the $(a / L) \to 0$ continuum limit, as we will see below~\cite{Catterall:2015ira}.
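In practice this scaling rule amounts to holding $(\mu L)^2 / \ensuremath{\lambda_{\rm lat}} $ fixed; a trivial helper illustrates it (a sketch; the function name and the constant $c$ are ours, with $c \approx 2.5$ as in Table~\ref{tab:continuum}):

```python
import math

def mu_for(L, lam_lat, c=2.5):
    """Auxiliary coupling mu chosen so that (mu * L)**2 / lam_lat = c."""
    return math.sqrt(c * lam_lat) / L

# doubling L at fixed 't Hooft coupling halves mu
assert abs(mu_for(16, 1.0) - mu_for(8, 1.0)/2) < 1e-12
```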
We monitor \ensuremath{\mathcal Q} Ward identities to assess the amount of soft supersymmetry breaking and U(1) lattice artifacts.
For the improved action three such Ward identities are available: \eq{eq:Ward} fixes $\Re\det\ensuremath{\mathcal P} = 1$; the structure of \eq{eq:S0} predicts the exact value of the bosonic action per lattice site $s_B = 9N^2 / 2$; and \ensuremath{\mathcal Q} acting on $\ensuremath{\mathcal O} \equiv \Tr{\eta \sum_a \ensuremath{\mathcal U} _a \ensuremath{\overline{\mathcal U}} _a}$ produces the relative quantity
\begin{equation}
\label{eq:bilin}
\ensuremath{\delta} \ensuremath{\mathcal Q} \ensuremath{\mathcal O} = \frac{\Tr{d \sum_a \ensuremath{\mathcal U} _a \ensuremath{\overline{\mathcal U}} _a} - \Tr{\eta \sum_a \psi_a \ensuremath{\overline{\mathcal U}} _a}}{\sqrt{\Tr{d \sum_a \ensuremath{\mathcal U} _a \ensuremath{\overline{\mathcal U}} _a}^2 + \Tr{\eta \sum_a \psi_a \ensuremath{\overline{\mathcal U}} _a}^2}},
\end{equation}
where the normalization factor in the denominator is slightly different from that used in \refcite{Catterall:2014vka}.
In addition we monitor the plaquette and the Polyakov loop $|L|$ constructed from the complexified gauge links (which are not unitarized).
$L$ is sometimes called the Maldacena loop, and since $\ensuremath{\mathcal Q} |L| = 0$ it should remain $|L| \simeq 1$ for all $\ensuremath{\lambda_{\rm lat}} $.
The onset of instabilities generally leads to either $|L| \ll 1$ (for U(1) confinement) or $|L| \gg 1$ (for an excursion along a nearly flat direction).
\begin{table}[htp]
\caption{\label{tab:continuum} Some observables from U(2) $L^4$ ensembles with $\ensuremath{\lambda_{\rm lat}} = 1$, $G = 0.05$ and $(\mu L)^2 / \ensuremath{\lambda_{\rm lat}} \approx 2.5$.}
\centering
\renewcommand\arraystretch{1.2}
\begin{tabular}{|ccc|ccccc|}
\hline
$L$ & $\mu$ & $G$ & Plaq. & $|L|$ & $s_B / 18$ & $\Re\det\ensuremath{\mathcal P} $ & $1 - \ensuremath{\delta} \ensuremath{\mathcal Q} \ensuremath{\mathcal O} $ \\
\hline
16 & 0.10 & 0.05 & 1.0363(9) & 0.918(113) & 0.99986(4) & 0.99864(19) & 0.9962(5) \\
12 & 0.13 & 0.05 & 1.0285(12) & 0.869(54) & 0.99956(5) & 0.99803(30) & 0.9939(9) \\
8 & 0.20 & 0.05 & 1.0373(21) & 1.147(71) & 0.99877(13) & 0.99507(52) & 0.9837(19) \\
6 & 0.25 & 0.05 & 1.0565(56) & 1.015(37) & 0.99868(31) & 0.98999(158) & 0.9654(52) \\
4 & 0.40 & 0.05 & 1.0780(91) & 1.118(33) & 0.99381(107) & 0.97875(494) & 0.8848(237) \\
\hline
\end{tabular}
\end{table}
\begin{figure}[bp]
\centering
\includegraphics[width=0.425\linewidth]{sB_power}\hfill
\includegraphics[width=0.475\linewidth]{det_power}
\caption{\label{fig:continuum}Continuum extrapolations of \ensuremath{\mathcal Q} Ward identity violations on log--log axes with power-law fits, for both the improved (blue triangles, Table~\protect\ref{tab:continuum}) and unimproved (red $\times$s) lattice actions with $N = 2$, $\ensuremath{\lambda_{\rm lat}} = 1$ and $\mu \sim 1 / L$. {\bf Left:} Deviations of the bosonic action from its exact supersymmetric value $9N^2 / 2$. {\bf Right:} Deviations of $\vev{\Re\det\ensuremath{\mathcal P} }$ from unity. In both cases the improved action produces much smaller Ward identity violations that vanish $\propto (a / L)^2$ in the continuum limit.}
\end{figure}
Table~\ref{tab:continuum} and \fig{fig:continuum} illustrate how these quantities behave upon approaching the continuum limit with $N = 2$ and $\ensuremath{\lambda_{\rm lat}} = 1$.
Violations of each Ward identity appear in the table as deviations from unity, which in all three cases vanish $\propto (a / L)^2$ in the continuum limit, consistent with an $\ensuremath{\mathcal O} (a)$-improved lattice action.
Indeed, the lattice symmetries (including the \ensuremath{\mathcal Q} supersymmetry) forbid all dimension-5 operators~\cite{Catterall:2015ira}, so we expect $\ensuremath{\mathcal O} (a)$ improvement when the soft supersymmetry breaking is sufficiently small.
As mentioned above, the right panel of \fig{fig:continuum} demonstrates that the U(1) sector decouples in the continuum limit ($\Re\det\ensuremath{\mathcal P} \to 1$) even though $G = 0.05$ is fixed.
In addition, both the Polyakov loop and plaquette are near unity after being normalized by $N$, providing further checks of the individual ensembles.
It is curious that the $\ensuremath{\delta} \ensuremath{\mathcal Q} \ensuremath{\mathcal O} $ involving the $\eta \psi_a$ fermion bilinear exhibits significantly larger Ward identity violations than the other two quantities, especially on small lattice volumes.
This pattern persists with either periodic or anti-periodic (thermal) temporal boundary conditions (BCs) for the fermions; all results shown here use thermal BCs.
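The power-law behavior can be sketched with an unweighted log--log fit to the $\Re\det\ensuremath{\mathcal P} $ deviations transcribed from Table~\ref{tab:continuum}; a careful analysis would of course weight the points by their quoted uncertainties.

```python
import math

# Unweighted log-log fit of c * (a/L)**p to the deviations of
# Re det P from unity, transcribed from the table above.
L_vals = [16, 12, 8, 6, 4]
dev = [1 - 0.99864, 1 - 0.99803, 1 - 0.99507, 1 - 0.98999, 1 - 0.97875]

x = [math.log(1.0 / L) for L in L_vals]   # log(a/L), lattice units a = 1
y = [math.log(d) for d in dev]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
p = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
print(f"fitted power p = {p:.2f}")        # close to the expected p = 2
```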
\begin{table}[htp]
\caption{\label{tab:4nt4} Some observables from U(2) $4^4$ ensembles with $G = 0.05$ and $(\mu L)^2 / \ensuremath{\lambda_{\rm lat}} \approx 2.5$ for $\ensuremath{\lambda_{\rm lat}} < 5$.}
\centering
\renewcommand\arraystretch{1.2}
\begin{tabular}{|ccc|ccccc|}
\hline
\ensuremath{\lambda_{\rm lat}} & $\mu$ & $G$ & Plaq. & $|L|$ & $s_B / 18$ & $\Re\det\ensuremath{\mathcal P} $ & $1 - \ensuremath{\delta} \ensuremath{\mathcal Q} \ensuremath{\mathcal O} $ \\
\hline
0.5 & 0.28 & 0.05 & 1.0911(184) & 1.237(43) & 0.99563(99) & 0.9928(25) & 0.887(26) \\
1.0 & 0.40 & 0.05 & 1.0780(91) & 1.118(33) & 0.99381(107) & 0.9788(49) & 0.885(24) \\
2.0 & 0.57 & 0.05 & 1.0638(28) & 1.066(24) & 0.99060(69) & 0.9347(35) & 0.817(11) \\
3.0 & 0.69 & 0.05 & 1.0583(35) & 1.037(23) & 0.98474(58) & 0.8733(33) & 0.741(8) \\
4.0 & 0.80 & 0.05 & 1.0383(34) & 0.927(20) & 0.97872(59) & 0.7975(47) & 0.668(7) \\
\hline
5.0 & 0.80 & 0.10 & 1.1594(25) & 1.025(25) & 0.97499(36) & 0.9418(30) & 0.819(8) \\
6.0 & 0.80 & 0.10 & 1.1974(30) & 1.156(32) & 0.97301(55) & 0.9297(32) & 0.807(8) \\
7.0 & 1.00 & 0.15 & 1.2216(23) & 0.901(36) & 0.95725(62) & 0.9506(19) & 0.795(7) \\
7.0 & 1.20 & 0.10 & 1.1392(18) & 0.747(26) & 0.94386(52) & 0.8416(30) & 0.590(8) \\
8.0 & 1.00 & 0.25 & 1.2805(32) & 0.897(59) & 0.95134(33) & 0.9782(17) & 0.836(10) \\
8.0 & 1.20 & 0.15 & 1.2287(18) & 0.855(24) & 0.93770(42) & 0.9224(22) & 0.703(8) \\
\hline
\end{tabular}
\end{table}
\newpage
Finally, in Table~\ref{tab:4nt4} we consider stronger couplings $\ensuremath{\lambda_{\rm lat}} \leq 8$ on a fixed $4^4$ lattice volume.
For $\ensuremath{\lambda_{\rm lat}} < 5$ we are able to proceed as discussed above, fixing $G = 0.05$ and $(\mu L)^2 / \ensuremath{\lambda_{\rm lat}} \approx 2.5$.
However, the corresponding $\ensuremath{\lambda_{\rm lat}} = 5$ run with $\mu = 0.89$ is unstable.
In order to reach stronger couplings we either have to increase $G$ or move to larger $L$.
With the same $G = 0.05$ and $(\mu L)^2 / \ensuremath{\lambda_{\rm lat}} \approx 2.5$, calculations on $6^4$ lattices encounter no difficulties for $\ensuremath{\lambda_{\rm lat}} < 7$.
The bottom half of the table shows some initial $4^4$ explorations of larger $G$, which allow somewhat smaller $\mu$.
Two ensembles with different balances of $(\mu, G)$ are shown for each of $\ensuremath{\lambda_{\rm lat}} = 7$ and 8.
Although larger $G$ and smaller $\mu$ lead to smaller Ward identity violations in both cases, they also produce more severe fluctuations in the pfaffian phase (a worse sign problem)~\cite{Schaich:2015daa}.
\section*{Next steps for lattice $\ensuremath{\mathcal N} = 4$ SYM}
Large-scale numerical calculations are now underway using the improved lattice action summarized above, guided by our current understanding of its parameter space.
The primary targets of these studies are the coupling dependence of the Coulomb coefficient $C(\ensuremath{\lambda} )$ in the $\ensuremath{\mathcal N} = 4$ SYM static potential $V(r) = A + C / r$, and the anomalous dimension of the Konishi operator.
In addition we are exploring the possible sign problem of the lattice theory, the restoration of the other supersymmetries $\ensuremath{\mathcal Q} _a$ and $\ensuremath{\mathcal Q} _{ab}$ in the continuum limit, and other interesting aspects of the system.
We look forward to reporting initial results from these wide-ranging investigations in the near future.
\vspace{12 pt}
\noindent {\sc Acknowledgments:}~I thank my collaborators on lattice $\ensuremath{\mathcal N} = 4$ SYM: Simon Catterall, Poul Damgaard, Tom DeGrand and Joel Giedt.
I am also grateful for useful discussions of the lattice parameter space with Anosh Joseph.
Part of this work was performed at the Aspen Center for Physics (U.S.~National Science Foundation [NSF] Grant No.~1066293), the Kavli Institute for Theoretical Physics (NSF Grant No.~PHY11-25915) and Humboldt University (the Emmy Noether Research Group ``Gauge Fields from Strings''), which I thank for their support and hospitality.
Further support came from the U.S.~Department of Energy (DOE), Office of Science, Office of High Energy Physics, under Award Numbers DE-SC0008669 and DE-SC0009998.
Numerical calculations were carried out on the HEP-TH cluster at the University of Colorado and on the DOE-funded USQCD facilities at Fermilab.
\bibliographystyle{utphys}
\section{Introduction}
\label{sec:introduction}
Recent years have seen rapid progress in the fabrication of van der Waals heterostructures~\cite{geim_nature_2013} comprising graphene, hexagonal boron nitride (hBN), and other two-dimensional (2D) crystals.
These advances have stimulated a large number of theoretical and experimental studies of the optoelectronic properties of these materials and their heterostructures.~\cite{low_natmat_2016, basov_science_2016,koppens_naturenano_2014,ni_nature_2018} A great deal of work has been focused on graphene plasmons, which, in high-quality sheets encapsulated between hBN crystals, have shown truly tantalizing properties.~\cite{low_natmat_2016, basov_science_2016,koppens_naturenano_2014,ni_nature_2018} In view of such properties, it is not hard to envision the near-future realization of a 2D plasmonic platform where plasmon injection, propagation, and detection occur in the complete absence of far-field light, achieved instead via purely electrical methods.
While all-electrical graphene plasmon detection has been recently demonstrated both in the mid infrared~\cite{lundeberg_naturemater_2017} and Terahertz~\cite{pablo_naturenano_2017} spectral ranges, electrical plasmon injection (EPI) remains elusive.
\begin{figure}
\begin{overpic}[width=1.0\linewidth]{fig01}\end{overpic}
\caption{\label{fig:setup}
(Color online) Schematic representation of the double-layer graphene heterostructure studied in this work.
Two graphene sheets lying in the planes $z = z_{\rm GB}$ and $z = z_{\rm GT}$, represented as layers with thickness $\delta$ (gray), are encapsulated by hBN (green) extending from $z_{\rm IB}$ to $z_{\rm IT}$ above a semi-infinite ${\rm SiO}_2$ substrate (yellow).
The thicknesses of the bottom, middle, and top hBN layers are $d_{\rm B}$, $d$, and $d_{\rm T}$, respectively.
The top semi-space $z > z_{\rm IT}$ is filled with vacuum or a model molecular material (hatched).
An electric bias voltage $V_{\rm b}$ is maintained between the bottom and the top graphene layers by an external source.
Because of the bias voltage, electrons tunnel from the top to the bottom layer, establishing a tunnel current and inducing electric dipolar charges $\rho(z)$ (red and blue circles), oscillating at an angular frequency $\omega$, which couple to the traveling electric field (orange line) of the collective modes of the double-layer heterostructure. }
\end{figure}
A promising route to achieve EPI is offered by a tunnel current between two graphene sheets separated by a thin insulating barrier.
Plasmon emission by tunnel currents has been demonstrated in metal-semiconductor interfaces,~\cite{tsui_prl_1969, tsui_pr_1969} degenerate semiconductors,~\cite{duke_pr_1969} metal-insulator-metal tunnel junctions,~\cite{lambe_prl_1976} and between metallic tips and surfaces.~\cite{berndt_prl_1991}
Early experiments focused on the spectroscopic signatures of plasmon excitations in the tunnel current.~\cite{wolf_rep_prog_phys_1978}
Later on, plasmon excitations were shown~\cite{lambe_prl_1976} to couple the tunnel current to propagating electromagnetic modes, achieving light emission from tunnel junctions.
Recent spectroscopy studies of graphene layers~\cite{brar_prl_2010} and graphene-based heterostructures~\cite{vdovin_prl_2016,ghazaryan_natureelectron_2018} have demonstrated the existence of electron-plasmon interactions and of phonon- and magnon-assisted tunneling, respectively.
These studies suggest that the goal of EPI is within reach, and motivate a thorough theoretical investigation of the phenomenon.
In this work, we theoretically study the problem of EPI via a tunnel current.
We consider the double-layer heterostructure depicted in Fig.~\ref{fig:setup}, comprising two graphene sheets encapsulated by hBN, on an ${\rm SiO}_2$ substrate.
The semi-infinite space above the heterostructure consists of either vacuum or a toy-model molecular material with a simple absorption spectrum.
The graphene double-layer supports electronic collective modes~\cite{profumo_prb_2012} (``optical'' and ``acoustic'' plasmons), which hybridize with the optical phonon polaritons of hBN and the molecular excitations.
An external source applies a bias voltage to the two graphene sheets and generates a tunnel current, which feeds the collective modes of the system.
We calculate the power delivered by the tunnel current to the collective modes as the bias voltage is varied, showing that the system works both as a plasmon source and a spectrally-resolved molecular sensor.
The manuscript is organized as follows.
In Sect.~\ref{sec:model} we outline our theoretical formulation, which includes: (i) the quantum mechanical description of the tunneling electrons; (ii) the dielectric functions of the various layers; and (iii) the method to calculate the electric field distribution in the double-layer heterostructure.
In Sect.~\ref{sec:modes} we report analytical expressions for the collective modes of the graphene double-layer, i.e.~the optical and acoustic plasmons, and for the graphene double-layer coupled to a molecular layer, i.e.~a molecular polariton.
The results of our numerical calculations are discussed in Sect.~\ref{sec:results} and compared to the analytical expressions.
Finally, in Sect.~\ref{sec:conclusions} we draw our main conclusions.
\section{Theoretical formulation}
\label{sec:model}
\subsection{Theories of plasmon injection by a tunnel current}
\label{sec:theories}
The calculation of the elastic tunnel current between metallic surfaces separated by a thin insulating layer was first considered by J.~Bardeen in a seminal paper,~\cite{bardeen_prl_1961} which introduced the fundamental concepts that later evolved into the so-called ``transfer-Hamiltonian'' method.~\cite{duke_book_1969}
This method was soon adapted to take into account inelastic tunneling,~\cite{bennet_pr_1968} i.e.~the interaction of tunneling electrons with impurities and collective electronic excitations localized around the insulating layer.
This approach was successfully applied to the case of surface plasmons at metal-semiconductor interfaces as well.~\cite{ngai_prl_1969,economou_prb_1971}
Notwithstanding these early successes, the transfer-Hamiltonian method was the object of several critiques, because a rigorous assessment of its range of validity was missing.~\cite{feuchtwang_book_1978}
The two most criticized points of the theory were the perturbative treatment of the tunneling operator and the precise specification of the ``initial'' and ``final'' single-particle wave functions involved in the tunneling process.
The lack of general agreement on the range of validity of the transfer-Hamiltonian method stimulated several alternative, although related, approaches,~\cite{brailsford_prb_1970,caroli_jpc_1971,feuchtwang_prb_1974} to treat elastic and inelastic tunnel currents.
The theories mentioned above were motivated by experiments using the tunnel current as a spectroscopic tool.~\cite{wolf_rpp_1978}
The existence of plasmon modes (or of other kinds of excitations) localized around the insulating layer was taken into account by these theories in the form of inelastic tunneling channels, affecting the tunneling rate and the density of states and, hence, the current-voltage characteristics.
After the experimental demonstration of light-emission from tunnel junctions,~\cite{lambe_prl_1976} however, more work was devoted to the relation between the tunnel current and the intensity of the emitted radiation.
The tunnel current excites plasmon modes at the interfaces, which subsequently couple to propagating electromagnetic modes.
The energy-momentum mismatch between plasmon and propagating modes is overcome if the surfaces are sufficiently rough.
Different plasmon modes at the interface have different roles in this two-step process, coupling more to the tunnel current (``slow'' modes) or to the photonic modes (``fast'' modes).
First, a theory by L.C.~Davis~\cite{davis_prb_1977} explained the light emission from tunnel junctions in terms of the classical coupling between the tunnel current and the electric field of the slow-mode plasmon at the interface.
Then, using the transfer-Hamiltonian method, it was proposed~\cite{hone_apl_1978, rendell_prb_1981} that random fluctuations of the tunnel current drive the slow mode in the insulating layer.
Based on this concept, B.~Laks and D.L.~Mills~\cite{laks_prb_1979, laks_prb_1980} formulated a fruitful theory that made it possible, in particular, to discuss the role of the slow and fast plasmon modes in the light-emission process.
Later on, the theory by Laks and Mills was used to study light emission in the more complicated geometry of a scanning tunneling microscope tip in the vicinity of a surface.~\cite{johansson_prb_1990, uehara_jjap_1992}
Very recently, the process of plasmon emission by a tunnel current between graphene sheets was studied in Refs.~\onlinecite{enaldiev_prb_2017,devega_acsphotonics_2017}. Both these works are developed around the concept that tunneling is driven by electron-electron interactions between graphene's carriers.
In Ref.~\onlinecite{enaldiev_prb_2017}, plasmon excitations are encoded in the pole structure of the density-density polarization function of the graphene double layer, which is calculated and related to the tunnel current.
Ref.~\onlinecite{devega_acsphotonics_2017}, instead, uses an effective interaction, obtained as the electric potential produced by an external charge screened by the graphene sheets, treated as conductors with finite conductivity.
In this work, we chose to follow the theoretical approach introduced by Davis,~\cite{davis_prb_1977} which consists of the following steps: (i) calculate the stationary wave function of the tunneling electrons; (ii) derive an electronic charge density, oscillating in a dipolar fashion between the two graphene sheets; and (iii) solve the Poisson equation for the electric field in the heterostructure, using the charge density calculated in the previous step as the source term.
Two features set this method apart from other approaches.~\cite{enaldiev_prb_2017,devega_acsphotonics_2017}
First, there is no notion of an effective interaction between electrons and plasmons, as is common to calculations based on a transfer-Hamiltonian formulation.
The advantage is that our approach circumvents the need for a perturbative expansion in the strength of the light-matter interaction.
Second, the dipolar oscillations of the electronic charge density are purely due to the quantum interference between the stationary wave functions of the tunneling electrons.
In this way, the method separates the calculation of the power delivered to the collective modes from the calculation of the back-action of the electric field on the tunnel current density, which was performed elsewhere.~\cite{enaldiev_prb_2017,devega_acsphotonics_2017}
In the following sections, we provide the elements of the theory which are needed to take the steps (i)--(iii) outlined above.
\subsection{Tunneling-induced dipoles}
\label{sec:tunn_dipoles}
\begin{figure}
\begin{overpic}[width=1.0\linewidth]{fig02a}\put(2,55){(a)}\end{overpic}
\begin{overpic}[width=1.0\linewidth]{fig02b}\put(2,55){(b)}\end{overpic}
\caption{\label{fig:charge}
(a) The envelope wave functions $\chi_{\rm i}(z)$ (solid line) and $\chi_{\rm f}(z)$ (dashed line) of the electronic wave functions, for $V_{\rm b} = -1.0~{\rm V}$.
The vertical dotted lines represent the position of the two graphene sheets.
The large bias voltage produces a substantial localization of the two wave functions in the top and bottom graphene sheet, respectively.
(b) The electron charge density $\rho(z)$ defined in Eq.~(\ref{eq:transitioncharge}), which oscillates with angular frequency $\omega$.
The plot demonstrates polarization of opposite charges on the two graphene
sheets, i.e.~an electric dipole along the $z$ direction.
The inset shows the relation between the oscillation energy $\hbar \omega$ and the bias voltage.
The parameters used in the calculation are given in Sect.~\ref{sec:results}. }
\end{figure}
Let us first consider the calculation of the wave function of electrons tunneling in the graphene double-layer.
Along the $z$ direction, the electric potential $U(z)$ experienced by an electron is: (i) constant for $z < z_{\rm GB}$ and $z > z_{\rm GT}$; (ii) linearly varying in the interval $z_{\rm GB} < z < z_{\rm GT}$ because of the bias voltage, ranging from $U(z \to z_{\rm GB}) = -e V_{\rm b} / 2$ to $U(z \to z_{\rm GT}) = e V_{\rm b} / 2$ (where $e$ is the absolute value of the electron charge); and (iii) singular at the positions of the graphene sheets, represented as a negative Dirac delta function with amplitude $2[\hbar^{2} W_{\rm b} / (2 m_{\rm eff})]^{1/2}$, where $m_{\rm eff}$ is the electronic effective mass and $W_{\rm b}$ is the work function of graphene in hBN.
Since the electron wave function is essentially localized around the graphene double-layer, we neglect the interfaces $z = z_{\rm IB}$ and $z = z_{\rm IT}$.
The electron wave function in the heterostructure can be written in the product form
\begin{equation}
\psi({\bm r}, z; t) = \chi(z) e^{i {\bm q} \cdot {\bm r}} e^{- i \varepsilon t / \hbar}~,
\end{equation}
where ${\bm r}$ (${\bm q}$) is a position vector (wave vector) in the graphene plane and the envelope wave function $\chi(z)$ is normalized such that $\int dz |\chi(z)|^{2} = 1$.
The Schr\"odinger equation can be easily solved by separation of variables, using a linear combination of exponentials and Airy functions, and matching boundary conditions at the discontinuities of the potential.
One obtains two bound states $|{\rm i}\rangle$ and $|{\rm f}\rangle$, with envelope wave functions $\chi_{\rm i}(z)$ and $\chi_{\rm f}(z)$, and energies $\varepsilon_{\rm i} > \varepsilon_{\rm f}$.
In this calculation, we neglect the kinetic energy due to the in-plane motion (i.e.~the band dispersion) as well as Fermi statistics.
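As a rough numerical cross-check of this construction, one can diagonalize a finite-difference Hamiltonian in which narrow square wells stand in for the Dirac delta functions at the graphene sheets; the depths, widths, grid, and bias below are illustrative choices of our own, with $\hbar = m_{\rm eff} = 1$.

```python
import numpy as np

# Finite-difference sketch of the bound states: two narrow attractive
# wells (standing in for the delta functions at the graphene sheets)
# plus the linear bias potential between them. All parameter values
# are illustrative; hbar = m_eff = 1.
N, zmax = 1500, 7.5
z = np.linspace(-zmax, zmax, N)
dz = z[1] - z[0]

d, width, depth, bias = 2.0, 0.1, 40.0, 0.5     # separation, well shape, e*V_b
U = bias * np.clip(z, -d / 2, d / 2) / d        # linear between the sheets
U[np.abs(z + d / 2) < width / 2] -= depth       # bottom graphene sheet
U[np.abs(z - d / 2) < width / 2] -= depth       # top graphene sheet

# H = -(1/2) d^2/dz^2 + U(z), second-order finite differences
kin = 0.5 / dz**2
H = np.diag(2 * kin + U) - kin * np.eye(N, k=1) - kin * np.eye(N, k=-1)
E, psi = np.linalg.eigh(H)

# The two lowest eigenstates are the bound states |f> and |i|
chi_f, chi_i = psi[:, 0] / np.sqrt(dz), psi[:, 1] / np.sqrt(dz)
hbar_omega = E[1] - E[0]                        # transition energy
```

With the bias applied, the lower-energy state localizes on the bottom sheet and the transition energy is close to $e V_{\rm b}$, mirroring the behavior shown in Fig.~\ref{fig:charge}.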
The crucial step in our approach is to recognize that the electronic system is open, in the sense that it is connected to an external source which injects and extracts electrons.
For this reason, electrons do not persist indefinitely in an eigenstate of the Hamiltonian, but occupy, in general, states which are coherent superpositions of the two eigenstates.
The general electronic wave function then reads
\begin{equation}\label{eq:superposition}
\Psi({\bm r}, z; t) = \alpha_{\rm i} \chi_{\rm i}(z) e^{i {\bm q}_{\rm i} \cdot {\bm r}} e^{-i \varepsilon_{\rm i} t / \hbar} + \alpha_{\rm f} \chi_{\rm f}(z) e^{i {\bm q}_{\rm f} \cdot {\bm r}} e^{-i \varepsilon_{\rm f} t / \hbar}~.
\end{equation}
This wave function describes the electronic state until a quantum jump takes place, realizing a tunneling event from the initial $|{\rm i}\rangle$ to the final state $|{\rm f}\rangle$.
Uncorrelated tunneling events build up the total tunnel current between the two graphene layers.
The quantum dissipative dynamics responsible for the quantum jump could be modeled with a quantum master equation,~\cite{gardiner_zoller} with a Lindblad term describing the action of the external source in terms of electron extraction from the upper layer and electron injection into the bottom layer.
In this work, we leave the precise form of the dissipative dynamics unspecified because, for what follows, the values of the coefficients $\alpha_{\rm i, f}$ are not important and it is sufficient to absorb the product $\alpha_{\rm f}^{\ast} \alpha_{\rm i}$ into the definition of an effective density $\bar{n}_{\rm t}$ of tunneling electrons.
We point out that Eq.~(\ref{eq:superposition}) represents a pure state, but the result of the quantum master equation is in general a density matrix $\hat{\rho}$.
In this case, the role of the product $\alpha_{\rm f}^{\ast} \alpha_{\rm i}$ is played by the ``coherence'' $\langle {\rm f} | \hat{\rho} | {\rm i} \rangle$.
Finally, we notice that, since we neglect the band dispersion, the wave vectors ${\bm q}_{\rm i, f}$ are unrelated.
The charge density derived from the wave function~(\ref{eq:superposition}) is $\rho({\bm r}, z; t) = -e \bar{n}_{\rm t} |\Psi({\bm r}, z; t)|^{2}$.
Upon expanding the squared modulus, one finds two stationary parts, proportional to $|\chi_{\rm i, f}(z)|^{2}$, and two parts oscillating with the transition frequency $\omega = (\varepsilon_{\rm i} - \varepsilon_{\rm f}) / \hbar$ and its complex conjugate.
The oscillating part, which we refer to as ``transition charge density,'' is
\begin{equation}\label{eq:transitioncharge}
\rho_{\rm t}({\bm r}, z; t) = \bar{n}_{\rm t} \rho(z) e^{i {\bm q} \cdot {\bm r}} e^{-i \omega t}~,
\end{equation}
with ${\bm q} = {\bm q}_{\rm i} - {\bm q}_{\rm f}$, and $\rho(z) = - e \chi_{\rm f}(z)^{\ast} \chi_{\rm i}(z)$.
Notice that Eq.~(\ref{eq:transitioncharge}) is in general a complex quantity.
Since the electron wave functions are localized around the position of the graphene planes, the transition charge density has a predominantly dipolar character, as shown in Fig.~\ref{fig:charge}.
In conclusion, electrons, tunneling between bound states, create an electric dipole oscillating at the transition frequency.
The dipole oscillation is of purely quantum origin, because it follows from the superposition in the wave function~(\ref{eq:superposition}).
In the following sections, we study the coupling of the oscillating dipoles to the electric field and the collective modes of the double-layer heterostructure.
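A toy numerical illustration of this quantum beat, in which Gaussian envelopes of different widths replace the actual $\chi_{\rm i, f}$ (the asymmetry keeps the dipole oscillation nonzero); all parameter values are invented, with $\hbar = e = 1$ and $|\alpha_{\rm i}| = |\alpha_{\rm f}| = 1/\sqrt{2}$:

```python
import numpy as np

# Superposition of two normalized envelopes localized on different
# sheets: the resulting charge density has a dipole moment oscillating
# at omega = eps_i - eps_f. All parameters invented; hbar = e = 1.
z = np.linspace(-5, 5, 2001)
dz = z[1] - z[0]
chi_i = np.exp(-(z - 1.0) ** 2)            # "top sheet" envelope
chi_f = np.exp(-(z + 1.0) ** 2 / 0.36)     # narrower "bottom sheet" envelope
chi_i /= np.sqrt(np.sum(chi_i ** 2) * dz)
chi_f /= np.sqrt(np.sum(chi_f ** 2) * dz)

eps_i, eps_f = 1.0, 0.3
omega = eps_i - eps_f

def dipole(t):
    psi = (chi_i * np.exp(-1j * eps_i * t)
           + chi_f * np.exp(-1j * eps_f * t)) / np.sqrt(2)
    rho = -np.abs(psi) ** 2                # electron charge density (e = 1)
    return float(np.sum(z * rho) * dz)     # dipole moment along z

T = 2 * np.pi / omega                      # dipole is periodic with period T
```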
\subsection{Dielectric function of the graphene layers}
\label{sec:dielectric_graphene}
In the present and in the following two sections, we detail the dielectric functions of the different layers in the heterostructure of Fig.~\ref{fig:setup}.
To calculate the electric potential, we choose to take into consideration the finite thickness $\delta$ of each graphene sheet, which is smaller than the inter-layer separation $d$, but not negligible.
We then need to provide an effective 3D dielectric function $\epsilon(q, \omega)$ for the finite-thickness graphene layer.
We start out from the random-phase approximation (RPA)~\cite{giuliani_and_vignale_2005}
\begin{equation}\label{eq:rpa_3D}
\epsilon(q, \omega) = 1 - v_{q} \tilde{\chi}^{(0)}(q, \omega)~,
\end{equation}
where $v_{q} = 4 \pi e^{2} / q^{2}$ is the Fourier transform of the Coulomb interaction potential between carriers in the finite-thickness graphene layer and $\tilde{\chi}^{(0)}(q, \omega)$ is the proper non-interacting density-density polarization function (i.e.~the Lindhard function).
To connect this effective Lindhard function to the well-known Lindhard function $\tilde{\chi}_{\rm 2D}^{(0)}(q, \omega)$ of massless Dirac fermions (MDF) in graphene,~\cite{kotov_rmp_2012} we use the linear-response relations~\cite{giuliani_and_vignale_2005} $\bar{n}(q, \omega) = \tilde{\chi}_{\rm 2D}(q, \omega) V_{\rm ext}(q, \omega)$ and $n(q, \omega) = \tilde{\chi}(q, \omega) V_{\rm ext}(q, \omega)$, where $V_{\rm ext}(q, \omega)$ is an external scalar potential and $\bar{n}(q, \omega)$ [$n(q, \omega)$] is the induced density fluctuation in the 2D (finite-thickness) graphene layer.
Neglecting density variations in the $z$ direction, we have $\bar{n}(q, \omega) = \delta \times n(q, \omega)$, which implies $\tilde{\chi}(q, \omega) = \tilde{\chi}_{\rm 2D}(q, \omega) / \delta$.
[The same relation then holds for the non-interacting polarization functions, $\tilde{\chi}^{(0)}(q, \omega) = \tilde{\chi}^{(0)}_{\rm 2D}(q, \omega) / \delta$.]
Substituting these relations into Eq.~(\ref{eq:rpa_3D}), we find
\begin{equation}\label{eq:rpa}
\epsilon(q, \omega) = 1 - \frac{2}{q \delta} v_{{\rm 2D}, q} \tilde{\chi}^{(0)}_{\rm 2D}(q, \omega)~,
\end{equation}
where $v_{{\rm 2D}, q} = 2 \pi e^{2} / q$ is the Fourier transform of the Coulomb interaction potential between MDF in the graphene sheet.
Notice the ``form factor'' $2 / (q \delta)$ which differentiates Eq.~(\ref{eq:rpa}) from the well-known RPA for MDF.
The Lindhard function $\tilde{\chi}^{(0)}_{\rm 2D}$ depends on the average 2D electron density $\bar{n}$.
Here, we assume that the electron density is the same in both graphene sheets and that the Fermi energy lies above the Dirac point.
We have verified that, using Eq.~(\ref{eq:rpa}) in the limit $q \delta \ll 1$, one recovers the correct expressions (see Sect.~\ref{sec:analytical}) for the plasmon spectrum in a single- and double-layer graphene system.
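The single-layer statement can be checked in a few lines: substituting the local (Drude-limit) approximation $\tilde{\chi}^{(0)}_{\rm 2D}(q, \omega) \simeq \varepsilon_{\rm F} q^{2} / (\pi \hbar^{2} \omega^{2})$ for the MDF Lindhard function --- our simplification, valid for $\omega \gg v_{\rm F} q$, not the full expression --- into the 2D RPA condition $1 - v_{{\rm 2D}, q} \tilde{\chi}^{(0)}_{\rm 2D} = 0$ yields the familiar long-wavelength plasmon $\omega^{2} = 2 e^{2} \varepsilon_{\rm F} q / \hbar^{2}$.

```python
import math

# Drude-limit check of the 2D RPA plasmon condition.
# Units: hbar = e = eps_F = 1 (Gaussian units).

def chi0_2d(q, w):
    return q ** 2 / (math.pi * w ** 2)    # local limit of the MDF Lindhard fn

def eps_2d(q, w):
    v2d = 2 * math.pi / q                 # 2D Fourier transform of e^2 / r
    return 1 - v2d * chi0_2d(q, w)

for q in (0.01, 0.04):
    w_pl = math.sqrt(2 * q)               # analytic long-wavelength root
    assert abs(eps_2d(q, w_pl)) < 1e-12   # the RPA dielectric function vanishes
```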
\subsection{Dielectric function of the hBN crystals}
The encapsulant we have chosen for our calculations, hBN, is an anisotropic and uniaxial material, meaning that its dielectric function has different values in the crystal plane $[\epsilon_{xy}(\omega)]$ and in the stacking direction $[\epsilon_{z}(\omega)]$, which are principal directions of the dielectric tensor.~\cite{dai_science_2014,caldwell_naturecommun_2014}
Moreover, hBN is a natural hyperbolic material, i.e.~there are frequency ranges, called \emph{reststrahlen} bands, where $\epsilon_{xy}(\omega)$ and $\epsilon_{z}(\omega)$ have different signs, producing a peculiar propagation of the electric field.~\cite{dai_science_2014,caldwell_naturecommun_2014}
These properties are captured by the following dielectric function
\begin{equation}\label{eq:eps_hbn}
\epsilon_{\alpha}(\omega) = \epsilon_{\alpha,\infty} + \frac{(\epsilon_{\alpha,0} - \epsilon_{\alpha,\infty})\omega_{\alpha,{\rm T}}^{2}}{\omega_{\alpha,{\rm T}}^{2} - \omega^{2} - i \gamma_{\alpha} \omega}~,
\end{equation}
where $\alpha = xy$ or $\alpha = z$.
Here, $\epsilon_{\alpha,\infty}$ and $\epsilon_{\alpha,0}$ are high- and low-frequency dielectric constants, $\omega_{\alpha, {\rm T}}$ is the frequency of transverse optical phonon-polaritons in the $\alpha$ direction, and $\gamma_{\alpha}$ is the corresponding damping rate.
The frequencies of the corresponding longitudinal modes are given by the Lyddane-Sachs-Teller relation~\cite{grosso_book} $\omega_{\alpha, {\rm L}} = \omega_{\alpha, {\rm T}} [\epsilon_{\alpha,0} / \epsilon_{\alpha,\infty}]^{1/2} > \omega_{\alpha, {\rm T}}$.
The lower (upper) reststrahlen band is located in the frequency range $\omega_{z, {\rm T}} < \omega < \omega_{z, {\rm L}}$ ($\omega_{xy, {\rm T}} < \omega < \omega_{xy, {\rm L}}$).
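A minimal numerical sketch of Eq.~(\ref{eq:eps_hbn}) for one principal direction, verifying the Lyddane-Sachs-Teller relation and the sign of $\Re\,\epsilon$ inside the reststrahlen band; the parameter values below are representative placeholders, not fitted hBN data.

```python
import math

# One principal direction of a uniaxial phonon-polariton dielectric
# function; placeholder parameters, energies in eV with hbar = 1.
eps0, epsinf, wT, gamma = 6.9, 4.9, 92.5e-3, 1e-3

def eps(w):
    return epsinf + (eps0 - epsinf) * wT ** 2 / (wT ** 2 - w ** 2 - 1j * gamma * w)

wL = wT * math.sqrt(eps0 / epsinf)       # Lyddane-Sachs-Teller relation
assert wL > wT
assert abs(eps(wL)) < 0.2                # eps ~ 0 at the longitudinal frequency
assert eps(0.5 * (wT + wL)).real < 0     # reststrahlen band: Re(eps) < 0
assert eps(0.5 * wT).real > 0            # ordinary dielectric below the band
```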
\subsection{Dielectric function of the molecular layer}
\label{sec:analytical}
For the dielectric function of the molecular material at $z > z_{\rm IT}$ we use the expression
\begin{equation}\label{eq:eps_mol}
\epsilon_{\rm mol}(\omega) = \epsilon_{\infty} + \frac{(\epsilon_{0} - \epsilon_{\infty}) \Omega_{0}^{2}}{\Omega_{0}^{2} - \omega^{2} - i \gamma \omega / \hbar}~.
\end{equation}
This expression, similar in form to Eq.~(\ref{eq:eps_hbn}), is easily derived starting from the equation of motion of the position vector ${\bm R}$ of a bound electron in the presence of an electric field ${\bm E}$,
\begin{equation}
\ddot{\bm R}(t) = (-e) {\bm E} / m - \Omega_{0}^{2} {\bm R}(t) - \gamma \dot{\bm R}(t) / \hbar~,
\end{equation}
where $\hbar \Omega_{0}$ is the energy of the electronic resonance, $\gamma$ is its broadening, $m$ is an effective electronic mass, and all the variables oscillate with angular frequency $\omega$.
From the expression for the steady-state polarization ${\bm P}_{\rm mol} = -e n_{\rm mol} {\bm R}$, with $n_{\rm mol}$ the three-dimensional molecular density, one derives the polarization $\chi_{\rm mol}(\omega)$ such that ${\bm P}_{\rm mol} = \chi_{\rm mol}(\omega) {\bm E}$, and then the dielectric function $\epsilon_{\rm mol}(\omega) = \epsilon_{\infty} + 4 \pi \chi_{\rm mol}(\omega)$.
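Explicitly, with all quantities oscillating as $e^{-i\omega t}$, the equation of motion is solved in steady state by
\begin{equation}
{\bm R}(\omega) = \frac{(-e)\,{\bm E}/m}{\Omega_{0}^{2} - \omega^{2} - i \gamma \omega / \hbar}~,
\qquad
\chi_{\rm mol}(\omega) = \frac{e^{2} n_{\rm mol}/m}{\Omega_{0}^{2} - \omega^{2} - i \gamma \omega / \hbar}~.
\end{equation}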
We neglect local-field effects such as those described by the well-known Clausius-Mossotti formula.~\cite{grosso_book}
The high-frequency dielectric constant $\epsilon_{\infty}$ encodes the small-scale details of the molecular material while the low-frequency constant $\epsilon_{0} = \epsilon_{\rm mol}(0)$ is identified from Eq.~(\ref{eq:eps_mol}) as $\epsilon_{0} = \epsilon_{\infty} + 4 \pi e^{2} n_{\rm mol} / (m \Omega_{0}^{2}) > \epsilon_{\infty}$.
In practice, the dielectric function is more easily specified by treating $\Omega_{0}$, $\gamma$, $\epsilon_{0}$, and $\epsilon_{\infty}$ as independent constants.
\subsection{Electric potential in the double-layer heterostructure}
\label{sec:potential}
The electric potential $\phi({\bm r}, z; t) = \phi(z) e^{-i {\bm q} \cdot {\bm r}} e^{-i \omega t}$ in the double-layer heterostructure, in the presence of a tunnel current, is obtained by solving the Poisson equation
\begin{equation}
- \nabla \cdot \left [ \epsilon_{\ell}(\omega) \nabla \phi({\bm r}, z; t) \right ] = 4 \pi \rho_{\rm t}({\bm r}, z; t)~,
\end{equation}
with the transition charge density~(\ref{eq:transitioncharge}) as the source.
The dielectric function $\epsilon_{\ell}(\omega)$ of each layer $\ell$ has been described in the previous sections.
Let us summarize the method to solve the Poisson equation in the heterostructure,~\cite{maier_book_2007} which amounts to a transfer-matrix approach accounting for the presence of the source.
In each layer $z_{\ell} < z < z_{\ell + 1}$ the electric potential is written as $\phi_{\ell}(z) = \alpha_{\ell} e^{-q_{\ell}(z - z_{\ell})} + \beta_{\ell} e^{-q_{\ell}(z_{\ell + 1} - z)} + g_{\ell}(z)$, where the function $g_{\ell}(z)$ solves the Poisson equation in the $\ell$th layer without taking into account the boundary conditions.
The boundary conditions state that: (i) the electric potential is continuous at the interfaces; (ii) the field vanishes away from the heterostructure; and (iii) the $z$ component of the displacement field is continuous at the interfaces.
The last boundary condition holds because, in Davis' approach,~\cite{davis_prb_1977} the conducting regions (in our case, the graphene layers) are effectively treated as dielectrics, i.e.~the electronic polarization is taken into account in the dielectric function~(\ref{eq:rpa}) and does not generate free charges at the interfaces of the heterostructure.
Applying the boundary conditions, we first find that $q_{\ell} = q [\epsilon_{xy}(\omega)/\epsilon_{z}(\omega)]^{1/2}$ in the hBN layers and $q_{\ell} = q$ otherwise, and we then obtain a linear system $L({\bm q}, \omega)$ of $12$ equations for $12$ unknowns $\{\alpha_{\ell}$, $\beta_{\ell}\}_{\ell = 1}^{6}$, for each pair of values ${\bm q}$, $\omega$, which we solve numerically.
A solution of the linear system $L({\bm q}, \omega)$ is found for any wave vector ${\bm q}$ and frequency $\omega$ because the Poisson equation yields the field $\phi({\bm r}, z; t)$ produced by a given charge density $\rho_{\rm t}({\bm r}, z; t)$.
On the other hand, to find the collective modes of the heterostructure, i.e.~the self-sustained oscillations of the electric field, one has to solve the Laplace equation, i.e.~the Poisson equation with charge density set to zero.
At fixed ${\bm q}$, the Laplace equation can be solved only for a discrete set of values of $\omega$, corresponding to the angular frequencies of the collective modes.
Finding the solutions of the Laplace equation at fixed ${\bm q}$ is thus analogous to calculating the eigenvalues and eigenvectors of a secular equation, with the added considerable difficulty that here the dependence on the eigenvalues (i.e.~$\omega$) is nonlinear.
In practice, we proceed by numerically finding the roots of the function of $\omega$ defined as follows:
(i) we set $\beta_{6} = 1$;
(ii) we solve the reduced linear system given by the first $11$ equations;
(iii) we calculate the value of the $12$th equation.
When this function is zero, then all $12$ equations are solved and $\omega$ is an eigenfrequency of the system.
In this procedure, the ordering of the equations and variables is arbitrary; however, we find a higher numerical accuracy by including in the reduced linear system the equations which represent the continuity of the potential at the interfaces.
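The procedure (i)-(iii) amounts to root-finding on the residual of the last equation of a nonlinear eigenproblem $M(\omega)\,x = 0$. A minimal sketch follows, on a hypothetical $2\times 2$ secular matrix chosen so that the eigenfrequency is known analytically; the actual system $L({\bm q},\omega)$ is $12\times 12$ with matrix elements fixed by the boundary conditions.

```python
import numpy as np
from scipy.optimize import brentq

def residual(omega, M_of_omega):
    """Set the last unknown to 1, solve the reduced system made of the
    first n-1 equations, and return the value of the n-th equation."""
    M = M_of_omega(omega)
    n = M.shape[0]
    x = np.linalg.solve(M[: n - 1, : n - 1], -M[: n - 1, n - 1])
    return M[n - 1, : n - 1] @ x + M[n - 1, n - 1]

# Toy secular matrix with det M = omega**2 - 4: eigenfrequency omega = 2
M_toy = lambda w: np.array([[2.0, -w], [w, -2.0]])
omega_star = brentq(lambda w: residual(w, M_toy), 1.0, 3.0)
```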
Finally, the power $P$ per area $A$ delivered by the tunnel current to the collective modes reads
\begin{equation}\label{eq:dissipation}
\frac{P}{A} = 2 \omega \bar{n}_{\rm t} \int_{z_{\rm IB}}^{z_{\rm IT}} dz {\rm Im}[\phi(z)^{\ast} \rho(z)]~.
\end{equation}
It may seem surprising that in Eq.~(\ref{eq:dissipation}) the absorption is due to the same charge density $\rho(z)$ which generates the potential $\phi(z)$.
A more careful look, however, shows that the phase between $\phi(z)$ and $\rho(z)$, which makes the integral non-vanishing, is due to the imaginary part of the dielectric functions.
In other words, Eq.~(\ref{eq:dissipation}) represents the energy dissipated into the electronic and molecular degrees of freedom of the heterostructure.
Since we solve the Poisson equation, ignoring retardation in the Maxwell equations, the electric field that we calculate does not describe coupling to the far-field modes, so the contribution of radiation losses is not present in Eq.~(\ref{eq:dissipation}).
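A discretized evaluation of Eq.~(\ref{eq:dissipation}) makes the phase argument concrete: if $\phi(z)$ and $\rho(z)$ are exactly in phase the integrand ${\rm Im}[\phi^{\ast}\rho]$ vanishes, while a relative phase (inserted by hand below; in the heterostructure it originates from the imaginary part of the dielectric functions) yields a finite absorbed power. The spatial profile and prefactors here are placeholders, not the solutions of the Poisson equation.

```python
import numpy as np

z = np.linspace(0.0, 1.0, 1001)            # stands in for [z_IB, z_IT]
profile = np.sin(np.pi * z)                # placeholder spatial shape

def trap(f):                               # trapezoidal quadrature on z
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def power_per_area(phase, omega=1.0, n_t=1.0):
    phi = profile                          # potential (taken real)
    rho = profile * np.exp(1j * phase)     # charge with a relative phase
    return 2.0 * omega * n_t * trap(np.imag(np.conj(phi) * rho))

in_phase = power_per_area(0.0)             # no phase lag: no net dissipation
shifted = power_per_area(0.3)              # finite absorbed power
```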
\section{Collective modes of the double-layer heterostructure}
\label{sec:modes}
A graphene double-layer, in a uniform medium with dielectric constant $\bar{\epsilon}$, supports a high-energy ``optical'' plasmon mode with dispersion~\cite{profumo_prb_2012}
\begin{equation}\label{eq:optical_plasmon}
\hbar \omega_{\rm op}(q\to 0) = \sqrt{N_{\rm f} \varepsilon_{\rm F} e^{2} q / (\sqrt{2} \bar{\epsilon})}~,
\end{equation}
where $N_{\rm f} = 4$ is the number of fermion flavors and $\varepsilon_{\rm F}$ is the Fermi energy.
The latter is given by $\varepsilon_{\rm F}= \hbar v_{\rm F} k_{\rm F}$, with the Fermi wave vector $k_{\rm F} = \sqrt{\pi \bar{n}}$ and Fermi velocity $v_{\rm F}$.~\cite{katsnelson_book}
(We reiterate our assumption that the electron density is the same in both graphene sheets and that the Fermi energy lies above the Dirac point.)
This mode corresponds to the plasmon mode of a single layer with twice the density.
The graphene double layer also supports a low-energy acoustic mode with dispersion~\cite{profumo_prb_2012}
\begin{equation}\label{eq:acoustic_plasmon}
\hbar \omega_{\rm ac}(q \to 0) = \hbar v_{\rm F} q \frac{1 + k_{\rm TF} d}{\sqrt{1 + 2 k_{\rm TF} d}}~,
\end{equation}
where $k_{\rm TF} = 4 k_{\rm F} \alpha_{\rm ee}$ is the Thomas-Fermi wave vector, with $\alpha_{\rm ee} = e^{2} / (\bar{\epsilon} \hbar v_{\rm F})$ the electron-electron coupling strength.
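As a rough numerical sketch of these dispersions, one can plug in representative numbers; the values below assume $v_{\rm F} \approx 10^{6}~{\rm m/s}$, Gaussian units with $e^{2} \simeq 1.44~{\rm eV\,nm}$, and the density, spacing, and dielectric parameters quoted in Sect.~\ref{sec:results}.

```python
import math

# Assumed constants (Gaussian units; energies in meV, lengths in nm):
hbar_vF = 658.2                      # hbar * v_F for v_F ~ 1e6 m/s
e2 = 1440.0                          # e^2 = 1.44 eV nm
n_bar = 0.03                         # 3e12 cm^-2 in nm^-2
eps_bar = math.sqrt(6.70 * 3.56)     # sqrt(eps_xy,0 * eps_z,0)
d = 1.0                              # interlayer distance, nm
N_f = 4

k_F = math.sqrt(math.pi * n_bar)
eps_F = hbar_vF * k_F                # Fermi energy, ~200 meV

q = 0.05                             # nm^-1, long-wavelength regime
w_op = math.sqrt(N_f * eps_F * e2 * q / (math.sqrt(2) * eps_bar))

alpha_ee = e2 / (eps_bar * hbar_vF)  # electron-electron coupling strength
k_TF = 4 * k_F * alpha_ee            # Thomas-Fermi wave vector
w_ac = hbar_vF * q * (1 + k_TF * d) / math.sqrt(1 + 2 * k_TF * d)
```

Note that the factor $(1 + k_{\rm TF} d)/\sqrt{1 + 2 k_{\rm TF} d}$ is always larger than one, so the acoustic mode lies above the boundary $\omega = v_{\rm F} q$ of the intra-band electron-hole continuum.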
Making contact with the jargon used in works concerned with light emission in tunnel junctions (cf.~Sect.~\ref{sec:theories}), the optical and acoustic modes are the ``fast'' and ``slow'' modes of the heterostructure, respectively.
In the presence of a semi-space characterized by the dielectric function~(\ref{eq:eps_mol}), the optical and acoustic modes hybridize with the molecular oscillations.
It is easy to see that a new collective ``polariton'' mode appears in the spectrum, with a dispersion that tends to
\begin{equation}\label{eq:molecular_polariton}
\hbar \omega_{\rm mp}(q) \to \hbar \Omega_{0} \sqrt{\frac{\epsilon_{0} + \bar{\epsilon}}{\epsilon_{\infty} + \bar{\epsilon}}}~.
\end{equation}
The previous expression turns out to be valid in \emph{both} the short-wavelength ($q d,\, q d_{\rm T} \gg 1$) and the long-wavelength ($q d,\, q d_{\rm T} \ll 1$) limits.
The analysis of the collective modes of the double-layer heterostructure and of the delivered power is complicated by the hyperbolic nature of the hBN.
Indeed, a thick hBN slab acts as a Fabry-Perot resonator, where the electric potential oscillates between the interfaces with an arbitrarily large number of nodes.~\cite{tomadin_prl_2015}
All these modes accumulate towards the lower (upper) extreme of the upper (lower) reststrahlen band and separate as the wave vector $q$ increases.
Some of these modes strongly hybridize with the plasmon modes supported by the graphene layers and the polariton mode supported by the molecular oscillations.
In Eqs.~(\ref{eq:optical_plasmon}), (\ref{eq:acoustic_plasmon}), and~(\ref{eq:molecular_polariton}) we have used a uniform, frequency-independent dielectric constant $\bar{\epsilon}$.
To use those formulas in the presence of the hBN layers, and gain a qualitative analytical understanding of the collective modes, one needs to take $\bar{\epsilon} = \sqrt{\epsilon_{xy,0} \epsilon_{z,0}}$.
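With the molecular parameters used below ($\epsilon_{0} = 4$, $\epsilon_{\infty} = 1.5$, $\hbar\Omega_{0} = 100~{\rm meV}$) and $\bar{\epsilon} = \sqrt{\epsilon_{xy,0}\,\epsilon_{z,0}}$, Eq.~(\ref{eq:molecular_polariton}) gives a quick estimate of the polariton energy, close to the $\sim 116~{\rm meV}$ feature identified as the molecular polariton in Fig.~\ref{fig:field}(b):

```python
import math

hOmega0 = 100.0                    # meV, molecular resonance
eps0_mol, epsinf_mol = 4.0, 1.5    # molecular dielectric constants
eps_bar = math.sqrt(6.70 * 3.56)   # static hBN average, sqrt(eps_xy,0*eps_z,0)

# Eq. (molecular_polariton): asymptotic molecular-polariton energy
h_w_mp = hOmega0 * math.sqrt((eps0_mol + eps_bar) / (epsinf_mol + eps_bar))
```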
\begin{figure}
\begin{overpic}[width=1.0\linewidth]{fig03a}\put(2,55){(a)}\end{overpic}
\begin{overpic}[width=1.0\linewidth]{fig03b}\put(2,55){(b)}\end{overpic}
\caption{\label{fig:dispersion}
(Color online) Dispersion of the collective modes of the double-layer heterostructure (points), when the top semi-space is filled with vacuum (a) or the model molecular material (b) defined in Sect.~\ref{sec:model}.
Gray-shaded areas correspond, from low to high values of $\hbar \omega$, to the intra-band electron-hole continuum,~\cite{kotov_rmp_2012} the lower and the upper reststrahlen bands of the hBN layers.
The red, blue, and green solid lines correspond to the analytical dispersion of the optical plasmon Eq.~(\ref{eq:optical_plasmon}), acoustic plasmon Eq.~(\ref{eq:acoustic_plasmon}), and molecular polariton Eq.~(\ref{eq:molecular_polariton}), respectively. }
\end{figure}
\begin{figure}
\begin{overpic}[width=0.49\linewidth]{fig04a}\put(0,95){(a)}\put(30,3){$\hbar \omega = 70~{\rm meV}$}\end{overpic}
\begin{overpic}[width=0.49\linewidth]{fig04b}\put(0,95){(b)}\put(30,3){$\hbar \omega = 116~{\rm meV}$}\end{overpic}
\begin{overpic}[width=0.49\linewidth]{fig04c}\put(0,95){(c)}\put(30,3){$\hbar \omega = 148~{\rm meV}$}\end{overpic}
\begin{overpic}[width=0.49\linewidth]{fig04d}\put(0,95){(d)}\put(30,3){$\hbar \omega = 171~{\rm meV}$}\end{overpic}
\caption{\label{fig:field}
(Color online) Space profile of the electric potential (color map) and of the direction of the electric field (arrows) as a function of $x$ (horizontal axis) and $z$ (vertical axis), in the range $0 < x <200~{\rm nm}$ and $-15~{\rm nm} < z < 10~{\rm nm}$.
For graphical convenience, the axis labels are not shown.
Red (blue) shades correspond to positive (negative) electric potential.
The horizontal black lines denote the locations $z_{\rm GB}$, $z_{\rm GT}$ of the graphene sheets and the horizontal green lines the interfaces $z_{\rm IB}$, $z_{\rm IT}$ of hBN.
The fields correspond to the modes shown in Fig.~\ref{fig:dispersion}(b) at $q = 0.1~{\rm nm}^{-1}$, and at the energy indicated at the bottom of each panel.
(a)-(c) show three modes outside of the reststrahlen bands, corresponding to the acoustic plasmon, molecular polariton, and the optical plasmon, respectively.
(d) shows a mode within the upper reststrahlen band, exhibiting Fabry-Perot oscillations~\cite{tomadin_prl_2015} in the hBN layers. }
\end{figure}
\begin{figure}
\begin{overpic}[width=1.0\linewidth]{fig05a}\put(2,55){(a)}\end{overpic}
\begin{overpic}[width=1.0\linewidth]{fig05b}\put(2,55){(b)}\end{overpic}
\caption{\label{fig:emission}
Power per unit area $P/A$ delivered by the tunnel current to the collective modes of the double-layer heterostructure with wave vector $q$ (horizontal axis), as the bias voltage $V_{\rm b}$ is tuned (vertical axis), when the top semi-space is filled with vacuum (a) or the model molecular material (b) defined in Sect.~\ref{sec:model}.
The colorbar represents $P/A$ divided by its maximum in ${\rm dB}$.
The intra-band electron-hole continuum and the reststrahlen bands of the hBN layers are clearly visible as extended regions of high absorption.
These regions appear distorted with respect to the gray-shaded areas in Fig.~\ref{fig:dispersion} because the relation between $\hbar \omega$ and $V_{\rm b}$ is not linear [see inset of Fig.~\ref{fig:charge}(b)].
Between these regions, sharp continuous features are easily identified with the collective modes shown in Fig.~\ref{fig:dispersion}.
The minor discontinuity around $V_{\rm b} \simeq 0.4~{\rm V}$ is due to the numerical implementation of Airy functions at large arguments. }
\end{figure}
\begin{figure}
\begin{overpic}[width=1.0\linewidth]{fig06a}\put(2,55){(a)}\end{overpic}
\begin{overpic}[width=1.0\linewidth]{fig06b}\put(2,55){(b)}\end{overpic}
\caption{\label{fig:emission_cuts}
(Color online) Power per unit area $P/A$ delivered by the tunnel current to the collective modes of the double-layer heterostructure with wave vector $q = 0.1~{\rm nm}^{-1}$ as the electric bias voltage $V_{\rm b}$ is changed, when the top semi-space is filled with vacuum (a) or the model molecular material (b) defined in Sect.~\ref{sec:model}.
Gray-shaded areas correspond, from low to high values of $V_{\rm b}$, to the intra-band electron-hole continuum, and the lower and upper reststrahlen bands of the hBN layers.
Vertical dotted lines correspond to the value of the bias voltage where a peak of the absorption due to a collective mode is expected, according to the simplified analytical formulas discussed in Sect.~\ref{sec:modes}.
These modes are, from low to high values of $V_{\rm b}$, the acoustic plasmon Eq.~(\ref{eq:acoustic_plasmon}), the molecular polariton Eq.~(\ref{eq:molecular_polariton}), and the optical plasmon Eq.~(\ref{eq:optical_plasmon}). }
\end{figure}
\section{Numerical results}
\label{sec:results}
We now turn to illustrate the main results obtained by numerically solving the model outlined in Sect.~\ref{sec:model}.
Our goal is to show that the peaks of the absorption spectrum, i.e.~the magnitude of $P/A$ given in Eq.~(\ref{eq:dissipation}) as a function of the bias voltage $V_{\rm b}$, correspond to collective modes of the double-layer heterostructure.
For convenience, we summarize in this paragraph all the parameters that we use in the calculation.
The geometry of the double-layer heterostructure is defined by
$d = 1.0~{\rm nm}$,
$d_{\rm B} = 10.0~{\rm nm}$, and
$d_{\rm T} = 5.0~{\rm nm}$.
For the electron density in the graphene sheets we take
$\bar{n} = 3.0\times 10^{12}~{\rm cm}^{-2}$.
The parameters of the negative Dirac delta function potential at the position of the graphene sheets, introduced in Sec.~\ref{sec:tunn_dipoles}, are chosen as $W_{\rm b} = 2.25~{\rm eV}$~\cite{kharche_nanolet_2011} (assuming that the bands of graphene and hBN are aligned) and $m_{\rm eff} = 0.5~m_{\rm e}$.~\cite{britnell_science_2012}
The finite thickness of the graphene sheets, introduced in Sec.~\ref{sec:dielectric_graphene}, is taken to be $\delta = 0.2~{\rm nm}$.
The dielectric constant of the substrate is
$\epsilon_{{\rm Si}{\rm O}_2} = 3.9$;
for the molecular ensemble we take the reasonable values
$\epsilon_{0} = 4.0$,
$\epsilon_{\infty} = 1.5$,
$\hbar \Omega_{0} = 100.0~{\rm meV}$, and
$\gamma = 0$;
and for the hBN layers we use~\cite{cai_solidstatecomm_2007}
$\epsilon_{xy,\infty} = 4.87$,
$\epsilon_{z,\infty} = 2.95$,
$\epsilon_{xy,0} = 6.70$,
$\epsilon_{z,0} = 3.56$,
$\gamma_{xy} = 0.87~{\rm meV}$,
$\gamma_{z} = 0.25~{\rm meV}$,
$\hbar \omega_{z,{\rm T}} = 92.5~{\rm meV}$,
$\hbar \omega_{z,{\rm L}} = 101.6~{\rm meV}$,
$\hbar \omega_{xy,{\rm T}} = 170.1~{\rm meV}$, and
$\hbar \omega_{xy,{\rm L}} = 199.5~{\rm meV}$.
Fig.~\ref{fig:dispersion} shows the dispersion of the collective modes, calculated on a mesh of wave vectors as explained in Sect.~\ref{sec:potential}.
The rich structure of Fabry-Perot-like modes in the reststrahlen bands is prominent.
However, outside of the reststrahlen bands and the intra-band electron-hole continuum, the optical and acoustic plasmon and the molecular polariton are clearly identifiable.
The acoustic plasmon is less hybridized with other modes, being very close to the graphene intra-band continuum, and the analytical expression Eq.~(\ref{eq:acoustic_plasmon}) proves accurate in the whole displayed interval.
For the optical plasmon, the expression in Eq.~(\ref{eq:optical_plasmon}) gives a very good approximation of the numerical result in the long-wavelength limit.
Between the reststrahlen bands, however, where the dispersion of the hybridized mode is much flattened, the analytical expression crosses the numerical results in the neighborhood of $q \simeq 0.1~{\rm nm}^{-1}$ (for the parameters used here).
The expression in Eq.~(\ref{eq:molecular_polariton}) for the molecular polariton correctly captures the long-wavelength limit of the hybrid mode which, for larger wave vectors, becomes the optical plasmon between the reststrahlen bands.
A different mode splits off from the lower reststrahlen band and converges to the molecular polariton expression for $q \gtrsim 0.05~{\rm nm}^{-1}$.
The nature of the modes is demonstrated in Fig.~\ref{fig:field}, where the space profile of the electric potential and the direction of the electric field is shown at fixed wave vector.
For graphical clarity, the length of the arrows is not proportional to the magnitude of the electric field.
The acoustic plasmon [panel (a)] is characterized by electric potential of opposite sign on the top and bottom graphene sheets.
Between the sheets, the field is thus mostly directed along $z$.
The force lines of the field are almost unperturbed at the interface with the molecular material.
For this reason, hybridization between the acoustic plasmon and the molecular polariton is absent.
For the optical plasmon [panel (c)], the electric potential has the same sign on the top and bottom graphene sheets.
The field is thus mostly directed in the $x$-$y$ plane between the two graphene sheets.
The behavior of the force lines of the field is different at the interface $z = z_{\rm IB}$ with the substrate ${\rm Si}{\rm O}_{2}$ and $z = z_{\rm IT}$ with the molecular material.
Indeed, as Fig.~\ref{fig:dispersion} shows, the hybridization between the optical plasmon and the molecular polariton is strong.
The molecular polariton mode [panel (b)] is easily identified because the electric potential is strongest at the interface $z = z_{\rm IT}$.
For completeness, we also show a typical Fabry-Perot-like mode within a reststrahlen band [panel (d)].
The mode is characterized by periodic oscillations of the electric potential along $z$, which appear as diagonal stripes of constant potential.
The profile of the potential is only slightly perturbed by the presence of the double-layer, so that the mode resonates between the interfaces $z = z_{\rm IB}$ and $z = z_{\rm IT}$, in the whole region occupied by hBN.
Fig.~\ref{fig:emission} shows the absorption spectrum on a mesh of wave vectors.
It is important to notice that, since one cannot span the entire $\hbar \omega$ range by tuning $V_{\rm b}$ [see inset of Fig.~\ref{fig:charge}(b)], the vertical axes in Figs.~\ref{fig:dispersion} and~\ref{fig:emission} are not linearly related.
However, a one-to-one correspondence between the peaks of the absorption spectrum and the collective modes can be easily drawn.
This figure clearly shows that, by driving a tunnel current between the graphene sheets, one excites the collective modes of the double-layer heterostructure.
Moreover, the nearby presence of a molecular layer changes the absorption spectrum, which means that the system acts as a frequency-resolved plasmon-enabled detector.~\cite{maier_book_2007,brolo_naturephoton_2012}
These are the main results of this work.
Fig.~\ref{fig:emission_cuts} shows the absorption spectrum at fixed wave vector $q = 0.1~{\rm nm}^{-1}$, i.e.~vertical cuts from Fig.~\ref{fig:emission}, normalized to its maximum value.
The peaks corresponding to absorption by the acoustic plasmon, molecular polariton, and optical plasmon are identified by comparison with the analytical expressions which, as shown in Fig.~\ref{fig:dispersion}, are sufficiently accurate in this wave vector range.
We see that the largest absorption is associated with the acoustic plasmon.
This feature can be understood by inspecting the space profile of the electric fields in Fig.~\ref{fig:field}.
The electric field of the acoustic plasmon is largest between the graphene sheets and directed along $z$, and thus it is optimally coupled to the oscillating dipoles generated by the tunneling electrons [see~Fig.~\ref{fig:charge}(b)].
This observation is in agreement with the results of Refs.~\onlinecite{laks_prb_1979, laks_prb_1980}, if one remembers that the acoustic mode is the ``slow'' mode of the graphene-based heterostructure.
Notwithstanding the dominance of the acoustic-plasmon peak, the peak corresponding to the molecular polariton between the reststrahlen bands is also clearly visible.
Finally, since the electron density along $z$ is not purely anti-symmetric [see Fig.~\ref{fig:charge}(b)], due to the finite bias which breaks space inversion around $z = 0$, the optical plasmon mode can also be excited.
\section{Conclusions}
\label{sec:conclusions}
In conclusion, in this work we have calculated the absorption spectrum of a double-layer graphene heterostructure, where a tunnel current between the graphene layers is generated by an external source.
In our theoretical approach, the tunneling electrons generate oscillating dipoles which couple to the collective modes of the double-layer heterostructure, i.e.~the acoustic and optical plasmons and a molecular polariton mode.
This approach highlights the purely quantum nature of the charge density oscillations coupling to the electric field of the plasmon and polariton modes.
We have verified that the peaks of the absorption spectrum correspond to the collective modes of the heterostructure.
Our results show that the setup that we consider can be used \emph{both} as a plasmon source and as a frequency-resolved plasmon-enabled detector.~\cite{maier_book_2007,brolo_naturephoton_2012}
In the first case, we find that acoustic plasmons absorb more power than the other modes due to a better spatial coupling between the field and the oscillating dipoles, and hence are more likely to be excited.
In the second case, the position and resonance frequency of a nearby molecular layer manifest themselves as a distinct peak in the absorption spectrum -- see Fig.~\ref{fig:emission_cuts}.
In this case, coupling between the tunnel current and the molecular layer is mediated mostly by the optical plasmon mode, whose field extends further away from the graphene double layer, as shown in Fig.~\ref{fig:field}.
The detection of the molecular resonant frequency is possible in a very large band where the optical plasmon is not overdamped, as long as it does not fall in the reststrahlen bands of hBN, where larger absorption takes place.
However, we reckon that, in the reststrahlen bands,~\cite{dai_science_2014,caldwell_naturecommun_2014} one could use the hyperlensing phenomenon~\cite{li_natcommun_2015,dai_natcommmun_2015,giles_natmater_2018} to couple the optical plasmon to subwavelength absorbers, or to guide the field of the generated modes in a preferred direction.
\acknowledgments
This work was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 785219 - ``GrapheneCore2''. Fruitful discussions with Silvia Dante, Frank Koppens, Kostya Novoselov, and Dmitry Svintsov are gratefully acknowledged.
\section{INTRODUCTION}
Semi-inclusive
deep inelastic scattering has
been successfully studied
in the framework of perturbative QCD \cite{aemp},
at least in the case in which the transverse momentum
of the produced hadron is of order of the hard scale $Q^2$.
In the last few years renewed attention
has been devoted to this process
in the limit in which the transverse momentum, or
equivalently the momentum
transfer $t=-(p-p^\prime)^2$ between the incoming and outgoing hadron,
is very small with respect to $Q^2$. In this limit
the process is dominated by the target fragmentation
mechanism and, for this reason,
a new approach in terms of the so called {\em fracture functions} has been
proposed \cite{tv}, and developed \cite{grau,arg}.
In this talk I present a calculation \cite{gr1}
of the semi-inclusive cross section in the target fragmentation region ($t\ll Q^2$) in $(\phi^3)_6$ model field theory.
This model has proved to be a nice laboratory for studying
strong interactions at short distances, since it is asymptotically free
and it has a much milder structure of infrared singularities with
respect to QCD \cite{scalar,kub}. In fact there are no soft but only
collinear singularities
and so factorization becomes simpler to deal with \cite{css}.
\section{DIS IN $(\phi^3)_6$}
I will start by recalling some results one gets for inclusive DIS.
Let us consider the process
$p+J(q)\to X$ where $J=\f{1}{2} \phi^2$.
We define as usual
\begin{equation}
Q^2=-q^2~~~~~~~~~~~x=\f{Q^2}{2pq}.
\end{equation}
The structure function can be defined as
\begin{equation}
W(x,Q^2)=\f{Q^2}{2\pi} \int d^6y e^{iqy} <\! p|J(y)J(0)|p\! >.
\end{equation}
It is easy
to
calculate the parton-current cross section $w(x,Q^2)$
in dimensional regularization ($D=6-2\epsilon$).
At
lowest order we get
(see Fig. \ref{dis0})
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c}
\epsfxsize=4truecm
\epsffile{dis0.eps}\\
\end{tabular}
\end{center}
\caption{\label{dis0} {\small Lowest order contribution to the deep inelastic
cross section}}
\end{figure}
\begin{equation}
w_0(x,Q^2)=\f{Q^2}{2\pi} 2\pi \delta((p+q)^2)=\delta(1-x).
\end{equation}
The first order corrections are shown in Fig. \ref{dis}.
External self energies
are not taken into account
since we work at $p^2=0$.
In order to take into account the renormalization of the operator $J$
one has to multiply the total contribution by $Z_J^{-2}(Q^2)$ where
\cite{gr1}
\begin{equation}
Z_J(Q^2)=1+\f{5}{12} \f{\lambda^2}{(4\pi)^3}\f{1}{\epsilon}\left(\f{4\pi\mu^2}{Q^2}\right)^\epsilon.
\end{equation}
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c}
\epsfxsize=10truecm
\epsffile{dis.eps}\\
\end{tabular}
\end{center}
\caption{\label{dis} {\small One loop corrections to the
deep inelastic cross section}}
\end{figure}
Up to finite corrections we get
\begin{equation}
\label{ris}
w(x,Q^2) =\delta(1-x)
+\f{\lambda^2}{(4\pi)^3} P(x)\left(-\f{1}{\epsilon}\right)\left(\f{4\pi\mu^2}{Q^2}\right)^\epsilon
\end{equation}
where
\begin{equation}
P(x)=x(1-x)-\f{1}{12}\delta(1-x)
\end{equation}
is the DGLAP kernel for our model.
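The coefficient $-1/12$ of the delta term is fixed by momentum conservation, $\int_0^1 dx\, x\, P(x) = 0$: the regular part $x(1-x)$ carries momentum $\int_0^1 x \cdot x(1-x)\, dx = 1/3 - 1/4 = 1/12$, which the virtual term cancels. A one-line numerical check (plain midpoint quadrature):

```python
# Momentum carried by the regular part x(1-x) of the scalar DGLAP kernel:
# integral of x * x(1-x) over [0,1] equals 1/12, cancelling the delta term.
N = 100000
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]           # midpoint rule
regular = sum(x * x * (1 - x) for x in xs) * h
momentum = regular - 1.0 / 12.0                  # delta term gives -1/12
```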
The contribution to the structure function is obtained as a convolution
with a bare parton density $f_0(x)$
\begin{equation}
W(x,Q^2)=\int_x^1 \f{du}{u}f_0(u) w(x/u,Q^2).
\end{equation}
The collinear divergence in $w(x,Q^2)$ can
be lumped as usual in a $Q^2$ dependent parton density
by
means of the equation
\begin{equation}
\label{pd}
f_0(x) =\int_x^1 \f{du}{u} \Big[\delta(1-u)
+\f{\lambda^2}{(4\pi)^3} P(u)\f{1}{\epsilon}\left(\f{4\pi\mu^2}{Q^2}\right)^\epsilon\Big]
f(x/u,Q^2).
\end{equation}
The scale dependent parton density $f(x,Q^2)$
obeys the DGLAP evolution equation
\begin{equation}
Q^2\f{\partial}{\partial Q^2} f(x,Q^2)=\int_x^1 \f{du}{u} P(u) f(x/u,Q^2).
\end{equation}
For the process $J(q)\to p+X$ with
$q$ timelike a fragmentation function $d(x,Q^2)$
can be defined in the same way
and it obeys the same DGLAP
evolution equation.
At one loop level the timelike DGLAP kernel is the
same as in the spacelike case, but this relation is broken at
two loops \cite{kub}.
\section{SEMI-INCLUSIVE DIS}
In the semi-inclusive case a new structure function can be defined as
\begin{equation}
W(p,p^\prime,q)=\f{Q^2}{2\pi} \sum_X\int d^6x e^{iqx}
<\! p|J(x)|p^\prime X\! >
<\! X p^\prime|J(0)|p\! >.
\end{equation}
We
have calculated \cite{gr1} the partonic cross section in the limit
$t\ll Q^2$
at leading power, by keeping only divergent terms and possible $\log Q^2/t$
contributions.
As expected, the cross section is dominated by target fragmentation.
The first diagram which gives a contribution
is the one in Fig. \ref{sdis0}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c}
\epsfxsize=4.5truecm
\epsffile{sdis0.eps}\\
\end{tabular}
\end{center}
\caption{\label{sdis0} {\small Leading order contribution to one particle
deep inelastic cross section in the region $t\ll Q^2$}}
\end{figure}
It gives
\begin{equation}
w_1(x,z,t,Q^2)=\f{\lambda_0^2}{t^2} x \delta(1-x-z)
\end{equation}
where $\lambda_0$ is the bare coupling constant
and
\begin{equation}
z=\f{p^\prime q}{pq}.
\end{equation}
It turns out that the relevant one loop corrections
come from the diagrams in Fig. \ref{sdis1}.
The other diagrams in fact either give
finite contributions or are suppressed by powers of $t/Q^2$.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c}
\epsfxsize=9truecm
\epsffile{sdis1.eps}\\
\end{tabular}
\end{center}
\caption{\label{sdis1} {\small One loop leading contributions to the one
particle deep inelastic cross section}}
\end{figure}
The details of the calculation are presented in Ref. \cite{gr1}.
Summing up all the contributions, multiplying by $Z_J^{-2}(Q^2)$,
and introducing the running coupling constant,
we finally get
\begin{align}
\label{sris}
w(x,z,t,Q^2)&=\f{\lambda^2(t)}{t^2}x
\Big(\delta(1-x-z)+
\f{\lambda^2}{(4\pi)^3}\f{1}{\epsilon}\left(\f{4\pi\mu^2}{t}\right)^\epsilon\Big(\f{1}{6}\delta(1-x-z)\nonumber\\
&-\f{1-x-z}{(x+z)^2}-\f{1-x-z}{(1-x)^2}\Big)
+\f{1}{x} P\left(\f{x}{1-z}\right)\f{\lambda^2}{(4\pi)^3}\log\f{Q^2}{t}\Big).
\end{align}
The structure function is obtained as a convolution with the bare
parton density and fragmentation function.
By using
eq. (\ref{pd})
and the corresponding definition for
the fragmentation function
we get
\begin{align}
\label{final}
W(x,z,t,Q^2)&=
\int_{x+z}^1 \f{du}{u}\int_{\f{z}{u-x}}^1\f{dv}{v^4}
f(u,t)
\f{\lambda^2(t)}{t^2}\f{v^2}{u^2}
\Big[\delta\left(1-\f{x/u}{1-z/uv}\right)\nonumber\\
&+\f{\lambda^2}{(4\pi)^3} P\left(\f{x/u}{1-z/uv}\right)\log\f{Q^2}{t}\Big] d(v,t)
\end{align}
where again only leading $\log Q^2/t$ terms have been considered
and the integration limits are derived using
momentum conservation.
From eq. (\ref{final}) it appears that
the renormalized hard cross section gets a
large $\log Q^2/t $ correction whose coefficient is
the scalar DGLAP kernel.
Such a correction, if not properly resummed,
can spoil perturbative calculations in the region $t\ll Q^2$.
Eq. (\ref{final}) shows a new singularity,
which corresponds to the configuration
in which $p^\prime$ becomes parallel to $p$.
When we integrate over $t$,
in order to absorb such a singularity,
the introduction
of a new phenomenological distribution, the fracture function \cite{tv},
becomes necessary \cite{grau}.
Eq. (\ref{final}) can also be rewritten in the following form
\begin{align}
\label{eqjet}
W(x,z,t,Q^2)&=\f{\lambda^2(t)}{zt^2}
\int_x^{1-z}\f{dr}{r}\int_{z+r}^1 \f{du}{u(u-r)}
{\hat P}
\left(\f{r}{u}\right) f(u,t)\Big[\delta\left(1-\f{x}{r}\right)\nonumber\\
&+\f{\lambda^2}{(4\pi)^3} P\left(\f{x}{r}\right)\log\f{Q^2}{t}\Big] d\left(\f{z}{u-r},t\right)
\end{align}
where we have defined the A-P real scalar vertex ${\hat P}(x)=x(1-x)$.
The function
\begin{equation}
E^{(1)}(x,Q^2/Q^2_0)=\delta(1-x)+\f{\lambda^2}{(4\pi)^3} P(x) \log \f{Q^2}{Q^2_0}
\end{equation}
appears to be the first order approximation of the evolution kernel $E(x,Q^2/Q^2_0)$
which resums the leading logarithmic series \cite{jet}.
This fact suggests that
an interpretation of eq. (\ref{eqjet}) can be given in terms of
Jet Calculus \cite{jet}.
\section{FACTORIZATION IN TERMS OF CUT VERTICES}
Cut vertices are a generalization of matrix elements of local operators
originally
proposed by Mueller in Ref.\cite{mueller}.
They provide a useful interpretation of the results obtained in the previous sections.
Let us go back to Sect.2 and
set $p^2<0$ with $p=(p_+,{\bf 0},p_-)$.
If we choose a frame in which $p_+\gg p_-$ we can write
for the parton-current cross section \cite{mueller}
\begin{equation}
w(p,q)=\int \f{du}{u} v(p^2,u)C(x/u,Q^2)
\end{equation}
where $v(p^2,x)$
is a spacelike cut vertex
with $C(x,Q^2)$ the corresponding coefficient function.
If we define
\begin{equation}
v(x,\epsilon)=\delta(1-x)+\f{\lambda^2}{(4\pi)^3} P(x)\left(-\f{1}{\epsilon}\right)
\end{equation}
\begin{equation}
C(x,Q^2)=\delta(1-x)+\f{\lambda^2}{(4\pi)^3} P(x) \log\f{Q^2}{4\pi\mu^2}
\end{equation}
we can write eq. (\ref{ris}) in the form
\begin{equation}
w(x,Q^2)=\int_x^1 \f{du}{u} v(u,\epsilon)C(x/u,Q^2).
\end{equation}
Here $v(x,\epsilon)$ is a spacelike cut vertex
defined at $p^2=0$ whose mass divergence is regularized dimensionally.
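All of the factorized expressions above are Mellin convolutions, $(f\otimes g)(x)=\int_x^1 (du/u)\, f(u)\, g(x/u)$. As a purely illustrative numerical sketch (the power-law integrands below are toy stand-ins, not the actual cut vertex $v$ or coefficient function $C$), the convolution can be evaluated by a midpoint rule in $\log u$ and checked against the closed form for power laws:

```python
import numpy as np

def mellin_convolution(f, g, x, n=4000):
    # (f ⊗ g)(x) = ∫_x^1 du/u f(u) g(x/u), midpoint rule in s = log u
    t = (np.arange(n) + 0.5) / n      # midpoints of n equal subintervals of (0, 1)
    u = x ** (1.0 - t)                # nodes running from u = x (t=0) to u = 1 (t=1)
    w = -np.log(x) / n                # constant weight from the substitution du/u = ds
    return np.sum(w * f(u) * g(x / u))
```

For power laws $f(u)=u^a$, $g(z)=z^b$ the convolution is $(x^b-x^a)/(a-b)$, i.e. Mellin moments factorize; delta-function pieces such as $\delta(1-x)$ act as the identity under $\otimes$ and are best handled analytically rather than by quadrature.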
A similar interpretation can be given to eq. (\ref{sris}).
We define
\begin{equation}
{\bar x}=\f{x}{1-z}
\end{equation}
and
\begin{align}
v({\bar x},z,t,\epsilon)&=\f{\lambda^2(t)}{t^2}\Big[\delta(1-{\bar x})+
\f{\lambda^2}{(4\pi)^3}\f{1}{\epsilon}\left(\f{4\pi\mu^2}{t}\right)^\epsilon
\Big(\hspace{.5mm}\f{1}{6}\delta(1-{\bar x})\nonumber\\
&+\f{(1-z)^2{\bar x}(1-{\bar x})}{({\bar x}(1-z)+z)^2}
+\f{(1-z)^2{\bar x}(1-{\bar x})}{(1-{\bar x}(1-z))^2}\hspace{.5mm}\Big)
+P({\bar x})\f{\lambda^2}{(4\pi)^3}\log\f{4\pi\mu^2}{t}\Big]
\end{align}
as a {\em generalized cut vertex} \cite{new} which contains all the leading
mass singularities of the cross section.
We can write up to ${\cal O}(t/Q^2)$ corrections
\begin{equation}
\label{fact}
w({\bar x},z,t,Q^2,\epsilon)\!=\!\!\int_{\bar x}^1 \!\f{du}{u}
v(u,z,t,\epsilon)C({\bar x}/u,Q^2)
\end{equation}
where the coefficient function is the same which occurs in
inclusive DIS.
The validity of this factorization relies on the fact that
diagrams with more than two legs connecting the soft
to the hard part are suppressed
by powers of $t/Q^2$ \cite{gr1}.
This result can be generalized to all orders by using the ideas of
Refs. \cite{css,sterman}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{c}
\epsfxsize=6truecm
\epsffile{leading.eps}\\
\end{tabular}
\end{center}
\caption{\label{leading} {\small Leading contributions to the semi-inclusive
structure function in $(\phi^3)_6$}}
\end{figure}
The large $Q^2$ limit
of the semi-inclusive cross section can be studied by
looking at the singularities in the limit $p^2$, $p^{\prime 2}$, $t\to 0$.
The strength of such
singularities can be predicted
by using infrared power counting \cite{new}.
Starting from a given diagram, its {\em reduced} form in the large $Q$ limit
is constructed by
simply contracting to a point all the lines whose momenta are not on shell.
In $(\phi^3)_6$ the general leading diagrams in the large $Q^2$ limit
for the process under study involve a jet subdiagram $J$,
composed of on-shell lines collinear to the incoming particle,
from which the detected particle emerges in the forward
direction, and a hard subgraph $H$, in which momenta
of order $Q$ circulate, connected to the jet by the minimum number
of collinear lines.
Additional lines connecting $J$ to $H$
as well as soft lines connecting them
are suppressed by power counting.
So one can say that in $(\phi^3)_6$ the leading diagrams are of the form
depicted in Fig. \ref{leading} and this means that in this model
eq. (\ref{fact}) holds at all orders \cite{new}.
\section{SUMMARY}
In this talk I have presented
an explicit calculation of
the one particle
deep inelastic cross section
in the target fragmentation region
within the $(\phi^3)_6$ model field theory.
The renormalized hard cross
section gets a large $\log Q^2/t$
correction as expected in a two scale regime and
the coefficient driving this logarithmic correction is precisely
the scalar DGLAP kernel.
Furthermore the result obtained fits
within an extended factorization hypothesis \cite{new}.
In fact
the partonic semi-inclusive cross section factorizes into a convolution of a
new object, a generalized cut vertex $v(p,p^\prime,{\bar x})$ \cite{new},
with four rather than two external legs,
and a coefficient function $C({\bar x},Q^2)$ which
is the same as the one of inclusive DIS.
Infrared power counting applied to this process allows one to conclude
that this last result holds in $(\phi^3)_6$ at all orders.
\section{Introduction}
\label{sec:introduction}
The anticipated arrival of exascale systems opens up the possibility for scientific simulation at dramatically increased scale, provided that several technical challenges can be overcome.
One such challenge concerns the resilience of the underlying numerical algorithm in the face of the increased level of faults and node failures on an exascale machine \cite{CappelloGeistEtAl2009_TowardExascaleResilience,CappelloGeistEtAl2014_TowardExascaleResilience}.
Faults are classified as \emph{hard} or \emph{soft} \cite{AvizienisLaprieEtAl2004_BasicConceptsTaxonomyDependableSecureComputing}.
While hard faults require immediate remedial action in order for the program execution to proceed, the effect of soft faults is to perturb data and instructions, possibly undetected but having the potential to degrade the performance of the algorithm or even invalidate the entire simulation.
Node failures or lost messages constitute examples of hard faults, whereas random bit flips, induced by cosmic particles, can be classified as soft faults.
It is of urgent concern to determine the effect of faults on existing state-of-the-art numerical methods which will be utilised on exascale systems.
In the cases where the algorithms are found wanting, one would like to know how to modify the schemes to cope with the new reality of fault-prone computing.
The widely used \emph{multigrid algorithm} for the solution of linear systems of equations is the concern of this work.
In earlier work \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant}, we introduced a simple fault mitigation strategy called \emph{laissez-faire}, whereby, instead of trying to recover any lost or corrupted information, one simply replaces affected values with zero before continuing the execution.
This approach has several advantages in terms of efficiency including no requirement for communication.
However, the acid test is whether or not this strategy is effective and results in any improvement in fault resilience.
The goal of this work is twofold.
Firstly, we extend the analytic convergence estimates from the two grid setting to an arbitrary number of levels.
Secondly, we investigate how the algorithm can be applied in practice where some form of fault detection is needed.
In \Cref{sec:review}, we will recall the basic theoretical framework for fault-prone linear iterative methods as well as some previous results needed in the analysis.
We then state a general convergence estimate for Fault-Prone Multigrid in \Cref{sec:main-results}, and demonstrate its validity for a range of numerical examples.
Finally, the practical issue of fault detection and protection of the involved operations is discussed in \Cref{sec:implementation}.
The effects of different levels of protection are demonstrated for a model problem, and optimal parameters depending on fault rate and problem size are given.
The theoretical details and proofs of the results are collected in the Appendix.
\section{Preliminaries}
\label{sec:review}
\subsection{Modelling Faults}
In \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant}, we proposed to model the effect of faults and their mitigation in iterative linear solvers through random diagonal matrices.
A fault-free vector $x\in\mathbb{R}^n$ gets transformed into \(\tilde{x}\) according to
\begin{align*}
\tilde{x} = \randmat{X}x.
\end{align*}
Throughout an algorithm, several operations might be subject to faults, each corresponding to such a fault matrix.
For given \(\varepsilon>0\), let $\SS_{\varepsilon}$ denote the set of these fault matrices, and assume that they satisfy the following conditions:
\begin{assumption} \label{as:faults}\leavevmode
\begin{enumerate}
\item Each $\randmat{X}\in\SS_{\varepsilon}$ is a random diagonal matrix.
\item For every $\randmat{X}\in\SS_{\varepsilon}$, there holds $\Exp{\randmat{X}} =
e(\randmat{X})\mat{I}$, where $e(\randmat{X})>0$, and
$|e(\randmat{X})-1|\le C\varepsilon$ for some fixed $C>0$.
\item For every
$\randmat{X}\in \SS_{\varepsilon}$ there holds $\norm{\Var{\randmat{X}}}_{2}
= \max_{i,j}\left|\Cov\left(\randmat{X}_{ii},\randmat{X}_{jj}\right)
\right| \leq\varepsilon$.
\end{enumerate}
\end{assumption}
\(\varepsilon\) measures how close the fault matrices are to the identity, i.e. the fault-free case.
While the value of \(\varepsilon\) could vary from operation to operation, for example to take into account that denser matrix-vector products take more time and are therefore more likely to be hit by faults, we neglect this aspect in the analysis for the sake of clarity.
Important examples of random faults covered by \Cref{as:faults} are
\begin{enumerate}
\item \emph{Componentwise detectable faults.}
A typical example of a detectable fault would be flipping of individual bits in a floating point number, resulting in a large enough upset to be detected, or a pointer corruption that leads to an invalid memory address.
Such cases can be treated using the laissez-faire strategy described above, and therefore modelled as componentwise faults with
\begin{align}
\randmat{X}= \diag\left(\chi_{1},\dots,\chi_{n}\right),\label{eq:componentwiseFaults}
\end{align}
where $\chi_i$ are independent and identically \(\varepsilon\)-distributed Bernoulli random variables, i.e.
\begin{align*}
\chi_{i} &=
\begin{cases}
0 & \text{with probability } \varepsilon, \\
1 & \text{with probability } 1-\varepsilon.
\end{cases}
\end{align*}
\item \emph{Blockwise detectable faults.}
In the event of a node failure and application of the laissez-faire strategy, all the components of a vector that were residing on the node will be zeroed out.
This can be modelled by a random matrix with independent diagonal \emph{blocks}:
\begin{align}
\randmat{X}= \blockdiag\left(\chi_{1}\mat{I}_{1},\dots,\chi_{N}\mat{I}_{N}\right), \label{eq:blockwiseFaults}
\end{align}
where $\chi_i$ are independent and identically \(\varepsilon\)-distributed Bernoulli random variables and \(\mat{I}_{j}\) are identity matrices.
\item \emph{Silent faults.}
Soft faults may be difficult or even impossible to detect, especially if their induced perturbation is small. Such silent faults can be modelled by a random matrix
\begin{align}
\randmat{X} = \mat{I}+\diag\left(\eta_1\chi_1, \dots, \eta_n\chi_n\right) \label{eq:silentFaults}
\end{align}
where $\eta_i$ are independent and identically distributed random variables, and \(\chi_{i}\) are independent and identically distributed Bernoulli random variables, such that \(\randmat{X}\in \SS_{\varepsilon}\).
In particular, this includes the cases of frequent faults with small impact and of rare but large upsets.
\end{enumerate}
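For concreteness, the three fault models above can be realised as explicit random diagonal matrices. The following NumPy sketch is illustrative only; the function names and the Gaussian choice for the silent-fault amplitudes \(\eta_i\) are assumptions, not taken from the paper:

```python
import numpy as np

def componentwise_fault(n, eps, rng):
    # laissez-faire: each component survives with probability 1 - eps
    return np.diag((rng.random(n) >= eps).astype(float))

def blockwise_fault(block_sizes, eps, rng):
    # node failure: whole blocks of the vector are zeroed together
    chis = (rng.random(len(block_sizes)) >= eps).astype(float)
    return np.diag(np.repeat(chis, block_sizes))

def silent_fault(n, eps, rng, scale=1.0):
    # undetected perturbation: I + diag(eta_i * chi_i), fault hits w.p. eps;
    # Gaussian eta_i is an illustrative choice of amplitude distribution
    eta = scale * rng.standard_normal(n)
    chi = (rng.random(n) < eps).astype(float)
    return np.eye(n) + np.diag(eta * chi)
```

Each construction returns a diagonal matrix with \(\Exp{\randmat{X}}=e(\randmat{X})\mat{I}\), as required by Assumption 1.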
\subsection{Multigrid Algorithm}
Let $\mat{A}$ be a symmetric, positive definite matrix arising from a finite element discretization of an elliptic partial differential equation in $d$ spatial dimensions.
The multigrid method solves the linear system of equations
\begin{align*}
\mat{A}x = b
\end{align*}
by introducing a hierarchy of coarser problems
\begin{align*}
\mat{A}_{\ell}x_{\ell}=b_{\ell}
\end{align*}
of size \(n_{\ell}\), \(\ell=0,\dots,L\) with \(\mat{A}_{L}=\mat{A}\).
Information gets transferred between levels through restriction and prolongation operators
\begin{align*}
\mat{R}_\ell^{\ell+1}: \mathbb{R}^{n_{\ell+1}}\rightarrow \mathbb{R}^{n_\ell}, \quad
\mat{P}_{\ell+1}^\ell: \mathbb{R}^{n_\ell}\rightarrow \mathbb{R}^{n_{\ell+1}}.
\end{align*}
We will assume that \(\mat{P}_{\ell+1}^\ell=\left(\mat{R}_\ell^{\ell+1}\right)^{T}\) along with the usual Galerkin relation \(\mat{A}_{\ell} = \mat{R}_{\ell}^{\ell+1}\mat{A}_{\ell+1}\mat{P}_{\ell+1}^{\ell}\).
We will drop sub- and superscripts on restriction and prolongation operators in what follows.
Moreover, smootheners are defined on each level as
\begin{align*}
x_{\ell}\leftarrow x_{\ell}+\mat{N}_{\ell}\left(b_{\ell}-\mat{A}_{\ell}x_{\ell}\right),
\end{align*}
where \(\mat{N}_{\ell}\) is a suitable preconditioner, e.g. the damped Jacobi preconditioner \(\mat{N}_{\ell}=\theta \mat{D}_{\ell}^{-1}\), with \(\mat{D}_{\ell}\) the diagonal part of \(\mat{A}_{\ell}\).
The Fault-Prone Multigrid Method \(\c{M\!G}_{\ell}\) was described in detail in \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant}, and is given by \Cref{alg:ndmg}.
The classical fault-free variant \(M\!G_{\ell}\) can be obtained by replacing all fault matrices \(\randmat{X}^{(\bullet)}_{\ell}\) with identity matrices.
\begin{algorithm}
\begin{algorithmic}[1]
\Require Right hand side $b_\ell$; Initial iterate $x_\ell$
\Ensure $\c{M\!G}_\ell(b_\ell, x_\ell)$
\If{$\ell=0$}
\Return $\mat{A}_{0}^{-1}b_{0}$ \Comment{Exact solve on coarsest grid}
\Else
\For{$i\leftarrow 1$ \To{} $\nu_{1}$}
\State $x_\ell\leftarrow x_{\ell}+\randmat{X}^{(S,\text{pre})}_{\ell}\mat{N}_{\ell}\left(b_\ell-\mat{A}_{\ell}x_\ell\right)$ \Comment{$\nu_1$ pre-smoothing steps}
\EndFor
\State $d_{\ell-1}\leftarrow\randmat{X}^{(R)}_{\ell-1}\mat{R}\randmat{X}^{(\rho)}_\ell\left(b_\ell-\mat{A}_\ell x_\ell\right)$ \Comment{Restriction to coarser grid}
\State $e_{\ell-1}\leftarrow 0$
\For{$j\leftarrow 1$ \To{} $\gamma$}
\State $e_{\ell-1}\leftarrow \c{M\!G}_{\ell-1}(d_{\ell-1},e_{\ell-1})$ \Comment{$\gamma$ coarse grid correction steps}
\EndFor
\State $x_\ell\leftarrow x_\ell + \randmat{X}^{(P)}_\ell\mat{P}e_{\ell-1}$ \Comment{Prolongation to finer grid}
\For{$i\leftarrow 1$ \To{} $\nu_{2}$}
\State $x_\ell\leftarrow x_{\ell}+\randmat{X}^{(S,\text{post})}_{\ell}\mat{N}_{\ell}\left(b_\ell-\mat{A}_{\ell}x_\ell\right)$ \Comment{$\nu_2$ post-smoothing steps}
\EndFor
\EndIf
\end{algorithmic}
\caption{Model for Fault-Prone Multigrid Algorithm $\c{M\!G}_\ell$ where $\randmat{X}^{(\bullet)}_\ell$ are random diagonal matrices.}
\label{alg:ndmg}
\end{algorithm}
The classical approach to the analysis of iterative solution methods for linear systems uses the notion of an iteration matrix.
For the fault-prone method \(\c{M\!G}_{\ell}\), it is defined by the equation
\begin{align*}
x - \c{M\!G}_\ell(\mat{A}_\ell x, y) = \randmat{E}_\ell(x-y),
\quad\forall x,y\in\mathbb{R}^{n_\ell},
\end{align*}
and given by
\begin{align}\label{eq:Mrecursion}
\randmat{E}_{0}&=\mat{0}, \nonumber \\
\randmat{E}_\ell
&= \left(\randmat{E}^{S,\text{post}}_\ell\right)^{\nu_2}
\left[ \mat{I}
- \randmat{X}^{(P)}_\ell\mat{P}( \mat{I} -\randmat{E}^\gamma_{\ell-1} )
\mat{A}_{\ell-1}^{-1}\randmat{X}^{(R)}_{\ell-1} \mat{R}\randmat{X}^{(\rho)}_\ell \mat{A}_\ell
\right]
\left(\randmat{E}^{S,\text{pre}}_\ell\right)^{\nu_1},
\end{align}
for $\ell=1,\ldots,L$, with \(\randmat{E}_{\ell}^{S,\text{pre/post}}=\mat{I}-\randmat{X}_{\ell}^{(S,\text{pre/post})}\mat{N}_{\ell}\mat{A}_{\ell}\).
Here, we have used superscripts pre and post to reflect that the pre- and post-smootheners are \emph{independent realisations of the same random matrix}.
Moreover, we will use powers of random matrices to signify products of independent realisations of the same random matrix.
Setting $\randmat{E}_{L-1}=\mat{0}$ and applying the recursion~\cref{eq:Mrecursion} in the case $\ell=L$ yields a formula for the iteration matrix of the \emph{Fault-Prone Two Grid Algorithm}:
\begin{align}\label{eq:twolevel}
\randmat{E}^{TG}_L
&= \left(\randmat{E}^{S,\text{post}}_L\right)^{\nu_2}
\left[ \mat{I} - \randmat{X}^{(P)}_L\mat{P}
\mat{A}_{L-1}^{-1}\randmat{X}^{(R)}_{L-1} \mat{R}\randmat{X}^{(\rho)}_{L} \mat{A}_L
\right]
\left(\randmat{E}^{S,\text{pre}}_L\right)^{\nu_1} \\
&=\left(\randmat{E}^{S,\text{post}}_L\right)^{\nu_2}
\randmat{E}_{L}^{CG}
\left(\randmat{E}^{S,\text{pre}}_L\right)^{\nu_1},\nonumber
\end{align}
corresponding to using an exact solver on level $L-1$.
Here \(\randmat{E}^{CG}_{\ell}\) is the iteration matrix of the exact fault-prone coarse grid correction on level \(\ell\).
By replacing all fault matrices with the identity, the classical fault-free quantities are recovered:
\begin{align*}
\mat{E}_{\ell} &= \left(\mat{E}_{\ell}^{S}\right)^{\nu_{2}} \left[\mat{I}-\mat{P}\left(\mat{I}-\mat{E}_{\ell-1}^{\gamma}\right)\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell}\right] \left(\mat{E}_{\ell}^{S}\right)^{\nu_{1}}, \\
\mat{E}_{\ell}^{S}&=\mat{I}-\mat{N}_{\ell}\mat{A}_{\ell}, \\
\mat{E}_{\ell}^{TG} &= \left(\mat{E}_{\ell}^{S}\right)^{\nu_{2}} \mat{E}^{CG}_{\ell} \left(\mat{E}_{\ell}^{S}\right)^{\nu_{1}}, \\
\mat{E}^{CG}_{\ell}&=\mat{I}-\mat{P}\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell}.
\end{align*}
These are the iteration matrices of the fault-free multigrid method, smoothener, two grid method and coarse grid correction.
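For a small model problem these closed forms can be assembled explicitly and the two grid contraction checked numerically. The sketch below uses a 1D Laplacian, linear interpolation and damped Jacobi as illustrative choices only:

```python
import numpy as np

def two_grid_iteration_matrix(nc, nu1=2, nu2=2, theta=2/3):
    # E_TG = (E_S)^{nu2} E_CG (E_S)^{nu1} for a 1D Laplacian model problem
    n = 2 * nc + 1
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    P = np.zeros((n, nc))                       # linear interpolation
    for j in range(nc):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    A0 = P.T @ A @ P                            # Galerkin coarse operator, R = P^T
    ES = np.eye(n) - (theta / 2.0) * A          # damped Jacobi: N = theta D^{-1}, D = 2I
    ECG = np.eye(n) - P @ np.linalg.solve(A0, P.T @ A)
    return (np.linalg.matrix_power(ES, nu2) @ ECG
            @ np.linalg.matrix_power(ES, nu1))
```

The spectral radius of the returned matrix stays well below one, and essentially independent of the grid size, as the classical theory predicts.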
\subsection{Convergence Theory for Standard Multigrid}
\label{sec:conv-theory-fault-free}
The convergence proof of the Fault-Prone Multigrid Method is motivated by the classical analysis of the fault-free algorithm.
The standard assumptions for the convergence analysis of the fault-free multigrid method read as follows~\cite{Braess2007_FiniteElements,Hackbusch1985_MultiGridMethodsApplications,Hackbusch1994_IterativeSolutionLargeSparseSystemsEquations,TrottenbergOosterleeEtAl2001_Multigrid}:
\begin{assumption}[Smoothing property]\label{as:smoothing}
There exists \(\eta: \mathbb{N}\rightarrow \mathbb{R}_{\geq0}\) satisfying \(\lim_{\nu\rightarrow\infty}\eta(\nu) = 0\) and such that
\begin{align*}
\norm{\mat{A}_{\ell}\left(\mat{E}^{S}_{\ell}\right)^{\nu}}_{2} & \leq \eta(\nu)\norm{\mat{A}_{\ell}}_{2}, \quad \nu\geq0,~\ell=1,\dots,L.
\end{align*}
\end{assumption}
\begin{assumption}[Approximation property]\label{as:approximation}
There exists a constant \(C_{A}\) such that
\begin{align*}
\norm{\mat{E}^{CG}_{\ell}\mat{A}_{\ell}^{-1}}_{2} & \leq \frac{C_{A}}{\norm{\mat{A}_{\ell}}_{2}}, \quad \ell=1,\dots,L.
\end{align*}
\end{assumption}
The smoothing and approximation property imply two grid convergence, with convergence rate given by
\begin{align*}
\rho\left(\mat{E}^{TG}_{\ell}\right) &\leq \norm{\mat{E}^{CG}_{\ell}\left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}+\nu_{2}}}_{2} \numberthis \label{eq:tgconv}\\
&\leq \norm{\mat{E}^{CG}_{\ell}\mat{A}_{\ell}^{-1}}_{2}\norm{\mat{A}_{\ell}\left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}+\nu_{2}}}_{2} \\
&\leq C_{A}\eta\left(\nu_{1}+\nu_{2}\right).
\end{align*}
The right-hand side is less than one provided \(\nu_{1}+\nu_{2}\) is large enough.
\begin{assumption}\label{as:smoother}
The smoothener is non-expansive, i.e.\ \(\rho\left(\mat{E}^{S}_{\ell}\right)=\norm{\mat{E}^{S}_{\ell}}_{A}\leq 1\), and there exists \(C_{S}>0\) such that
\begin{align*}
\norm{\left(\mat{E}^{S}_{\ell}\right)^{\nu}}_{2} & \leq C_{S} \quad \nu\geq 1,~\ell=1,\dots,L.
\end{align*}
\end{assumption}
\Cref{as:smoother} makes it possible to show that the two grid method is also a contraction with respect to the spectral norm \(\norm{\bullet}_{2}\):
\begin{align*}
\norm{\mat{E}^{TG}_{\ell}}_{2} &\leq \norm{\left(\mat{E}^{S}_{\ell}\right)^{\nu_{2}}}_{2} \norm{\mat{E}^{CG}_{\ell}\left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2}
\leq C_{S} C_{A}\eta\left(\nu_{1}\right).
\end{align*}
While this result is weaker than \cref{eq:tgconv}, it will be useful in the fault-prone case.
\begin{assumption}\label{as:prolongation}
There exist positive constants \(\underline{C}_{p}\) and \(\overline{C}_{p}\) such that
\begin{align*}
\underline{C}_{p}^{-1}\norm{x_{\ell}}_{2} & \leq \norm{\mat{P} x_{\ell}}_{2}\leq \overline{C}_{p}\norm{x_{\ell}}_{2} \quad \forall x_{\ell}\in \mathbb{R}^{n_{\ell}},~\ell=0,\dots,L-1.
\end{align*}
\end{assumption}
\Cref{as:smoother,as:prolongation} allow us to extend the convergence theory to the multilevel case with \(\gamma\geq2\).
The most interesting case in practice is the W-cycle (\(\gamma=2\)).
\begin{align*}
\norm{\mat{E}_{\ell}}_{2} &= \norm{\left(\mat{E}^{S}_{\ell}\right)^{\nu_{2}} \left[\mat{I}-\mat{P}\left(\mat{I}-\mat{E}_{\ell-1}^{\gamma}\right)\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell}\right] \left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2} \\
&\leq \norm{\left(\mat{E}^{S}_{\ell}\right)^{\nu_{2}} \left[\mat{I}-\mat{P}\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell}\right] \left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2} \\
&\quad + \norm{\left(\mat{E}^{S}_{\ell}\right)^{\nu_{2}}}_{2} \norm{\mat{P}}_{2} \norm{\mat{E}_{\ell-1}}_{2}^{\gamma} \norm{\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell} \left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2} \\
&\leq \norm{\mat{E}^{TG}_{\ell}}_{2} + \underline{C}_{p}\overline{C}_{p}C_{S} \norm{\mat{E}_{\ell-1}}_{2}^{\gamma} \norm{\mat{P}\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell} \left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2}.
\end{align*}
Since
\begin{align*}
\norm{\mat{P}\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell} \left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2}
&\leq\norm{\left(\mat{I}-\mat{P}\mat{A}_{\ell-1}^{-1}\mat{R}\mat{A}_{\ell}\right) \left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2} + \norm{\left(\mat{E}^{S}_{\ell}\right)^{\nu_{1}}}_{2} \\
&\leq C_{S} + C_{A}\eta\left(\nu_{1}\right),
\end{align*}
we obtain the recursive inequality
\begin{align}
\norm{\mat{E}_{\ell}}_{2} &\leq \norm{\mat{E}^{TG}_{\ell}}_{2} + C_{*}\norm{\mat{E}_{\ell-1}}_{2}^{\gamma} \label{eq:18}
\end{align}
with \(C_{*}=C_{S}\underline{C}_{p}\overline{C}_{p}\left(C_{S}+C_{A}\eta\left(\nu_{1}\right)\right)\).
A classical result \cite{Hackbusch1985_MultiGridMethodsApplications} concerning this inequality is
\begin{lemma} \label{lem:reccursion}
Suppose the elements of the sequence \(\left\{\eta_{\ell}\right\}_{\ell\geq1}\) satisfy \(0\leq \eta_{1} \leq \xi\) and \(0\leq \eta_{\ell} \leq \xi + C_{*}\eta_{\ell-1}^{\gamma}\), \(\ell\geq 2\).
If \(\gamma\geq 2\), \(C_{*}\gamma > 1\) and \(\xi\leq \frac{\gamma-1}{\gamma}\frac{1}{\left(C_{*}\gamma\right)^{\frac{1}{\gamma-1}}}\), then
\begin{align*}
\eta_{\ell}&\leq \left\{
\begin{array}{ll}
\frac{\gamma}{\gamma-1}\xi, & \gamma\geq 2, \\
\frac{2\xi}{1+\sqrt{1-4C_{*}\xi}}, & \gamma=2
\end{array}\right\} < 1.
\end{align*}
\end{lemma}
The result shows that two grid convergence, along with a sufficient number of smoothing steps, implies that the multilevel scheme is convergent in the absence of faults.
\subsection{Review of Previous Work on the Fault-Prone Two Grid Method}
The iteration matrix of the Fault-Prone Multigrid Method is random, and the usual spectral radius used in the fault-free case is no longer relevant.
Instead, it transpires that the asymptotic rate of convergence in the fault-prone case is governed by the \emph{Lyapunov spectral radius} of the iteration matrix:
\begin{align*}
\varrho(\randmat{E}_L)=\lim_{N\rightarrow\infty}\exp\left\{\Exp{\log\norm{\randmat{E}_{L}^{N}}^{1/N}}\right\}.
\end{align*}
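A standard way to approximate this limit numerically is a renormalised power iteration that averages the per-step growth factors geometrically. A sketch follows; the interface `apply_E(x, rng)`, which applies one independent realisation of the random iteration matrix, is an assumed convention:

```python
import numpy as np

def lyapunov_radius(apply_E, n, iters=1000, seed=0):
    # geometric average of per-step growth factors of independent
    # realisations of E applied to x, renormalising to avoid over/underflow
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    log_growth = 0.0
    for _ in range(iters):
        x = apply_E(x, rng)
        r = np.linalg.norm(x)
        log_growth += np.log(r)
        x /= r
    return np.exp(log_growth / iters)
```

For a deterministic matrix this reduces to an estimate of the ordinary spectral radius, up to a transient that decays like \(1/\text{iters}\).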
We refer the reader to \cite{BougerolLacroix1985_ProductsRandomMatricesWith,CrisantiPaladinEtAl1993_ProductsRandomMatrices,AinsworthGlusa216_IsMultigridMethodFaultTolerant} for further discussion and details relating to the Lyapunov spectral radius.
In particular, \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant} describes the so-called \emph{Replica trick}, which gives the following bound for the Lyapunov spectral radius
\begin{align}
\varrho\left(\randmat{E}_{L}\right) & \leq \sqrt{\rho\left(\Exp{\randmat{E}_{L}\otimes\randmat{E}_{L}}\right)}. \label{eq:replicaTrick}
\end{align}
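For small problems, the right-hand side of \cref{eq:replicaTrick} can be estimated directly by Monte-Carlo sampling of the Kronecker product; the sketch below is illustrative and the sampling interface `sample_E(rng)` is an assumption:

```python
import numpy as np

def replica_bound(sample_E, n, samples=2000, seed=0):
    # sqrt of the spectral radius of a Monte-Carlo estimate of E[E (x) E],
    # where sample_E(rng) returns one n-by-n realisation of the random matrix
    rng = np.random.default_rng(seed)
    M = np.zeros((n * n, n * n))
    for _ in range(samples):
        E = sample_E(rng)
        M += np.kron(E, E)
    M /= samples
    return np.sqrt(np.max(np.abs(np.linalg.eigvals(M))))
```

For a deterministic matrix \(\mat{E}\) this returns \(\sqrt{\rho(\mat{E}\otimes\mat{E})}=\rho(\mat{E})\), so the bound is tight in the fault-free limit.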
Under suitable assumptions, it was shown \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant} that the Lyapunov spectral radius for the Fault-Prone Two Grid Method satisfies
\begin{align}
\varrho \left(\randmat{E}^{TG}_{L}\right) & \leq \norm{\mat{E}^{TG}_{L}}_{A} +
C\begin{cases}
\varepsilon n_{L}^{(4-d)/2d} & d < 4, \\
\varepsilon \sqrt{\log n_{L}} & d=4, \\
\varepsilon & d > 4,
\end{cases} \label{eq:TGbound}
\end{align}
where \(d\) is the spatial dimension of the underlying PDE, and \(\norm{\bullet}_{A}\) is the usual energy norm.
The estimate suggests, as confirmed by numerical examples, that the convergence rate of the Fault-Prone Two Grid Method degenerates as \(n_{L}\rightarrow\infty\), so that the method eventually fails to converge.
Moreover, it can be shown \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant} that protection of the prolongation operation against faults (so that \(\randmat{X}^{(P)}_{L}=\mat{I}\)), whilst allowing other sources of faults to remain, results in the Two Grid scheme being resilient:
\begin{theorem}\label{thm:TGNoProlong}
Let \(\randmat{E}^{TG}_{\ell}\left(\nu_{1},\nu_{2}\right) = \left(\randmat{E}^{S,\text{post}}_{\ell}\right)^{\nu_{2}} \randmat{E}^{CG}_{\ell} \left(\randmat{E}^{S,\text{pre}}_{\ell}\right)^{\nu_{1}}\)
be the iteration matrix of the Fault-Prone Two Grid Method with faults in smoothener, residual and restriction, and protected prolongation.
Provided \Cref{as:faults,as:smoothing,as:approximation,as:smoother,as:prolongation} hold, and that \(\mat{N}_{\ell}\) and \(\randmat{X}^{(S)}_{\ell}\) commute, we find that
\begin{align*}
\varrho \left(\randmat{E}^{TG}_{\ell}\left(\nu_{1},\nu_{2}\right)\right) & \leq \opnorm{\Exp{\left(\randmat{E}^{TG}_{\ell}\left(\nu_{1},\nu_{2}\right)\right)^{\otimes2}}}^{1/2} \leq \norm{\mat{E}^{TG}_{\ell}\left(\nu_{2},\nu_{1}\right)}_{2} + C\varepsilon,
\end{align*}
where the constant \(C\) is independent of \(\varepsilon\) and \(\ell\), and \(\opnorm{\bullet}\) is the double energy norm defined in the Appendix.
\end{theorem}
\Cref{thm:TGNoProlong} shows that the convergence of the Fault-Prone Two Grid scheme with a protected prolongation does not degenerate as \(n_{L}\rightarrow\infty\).
One of the main aims of the present work is to generalise \Cref{thm:TGNoProlong} to the case of the Fault-Prone Multigrid Method.
\section{Main Results}
\label{sec:main-results}
Consider a second order elliptic PDE with homogeneous Dirichlet boundary conditions on a polyhedral domain \(\Omega\) given in variational form by
\begin{align*}
\text{Find } u\in V = H_{0}^{1}\left(\Omega\right): a(u,v) = L(v) \quad \forall v\in V.
\end{align*}
Here, the bilinear form \(a\) is continuous and \(V\)-coercive, and the linear form \(L\) is continuous.
Using a shape regular partitioning \(\c{T}_{L}\) of \(\Omega\), the finite dimensional space of continuous piecewise polynomials of order \(k\) is defined as
\begin{align*}
V_{L}=\left\{v\in H_{0}^{1}\left(\Omega\right) \mid v|_{K} \in \mathbb{P}_{k}(K)~\forall K \in \c{T}_{L} \right\}.
\end{align*}
Letting \(\phi_{L}\) be the vector of nodal basis functions \(\phi_{L}^{(i)}\), \(i=1,\dots,n_{L}\), the solution \(u_{L}\in V_{L}\) to the PDE is approximated by
\begin{align*}
\mat{A}_{L}x_{L}=b_{L},
\end{align*}
where \(\mat{A}_{L}=a\left(\phi_{L},\phi_{L}\right)\), \(b_{L}=L\left(\phi_{L}\right)\), and \(u_{L}=\phi_{L}\cdot x_{L}\).
The hierarchy of levels for the multigrid solver is constructed from discretizations of the same problem on nested coarser meshes \(\c{T}_{\ell}\).
Uniform refinement \cite{KroegerPreusser2008_Stability8TetrahedraShortest} or adaptive mesh refinement can be used to obtain successively finer meshes, and the canonical injection of the coarser space into the finer one is used to define prolongation and restriction.
\subsection{Convergence Estimate for the Fault-Prone W-Cycle}
While the two grid method is, as a solver, mostly of academic interest, the behaviour of the multigrid method under the impact of faults is of great practical importance.
We extend the result of \Cref{thm:TGNoProlong} to the W-cycle.
This theorem mirrors the classical implication of W-cycle convergence by two grid convergence, but applies to the Lyapunov spectral radius needed for the analysis of the Fault-Prone Multigrid Method.
No additional assumptions are required beyond those needed for the classical multigrid analysis.
\begin{restatable}{theorem}{MGNoProlong}
\label{thm:MGNoProlong}
Provided \cref{as:smoothing,as:approximation,as:smoother,as:prolongation,as:faults} hold, that the prolongation is protected, that the number of smoothing steps is sufficient, and that \(\varepsilon\) is sufficiently small, the fault-prone multigrid method converges with a rate bounded by
\begin{align*}
\varrho\left(\randmat{E}_{\ell}\left(\nu_{1},\nu_{2},\gamma\right)\right)
& \leq
\begin{cases}
\frac{\gamma}{\gamma-1}C_{TG} + C\varepsilon, & \gamma\geq 2,\\
\frac{2}{1 + \sqrt{1-4C_{*}C_{TG}}}C_{TG} + C\varepsilon, & \gamma=2,
\end{cases}
\end{align*}
where \(C\) is independent of \(\varepsilon\) and \(\ell\) and
\begin{align*}
C_{TG} &= \max_{\ell}\norm{\mat{E}_{\ell}^{TG}\left(\nu_{2},\nu_{1}\right)}_{2} \leq C_{S}C_{A}\eta(\nu_{2}), \\
C_{*} &= C_{S}\underline{C}_{p}\overline{C}_{p}\left(C_{S}+C_{A}\eta(\nu_{2})\right).
\end{align*}
\end{restatable}
Setting the fault rate \(\varepsilon=0\) in the above bounds recovers the classical estimates from \Cref{sec:review} for the fault-free multigrid method, since \(\varrho\left(\randmat{E}_{\ell}\right)\) reduces to \(\rho\left(\mat{E}_{\ell}\right)\) for \(\varepsilon=0\).
Like the Two Grid result, this theorem makes no assumptions about the origin of the solver hierarchy, and does not rely on a particular choice of smoothener.
We refer the reader to \Cref{sec:proof} for the proof of \Cref{thm:MGNoProlong}.
\subsection{The Effect of Protection of the Prolongation}
\Cref{thm:MGNoProlong} assumes that the prolongation operator is protected.
We investigate whether this assumption can be relaxed by considering a numerical example where we do not protect the prolongation.
Specifically, we consider the Poisson equation in two dimensions
\begin{align}
\left\{
\begin{array}{rll}
-\Delta u &= f & \text{in }\Omega=[0,1]^{2}, \\
u&=0&\text{on }\partial\Omega,
\end{array}\right. \label{eq:poisson2d}
\end{align}
and in order to rule out extraneous effects, we use a uniform mesh, a discretization with piecewise linear finite elements, and optimally damped Jacobi pre- and post-smootheners.
In particular, it is well established that the fault-free multigrid method converges in this setting.
\Cref{as:approximation,as:smoother,as:smoothing,as:prolongation} are satisfied, as for example shown in \cite{Hackbusch1994_IterativeSolutionLargeSparseSystemsEquations}.
On every level, residual, restriction, prolongation and smootheners are subject to componentwise faults, as given in \cref{eq:componentwiseFaults}.
We consider problems of size between 1 million and 1 billion degrees of freedom, and fault probabilities between \(10^{-4}\) and \(0.6\).
To minimize floating point contamination in the approximation of the Lyapunov spectral radius, the right-hand side \(f\) is taken to be zero, a non-zero random initial iterate is chosen, and after each iteration the current iterate is renormalised.
In \Cref{fig:poisson2dgeoWcycle-residuals}, we plot the evolution of the residual norm with respect to the iteration number for the case of laissez-faire mitigation in prolongation, restriction, residual and smoothener.
We can see that the number of degrees of freedom \(n_{L}\) adversely affects the rate of convergence, even leading to divergence for large numbers of unknowns.
Estimates of the Lyapunov spectral radius are obtained as the geometric average of 1000 iterations of Fault-Prone Multigrid, and are displayed in \Cref{fig:poisson2dgeoWcycle-qd-rhoL}.
\begin{figure}
\centering
\includegraphics{poisson2dgeoWcycle-residuals}
\caption{
Plots of the norm of the residual against iteration number for the Fault-Prone W-Cycle Multigrid Method in the case of discretization of Poisson problem on square domain.
} \label{fig:poisson2dgeoWcycle-residuals}
\end{figure}
\begin{figure}
\centering
\label{fig:poisson2dgeoWcycle-qd-rhoL}
\label{fig:poisson2dgeoWcycleNoProlong-qd-rhoL}
\includegraphics{poisson2dgeoWcycle-compare-qd-rhoL}
\caption{
Lyapunov spectral radius $\varrho(\randmat{E}_L)$ for the iteration matrix for the Fault-Prone W-Cycle Multigrid Method in the case of the discretization of the Poisson problem on a square domain.
Without protected prolongation (\textit{left}), and with protected prolongation (\textit{right}).
}
\end{figure}
In the Two Grid setting which was discussed in \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant}, we found that the asymptotic rate for this problem scales like \(\sqrt{n_{L} \varepsilon^{2}}\).
Similarly, as seen in \Cref{fig:poisson2dgeoWcycle-qd-rhoL}, we find
\begin{align}
\varrho\left(\randmat{E}_{L}\right) = C_{0} + \c{O}\left(\sqrt{n_{L} \varepsilon^{2}}\right) \label{eq:5}
\end{align}
for the multilevel case, with \(C_{0}\) being a constant related to the fault-free method.
This means that the method is not fault resilient, since for any given rate of faults \(\varepsilon\), there exists a maximum problem size above which multigrid diverges.
Therefore additional protection is mandatory in the multilevel case as well.
We repeat the same experiments, this time with a protected prolongation, i.e. \(\randmat{X}_{\ell}^{(P)}=\mat{I}\).
This produces the desired independence of the problem size, as predicted by \Cref{thm:MGNoProlong} and as shown by the evolution of the residual in \Cref{fig:poisson2dgeoWcycleNoProlong-residuals} and by the estimated rate of convergence in \Cref{fig:poisson2dgeoWcycleNoProlong-qd-rhoL}.
We further illustrate the results with several test problems that pose more of a challenge to the multigrid solver.
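For reference, the componentwise laissez-faire fault model of \cref{eq:componentwiseFaults} amounts to independently zeroing each component of a vector with probability \(\varepsilon\). A minimal sketch (the function name is illustrative, not the paper's code):

```python
import numpy as np

def inject_componentwise_faults(x, eps, rng):
    """Laissez-faire componentwise faults: each entry of x is
    independently replaced by 0 with probability eps."""
    mask = rng.random(x.size) >= eps   # True where no fault occurs
    return np.where(mask, x, 0.0)
```

Protecting the prolongation, i.e. \(\randmat{X}_{\ell}^{(P)}=\mat{I}\), simply corresponds to skipping this injection for the prolongation output.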
\begin{figure}
\centering
\includegraphics{poisson2dgeoWcycleNoProlong-residuals}
\caption{
Plot of the norm of the residual against iteration number for the Fault-Prone W-Cycle Multigrid Method with protected prolongation in the case of the discretization of the Poisson problem on a square domain.
} \label{fig:poisson2dgeoWcycleNoProlong-residuals}
\end{figure}
\subsection{Adaptively Refined Meshes}
A second numerical example illustrates the results of \Cref{thm:MGNoProlong} for the case of an adaptively refined mesh.
We consider the 2D magnetostatics problem for a three phase 6/4 switched reluctance motor as depicted in \Cref{fig:motorMesh}.
Gauss's law and Amp\`ere's law are given by
\begin{align*}
\Div \vec{B} &= 0, &
\curl \vec{H} &= J,
\end{align*}
where \(\vec{B}\) is the magnetic flux density, \(\vec{H}\) the magnetic field intensity and \(J\) the current density.
\(\vec{B}\) and \(\vec{H}\) are linked through the constitutive relation \(\vec{B} = \mu \vec{H}\) with magnetic permeability \(\mu\).
Using the magnetic vector potential \(\vec{B}=\vec{\curl}~ u\), the system can be rewritten as
\begin{align*}
-\frac{1}{\mu}\Delta u &= J.
\end{align*}
This gives rise to a variational problem with
\begin{align*}
a(u,v)&= \int_{\Omega} \frac{1}{\mu}\grad u \cdot \grad v, &
F(v) &= \int_{\Omega} J v.
\end{align*}
The permeability \(\mu\) is \(5200\) in the rotor and the stator, and \(1\) everywhere else.
The current density \(J\) is unity in the coils and \(0\) everywhere else.
Using continuous piecewise linear finite elements and classical residual-based local a posteriori error indicators \cite{ErnGuermond2004_TheoryPracticeFiniteElements}, we adaptively refine an initial mesh shown in \Cref{fig:motorMesh}.
We use Red-Green refinement \cite{BankShermanEtAl1983_RefinementAlgorithmsDataStructures} coupled with a D\"orfler marking strategy \cite{Doerfler1996_ConvergentAdaptiveAlgorithmPoissonsEquation} with refinement parameter 0.5.
The solver hierarchy is generated without injection of faults; then the problem is solved for componentwise faults and varying fault rates.
\Cref{fig:motor-qd-rhoL} depicts the approximate rate of convergence of the W-Cycle with one step of Jacobi pre- and post-smoothing each and without protection of the prolongation.
We can see that although neither the W-Cycle nor the adaptively refined mesh is covered by the two grid result from \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant}, the convergence estimate is qualitatively correct.
Added protection of the prolongation operation leads to a fault resilient method, as shown in \Cref{fig:motorNoProlong-qd-rhoL}.
Small variations with respect to the number of degrees of freedom are due to varying coarsening ratios of the levels.
The results match \Cref{thm:MGNoProlong}, even though \Cref{as:approximation} is not satisfied.
\begin{figure}
\centering
\includegraphics{motorFullAdaptive-mesh0}
\caption{
Initial mesh for the motor problem.
} \label{fig:motorMesh}
\end{figure}
\begin{figure}
\centering
\label{fig:motor-qd-rhoL}
\label{fig:motorNoProlong-qd-rhoL}
\includegraphics{motor-compare-qd-rhoL}
\caption{
Lyapunov spectral radius $\varrho(\randmat{E}_L)$ for the iteration matrix for the Fault-Prone W-Cycle Multigrid Method in the case of the discretization of the motor problem.
Without protected prolongation (\textit{left}), and with protected prolongation (\textit{right}).
}
\end{figure}
\subsection{Higher Spatial Dimension and Higher Order Finite Elements}
We now demonstrate that fault resilience of the W-cycle is retained for a 3D partial differential equation and higher order (quadratic) continuous finite elements, and solve the Poisson equation on the Fichera cube:
\begin{align*}
-\Delta u &= f && \text{in } \Omega=[0,2]^{3}\setminus [1,2]^{3}, \\
u&=u_{0} && \text{on } \partial\Omega.
\end{align*}
The geometry and its uniform meshing are shown in \Cref{fig:fichera}.
The system is partitioned using METIS \cite{KarypisKumar1998_FastHighQualityMultilevel} into blocks of size \(2^{14}\) and distributed over the compute nodes.
We inject nodewise faults as given by \cref{eq:blockwiseFaults}.
The problem sizes considered range from 1.7 million to about 940 million degrees of freedom.
In \Cref{fig:fichera-qd-rhoL}, we show the approximate rate of convergence as well as a level set of the error bound obtained for the Fault-Prone Two Grid Method in \cite{AinsworthGlusa216_IsMultigridMethodFaultTolerant}.
It can be seen that the rate of convergence in the multilevel case behaves like
\begin{align*}
\varrho\left(\randmat{E}_{L}\right) = C_{0} + \c{O}\left(\varepsilon\sqrt[6]{n_{L}}\right),
\end{align*}
where \(C_{0}\) is a constant that is related to the fault-free method.
We notice that the onset of divergent behaviour indeed happens at a slower rate as compared to the 2D setting.
Protection of the prolongation again makes the method fault resilient, as shown in \Cref{fig:ficheraNoProlong-qd-rhoL}.
The choice of higher order elements plays no significant role in the convergence behaviour of the method.
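The slower onset of divergence in 3D can be seen by comparing the two empirical growth terms directly. The following sketch omits \(C_{0}\) and the hidden constants, so it is only a qualitative comparison; the function names are ours, not the paper's.

```python
import math

def rate_term_2d(n, eps):
    """Growth term sqrt(n * eps^2) from Eq. (5), observed in 2D."""
    return math.sqrt(n * eps ** 2)

def rate_term_3d(n, eps):
    """Growth term eps * n^(1/6) observed for the 3D Fichera problem."""
    return eps * n ** (1.0 / 6.0)
```

For \(n_{L}=10^{9}\) and \(\varepsilon=10^{-4}\) the 3D term is about three orders of magnitude smaller than the 2D term, consistent with the delayed onset of divergence.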
\begin{figure}
\centering
\includegraphics[scale=0.4]{fichera}
\caption{
Mesh for the Fichera cube, indicating partitioning from METIS.
} \label{fig:fichera}
\end{figure}
\begin{figure}
\centering
\label{fig:fichera-qd-rhoL}
\label{fig:ficheraNoProlong-qd-rhoL}
\includegraphics{ficheraWcycleP2Blocks-compare-qd-rhoL}
\caption{
Lyapunov spectral radius $\varrho(\randmat{E}_L)$ for the iteration matrix for the Fault-Prone W-Cycle Multigrid Method in the case of the discretization of the Poisson problem on the Fichera cube.
Without protected prolongation (\textit{left}), and with protected prolongation (\textit{right}).
}
\end{figure}
\FloatBarrier
\section{Implementation Issues}
\label{sec:implementation}
In the previous examples, we assumed that faults can be perfectly detected.
In practice, faults cannot be perfectly detected, nor is it possible to perfectly protect the prolongation.
The question therefore arises of whether \Cref{thm:MGNoProlong} has any practical relevance.
In \Cref{sec:detect-soft-faults}, we present one possible simple approach for fault detection and show that the behaviour of Fault-Prone Multigrid found in \cref{eq:5} is recovered.
In \Cref{sec:prot-prol} we turn to the issue of protecting the prolongation operator.
\subsection{Detection of Soft Faults}
\label{sec:detect-soft-faults}
The laissez-faire fault mitigation strategy requires fault detection.
While this is straightforward in the case of hard faults, soft faults are more problematic.
Various techniques have been suggested \cite{SnirWisniewskiEtAl2014_AddressingFailuresExascaleComputing}.
Here, we present a simple approach based on replication \cite{HeraultRobert2015_FaultToleranceTechniquesHighPerformanceComputing} in which a fault-prone component is repeated \(K\) times (\(K\geq2\)) and the results are compared for consistency.
The replication of node local operations is free of any communication requirements.
\Cref{alg:detection} shows how the strategy is used in conjunction with laissez-faire in the detection of faults in the computation of a generic matrix-vector product \(y=\mat{M}x\).
Instead of the action of \(\mat{M}\), only its fault-prone equivalent \(\randmat{M}\) is available in practice, where \(\randmat{M}\in\mathbb{R}^{n\times n}\) is a random matrix.
\begin{algorithm}
\begin{algorithmic}[1]
\For{$i\leftarrow 1$ \To{} $n$}
\For{$j\leftarrow 1$ \To{} $K$}
\State \(w_{j} \leftarrow (\randmat{M}x)_{i}\) \Comment{\(j\)-th replica}
\EndFor
\If{\(w_{1}=w_{2}=\dots=w_{K}\) and \(\abs{w_{1}} < 10^{16}\)}
\State \(y_{i} \leftarrow w_{1}\) \Comment{value accepted}
\Else
\State \(y_{i} \leftarrow 0\) \Comment{Laissez-faire}
\EndIf
\EndFor
\end{algorithmic}
\caption{Detection of faults using \(K\) replicas and laissez-faire mitigation for the fault-prone matrix-vector product \(y=\randmat{M}x\).}
\label{alg:detection}
\end{algorithm}
The basic idea behind \Cref{alg:detection} is to declare an operation as being fault-free if all replicas are in agreement, otherwise a fault is deemed to have occurred and laissez-faire is triggered.
This means that there are three possible outcomes for the \(i\)-th component of the output \(y\) from \Cref{alg:detection}:
\begin{align*}
&(C) && \text{ the correct value of \(y_{i}=(\mat{M}x)_{i}\) is returned,} \\
&(M) && \text{ an error is detected, the laissez-faire mitigation is triggered, and \(y_{i}=0\)}, \\
&(U) && \text{ a fault occurs but remains undetected, and \(y_{i}\) is a corrupted value.}
\end{align*}
Our analysis in \Cref{sec:main-results} caters for the cases (\(C\)) and (\(M\)), but does not take account of case (\(U\)).
Suppose that the probability of a replica being corrupted is \(\varepsilon\ll 1\).
Then the probability of \emph{all} \(K\) replicas being corrupted in \emph{exactly} the same way is \(\c{O}(\varepsilon^{K})\), showing that the likelihood of (\(U\)) occurring is exponentially small in \(K\).
Hence, the probabilities of the outcomes of \Cref{alg:detection} are
\begin{align*}
\Pr{C} &=1-\c{O}(\varepsilon), \\
\Pr{M} &=\c{O}(\varepsilon), \\
\Pr{U} &=\c{O}(\varepsilon^{K}).
\end{align*}
We illustrate the effect of using our ad-hoc fault detection strategy given in \Cref{alg:detection} in \Cref{fig:poisson2dgeoSoftWcycle2-qd-rhoL} for the Poisson problem \cref{eq:poisson2d}.
In particular, the faults are modelled using bit flipping in which any bit in the floating point representation is flipped with a small but non-zero probability such that the overall probability of a floating point number being corrupted is \(\varepsilon\).
The results are given in \Cref{fig:poisson2dgeoSoftWcycle2-qd-rhoL} in the simplest case, where \(K=2\) replicas are used in \Cref{alg:detection} for all operations.
It is observed that, even in this simplest case, the convergence behaviour mirrors that which would be obtained with perfect detection.
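A vectorised sketch of the detection strategy of \Cref{alg:detection} follows. Here whole replicated products are compared componentwise, which has the same effect as the per-component loop in the algorithm; \texttt{faulty\_matvec} is a hypothetical fault-prone matrix-vector product.

```python
import numpy as np

def detect_and_mitigate(faulty_matvec, x, K=2, bound=1e16):
    """K-fold replication with laissez-faire mitigation: recompute the
    fault-prone product K times; accept a component only if all replicas
    agree and the value is finite, otherwise set it to zero."""
    replicas = np.stack([faulty_matvec(x) for _ in range(K)])
    agree = np.all(replicas == replicas[0], axis=0)
    finite = np.abs(replicas[0]) < bound
    return np.where(agree & finite, replicas[0], 0.0)
```

Components where any replica disagrees are zeroed, which is exactly the laissez-faire branch of the algorithm.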
\begin{figure}
\centering
\includegraphics{poisson2dgeoSoftWcycle2-qd-rhoL}
\caption{
Lyapunov spectral radius $\varrho(\randmat{E}_L)$ for the iteration matrix for the Fault-Prone W-Cycle Multigrid Method in the case of the discretization of the 2D Poisson problem, with fault detection using \(K=2\) replicas in every operation.
} \label{fig:poisson2dgeoSoftWcycle2-qd-rhoL}
\end{figure}
\subsection{Protection of the Prolongation}
\label{sec:prot-prol}
\Cref{thm:MGNoProlong} requires that the prolongation operations
\begin{align*}
x_{\ell}\leftarrow x_{\ell} + \mat{P}e_{\ell-1}
\end{align*}
are computed exactly, i.e. without \emph{any} faults.
This is clearly not possible in practice.
We have seen that in the absence of full protection of the prolongation, the Fault-Prone Multigrid converges at a rate given by \cref{eq:5}, where \(\varepsilon\) is the underlying failure rate of the machine.
Moreover, \Cref{thm:MGNoProlong} suggests that the \(\c{O}(\sqrt{n_{L}\varepsilon^{2}})\) term is entirely due to faults in the prolongation (faults in the other components of multigrid contributing only higher-order terms).
At any rate, it is clear that if we can enhance the reliability of the prolongation \emph{sufficiently} (without being necessarily exact), then one can expect to ameliorate the factor \(\c{O}\left(\sqrt{n_{L}\varepsilon^{2}}\right)\) in the bound of the Lyapunov spectral radius, e.g. by obtaining a growth of \(\c{O}\left(n_{L}\varepsilon^{\alpha}\right)\) for some \(\alpha>2\).
Is it possible to improve the likelihood of detecting and mitigating faults in the prolongation beyond \(\Pr{M}=\c{O}\left(\varepsilon\right)\)?
\Cref{alg:detection} may be regarded as being overly conservative.
For instance, if all but one of the replicas are in agreement, then intuitively it seems likely that the majority are correct.
\Cref{alg:protection} implements the approach suggested by this argument by looking for agreement amongst a subset of \(k_{P}\) of the \(K_{P}\) replicas for the prolongation.
\begin{algorithm}
\begin{algorithmic}[1]
\For{$i\leftarrow 1$ \To{} $n_{\ell}$}
\For{$j\leftarrow 1$ \To{} $K_{P}$}
\State \(w_{j} \leftarrow (\randmat{P}e_{\ell-1})_{i}\) \Comment{\(j\)-th replica}
\If{\(k_{P}\) replicas have matching value \(w\) and \(\abs{w}<10^{16}\)}
\State \((x_{\ell})_{i} \leftarrow (x_{\ell})_{i} + w\) \Comment{value accepted}
\State \Break
\EndIf
\EndFor
\EndFor
\end{algorithmic}
\caption{Protection of the fault-prone prolongation \(x_{\ell}\leftarrow x_{\ell}+\randmat{P}e_{\ell-1}\) with up to \(K_{P}\) replicas, acceptance threshold \(k_{P}\) and laissez-faire mitigation.}
\label{alg:protection}
\end{algorithm}
This added freedom alters the likelihood at which the three events occur in the prolongation:
\begin{align*}
\Pr{C_{P}} & =1-\c{O}(\varepsilon^{K_{P}-k_{P}+1}), \\
\Pr{M_{P}} & =\c{O}(\varepsilon^{K_{P}-k_{P}+1}), \\
\Pr{U_{P}} & =\c{O}(\varepsilon^{k_{P}}).
\end{align*}
In particular, when \Cref{alg:protection} is applied to the computation of the prolongation with parameters \(k_{P}=3\) and \(K_{P}=4\), we obtain an overall rate of \(\Pr{M_{P}}=\c{O}(\varepsilon^{2})\) at which mitigation occurs.
\Cref{fig:poisson2dgeoSoftWcycle4-qd-rhoL} shows the results obtained by applying \Cref{alg:protection} with \(k_{P}=3\) and \(K_{P}=4\) to the prolongation, and \Cref{alg:detection} with \(K=3\) to all other operations.
It is seen that the strategy has resulted in the factor \(\c{O}\left(\sqrt{n_{L}\varepsilon^{2}}\right)\) in the Lyapunov spectral radius being replaced by \(\c{O}\left(n_{L}\varepsilon^{\alpha}\right)\) with \(\alpha=3\).
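The acceptance rule of \Cref{alg:protection} for a single component can be sketched as follows. For simplicity all \(K_{P}\) replica values are assumed to be computed up front, rather than lazily with early exit as in the algorithm; the behaviour is the same.

```python
from collections import Counter

def protected_value(replicas, k_P, bound=1e16):
    """Accept a value attained by at least k_P of the replicas
    (ignoring non-finite/out-of-range values); otherwise fall back
    to 0, i.e. laissez-faire mitigation."""
    counts = Counter(v for v in replicas if abs(v) < bound)
    for value, count in counts.most_common():
        if count >= k_P:
            return value
    return 0.0
```

With \(K_{P}=4\) and \(k_{P}=3\), a single corrupted replica no longer triggers mitigation, which is what lowers \(\Pr{M_{P}}\) to \(\c{O}(\varepsilon^{2})\).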
\begin{figure}
\centering
\includegraphics{poisson2dgeoSoftWcycle4-qd-rhoL}
\caption{
Lyapunov spectral radius $\varrho(\randmat{E}_L)$ for the iteration matrix for the Fault-Prone W-Cycle Multigrid Method with \(K_{P}=4\) replicas and acceptance threshold \(k_{P}=3\) in the prolongation and \(K=3\) replicas in all other operations.
} \label{fig:poisson2dgeoSoftWcycle4-qd-rhoL}
\end{figure}
Would a cheaper detection and protection strategy (e.g. with \(K_{P}=3\), \(k_{P}=2\) and \(K=2\)) suffice?
\Cref{fig:softFaultCurves} shows the results obtained employing a variety of different choices for \(K\), \(K_{P}\) and \(k_{P}\).
It is seen that the cheapest strategy that results in the growth \(\c{O}\left(\sqrt{n_{L}\varepsilon^{2}}\right)\) being replaced by \(\c{O}\left(n_{L}\varepsilon^{\alpha}\right)\), \(\alpha>2\), is indeed \(K_{P}=4\), \(k_{P}=3\) and \(K=3\).
\begin{figure}
\centering
\includegraphics{soft_faults_curves}
\caption{
Level sets $\varrho(\randmat{E}_L)=0.57$ (50\% more iterations than the ideal fault-free case) for different levels of detection and protection in the case of the 2D Poisson problem.
Optimal parameter combinations are marked in thick solid lines.
The obtained optimal detection and protection parameters for each region in the \(\varepsilon\)-\(n_{L}\)-plane are marked in the plot.
} \label{fig:softFaultCurves}
\end{figure}
Moreover, if one wishes to obtain a given rate \(\alpha\geq2\), then one can choose \(K=k_{P}=\alpha\) and \(K_{P}=2\alpha-2\).
As problems in three spatial dimensions are more resilient to the effect of laissez-faire mitigation, as reflected by \cref{eq:TGbound} and the numerical evidence in \Cref{sec:main-results}, we expect the strategy \(K=K_{P}=k_{P}=\alpha\) to be sufficient for \(2\leq\alpha\leq 6\).
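The parameter choice stated above can be packaged as a small helper; the exponents in the docstring follow the probability estimates given earlier for \Cref{alg:detection,alg:protection}.

```python
def replication_parameters(alpha):
    """Replication parameters for a target rate alpha >= 2, following
    the rule stated in the text: K = k_P = alpha, K_P = 2*alpha - 2.
    Then Pr(U) = O(eps**alpha) for generic operations, and for the
    prolongation Pr(M_P) = O(eps**(K_P - k_P + 1)) = O(eps**(alpha - 1))
    and Pr(U_P) = O(eps**k_P) = O(eps**alpha)."""
    if alpha < 2:
        raise ValueError("target rate alpha must be >= 2")
    return {"K": alpha, "k_P": alpha, "K_P": 2 * alpha - 2}
```

For \(\alpha=3\) this reproduces the combination \(K_{P}=4\), \(k_{P}=3\), \(K=3\) found to be optimal in the experiments.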
\section{Conclusion}
In this work, we extended previous results concerning the convergence of the Fault-Prone Two Grid Method to the multigrid case, and showed that, if the prolongation is protected against faults, then the resulting Fault-Prone Multigrid Method is resilient.
Numerical examples illustrated that this result also holds in a variety of settings where the theory does not apply, and demonstrated the necessity of protecting the prolongation.
Finally, we presented one possible simple fault detection strategy and showed that the resulting algorithm is also resilient.
\FloatBarrier
\section{Introduction}
The confinement of electron spins in host semiconductors is intensely studied in view of powerful applications in quantum computation and simulation \cite{Hanson-2007, Vandersypen-2017, Russ-2017,Rotta-2017}. Widely studied are qubits defined in electrostatically defined or self-assembled QDs \cite{Loss-1998, Veldhorst-2014, Kawakami-2014}, donor spins in solid matrices \cite{Kane-1998, Pla-2012, Pla-2013}, or combinations of them \cite{Pica-2016, Harvey-2017}. In particular, semiconductor-based qubits assure long electron spin coherence times, easy manipulation, fast gate operations, and potential for scaling, in addition to compatibility with the existing CMOS process. Starting from analytical expressions for the realization of logical gates, which are rotations about the main axes of the Bloch sphere, we study how such operations are affected when non-idealities are included in the control pulses. The control and manipulation of the qubit require electric voltages or magnetic fields, using DC pulses or microwave drives depending on the qubit type. We focus our study on three QD spin qubits: the single spin qubit \cite{Morton-2011}, the singlet-triplet qubit \cite{Levy-2002} and the hybrid qubit \cite{Shi-2012}.
The single spin qubit is realized by confining the spin of a single electron in a single QD. The logical basis is defined by adopting the two spin eigenstates $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$ as the logical $|0\rangle$ and $|1\rangle$ respectively. In a rotating frame, which rotates at the angular frequency $\omega$, under the rotating wave approximation (RWA), the Hamiltonian is given by:
\begin{equation}\label{HSS2}
H_{SS}=\frac{\hbar}{2}\Delta\omega_z\sigma^z + \frac{\hbar}{2}\Omega\cos(\phi)\sigma^x+\frac{\hbar}{2}\Omega\sin(\phi)\sigma^y,
\end{equation}
where $\sigma^{z(x,y)}$ is the Pauli operator, $\Delta\omega_z\equiv\omega_z-\omega$ where $\omega_z$ is the Larmor angular frequency, $\hbar\omega_z=g_e\mu_BB_0$ is the Zeeman energy associated with the constant magnetic field $B_0$ applied in the $z$ direction ($g_e$ is the electron g-factor and $\mu_B$ is the Bohr magneton), and $\hbar\Omega=g_e\mu_BB_1/2$ is the Zeeman energy associated with the oscillating magnetic field $B_1$, with phase $\phi$ and angular frequency $\omega$. Qubit manipulation is obtained by modulating the phase $\phi$.
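As a sanity check, the Hamiltonian \eqref{HSS2} can be assembled directly from the Pauli matrices. Working in units with \(\hbar=1\) is an assumption of this sketch, which is illustrative rather than the authors' code.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_SS(d_omega_z, Omega, phi, hbar=1.0):
    """Rotating-frame RWA Hamiltonian of the single spin qubit:
    (hbar/2)(d_omega_z*sz + Omega*cos(phi)*sx + Omega*sin(phi)*sy)."""
    return 0.5 * hbar * (d_omega_z * sz
                         + Omega * np.cos(phi) * sx
                         + Omega * np.sin(phi) * sy)
```

Setting \(\Delta\omega_z=0\) and \(\phi=0\) leaves a pure $\sigma^x$ drive, which generates the $R_x$ rotations discussed below.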
The singlet-triplet qubit is defined on states of two electrons confined in a double QD. The logical states are superpositions of two-particle spin singlet and triplet states, namely $|0\rangle\equiv|S\rangle$ and $|1\rangle\equiv|T_0\rangle$, where each QD is occupied by one electron. The effective Hamiltonian model is
\begin{equation}\label{singlet-triplet}
H_{ST}=\frac{1}{2}\Delta E_z(\sigma_{1}^z-\sigma_{2}^z)+\frac{1}{4}J\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2},
\end{equation}
where $\boldsymbol{\sigma}_1$ and $\boldsymbol{\sigma}_2$ are the Pauli matrices referring to the two electrons. The first term on the right-hand side is the Zeeman energy with $\Delta E_z\equiv\frac{1}{2}(E_1^z-E_2^z)$, namely a magnetic field gradient between the QDs; the second term is the exchange interaction through the coupling constant $J$. The ST qubit allows fast readout and fast manipulation, provided that a local magnetic field gradient has been created through, for example, the use of a micro-magnet in close proximity.
The hybrid qubit is realized by confining three electrons in a double QD. The logical states (coded in the $S=\frac{1}{2}$ and $S_z=\frac{1}{2}$ three-electron subspace) are expressed as $|0\rangle\equiv|S\rangle|\!\uparrow\rangle$ and $|1\rangle\equiv\sqrt{\frac{1}{3}}|T_0\rangle|\!\uparrow\rangle-\sqrt{\frac{2}{3}}|T_+\rangle|\!\downarrow\rangle$, where singlet and triplet states of the pair of electrons occupying one dot are combined with the states of the single electron occupying the other. The effective Hamiltonian model for single and two qubits was derived in Ref. \cite{Ferraro-2014} and in Ref. \cite{Ferraro-2015-qip}, respectively. For the single HY qubit the effective Hamiltonian is
\begin{equation}\label{Hy}
H_{HY}=\frac{1}{2}E_z(\sigma_{1}^z+\sigma_{2}^z+\sigma_{3}^z)+\frac{1}{4}J\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{2}+\frac{1}{4}J_{1}\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\sigma}_{3}+\frac{1}{4}J_{2}\boldsymbol{\sigma}_{2}\cdot\boldsymbol{\sigma}_{3},
\end{equation}
where $\boldsymbol{\sigma}_i$ ($i=1,2,3$) is the Pauli matrix of the i-th electron, $E_z$ is the Zeeman energy due to a constant global magnetic field used to initialize the qubit and the effective coupling constants $J_1$, $J_2$ and $J$ are explicitly derived in Ref. \cite{Ferraro-2014}. The key advantage of this qubit relies on the all-electrical manipulation of the qubit that assures very fast operations.
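The exchange terms \(\frac{1}{4}J\,\boldsymbol{\sigma}_i\cdot\boldsymbol{\sigma}_j\) appearing in Eqs. (\ref{singlet-triplet}) and (\ref{Hy}) can be built generically with tensor products. The following is a sketch under the assumption of a plain spin-register representation, not the authors' code.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def op_on(site_ops, n):
    """Tensor product placing the given 2x2 operators on their sites
    (site_ops: dict site -> operator) and identities elsewhere."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(n)])

def exchange(J, i, j, n):
    """(J/4) * sigma_i . sigma_j on an n-spin register, the exchange
    building block of the ST and HY Hamiltonians."""
    return 0.25 * J * sum(op_on({i: s, j: s}, n) for s in PAULIS)
```

For two spins the eigenvalues are \(-\frac{3}{4}J\) (singlet) and \(\frac{1}{4}J\) (triplet), recovering the familiar singlet-triplet splitting \(J\).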
The paper is organized as follows. Section 2 contains the presentation of the main results. The fidelities of the gate rotations $R_x(\theta)$ and $R_z(\theta)$ are calculated when the realistic transients of the control signal pulses are considered, by adopting an appropriate filter function, and the effect of the input disturbances is taken into account by using a Gaussian noise model. The rise and fall edges of the realistic input signals are obtained by applying a first-order low-pass filter function to the ideal input signals. The low-pass filter with time constant $\tau=1/f_{max}$, where $f_{max}$ is the frequency cutoff, defines the bandwidth of the realistic input signal. Section 3 contains a discussion of the main findings. Finally, Section 4 presents the key points of the theory and the methods adopted.
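The bandwidth limitation just described can be modelled with a discretised first-order low-pass filter applied to the sampled ideal pulse. The explicit discretisation below is our choice for illustration and not necessarily the one used in the paper.

```python
import numpy as np

def low_pass(u, dt, tau):
    """Discretised first-order low-pass filter tau*y' + y = u,
    modelling finite-bandwidth rise and fall edges of a control pulse
    sampled with time step dt."""
    y = np.empty_like(u, dtype=float)
    y[0] = u[0]
    a = dt / (tau + dt)   # smoothing factor of the discrete update
    for k in range(1, len(u)):
        y[k] = y[k - 1] + a * (u[k] - y[k - 1])
    return y
```

Applied to a step pulse, the output rises exponentially with time constant \(\tau\), so larger \(\tau\) (smaller cutoff \(f_{max}\)) gives slower edges.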
\section{Results}
We present for each spin qubit the results for the single qubit gate operations $R_x(\theta)$ and $R_z(\theta)$ starting from the initial condition $|\psi(0)\rangle=\frac{1}{\sqrt{2}}(|0\rangle+i|1\rangle)$. The rotations are obtained through analytical input sequences that are reported in Appendix \ref{appendixA} \cite{Ferraro-2018}. We point out that the method is general and valid for arbitrary rotation angles as well as any initial condition. Moreover, the sequences are determined in such a way that each step time is longer than 100 ps, a value that represents a current reasonable experimental limit.
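The reported quantity throughout is the infidelity \(1-F\). A minimal pure-state-overlap version of this figure of merit is sketched below; the paper does not spell out its exact fidelity definition, so this formula is an assumption made for illustration.

```python
import numpy as np

def infidelity(psi_ideal, psi_real):
    """State infidelity 1 - F with F = |<psi_ideal|psi_real>|^2,
    evaluated on the ideal and realised final states
    (assumed figure of merit, for illustration)."""
    return 1.0 - abs(np.vdot(psi_ideal, psi_real)) ** 2
```

Identical final states give an infidelity of 0, orthogonal ones give 1, matching the saturation values discussed in the comparison section.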
\subsection{$R_x(\theta)$ and $R_z(\theta)$ with bandwidth-limited pulses}
\unskip
Figure (\ref{Bloch}) shows a pictorial representation of the gate operations $R_x(\pi/2)$ and $R_z(\pi/2)$ on the Bloch sphere starting from the initial condition $|\psi(0)\rangle=\frac{1}{\sqrt{2}}(|0\rangle+i|1\rangle)$, represented by the blue arrow. The results of both rotations are represented by the red arrows, and the final states reached are explicitly written.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Bloch.pdf}
\caption{Bloch sphere. The blue arrow represents the initial condition. The red arrows represent the final conditions after the $R_x(\pi/2)$ and $R_z(\pi/2)$ gate operations.}\label{Bloch}
\end{figure}
In the next subsections devoted to SS, ST and HY qubits respectively, we report gate infidelities as a function of the rotation angle $\theta$ and the time constant $\tau$ of first-order low-pass filter function without and with the effects of the input disturbances.
\subsubsection{Single Spin qubit}
In the single spin qubit the signal sequences for the rotations along the $x$ and $z$ axes differ in the number of steps. While $R_x$ needs only one step, $R_z$ needs three steps, where the $x$ and $y$ components of the oscillating magnetic field are obtained by modifying its phase $\phi$ (see Table \ref{table}).
In Figure (\ref{SS1}) we report $R_x$ (left) and $R_z$ (right) infidelity as a function of $\theta$ and $\tau$ when bandwidth-limited input signals are considered.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRxThetaTau_SS.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzThetaTau_SS.pdf}
\caption{SS qubit. Left: $R_x$ Infidelity as a function of $\theta$ and $\tau$ when bandwidth-limited input signals are considered. Right: The same for $R_z$. Qubit parameters: $\Omega/2\pi$=1 MHz, $\Delta\omega_z$= 0.}\label{SS1}
\end{figure}
Both $R_x$ and $R_z$ infidelities increase as $\tau$ grows for all the considered rotation angles $\theta$. Note that the $R_z$ fidelity degrades more slowly for $\theta=\pi$ than for other rotation angles as $\tau$ increases.
Inclusion of the input signal disturbances in our calculations gives the results reported in Figure (\ref{SS2}) for the SS qubit. Here, by using a filter time constant of $\tau=100$ ps we present $R_x$ (left) and $R_z$ (right) infidelity as a function of $\theta$ when undisturbed (solid line, blue) and disturbed (dashed line, red) input signals are considered.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRxTheta_SS_comb.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzTheta_SS_comb.pdf}
\caption{SS qubit. Left: $R_x$ infidelity as a function of $\theta$ when undisturbed input signals (solid line, blue) and disturbed input signals with $\sigma_{\Omega/2\pi}$= 0.05 MHz \cite{Kawakami-2014} and $\sigma_{\Delta\omega_z/2\pi}$= 20 Hz \cite{DatasheetMWgenerator} (dashed line, red) are considered. The value of $\tau$ is fixed to 100 ps. Right: The same for $R_z$.}\label{SS2}
\end{figure}
Both $R_x$ and $R_z$ fidelities are heavily deteriorated by the input disturbances. Disturbed $R_x$ shows an increased infidelity as $\theta$ grows, due to the fact that control sequence times for large $\theta$ are longer than those for small $\theta$. As a result, input disturbances are integrated for longer times and the gate fidelity worsens. The same comment holds for $R_z$, but with a milder infidelity increase as $\theta$ grows.
\subsubsection{Singlet-Triplet qubit}
In the singlet-triplet qubit the signal sequences for the rotations along the $x$ and $z$ axes are reported in Table \ref{table}. $R_x$ is obtained by operating in one step with the input $\Delta E_z$, while $R_z$ needs two steps that also include the manipulation of the exchange coupling $J$.
Figure (\ref{ST1}) shows $R_x$ (left) and $R_z$ (right) infidelity as a function of $\theta$ and $\tau$.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRxThetaTau_ST.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzThetaTau_ST.pdf}
\caption{ST qubit. Left: $R_x$ Infidelity as a function of $\theta$ and $\tau$ when bandwidth-limited input signals are considered. Right: The same for $R_z$. Qubit parameters: $J$=700 neV, $\Delta E_z$= 32 neV \cite{XianWu-2014}.}\label{ST1}
\end{figure}
$R_x$ and $R_z$ infidelities increase as $\tau$ grows for all the considered rotation angles. After the inclusion of the input disturbances, the resulting ST qubit infidelities for both rotations with $\tau=100$ ps are reported in Figure (\ref{ST2}).
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRxTheta_ST_comb.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzTheta_ST_comb.pdf}
\caption{ST qubit. Left: $R_x$ Infidelity as a function of $\theta$ when undisturbed input signals (solid line, blue) and disturbed input signals with $\sigma_J$= 1 neV, $\sigma_{\Delta E_z}$= 4 neV \cite{XianWu-2014} (dashed line, red) are considered. The value of $\tau$ is fixed to 100 ps. Right: The same for $R_z$.}\label{ST2}
\end{figure}
As for the single spin qubit, singlet-triplet qubit rotations $R_x$ and $R_z$ show a strong fidelity degradation when input disturbances are included. $R_x$ infidelity grows as $\theta$ increases whereas $R_z$ infidelity is not sensitive to $\theta$ variations.
\subsubsection{Hybrid qubit}
The rotations along the $x$ and $z$ axes for the hybrid qubit are realized through signal sequences involving the effective exchange couplings $J$, $J_1$ and $J_2$. They are multi-step sequences composed of two and three steps, respectively (see Table \ref{table}).
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRxThetaTau_HY.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzThetaTau_HY.pdf}
\caption{HY qubit. Left: $R_x$ Infidelity as a function of $\theta$ and $\tau$ when bandwidth-limited input signals are considered. Right: The same for $R_z$. Qubit parameters: $J_1$=$J_2$=1 $\mu$eV, $J$= 0.5 $\mu$eV \cite{DeMichielis-2015}.}\label{HY1}
\end{figure}
As is evident in Figure (\ref{HY1}), the $R_x$ infidelity increases as $\tau$ grows for almost all the considered rotation angles. An infidelity reduction is observable for $\theta$=$\pi$ when $\tau$>1 ns. The $R_z$ infidelity instead shows a more complex behaviour than $R_x$: it increases for all $\theta$ as $\tau$ grows up to roughly 100 ps. When $\tau$ is set between 100 ps and 1 ns, some local minima in the infidelity can be observed at different $\theta$.
For $\tau$>1 ns, the infidelity is very high (larger than $10^{-2}$) and constant below $\theta$=$\pi$/2, whereas it decreases for rotation angles above $\pi$/2.
After inclusion of the input disturbances, the resulting HY qubit infidelities for both rotations with $\tau=100$ ps are reported in Figure (\ref{HY2}).
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRxTheta_HY_comb_80neV.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzTheta_HY_comb_80neV.pdf}
\caption{HY qubit. Left: $R_x$ Infidelity as a function of $\theta$ when undisturbed input signals (solid line, blue) and disturbed input signals with $\sigma_{J}$= 80 neV \cite{Thorgrimsson-2017} (dashed line, red) are considered. The value of $\tau$ is fixed to 100 ps. Right: The same for $R_z$.}\label{HY2}
\end{figure}
Hybrid qubit rotation $R_x$ shows a weak fidelity degradation with respect to the undisturbed case; in addition, a slight degradation of the fidelity can be observed as $\theta$ increases. Conversely, the $R_z$ infidelity is strongly affected by the input disturbances in the entire range of $\theta$ studied, with a weak additional degradation for large $\theta$.
\subsection{Fidelity comparison}
A comparison of the three spin qubits under study is shown here. The infidelity behaviour as a function of $\tau$ is analysed for $R_x(\pi/2)$ (Fig. (\ref{F1})) and $R_z(\pi/2)$ (Fig. (\ref{F2})). The results are presented when undisturbed pulses are considered (left) and when input disturbances are included (right). For both operations we observe that the fidelity decreases as $\tau$ grows.
From Fig. \ref{F1} we may conclude that, for the same initial condition of the gate operation considered, the SS qubit assures greater fidelities than the ST and HY qubits. We observe that the HY qubit, which has shorter sequences, is instead the most sensitive to $\tau$ variations. When the input disturbance is also included (right), the SS and ST infidelities saturate as $\tau$ is reduced; the HY infidelity saturates as well, but at smaller values of $\tau$. The No-Operation curve describes the case in which the realistic pulses performing the rotations are not applied. In this case the fidelity equals $1/2$, due to the reciprocal position of the initial and ideal final qubit states on the Bloch sphere. When $\tau$ is large, the time variation of the control signal is so small that the qubit state is barely rotated away from the initial condition; thus the infidelity saturates to the No-Operation value as $\tau$ increases.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRx_all.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRxNoise_all.pdf}
\caption{$R_x(\pi/2)$ infidelity as a function of $\tau$ for SS (solid line, blue), ST (dashed line, red), HY (dot-dashed line, green) qubit and No-Operation (dotted line, black). Left: with undisturbed input signals. Right: with disturbed input signals.}\label{F1}
\end{figure}
The situation changes for $R_z(\pi/2)$ (Fig. \ref{F2}), which is obtained with multiple-step sequences (see Table \ref{table}). The SS qubit is once again the most robust to $\tau$ variations but, differently from $R_x(\pi/2)$, the most sensitive is the ST qubit. The HY qubit presents local minima in the gate infidelity. Those local minima originate because the selected gate sequences are not the shortest ones in absolute terms: they have to last longer than 100 ps. Such a step-time elongation is obtained by increasing the parameter $n$ (see Table \ref{table}), causing the qubit state to make additional complete rotations on the Bloch sphere during the step. When input signal sequences of this kind are filtered at a given $\tau$, the consequent delay in the signal switching (on and off) generates partial rotations on the Bloch sphere that, if they sum up to a $2\pi$ rotation, can lead to operations with low infidelity. The presence of an infidelity maximum for the ST qubit before the saturation to the No-Operation infidelity value (dotted line, black) is strictly connected to the nature of the multi-step operation. The HY qubit presents a similar behavior, except that the infidelity does not reach the No-Operation value in the range of $\tau$ studied, since the $R_z(\pi/2)$ operation requires that the exchange coupling $J$, which is not tunable from external gates, remains turned on during the whole operation.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{FidelityRz_all.pdf}\ \includegraphics[width=0.45\textwidth]{FidelityRzNoise_all.pdf}
\caption{$R_z(\pi/2)$ infidelity as a function of $\tau$ for SS (solid line, blue), ST (dashed line, red), HY (dot-dashed line, green) qubit and No-Operation (dotted line, black). Left: with undisturbed input signals. Right: with disturbed input signals.}\label{F2}
\end{figure}
\section{Discussion}
As expected, when bandwidth-limited input signals for $x$ and $z$ rotations are considered, all the qubit types show a reduction of the gate fidelities for increasing $\tau$. The inclusion of signal amplitude disturbances further deteriorates the gate fidelity at reduced $\tau$, creating plateaus and leading to fidelity values in the range between 90\% and 99.99\%.
The presence of local minima in the gate infidelities of the HY qubit suggests that optimal working points can be identified, achieving not only a reduced infidelity but also a simultaneous relaxation of the bandwidth requirements of the input system (larger $\tau$). Using state-of-the-art parameter values, we conclude that the hybrid qubit has the lowest infidelity, provided that input signals with large enough bandwidth are available to realize its fast sequences. Conversely, the single-spin qubit shows low infidelities even at relatively high time constants, thanks to the slow pulse times of its sequences.
\section{Methods}
The qubit dynamics is obtained by solving the master equation $\frac{\partial\rho}{\partial t}=-\frac{i}{\hbar}[H,\rho]$ for the total density matrix $\rho$, where $H$ is the effective qubit Hamiltonian ($H\equiv H_{SS},H_{ST},H_{HY}$) in the logical basis $\{|0\rangle,|1\rangle\}$. The ideal gate sequences for $R_x(\theta)$ and $R_z(\theta)$ are analytically derived and reported in Appendix \ref{appendixA}.
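The master-equation step above can be sketched numerically. The following is a minimal illustration (not the authors' simulation code), assuming $\hbar=1$ and a Hamiltonian that is constant over each pulse step, so that the evolution reduces to $\rho(t)=U\rho_0 U^\dagger$ with $U=e^{-iHt}$:

```python
import numpy as np

def evolve(rho0, H, t):
    """Unitary evolution rho(t) = U rho0 U^dagger with U = exp(-i H t),
    for a constant Hermitian Hamiltonian H (units with hbar = 1)."""
    evals, V = np.linalg.eigh(H)  # H is Hermitian
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
    return U @ rho0 @ U.conj().T

# Example: x-rotation generated by H = (Omega/2) sigma_x, angle theta = Omega*t.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
rho_pi = evolve(rho0, 0.5 * sigma_x, np.pi)        # R_x(pi): final state is |1><1|
```

A piecewise-constant pulse sequence would chain `evolve` calls, one per step of the gate sequence.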
The solution of the ideal dynamics, in which the applied pulses have ideal rise and fall edges (squared signals), is compared with the realistic situation in which the rise and the fall edges of the input signals are described by a first-order low-pass filter function with time constant $\tau$.
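The first-order low-pass smoothing of the pulse edges can be sketched as follows (an illustrative model only; the amplitude and switching times below are made-up values, not parameters from the paper):

```python
import numpy as np

def filtered_pulse(t, t_on, t_off, amp, tau):
    """Square pulse of amplitude `amp` between t_on and t_off, passed
    through a first-order low-pass filter with time constant tau:
    the rising edge follows amp*(1 - exp(-(t - t_on)/tau)) and the
    falling edge decays exponentially from the switch-off value."""
    y = np.zeros_like(t)
    rise = (t >= t_on) & (t < t_off)
    y[rise] = amp * (1 - np.exp(-(t[rise] - t_on) / tau))
    y_off = amp * (1 - np.exp(-(t_off - t_on) / tau))  # value reached at t_off
    fall = t >= t_off
    y[fall] = y_off * np.exp(-(t[fall] - t_off) / tau)
    return y

t = np.linspace(0, 500e-12, 1001)                      # 0-500 ps grid
y = filtered_pulse(t, t_on=100e-12, t_off=300e-12, amp=1.0, tau=100e-12)
```

For large $\tau$ the pulse never approaches its nominal amplitude, which is the mechanism behind the fidelity loss discussed above.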
Moreover, the non-idealities of the input amplitudes are also included. Following the quasi-static model, the errors on the input signals are modeled as random variables with Gaussian distributions of zero mean and standard deviation $\sigma$ that add to the ideal values. The figure of merit used to estimate the disturbance effects is the fidelity $F=\left[\Tr\sqrt{\sqrt{\rho^{ideal}}\rho^{real}\sqrt{\rho^{ideal}}}\right]^2$, which measures how far the real state is from the ideal one.
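The figure of merit above is the standard Uhlmann fidelity between two density matrices. A minimal sketch of its evaluation (with illustrative states, not taken from the paper):

```python
import numpy as np

def sqrtm_psd(rho):
    """Matrix square root of a positive semidefinite Hermitian matrix,
    via its eigendecomposition."""
    evals, V = np.linalg.eigh(rho)
    return V @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ V.conj().T

def fidelity(rho_ideal, rho_real):
    """Uhlmann fidelity F = [Tr sqrt(sqrt(rho_i) rho_r sqrt(rho_i))]^2."""
    s = sqrtm_psd(rho_ideal)
    return float(np.real(np.trace(sqrtm_psd(s @ rho_real @ s))) ** 2)

rho_i = np.array([[1, 0], [0, 0]], dtype=complex)       # ideal state |0><0|
rho_r = np.array([[0.5, 0], [0, 0.5]], dtype=complex)   # maximally mixed state
f_same = fidelity(rho_i, rho_i)   # ≈ 1.0: identical states
f_mix = fidelity(rho_i, rho_r)    # ≈ 0.5: half overlap with the mixed state
```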
\section*{Acknowledgments}
This work has been funded by the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 688539.
\section{Introduction}
\subsection{Motivation}
The capacity of the interference channel remains one of the most challenging open problems in network information theory. The capacity region is not known in general, except for specific ranges of channel parameters. For the two-user scalar Gaussian interference channel, where interference alignment is not required, the capacity region is known to within one bit \cite{EtkinTseWang08}. For channels where interference alignment is required, such as the $K$-user Gaussian interference channel \cite{MotahariGharanMaddahAliKhandani14,SridharanVishwanathJafar08,SridharanJafarianVishwanathJafarShamai08,BreslerParekhTse10,JafarVishwanath10,OrdentlichErezNazer14} and the Gaussian X-channel \cite{MotahariGharanMaddahAliKhandani14,HuangCadambeJafar12,NiesenMaddahAli13}, a tight characterization of the capacity region is not known, even for symmetric channels.
A tractable approach to the capacity of interference channels is to consider partial connectivity of the interference links and analyze the impact of topology on the capacity. The topological interference management approach \cite{Jafar2014} gives important insights into the degrees-of-freedom (DoF) of partially connected interference channels and their connection to index coding problems \cite{BirkKol98,Bar-YossefBirkJayramKol06,Bar-YossefBirkJayramKol11,AlonLubetzkyStavWeinsteinHassidim08,MalekiCadambeJafar2012,ArbabjolfaeiBandemerKimSasogluWang13,Ong14,EffrosElRouayhebLangberg15}. It is shown that the symmetric DoF of a partially connected interference channel can be found by solving the corresponding index coding problem.
In this paper, we consider a class of three-user partially connected interference channels and characterize approximate capacity regions at finite SNR. We focus on the impact of interference topology, interference alignment, and interplay between interference and noise. We choose a few representative topologies where we can achieve clear interference alignment gain. For these topologies, Z-channel type outer bounds are tight to within a constant gap from the corresponding inner bound. For each topology, we present an achievable scheme based on rate-splitting, lattice alignment, and successive decoding.
\subsection{Related Work}
Lattice coding based on nested lattices is shown to achieve the capacity of the single-user Gaussian channel in \cite{ErezZamir04,Zamir14}. The idea of lattice-based interference alignment by decoding the sum of lattice codewords appeared in the conference version of \cite{BreslerParekhTse10}. This lattice alignment technique is used to derive capacity bounds for the three-user interference channel in \cite{SridharanVishwanathJafar08,SridharanJafarianVishwanathJafarShamai08}. The idea of decoding the sum of lattice codewords is also used in \cite{WilsonNarayananPfisterSprintson10,NamChungLee10,NamChungLee11} to derive the approximate capacity of the two-way relay channel. An extended approach, compute-and-forward \cite{NazerGastpar11,GastparNazer11}, enables receivers to first decode linear combinations of lattice codewords and then solve the lattice equations to recover the desired messages. This approach is also used in \cite{OrdentlichErezNazer14} to characterize the approximate sum-rate capacity of the fully connected $K$-user interference channel.
The idea of sending multiple copies of the same sub-message at different signal levels, so-called Zigzag decoding, appeared in \cite{JafarVishwanath10}, where receivers collect side information and use it for interference cancellation.
The $K$-user cyclic Gaussian interference channel is considered in \cite{ZhouYu13}, where an approximate capacity for the weak interference regime ($\textrm{SNR}_k\geq \textrm{INR}_k$ for all $k$) and the exact capacity for the strong interference regime ($\textrm{SNR}_k\leq \textrm{INR}_k$ for all $k$) are derived. Our type 4 and 5 channels are the $K=3$ cases in \emph{mixed} interference regimes, which were not considered in \cite{ZhouYu13}.
\subsection{Main Results}
We consider five channel types defined in Table \ref{tab:TxRxSignals} and described in Fig. \ref{fig:channelType} (a)--(e). Each channel type is a partially connected three-user Gaussian interference channel. Each transmitter is subject to power constraint $\mathbb{E}[X_k^2]\leq P_k=P$. Let us denote the noise variance by $N_k=\mathbb{E}[Z_k^2]$. Without loss of generality, we assume that $N_1\leq N_2\leq N_3$.
\begin{definition}[side information graph]
The side information graph representation of an interference channel satisfies the following.
\begin{itemize}
\item A node represents a transmitter-receiver pair, or equivalently, the message.
\item There is a directed edge from node $i$ to node $j$ if transmitter $i$ does not interfere at receiver $j$.
\end{itemize}
\end{definition}
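Under this definition, the side information graph can be read directly off the channel connectivity matrix. A hypothetical sketch (nodes 0-indexed, rows of `H` indexed by receiver and columns by transmitter) illustrates this for channel type 4:

```python
def side_info_graph(H):
    """Side information graph edges: a directed edge i -> j exists iff
    transmitter i does not interfere at receiver j (H[j][i] == 0, i != j)."""
    K = len(H)
    return {(i, j) for i in range(K) for j in range(K)
            if i != j and H[j][i] == 0}

# Channel type 4: Y1 = X1 + X3, Y2 = X1 + X2, Y3 = X2 + X3.
H4 = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 1]]
edges = sorted(side_info_graph(H4))  # [(0, 2), (1, 0), (2, 1)]
```

In 1-indexed notation these edges are $2\to 1$, $3\to 2$, $1\to 3$, matching $\mathcal{G}_4=\{(1|2),(2|3),(3|1)\}$ in Fig. \ref{fig:channelType}.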
The side information graphs of the five channel types are described in Fig. \ref{fig:channelType} (f)--(j). We state the main results in the following two theorems, whose proofs are given in the main body of the paper.
\begin{theorem}[Capacity region outer bound]
For the five channel types, if $(R_1,R_2,R_3)$ is achievable, it must satisfy
\begin{eqnarray}
\sum_{j\in\mathcal{K}} R_j \leq \frac{{1}}{{2}}\log\left(1+\frac{|\mathcal{K}|P}{\min_{j\in\mathcal{K}}\{N_j\}}\right)
\end{eqnarray}
for every subset $\mathcal{K}$ of the nodes $\{1,2,3\}$ that does not include a directed cycle in the side information graph over the subset.
\end{theorem}
\begin{theorem}[Capacity region to within one bit]\ \\
For any rate triple $(R_1,R_2,R_3)$ on the boundary of the outer bound region, the point $(R_1-1,R_2-1,R_3-1)$ is achievable.
\end{theorem}
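As a numeric illustration of Theorem 1 (a sketch with made-up values of $P$ and $N_k$, not results from the paper), the cycle-free subsets of the side information graph and the corresponding sum-rate bounds can be enumerated by brute force:

```python
from itertools import combinations, permutations
from math import log2

def has_cycle(edges, subset):
    """True iff the side information graph restricted to `subset`
    contains a directed cycle (brute force; fine for 3 nodes)."""
    sub = {(i, j) for (i, j) in edges if i in subset and j in subset}
    for r in range(2, len(subset) + 1):
        for cyc in permutations(subset, r):
            if all((cyc[k], cyc[(k + 1) % r]) in sub for k in range(r)):
                return True
    return False

def outer_bounds(edges, P, N):
    """Sum-rate bound of Theorem 1, (1/2)log(1 + |K| P / min N_j),
    for each subset K without a directed cycle."""
    bounds = {}
    for r in range(1, len(N) + 1):
        for K in combinations(range(len(N)), r):
            if not has_cycle(edges, set(K)):
                bounds[K] = 0.5 * log2(1 + r * P / min(N[j] for j in K))
    return bounds

# Channel type 4 side information graph, 0-indexed: G4 = {(1|2),(2|3),(3|1)}.
edges4 = {(1, 0), (2, 1), (0, 2)}
b = outer_bounds(edges4, P=10.0, N=[1.0, 2.0, 4.0])
```

For type 4 the full set $\{1,2,3\}$ contains a directed cycle, so only the individual and pairwise sum-rate bounds survive, consistent with the outer bound region derived in Section II.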
\begin{table}
\begin{center}
\begin{tabular}{|c|l|}
\hline
Type & \ \ \ \ \ \ \ \ Channel model \\
\hline
$1$
& $\left. {\begin{array}{*{20}l}
Y_1=X_1+X_2+Z_1\\
Y_2=X_1+X_2+X_3+Z_2\\
Y_3=X_2+X_3+Z_3\\
\end{array} } \right.$
\\
\hline
$2$
& $\left. {\begin{array}{*{20}l}
Y_1=X_1+X_2+X_3+Z_1\\
Y_2=X_1+X_2+Z_2\\
Y_3=X_1+X_3+Z_3\\\end{array} } \right.$
\\
\hline
$3$
& $\left. {\begin{array}{*{20}l}
Y_1=X_1+X_3+Z_1\\
Y_2=X_2+X_3+Z_2\\
Y_3=X_1+X_2+X_3+Z_3\\
\end{array} } \right.$
\\
\hline
$4$
& $\left. {\begin{array}{*{20}l}
Y_1=X_1+X_3+Z_1\\
Y_2=X_1+X_2+Z_2\\
Y_3=X_2+X_3+Z_3\\
\end{array} } \right.$
\\
\hline
$5$
& $\left. {\begin{array}{*{20}l}
Y_1=X_1+X_2+Z_1\\
Y_2=X_2+X_3+Z_2\\
Y_3=X_1+X_3+Z_3\\
\end{array} } \right.$
\\
\hline
\end{tabular}
\caption{Five channel types}
\label{tab:TxRxSignals}
\end{center}
\end{table}
\subsection{Paper Organization and Notation}
The capacity outer bounds are derived in Section II. The inner bounds for each channel type and the corresponding gap analyses are given in Sections III--VII, respectively. Section VIII concludes the paper. While lattice coding-based achievable rate regions for channel types 4 and 5 are presented in Sections VI and VII, random coding achievability is given in the Appendix.
Signal $\mathbf{x}_{ij}$ is a coded version of message $M_{ij}$ with code rate $R_{ij}$ unless otherwise stated. The single user capacity at receiver $k$ is denoted by $C_k=\frac{{1}}{{2}}\log\left(1+\frac{P}{N_k}\right)$. Let $\mathcal{C}$ denote the capacity region of an interference channel. Also, let $\mathcal{R}_i$ and $\mathcal{R}_o$ denote the capacity inner bound and the capacity outer bound, respectively. Thus, $\mathcal{R}_i\subset\mathcal{C}\subset\mathcal{R}_o$. Let $\delta_k$ denote the gap on the rate $R_k$ between $\mathcal{R}_i$ and $\mathcal{R}_o$. Let $\delta_{jk}$ denote the gap on the sum-rate $R_j+R_k$ between $\mathcal{R}_i$ and $\mathcal{R}_o$. For example, if
\begin{eqnarray}
&&\mathcal{R}_i=\{(R_j,R_k): R_k\leq L_k, R_j+R_k\leq L_{jk}\}\\
&&\mathcal{R}_o=\{(R_j,R_k): R_k\leq U_k, R_j+R_k\leq U_{jk}\},
\end{eqnarray}
then $\delta_k=U_k-L_k$ and $\delta_{jk}=U_{jk}-L_{jk}$. For side information graphs, we use the graph notation of \cite{ArbabjolfaeiBandemerKimSasogluWang13}. For example, $\mathcal{G}_1=\{(1|3),(2),(3|1)\}$ means that node 1 has an incoming edge from node 3, that node 2 has no incoming edge, and that node 3 has an incoming edge from node 1.
\begin{figure*}[tp]
\begin{center}
\mbox{
\subfigure[Type 1]{\includegraphics[width=0.18\textwidth]{IFC1-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Type 2]{\includegraphics[width=0.18\textwidth]{IFC2-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Type 3]{\includegraphics[width=0.18\textwidth]{IFC3-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Type 4]{\includegraphics[width=0.18\textwidth]{IFC4-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Type 5]{\includegraphics[width=0.18\textwidth]{IFC5-eps-converted-to.pdf}}
}
\mbox{
\subfigure[$\mathcal{G}_1$]{\includegraphics[width=0.18\textwidth]{type1-eps-converted-to.pdf}}
}
\mbox{
\subfigure[$\mathcal{G}_2$]{\includegraphics[width=0.18\textwidth]{type2-eps-converted-to.pdf}}
}
\mbox{
\subfigure[$\mathcal{G}_3$]{\includegraphics[width=0.18\textwidth]{type3-eps-converted-to.pdf}}
}
\mbox{
\subfigure[$\mathcal{G}_4$]{\includegraphics[width=0.18\textwidth]{type4-eps-converted-to.pdf}}
}
\mbox{
\subfigure[$\mathcal{G}_5$]{\includegraphics[width=0.18\textwidth]{type5-eps-converted-to.pdf}}
}
\caption{Five channel types and their side information graphs: $\mathcal{G}_1=\{(1|3),(2),(3|1)\}$, $\mathcal{G}_2=\{(1),(2|3),(3|2)\}$, $\mathcal{G}_3=\{(1|2),(2|1),(3)\}$, $\mathcal{G}_4=\{(1|2),(2|3),(3|1)\}$, and $\mathcal{G}_5=\{(1|3),(2|1),(3|2)\}$.}
\label{fig:channelType}
\end{center}
\end{figure*}
\section{Capacity outer bounds}
We prove the capacity outer bound in \emph{Theorem 1} for each channel type. The result is summarized in Table \ref{tab:outerBounds}. The shape of the outer bound region is illustrated in Fig. \ref{fig:outerBoundShape}. For all channel types, we assume $P_1=P_2=P_3=P$ and $N_1\leq N_2\leq N_3$.
\subsection{Channel Type 1}
In this section, we present an outer bound on the capacity region of Type 1 channel defined by
\[
\left[ {\begin{array}{*{20}c}
Y_1\\
Y_2\\
Y_3\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
1 & 1 & 0\\
1 & 1 & 1\\
0 & 1 & 1\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1\\
Z_2\\
Z_3\\
\end{array} } \right].
\]
We state the outer bound in the following theorem.
\begin{theorem}
The capacity region of Type 1 channel is contained in the following outer bound region:
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
&&R_1+R_2\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_2}{P+N_2}\right)\\
&&R_2+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_2}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{IEEEproof}
The individual rate bounds are obvious. We proceed to the sum-rate bounds.
\begin{eqnarray*}
&&n(R_1+R_2-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)\\
&&\ \ \leq I(X_1^n;Y_1^n|X_2^n)+I(X_2^n;Y_2^n|X_3^n)\\
&&\ \ = h(Y_1^n|X_2^n)-h(Y_1^n|X_1^n,X_2^n)\\
&&\ \ \ \ \ \ \ +h(Y_2^n|X_3^n)-h(Y_2^n|X_2^n,X_3^n)\\
&&\ \ = h(X_1^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \ \ \ \ \ +h(X_1^n+X_2^n+Z_2^n)-h(X_1^n+Z_2^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_1}{N_1}\right)+\frac{n}{2}\log\left(\frac{2P+N_2}{P+N_2}\right)
\end{eqnarray*}
where the first inequality follows from Fano's inequality and the second from the independence of $X_1^n,X_2^n,X_3^n$. The last inequality holds because the Gaussian distribution maximizes differential entropy and because $h(X_1^n+Z_1^n)-h(X_1^n+Z_2^n)$ is also maximized by a Gaussian distribution. Similarly,
\begin{eqnarray*}
&&n(R_2+R_3-\epsilon)\\
&&\ \ \leq I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_2^n;Y_2^n|X_1^n,X_3^n)+I(X_3^n;Y_3^n)\\
&&\ \ = h(Y_2^n|X_1^n,X_3^n)-h(Y_2^n|X_1^n,X_2^n,X_3^n)\\
&&\ \ \ \ \ \ \ +h(Y_3^n)-h(Y_3^n|X_3^n)\\
&&\ \ = h(X_2^n+Z_2^n)-h(Z_2^n)\\
&&\ \ \ \ \ \ \ +h(X_2^n+X_3^n+Z_3^n)-h(X_2^n+Z_3^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_2}{N_2}\right)+\frac{n}{2}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\end{IEEEproof}
\end{theorem}
\subsection{Channel Type 2}
In this section, we present an outer bound on the capacity region of Type 2 channel defined by
\[
\left[ {\begin{array}{*{20}c}
Y_1\\
Y_2\\
Y_3\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
1 & 1 & 1\\
1 & 1 & 0\\
1 & 0 & 1\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1\\
Z_2\\
Z_3\\
\end{array} } \right].
\]
We state the outer bound in the following theorem.
\begin{theorem}
The capacity region of Type 2 channel is contained in the following outer bound region:
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
&&R_1+R_2\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_2}{P+N_2}\right)\\
&&R_1+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{IEEEproof}
\begin{eqnarray*}
&&n(R_1+R_2-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)\\
&&\ \ \leq I(X_1^n;Y_1^n|X_2^n,X_3^n)+I(X_2^n;Y_2^n)\\
&&\ \ = h(Y_1^n|X_2^n,X_3^n)-h(Y_1^n|X_1^n,X_2^n,X_3^n)\\
&&\ \ \ \ \ \ \ +h(Y_2^n)-h(Y_2^n|X_2^n)\\
&&\ \ = h(X_1^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \ \ \ \ \ +h(X_1^n+X_2^n+Z_2^n)-h(X_1^n+Z_2^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_1}{N_1}\right)+\frac{n}{2}\log\left(\frac{2P+N_2}{P+N_2}\right).
\end{eqnarray*}
\begin{eqnarray*}
&&n(R_1+R_3-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_1^n;Y_1^n|X_2^n,X_3^n)+I(X_3^n;Y_3^n)\\
&&\ \ = h(Y_1^n|X_2^n,X_3^n)-h(Y_1^n|X_1^n,X_2^n,X_3^n)\\
&&\ \ \ \ \ \ \ +h(Y_3^n)-h(Y_3^n|X_3^n)\\
&&\ \ = h(X_1^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \ \ \ \ \ +h(X_1^n+X_3^n+Z_3^n)-h(X_1^n+Z_3^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_1}{N_1}\right)+\frac{n}{2}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\end{IEEEproof}
\end{theorem}
\subsection{Channel Type 3}
In this section, we present an outer bound on the capacity region of Type 3 channel defined by
\[
\left[ {\begin{array}{*{20}c}
Y_1\\
Y_2\\
Y_3\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
1 & 0 & 1\\
0 & 1 & 1\\
1 & 1 & 1\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1\\
Z_2\\
Z_3\\
\end{array} } \right].
\]
We state the outer bound in the following theorem.
\begin{theorem}
The capacity region of Type 3 channel is contained in the following outer bound region:
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
&&R_1+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_3}{P+N_3}\right)\\
&&R_2+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_2}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{IEEEproof}
\begin{eqnarray*}
&&n(R_1+R_3-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_1^n;Y_1^n|X_3^n)+I(X_3^n;Y_3^n|X_2^n)\\
&&\ \ = h(Y_1^n|X_3^n)-h(Y_1^n|X_1^n,X_3^n)\\
&&\ \ \ \ \ \ \ +h(Y_3^n|X_2^n)-h(Y_3^n|X_2^n,X_3^n)\\
&&\ \ = h(X_1^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \ \ \ \ \ +h(X_1^n+X_3^n+Z_3^n)-h(X_1^n+Z_3^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_1}{N_1}\right)+\frac{n}{2}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{eqnarray*}
&&n(R_2+R_3-\epsilon)\\
&&\ \ \leq I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_2^n;Y_2^n|X_3^n)+I(X_3^n;Y_3^n|X_1^n)\\
&&\ \ = h(Y_2^n|X_3^n)-h(Y_2^n|X_2^n,X_3^n)\\
&&\ \ \ \ \ \ \ +h(Y_3^n|X_1^n)-h(Y_3^n|X_1^n,X_3^n)\\
&&\ \ = h(X_2^n+Z_2^n)-h(Z_2^n)\\
&&\ \ \ \ \ \ \ +h(X_2^n+X_3^n+Z_3^n)-h(X_2^n+Z_3^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_2}{N_2}\right)+\frac{n}{2}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\end{IEEEproof}
\end{theorem}
\begin{figure}[tp]
\begin{center}
\mbox{
\subfigure[Channel type 1]{\includegraphics[width=0.22\textwidth]{type1_region-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Channel types 4 and 5]{\includegraphics[width=0.22\textwidth]{type4_region-eps-converted-to.pdf}}
}
\caption{The shape of the outer bound region. The regions for channel types 2 and 3 look similar to the one for channel type 1 (with change of axis).}
\label{fig:outerBoundShape}
\end{center}
\end{figure}
\subsection{Channel Type 4}
In this section, we present an outer bound on the capacity region of Type 4 channel defined by
\[
\left[ {\begin{array}{*{20}c}
Y_1\\
Y_2\\
Y_3\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
1 & 0 & 1\\
1 & 1 & 0\\
0 & 1 & 1\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1\\
Z_2\\
Z_3\\
\end{array} } \right].
\]
This is a cyclic Gaussian interference channel \cite{ZhouYu13}. We first show that channel type 4 is in the mixed interference regime. By normalizing the noise variances, we get the equivalent channel given by
\[
\left[ {\begin{array}{*{20}c}
Y_1'\\
Y_2'\\
Y_3'\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
h_{11} & h_{12} & h_{13}\\
h_{21} & h_{22} & h_{23}\\
h_{31} & h_{32} & h_{33}\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1'\\
Z_2'\\
Z_3'\\
\end{array} } \right]
\]
where $Y_k'=\frac{1}{\sqrt{N_k}} Y_k$, $Z_k'=\frac{1}{\sqrt{N_k}}Z_k$, $N_0=\mathbb{E}[Z_k'^2]=1$, $\mathbb{E}[X_k^2]\leq P_k=P$ and
\[
\left[ {\begin{array}{*{20}c}
h_{11} & h_{12} & h_{13}\\
h_{21} & h_{22} & h_{23}\\
h_{31} & h_{32} & h_{33}\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
\frac{1}{\sqrt{N_1}} & 0 & \frac{1}{\sqrt{N_1}}\\
\frac{1}{\sqrt{N_2}} & \frac{1}{\sqrt{N_2}} & 0\\
0 & \frac{1}{\sqrt{N_3}} & \frac{1}{\sqrt{N_3}}\\
\end{array} } \right].
\]
With the usual definitions of $\textrm{SNR}_k=\frac{h_{kk}^2 P_k}{N_0}$ and $\textrm{INR}_k=\frac{h_{jk}^2 P_k}{N_0}$ for $j\neq k$ as in \cite{EtkinTseWang08,ZhouYu13},
\begin{eqnarray}
&&\textrm{SNR}_1=\frac{P}{N_1} \geq \textrm{INR}_1=\frac{P}{N_2}\\
&&\textrm{SNR}_2=\frac{P}{N_2} \geq \textrm{INR}_2=\frac{P}{N_3}\\
&&\textrm{SNR}_3=\frac{P}{N_3} \leq \textrm{INR}_3=\frac{P}{N_1}.
\end{eqnarray}
We state the outer bound in the following theorem.
\begin{theorem}
The capacity region of Type 4 channel is contained in the following outer bound region:
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
&&R_1+R_2\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_2}{P+N_2}\right)\\
&&R_1+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)\\
&&R_2+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_2}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{IEEEproof}
\begin{eqnarray*}
&&n(R_1+R_2-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)\\
&&\ \ \leq I(X_1^n;Y_1^n|X_3^n)+I(X_2^n;Y_2^n)\\
&&\ \ = h(Y_1^n|X_3^n)-h(Y_1^n|X_1^n,X_3^n)\\
&&\ \ \ \ \ \ \ +h(Y_2^n)-h(Y_2^n|X_2^n)\\
&&\ \ = h(X_1^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \ \ \ \ \ +h(X_1^n+X_2^n+Z_2^n)-h(X_1^n+Z_2^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_1}{N_1}\right)+\frac{n}{2}\log\left(\frac{2P+N_2}{P+N_2}\right).
\end{eqnarray*}
\begin{eqnarray*}
&&n(R_2+R_3-\epsilon)\\
&&\ \ \leq I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_2^n;Y_2^n|X_1^n)+I(X_3^n;Y_3^n)\\
&&\ \ = h(Y_2^n|X_1^n)-h(Y_2^n|X_1^n,X_2^n)\\
&&\ \ \ \ \ \ \ +h(Y_3^n)-h(Y_3^n|X_3^n)\\
&&\ \ = h(X_2^n+Z_2^n)-h(Z_2^n)\\
&&\ \ \ \ \ \ \ +h(X_2^n+X_3^n+Z_3^n)-h(X_2^n+Z_3^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_2}{N_2}\right)+\frac{n}{2}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{eqnarray*}
&&n(R_1+R_3-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n|X_2^n)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_3^n;Y_1^n|X_1^n)\\
&&\ \ \leq I(X_1^n,X_3^n;Y_1^n)\\
&&\ \ = h(Y_1^n)-h(Y_1^n|X_1^n,X_3^n)\\
&&\ \ = h(X_1^n+X_3^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{2P+N_1}{N_1}\right)
\end{eqnarray*}
where we used the fact that $I(X_3^n;Y_3^n|X_2^n)=I(X_3^n;X_3^n+Z_3^n)\leq I(X_3^n;X_3^n+Z_1^n)=I(X_3^n;Y_1^n|X_1^n)$.
\end{IEEEproof}
\end{theorem}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Type & Outer bound region $\mathcal{R}_o$ & Relaxed outer bound region $\mathcal{R}_o'$ & Two-dimensional cross-section of $\mathcal{R}_o'$ \\
\hline
$1$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P+N_1}{N_1}\cdot\frac{2P+N_2}{P+N_2}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P+N_2}{N_2}\cdot\frac{2P+N_3}{P+N_3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\cdot\frac{4}{3}\right)\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\textrm{At some } R_2 \in [0,C_2],\\
R_1\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_2, \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
R_3\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-R_2, \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}\\
\end{array} } \right.$
\\
\hline
$2$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P+N_1}{N_1}\cdot\frac{2P+N_2}{P+N_2}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P+N_1}{N_1}\cdot\frac{2P+N_3}{P+N_3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\cdot\frac{4}{3}\right)\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\textrm{At some } R_1 \in [0,C_1],\\
R_2\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1, \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}\\
R_3\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1, \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}\\
\end{array} } \right.$
\\
\hline
$3$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P+N_1}{N_1}\cdot\frac{2P+N_3}{P+N_3}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P+N_2}{N_2}\cdot\frac{2P+N_3}{P+N_3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\cdot\frac{4}{3}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\textrm{At some } R_3 \in [0,C_3],\\
R_1\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_3, \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
R_2\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-R_3, \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}\\
\end{array} } \right.$
\\
\hline
$4$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P+N_1}{N_1}\cdot\frac{2P+N_2}{P+N_2}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{2P+N_1}{N_1}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P+N_2}{N_2}\cdot\frac{2P+N_3}{P+N_3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\cdot\frac{4}{3}\right)\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\textrm{At some } R_1 \in [0,C_1],\\
R_2\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1, \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}\\
R_3\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1, \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
\\
\hline
$5$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{2P+N_1}{N_1}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{2P+N_2}{N_2}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P+N_1}{N_1}\cdot\frac{2P+N_3}{P+N_3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\ \ \ \ \ \ \ R_k\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\cdot\frac{4}{3}\right)\\
R_1+R_2\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
R_2+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
& $\left. {\begin{array}{*{20}l}
\textrm{At some } R_2 \in [0,C_2],\\
R_1\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_2, \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
R_3\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-R_2, \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}\\
R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)\\
\end{array} } \right.$
\\
\hline
\end{tabular}
\caption{Capacity outer bounds}
\label{tab:outerBounds}
\end{center}
\end{table*}
\subsection{Channel Type 5}
In this section, we present an outer bound on the capacity region of Type 5 channel defined by
\[
\left[ {\begin{array}{*{20}c}
Y_1\\
Y_2\\
Y_3\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
1 & 1 & 0\\
0 & 1 & 1\\
1 & 0 & 1\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1\\
Z_2\\
Z_3\\
\end{array} } \right].
\]
This is a cyclic Gaussian interference channel \cite{ZhouYu13}. We first show that channel type 5 is in the mixed interference regime. By normalizing the noise variances, we get the equivalent channel given by
\[
\left[ {\begin{array}{*{20}c}
Y_1'\\
Y_2'\\
Y_3'\\
\end{array} } \right]
=\left[ {\begin{array}{*{20}c}
\frac{1}{\sqrt{N_1}} & \frac{1}{\sqrt{N_1}} & 0\\
0 & \frac{1}{\sqrt{N_2}} & \frac{1}{\sqrt{N_2}}\\
\frac{1}{\sqrt{N_3}} & 0 & \frac{1}{\sqrt{N_3}}\\
\end{array} } \right]
\left[ {\begin{array}{*{20}c}
X_1\\
X_2\\
X_3\\
\end{array} } \right]
+\left[ {\begin{array}{*{20}c}
Z_1'\\
Z_2'\\
Z_3'\\
\end{array} } \right].
\]
We can see that
\begin{eqnarray}
&&\textrm{SNR}_1=\frac{P}{N_1} \geq \textrm{INR}_1=\frac{P}{N_3}\\
&&\textrm{SNR}_2=\frac{P}{N_2} \leq \textrm{INR}_2=\frac{P}{N_1}\\
&&\textrm{SNR}_3=\frac{P}{N_3} \leq \textrm{INR}_3=\frac{P}{N_2}.
\end{eqnarray}
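As a quick sanity check (not part of the proof), the three orderings above can be verified numerically; the noise ordering $N_1\leq N_2\leq N_3$ is the standing assumption of the channel model, and the sample values below are arbitrary.

```python
# Numerical spot-check of the mixed-interference orderings for the cyclic
# Type 5 channel, assuming the noise ordering N1 <= N2 <= N3.
def snr_inr(P, N1, N2, N3):
    snr = [P / N1, P / N2, P / N3]
    inr = [P / N3, P / N1, P / N2]   # INR_1, INR_2, INR_3 for the cyclic channel
    return snr, inr

snr, inr = snr_inr(P=10.0, N1=1.0, N2=2.0, N3=4.0)
# SNR_1 >= INR_1, SNR_2 <= INR_2, SNR_3 <= INR_3
checks = (snr[0] >= inr[0], snr[1] <= inr[1], snr[2] <= inr[2])
```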
We state the outer bound in the following theorem.
\begin{theorem}
The capacity region of Type 5 channel is contained in the following outer bound region:
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_k\leq C_k,\ k=1,2,3\\
&&R_1+R_2\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)\\
&&R_2+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_2}\right)\\
&&R_1+R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_3}{P+N_3}\right).
\end{eqnarray*}
\begin{IEEEproof}
\begin{eqnarray*}
&&n(R_1+R_2-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n|X_3^n)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_2^n;Y_1^n|X_1^n)\\
&&\ \ = I(X_1^n,X_2^n;Y_1^n)\\
&&\ \ = h(Y_1^n)-h(Y_1^n|X_1^n,X_2^n)\\
&&\ \ = h(X_1^n+X_2^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{2P+N_1}{N_1}\right)
\end{eqnarray*}
where we used the fact that $I(X_2^n;Y_2^n|X_3^n)=I(X_2^n;X_2^n+Z_2^n)\leq I(X_2^n;X_2^n+Z_1^n)=I(X_2^n;Y_1^n|X_1^n)$.
\begin{eqnarray*}
&&n(R_2+R_3-\epsilon)\\
&&\ \ \leq I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n|X_1^n)\\
&&\ \ \leq I(X_2^n;Y_2^n)+I(X_3^n;Y_2^n|X_2^n)\\
&&\ \ = I(X_2^n,X_3^n;Y_2^n)\\
&&\ \ = h(Y_2^n)-h(Y_2^n|X_2^n,X_3^n)\\
&&\ \ = h(X_2^n+X_3^n+Z_2^n)-h(Z_2^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{2P+N_2}{N_2}\right)
\end{eqnarray*}
where we used the fact that $I(X_3^n;Y_3^n|X_1^n)=I(X_3^n;X_3^n+Z_3^n)\leq I(X_3^n;X_3^n+Z_2^n)=I(X_3^n;Y_2^n|X_2^n)$.
\begin{eqnarray*}
&&n(R_1+R_3-\epsilon)\\
&&\ \ \leq I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)\\
&&\ \ \leq I(X_1^n;Y_1^n|X_2^n)+I(X_3^n;Y_3^n)\\
&&\ \ = h(Y_1^n|X_2^n)-h(Y_1^n|X_1^n,X_2^n)\\
&&\ \ \ \ \ \ \ +h(Y_3^n)-h(Y_3^n|X_3^n)\\
&&\ \ = h(X_1^n+Z_1^n)-h(Z_1^n)\\
&&\ \ \ \ \ \ \ +h(X_1^n+X_3^n+Z_3^n)-h(X_1^n+Z_3^n)\\
&&\ \ \leq \frac{n}{2}\log\left(\frac{P+N_1}{N_1}\right)+\frac{n}{2}\log\left(\frac{2P+N_3}{P+N_3}\right)
\end{eqnarray*}
\end{IEEEproof}
\end{theorem}
\subsection{Relaxed Outer Bounds}
For ease of gap calculation, we also derive relaxed outer bounds. First, we can see that for $N_j\leq N_k$,
\begin{eqnarray*}
\frac{{1}}{{2}}\log\left(1+\frac{P}{N_j}\right)+\frac{{1}}{{2}}\log\left(\frac{2P+N_k}{P+N_k}\right)\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_j}\right).
\end{eqnarray*}
The five outer bound theorems in this section, together with this inequality, yield the sum-rate bound expressions in Theorem 1.
Next, we can assume that $P\geq 3N_j$ for $j=1,2,3$. Otherwise, showing a one-bit gap to capacity is trivial, as the capacity region is contained in the unit hypercube, i.e., $R_j\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{N_j}\right)< 1$. For $P\geq 3N_j$,
\begin{eqnarray*}
&&\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_j}\right)=\frac{{1}}{{2}}\log\left(\frac{P}{N_j}\right)+\frac{{1}}{{2}}\log\left(\frac{N_j}{P}+2\right)\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_j}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right)\\
&&\frac{{1}}{{2}}\log\left(1+\frac{P}{N_j}\right)\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_j}\right)+\frac{{1}}{{2}}\log\left(\frac{4}{3}\right).
\end{eqnarray*}
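The relaxation steps above can be spot-checked numerically; the following is only a sketch over sample parameter values, not a substitute for the algebra.

```python
from math import log2

# Sanity check of the relaxations used in the gap analysis:
# (1) for N_j <= N_k:
#     1/2 log(1 + P/N_j) + 1/2 log((2P+N_k)/(P+N_k)) <= 1/2 log(1 + 2P/N_j)
# (2) for P >= 3 N_j:
#     1/2 log(1 + 2P/N_j) <= 1/2 log(P/N_j * 7/3)
#     1/2 log(1 +  P/N_j) <= 1/2 log(P/N_j * 4/3)
def relaxations_hold(P, Nj, Nk):
    lhs1 = 0.5 * log2(1 + P / Nj) + 0.5 * log2((2 * P + Nk) / (P + Nk))
    ok1 = lhs1 <= 0.5 * log2(1 + 2 * P / Nj) + 1e-12
    ok2 = 0.5 * log2(1 + 2 * P / Nj) <= 0.5 * log2((P / Nj) * 7 / 3) + 1e-12
    ok3 = 0.5 * log2(1 + P / Nj) <= 0.5 * log2((P / Nj) * 4 / 3) + 1e-12
    return ok1 and ok2 and ok3

# P = 3*N_j is the boundary case where (2) holds with equality.
results = [relaxations_hold(P, 1.0, 2.0) for P in (3.0, 10.0, 100.0, 1e6)]
```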
The resulting relaxed outer bounds $\mathcal{R}_o'$ are summarized in Table \ref{tab:outerBounds}.
\section{Inner Bound: Channel Type 1}
\begin{theorem}
Given $\alpha=(\alpha_0,\alpha_2)\in [0,1]^2$, the rate region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&&R_1 \leq \frac{{1}}{{2}}\log^+\left(\frac{1-\alpha_0}{2-\alpha_0}+\frac{(1-\alpha_0) P}{(\alpha_0+\alpha_2) P+N_2}\right)\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_0 P}{N_1}\right)\\
&&R_2 \leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right)\\
&&R_3 \leq \frac{{1}}{{2}}\log^+\left(\frac{1}{2-\alpha_0}+\frac{P}{(\alpha_0+\alpha_2) P+N_3}\right)
\end{eqnarray*}
where $\log^+(\cdot)=\max\{0,\log(\cdot)\}$. Then \[\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha}\mathcal{R}_\alpha\right)\] is achievable, where $\textsc{conv}(\cdot)$ is the convex hull operator.
\end{theorem}
\subsection{Preliminaries: Lattice Coding}
A lattice $\Lambda$ is a discrete subgroup of $\mathbb{R}^n$, $\Lambda =\{\mathbf{t}=\mathbf{G}\mathbf{u}: \mathbf{u}\in\mathbb{Z}^n\}$, where $\mathbf{G}\in\mathbb{R}^{n\times n}$ is a real generator matrix. Quantization with respect to $\Lambda$ is $Q_{\Lambda}(\mathbf{x})=\arg \min_{\lambda \in \Lambda} \|\mathbf{x}-\lambda\|$. The modulo operation with respect to $\Lambda$ is $M_{\Lambda}(\mathbf{x})=[\mathbf{x}]\textrm{ mod }\Lambda=\mathbf{x}-Q_{\Lambda}(\mathbf{x})$. For convenience, we use the notations $M_{\Lambda}(\cdot)$ and $[\cdot]\textrm{ mod }\Lambda$ interchangeably. The fundamental Voronoi region of $\Lambda$ is $\mathcal{V}(\Lambda)=\{\mathbf{x}:Q_{\Lambda}(\mathbf{x})=\mathbf{0}\}$, and its volume is $V(\Lambda)=\int_{\mathcal{V}(\Lambda)} d\mathbf{x}$. The normalized second moment of $\Lambda$ is $G(\Lambda)=\frac{\sigma^2(\Lambda)}{V(\Lambda)^{2/n}}$ where $\sigma^2(\Lambda)=\frac{1}{nV(\Lambda)}\int_{\mathcal{V}(\Lambda)} \|\mathbf{x}\|^2 d\mathbf{x}$. Lattices $\Lambda_1$, $\Lambda_2$ and $\Lambda$ are said to be nested if $\Lambda\subseteq\Lambda_2\subseteq\Lambda_1$. For nested lattices $\Lambda_2\subset\Lambda_1$,
$\Lambda_1/\Lambda_2=\Lambda_1\cap\mathcal{V}(\Lambda_2)$.
We briefly review the lattice decoding procedure in \cite{ErezZamir04}. We use nested lattices $\Lambda\subseteq \Lambda_t$ with $\sigma^2(\Lambda)=S$, $G(\Lambda)=\frac{1}{2\pi e}$, and $V(\Lambda)=(2\pi e S)^{\frac{n}{2}}$. The transmitter sends $\mathbf{x}=[\mathbf{t}+\mathbf{d}]\textrm{ mod }\Lambda$ over the point-to-point Gaussian channel $\mathbf{y}=\mathbf{x}+\mathbf{z}$ where the codeword $\mathbf{t}\in \Lambda_t\cap \mathcal{V}(\Lambda)$, the dither signal $\mathbf{d}\sim\textrm{Unif}(\mathcal{V}(\Lambda))$, the transmit power $\frac{1}{n}\|\mathbf{x}\|^2=S$ and the noise $\mathbf{z}\sim\mathcal{N}(0,N\mathbf{I})$. The code rate is given by $R=\frac{1}{n}\log\left(\frac{V(\Lambda)}{V(\Lambda_t)}\right)$.
After linear scaling, dither removal, and mod-$\Lambda$ operation, we get
\begin{eqnarray}
\mathbf{y}'=[\beta\mathbf{y}-\mathbf{d}]\textrm{ mod }\Lambda = \left[\mathbf{t}+\mathbf{z}_e\right]\textrm{ mod }\Lambda
\end{eqnarray}
where the effective noise is $\mathbf{z}_e=(\beta-1)\mathbf{x}+\beta\mathbf{z}_1$
and its variance
$\sigma_e^2=\frac{1}{n}\mathbb{E}[\left\|\mathbf{z}_e\right\|^2]=(\beta-1)^2 S+\beta^2 N$.
With the MMSE scaling factor $\beta=\frac{S}{S+N}$ plugged in, we get $\sigma_e^2=\beta N=\frac{SN}{S+N}$. The capacity of the mod-$\Lambda$ channel \cite{ErezZamir04} between $\mathbf{t}$ and $\mathbf{y}'$ is
\begin{eqnarray*}
\frac{1}{n} I\left(\mathbf{t};\mathbf{y}'\right)
&=& \frac{1}{n} h\left(\mathbf{y}'\right)-\frac{1}{n} h\left(\mathbf{y}'|\mathbf{t}\right)\\
&=& \frac{1}{n} h\left(\mathbf{y}'\right)-\frac{1}{n} h\left(\mathbf{z}_e\textrm{ mod }\Lambda\right)\\
&\geq &\frac{1}{n} h\left(\mathbf{y}'\right)-\frac{1}{n} h\left(\mathbf{z}_e\right)\\
&=& \frac{1}{n} \log V(\Lambda)-\frac{1}{n} h\left(\mathbf{z}_e\right)\\
&\geq& \frac{1}{2}\log \left(\frac{S}{\beta N}\right)\\
&=& \frac{1}{2}\log \left(1+\frac{S}{N}\right)\\
&=& C
\end{eqnarray*}
where $I(\cdot)$ and $h(\cdot)$ are mutual information and differential entropy, respectively. For reliable decoding of $\mathbf{t}$, we have the code rate constraint $R\leq C$.
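As a quick numerical illustration of the MMSE step (a sketch with sample values for $S$ and $N$), one can check that $\beta=\frac{S}{S+N}$ minimizes the effective noise variance $(\beta-1)^2S+\beta^2N$ and that the resulting rate $\frac{1}{2}\log\left(\frac{S}{\beta N}\right)$ coincides with $\frac{1}{2}\log\left(1+\frac{S}{N}\right)$:

```python
from math import log2

# MMSE scaling sanity check: beta = S/(S+N) minimizes the effective noise
# variance, and with that beta the mod-Lambda rate equals the AWGN capacity.
def eff_var(beta, S, N):
    return (beta - 1) ** 2 * S + beta ** 2 * N

S, N = 4.0, 1.0
beta = S / (S + N)
grid = [i / 1000 for i in range(1001)]          # coarse search over beta in [0,1]
best = min(eff_var(b, S, N) for b in grid)
mmse_var = eff_var(beta, S, N)                  # = S*N/(S+N)
rate = 0.5 * log2(S / (beta * N))               # mod-Lambda rate
cap = 0.5 * log2(1 + S / N)                     # AWGN capacity
```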
With the choice of lattice parameters $\sigma^2(\Lambda_t)\geq \beta N$ and $G(\Lambda_t)=\frac{1}{2\pi e}$, we have $V(\Lambda_t)^{\frac{2}{n}}=\frac{\sigma^2(\Lambda_t)}{G(\Lambda_t)}\geq 2\pi e \beta N$, so
\begin{eqnarray*}
R &=& \frac{1}{n}\log\left(\frac{V(\Lambda)}{V(\Lambda_t)}\right)\\
&\leq & \frac{1}{n}\log\left(\frac{(2\pi e S)^{\frac{n}{2}}}{(2\pi e \beta N)^{\frac{n}{2}}}\right)\\
&=& \frac{1}{2}\log\left(\frac{S}{\beta N}\right).
\end{eqnarray*}
Thus, the constraint $R\leq C$ can be satisfied. By \emph{lattice decoding} \cite{ErezZamir04}, we can recover $\mathbf{t}$, i.e.,
\begin{equation}
Q_{\Lambda_t}(\mathbf{y}')=\mathbf{t},
\end{equation}
with probability $1-P_e$ where
\begin{equation}
P_e=\textrm{Pr}[Q_{\Lambda_t}\left(\mathbf{y}'\right)\neq \mathbf{t}]
\end{equation}
is the probability of decoding error.
If we choose $\Lambda_t$ to be Poltyrev-good and $\Lambda$ to be Rogers-good \cite{Zamir14}, then $P_e\rightarrow 0$ as $n\rightarrow \infty$.
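The dither/mod-$\Lambda$ transceiver chain can be illustrated with a toy one-dimensional scalar lattice pair. This is only a sketch: it uses $\beta=1$ (no MMSE step), hand-picked values, and the scalar lattices $\Lambda=c\mathbb{Z}$ (shaping) and $\Lambda_t=\mathbb{Z}$ (coding), not the $n$-dimensional construction used in the proof.

```python
# One-dimensional sketch of the mod-Lambda transceiver with scalar lattices
# Lambda = c*Z (coarse, shaping) and Lambda_t = Z (fine, coding), Lambda ⊂ Lambda_t.
def quantize(x, c):          # Q_Lambda for Lambda = c*Z
    return c * round(x / c)

def mod_lattice(x, c):       # M_Lambda(x) = x - Q_Lambda(x)
    return x - quantize(x, c)

c = 8.0                      # shaping lattice c*Z
t = 2.0                      # codeword t in Z ∩ V(c*Z)
d = 1.3                      # dither in V(c*Z)
x = mod_lattice(t + d, c)    # transmit signal x = [t + d] mod Lambda
z = 0.05                     # small channel noise
y = x + z
y_prime = mod_lattice(y - d, c)    # = [t + z] mod Lambda (beta = 1 here)
t_hat = quantize(y_prime, 1.0)     # lattice decoding Q_{Lambda_t}(y')
```

For noise small relative to the Voronoi cell of $\Lambda_t$, decoding recovers the codeword exactly.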
\subsection{Achievable Scheme}
We present an achievable scheme for the proof of \emph{Theorem 8}. The achievable scheme is based on rate-splitting, lattice coding, and interference alignment. Message $M_1\in\{1,2,\ldots,2^{nR_1}\}$ is split into two parts: $M_{11}\in\{1,2,\ldots,2^{nR_{11}}\}$ and $M_{10}\in\{1,2,\ldots,2^{nR_{10}}\}$, so $R_1=R_{11}+R_{10}$. Transmitter 1 sends $\mathbf{x}_1=\mathbf{x}_{11}+\mathbf{x}_{10}$ where $\mathbf{x}_{11}$ and $\mathbf{x}_{10}$ are coded signals of $M_{11}$ and $M_{10}$, respectively. Transmitters 2 and 3 send $\mathbf{x}_2$ and $\mathbf{x}_3$, coded signals of $M_2\in\{1,2,\ldots,2^{nR_2}\}$ and $M_3\in\{1,2,\ldots,2^{nR_3}\}$. In particular, $\mathbf{x}_{11}$ and $\mathbf{x}_3$ are lattice-coded signals.
We use the lattice construction of \cite{NamChungLee10,NamChungLee11} with the lattice partition chain $\Lambda_c/\Lambda_1/\Lambda_3$, so $\Lambda_3\subset \Lambda_1\subset \Lambda_c$ are nested lattices. $\Lambda_c$ is the coding lattice for both $\mathbf{x}_{11}$ and $\mathbf{x}_3$. $\Lambda_1$ and $\Lambda_3$ are shaping lattices for $\mathbf{x}_{11}$ and $\mathbf{x}_3$, respectively. The lattice signals are formed by
\begin{eqnarray}
&&\mathbf{x}_{11}=[\mathbf{t}_{11}+\mathbf{d}_{11}]\textrm{ mod }\Lambda_1\\
&&\ \mathbf{x}_3=[\mathbf{t}_3+\mathbf{d}_3]\textrm{ mod }\Lambda_3
\end{eqnarray}
where $\mathbf{t}_{11}\in \Lambda_c\cap\mathcal{V}(\Lambda_1)$ and $\mathbf{t}_3\in \Lambda_c\cap\mathcal{V}(\Lambda_3)$ are lattice codewords. The dither signals $\mathbf{d}_{11}$ and $\mathbf{d}_3$ are uniformly distributed over $\mathcal{V}(\Lambda_1)$ and $\mathcal{V}(\Lambda_3)$, respectively.
To satisfy the power constraints, we choose $\mathbb{E}[\|\mathbf{x}_{11}\|^2]=n\sigma^2(\Lambda_1)=(1-{\alpha}_0) nP$, $\mathbb{E}[\|\mathbf{x}_{10}\|^2]={\alpha}_0 nP$,
$\mathbb{E}[\|\mathbf{x}_2\|^2]={\alpha}_2 nP$, and $\mathbb{E}[\|\mathbf{x}_3\|^2]=n\sigma^2(\Lambda_3)=nP$. Throughout, we write $\bar{\alpha}_0=1-\alpha_0$.
With the choice of transmit signals, the received signals are given by
\begin{eqnarray*}
&&\mathbf{y}_1=\mathbf{x}_{11}+\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_1\\
&&\mathbf{y}_2=[\mathbf{x}_{11}+\mathbf{x}_3]+\mathbf{x}_2+\mathbf{z}_2'\\
&&\mathbf{y}_3=\mathbf{x}_3+\mathbf{z}_3'
\end{eqnarray*}
where $\mathbf{x}_f=[\mathbf{x}_{11}+\mathbf{x}_3]$ is the sum of the interfering signals, and $\mathbf{z}_2'=\mathbf{x}_{10}+\mathbf{z}_2$ and $\mathbf{z}_3'=\mathbf{x}_2+\mathbf{z}_3$ are the effective noises.
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale} (a).
At the receivers, successive decoding is performed in the following order: $\mathbf{x}_{11}\rightarrow \mathbf{x}_2\rightarrow \mathbf{x}_{10}$ at receiver 1, $\mathbf{x}_f\rightarrow \mathbf{x}_2$ at receiver 2, and receiver 3 only decodes $\mathbf{x}_3$.
Note that the sum of the aligned lattice codewords satisfies $\mathbf{t}_{11}+\mathbf{t}_3\in \Lambda_c$, and $\mathbf{t}_f=[\mathbf{t}_{11}+\mathbf{t}_3]\textrm{ mod }\Lambda_1 \in\Lambda_c\cap\mathcal{V}(\Lambda_1)$. We state the relationship between $\mathbf{x}_f$ and $\mathbf{t}_f$ in the following lemmas.
\begin{lemma} The following holds.
\begin{equation*}
[\mathbf{x}_f-\mathbf{d}_f]\textrm{ mod }\Lambda_1=\mathbf{t}_f
\end{equation*}
where $\mathbf{d}_f=\mathbf{d}_{11}+\mathbf{d}_3$.
\end{lemma}
\begin{IEEEproof}
\begin{eqnarray*}
&&[\mathbf{x}_f-\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{t}_{11}+\mathbf{d}_{11})+M_{\Lambda_3}(\mathbf{t}_3+\mathbf{d}_3)-\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{t}_{11}+\mathbf{d}_{11})+M_{\Lambda_1}(\mathbf{t}_3+\mathbf{d}_3)-\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{t}_{11}+\mathbf{d}_{11}+\mathbf{t}_3+\mathbf{d}_3-\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{t}_{11}+\mathbf{t}_3]\textrm{ mod }\Lambda_1\\
&&=\mathbf{t}_f
\end{eqnarray*}
The second and third equalities follow from the identity in the following lemma and the distributive law of the modulo operation.
\end{IEEEproof}
\begin{figure}[tp]
\begin{center}
\mbox{
\subfigure[Channel type 1]{\includegraphics[width=0.45\textwidth]{type1signalScale-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Channel type 2]{\includegraphics[width=0.45\textwidth]{type2signalScale-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Channel type 3]{\includegraphics[width=0.45\textwidth]{type3signalScale-eps-converted-to.pdf}}
}
\caption{Signal scale diagram.}
\label{fig:signalScale}
\end{center}
\end{figure}
\begin{lemma} For any nested lattices $\Lambda_3\subset\Lambda_1$ and any $\mathbf{x}\in\mathbb{R}^n$, it holds that
\begin{equation*}
[M_{\Lambda_3}(\mathbf{x})]\textrm{ mod }\Lambda_1=[\mathbf{x}]\textrm{ mod }\Lambda_1.
\end{equation*}
\end{lemma}
\begin{IEEEproof}
\begin{eqnarray*}
&&[M_{\Lambda_3}(\mathbf{x})]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{x}-\lambda_3]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{x})-M_{\Lambda_1}(\lambda_3)]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{x})-\lambda_3 +Q_{\Lambda_1}(\lambda_3)]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{x})]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{x}]\textrm{ mod }\Lambda_1
\end{eqnarray*}
where $\lambda_3=Q_{\Lambda_3}(\mathbf{x})\in\Lambda_1$, thus $Q_{\Lambda_1}(\lambda_3)=\lambda_3$.
\end{IEEEproof}
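The identity of the lemma is easy to spot-check on scalar nested lattices, e.g. $\Lambda_3=8\mathbb{Z}\subset\Lambda_1=2\mathbb{Z}$, a toy instance chosen purely for illustration:

```python
# Numerical check of [M_{Lambda_3}(x)] mod Lambda_1 = [x] mod Lambda_1
# for the nested scalar lattices Lambda_3 = 8Z ⊂ Lambda_1 = 2Z.
def mod_lat(x, c):                       # [x] mod c*Z
    return x - c * round(x / c)

def identity_holds(x, c1=2.0, c3=8.0):
    return abs(mod_lat(mod_lat(x, c3), c1) - mod_lat(x, c1)) < 1e-9

samples = [0.3, 5.7, -13.2, 101.1, -0.49]
all_hold = all(identity_holds(x) for x in samples)
```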
\begin{lemma} The following holds.
\begin{equation*}
[\mathbf{t}_f+\mathbf{d}_f]\textrm{ mod }\Lambda_1=[\mathbf{x}_f]\textrm{ mod }\Lambda_1.
\end{equation*}
\end{lemma}
\begin{IEEEproof}
\begin{eqnarray*}
&&[\mathbf{t}_f+\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{t}_{11}+\mathbf{t}_3)+\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{t}_{11}+\mathbf{t}_3+\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{t}_{11}+\mathbf{d}_{11})+M_{\Lambda_1}(\mathbf{t}_3+\mathbf{d}_3)]\textrm{ mod }\Lambda_1\\
&&=[M_{\Lambda_1}(\mathbf{t}_{11}+\mathbf{d}_{11})+M_{\Lambda_3}(\mathbf{t}_3+\mathbf{d}_3)]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{x}_{11}+\mathbf{x}_3]\textrm{ mod }\Lambda_1\\
&&=[\mathbf{x}_f]\textrm{ mod }\Lambda_1
\end{eqnarray*}
\end{IEEEproof}
Receiver 2 does not need to recover the individual codewords $\mathbf{t}_{11}$ and $\mathbf{t}_3$; it only needs the real sum $\mathbf{x}_f$ in order to remove the interference from $\mathbf{y}_2$. Since $\mathbf{x}_f=M_{\Lambda_1}(\mathbf{x}_f)+Q_{\Lambda_1}(\mathbf{x}_f)$, we first recover the modulo part and then the quantized part to cancel out $\mathbf{x}_f$. This idea appeared in \cite{GastparNazer11} as an achievable scheme for the many-to-one interference channel.
The mod-$\Lambda_1$ channel between $\mathbf{t}_f$ and $\mathbf{y}_2'$ is given by
\begin{eqnarray}
&&\mathbf{y}_2'=[\beta_2\mathbf{y}_2-\mathbf{d}_f]\textrm{ mod }\Lambda_1\\
&&\ \ \ \ = [\mathbf{x}_f-\mathbf{d}_f +\mathbf{z}_{e2}]\textrm{ mod }\Lambda_1\\
&&\ \ \ \ = [\mathbf{t}_f +\mathbf{z}_{e2}]\textrm{ mod }\Lambda_1
\end{eqnarray}
where the effective noise $\mathbf{z}_{e2}=(\beta_2-1)\mathbf{x}_f+\beta_2(\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_2)$. Note that $\mathbb{E}[\|\mathbf{x}_f\|^2]=(\bar{\alpha}_0+1)nP$, and the effective noise variance $\sigma_{e2}^2=\frac{1}{n}\mathbb{E}[\|\mathbf{z}_{e2}\|^2]=(\beta_2-1)^2(\bar{\alpha}_0+1)P+\beta_2^2 N_{e2}$ where $N_{e2}=(\alpha_0+\alpha_2) P+N_2$. With the MMSE scaling factor $\beta_2=\frac{(\bar{\alpha}_0+1)P}{(\bar{\alpha}_0+1)P+N_{e2}}$ plugged in, we get $\sigma_{e2}^2=\beta_2 N_{e2}=\frac{(\bar{\alpha}_0+1)P N_{e2}}{(\bar{\alpha}_0+1)P+N_{e2}}$. The capacity of the mod-$\Lambda_1$ channel between $\mathbf{t}_f$ and $\mathbf{y}_2'$ is
\begin{eqnarray*}
&&\frac{1}{n} I\left(\mathbf{t}_f;\mathbf{y}_2'\right)\\
&&\geq \frac{1}{n}\log\left(\frac{V(\Lambda_1)}{2^{h(\mathbf{z}_{e2})}}\right)\\
&&= \frac{1}{2}\log \left(\frac{\bar{\alpha}_0 P}{\beta_2 N_{e2}}\right)\\
&&= \frac{1}{2}\log \left(\frac{\bar{\alpha}_0(\bar{\alpha}_0+1)P+\bar{\alpha}_0 N_{e2}}{(\bar{\alpha}_0+1)N_{e2}}\right)\\
&&= \frac{1}{2}\log \left(\frac{\bar{\alpha}_0}{\bar{\alpha}_0+1}+\frac{\bar{\alpha}_0 P}{N_{e2}}\right)\\
&&= \frac{1}{2}\log \left(\frac{\bar{\alpha}_0}{\bar{\alpha}_0+1}+\frac{\bar{\alpha}_0 P}{(\alpha_0+\alpha_2) P+N_2}\right)\\
&&= C_f
\end{eqnarray*}
For reliable decoding of $\mathbf{t}_f$ at receiver 2, we have the code rate constraint $R_{11}=\frac{1}{n}\log\left(\frac{V(\Lambda_1)}{V(\Lambda_c)}\right)\leq C_f$. This also implies that $R_3=\frac{1}{n}\log\left(\frac{V(\Lambda_3)}{V(\Lambda_c)}\right)\leq C_f+\frac{1}{n}\log\left(\frac{V(\Lambda_3)}{V(\Lambda_1)}\right)=\frac{1}{2}\log\left(\frac{P}{\beta_2 N_{e2}}\right)=\frac{1}{2}\log \left(\frac{1}{\bar{\alpha}_0+1}+\frac{P}{(\alpha_0+\alpha_2) P+N_2}\right)$.
By lattice decoding, we can recover the modulo sum of interference codewords $\mathbf{t}_f$ from $\mathbf{y}_2'$. Then, we can recover the real sum $\mathbf{x}_f$ in the following way.
\begin{itemize}
\item Recover $M_{\Lambda_1}(\mathbf{x}_f)$ by calculating $[\mathbf{t}_f+\mathbf{d}_f]\textrm{ mod }\Lambda_1$ (Lemma 3).
\item Subtract it from the received signal,
\begin{equation}
\mathbf{y}_2-M_{\Lambda_1}(\mathbf{x}_f)=Q_{\Lambda_1}(\mathbf{x}_f) +\mathbf{z}_2''
\end{equation}
where $\mathbf{z}_2''=\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_2$.
\item Quantize it to recover $Q_{\Lambda_1}(\mathbf{x}_f)$,
\begin{equation}
Q_{\Lambda_1}\left(Q_{\Lambda_1}(\mathbf{x}_f) +\mathbf{z}_2''\right)=Q_{\Lambda_1}(\mathbf{x}_f)
\end{equation}
with probability $1-P_e$ where
\begin{equation}
P_e=\textrm{Pr}[Q_{\Lambda_1}\left(Q_{\Lambda_1}(\mathbf{x}_f) +\mathbf{z}_2''\right)\neq Q_{\Lambda_1}(\mathbf{x}_f)]
\end{equation}
is the probability of decoding error.
If we choose $\Lambda_1$ to be simultaneously Rogers-good and Poltyrev-good \cite{Zamir14} with $V(\Lambda_1)\geq V(\Lambda_c)$, then $P_e\rightarrow 0$ as $n\rightarrow \infty$.
\item Recover $\mathbf{x}_f$ by adding two vectors,
\begin{equation}
M_{\Lambda_1}(\mathbf{x}_f)+Q_{\Lambda_1}(\mathbf{x}_f)=\mathbf{x}_f.
\end{equation}
\end{itemize}
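The four recovery steps above can be sketched on scalar stand-ins. The values below are hand-picked and the lattices are one-dimensional, so this is only an illustration of the modulo-part/quantized-part decomposition, not the scheme itself:

```python
# Toy sketch of recovering the real sum x_f = M_{L1}(x_f) + Q_{L1}(x_f)
# at receiver 2, with the scalar shaping lattice Lambda_1 = 4Z.
def quant(x, c):
    return c * round(x / c)

def mod_lat(x, c):
    return x - quant(x, c)

c1 = 4.0                               # shaping lattice Lambda_1 = 4Z
x11, x3 = 1.2, 3.5                     # stand-ins for the two lattice signals
x_f = x11 + x3                         # real sum of interference
x2, z2 = 0.6, 0.05                     # other signal plus noise at receiver 2
y2 = x_f + x2 + z2
mod_part = mod_lat(x_f, c1)            # step 1: via [t_f + d_f] mod Lambda_1
residual = y2 - mod_part               # step 2: Q(x_f) + effective noise
quant_part = quant(residual, c1)       # step 3: = Q_{Lambda_1}(x_f) if noise small
x_f_hat = mod_part + quant_part        # step 4: recovered real sum
```

Step 3 succeeds whenever the effective noise does not push the residual across a Voronoi boundary of $\Lambda_1$, which is the event controlled by the goodness condition above.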
We now proceed to decode $\mathbf{x}_2$ from $\mathbf{y}_2-\mathbf{x}_f=\mathbf{x}_2+\mathbf{z}_2'$. Since $\mathbf{x}_2$ is a codeword from an i.i.d. random code for the point-to-point channel, we can achieve rates up to
\begin{equation}
R_2\leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right).
\end{equation}
At receiver 1, we first decode $\mathbf{x}_{11}$ while treating the other signals $\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_1$ as noise. The effective noise in the mod-$\Lambda_1$ channel is $\mathbf{z}_{e1}=(\beta_1-1)\mathbf{x}_{11}+\beta_1(\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_1)$ with variance $\sigma_{e1}^2=\frac{1}{n}\mathbb{E}[\|\mathbf{z}_{e1}\|^2]=(\beta_1-1)^2 \bar{\alpha}_0 P+\beta_1^2 N_{e1}$ where $N_{e1}=(\alpha_0+\alpha_2)P+N_1$. With the MMSE scaling factor $\beta_1=\frac{\bar{\alpha}_0 P}{\bar{\alpha}_0 P+N_{e1}}$, we get $\sigma_{e1}^2=\beta_1 N_{e1}$, and for reliable decoding the rate $R_{11}$ must satisfy
\[R_{11}\leq \frac{{1}}{{2}}\log\left(\frac{\sigma^2(\Lambda_1)}{\sigma_{e1}^2}\right)=\frac{{1}}{{2}}\log\left(1+\frac{\bar{\alpha}_0 P}{(\alpha_0+\alpha_2) P+N_1}\right).\]
Similarly, we have the other rate constraints at receiver 1:
\begin{eqnarray}
&& R_2\leq \frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_2 P}{{\alpha}_0 P +N_1}\right)\\
&& R_{10}\leq \frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_0 P}{N_1}\right).
\end{eqnarray}
At receiver 3, the signal $\mathbf{x}_3$ is decoded with the effective noise $\mathbf{x}_2+\mathbf{z}_3$. For reliable decoding, $R_3$ must satisfy
\begin{equation}
R_3\leq \frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_2 P+N_3}\right).
\end{equation}
In summary,
\begin{itemize}
\item $\mathbf{x}_{11}$ decoded at receivers 1 and 2
\begin{eqnarray*}
&& R_{11}\leq T_{11}'=\frac{{1}}{{2}}\log\left(1+\frac{(1-\alpha_0) P}{(\alpha_0+\alpha_2) P+N_1}\right)\\
&& R_{11}\leq T_{11}''=\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_0) P}{(\alpha_0+\alpha_2) P+N_2}\right)
\end{eqnarray*}
where $c_{11}=\frac{(1-\alpha_0)P}{(1-\alpha_0)P+P}=\frac{1-\alpha_0}{2-\alpha_0}$.
\item $\mathbf{x}_{10}$ decoded at receiver 1
\begin{eqnarray}
R_{10}\leq T_{10}=\frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_0 P}{N_1}\right)
\end{eqnarray}
\item $\mathbf{x}_2$ decoded at receivers 1 and 2
\begin{eqnarray}
&& R_2\leq T_2' =\frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_2 P}{{\alpha}_0 P +N_1}\right)\\
&& R_2\leq T_2'' =\frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_2 P}{{\alpha}_0 P +N_2}\right)
\end{eqnarray}
\item $\mathbf{x}_3$ decoded at receivers 2 and 3
\begin{eqnarray}
&& R_3\leq T_3' =\frac{{1}}{{2}}\log\left(c_3+\frac{P}{(\alpha_0+\alpha_2) P+N_2}\right)\nonumber\\
&& R_3\leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_2 P+N_3}\right)
\end{eqnarray}
where $c_3=\frac{P}{(1-\alpha_0)P+P}=\frac{1}{2-\alpha_0}$.
\end{itemize}
Note that $0\leq c_{11}\leq \frac{1}{2}$, $\frac{1}{2}\leq c_3\leq 1$, and $c_{11}+c_3=1$. Putting these together, we can see that the following rate region is achievable.
\begin{eqnarray*}
&& R_1 \leq T_1=\min\{T_{11}',T_{11}''\}+T_{10}=T_{11}''+T_{10}\\
&& R_2 \leq T_2=\min\{T_2',T_2''\}=T_2''\\
&& R_3 \leq T_3=\min\{T_3',T_3''\}
\end{eqnarray*}
where
\begin{eqnarray}
&& T_1=\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_0) P}{(\alpha_0+\alpha_2) P+N_2}\right)\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_0 P}{N_1}\right)\\
&& T_2=\frac{{1}}{{2}}\log\left(1+\frac{{\alpha}_2 P}{{\alpha}_0 P +N_2}\right)\\
&& T_3 \geq\frac{{1}}{{2}}\log\left(c_3+\frac{P}{(\alpha_0+\alpha_2) P+N_3}\right).
\end{eqnarray}
Thus, \emph{Theorem 8} is proved.
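For concreteness, the region of Theorem 8 can be evaluated numerically at a sample operating point; the values of $P$, $N_k$, $\alpha_0$, $\alpha_2$ below are arbitrary illustrations, not optimized choices.

```python
from math import log2

# Evaluate the Theorem 8 inner bound (T1, T2, T3) at a sample operating point.
def logp(x):                  # log^+ clips at zero
    return max(0.0, log2(x))

def region(P, N1, N2, N3, a0, a2):
    c11 = (1 - a0) / (2 - a0)
    c3 = 1 / (2 - a0)
    T1 = 0.5 * logp(c11 + (1 - a0) * P / ((a0 + a2) * P + N2)) \
         + 0.5 * log2(1 + a0 * P / N1)
    T2 = 0.5 * log2(1 + a2 * P / (a0 * P + N2))
    T3 = 0.5 * logp(c3 + P / ((a0 + a2) * P + N3))
    return T1, T2, T3

T1, T2, T3 = region(P=100.0, N1=1.0, N2=4.0, N3=16.0, a0=0.04, a2=0.2)
```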
\subsection{The Gap}
We choose the parameter $\alpha_0=\frac{N_2}{P}$, which is suboptimal but good enough to achieve a constant gap. This choice of parameter, inspired by \cite{EtkinTseWang08}, makes efficient use of the signal scale difference between $N_1$ and $N_2$ at receiver 1, while keeping the interference caused by $\mathbf{x}_{10}$ at the noise level $N_2$ at receiver 2. By substitution, we get
\begin{eqnarray}
&&T_1 = \frac{{1}}{{2}}\log\left(c_{11}+\frac{P-N_2}{\alpha_2 P+2N_2}\right)\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{N_2}{N_1}\right)\\
&&T_2 = \frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{2 N_2}\right)\\
&&T_3 \geq \frac{{1}}{{2}}\log\left(c_3+\frac{P}{\alpha_2 P+N_2+N_3}\right).
\end{eqnarray}
Since $\alpha_0=\frac{N_2}{P}\in\left[0,\frac{1}{3}\right]$, it follows that $c_{11}=\frac{1-N_2/P}{2-N_2/P}\geq \frac{2}{5}$, and $c_3=\frac{1}{2-N_2/P}\geq \frac{1}{2}$.
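The two constant bounds can be confirmed by a sweep over $\alpha_0\in\left[0,\frac{1}{3}\right]$:

```python
# Check the constants used in the gap analysis: for a0 = N2/P in [0, 1/3],
# c11 = (1-a0)/(2-a0) >= 2/5 and c3 = 1/(2-a0) >= 1/2.
def constants(a0):
    return (1 - a0) / (2 - a0), 1 / (2 - a0)

grid = [i / 3000 for i in range(1001)]          # a0 in [0, 1/3]
c11_min = min(constants(a)[0] for a in grid)    # attained at a0 = 1/3
c3_min = min(constants(a)[1] for a in grid)     # attained at a0 = 0
```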
Starting from $\mathcal{R}_o$ in Table \ref{tab:outerBounds}, we can express the two-dimensional outer bound region at a given $R_2$ as
\begin{eqnarray*}
&& R_1 \leq \min\left\{\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-R_2,C_1\right\}\\
&& \ \ \ \ \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_2,\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
&& R_3 \leq \min\left\{\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_2}\right)-R_2,C_3\right\}\\
&& \ \ \ \ \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-R_2,\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}.
\end{eqnarray*}
Depending on which term is the bottleneck in the $\min\{\cdot,\cdot\}$ expressions, there are three cases:
\begin{itemize}
\item $R_2\leq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)$
\item $\frac{{1}}{{2}}\log\left(\frac{7}{4}\right)\leq R_2\leq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_2}\cdot\frac{7}{4}\right)$
\item $R_2\geq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_2}\cdot\frac{7}{4}\right)$.
\end{itemize}
At $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\cdot\frac{7}{4}\right)$, the outer bound region is
\begin{eqnarray*}
&& R_1 \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{N_2}{N_1}\cdot\frac{4}{3}\right),\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
&& R_3 \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{4}{3}\right),\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}.
\end{eqnarray*}
Depending on which term is the bottleneck in the $\min\{\cdot,\cdot\}$ expressions, we consider the following three cases:
\begin{itemize}
\item $\alpha_2 P\geq N_3$
\item $N_2\leq \alpha_2 P\leq N_3$
\item $\alpha_2 P\leq N_2$.
\end{itemize}
\emph{Case i}) $\alpha_2 P\geq N_3$:
The outer bound region at $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\cdot\frac{7}{4}\right)$ is
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{N_2}{N_1}\cdot\frac{4}{3}\right), R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{4}{3}\right).
\end{eqnarray}
For comparison, let us take a look at the achievable rate region. The first term of $T_1$ is lower bounded by
\begin{eqnarray}
&&T_{11}'' =\frac{{1}}{{2}}\log\left(c_{11}+\frac{P-N_2}{\alpha_2 P+2N_2}\right)\\
&&\ \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{2}{5}+\frac{P-\alpha_2 P}{3\alpha_2 P}\right)\\
&&\ \ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right).
\end{eqnarray}
We get the lower bounds:
\begin{eqnarray}
&& T_1 = T_{11}''+T_{10}\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right)+\frac{{1}}{{2}}\log\left(1+\frac{N_2}{N_1}\right)\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\cdot\frac{N_2}{N_1}\right)\\
&&T_3\geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_2 P+N_2+N_3}\right)\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right).
\end{eqnarray}
For fixed $\alpha_2$ and $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{2N_2}\right)$, the two-dimensional achievable rate region is given by
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\cdot\frac{N_2}{N_1}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right).
\end{eqnarray}
\emph{Case ii}) $N_2\leq \alpha_2 P\leq N_3$:
The outer bound region at $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\cdot\frac{7}{4}\right)$ is
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{N_2}{N_1}\cdot\frac{4}{3}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right).
\end{eqnarray}
Now, let us take a look at the achievable rate region. We have the lower bounds:
\begin{eqnarray}
&& T_1 > \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\cdot\frac{N_2}{N_1}\right)\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_2 P+N_2+N_3}\right)\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3N_3}\right).
\end{eqnarray}
For fixed $\alpha_2$ and $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{2N_2}\right)$, the two-dimensional achievable rate region is given by
\begin{eqnarray}
R_1\leq \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\cdot\frac{N_2}{N_1}\right),\ R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{3N_3}\right).
\end{eqnarray}
\emph{Case iii}) $\alpha_2 P\leq N_2$:
The outer bound region at $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\cdot\frac{7}{4}\right)$ is
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right).
\end{eqnarray}
For this range of $\alpha_2$, the rate $R_2$ is small, i.e., $R_2 = \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\cdot\frac{7}{4}\right)\leq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)<\frac{1}{2}$, while $R_1$ and $R_3$ are close to the single-user capacities $C_1$ and $C_3$, respectively.
Let us take a look at the achievable rate region.
The first term of $T_1$ is lower bounded by
\begin{eqnarray}
&&T_{11}'' =\frac{{1}}{{2}}\log\left(c_{11}+\frac{P-N_2}{\alpha_2 P+2N_2}\right)\\
&&\ \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{2}{5}+\frac{P-N_2}{3N_2}\right)\\
&&\ \ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3N_2}\right).
\end{eqnarray}
We get the lower bounds:
\begin{eqnarray}
&& T_1 = T_{11}''+T_{10}\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3N_2}\right)+\frac{{1}}{{2}}\log\left(1+\frac{N_2}{N_1}\right)\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3N_1}\right)\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_2 P+N_2+N_3}\right)\\
&&\ \ \ \ > \frac{{1}}{{2}}\log\left(\frac{P}{3N_3}\right).
\end{eqnarray}
For fixed $\alpha_2$ and $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{2N_2}\right)$, the following two-dimensional rate region is achievable.
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{3N_1}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{3N_3}\right).
\end{eqnarray}
In all three cases above, by comparing the inner and outer bound regions, we can see that $\delta_1\leq \frac{{1}}{{2}}\log\left(3\cdot\frac{4}{3}\right)=1$, $\delta_2\leq \frac{{1}}{{2}}\log\left(2\cdot\frac{7}{4}\right) < 0.91$ and $\delta_3\leq \frac{{1}}{{2}}\log\left(3\cdot\frac{4}{3}\right)=1$. Therefore, we can conclude that the gap is within one bit per message.
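The gap constants quoted above follow from simple arithmetic:

```python
from math import log2

# Per-message gap constants for channel type 1:
# delta_1, delta_3 <= 1/2 log(3 * 4/3) = 1 bit,
# delta_2          <= 1/2 log(2 * 7/4) ≈ 0.90 bit.
gap13 = 0.5 * log2(3 * 4 / 3)
gap2 = 0.5 * log2(2 * 7 / 4)
```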
\section{Inner Bound: Channel Type 2}
\begin{theorem}
Given $\alpha_1\in [0,1]$, the region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&&R_1 \leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right)\\
&&R_2 \leq \frac{{1}}{{2}}\log^+\left(\frac{1}{2}+\frac{P}{\alpha_1 P+N_2}\right)\\
&&R_3 \leq \frac{{1}}{{2}}\log^+\left(\frac{1}{2}+\frac{P}{\alpha_1 P+N_3}\right),
\end{eqnarray*}
and $\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha_1}\mathcal{R}_\alpha\right)$ is achievable.
\end{theorem}
\subsection{Achievable Scheme}
For this channel type, rate splitting is not necessary. The transmit signal $\mathbf{x}_k$ is the coded signal of message $M_k\in\{1,2,\ldots,2^{nR_k}\}$, $k=1,2,3$. In particular, $\mathbf{x}_2$ and $\mathbf{x}_3$ are lattice-coded signals using the same pair of coding and shaping lattices. As a result, the sum $\mathbf{x}_2+\mathbf{x}_3$ is a dithered lattice codeword. The power allocation satisfies
$\mathbb{E}[\|\mathbf{x}_1\|^2]=\alpha_1 nP$, $\mathbb{E}[\|\mathbf{x}_2\|^2]=nP$, and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=[\mathbf{x}_2+\mathbf{x}_3]+\mathbf{x}_1+\mathbf{z}_1\\
&&\mathbf{y}_2=\mathbf{x}_2+\mathbf{x}_1+\mathbf{z}_2\\
&&\mathbf{y}_3=\mathbf{x}_3+\mathbf{x}_1+\mathbf{z}_3.
\end{eqnarray*}
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale} (b).
Decoding is performed in the following way.
\begin{itemize}
\item At receiver 1, $[\mathbf{x}_2+\mathbf{x}_3]$ is first decoded while treating $\mathbf{x}_1+\mathbf{z}_1$ as noise. Next, $\mathbf{x}_1$ is decoded from $\mathbf{y}_1-[\mathbf{x}_2+\mathbf{x}_3]=\mathbf{x}_1+\mathbf{z}_1$. For reliable decoding, the code rates should satisfy
\begin{eqnarray}
&& R_2 \leq T_2' =\frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_1 P+N_1}\right)\\
&& R_3 \leq T_3' =\frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_1 P+N_1}\right)\\
&& R_1 \leq T_1 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right).
\end{eqnarray}
\item At receiver 2, $\mathbf{x}_2$ is decoded while treating $\mathbf{x}_1+\mathbf{z}_2$ as noise. Similarly at receiver 3, $\mathbf{x}_3$ is decoded while treating $\mathbf{x}_1+\mathbf{z}_3$ as noise. For reliable decoding, the code rates should satisfy
\begin{eqnarray}
&&R_2\leq T_2'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_1 P+N_2}\right)\\
&&R_3\leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_1 P+N_3}\right).
\end{eqnarray}
\end{itemize}
Putting these together, we get
\begin{eqnarray*}
&& R_1 \leq T_1\\
&& R_2 \leq T_2 =\min\{T_2',T_2''\}\\
&& R_3 \leq T_3 =\min\{T_3',T_3''\}
\end{eqnarray*}
where
\begin{eqnarray}
&& T_1 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right)\\
&& T_2 \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_1 P+N_2}\right)\\
&& \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{2\cdot\max\{\alpha_1 P,N_2\} }\right)\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{\alpha_1 P+N_3}\right)\\
&& \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{2\cdot\max\{\alpha_1 P,N_3\} }\right).
\end{eqnarray}
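The last relaxation in each chain uses $\alpha_1 P+N_k\leq 2\cdot\max\{\alpha_1 P,N_k\}$. As a quick numerical sanity check, illustrative only and with ad hoc function names, the following Python sweep confirms that the relaxed bound never exceeds the exact one:

```python
import math
import random

def t_exact(P, a1, N):
    # exact lower bound: (1/2) log2(1/2 + P / (a1*P + N))
    return 0.5 * math.log2(0.5 + P / (a1 * P + N))

def t_relaxed(P, a1, N):
    # relaxed bound: (1/2) log2(1/2 + P / (2*max(a1*P, N)))
    return 0.5 * math.log2(0.5 + P / (2 * max(a1 * P, N)))

random.seed(1)
for _ in range(100_000):
    P = random.uniform(1.0, 1000.0)
    a1 = random.uniform(1e-6, 1.0)
    N = random.uniform(1e-3, 100.0)
    # a1*P + N <= 2*max(a1*P, N), so the exact bound dominates
    assert t_exact(P, a1, N) >= t_relaxed(P, a1, N) - 1e-12
```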
\subsection{The Gap}
Starting from $\mathcal{R}_o$ in Table \ref{tab:outerBounds}, we can express the two-dimensional outer bound region at $R_1$ as
\begin{eqnarray*}
&& R_2 \leq \min\left\{\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-R_1,C_2\right\}\\
&& \ \ \ \ \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1,\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}\\
&& R_3 \leq \min\left\{\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-R_1,C_3\right\}\\
&& \ \ \ \ \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1,\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}.
\end{eqnarray*}
Depending on the bottleneck of $\min\{\cdot,\cdot\}$ expressions, there are three cases:
\begin{itemize}
\item $R_1\leq \frac{{1}}{{2}}\log\left(\frac{N_2}{N_1}\cdot\frac{7}{4}\right)$
\item $\frac{{1}}{{2}}\log\left(\frac{N_2}{N_1}\cdot\frac{7}{4}\right)\leq R_1\leq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_1}\cdot\frac{7}{4}\right)$
\item $R_1\geq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_1}\cdot\frac{7}{4}\right)$.
\end{itemize}
At $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{7}{4}\right)$, the region can be expressed as
\begin{eqnarray*}
&& R_2 \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{4}{3}\right),\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}\\
&& R_3 \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{4}{3}\right),\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}.
\end{eqnarray*}
Depending on the bottleneck of $\min\{\cdot,\cdot\}$ expressions, we consider the following three cases.
\emph{Case i}) $\alpha_1 P\geq N_3$: The two-dimensional outer bound region at $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{7}{4}\right)$ is
\begin{eqnarray}
R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{4}{3}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{4}{3}\right).
\end{eqnarray}
For fixed $\alpha_1$ and $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)$, the following two-dimensional region is achievable.
\begin{eqnarray}
R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{2\alpha_1 P}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{2\alpha_1 P}\right).
\end{eqnarray}
\emph{Case ii}) $N_2\leq \alpha_1 P\leq N_3$: The two-dimensional outer bound region at $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{7}{4}\right)$ is
\begin{eqnarray}
R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{4}{3}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right).
\end{eqnarray}
For fixed $\alpha_1$ and $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)$, the following two-dimensional region is achievable.
\begin{eqnarray}
R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{2\alpha_1 P}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{2N_3}\right).
\end{eqnarray}
\emph{Case iii}) $\alpha_1 P\leq N_2$: The two-dimensional outer bound region at $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{7}{4}\right)$ is
\begin{eqnarray}
R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right).
\end{eqnarray}
For fixed $\alpha_1$ and $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)$, the following two-dimensional region is achievable.
\begin{eqnarray}
R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{2N_2}\right),\ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{2N_3}\right).
\end{eqnarray}
In all three cases above, by comparing the inner and outer bounds, we can see that $\delta_1\leq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)<0.41$, $\delta_2 \leq \frac{{1}}{{2}}\log\left(2\cdot\frac{4}{3}\right)<0.71$, and $\delta_3 \leq\frac{{1}}{{2}}\log\left(2\cdot\frac{4}{3}\right)<0.71$. We conclude that the inner and outer bounds are within one bit of each other.
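The numerical gap constants quoted above can be checked directly; this short Python snippet, illustrative only, evaluates them:

```python
import math

# gap constants claimed for channel type 2
delta1 = 0.5 * math.log2(7 / 4)       # claimed < 0.41
delta23 = 0.5 * math.log2(2 * 4 / 3)  # claimed < 0.71 (both delta_2 and delta_3)

assert delta1 < 0.41
assert delta23 < 0.71
```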
\section{Inner Bound: Channel Type 3}
\begin{theorem}
Given $\alpha \in [0,1]$, the region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&&R_1 \leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha P}{N_1}\right)\\
&&R_2 \leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha P}{N_2}\right)\\
&&R_3 \leq \frac{{1}}{{2}}\log\left(1+\frac{P}{2\alpha P+N_3}\right),
\end{eqnarray*}
and $\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha}\mathcal{R}_\alpha\right)$ is achievable.
\end{theorem}
\subsection{Achievable Scheme}
For this channel type, neither rate splitting nor aligned interference decoding is necessary.
Transmit signal $\mathbf{x}_k$ is a coded signal of $M_k\in\{1,2,\ldots,2^{nR_k}\},k=1,2,3$. The power allocation satisfies $\mathbb{E}[\|\mathbf{x}_1\|^2]=\alpha nP$, $\mathbb{E}[\|\mathbf{x}_2\|^2]=\alpha nP$, and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=\mathbf{x}_3+\mathbf{x}_1+\mathbf{z}_1\\
&&\mathbf{y}_2=\mathbf{x}_3+\mathbf{x}_2+\mathbf{z}_2\\
&&\mathbf{y}_3=\mathbf{x}_3+\mathbf{x}_1+\mathbf{x}_2+\mathbf{z}_3.
\end{eqnarray*}
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale} (c). Decoding is performed in the following way.
\begin{itemize}
\item At receiver 1, $\mathbf{x}_3$ is first decoded while treating $\mathbf{x}_1+\mathbf{z}_1$ as noise. Next, $\mathbf{x}_1$ is decoded from $\mathbf{y}_1-\mathbf{x}_3=\mathbf{x}_1+\mathbf{z}_1$. For reliable decoding, the code rates should satisfy
\begin{eqnarray}
&& R_3 \leq T_3' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha P+N_1}\right)\\
&& R_1 \leq T_1 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha P}{N_1}\right).
\end{eqnarray}
\item At receiver 2, $\mathbf{x}_3$ is first decoded while treating $\mathbf{x}_2+\mathbf{z}_2$ as noise. Next, $\mathbf{x}_2$ is decoded from $\mathbf{y}_2-\mathbf{x}_3=\mathbf{x}_2+\mathbf{z}_2$. For reliable decoding, the code rates should satisfy
\begin{eqnarray}
&& R_3 \leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha P+N_2}\right)\\
&& R_2 \leq T_2 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha P}{N_2}\right).
\end{eqnarray}
\item At receiver 3, $\mathbf{x}_3$ is decoded while treating $\mathbf{x}_1+\mathbf{x}_2+\mathbf{z}_3$ as noise. For reliable decoding, the code rates should satisfy
\begin{eqnarray}
&& R_3\leq T_3''' =\frac{{1}}{{2}}\log\left(1+\frac{P}{2\alpha P+N_3}\right).
\end{eqnarray}
\end{itemize}
Putting these together, we get
\begin{eqnarray*}
&& R_1 \leq T_1\\
&& R_2 \leq T_2\\
&& R_3 \leq T_3 =\min\{T_3',T_3'',T_3'''\}
\end{eqnarray*}
where
\begin{eqnarray}
&& T_1 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha P}{N_1}\right)\\
&& T_2 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha P}{N_2}\right)\\
&& T_3 =\frac{{1}}{{2}}\log\left(1+\frac{P}{2\alpha P+N_3}\right)\\
&& \ \ \ \ \geq\frac{{1}}{{2}}\log\left(1+\frac{P}{3\cdot\max\{\alpha P,N_3\} }\right).
\end{eqnarray}
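Here the relaxation uses $2\alpha P+N_3\leq 3\cdot\max\{\alpha P,N_3\}$. As an illustrative numerical check (function names are ad hoc), the following sweep confirms the relaxed bound is never larger than the exact one:

```python
import math
import random

def t3_exact(P, a, N3):
    # exact: (1/2) log2(1 + P / (2*a*P + N3))
    return 0.5 * math.log2(1 + P / (2 * a * P + N3))

def t3_relaxed(P, a, N3):
    # relaxed: (1/2) log2(1 + P / (3*max(a*P, N3)))
    return 0.5 * math.log2(1 + P / (3 * max(a * P, N3)))

random.seed(2)
for _ in range(100_000):
    P = random.uniform(1.0, 1000.0)
    a = random.uniform(1e-6, 1.0)
    N3 = random.uniform(1e-3, 100.0)
    # 2*a*P + N3 <= 3*max(a*P, N3) in both cases a*P >= N3 and a*P < N3
    assert t3_exact(P, a, N3) >= t3_relaxed(P, a, N3) - 1e-12
```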
\subsection{The Gap}
Starting from $\mathcal{R}_o$ in Table \ref{tab:outerBounds}, we can express the two-dimensional outer bound region at $R_3$ as
\begin{eqnarray*}
&& R_1 \leq \min\left\{\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-R_3,C_1\right\}\\
&& \ \ \ \ \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_3,\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
&& R_2 \leq \min\left\{\frac{{1}}{{2}}\log\left(1+\frac{2P}{N_2}\right)-R_3,C_2\right\}\\
&& \ \ \ \ \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-R_3,\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}.
\end{eqnarray*}
Depending on the bottleneck of $\min\{\cdot,\cdot\}$ expressions, there are two cases:
$R_3\leq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)$ and $R_3\geq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)$.
We assume that $R_3\geq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)$, equivalently $\alpha\leq\frac{4}{7}$. We also assume that $R_3\leq\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\right)$, equivalently $\alpha P\geq N_3$. The other cases are trivial.
The two-dimensional outer bound region at $R_3=\frac{{1}}{{2}}\log\left(\frac{P}{\alpha P}\right)$ is
\begin{eqnarray*}
&& R_1 \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{\alpha P}{N_1}\cdot\frac{7}{3}\right),\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
&& R_2 \leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{\alpha P}{N_2}\cdot\frac{7}{3}\right),\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}.
\end{eqnarray*}
For $\alpha\leq\frac{4}{7}$, the two-dimensional outer bound region is
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{\alpha P}{N_1}\cdot\frac{7}{3}\right),\ R_2 \leq \frac{{1}}{{2}}\log\left(\frac{\alpha P}{N_2}\cdot\frac{7}{3}\right).
\end{eqnarray}
For $\alpha P\geq N_3$, the two-dimensional achievable rate region at $R_3=\frac{{1}}{{2}}\log\left(\frac{P}{3\alpha P}\right)$ is
\begin{eqnarray}
R_1 \leq \frac{{1}}{{2}}\log\left(\frac{\alpha P}{N_1}\right),\ R_2 \leq \frac{{1}}{{2}}\log\left(\frac{\alpha P}{N_2}\right).
\end{eqnarray}
By comparing the inner and outer bounds, we can see that $\delta_1\leq \frac{{1}}{{2}}\log\left(\frac{7}{3}\right)<0.62$, $\delta_2 \leq \frac{{1}}{{2}}\log\left(\frac{7}{3}\right)<0.62$, and $\delta_3 \leq\frac{{1}}{{2}}\log\left(3\right)<0.8$. We conclude that the inner and outer bounds are within one bit of each other.
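As with channel type 2, the gap constants for channel type 3 can be evaluated numerically; the snippet below is an illustrative check only:

```python
import math

# gap constants claimed for channel type 3
delta12 = 0.5 * math.log2(7 / 3)  # claimed < 0.62 (both delta_1 and delta_2)
delta3 = 0.5 * math.log2(3)       # claimed < 0.8

assert delta12 < 0.62
assert delta3 < 0.8
```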
\begin{figure}[t]
\begin{center}
\mbox{
\subfigure[Large $R_1$]{\includegraphics[width=0.2\textwidth]{type4_largeR1-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Small $R_1$]{\includegraphics[width=0.2\textwidth]{type4_smallR1-eps-converted-to.pdf}}
}
\caption{The cross-section of the type 4 outer bound region at a relatively small or large $R_1$.}
\label{fig:outerBoundCrossSection4}
\end{center}
\end{figure}
\section{Inner Bound: Channel Type 4}
The relaxed outer bound region $\mathcal{R}_o'$ is given by
\begin{eqnarray*}
&& \ \ \ \ \ \ \ R_k \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\right)+\frac{{1}}{{2}}\log\left(\frac{4}{3}\right),\ k=1,2,3\\
&& R_1+R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right)\\
&& R_1+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right)\\
&& R_2+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right).
\end{eqnarray*}
The cross-sectional region at a given $R_1$ is described by
\begin{eqnarray*}
&& R_2\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1,\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{4}{3}\right)\right\}\\
&& R_3\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_1,\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}\\
&& R_2+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right).
\end{eqnarray*}
Depending on the bottleneck of $\min\{\cdot,\cdot\}$ expressions, there are three cases:
\begin{itemize}
\item $R_1\leq \frac{{1}}{{2}}\log\left(\frac{N_2}{N_1}\cdot\frac{7}{4}\right)$
\item $\frac{{1}}{{2}}\log\left(\frac{N_2}{N_1}\cdot\frac{7}{4}\right)\leq R_1\leq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_1}\cdot\frac{7}{4}\right)$
\item $R_1\geq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_1}\cdot\frac{7}{4}\right)$.
\end{itemize}
In this section, we focus on the third case. The other cases can be proved similarly.
If the sum of the right-hand sides of the $R_2$ and $R_3$ bounds is smaller than the right-hand side of the $R_2+R_3$ bound, i.e.,
\begin{eqnarray}
\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-2R_1\leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right),
\end{eqnarray}
then the $R_2+R_3$ bound is not active at that $R_1$.
This condition can be expressed as a threshold on $R_1$ given by
\begin{eqnarray}
&& R_1 > R_{1,th}= \frac{1}{2}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{1}{4}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)\nonumber\\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \frac{1}{4}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)+\frac{1}{4}\log\left(\frac{N_2}{N_1}\right).
\end{eqnarray}
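The simplification of $R_{1,th}$ above is a pure log identity. As an illustrative check, not part of the argument, the following Python sweep verifies it numerically, writing $x=P/N_1$ and $y=P/N_2$ (so $N_2/N_1=x/y$):

```python
import math
import random

def lhs(x, y):
    # (1/2) log2(x*7/3) - (1/4) log2(y*7/3), with x = P/N1, y = P/N2
    return 0.5 * math.log2(x * 7 / 3) - 0.25 * math.log2(y * 7 / 3)

def rhs(x, y):
    # (1/4) log2(x*7/3) + (1/4) log2(x/y); note N2/N1 = x/y
    return 0.25 * math.log2(x * 7 / 3) + 0.25 * math.log2(x / y)

random.seed(3)
for _ in range(10_000):
    x = random.uniform(1.0, 1e6)  # P/N1
    y = random.uniform(1.0, x)    # P/N2 <= P/N1 since N2 >= N1
    assert abs(lhs(x, y) - rhs(x, y)) < 1e-9
```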
For this relatively large $R_1$, the cross-sectional region is a rectangle as described in Fig. \ref{fig:outerBoundCrossSection4} (a). In contrast, for a relatively small $R_1$, when the threshold condition does not hold, the cross-sectional region is a MAC-like region as described in Fig. \ref{fig:outerBoundCrossSection4} (b). In the rest of the section, we present achievable schemes for each case.
\subsection{Achievable Scheme for Relatively Large $R_1$}
\begin{figure}[tp]
\begin{center}
\mbox{
\subfigure[Channel type 4: relatively large $R_1$]{\includegraphics[width=0.45\textwidth]{type4signalScale1-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Channel type 4: relatively small $R_1$]{\includegraphics[width=0.45\textwidth]{type4signalScale2-eps-converted-to.pdf}}
}
\caption{Signal scale diagram.}
\label{fig:signalScale4}
\end{center}
\end{figure}
\begin{theorem}
Given $\alpha=(\alpha_0,\alpha_1,\alpha_2) \in [0,1]^3$, the region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&& R_1 \leq \min\left\{\frac{{1}}{{2}}\log^+\left(c_{11}+\frac{(1-\alpha_0-\alpha_1-\alpha_2)P}{(\alpha_0+\alpha_1+2\alpha_2)P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_1}\right)\right\}\\
&&\ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{(\alpha_0+\alpha_2) P+N_2}\right)\\
&&\ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{\alpha_0 P}{N_1}\right)\\
&& R_2 \leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right)\\
&& R_3 \leq \frac{{1}}{{2}}\log^+\left(c_3+\frac{P}{(\alpha_0+\alpha_1+\alpha_2) P+N_3}\right)
\end{eqnarray*}
where $c_{11}=\frac{1-\alpha_0-\alpha_1-\alpha_2}{2-\alpha_0-\alpha_1-\alpha_2}$ and $c_3=\frac{1}{2-\alpha_0-\alpha_1-\alpha_2}$,
and $\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha}\mathcal{R}_\alpha\right)$ is achievable.
\end{theorem}
We present an achievable scheme for the case of $R_1 > R_{1,th}$. Message $M_1\in\{1,2,\ldots,2^{nR_1}\}$ is split into three parts: $M_{10}\in\{1,2,\ldots,2^{nR_{10}}\}$, $M_{11}\in\{1,2,\ldots,2^{nR_{11}}\}$, and $M_{12}\in\{1,2,\ldots,2^{nR_{12}}\}$, so $R_1=R_{10}+R_{11}+R_{12}$. We generate the signals in the following way: $\mathbf{x}_{11}$ and $\mathbf{x}_{11}'$ are differently coded signals of $M_{11}$, and $\mathbf{x}_{10}$ and $\mathbf{x}_{12}$ are coded signals of $M_{10}$ and $M_{12}$, respectively. The transmit signal is the sum
\[\mathbf{x}_1=\mathbf{x}_{10}+\mathbf{x}_{11}+\mathbf{x}_{12}+\mathbf{x}_{11}'.\] The power allocation satisfies $\mathbb{E}[\|\mathbf{x}_{10}\|^2]=\alpha_0 nP$, $\mathbb{E}[\|\mathbf{x}_{11}\|^2]=\alpha_2 nP$, $\mathbb{E}[\|\mathbf{x}_{12}\|^2]=\alpha_1 nP$, and $\mathbb{E}[\|\mathbf{x}_{11}'\|^2]=(1-\alpha_0-\alpha_1-\alpha_2) nP$.
The transmit signals $\mathbf{x}_2$ and $\mathbf{x}_3$ are coded signals of the messages $M_2\in\{1,2,\ldots,2^{nR_2}\}$ and $M_3\in\{1,2,\ldots,2^{nR_3}\}$, satisfying $\mathbb{E}[\|\mathbf{x}_2\|^2]=\alpha_2 nP$ and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$.
The signals $\mathbf{x}_{11}'$ and $\mathbf{x}_3$ are lattice-coded signals using the same coding lattice but different shaping lattices. As a result, the sum $\mathbf{x}_{11}'+\mathbf{x}_3$ is a dithered lattice codeword.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=[\mathbf{x}_{11}'+\mathbf{x}_3]+\mathbf{x}_{12}+\mathbf{x}_{11}+\mathbf{x}_{10}+\mathbf{z}_1\\
&&\mathbf{y}_2=\mathbf{x}_{11}'+\mathbf{x}_{12}+\mathbf{x}_{11}+\mathbf{x}_{2}+\mathbf{x}_{10}+\mathbf{z}_2\\
&&\mathbf{y}_3=\mathbf{x}_3+\mathbf{x}_2+\mathbf{z}_3.
\end{eqnarray*}
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale4} (a).
Decoding is performed in the following way.
\begin{itemize}
\item At receiver 1, $[\mathbf{x}_{11}'+\mathbf{x}_3]$ is first decoded while treating the other signals as noise, and then removed from $\mathbf{y}_1$. Next, $\mathbf{x}_{12}$, $\mathbf{x}_{11}$, and $\mathbf{x}_{10}$ are decoded successively. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&&R_{11}\leq T_{11}' =\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_0-\alpha_1-\alpha_2)P}{(\alpha_0+\alpha_1+\alpha_2)P+N_1}\right)\\
&&R_3\ \leq T_3'\ =\frac{{1}}{{2}}\log\left(c_3+\frac{P}{(\alpha_0+\alpha_1+\alpha_2)P+N_1}\right)\\
&&R_{12}\leq T_{12}' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{(\alpha_0+\alpha_2) P+N_1}\right)\\
&&R_{11}\leq T_{11}'' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_1}\right)\\
&&R_{10}\leq T_{10} =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_0 P}{N_1}\right)
\end{eqnarray*}
where $c_{11}=\frac{(1-\alpha_0-\alpha_1-\alpha_2)P}{(1-\alpha_0-\alpha_1-\alpha_2)P+P}=\frac{1-\alpha_0-\alpha_1-\alpha_2}{2-\alpha_0-\alpha_1-\alpha_2}$ and $c_3=\frac{P}{(1-\alpha_0-\alpha_1-\alpha_2)P+P}=\frac{1}{2-\alpha_0-\alpha_1-\alpha_2}$. Note that $0\leq c_{11}\leq \frac{1}{2}$, $c_{11}+c_3=1$, and $\frac{1}{2}\leq c_3\leq 1$.
\item At receiver 2, $\mathbf{x}_{11}'$ is first decoded while treating other signals as noise. Having successfully recovered $M_{11}$, receiver 2 can generate $\mathbf{x}_{11}$ and $\mathbf{x}_{11}'$, and cancel them from $\mathbf{y}_2$. Next, $\mathbf{x}_{12}$ is decoded from $\mathbf{x}_{12}+\mathbf{x}_{2}+\mathbf{x}_{10}+\mathbf{z}_2$. Finally, $\mathbf{x}_2$ is decoded from $\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_2$. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&& R_{11}\leq T_{11}''' =\frac{{1}}{{2}}\log\left(1+\frac{(1-\alpha_0-\alpha_1-\alpha_2)P}{(\alpha_0+\alpha_1+2\alpha_2)P+N_2}\right)\\
&& R_{12}\leq T_{12}'' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{(\alpha_0+\alpha_2)P+N_2}\right)\\
&& R_2\ \leq T_2\ =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right).
\end{eqnarray*}
\item At receiver 3, $\mathbf{x}_3$ is decoded while treating $\mathbf{x}_2+\mathbf{z}_3$ as noise. Reliable decoding is possible if
\begin{eqnarray}
&&R_3\ \leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_2 P+N_3}\right).
\end{eqnarray}
\end{itemize}
Putting these together, we can see that given $\alpha_0,\alpha_1,\alpha_2\in[0,1]$, the following rate region is achievable.
\begin{eqnarray*}
&& R_1 \leq T_1=\min\{T_{11}',T_{11}'',T_{11}'''\}+\min\{T_{12}',T_{12}''\}+T_{10}\\
&& R_2 \leq T_2\\
&& R_3 \leq T_3=\min\{T_3',T_3''\}
\end{eqnarray*}
where
\begin{eqnarray*}
&& T_1 = \min\{T_{11}',T_{11}'',T_{11}'''\}+\min\{T_{12}',T_{12}''\}+T_{10}\\
&&\ \ \ \ = \min\{\min\{T_{11}',T_{11}'''\},T_{11}''\}+T_{12}''+T_{10}\\
&&\ \ \ \ \geq \min\left\{\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_0-\alpha_1-\alpha_2)P}{(\alpha_0+\alpha_1+2\alpha_2)P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_1}\right)\right\}\\
&&\ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{(\alpha_0+\alpha_2) P+N_2}\right)\\
&&\ \ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(1+\frac{\alpha_0 P}{N_1}\right)\\
&& T_2 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right)\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(c_3+\frac{P}{(\alpha_0+\alpha_1+\alpha_2) P+N_3}\right).
\end{eqnarray*}
\subsection{The Gap for Relatively Large $R_1$}
We choose $\alpha_0$, $\alpha_1$, and $\alpha_2$ such that $\alpha_1\leq \frac{3}{8}$, $\alpha_1\geq 3(\alpha_0+\alpha_2)$, $\alpha_2 P\geq 3N_3$, and $\alpha_0 P=N_2$. It follows that $\alpha_0+\alpha_1+\alpha_2\leq \frac{4}{3}\alpha_1\leq \frac{1}{2}$, that $c_{11}\geq \frac{1}{3}$, and that $(\alpha_0+\alpha_1+2\alpha_2)P+N_2 = 2(\alpha_0+\alpha_2)P+\alpha_1 P\leq \frac{5}{3}\alpha_1 P$. We now derive lower bounds for each term of the $T_1$ expression above.
\begin{eqnarray*}
&& \min\{T_{11}',T_{11}'''\}\\
&&\geq\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_0-\alpha_1-\alpha_2)P}{(\alpha_0+\alpha_1+2\alpha_2)P+N_2}\right)\\
&&\geq\frac{{1}}{{2}}\log\left(\frac{1}{3}+\frac{(1-(4/3)\alpha_1)P}{(5/3)\alpha_1 P}\right)\\
&&= \frac{{1}}{{2}}\log\left(\frac{P}{(5/3)\alpha_1 P}-\frac{7}{15}\right)\\
&&= \frac{{1}}{{2}}\log\left(\frac{P}{(5/3)\alpha_1 P}\right)+\frac{{1}}{{2}}\log\left(1-\frac{7}{15}\cdot\frac{5}{3}\alpha_1\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{(5/3)\alpha_1 P}\right)+\frac{{1}}{{2}}\log\left(\frac{17}{24}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{17}{40}\right)
\end{eqnarray*}
and
\begin{eqnarray}
&& T_{11}''=\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_1}\right)\\
&&\ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{(\alpha_0+\alpha_2) P+N_1}{\alpha_0 P+N_1}\right)\\
&&\ \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{(\alpha_0+\alpha_2) P}{\alpha_0 P+N_2}\right)\\
&&\ \ \ \ \ =\frac{{1}}{{2}}\log\left(\frac{(\alpha_0+\alpha_2) P}{2 N_2}\right).
\end{eqnarray}
Since $(\alpha_0+\alpha_2)P\geq N_2+3N_3\geq 4 N_2$,
\begin{eqnarray}
&&T_{12}'' = \frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{(\alpha_0+\alpha_2)P+N_2}\right)\\
&& \ \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(5/4)(\alpha_0+\alpha_2)P}\right).
\end{eqnarray}
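The chain of inequalities bounding $\min\{T_{11}',T_{11}'''\}$ relies on $\alpha_1\leq\frac{3}{8}$, and the final step is tight exactly at $\alpha_1=\frac{3}{8}$. As an illustrative numerical check (function names are ad hoc), a sweep over $(0,\frac{3}{8}]$ confirms the bound:

```python
import math

def chain_lhs(a1):
    # (1/2) log2(1/3 + (1 - (4/3)*a1) / ((5/3)*a1)), the relaxed form of min{T11', T11'''}
    return 0.5 * math.log2(1 / 3 + (1 - (4 / 3) * a1) / ((5 / 3) * a1))

def chain_rhs(a1):
    # (1/2) log2((17/40) / a1), the claimed final lower bound
    return 0.5 * math.log2((17 / 40) / a1)

for i in range(1, 10_000):
    a1 = (i / 10_000) * (3 / 8)  # sweep (0, 3/8)
    assert chain_lhs(a1) >= chain_rhs(a1) - 1e-12
# equality is attained exactly at a1 = 3/8
assert abs(chain_lhs(3 / 8) - chain_rhs(3 / 8)) < 1e-9
```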
Putting these together,
\begin{eqnarray*}
&& T_1 \geq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{17}{40}\right),\frac{{1}}{{2}}\log\left(\frac{(\alpha_0+\alpha_2) P}{2 N_2}\right)\right\}\\
&&\ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(5/4)(\alpha_0+\alpha_2) P}\right)+\frac{{1}}{{2}}\log\left(\frac{N_2}{N_1}\right)\\
&&\ \ \ \ = \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{(\alpha_0+\alpha_2) P}\cdot\frac{N_2}{N_1}\cdot\frac{17}{40}\cdot\frac{4}{5}\right),\right.\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{ N_1}\cdot\frac{1}{2}\cdot\frac{4}{5}\right)\right\}\\
&&\ \ \ \ = \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{(\alpha_0+\alpha_2) P}\cdot\frac{N_2}{N_1}\cdot\frac{17}{50}\right),\right.\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{ N_1}\cdot\frac{2}{5}\right)\right\}.
\end{eqnarray*}
Given $\alpha_1$, we choose $\alpha_2$ to satisfy $\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{17}{40}\right)=\frac{{1}}{{2}}\log\left(\frac{(\alpha_0+\alpha_2) P}{2N_2}\right)$. As a result, we can write $T_1\geq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{ N_1}\cdot\frac{2}{5}\right)$, and also
\begin{eqnarray}
&& T_2 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right)\\
&& \ \ \ \ \geq\frac{{1}}{{2}}\log\left(\frac{(\alpha_0+\alpha_2) P}{2 N_2}\right)\\
&& \ \ \ \ =\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{17}{40}\right).
\end{eqnarray}
Since $N_3\leq \frac{1}{3}\alpha_2 P\leq\frac{1}{3}(\alpha_0+\alpha_2)P \leq \frac{1}{9}\alpha_1 P$,
\begin{eqnarray*}
&& T_3 \geq \frac{{1}}{{2}}\log\left(c_3+\frac{P}{(\alpha_0+\alpha_1+\alpha_2) P+N_3}\right)\\
&& \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{1}{2}+\frac{P}{(4/3)\alpha_1 P+(1/9)\alpha_1 P}\right)\\
&& \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{P}{(13/9)\alpha_1 P}\right).
\end{eqnarray*}
The following rate region is achievable.
\begin{eqnarray}
&& R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{2}{5}\right)\\
&& R_2\leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{17}{40}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{9}{13}\right).
\end{eqnarray}
For fixed $\alpha_1$ and $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{2}{5}\right)$, the two-dimensional rate region, given by
\begin{eqnarray*}
&& R_2\leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{17}{40}\right),\ R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{9}{13}\right)
\end{eqnarray*}
is achievable.
In comparison, the two-dimensional outer bound region at $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{2}{5}\right)+1$ is given by
\begin{eqnarray*}
&&R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{2}{5}\right)-1\\
&&\ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{5}{2}\cdot\frac{1}{4}\right)\\
&&R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{2}{5}\right)-1\\
&&\ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{5}{2}\cdot\frac{1}{4}\right).
\end{eqnarray*}
As discussed above, the sum-rate bound on $R_2+R_3$ is loose for $R_1$ larger than the threshold, so the rate region is a rectangle.
By comparing the inner and outer bound rate regions, we can see that $\delta_2< \frac{{1}}{{2}}\log\left(\frac{40}{17}\cdot\frac{7}{3}\cdot\frac{5}{2}\cdot\frac{1}{4}\right) < 0.89$ and $\delta_3< \frac{{1}}{{2}}\log\left(\frac{13}{9}\cdot\frac{7}{3}\cdot\frac{5}{2}\cdot\frac{1}{4}\right) < 0.54$. Therefore, we can conclude that the gap is within one bit per message.
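The gap constants for the large-$R_1$ case of channel type 4 can also be evaluated numerically; the snippet below is an illustrative check only:

```python
import math

# gap constants claimed for channel type 4, relatively large R_1
delta2 = 0.5 * math.log2((40 / 17) * (7 / 3) * (5 / 2) * (1 / 4))  # claimed < 0.89
delta3 = 0.5 * math.log2((13 / 9) * (7 / 3) * (5 / 2) * (1 / 4))   # claimed < 0.54

assert delta2 < 0.89
assert delta3 < 0.54
```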
\subsection{Achievable Scheme for Relatively Small $R_1$}
\begin{theorem}
Given $\alpha=(\alpha_0,\alpha_1,\alpha_2) \in [0,1]^3$, the region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&& R_1 \leq \min\left\{\frac{{1}}{{2}}\log^+\left(c_{11}+\frac{(1-\alpha_1)P}{(\alpha_1+\alpha_2) P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{(\alpha_1-\alpha_0) P}{\alpha_0 P+N_1}\right)\right\}+\frac{{1}}{{2}}\log\left(1+\frac{\alpha_0 P}{N_1}\right)\\
&& R_2 \leq\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right)\\
&& R_3 \leq \frac{{1}}{{2}}\log^+\left(c_3+\frac{P}{\max\{\alpha_1,\alpha_2\} P+N_3}\right)
\end{eqnarray*}
where $c_{11}=\frac{1-\alpha_1}{2-\alpha_1}$ and $c_3=\frac{1}{2-\alpha_1}$,
and $\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha}\mathcal{R}_\alpha\right)$ is achievable.
\end{theorem}
For the case of $R_1 < R_{1,th}$, we present the following achievable scheme. At transmitter 1, we split $M_1$ into $M_{10}$ and $M_{11}$, so $R_{1}=R_{10}+R_{11}$. The transmit signal is the sum
\[\mathbf{x}_1=\mathbf{x}_{10}+\mathbf{x}_{11}+\mathbf{x}_{11}'.\]
The power allocation satisfies $\mathbb{E}[\|\mathbf{x}_{10}\|^2]=\alpha_0 nP$, $\mathbb{E}[\|\mathbf{x}_{11}\|^2]=(\alpha_1-\alpha_0) nP$, and $\mathbb{E}[\|\mathbf{x}_{11}'\|^2]=(1-\alpha_1)nP$ at transmitter 1, $\mathbb{E}[\|\mathbf{x}_2\|^2]=\alpha_2 nP$ at transmitter 2, and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$ at transmitter 3.
The signals $\mathbf{x}_{11}'$ and $\mathbf{x}_3$ are lattice codewords using the same coding lattice but different shaping lattices. As a result, the sum $\mathbf{x}_{11}'+\mathbf{x}_3$ is a lattice codeword.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1 = [\mathbf{x}_{11}'+\mathbf{x}_3]+\mathbf{x}_{11}+\mathbf{x}_{10}+\mathbf{z}_1\\
&&\mathbf{y}_2 = \mathbf{x}_{11}'+\mathbf{x}_{11}+\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_2\\
&&\mathbf{y}_3 = \mathbf{x}_3+\mathbf{x}_2+\mathbf{z}_3.
\end{eqnarray*}
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale4} (b).
Decoding is performed in the following way.
\begin{itemize}
\item At receiver 1, $[\mathbf{x}_{11}'+\mathbf{x}_3]$ is first decoded while treating the other signals as noise, and then removed from $\mathbf{y}_1$. Next, $\mathbf{x}_{11}$ and then $\mathbf{x}_{10}$ are decoded successively. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&&R_{11}\leq T_{11}' =\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_1)P}{\alpha_1 P+N_1}\right)\\
&&R_3\ \leq T_3'\ =\frac{{1}}{{2}}\log\left(c_3+\frac{P}{\alpha_1 P+N_1}\right)\\
&&R_{11}\leq T_{11}'' =\frac{{1}}{{2}}\log\left(1+\frac{(\alpha_1-\alpha_0) P}{\alpha_0 P+N_1}\right)\\
&&R_{10}\leq T_{10} =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_0 P}{N_1}\right)
\end{eqnarray*}
where $c_{11}=\frac{(1-\alpha_1)P}{(1-\alpha_1)P+P}=\frac{1-\alpha_1}{2-\alpha_1}$ and $c_3=\frac{P}{(1-\alpha_1)P+P}=\frac{1}{2-\alpha_1}$. Note that $0\leq c_{11}\leq \frac{1}{2}$, $c_{11}+c_3=1$, and $\frac{1}{2}\leq c_3\leq 1$.
\item At receiver 2, $\mathbf{x}_{11}'$
is first decoded while treating other signals as noise. Having successfully recovered $M_{11}$, receiver 2 can generate $\mathbf{x}_{11}$ and $\mathbf{x}_{11}'$, and cancel them from $\mathbf{y}_2$. Next, $\mathbf{x}_2$ is decoded from $\mathbf{x}_2+\mathbf{x}_{10}+\mathbf{z}_2$. At receiver 2, $\mathbf{x}_{10}$ is not decoded. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&& R_{11}\leq T_{11}''' =\frac{{1}}{{2}}\log\left(1+\frac{(1-\alpha_1)P}{(\alpha_1+\alpha_2) P+N_2}\right)\\
&& R_2\ \leq T_2\ =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right).
\end{eqnarray*}
\item At receiver 3, $\mathbf{x}_3$ is decoded while treating $\mathbf{x}_2+\mathbf{z}_3$ as noise. Reliable decoding is possible if
\begin{eqnarray}
&&R_3\ \leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_2 P+N_3}\right).
\end{eqnarray}
\end{itemize}
Putting these together, we can see that given $\alpha_0,\alpha_1,\alpha_2\in[0,1]$, the following rate region is achievable.
\begin{eqnarray}
&& R_1 \leq T_1 =\min\{T_{11}',T_{11}'',T_{11}'''\}+T_{10}\\
&& R_2 \leq T_2 \\
&& R_3 \leq T_3 = \min\{T_3',T_3''\}
\end{eqnarray}
where
\begin{eqnarray*}
&& T_1 = \min\{T_{11}',T_{11}'',T_{11}'''\}+T_{10}\\
&&\ \ \ \ = \min\{\min\{T_{11}',T_{11}'''\},T_{11}''\}+T_{10}\\
&&\ \ \ \ \geq \min\left\{\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_1)P}{(\alpha_1+\alpha_2) P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{(\alpha_1-\alpha_0) P}{\alpha_0 P+N_1}\right)\right\}+\frac{{1}}{{2}}\log\left(1+\frac{\alpha_0 P}{N_1}\right)\\
&& T_2 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_0 P+N_2}\right)\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(c_3+\frac{P}{\max\{\alpha_1,\alpha_2\} P+N_3}\right).
\end{eqnarray*}
\begin{figure}[tp]
\begin{center}
\mbox{
\subfigure[Channel type 4: small $R_1$]{\includegraphics[width=0.2\textwidth]{type4MAC-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Channel type 5: small $R_2$]{\includegraphics[width=0.2\textwidth]{type5MAC-eps-converted-to.pdf}}
}
\caption{MAC-like region.}
\label{fig:MAClike}
\end{center}
\end{figure}
\subsection{The Gap for Relatively Small $R_1$}
We choose $\alpha_0$, $\alpha_1$, and $\alpha_2$ such that $\alpha_1 \leq \alpha_2\leq \frac{1}{2}$, $\alpha_1 P\geq 3N_2$, $\alpha_2 P\geq 3N_3$, and $\alpha_0 P=\frac{4}{5} N_2$. It follows that $c_{11}\geq \frac{1}{3}$ and that $(\alpha_1+\alpha_2)P+N_2\leq \frac{4}{3}\alpha_1 P+\alpha_2P\leq \frac{7}{3}\alpha_2 P$.
\begin{eqnarray*}
&&\min\{T_{11}',T_{11}'''\}\\
&&\geq\frac{{1}}{{2}}\log\left(c_{11}+\frac{(1-\alpha_1)P}{(\alpha_1+\alpha_2)P+N_2}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{1}{3}+\frac{(1-\alpha_2)P}{(7/3)\alpha_2 P}\right)\\
&&=\frac{{1}}{{2}}\log\left(\frac{P}{(7/3)\alpha_2 P}-\frac{2}{21}\right)\\
&&=\frac{{1}}{{2}}\log\left(\frac{P}{(7/3)\alpha_2 P}\right)+\frac{{1}}{{2}}\log\left(1-\frac{2}{21}\cdot\frac{7}{3}\alpha_2\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{(7/3)\alpha_2 P}\right)+\frac{{1}}{{2}}\log\left(\frac{8}{9}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{8}{21}\right)
\end{eqnarray*}
and
\begin{eqnarray}
&& T_{11}''=\frac{{1}}{{2}}\log\left(1+\frac{(\alpha_1-\alpha_0) P}{\alpha_0 P+N_1}\right)\\
&& \ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P+N_1}{\alpha_0 P+N_1}\right)\\
&& \ \ \ \ \ \geq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{\alpha_0 P+N_2}\right)\\
&& \ \ \ \ \ =\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(9/5)N_2}\right).
\end{eqnarray}
Putting these together,
\begin{eqnarray*}
&& T_1 \geq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{8}{21}\right),\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(9/5) N_2}\right) \right\}\\
&&\ \ \ \ \ \ \ +\frac{{1}}{{2}}\log\left(\frac{N_2}{N_1}\cdot\frac{4}{5}\right).
\end{eqnarray*}
Let us define $\alpha_1'$ by the equality $\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1' P}\cdot\frac{8}{21}\right)=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(9/5) N_2}\right)$. If we choose $\alpha_2\leq \alpha_1'$, then $\frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{8}{21}\right)\geq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(9/5) N_2}\right)$, and
\begin{eqnarray*}
&& T_1 \geq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{(9/5)N_2}\cdot\frac{N_2}{N_1}\cdot\frac{4}{5}\right)=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{4}{9}\right).
\end{eqnarray*}
We can see that the following rate region is achievable.
\begin{eqnarray}
&& R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{4}{9}\right)\\
&& R_2\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{(9/5)N_2}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{(4/3)\alpha_2 P}\right).
\end{eqnarray}
For fixed $\alpha_2\in [\alpha_1,\alpha_1']$ and $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{4}{9}\right)$, the two-dimensional rate region $\mathcal{R}_\alpha$, given by
\begin{eqnarray}
&& R_2\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{(9/5)N_2}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{(4/3)\alpha_2 P}\right)
\end{eqnarray}
is achievable. The union $\bigcup_{\alpha_2\in [\alpha_1,\alpha_1']}\mathcal{R}_\alpha$ is a MAC-like region, given by
\begin{eqnarray}
&& \ \ \ \ \ \ \ R_2\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1' P}{(9/5)N_2}\right)\\
&& \ \ \ \ \ \ \ \ \ \ \ \leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{8}{21}\right)\\
&& \ \ \ \ \ \ \ R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\cdot\frac{3}{4}\right)\\
&& R_2+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{(9/5)N_2}\cdot\frac{P}{(4/3)\alpha_2 P}\right)\\
&& \ \ \ \ \ \ \ \ \ \ \ \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{15}{36}\right).
\end{eqnarray}
This region is described in Fig. \ref{fig:MAClike} (a).
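The constant-folding steps used above (e.g. $\frac{5}{9}\cdot\frac{4}{5}=\frac{4}{9}$ for the $R_1$ bound and $\frac{5}{9}\cdot\frac{3}{4}=\frac{15}{36}$ for the sum rate) can be checked numerically. In the sketch below, the values of $P$, $N_1$, $N_2$, $\alpha_1$, $\alpha_2$ are illustrative assumptions only:

```python
import math

hl = lambda x: 0.5 * math.log2(x)  # rates in bits: (1/2) log2(x)

# illustrative values (assumed), loosely compatible with the section's constraints
P, N1, N2 = 4096.0, 1.0, 2.0
a1, a2 = 0.05, 0.3

# R1 bound folding: (a1*P/((9/5)N2)) * (N2/N1) * (4/5) = (a1*P/N1) * (4/9)
assert abs(hl(a1*P/((9/5)*N2) * N2/N1 * 4/5) - hl(a1*P/N1 * 4/9)) < 1e-9

# sum-rate folding: (5/9) * (3/4) = 15/36
assert abs(hl(a2*P/((9/5)*N2) * P/((4/3)*a2*P)) - hl(P/N2 * 15/36)) < 1e-9
```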
In comparison, the two-dimensional outer bound region at $R_1=\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{4}{9}\right)+1$, given by
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{4}{9}\right)-1\\
&&\ \ \ \ \ \ \ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{9}{4}\cdot\frac{1}{4}\right)\\
&&\ \ \ \ \ \ \ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\cdot\frac{4}{9}\right)-1\\
&&\ \ \ \ \ \ \ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot \frac{9}{4}\cdot\frac{1}{4}\right)\\
&&R_2+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right).
\end{eqnarray*}
Since $\delta_2 < \frac{{1}}{{2}}\log\left(\frac{21}{8}\cdot\frac{7}{3}\cdot \frac{9}{4}\cdot\frac{1}{4}\right) < 0.90$, $\delta_3 < \frac{{1}}{{2}}\log\left(\frac{4}{3}\cdot \frac{7}{3}\cdot \frac{9}{4}\cdot\frac{1}{4}\right) < 0.41$ and $\delta_{23} < \frac{{1}}{{2}}\log\left(\frac{36}{15}\cdot\frac{7}{3}\right) < 1.25 < \sqrt{2}$, we can conclude that the gap is to within one bit per message.
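As a quick numeric sanity check (not part of the original derivation), the gap constants quoted above can be evaluated directly:

```python
import math

def half_log2(x):
    # (1/2) log2(x): the rate unit used throughout the derivations
    return 0.5 * math.log2(x)

# gap constants for channel type 4, relatively small R1
delta_2 = half_log2(21/8 * 7/3 * 9/4 * 1/4)   # R2 gap, ~0.893
delta_3 = half_log2(4/3 * 7/3 * 9/4 * 1/4)    # R3 gap, ~0.404
delta_23 = half_log2(36/15 * 7/3)             # sum-rate gap, ~1.243

assert delta_2 < 0.90
assert delta_3 < 0.41
assert delta_23 < 1.25 < math.sqrt(2)
```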
\section{Inner Bound: Channel Type 5}
\begin{figure}[t]
\begin{center}
\mbox{
\subfigure[Large $R_2$]{\includegraphics[width=0.2\textwidth]{type5_largeR2-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Small $R_2$]{\includegraphics[width=0.2\textwidth]{type5_smallR2-eps-converted-to.pdf}}
}
\caption{The cross-section of the type 5 outer bound region at a relatively small or large $R_2$.}
\label{fig:outerBoundCrossSection5}
\end{center}
\end{figure}
Let us consider the relaxed outer bound region $\mathcal{R}_o'$ given by
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_k \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_k}\right)+\frac{{1}}{{2}}\log\left(\frac{4}{3}\right),\ k=1,2,3\\
&&R_1+R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right)\\
&&R_2+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right)\\
&&R_1+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\right).
\end{eqnarray*}
The cross-sectional region at a given $R_2$ is described by
\begin{eqnarray*}
&& R_1\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-R_2,\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{4}{3}\right)\right\}\\
&& R_3\leq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-R_2,\frac{{1}}{{2}}\log\left(\frac{P}{N_3}\cdot\frac{4}{3}\right)\right\}\\
&& R_1+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right).
\end{eqnarray*}
Depending on which term is the bottleneck in the $\min\{\cdot,\cdot\}$ expressions, there are three cases:
\begin{itemize}
\item $R_2\leq \frac{{1}}{{2}}\log\left(\frac{7}{4}\right)$
\item $\frac{{1}}{{2}}\log\left(\frac{7}{4}\right)\leq R_2\leq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_2}\cdot\frac{7}{4}\right)$
\item $R_2\geq \frac{{1}}{{2}}\log\left(\frac{N_3}{N_2}\cdot\frac{7}{4}\right)$.
\end{itemize}
In this section, we focus on the third case. The other cases can be proved similarly.
If the sum of the right-hand sides of the $R_1$ and $R_3$ bounds is smaller than the right-hand side of the $R_1+R_3$ bound, i.e.,
\begin{eqnarray*}
\frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)+\frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-2R_2 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right),
\end{eqnarray*}
then the $R_1+R_3$ bound is not active at that $R_2$.
By rearranging, the threshold condition is given by
\begin{eqnarray}
R_2 > R_{2,th}=\frac{1}{4}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right).
\end{eqnarray}
Note that $R_{2,th}$ is roughly half of $C_2$.
For this relatively large $R_2$, the cross-sectional region is a rectangle, as described in Fig. \ref{fig:outerBoundCrossSection5} (a). In contrast, for a relatively small $R_2$, when the threshold condition does not hold, the cross-sectional region is a MAC-like region, as described in Fig. \ref{fig:outerBoundCrossSection5} (b). In the following subsections, we present achievable schemes for each case.
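The threshold derivation can be sketched numerically; in the snippet below, $P$, $N_1$, $N_2$ are illustrative assumptions, and the function encodes the inactivity condition stated above:

```python
import math

hl = lambda x: 0.5 * math.log2(x)

P, N1, N2 = 100.0, 1.0, 2.0  # illustrative values (assumed)

# R_{2,th} = (1/4) log((P/N2)*(7/3)), obtained by rearranging the condition
R2_th = 0.25 * math.log2(P/N2 * 7/3)

def r1_plus_r3_inactive(R2):
    # sum of the individual R1 and R3 right-hand sides vs. the R1+R3 bound
    return hl(P/N1 * 7/3) + hl(P/N2 * 7/3) - 2*R2 <= hl(P/N1 * 7/3)

assert not r1_plus_r3_inactive(R2_th - 0.1)  # below threshold: sum bound active
assert r1_plus_r3_inactive(R2_th + 0.1)      # above threshold: sum bound inactive
```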
\subsection{Achievable Scheme for Relatively Large $R_2$}
\begin{theorem}
Given $\alpha=(\alpha_1,\alpha_2,\alpha_2') \in [0,1]^3$, the region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&& R_1 \leq \frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right)\\
&& R_2 \leq \min\left\{\frac{{1}}{{2}}\log^+\left(c_{21}+\frac{(1-\alpha_2-\alpha_2')P}{(\alpha_1+\alpha_2+\alpha_2')P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2' P}{N_2}\right)\right\}+\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_2' P+N_2}\right) \nonumber\\
&& R_3 \leq \frac{{1}}{{2}}\log^+\left(c_3+\frac{P}{\max\{\alpha_1,\alpha_2+\alpha_2'\} P+N_3}\right)
\end{eqnarray*}
where $c_{21}=\frac{1-\alpha_2-\alpha_2'}{2-\alpha_2-\alpha_2'}$ and $c_3=\frac{1}{2-\alpha_2-\alpha_2'}$,
and $\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha}\mathcal{R}_\alpha\right)$ is achievable.
\end{theorem}
We present an achievable scheme for the case of $R_2 > R_{2,th}$. Message $M_2\in\{1,2,\ldots,2^{nR_2}\}$ for receiver 2 is split into two parts: $M_{21}\in\{1,2,\ldots,2^{nR_{21}}\}$ and $M_{22}\in\{1,2,\ldots,2^{nR_{22}}\}$, so $R_2=R_{21}+R_{22}$. We generate the signals in the following way: $\mathbf{x}_{21}$ and $\mathbf{x}_{21}'$ are differently coded signals of $M_{21}$, and $\mathbf{x}_{22}$ is a coded signal of $M_{22}$. The transmit signal is the sum
\[\mathbf{x}_2=\mathbf{x}_{21}+\mathbf{x}_{22}+\mathbf{x}_{21}'.\]
The power allocation satisfies $\mathbb{E}[\|\mathbf{x}_1\|^2]=\alpha_1 nP$ at transmitter 1; $\mathbb{E}[\|\mathbf{x}_{21}\|^2]=\alpha_2' nP$, $\mathbb{E}[\|\mathbf{x}_{22}\|^2]=\alpha_2 nP$, and $\mathbb{E}[\|\mathbf{x}_{21}'\|^2]=(1-\alpha_2-\alpha_2')nP$ at transmitter 2; and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$ at transmitter 3.
The signals $\mathbf{x}_{21}'$ and $\mathbf{x}_3$ are lattice codewords using the same coding lattice but different shaping lattices. As a result, the sum $\mathbf{x}_{21}'+\mathbf{x}_3$ is a lattice codeword.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=\mathbf{x}_{21}'+\mathbf{x}_{22}+\mathbf{x}_{21}+\mathbf{x}_1+\mathbf{z}_1\\
&&\mathbf{y}_2=[\mathbf{x}_{21}'+\mathbf{x}_3]+\mathbf{x}_{22}+\mathbf{x}_{21}+\mathbf{z}_2\\
&&\mathbf{y}_3=\mathbf{x}_3+\mathbf{x}_1+\mathbf{z}_3.
\end{eqnarray*}
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale5} (a). Decoding is performed in the following way.
\begin{itemize}
\item At receiver 1, $\mathbf{x}_{21}'$ is first decoded while treating other signals as noise. Having successfully recovered $M_{21}$, receiver 1 can generate $\mathbf{x}_{21}$ and $\mathbf{x}_{21}'$, and cancel them from $\mathbf{y}_1$. Next, $\mathbf{x}_{22}$ is decoded from $\mathbf{x}_{22}+\mathbf{x}_1+\mathbf{z}_1$. Finally, $\mathbf{x}_1$ is decoded from $\mathbf{x}_1+\mathbf{z}_1$. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&& R_{21}\leq T_{21}' =\frac{{1}}{{2}}\log\left(1+\frac{(1-\alpha_2-\alpha_2')P}{(\alpha_1+\alpha_2+\alpha_2')P+N_1}\right)\\
&& R_{22}\leq T_{22}' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_1 P+N_1}\right)\\
&& R_1\ \leq T_1\ =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right).
\end{eqnarray*}
\item At receiver 2, $[\mathbf{x}_{21}'+\mathbf{x}_3]$ is first decoded while treating the other signals as noise and is then removed from $\mathbf{y}_2$. Next, $\mathbf{x}_{22}$ and $\mathbf{x}_{21}$ are decoded successively. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&&R_{21}\leq T_{21}'' =\frac{{1}}{{2}}\log\left(c_{21}+\frac{(1-\alpha_2-\alpha_2')P}{(\alpha_2+\alpha_2')P+N_2}\right)\\
&&R_3\ \leq T_3'\ =\frac{{1}}{{2}}\log\left(c_3+\frac{P}{(\alpha_2+\alpha_2')P+N_2}\right)\\
&&R_{22}\leq T_{22}'' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_2' P+N_2}\right)\\
&&R_{21}\leq T_{21}''' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2' P}{N_2}\right)
\end{eqnarray*}
where $c_{21}=\frac{(1-\alpha_2-\alpha_2')P}{(1-\alpha_2-\alpha_2')P+P}=\frac{1-\alpha_2-\alpha_2'}{2-\alpha_2-\alpha_2'}$ and $c_3=\frac{P}{(1-\alpha_2-\alpha_2')P+P}=\frac{1}{2-\alpha_2-\alpha_2'}$. Note that $0\leq c_{21}\leq \frac{1}{2}$, $c_{21}+c_3=1$, and $\frac{1}{2}\leq c_3\leq 1$.
\item At receiver 3, $\mathbf{x}_3$ is decoded while treating $\mathbf{x}_1+\mathbf{z}_3$ as noise. Reliable decoding is possible if
\begin{eqnarray}
&&R_3\ \leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_1 P+N_3}\right).
\end{eqnarray}
\end{itemize}
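The properties of $c_{21}$ and $c_3$ noted above can be checked directly; this is a sketch, and the sampled values of $\alpha_2+\alpha_2'$ are arbitrary:

```python
# s stands for alpha_2 + alpha_2' in [0, 1]
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    c21 = (1 - s) / (2 - s)
    c3 = 1 / (2 - s)
    assert abs(c21 + c3 - 1) < 1e-12       # c21 + c3 = 1
    assert 0 <= c21 <= 0.5 <= c3 <= 1      # 0 <= c21 <= 1/2 <= c3 <= 1
```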
Putting these together, we can see that, given $\alpha_1,\alpha_2,\alpha_2'\in[0,1]$, the following rate region is achievable.
\begin{eqnarray*}
&& R_1 \leq T_1\\
&& R_2 \leq T_2 =\min\{T_{21}',T_{21}'',T_{21}'''\}+\min\{T_{22}',T_{22}''\}\\
&& R_3 \leq T_3=\min\{T_3',T_3''\}
\end{eqnarray*}
where
\begin{eqnarray*}
&& T_1 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right)\\
&& T_2 = \min\{T_{21}',T_{21}'',T_{21}'''\}+T_{22}''\\
&&\ \ \ \ = \min\{\min\{T_{21}',T_{21}''\},T_{21}'''\}+T_{22}''\\
&&\ \ \ \ \geq \min\left\{\frac{{1}}{{2}}\log\left(c_{21}+\frac{(1-\alpha_2-\alpha_2')P}{(\alpha_1+\alpha_2+\alpha_2')P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2' P}{N_2}\right)\right\}+\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_2' P+N_2}\right) \nonumber\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(c_3+\frac{P}{\max\{\alpha_1,\alpha_2+\alpha_2'\} P+N_3}\right).
\end{eqnarray*}
\begin{figure}[tp]
\begin{center}
\mbox{
\subfigure[Channel type 5: relatively large $R_2$]{\includegraphics[width=0.45\textwidth]{type5signalScale1-eps-converted-to.pdf}}
}
\mbox{
\subfigure[Channel type 5: relatively small $R_2$]{\includegraphics[width=0.45\textwidth]{type5signalScale2-eps-converted-to.pdf}}
}
\caption{Signal scale diagram.}
\label{fig:signalScale5}
\end{center}
\end{figure}
\subsection{The Gap for Relatively Large $R_2$}
We choose $\alpha_1$ and $\alpha_2$ such that $\alpha_1 P\geq N_2$, that $\alpha_2 P\geq N_3$, that $\alpha_1=\alpha_2'\leq \alpha_2$, and that $\alpha_1+\alpha_2\leq \frac{1}{2}$. It follows that $c_{21}\geq \frac{1}{3}$. We derive lower bounds for each term of the $T_2$ expression above.
\begin{eqnarray}
&&\min\{T_{21}',T_{21}''\}\\
&&\geq \frac{{1}}{{2}}\log\left(c_{21}+\frac{(1-\alpha_1-\alpha_2)P}{(2\alpha_1+\alpha_2)P+N_2}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{1}{3}+\frac{(1-\alpha_1-\alpha_2)P}{3(\alpha_1+\alpha_2)P}\right)\\
&&= \frac{{1}}{{2}}\log\left(\frac{P}{3(\alpha_1+\alpha_2)P}\right).
\end{eqnarray}
The first entry of $\min\{\cdot,\cdot\}$ in
\[T_2 = \min\{\min\{T_{21}',T_{21}''\}+T_{22}'',T_{21}'''+T_{22}''\}\] is lower bounded as follows.
\begin{eqnarray*}
&&\min\{T_{21}',T_{21}''\}+T_{22}''\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{3(\alpha_1+\alpha_2)P}\right)+\frac{{1}}{{2}}\log\left(\frac{(\alpha_1+\alpha_2) P+N_2}{\alpha_1 P+N_2}\right)\\
&&= \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_1 P+N_2}\cdot\frac{(\alpha_1+\alpha_2) P+N_2}{3(\alpha_1+\alpha_2)P}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{3(\alpha_1 P+N_2)}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{P}{6\alpha_1 P}\right).
\end{eqnarray*}
The second entry of $T_2=\min\{\cdot,\cdot\}$ is lower bounded as follows.
\begin{eqnarray*}
&&T_{21}'''+T_{22}''\\
&&=\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_2}\right)+\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{\alpha_1 P+N_2}\right)\\
&&=\frac{{1}}{{2}}\log\left(1+\frac{(\alpha_1+\alpha_2) P}{N_2}\right)\\
&&\geq\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right).
\end{eqnarray*}
Putting these together, we get the lower bound
\begin{eqnarray*}
&& T_2 \geq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{6\alpha_1 P}\right) ,\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)\right\}.
\end{eqnarray*}
Given $\alpha_2$, we choose $\alpha_1$ to satisfy $\frac{{1}}{{2}}\log\left(\frac{P}{6\alpha_1 P}\right)=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)$. As a result, we can write $T_2\geq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)$.
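The balancing choice of $\alpha_1$ solves $\frac{P}{6\alpha_1 P}=\frac{\alpha_2 P}{N_2}$ in closed form, $\alpha_1=\frac{N_2}{6\alpha_2 P}$; a quick check with illustrative (assumed) values:

```python
import math

hl = lambda x: 0.5 * math.log2(x)

P, N2 = 1024.0, 1.0  # illustrative values (assumed)
a2 = 0.25
a1 = N2 / (6 * a2 * P)  # equalizes the two entries of the min

assert abs(hl(P / (6*a1*P)) - hl(a2*P / N2)) < 1e-9
```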
We also have
\begin{eqnarray*}
&& T_3 \geq \frac{{1}}{{2}}\log\left(\frac{P}{(\alpha_1+\alpha_2) P+N_3}\right) \geq \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right).
\end{eqnarray*}
Putting these together, we can see that the following rate region is achievable.
\begin{eqnarray}
&& R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)\\
&& R_2\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right).
\end{eqnarray}
For fixed $\alpha_2$ and $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)$, the two-dimensional rate region, given by
\begin{eqnarray}
&& R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)\\
&& \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{6\alpha_2 P}\cdot\frac{N_2}{N_1}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\right)
\end{eqnarray}
is achievable.
In comparison, the two-dimensional outer bound region at $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)+1$ is given by
\begin{eqnarray*}
&&R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)-1\\
&& \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{N_2}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{1}{4}\right)\\
&&R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)-1\\
&&\ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{1}{4}\right).
\end{eqnarray*}
As discussed above, the sum-rate bound on $R_1+R_3$ is loose for $R_2$ larger than the threshold, so the rate region is a rectangle.
By comparing the inner and outer bound rate regions, we can see that $\delta_1< \frac{{1}}{{2}}\log\left(6\cdot\frac{7}{3}\cdot\frac{1}{4}\right) < 0.91$ and $\delta_3< \frac{{1}}{{2}}\log\left(3\cdot\frac{7}{3}\cdot\frac{1}{4}\right) < 0.41$. Therefore, we can conclude that the gap is to within one bit per message.
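As a numeric sanity check (not part of the original derivation), the two gap constants above evaluate to:

```python
import math

hl = lambda x: 0.5 * math.log2(x)

d1 = hl(6 * 7/3 * 1/4)  # R1 gap: (1/2) log(7/2), ~0.904
d3 = hl(3 * 7/3 * 1/4)  # R3 gap: (1/2) log(7/4), ~0.404

assert d1 < 0.91
assert d3 < 0.41
```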
\subsection{Achievable Scheme for Relatively Small $R_2$}
\begin{theorem}
Given $\alpha=(\alpha_1,\alpha_2) \in [0,1]^2$, the region $\mathcal{R}_\alpha$ is defined by
\begin{eqnarray*}
&& R_1 \leq\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right)\\
&& R_2 \leq \min\left\{\frac{{1}}{{2}}\log^+\left(c_{21}+\frac{(1-\alpha_2)P}{(\alpha_1+\alpha_2) P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{N_2}\right)\right\}\\
&& R_3 \leq \frac{{1}}{{2}}\log^+\left(c_3+\frac{P}{\max\{\alpha_1,\alpha_2\} P+N_3}\right)
\end{eqnarray*}
where $c_{21}=\frac{1-\alpha_2}{2-\alpha_2}$ and $c_3=\frac{1}{2-\alpha_2}$,
and $\mathcal{R}=\textsc{conv}\left(\bigcup_{\alpha}\mathcal{R}_\alpha\right)$ is achievable.
\end{theorem}
For the case of $R_2 < R_{2,th}$, we present the following scheme. At transmitter 2, rate splitting is not necessary. The transmit signal is the sum
\[\mathbf{x}_2=\mathbf{x}_{21}+\mathbf{x}_{21}'\]
where $\mathbf{x}_{21}$ and $\mathbf{x}_{21}'$ are differently coded versions of the same message $M_2\in\{1,2,\ldots,2^{nR_2}\}$.
The power allocation satisfies $\mathbb{E}[\|\mathbf{x}_1\|^2]=\alpha_1 nP$ at transmitter 1; $\mathbb{E}[\|\mathbf{x}_{21}\|^2]=\alpha_2 nP$ and $\mathbb{E}[\|\mathbf{x}_{21}'\|^2]=(1-\alpha_2)nP$ at transmitter 2; and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$ at transmitter 3.
The signals $\mathbf{x}_{21}'$ and $\mathbf{x}_3$ are lattice codewords using the same coding lattice but different shaping lattices. As a result, the sum $\mathbf{x}_{21}'+\mathbf{x}_3$ is a lattice codeword.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=\mathbf{x}_{21}'+\mathbf{x}_{21}+\mathbf{x}_1+\mathbf{z}_1\\
&&\mathbf{y}_2=[\mathbf{x}_{21}'+\mathbf{x}_3]+\mathbf{x}_{21}+\mathbf{z}_2\\
&&\mathbf{y}_3=\mathbf{x}_3+\mathbf{x}_1+\mathbf{z}_3.
\end{eqnarray*}
The signal scale diagram at each receiver is shown in Fig. \ref{fig:signalScale5} (b).
Decoding is performed in the following way.
\begin{itemize}
\item At receiver 1, $\mathbf{x}_{21}'$ is first decoded while treating the other signals as noise. Having successfully recovered $M_2$, receiver 1 can generate $\mathbf{x}_{21}$ and $\mathbf{x}_{21}'$ and cancel them from $\mathbf{y}_1$. Next, $\mathbf{x}_1$ is decoded from $\mathbf{x}_1+\mathbf{z}_1$. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&& R_{21}\leq T_{21}' =\frac{{1}}{{2}}\log\left(1+\frac{(1-\alpha_2)P}{(\alpha_1+\alpha_2) P+N_1}\right)\\
&& R_1\ \leq T_1\ =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right).
\end{eqnarray*}
\item At receiver 2, $[\mathbf{x}_{21}'+\mathbf{x}_3]$ is first decoded while treating the other signals as noise and is then removed from $\mathbf{y}_2$. Next, $\mathbf{x}_{21}$ is decoded from $\mathbf{x}_{21}+\mathbf{z}_2$. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&&R_{21}\leq T_{21}'' =\frac{{1}}{{2}}\log\left(c_{21}+\frac{(1-\alpha_2)P}{\alpha_2 P+N_2}\right)\\
&&R_3\ \leq T_3'\ =\frac{{1}}{{2}}\log\left(c_3+\frac{P}{\alpha_2 P+N_2}\right)\\
&&R_{21}\leq T_{21}''' =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{N_2}\right)
\end{eqnarray*}
where $c_{21}=\frac{(1-\alpha_2)P}{(1-\alpha_2)P+P}=\frac{1-\alpha_2}{2-\alpha_2}$ and $c_3=\frac{P}{(1-\alpha_2)P+P}=\frac{1}{2-\alpha_2}$. Note that $0\leq c_{21}\leq \frac{1}{2}$, $c_{21}+c_3=1$, and $\frac{1}{2}\leq c_3\leq 1$.
\item At receiver 3, $\mathbf{x}_3$ is decoded while treating $\mathbf{x}_1+\mathbf{z}_3$ as noise. Reliable decoding is possible if
\begin{eqnarray}
&&R_3\ \leq T_3'' =\frac{{1}}{{2}}\log\left(1+\frac{P}{\alpha_1 P+N_3}\right).
\end{eqnarray}
\end{itemize}
Putting these together, we get
\begin{eqnarray}
&& R_1 \leq T_1\\
&& R_2 \leq T_2 =\min\{T_{21}',T_{21}'',T_{21}'''\}\\
&& R_3 \leq T_3=\min\{T_3',T_3''\}
\end{eqnarray}
where
\begin{eqnarray*}
&& T_1 =\frac{{1}}{{2}}\log\left(1+\frac{\alpha_1 P}{N_1}\right)\\
&& T_2 = \min\{T_{21}',T_{21}'',T_{21}'''\}\\
&&\ \ \ \ = \min\{\min\{T_{21}',T_{21}''\}, T_{21}'''\}\\
&&\ \ \ \ \geq \min\left\{\frac{{1}}{{2}}\log\left(c_{21}+\frac{(1-\alpha_2)P}{(\alpha_1+\alpha_2) P+N_2}\right),\right.\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\frac{{1}}{{2}}\log\left(1+\frac{\alpha_2 P}{N_2}\right)\right\}\\
&& T_3 \geq \frac{{1}}{{2}}\log\left(c_3+\frac{P}{\max\{\alpha_1,\alpha_2\} P+N_3}\right).
\end{eqnarray*}
\subsection{The Gap for Relatively Small $R_2$}
We choose $\alpha_1$ and $\alpha_2$ such that $\alpha_1 P\geq N_2$, that $\alpha_2 P\geq N_3$, that $\alpha_1+\alpha_2\leq \frac{1}{2}$, and that $\alpha_1\geq \alpha_2$. It follows that $c_{21}\geq \frac{1}{3}$. We get the lower bound
\begin{eqnarray}
&&\min\{T_{21}',T_{21}''\}\\
&&= \frac{{1}}{{2}}\log\left(c_{21}+\frac{(1-\alpha_2)P}{(\alpha_1+\alpha_2)P+N_2}\right)\\
&&\geq \frac{{1}}{{2}}\log\left(\frac{1}{3}+\frac{(1-\alpha_1)P}{3\alpha_1 P}\right)\\
&&= \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_1 P}\right)
\end{eqnarray}
and
\begin{eqnarray*}
&& T_2 \geq \min\left\{\frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_1 P}\right), \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)\right\}.
\end{eqnarray*}
Let us define $\alpha_2'$ by the equality $\frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2' P}\right)=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)$. If we choose $\alpha_1\leq \alpha_2'$, then $T_2 \geq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)$.
We can see that the following rate region is achievable.
\begin{eqnarray}
&& R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)\\
&& R_2\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{2\alpha_1 P}\right).
\end{eqnarray}
For fixed $\alpha_1\in [\alpha_2,\alpha_2']$ and $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)$, the two-dimensional rate region $\mathcal{R}_\alpha$, given by
\begin{eqnarray}
&& R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_1 P}{N_1}\right)\\
&& R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{2\alpha_1 P}\right)
\end{eqnarray}
is achievable. The union $\bigcup_{\alpha_1\in [\alpha_2,\alpha_2']}\mathcal{R}_\alpha$ is a MAC-like region, given by
\begin{eqnarray}
&& \ \ \ \ \ \ \ R_1\leq \frac{{1}}{{2}}\log\left(\frac{\alpha_2' P}{N_1}\right)\\
&& \ \ \ \ \ \ \ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{3\alpha_2 P}\cdot\frac{N_2}{N_1}\right)\\
&& \ \ \ \ \ \ \ R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{2\alpha_2 P}\right)\\
&& R_1+R_3\leq \frac{{1}}{{2}}\log\left(\frac{P}{2N_1}\right).
\end{eqnarray}
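The corner points of the regions $\mathcal{R}_\alpha$ trace a line of constant sum rate, $R_1+R_3=\frac{1}{2}\log\frac{P}{2N_1}$, independent of $\alpha_1$; a quick check with illustrative (assumed) values:

```python
import math

hl = lambda x: 0.5 * math.log2(x)

P, N1 = 64.0, 1.0  # illustrative values (assumed)

for a1 in (0.05, 0.1, 0.2, 0.4):
    R1 = hl(a1 * P / N1)          # corner R1 of the rectangle for this alpha_1
    R3 = hl(P / (2 * a1 * P))     # corner R3 of the rectangle for this alpha_1
    assert abs((R1 + R3) - hl(P / (2 * N1))) < 1e-9
```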
In comparison, the two-dimensional outer bound region at $R_2=\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)+1$ is given by
\begin{eqnarray*}
&&\ \ \ \ \ \ \ R_1 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)-1\\
&&\ \ \ \ \ \ \ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\cdot\frac{N_2}{N_1}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{1}{4}\right)\\
&&\ \ \ \ \ \ \ R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_2}\cdot\frac{7}{3}\right)-\frac{{1}}{{2}}\log\left(\frac{\alpha_2 P}{N_2}\right)-1\\
&&\ \ \ \ \ \ \ \ \ \ \ = \frac{{1}}{{2}}\log\left(\frac{P}{\alpha_2 P}\right)+\frac{{1}}{{2}}\log\left(\frac{7}{3}\cdot\frac{1}{4}\right)\\
&&R_1+R_3 \leq \frac{{1}}{{2}}\log\left(\frac{P}{N_1}\cdot\frac{7}{3}\right).
\end{eqnarray*}
Since $\delta_1 < \frac{{1}}{{2}}\log\left(3\cdot\frac{7}{3}\cdot\frac{1}{4}\right) < 0.41$, $\delta_3 < \frac{{1}}{{2}}\log\left(2\cdot\frac{7}{3}\cdot\frac{1}{4}\right) < 0.12$ and $\delta_{13} < \frac{{1}}{{2}}\log\left(2\cdot\frac{7}{3}\right) < 1.12 < \sqrt{2}$, we can conclude that the gap is to within one bit per message.
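As before, the quoted gap constants can be verified numerically (a sanity check, not part of the original derivation):

```python
import math

hl = lambda x: 0.5 * math.log2(x)

d1 = hl(3 * 7/3 * 1/4)   # R1 gap: (1/2) log(7/4), ~0.404
d3 = hl(2 * 7/3 * 1/4)   # R3 gap: (1/2) log(7/6), ~0.111
d13 = hl(2 * 7/3)        # sum-rate gap: (1/2) log(14/3), ~1.112

assert d1 < 0.41
assert d3 < 0.12
assert d13 < 1.12 < math.sqrt(2)
```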
\section{Conclusion}
We presented the approximate capacity regions of five important cases of partially connected interference channels. We derived outer bounds based on a $Z$-channel type argument, and developed achievable schemes that are shown to approximately achieve the capacity region to within a constant number of bits.
For future work, channels with fully general coefficients may be considered. In this paper, we presented a different scheme for each channel type, although the schemes share some common principles. A universal scheme remains to be developed for a unified capacity characterization of all possible topologies. The connection between interference channels and index coding problems also remains largely unexplored. In particular, the results on the capacity region for index coding in \cite{ArbabjolfaeiBandemerKimSasogluWang13} seem to have an interesting connection to our work.
\appendices
\section{Random Coding Achievability: Channel Type 4}
At transmitter 1, message $M_1$ is split into three parts $(M_{12},M_{11},M_{10})$, and the transmit signal is $\mathbf{x}_1=\mathbf{x}_{12}+\mathbf{x}_{11}+\mathbf{x}_{10}$. The signals satisfy $\mathbb{E}[\|\mathbf{x}_{12}\|^2]=n(P-N_2-N_3)$, $\mathbb{E}[\|\mathbf{x}_{11}\|^2]=nN_3$, and $\mathbb{E}[\|\mathbf{x}_{10}\|^2]=nN_2$.
At transmitter 2, message $M_2$ is split into two parts $(M_{21},M_{20})$, and the transmit signal is $\mathbf{x}_2=\mathbf{x}_{21}+\mathbf{x}_{20}$. The signals satisfy $\mathbb{E}[\|\mathbf{x}_{21}\|^2]=n(P-N_3)$ and $\mathbb{E}[\|\mathbf{x}_{20}\|^2]=nN_3$. Rate-splitting is not performed at transmitter 3, and $\mathbb{E}[\|\mathbf{x}_3\|^2]=nP$.
The top layer codewords $(\mathbf{x}_{12},\mathbf{x}_{21},\mathbf{x}_{3})$ are from a joint random codebook for $(M_{12},M_{21},M_3)$. The mid-layer codewords $(\mathbf{x}_{11},\mathbf{x}_{20})$ are from a joint random codebook for $(M_{11},M_{20})$. The bottom layer codeword $\mathbf{x}_{10}$ is from a single-user random codebook for $M_{10}$.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=(\mathbf{x}_{12}+\mathbf{x}_3)+\mathbf{x}_{11}+\mathbf{x}_{10}+\mathbf{z}_1\\
&&\mathbf{y}_2=(\mathbf{x}_{12}+\mathbf{x}_{21})+(\mathbf{x}_{11}+\mathbf{x}_{20})+\mathbf{x}_{10}+\mathbf{z}_2\\
&&\mathbf{y}_3=(\mathbf{x}_{21}+\mathbf{x}_3)+\mathbf{x}_{20}+\mathbf{z}_3.
\end{eqnarray*}
Decoding is performed from the top layer to the bottom layer. At receiver 1, simultaneous decoding of $(\mathbf{x}_{12},\mathbf{x}_{3})$ is performed while treating the other signals as noise; then $\mathbf{x}_{11}$ and $\mathbf{x}_{10}$ are decoded successively. At receiver 2, simultaneous decoding of $(\mathbf{x}_{12},\mathbf{x}_{21})$ is performed while treating the other signals as noise; then simultaneous decoding of $(\mathbf{x}_{11},\mathbf{x}_{20})$ is performed. At receiver 3, simultaneous decoding of $(\mathbf{x}_{21},\mathbf{x}_3)$ is performed while treating the other signals as noise. For reliable decoding, the code rates should satisfy
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{12}\leq I_1=\frac{{1}}{{2}}\log\left(1+\frac{P-N_2-N_3}{N_1+N_2+N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{3}\ \leq I_2=\frac{{1}}{{2}}\log\left(1+\frac{P}{N_1+N_2+N_3}\right)\\
&& R_{12}+R_3\ \leq I_3=\frac{{1}}{{2}}\log\left(1+\frac{2P-N_2-N_3}{N_1+N_2+N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{11}\leq I_4=\frac{{1}}{{2}}\log\left(1+\frac{N_3}{N_1+N_2}\right)\\
&& \ \ \ \ \ \ \ \ R_{10}\leq I_5=\frac{{1}}{{2}}\log\left(1+\frac{N_2}{N_1}\right)
\end{eqnarray*}
at receiver 1,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{12}\leq I_6=\frac{{1}}{{2}}\log\left(1+\frac{P-N_2-N_3}{2N_2+2N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{21}\leq I_7=\frac{{1}}{{2}}\log\left(1+\frac{P-N_3}{2N_2+2N_3}\right)\\
&& R_{12}+R_{21}\leq I_8=\frac{{1}}{{2}}\log\left(1+\frac{2P-N_2-2N_3}{2N_2+2N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{11}\leq I_9=\frac{{1}}{{2}}\log\left(1+\frac{N_3}{2N_2}\right)\\
&& \ \ \ \ \ \ \ \ R_{20}\leq I_{10}=\frac{{1}}{{2}}\log\left(1+\frac{N_3}{2N_2}\right)\\
&& R_{11}+R_{20}\leq I_{11}=\frac{{1}}{{2}}\log\left(1+\frac{2N_3}{2N_2}\right)
\end{eqnarray*}
at receiver 2,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{21}\leq I_{12}=\frac{{1}}{{2}}\log\left(1+\frac{P-N_3}{2N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{3}\ \leq I_{13}=\frac{{1}}{{2}}\log\left(1+\frac{P}{2N_3}\right)\\
&& R_{21}+R_{3}\ \leq I_{14}=\frac{{1}}{{2}}\log\left(1+\frac{2P-N_3}{2N_3}\right)
\end{eqnarray*}
at receiver 3. Putting these together,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{12}\leq T_1=\min\{I_1,I_6\}=I_6\\
&& \ \ \ \ \ \ \ \ R_{21}\leq T_2=\min\{I_7,I_{12}\}=I_7\\
&& \ \ \ \ \ \ \ \ R_{3}\ \leq T_3=\min\{I_2,I_{13}\}\\
&& R_{12}+R_{21}\leq T_4=I_8\\
&& R_{12}+R_{3}\ \leq T_5=I_3\\
&& R_{21}+R_{3}\ \leq T_6=I_{14}
\end{eqnarray*}
at the top layer,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{11}\leq T_7=\min\{I_4,I_9\}=I_9\\
&& \ \ \ \ \ \ \ \ R_{20}\leq T_8=I_{10}\\
&& R_{11}+R_{20}\leq T_9=I_{11}
\end{eqnarray*}
at the mid-layer,
\begin{eqnarray*}
R_{10}\leq T_{10}=I_5
\end{eqnarray*}
at the bottom layer. Note that the rate variables are not coupled between layers.
We get the achievable rate region
\begin{eqnarray*}
&& \ \ \ \ \ \ \ R_{1}=R_{12}+R_{11}+R_{10}\leq T_1+T_7+T_{10}\\
&& \ \ \ \ \ \ \ R_{2}=R_{21}+R_{20}\leq T_2+T_8\\
&& \ \ \ \ \ \ \ R_{3}\leq T_3\\
&& R_{1}+R_{2}\leq T_4+T_9+T_{10}\\
&& R_{1}+R_{3}\leq T_5+T_7+T_{10}\\
&& R_{2}+R_{3}\leq T_6+T_8.
\end{eqnarray*}
This region includes the following region.
\begin{eqnarray*}
&& \ \ \ \ \ \ \ R_{1}\leq \frac{{1}}{{2}}\log\left(2+\frac{P}{N_1}\right)-1\\
&& \ \ \ \ \ \ \ R_{2}\leq \frac{{1}}{{2}}\log\left(3+\frac{P}{N_2}\right)-1\\
&& \ \ \ \ \ \ \ R_{3}\leq \frac{{1}}{{2}}\log\left(3+\frac{P}{N_3}\right)-\frac{{1}}{{2}}\log(3)\\
&& R_{1}+R_{2}\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-\frac{1}{2}\\
&& R_{1}+R_{3}\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-1\\
&& R_{2}+R_{3}\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_2}\right)-1.
\end{eqnarray*}
Therefore, we can characterize the capacity region to within one bit.
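The claimed inclusion can be checked numerically. The script below (a sanity check at sampled points, not a proof; the $(P,N_1,N_2,N_3)$ triples are illustrative assumptions with $N_1\leq N_2\leq N_3$) encodes the decoding constraints $I_1,\ldots,I_{14}$ and verifies the six bounds of the simplified region:

```python
import math

hl = lambda x: 0.5 * math.log2(x)

def check_inclusion(P, N1, N2, N3):
    # per-receiver decoding constraints for channel type 4
    I1 = hl(1 + (P-N2-N3)/(N1+N2+N3));  I2 = hl(1 + P/(N1+N2+N3))
    I3 = hl(1 + (2*P-N2-N3)/(N1+N2+N3))
    I4 = hl(1 + N3/(N1+N2));            I5 = hl(1 + N2/N1)
    I6 = hl(1 + (P-N2-N3)/(2*N2+2*N3)); I7 = hl(1 + (P-N3)/(2*N2+2*N3))
    I8 = hl(1 + (2*P-N2-2*N3)/(2*N2+2*N3))
    I9 = hl(1 + N3/(2*N2));             I10 = hl(1 + N3/(2*N2))
    I11 = hl(1 + 2*N3/(2*N2))
    I12 = hl(1 + (P-N3)/(2*N3));        I13 = hl(1 + P/(2*N3))
    I14 = hl(1 + (2*P-N3)/(2*N3))
    T1, T2, T3 = min(I1, I6), min(I7, I12), min(I2, I13)
    T4, T5, T6 = I8, I3, I14
    T7, T8, T9, T10 = min(I4, I9), I10, I11, I5
    eps = 1e-9  # guard against floating-point ties
    assert T1+T7+T10 >= hl(2 + P/N1) - 1 - eps
    assert T2+T8     >= hl(3 + P/N2) - 1 - eps
    assert T3        >= hl(3 + P/N3) - hl(3) - eps
    assert T4+T9+T10 >= hl(1 + 2*P/N1) - 0.5 - eps
    assert T5+T7+T10 >= hl(1 + 2*P/N1) - 1 - eps
    assert T6+T8     >= hl(1 + 2*P/N2) - 1 - eps

# illustrative values (assumed) with N1 <= N2 <= N3 < P
for params in [(100, 1, 2, 4), (1000, 1, 1, 1), (50, 0.5, 3, 10)]:
    check_inclusion(*params)
```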
\section{Random Coding Achievability: Channel Type 5}
Transmit signal construction is the same as the one for channel type 4.
The received signals are
\begin{eqnarray*}
&&\mathbf{y}_1=(\mathbf{x}_{12}+\mathbf{x}_{21})+(\mathbf{x}_{11}+\mathbf{x}_{20})+\mathbf{x}_{10}+\mathbf{z}_1\\
&&\mathbf{y}_2=(\mathbf{x}_{21}+\mathbf{x}_{3})+\mathbf{x}_{20}+\mathbf{z}_2\\
&&\mathbf{y}_3=(\mathbf{x}_{12}+\mathbf{x}_3)+\mathbf{x}_{11}+\mathbf{x}_{10}+\mathbf{z}_3.
\end{eqnarray*}
Decoding is performed from the top layer to the bottom layer. At receiver 1, simultaneous decoding of $(\mathbf{x}_{12},\mathbf{x}_{21})$ is performed while treating the other signals as noise; then simultaneous decoding of $(\mathbf{x}_{11},\mathbf{x}_{20})$ is performed; lastly, $\mathbf{x}_{10}$ is decoded. At receiver 2, simultaneous decoding of $(\mathbf{x}_{21},\mathbf{x}_{3})$ is performed while treating the other signals as noise; then $\mathbf{x}_{20}$ is decoded. At receiver 3, simultaneous decoding of $(\mathbf{x}_{12},\mathbf{x}_3)$ is performed while treating the other signals as noise; then $\mathbf{x}_{11}$ and $\mathbf{x}_{10}$ are decoded successively. For reliable decoding, code rates should satisfy
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{12}\leq I_1=\frac{{1}}{{2}}\log\left(1+\frac{P-N_2-N_3}{N_1+N_2+2N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{21}\leq I_2=\frac{{1}}{{2}}\log\left(1+\frac{P-N_3}{N_1+N_2+2N_3}\right)\\
&& R_{12}+R_{21}\leq I_3=\frac{{1}}{{2}}\log\left(1+\frac{2P-N_2-2N_3}{N_1+N_2+2N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{11}\leq I_4=\frac{{1}}{{2}}\log\left(1+\frac{N_3}{N_1+N_2}\right)\\
&& \ \ \ \ \ \ \ \ R_{20}\leq I_5=\frac{{1}}{{2}}\log\left(1+\frac{N_3}{N_1+N_2}\right)\\
&& R_{11}+R_{20}\leq I_6=\frac{{1}}{{2}}\log\left(1+\frac{2N_3}{N_1+N_2}\right)\\
&& \ \ \ \ \ \ \ \ R_{10}\leq I_7=\frac{{1}}{{2}}\log\left(1+\frac{N_2}{N_1}\right)
\end{eqnarray*}
at receiver 1,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{21}\leq I_8=\frac{{1}}{{2}}\log\left(1+\frac{P-N_3}{N_2+N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{3}\ \leq I_9=\frac{{1}}{{2}}\log\left(1+\frac{P}{N_2+N_3}\right)\\
&& R_{21}+R_{3}\ \leq I_{10}=\frac{{1}}{{2}}\log\left(1+\frac{2P-N_3}{N_2+N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{20}\leq I_{11}=\frac{{1}}{{2}}\log\left(1+\frac{N_3}{N_2}\right)
\end{eqnarray*}
at receiver 2,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{12}\leq I_{12}=\frac{{1}}{{2}}\log\left(1+\frac{P-N_2-N_3}{N_2+2N_3}\right)\\
&& \ \ \ \ \ \ \ \ R_{3}\ \leq I_{13}=\frac{{1}}{{2}}\log\left(1+\frac{P}{N_2+2N_3}\right)\\
&& R_{12}+R_{3}\ \leq I_{14}=\frac{{1}}{{2}}\log\left(1+\frac{2P-N_2-N_3}{N_2+2N_3}\right)
\end{eqnarray*}
at receiver 3. Putting these together, we obtain
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{12}\leq T_1=\min\{I_1,I_{12}\}=I_1\\
&& \ \ \ \ \ \ \ \ R_{21}\leq T_2=\min\{I_2,I_8\}=I_2\\
&& \ \ \ \ \ \ \ \ R_{3}\ \leq T_3=\min\{I_9,I_{13}\}=I_{13}\\
&& R_{12}+R_{21}\leq T_4=I_3\\
&& R_{12}+R_{3}\ \leq T_5=I_{14}\\
&& R_{21}+R_{3}\ \leq T_6=I_{10}
\end{eqnarray*}
at the top layer,
\begin{eqnarray*}
&& \ \ \ \ \ \ \ \ R_{11}\leq T_7=I_4\\
&& \ \ \ \ \ \ \ \ R_{20}\leq T_8=\min\{I_5,I_{11}\}=I_5\\
&& R_{11}+R_{20}\leq T_9=I_6
\end{eqnarray*}
at the mid-layer,
\begin{eqnarray*}
R_{10}\leq T_{10}=I_7
\end{eqnarray*}
at the bottom layer. Note that the rate variables are not coupled between layers.
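The way the min's resolve in $T_1$, $T_2$, $T_3$ and $T_8$ follows from the ordering $N_1\leq N_2\leq N_3$: in each pair the same numerator is divided by the larger effective noise at the more constrained receiver. A small numerical sketch (the sample values are arbitrary, assumed only to respect that ordering):

```python
import math

def C(x):
    """(1/2) log2(1 + x): Gaussian rate at an SINR of x."""
    return 0.5 * math.log2(1.0 + x)

# Arbitrary sample point respecting the ordering N1 <= N2 <= N3.
P, N1, N2, N3 = 100.0, 1.0, 2.0, 4.0

I1  = C((P - N2 - N3) / (N1 + N2 + 2*N3))   # R12 at receiver 1
I2  = C((P - N3)      / (N1 + N2 + 2*N3))   # R21 at receiver 1
I5  = C(N3 / (N1 + N2))                     # R20 at receiver 1
I8  = C((P - N3) / (N2 + N3))               # R21 at receiver 2
I9  = C(P / (N2 + N3))                      # R3  at receiver 2
I11 = C(N3 / N2)                            # R20 at receiver 2
I12 = C((P - N2 - N3) / (N2 + 2*N3))        # R12 at receiver 3
I13 = C(P / (N2 + 2*N3))                    # R3  at receiver 3

# Each min picks the constraint with the larger effective noise term.
assert min(I1, I12) == I1    # T1
assert min(I2, I8)  == I2    # T2
assert min(I9, I13) == I13   # T3
assert min(I5, I11) == I5    # T8
```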
We get the achievable rate region
\begin{eqnarray*}
&& \ \ \ \ \ \ \ R_{1}=R_{12}+R_{11}+R_{10}\leq T_1+T_7+T_{10}\\
&& \ \ \ \ \ \ \ R_{2}=R_{21}+R_{20}\leq T_2+T_8\\
&& \ \ \ \ \ \ \ R_{3}\leq T_3\\
&& R_{1}+R_{2}\leq T_4+T_9+T_{10}\\
&& R_{1}+R_{3}\leq T_5+T_7+T_{10}\\
&& R_{2}+R_{3}\leq T_6+T_8.
\end{eqnarray*}
This region contains the following simpler region.
\begin{eqnarray*}
&& \ \ \ \ \ \ \ R_{1}\leq \frac{{1}}{{2}}\log\left(2+\frac{P}{N_1}\right)-\frac{1}{2}\\
&& \ \ \ \ \ \ \ R_{2}\leq \frac{{1}}{{2}}\log\left(2+\frac{P}{N_2}\right)-1\\
&& \ \ \ \ \ \ \ R_{3}\leq \frac{{1}}{{2}}\log\left(3+\frac{P}{N_3}\right)-\frac{{1}}{{2}}\log(3)\\
&& R_{1}+R_{2}\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)\\
&& R_{1}+R_{3}\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_1}\right)-\frac{1}{2}\\
&& R_{2}+R_{3}\leq \frac{{1}}{{2}}\log\left(1+\frac{2P}{N_2}\right)-\frac{1}{2}.
\end{eqnarray*}
Therefore, we can characterize the capacity region to within one bit.
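The claimed inclusion can be spot-checked numerically. The sketch below evaluates the layer bounds $T_k$ at one sample point with $N_1\leq N_2\leq N_3$ and verifies that every bound of the simpler region is dominated by the corresponding achievable bound; for $R_1+R_2$ the two bounds in fact coincide, since $T_4+T_9+T_{10}$ telescopes to $\frac{1}{2}\log\left(1+2P/N_1\right)$.

```python
import math

def C(x):
    return 0.5 * math.log2(1.0 + x)  # (1/2) log2(1 + x)

P, N1, N2, N3 = 100.0, 1.0, 2.0, 4.0  # arbitrary, with N1 <= N2 <= N3

# Layer bounds T_k (the min's already resolved as in the text).
T1  = C((P - N2 - N3) / (N1 + N2 + 2*N3))      # = I1
T2  = C((P - N3) / (N1 + N2 + 2*N3))           # = I2
T3  = C(P / (N2 + 2*N3))                       # = I13
T4  = C((2*P - N2 - 2*N3) / (N1 + N2 + 2*N3))  # = I3
T5  = C((2*P - N2 - N3) / (N2 + 2*N3))         # = I14
T6  = C((2*P - N3) / (N2 + N3))                # = I10
T7  = C(N3 / (N1 + N2))                        # = I4
T8  = C(N3 / (N1 + N2))                        # = I5
T9  = C(2*N3 / (N1 + N2))                      # = I6
T10 = C(N2 / N1)                               # = I7

achievable = [T1 + T7 + T10,   # R1
              T2 + T8,         # R2
              T3,              # R3
              T4 + T9 + T10,   # R1 + R2
              T5 + T7 + T10,   # R1 + R3
              T6 + T8]         # R2 + R3

inner = [0.5*math.log2(2 + P/N1) - 0.5,
         0.5*math.log2(2 + P/N2) - 1.0,
         0.5*math.log2(3 + P/N3) - 0.5*math.log2(3.0),
         0.5*math.log2(1 + 2*P/N1),
         0.5*math.log2(1 + 2*P/N1) - 0.5,
         0.5*math.log2(1 + 2*P/N2) - 0.5]

# Every corner bound of the simpler region is dominated (up to rounding).
assert all(a >= b - 1e-9 for a, b in zip(achievable, inner))
```

The `R1 + R2` entry is where the comparison is tight: the three per-layer SINR terms multiply out exactly to $1+2P/N_1$.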
\section{Confinement and dynamical chiral symmetry breaking}
Understanding the spectrum of hadrons with masses less than 2\,GeV and their interactions is an essential step toward revealing the essence of light-quark confinement and dynamical chiral symmetry breaking (DCSB), and describing hadron structure in terms of QCD's elementary degrees of freedom. These are basic questions, which define a frontier of contemporary hadron physics.
In connection with confinement it is important to appreciate that the static potential measured in numerical simulations of quenched lattice-regularised QCD is not related in any known way to the question of light-quark confinement. It is a basic feature of QCD that light-quark creation and annihilation effects are essentially nonperturbative. Therefore it is impossible in principle to compute a potential between two light quarks.
It is known\cite{Bali:2005fu} that in the presence of two dynamical flavours of quark, each with a current-quark mass $\sim m_s$; i.e., typical of the $s$-quark, string breaking is a nonlocal and instantaneous process, which occurs when the static quark and antiquark are separated by $\approx 1.25\,$fm. There is therefore a critical energy connected with the string; viz., $E_c \approx 1.25\,$GeV.
It is noteworthy and instructive that $E_c \simeq M_S+M_{\bar S}$, where $M_S$ and $M_{\bar S}$ are, respectively, constituent-quark masses associated with the lightest quark and antiquark in the system; namely, the $s$-quark in this instance. Our observation suggests an intuitive understanding of string breaking; namely, the flux tube collapses instantly and entirely when the energy it contains exceeds that required to produce the lightest constituent quark-antiquark pair, and the distorted and distressed upsilon-like state switches instantly to a pair of localised heavy-light mesons.
Our estimate of $M_S=M_{\bar S}$ is based on extensive experience with QCD's Dyson-Schwinger equations (DSEs).\cite{Roberts:1994dr,Maris:2003vk,Roberts:2007jh,Roberts:2007ji} Typically, $m_s \simeq 25\,m_u$ and $M_S \simeq M_U+0.15\,$GeV$\simeq 0.55\,$GeV. The phenomenon underlying this magnification of the current-quark mass is DCSB, which can be understood via the renormalised gap equation:
\begin{eqnarray}
\nonumber
\lefteqn{S(p)^{-1} = Z_2 \,(i\gamma\cdot p + m^{\rm bm})} \\
&+& Z_1 \int^\Lambda_q\! g^2 D_{\mu\nu}(p-q) \frac{\lambda^a}{2}\gamma_\mu S(q) \Gamma^a_\nu(q,p) , \label{gendse}
\end{eqnarray}
where $\int^\Lambda_q$ indicates a Poincar\'e invariant regularisation of the integral, with $\Lambda$ the regularisation mass-scale, $D_{\mu\nu}$ is the renormalised dressed-gluon propagator, $\Gamma_\nu$ is the renormalised dressed-quark-gluon vertex, and $m^{\rm bm}$ is the quark's $\Lambda$-dependent bare current-mass. The vertex and quark wave-function renormalisation constants, $Z_{1,2}(\zeta^2,\Lambda^2)$, depend on the gauge parameter.
\begin{center}
\includegraphics[clip,width=0.4\textwidth]{Mp2Jlab.eps}
\figcaption{\label{gluoncloud} Dressed-quark mass function, $M(p)$: solid curves -- DSE results,\protect\cite{Bhagwat:2003vw,Bhagwat:2006tu} ``data'' -- numerical simulations of unquenched lattice-QCD.\protect\cite{Bowman:2005vx} In this figure one observes the current-quark of perturbative QCD evolving into a constituent-quark as its momentum becomes smaller. The constituent-quark mass arises from a cloud of low-momentum gluons attaching themselves to the current-quark. This is dynamical chiral symmetry breaking: an essentially nonperturbative effect that generates a quark mass \emph{from nothing}; namely, it occurs even in the chiral limit.}
\end{center}
The solution to Eq.\,(\ref{gendse}) has the form
\begin{eqnarray}
S(p) & =&
\frac{Z(p^2,\zeta^2)}{i\gamma\cdot p + M(p^2)}\,
\label{Sgeneral}
\end{eqnarray}
and it is important that the mass function, $M(p^2)=B(p^2,\zeta^2)/A(p^2,\zeta^2)$ is independent of the renormalisation point, $\zeta$. The form this function takes in QCD is depicted in Fig.\,\ref{gluoncloud}.
The behaviour of the dressed-quark mass function is one of the most remarkable features of the Standard Model. In perturbation theory it is impossible in the chiral limit to obtain $M(p^2)\neq 0$: the generation of mass \emph{from nothing} is an essentially nonperturbative phenomenon. On the other hand, it is a longstanding prediction of nonperturbative DSE studies that DCSB will occur so long as the integrated infrared strength possessed by the gap equation's kernel exceeds some critical value.\cite{Roberts:1994dr} There are strong indications that this condition is satisfied in QCD.\cite{Bhagwat:2003vw,Bhagwat:2006tu,Bowman:2005vx}
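The critical-coupling behaviour can be illustrated with a drastically simplified, schematic gap equation. Replacing the kernel of Eq.\,(\ref{gendse}) by a contact interaction with a hard cutoff (an NJL-like caricature, not the QCD interaction; units are set by the cutoff $\Lambda=1$, and the coupling $c$ below is a stand-in for the integrated infrared strength) reduces the gap equation to a one-dimensional fixed-point problem:

```python
import math

def gap_mass(c, m=0.0, cutoff=1.0, M0=0.5, iters=400):
    """Iterate the schematic contact-interaction gap equation
         M = m + c * M * [L^2 - M^2 * ln(1 + L^2 / M^2)]
    (a caricature of the quark DSE) to its fixed point."""
    L2 = cutoff * cutoff
    M = M0
    for _ in range(iters):
        M = m + c * M * (L2 - M*M * math.log(1.0 + L2 / (M*M)))
    return M

# Chiral limit (m = 0): the trivial solution M = 0 always exists, but for
# coupling above the critical value (c * L^2 > 1) it becomes unstable and a
# nonzero dressed mass is generated from nothing -- DCSB.
M_strong = gap_mass(c=2.0)   # supercritical: M != 0 in the chiral limit
M_weak   = gap_mass(c=0.5)   # subcritical: no mass is generated

assert M_strong > 0.5 and M_weak < 1e-12
# Fixed-point consistency: 1 = c * (L^2 - M^2 ln(1 + L^2/M^2)) at the solution.
assert abs(2.0 * (1.0 - M_strong**2 * math.log(1 + 1/M_strong**2)) - 1.0) < 1e-9
```

With these (hypothetical) numbers the supercritical solution settles near $M\approx 0.63\,\Lambda$; the point of the sketch is only the existence of a critical strength separating the $M=0$ and $M\neq 0$ phases, not any quantitative statement about QCD.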
It follows that the quark-parton of QCD acquires a momentum-dependent mass, which at infrared momenta is roughly $100$-times larger than the light-quark current-mass. This effect owes primarily to a dense cloud of gluons that clothes a low-momentum quark. It means that the Higgs mechanism is largely irrelevant to the bulk of normal matter in the universe. Instead, the single most important mass generating mechanism for light-quark hadrons is the strong interaction effect of DCSB; e.g., one may identify it as being responsible for 98\% of a proton's mass.
Confinement can be connected with the analytic properties of QCD's Schwinger functions.\cite{Krein:1990sf,Roberts:1994dr,Roberts:2007ji,Roberts:2007jh} Indeed, the presence of an inflexion point in the DSE prediction for the dressed-quark mass function, which lattice simulations may be argued to confirm, signals confinement of the dressed-quark.\cite{Roberts:2007jh} Kindred behaviour is observed in the gluon and ghost self-energies.\cite{Bowman:2007du,Cucchieri:2008fc}
From this standpoint the question of light-quark confinement can be translated into the challenge of charting the infrared behavior of QCD's \emph{universal} $\beta$-function. (Although this function may depend on the scheme chosen to renormalise the theory, it is unique within a given scheme.)
This is a well-posed problem whose solution is an elemental goal of modern hadron physics and which can be addressed in any framework enabling the nonperturbative evaluation of renormalisation constants.
Through the gap and Bethe-Salpeter equations (BSEs) the pointwise behaviour of the $\beta$-function determines the nature of chiral symmetry breaking; e.g., the evolution in Fig.\,\ref{gluoncloud}. Moreover, the fact that DSEs connect the $\beta$-function to experimental observables entails that comparison between computations and observations of hadron properties can be used to chart the $\beta$-function's long-range behaviour.
\section{DSE truncations:\\ preserving symmetry}
In order to realise this goal a nonperturbative symmetry-preserving DSE truncation is necessary. Steady quantitative progress continues with a scheme that is systematically improvable.\cite{Munczek:1994zz,Bender:1996bb} Indeed, its mere existence has enabled the proof of exact nonperturbative results in QCD. Amongst them are veracious statements about the $\eta$-$\eta^\prime$ complex and $\pi^0$-$\eta$-$\eta^\prime$ mixing, with predictions of $\theta_{\eta \eta^\prime} = -15^\circ$ and $\theta_{\pi^0 \eta} = 1^\circ$.\cite{Bhagwat:2007ha} Only studies that are demonstrably consistent with the results proved therein can be considered seriously.
It is also true that significant qualitative advances can be made with symmetry-preserving kernel \emph{Ans\"atze} that express important additional nonperturbative effects, which are difficult to capture in any finite sum of contributions.\cite{Chang:2009zb} In order to elucidate we consider the example of pseudoscalar and axial-vector mesons, which appear as poles in the inhomogeneous BSE for the axial-vector vertex, $\Gamma_{5\mu}^{fg}$. An exact form of that equation is ($q_\pm = q\pm P/2$, etc.)
\begin{eqnarray}
\nonumber
\lefteqn{\Gamma_{5\mu}^{fg}(k;P) = Z_2 \gamma_5\gamma_\mu - \int_q g^2D_{\alpha\beta}(k-q)\, }\\
\nonumber
&& \rule{-2em}{0ex} \times \frac{\lambda^a}{2}\,\gamma_\alpha S_f(q_+) \Gamma_{5\mu}^{fg}(q;P) S_g(q_-) \frac{\lambda^a}{2}\,\Gamma_\beta^g(q_-,k_-) \\
&& \rule{-2em}{0ex} + \int_q g^2D_{\alpha\beta}(k-q)\, \frac{\lambda^a}{2}\,\gamma_\alpha S_f(q_+) \frac{\lambda^a}{2} \Lambda_{5\mu\beta}^{fg}(k,q;P), \label{genbse}
\end{eqnarray}
where $\Lambda_{5\mu\beta}^{fg}$ is a 4-point Schwinger function that is completely defined via the quark self-energy.\cite{Munczek:1994zz,Bender:1996bb} The pseudoscalar vertex, $\Gamma_5^{fg}(k;P)$, satisfies an analogous equation and has the general form
\begin{eqnarray}
\nonumber
\lefteqn{i\Gamma_{5}^{fg}(k;P) = \gamma_5 \left[ i E_5(k;P) + \gamma\cdot P F_5(k;P) \right.}\\
&& \left. + \gamma\cdot k \, G_5(k;P) + \sigma_{\mu\nu} k_\mu P_\nu H_5(k;P) \right].
\label{genG5}
\end{eqnarray}
In any dependable study of light-quark hadrons the solution of Eq.\,(\ref{genbse}) must satisfy the axial-vector Ward-Takahashi identity; viz.,
\begin{eqnarray}
\nonumber
&& P_\mu \Gamma_{5\mu}^{fg}(k;P) + \, i\,[m_f(\zeta)+m_g(\zeta)] \,\Gamma_5^{fg}(k;P)\\
&=&S_f^{-1}(k_+) i \gamma_5 + i \gamma_5 S_g^{-1}(k_-) \,,
\label{avwtim}
\end{eqnarray}
which expresses chiral symmetry and its breaking pattern. The condition
\begin{eqnarray}
\nonumber && P_\mu \Lambda_{5\mu\beta}^{fg}(k,q;P) + i [m_f(\zeta)+m_g(\zeta)] \Lambda_{5\beta}^{fg}(k,q;P)\\
& =&
\Gamma_\beta^f(q_+,k_+) \, i\gamma_5+ i\gamma_5 \, \Gamma_\beta^g(q_-,k_-) \label{LavwtiGamma}
\end{eqnarray}
where $\Lambda_{5\beta}^{fg}$ is the analogue of $\Lambda_{5\mu\beta}^{fg}$ in the pseudoscalar equation, is necessary and sufficient to ensure the Ward-Takahashi identity is satisfied.\cite{Chang:2009zb}
Consider Eq.\,(\ref{LavwtiGamma}). Rainbow-ladder is the lead\-ing-or\-der term in a systematic DSE truncation scheme.\cite{Munczek:1994zz,Bender:1996bb} It corresponds to $\Gamma_\nu^f=\gamma_\nu$, in which case Eq.\,(\ref{LavwtiGamma}) is solved by $\Lambda_{5\mu\beta}^{fg}\equiv 0 \equiv \Lambda_{5\beta}^{fg}$. This is the solution that indeed provides the rainbow-ladder forms of Eq.\,(\ref{genbse}). Such consistency will be apparent in any valid systematic term-by-term improvement of the rainbow-ladder truncation.
However, Eq.\,(\ref{LavwtiGamma}) is far more than merely a device for checking a truncation's consistency. For, just as the vector Ward-Takahashi identity has long been used to build \emph{Ans\"atze} for the dressed-quark-photon vertex (e.g., Refs.\,\cite{Roberts:1994dr,Ball:1980ay,Kizilersu:2009kg}), Eq.\,(\ref{LavwtiGamma}) provides a tool for constructing a symmetry preserving kernel of the BSE that is matched to any reasonable \emph{Ansatz} for the dressed-quark-gluon vertex which appears in the gap equation. With this powerful capacity Eq.\,(\ref{LavwtiGamma}) achieves a goal that has been sought ever since the Bethe-Salpeter equation was introduced.\cite{Salpeter:1951sz} The symmetry-preserving kernel it can provide promises to enable the first reliable Poincar\'e invariant calculation of the spectrum of mesons with masses larger than 1\,GeV.
The utility of Eq.\,(\ref{LavwtiGamma}) can be illustrated through an application to ground state pseudoscalar and scalar mesons composed of equal-mass $u$- and $d$-quarks. To this end, suppose that in Eq.\,(\ref{gendse}) one employs an \emph{Ansatz} for the quark-gluon vertex which satisfies
\begin{equation}
P_\mu i \Gamma_\mu^f(k_+,k_-) = {\cal B}(P^2)\left[ S_f^{-1}(k_+) - S_f^{-1}(k_-)\right]\,, \label{wtiAnsatz}
\end{equation}
with ${\cal B}$ flavour-independent. (NB.\ While the true quark-gluon vertex does not satisfy this identity, owing to the form of the Slavnov-Taylor identity which it does satisfy, it is plausible that a solution of Eq.\,(\protect\ref{wtiAnsatz}) can provide a reasonable pointwise approximation to the true vertex.) Given Eq.\,(\ref{wtiAnsatz}), then Eq.\,(\ref{LavwtiGamma}) entails ($l=q-k$)
\begin{equation}
i l_\beta \Lambda_{5\beta}^{fg}(k,q;P) =
{\cal B}(l)^2\left[ \Gamma_{5}^{fg}(q;P) - \Gamma_{5}^{fg}(k;P)\right], \label{L5beta}
\end{equation}
with an analogous equation for $P_\mu l_\beta i\Lambda_{5\mu\beta}^{fg}(k,q;P)$. This identity can be solved to obtain
\begin{eqnarray}
\Lambda_{5\beta}^{fg}(k,q;P) & := & {\cal B}((k-q)^2)\, \gamma_5\,\overline{ \Lambda}_{\beta}^{fg}(k,q;P) \,, \label{AnsatzL5a}
\end{eqnarray}
with, using Eq.\,(\ref{genG5}),
\begin{eqnarray}
\nonumber
\lefteqn{
\overline{ \Lambda}_{\beta}^{fg}(k,q;P) = 2 \ell_\beta \, [ i \Delta_{E_5}(q,k;P)+ \gamma\cdot P \Delta_{F_5}(q,k;P) ]}\\
\nonumber
&& + \gamma_\beta \, \Sigma_{G_5}(q,k;P) +
2 \ell_\beta \, \gamma\cdot\ell\, \Delta_{G_5}(q,k;P) + [ \gamma_\beta,\gamma\cdot P]\\
&& \times \Sigma_{H_5}(q,k;P) + 2 \ell_\beta [ \gamma\cdot\ell ,\gamma\cdot P] \Delta_{H_5}(q,k;P) \,,
\label{AnsatzL5b}
\end{eqnarray}
where $\ell=(q+k)/2$, $\Sigma_{\Phi}(q,k;P) = [\Phi(q;P)+\Phi(k;P)]/2$ and $\Delta_{\Phi}(q,k;P) = [\Phi(q;P)-\Phi(k;P)]/[q^2-k^2]$.
Now, given any \emph{Ansatz} for the quark-gluon vertex that satisfies Eq.\,(\ref{wtiAnsatz}), then the pseudoscalar analogue of Eq.\,(\ref{genbse}) and Eqs.\,(\ref{gendse}), (\ref{AnsatzL5a}), (\ref{AnsatzL5b}) provide a symmetry-preserving closed system whose solution predicts the properties of pseudoscalar mesons.
The relevant scalar meson equations are readily derived. (NB.\ We are aware of the role played by resonant contributions to the kernel in the scalar channel \protect\cite{Holl:2005st} but they are not pertinent to this discussion.)
With these systems one can anticipate, elucidate and understand the impact on hadron properties of the rich nonperturbative structure expected of the fully-dressed quark-gluon vertex in QCD.
To proceed one need only specify the gap equation's kernel because the BSEs are completely defined therefrom. To complete the illustration\cite{Chang:2009zb} a simplified form of the effective interaction in Ref.\,\cite{Maris:1997tm} was employed and two vertex \emph{Ans\"atze} were compared; viz., the bare vertex $\Gamma_\mu^g = \gamma_\mu$, which defines the rainbow-ladder truncation of the DSEs and omits vertex dressing; and the Ball-Chiu vertex,\cite{Ball:1980ay} which nonperturbatively incorporates vertex dressing associated with DCSB.
\begin{center}
\centerline{\includegraphics[clip,width=0.43\textwidth]{F1u.eps}}
\vspace*{-5ex}
\centerline{\includegraphics[clip,width=0.43\textwidth]{F1l.eps}}
\vspace*{-2ex}
\figcaption{\label{massDlarge} Current-quark-mass dependence of pseudoscalar (upper panel) and scalar (lower) meson masses. The Ball-Chiu vertex result is compared with the rainbow-ladder result.}
\end{center}
The procedure outlined herein enables one to calculate the current-quark-mass-dependence of meson masses using a symmetry-preserving DSE truncation whose diagrammatic content is unknown. That dependence is depicted in Fig.\,\ref{massDlarge} and compared with the rainbow-ladder result. The $m$-dependence of the pseudoscalar meson's mass provides numerical confirmation of the algebraic fact that the axial-vector Ward-Takahashi identity is preserved by both the rainbow-ladder truncation and the BC-consistent \emph{Ansatz} for the Bethe-Salpeter kernel. The figure also shows that the axial-vector Ward-Takahashi identity and DCSB conspire to shield the pion's mass from material variation in response to dressing the quark-gluon vertex.\cite{Roberts:2007jh,Bhagwat:2004hn}
As noted in Ref.\,\cite{Chang:2009zb}, a rainbow-ladder kernel with realistic interaction strength yields
\begin{equation}
\label{epsilonRL}
\varepsilon_\sigma^{\rm RL} := \left.\frac{2 M(0) - m_\sigma }{2 M(0)}\right|_{\rm RL} = (0.3 \pm 0.1)\,,
\end{equation}
which can be contrasted with the value obtained using the BC-consistent Bethe-Salpeter kernel; viz.,
\begin{equation}
\label{epsilonBC}
\varepsilon_\sigma^{\rm BC} \lesssim 0.1\,.
\end{equation}
Plainly, significant additional repulsion is present in the BC-consistent truncation of the scalar BSE.
Scalar mesons are commonly identified as $^3\!P_0$ states. This assignment reflects a constituent-quark model perspective, from which a $J^{PC}=0^{++}$ fermion-antifermion bound-state must have the constituents' spins aligned and one unit of constituent orbital angular momentum. From this viewpoint a scalar is a spin and orbital excitation of a pseudoscalar meson. We note that although the constituent-quark model cannot be connected with QCD, the presence of orbital angular momentum in a hadron's rest frame is a necessary consequence of Poincar\'e covariance and the vector-boson-exchange character of that theory.\cite{Bhagwat:2006pu,Bhagwat:2006xi,Cloet:2007pi}
Extant studies of realistic corrections to the rainbow-ladder truncation show that they reduce hyperfine splitting.\cite{Bhagwat:2004hn} Hence, with the comparison between Eqs.\,(\ref{epsilonRL}) and (\ref{epsilonBC}) one has a clear indication that in a Poincar\'e covariant treatment the BC-consistent truncation magnifies spin-orbit splitting. This may be attributed to the influence of the quark's dynamically-enhanced scalar self-energy\cite{Roberts:2007ji} in the Bethe-Salpeter kernel.
This feature may reasonably be expected to have a material impact on mesons with mass greater than 1\,GeV. Indeed, \emph{prima facie} it can plausibly overcome a longstanding shortcoming of the rainbow-ladder truncation; viz., that the splitting between vector and axial-vector mesons is too small.\cite{Maris:2006ea}
This expectation is supported by Ref.\,\cite{Bloch:1999vka} wherein, using a separable \emph{Ansatz} for the Bethe-Salpeter kernel which depends explicitly on the strength of DCSB, a vector--axial-vector mass-splitting is obtained that is commensurate with experiment.
\section{Baryons}
\subsection{Faddeev equation}
While a symmetry-preserving description of mesons is essential, it is only part of the story that nonperturbative QCD has to tell. An explanation of the spectrum of baryons and the nature of interactions between them is basic to understanding the Standard Model. The present and future resonance programmes at JLab and the Excited Baryon Analysis Center are critical elements in this effort. They are a vital complement to the Hall-D meson programme.
QCD confines light-quarks in particle-antiparticle pairs and also in three-particle composites. No approach to nonperturbative QCD is comprehensive if it cannot provide a unified explanation of both. DCSB, a keystone of the Standard Model and evident in the momentum-dependence of the dressed-quark mass function -- Fig.\,\ref{gluoncloud}, is just as important to baryons as it is to mesons. The DSEs furnish the only extant framework that can simultaneously connect both meson and baryon observables with this basic feature of QCD, having provided, e.g., a direct correlation of meson and baryon properties via a single interaction kernel, which preserves QCD's one-loop renormalisation group behaviour and can systematically be improved.\cite{Eichmann:2008ae,Eichmann:2008ef}
In quantum field theory a baryon appears as a pole in a six-point quark Green function. The residue is proportional to the baryon's Faddeev amplitude, which is obtained from a Poincar\'e covariant Faddeev equation that sums all possible exchanges and interactions that can take place between three dressed-quarks. A tractable Faddeev equation for baryons\cite{Cahill:1988dx} is founded on the observation that an interaction which describes colour-singlet mesons also generates nonpointlike quark-quark (diquark) correlations in the colour-$\bar 3$ (antitriplet) channel.\cite{Cahill:1987qr} The lightest diquark correlations appear in the $J^P=0^+,1^+$ channels\cite{Burden:1996nh,Maris:2002yu} and hence today only they are retained in approximating the quark-quark scattering matrix that appears as part of the Faddeev equation.\cite{Eichmann:2008ef,Cloet:2008re}
While diquarks do not appear in the strong interaction spectrum,\cite{Bender:1996bb,Bhagwat:2004hn} the attraction between quarks in this channel justifies a picture of baryons in which two quarks are always correlated as a colour-$\bar 3$ diquark pseudoparticle, and binding is effected by the iterated exchange of roles between the bystander and diquark-participant quarks. Here it is important to emphasise strongly that QCD supports \emph{nonpointlike} diquark correlations.\cite{Maris:2004bp,Alexandrou:2006cq} Models that employ pointlike diquark degrees of freedom cannot be connected with QCD. This, however, is a defect they share with all approaches that employ pointlike-constituent degrees of freedom. It is therefore not surprising that experimental observations contradict the spectroscopic predictions of such models; e.g., in connection with the so-called \emph{missing resonance} problem, the best information available today indicates that even some listed $\ast\ast\ast\ast$-resonances should be discarded.\cite{tshlee}
\subsection{Strangeness sigma-term}
Numerous properties of the nucleon have been computed using the Faddeev equation just described. An example is the nucleon's $\sigma$-term, with the result:\cite{Flambaum:2005kc}
\begin{equation}
\label{sigmaN}
f_N^u:=\frac{\sigma_N}{M_N} \approx\,6\%.
\end{equation}
This measures the contribution to the nucleon's mass from the explicit chiral symmetry breaking term associated with $u$- and $d$-quarks in QCD's Lagrangian. Of material additional interest is the contribution to the nucleon's mass from the $s$-quark mass term.
We have estimated this by analysing the dressed-quark $\sigma$-term\cite{Flambaum:2005kc,Holl:2005st} using a gap equation that incorporates $\pi$- and $K$-loop contributions and which has previously been used to estimate the strangeness contribution to the nucleon's magnetic moment: $\mu_p^S\approx -0.02\,$nuclear magnetons.\cite{Cloet:2008fw} The model yields
\begin{equation}
\label{sigmaQ}
f_U^u:= \frac{m_u}{M_u} \frac{d M_u}{d m_u} = 6.5\% , \quad f_U^s:=\frac{m_s}{M_u} \frac{d M_u}{d m_s} = 2.4\%.
\end{equation}
Comparing Eqs.\,(\ref{sigmaN}) and (\ref{sigmaQ}), one observes $f_N^u \simeq f_U^u$. One would have equality between these two quantities in a weak-binding independent particle model. We therefore anticipate that
\begin{equation}
\label{fNs}
f_N^s \approx f_U^s = 2.4\%.
\end{equation}
The results in Eqs.\,(\ref{sigmaN}) and (\ref{fNs}) agree with those inferred recently\cite{Young:2009zb} from numerical simulations of lattice-regularised QCD.
We observe that pseudoscalar meson exchange is attractive between a fermion and antifermion. Hence one knows with certainty \emph{a priori} that a valid calculation of pseudoscalar-meson-loop contributions to the gap and Bethe-Salpeter equations must show a reduction in both the mass of the fermion that the mesons dress and the mass of the bound-state they bind. One check on a study is the effect of the loops on the quark condensate: in a realistic truncation they must reduce its size, as is found, e.g., in the model employed above. It follows from this fact and the axial-vector Ward-Takahashi identity that in the neighbourhood of the chiral limit
\begin{equation}
\left. f_\pi^2 m_\pi^2 \right|_{\rm with~loops} <
\left. f_\pi^2 m_\pi^2\right|_{\rm without~loops} \,.
\end{equation}
The symmetry-preserving inclusion of pion-loop corrections to the gap and bound-state equations is challenging but progress has been made.\cite{Fischer:2007ze}
In a recent application of this proposed method\cite{Fischer:2008wy} some results were obtained that conflict with our statements. Their origin is now understood (a confusion in the renormalisation prescription) and they will be corrected in a forthcoming article.\cite{CFRWprivate}
\subsection{Neutron electromagnetic form factors}
A comprehensive analysis of nucleon electromagnetic form factors using the DSEs is available\cite{Cloet:2008re} and the calculation of nucleon-to-resonance transition form factors is underway.
\begin{center}
\includegraphics[clip,width=0.46\textwidth]{GEn.eps}
\figcaption{\label{GEneutron}
Sachs neutron electric form factor: \emph{solid curve} -- DSE prediction;\protect\cite{Cloet:2008re}
\emph{dashed curve} -- a 2004 parametrisation of data.\protect\cite{Kelly:2004hm} New JLab Hall-A data on the neutron form factor at $Q^2 = 1.71,2.51,3.47\,$GeV$^2$ will soon be available.\protect\cite{bogdan}}
\end{center}
\begin{center}
\includegraphics[clip,width=0.46\textwidth]{F1nud.eps}
\includegraphics[clip,width=0.46\textwidth]{GEnud.eps}
\figcaption{\label{GEneutronud}
Flavour decomposition of neutron form factors:\protect\cite{Cloet:2008re} \emph{Upper panel} -- Dirac; \emph{Lower panel} -- Sachs electric. In both panels: \emph{solid curve} -- $f$=$d$-quark; \emph{dashed curve} -- $f$=$u$-quark. All form factors normalised to unity at $q^2=0$. In reality:
$F_1^{nd}(0)=G_E^{nd}(0)=-\frac{2}{3}$ and $F_1^{nu}(0)=G_E^{nu}(0)=\frac{2}{3}$; $F_1^{nd}(q^2)$ is negative definite, $G_E^{nd}(q^2)$ becomes positive at $q^2\approx 9\,$GeV$^2$, $F_1^{nu}(q^2)$ becomes negative at $q^2\approx 7\,$GeV$^2$ and $G_E^{nu}(q^2)$ becomes negative at $q^2\approx 10\,$GeV$^2$.}
\end{center}
These studies compute a dressed-quark core contribution to the form factors, which is defined by the solution of the Poincar\'e covariant Faddeev equation.
\begin{center}
\includegraphics[clip,width=0.30\textwidth]{impulse1.eps}
\figcaption{\label{impulse} Bystander quark vertex, one of six diagrams that contribute to a conserved current for on-shell nucleons described by the Faddeev equation solution, $\Psi_{i,f}$.\protect\cite{Oettel:1999gc} The single line represents $S(p)$, the dressed-quark propagator, and the double line is the diquark propagator.}
\end{center}
A particular feature of the study cited is a separation of form factor contributions into those from different
diagram types and correlation sectors, and subsequently a flavour separation for each of these. Amongst the extensive body of results that one might highlight are: both the neutron (Fig.\,\ref{GEneutron}) and proton Sachs electric form factor possess a zero; and, owing to the presence of axial-vector quark-quark correlations, $r_1^{n,u}>r_1^{n,d}$ but $r_E^{n,u}<r_E^{n,d}$ (Fig.\,\ref{GEneutronud}).
In vanishing twice on the domain accessible to reliable calculation, at the origin owing to charge neutrality, and at $Q^2\approx 11\,$GeV$^2$ owing to dynamics, the
neutron's electric form factor is special. The origin of this second zero can be explained and is consistent with intuition.\cite{Cloet:2008re} For example, consider $G_E^{n,q_b}$, which is the contribution to the form factor from a \emph{bystander quark}; viz., the contribution from a process in which the photon strikes a quark that is neither within a diquark nor breaking away from one, illustrated in Fig.\,\ref{impulse}. (NB.\ This is one contribution to the quantity plotted in Fig.\,\ref{GEneutronud}, which is the total $f$-quark contribution to the form factor.) $G_E^{n,q_b}$ is negative at small-$Q^2$ because the scalar diquark component of the Faddeev amplitude is dominant and that is paired with a $d$-quark bystander in the neutron. This dressed-quark is responsible for the preponderance of negative charge at long range. $G_E^{n,q_b}$ is positive at large $Q^2$ because $F_2^n$ dominates on that domain, which focuses attention on the axial-vector diquark component of the Faddeev amplitude. The positively charged $u$-quark is most likely the bystander quark in these circumstances.
\subsection{Neutron charge distribution}
These features are manifest in the configuration-space charge density, obtained through a three-dimensional Fourier-Transform; namely,
\begin{equation}
\label{rhonr}
\rho_n(r) = \frac{1}{2\pi^2 r} \int_0^\infty \! dq\, q \,\sin (q r)\, G_E^n(q^2)\,,
\end{equation}
\begin{center}
\includegraphics[clip,width=0.46\textwidth]{Fig3u.eps}
\includegraphics[clip,width=0.46\textwidth]{Fig3l.eps}
\figcaption{\label{rhoneutron}
Distribution of charge within the neutron evaluated from the three-dimensional Fourier transform in Eq.\,(\protect\ref{rhonr}): \emph{solid curve} -- DSE prediction;\protect\cite{Cloet:2008re}
\emph{dashed curve} -- a 2004 parametrisation of data.\protect\cite{Kelly:2004hm}
}
\end{center}
which is depicted in Fig.\,\ref{rhoneutron}. To compute $\rho_n(r)$ we inferred $G_E^n(q^2)$ from the calculated ratio $\mu_n G_E^n(q^2)/G_M^n(q^2)$ (Fig.\,16, Ref.\,\cite{Cloet:2008re}) multiplied by
the empirical dipole: $1/[1+q^2/(0.84\,{\rm GeV})^2]^2$. This procedure corrects for the deliberate omission of pion cloud effects in Ref.\,\cite{Cloet:2008re}. The result is depicted in Fig.\,\ref{GEneutron}. Caveats on the interpretation of $\rho_n(r)$ as a quantum mechanical charge density are discussed in Sec.\,4 of Ref.\,\cite{Cloet:2008re}. These observations notwithstanding, the mapping between the $q^2$-dependence of the Sachs electric form factor and the charge density is intuitively appealing and instructive.
In the comparison made in Fig.\,\ref{rhoneutron} between the DSE prediction\cite{Cloet:2008re} and a 2004 parametrisation of data,\cite{Kelly:2004hm} two features are striking: a significant depletion of positive charge at the core of the neutron accompanied by an increased concentration of negative charge toward the surface; and oscillations in the charge distribution. NB.\ We computed the charge density arising only from the dressed-quark core.
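The transform in Eq.\,(\ref{rhonr}) is easy to explore numerically. The sketch below uses a toy form factor, $G(q^2)=q^2/(1+q^2)^4$ (hypothetical, chosen only because charge neutrality demands $G(0)=0$; natural units, with $q$ in GeV and $r$ in GeV$^{-1}$), and checks that the density it generates carries zero net charge while its core is positive:

```python
import math

def G(q2):
    # Toy neutral form factor: G(0) = 0, positive for q^2 > 0 (hypothetical).
    return q2 / (1.0 + q2)**4

def rho(r, qmax=20.0, dq=0.01):
    # rho_n(r) = 1/(2 pi^2 r) * int_0^inf dq q sin(q r) G(q^2)
    s, q = 0.0, dq
    while q < qmax:
        s += q * math.sin(q * r) * G(q * q) * dq
        q += dq
    return s / (2.0 * math.pi**2 * r)

dr, rmax = 0.05, 30.0
rs = [dr * (i + 1) for i in range(int(rmax / dr))]
density = [rho(r) for r in rs]

# Net charge = G(0) = 0; for this toy the positive core is balanced by
# negative charge at larger r.
net_charge = sum(4.0 * math.pi * r * r * d * dr for r, d in zip(rs, density))
assert abs(net_charge) < 1e-2
assert density[3] > 0  # r = 0.2 GeV^-1, inside the positive core
```

Substituting any parametrisation of $G_E^n(q^2)$ for the toy `G` reproduces the kind of density shown in Fig.\,\ref{rhoneutron}; the zero-net-charge check is a useful guard on the numerical quadrature.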
The depletion of charge is associated with the second zero in $G_E^n(q^2)$ and the domain of negative support which follows. The amount of charge depletion is determined by the magnitude of $G_E^n(q^2)$ at its min-%
\begin{center}
\includegraphics[clip,width=0.46\textwidth]{GMnDipole.eps}
\figcaption{\label{GMdipole}
Sachs neutron magnetic form factor divided by dipole fit: \emph{solid curve} -- DSE prediction;\protect\cite{Cloet:2008re}
\emph{points} -- contemporary experiment.\protect\cite{Lachniet:2008qf} As explained in Sec.\,8 of the DSE study,\protect\cite{Cloet:2008re} pseudoscalar meson loops, deliberately omitted in the calculation displayed here, will work to flatten the prediction, with greatest impact for $Q^2\lesssim 3 M_N^2$.}
\end{center}
imum: should the magnitude exceed a critical value, $\rho_n(r)$ would become negative in the neighbourhood of $r=0$.
The oscillations at $r\gtrsim 0.5\,$fm are connected with the shape of $G_E^n(q^2)$ on its domain of positive support. Crucial to their appearance and nature are the following features: $G_E^n(0)=0$; the location and magnitude of the maximum of $G_E^n(q^2)$; and the fact that the domain of positive support is bounded.
Given that it is common practice to compare nucleon form factors with an empirical dipole, it is of interest to compare the DSE results with a dipole parametrisation. To that end we fitted $1/(1+Q^2/m_D^2)^2$ to the DSE result on $2\leq Q^2/M_N^2 < 9$. This domain excludes the region where pion cloud effects are significant and maximises coverage of the domain on which the quark-core calculation is most reliable. The fit produced $m_D= 1.06\, M_N$, which is just 19\% larger than the empirical dipole mass, $m_D^{\rm emp}=0.89\,M_N$. The ratio obtained with the computed dipole mass is depicted in Fig.\,\ref{GMdipole} and compared with a modern experimental determination.\cite{Lachniet:2008qf}
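The one-parameter dipole fit described above can be reproduced mechanically. The sketch below fits $m_D$ on the window $2\leq Q^2/M_N^2 < 9$ to synthetic form-factor values (an exact dipole standing in for the DSE result, which is not reproduced here), simply to illustrate the procedure:

```python
import numpy as np

M_N = 0.939  # nucleon mass in GeV

def dipole(Q2, m_D):
    return 1.0 / (1.0 + Q2 / m_D ** 2) ** 2

# Synthetic "form factor" values standing in for the DSE result (assumption):
# an exact dipole with m_D = 1.06 M_N, sampled on the fit window.
m_true = 1.06 * M_N
Q2 = np.linspace(2.0, 9.0, 50) * M_N ** 2
G = dipole(Q2, m_true)

# One-parameter least-squares fit by grid search over the dipole mass.
trial = np.linspace(0.5, 1.5, 2001) * M_N
sse = [np.sum((dipole(Q2, m) - G) ** 2) for m in trial]
m_fit = trial[int(np.argmin(sse))]
print(f"fitted dipole mass: {m_fit / M_N:.3f} M_N")  # -> 1.060 M_N
```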
\section{Perspective}
Plainly, much has been learnt from the application of Dyson-Schwinger equations (DSEs) to problems in nonperturbative QCD. This process will continue.
For example, comparison between DSE results and forthcoming precision data on nucleon form factors, both elastic and resonance-transition, holds promise as a means by which to chart the momentum evolution of the dressed-quark mass function and therefrom the infrared behaviour of QCD's $\beta$-function. In particular, it should enable the unambiguous location of the transition boundary between the constituent- and current-quark domains that is signalled by the sharp drop apparent in Fig.\,\ref{gluoncloud}, and which can likely be related to an infrared inflexion point in QCD's running coupling, whose properties are determined by the $\beta$-function.
Contemporary theory indicates that this transition boundary lies at $p^2 \sim 0.6\,$GeV$^2$. Since a probe's input momentum $q$ is shared roughly equally amongst the dressed-quarks in elastic and transition processes, each can be considered as absorbing a momentum fraction $q/3$. Thus, in order to scan the behaviour of the mass function on the domain $p^2\in [0.5,1.0]\,$GeV$^2$, one requires $q^2\in [5,10]\,$GeV$^2$. This domain will become accessible upon completion of the upgrade currently underway at JLab.
\medskip
\acknowledgments{CDR is grateful to the organisers of NSTAR2009 for arranging a very rewarding meeting. We acknowledge valuable communications with V.~Burkert, G.\,P.~Gilfoyle, R.~Gothe, T.-S.\,H.~Lee, Y.\,X.~Liu, V.~Mokeev, R.\,A.~Schumacher, B.~Wojtsekhowski, R.\,D.~Young and H.\,S.~Zong.}
\end{multicols}
\vspace{-2mm}
\centerline{\rule{80mm}{0.1pt}}
\vspace{2mm}
\begin{multicols}{2}
\newcounter{dumbone}
\setcounter{dumbone}{0}
\section{Introduction}
\label{sec:intro}
Person re-identification (re-ID) aims to find the same person from a gallery collected by different cameras. It is a challenging task due to the large image variations caused by different human poses and camera settings. During the past few years, person re-ID has achieved great improvement, benefiting from the remarkable success of deep Convolutional Neural Nets (CNNs) \cite{krizhevsky2012alex,he2016deep}. Nevertheless, training a deep re-ID model requires a substantial amount of annotated data, which is quite expensive to obtain, especially across a large number of cameras. Under such circumstances, there is an urgent demand for learning a discriminative deep re-ID model from large-scale unlabeled data. In this paper, we address the challenging unsupervised person re-ID problem, where large-scale training data are provided but no label information is available.
Unsupervised person re-ID has been studied in many previous works \cite{liao2015person,yang2017unsupervised,zheng2015scalable}. These works mainly focus on designing discriminative hand-crafted features for small datasets, but degrade when applied to large-scale datasets. Deep CNNs have reached state-of-the-art performance on large-scale person re-ID datasets. Most existing deep CNN based re-ID models were trained by using either ID-discriminative embedding (IDE) \cite{zheng2016person} or a triplet (or pairwise) loss \cite{HermansBeyer2017Arxiv}. However, it is impossible to train these models without annotations on the training set, because both IDE and the triplet loss require label information or the relationship (positive or negative) of the given image with other training data. Only a few works have addressed deep learning based unsupervised re-ID. Fan \textit{et al.} \cite{fan2017unsupervised} propose a framework called UPL which progressively utilizes $k$-means clustering to find reliable positive pairs and fine-tunes the deep CNN model. The main drawbacks of UPL are that the initial re-ID model must be pre-trained on a labeled re-ID dataset and that a rough number of unique identities in the target dataset must be given for clustering.
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{figure/process.pdf}
\caption{The overall framework for training the deep re-ID model, which can be divided into three steps. In step 1, we use virtual data generated by DPG-GAN and Star-GAN to train a coarse deep re-ID model. Then, a collaborative filtering based positive pair mining approach is utilized to find reliable positive pairs from the real data in step 2. In step 3, we refine the coarse re-ID model by leveraging the virtual data and mined positive pairs with a multi-task loss function. Finally, we alternate between step 2 and step 3 until the re-ID model converges.}
\label{fig:process}
\end{figure*}
In this study, we propose a deep CNN based approach for unsupervised person re-ID. Our approach consists of two components: 1) virtual person generation and 2) training of the deep re-ID model. For virtual person generation, we first employ DPG-GAN \cite{ma2018disentangled} and Star-GAN \cite{choi2017stargan} to learn a person generation model and a camera style transfer model from the unlabeled real training data. As such, we can generate virtual persons with different poses and assign them corresponding pseudo labels. Each generated identity is then style-transferred to different cameras. These virtual persons form the virtual training data and are subsequently utilized to train a coarse deep re-ID model in a supervised way.
The training of the deep re-ID model is divided into three steps, as shown in Figure~\ref{fig:process}: 1) pre-training on virtual data, 2) positive pair mining, and 3) model fine-tuning.
For step 1, a coarse deep re-ID model is trained by using the generated virtual data. This model can provide discriminative representation for measuring the similarity of persons.
For step 2, we first use the pre-trained coarse re-ID model to extract features for each real image and compute its $k$-reciprocal nearest neighbors ($k$-RNNs) \cite{qin2011hello}. Although each image and one of its $k$-RNNs can be treated as a positive pair, there are a large number of false positive pairs, which negatively affect fine-tuning of the re-ID model. To alleviate this issue, a novel collaborative filtering based positive pair mining approach is proposed to find the most reliable positive pairs in the unlabeled real data.
In step 3, the mined positive pairs and the labeled virtual training data are simultaneously leveraged for model refinement using a multi-task loss function that applies IDE to the virtual data and the triplet loss to the mined positive pairs. Finally, the deep re-ID model is obtained by iterating between step 2 and step 3 until convergence.
To summarize, the main contributions of this study are as follows:
\begin{itemize}
\item We propose a novel framework for unsupervised person re-ID by leveraging the generated pseudo labeled virtual data and the unlabeled real data for deep re-ID network training.
\item A collaborative filtering based positive pair mining approach is used to select reliable training pairs from unlabeled real data for model refinement by leveraging person-to-person similarity relations.
\item The proposed method achieves state-of-the-art performance in unsupervised person re-ID on two large-scale datasets, Market-1501 and DukeMTMC-reID.
\end{itemize}
\section{Related Work}
\label{sec:review}
\textbf{Unsupervised Person Re-identification.} Unsupervised person re-ID attempts to learn discriminative features for pedestrians from unlabeled data. Hand-crafted features can be directly employed for unsupervised person re-ID. Farenzena \textit{et al.} \cite{farenzena2010person} propose to use weighted color histograms, maximally stable color regions and recurrent highly structured patches to separate the foreground of pedestrians from the background and compute appearance-based features for re-ID. Gray and Tao \cite{GrayTao} split the input image into horizontal stripes and use 8 color channels and 21 texture filters on the luminance channel to extract features. Recently, Zhao \textit{et al.} \cite{zhao2013unsupervised,zhao2013person,zhao2014learning} propose to split images of pedestrians into 10$\times$10 patches and combine LAB color histograms and SIFT features as the final descriptor. Liao \textit{et al.} \cite{liao2015person} introduce the local maximal occurrence descriptor (LOMO) by combining color features and SILTP histograms. Zheng \textit{et al.} \cite{zheng2015scalable} propose to extract a global visual feature by aggregating local color name descriptors; a bag-of-words model is then utilized for re-ID. Yang \textit{et al.} \cite{yang2017unsupervised} propose a weighted linear coding method for multi-level descriptor learning. These hand-crafted methods can be readily applied to unsupervised person re-ID but often fail to perform well on large-scale datasets.
Yu \textit{et al.} \cite{camel} present an unsupervised metric learning approach for re-ID called CAMEL. It employs asymmetric metric learning to find the shared space where the data representations are less affected by view-specific bias. Liu \textit{et al.} \cite{liu2017stepwise} propose a step-wise metric promotion model for unsupervised video person re-ID by iteratively estimating the annotations of training tracklets and optimizing re-ID model.
Recently, many works try to transfer a pre-trained re-ID model to the unlabeled dataset (also called domain adaptation). Peng \textit{et al.} \cite{peng2016unsupervised} exploit a multi-task dictionary learning method to learn a shared feature space between the labeled and unlabeled datasets. To take advantage of the strong discriminative ability of deep learning, Fan \textit{et al.} present a deep learning framework called UPL \cite{fan2017unsupervised}. They use a labeled dataset to initialize the feature embedding and then fine-tune the network with positive sample pairs obtained through $k$-means clustering on the unlabeled dataset. TJ-AIDL \cite{wang2018transferable} adopts a multi-branch network to establish an identity-discriminative and attribute-sensitive feature representation space for the target domain without any label information. Deng \textit{et al.} \cite{deng2018image} introduce SP-GAN, which jointly preserves self-similarity and domain-dissimilarity in the process of image-to-image translation. The source set is transferred to the style of the target set and is then used to learn a re-ID model for the target set. Similarly, Wei \textit{et al.} \cite{wei2017person} present PT-GAN to reduce the domain gap by translating the given image to the style of the target dataset and training a deep re-ID model in a supervised way. All the methods mentioned above require a labeled re-ID dataset to pre-train a re-ID model before transferring it to the unlabeled target set. In this paper, we conduct unsupervised person re-ID under a stricter condition, where only the unlabeled target set is provided.
\textbf{Person Image Generation.} Generating realistic person images is a challenging task because of the complexity of the foreground, person pose and background. Image generation models, \textit{e.g.}, VAE \cite{kingma2014auto} and GANs \cite{goodfellow2014generative}, have demonstrated their effectiveness in person generation. Zhao \textit{et al.} \cite{zhao2017multi} combine variational inference with GANs to generate multi-view images of persons in a coarse-to-fine manner. Ma \textit{et al.} \cite{ma2017pose} develop a framework to generate new person images in arbitrary poses, given input person images and a target pose. Despite the promising results, these two approaches require aligned person image pairs in the training stage. To solve this problem, Esser \textit{et al.} \cite{esser2018VAE} propose VAE-U-Net, which trains a person generation model by disentangling the shape and appearance of the input image. The novel image is generated with a U-Net for the target shape, conditioned on the VAE output for appearance. Ma \textit{et al.} \cite{ma2018disentangled} introduce DPG-GAN to generate virtual person images by simultaneously disentangling and encoding the foreground, background and pose information into embedding features. The embedding features are then combined to reconstruct the input person image.
\textbf{Style Transfer.} Style transfer is a sub-domain of image-to-image translation. Recent works conducted on GANs \cite{goodfellow2014generative} have achieved impressive results on image-to-image translation.
Pix2pix approaches this goal by optimizing both the adversarial and L1 losses of a cGAN \cite{mirza2014cGAN}. However, paired samples are required during training, which limits the application of pix2pix in practice. To alleviate this problem, Cycle-GAN \cite{zhu2017CycleGAN} introduces a cycle-consistency loss to preserve key attributes of both the source and target domains. These two models can only transfer images from one domain to another and may not be flexible enough for multi-domain translation. To overcome this problem, Star-GAN \cite{choi2017stargan} combines a classification loss and an adversarial loss during training to translate images into different styles with only one model.
\section{The Proposed Method}
\label{sec:model}
In this section, we first describe the pipeline of virtual person generation in Section~\ref{sec:person generation}. We then explain the training of the coarse deep re-ID model in Section~\ref{sec:fake-reid}. Finally, we present the details of collaborative filtering based positive pair mining in Section~\ref{sec:final-reid} and the model fine-tuning in Section~\ref{sec:model-finetune}.
\subsection{Virtual Person Generation}
\label{sec:person generation}
In unsupervised person re-ID, identity annotations are not available for the training set. This makes it difficult to train a deep re-ID model in the traditional way, such as with IDE \cite{zheng2016person} or the triplet loss \cite{HermansBeyer2017Arxiv}. To solve this problem, this paper considers learning the underlying distribution of the unlabeled person data and generating labeled virtual person images for deep re-ID model training. To achieve this goal, this work employs DPG-GAN \cite{ma2018disentangled} to generate person samples with different poses and transfers them to the styles of different cameras by Star-GAN \cite{choi2017stargan}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/step2.pdf}
\caption{The pipeline of virtual image generation. We first use DPG-GAN to generate virtual images from Gaussian noise. Then, we assign annotations to the virtual samples, where person samples with the same foreground share the same identity. Finally, we transfer the virtual person samples to the styles of different cameras with Star-GAN, spreading them evenly across cameras.}
\label{fig:step2}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{figure/genIMG.pdf}
\caption{Examples of virtual person images on Market-1501 and DukeMTMC-reID. Despite the successful virtual images, failure instances (\textit{e.g.} incomplete body parts and blurred backgrounds) may influence the performance of deep re-ID model.}
\label{fig:genImg}
\end{figure*}
\textbf{DPG-GAN.} DPG-GAN is an unsupervised person generation method that can obtain novel person images from Gaussian noise. A generator is proposed to disentangle the pose information, foreground and background masks of unlabeled real data and encode them into embedding representations. These embeddings are decoded to reconstruct the input image with an L1 loss. In addition, three generators are introduced to produce virtual embeddings from Gaussian noise, and corresponding discriminators try to distinguish the embeddings of real data from the virtual embeddings. In this way, DPG-GAN learns to synthesize virtual person samples with different appearances, backgrounds and poses.
\textbf{Star-GAN.} Star-GAN consists of a style transfer model $G(x,c)$ and a discriminator $D(x)$, where $x$ and $c$ represent the input image and target domain label, respectively. In this paper, we regard each camera as an independent domain. During training, $G$ is designed to generate a virtual image in the style of the target domain $c$. $D$ learns to distinguish between real images and style-transferred images, as well as to classify each real image to its corresponding camera domain. We alternately optimize $G$ and $D$ following the training strategy in \cite{choi2017stargan}.
\textbf{Virtual Dataset Generation.} Given unlabeled real training data, we first learn a person generation model and a camera style transfer model with DPG-GAN and Star-GAN, respectively. Then, we use DPG-GAN to randomly generate person images with different poses and transfer them to the styles of different cameras by Star-GAN. In Fig.~\ref{fig:step2}, we show the pipeline of virtual person generation, which can be summarized into four steps:
1. Define the number of identities (classes) $N_p$ and number of samples $N_e$ for each person. In this way, the number of images in the virtual dataset will be $N_p \times N_e$.
2. Sample real-like foreground $v_{fg}$, background $v_{bg}$ and pose $v_{pose}$ embeddings from Gaussian noise and feed them into pre-trained DPG-GAN for composing virtual person image. For each identity of person, we fix $v_{fg}$ and randomly sample $v_{bg}$ and $v_{pose}$ $N_e$ times to generate person images with different poses and backgrounds.
3. Repeat step 2 $N_p$ times to generate the whole virtual person dataset. Person images with the same foreground are assigned to the same identity.
4. Transfer the virtual person images into the styles of different cameras using the pre-trained Star-GAN. For the virtual person samples of each identity, we spread the transfers evenly across the $N_c$ camera styles.
In this way, we generate virtual person data with different poses and camera styles. Examples of virtual person images are shown in Fig.~\ref{fig:genImg}.
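The four generation steps above can be summarized in code. The sketch below uses hypothetical stand-ins for the two pre-trained generators (`dpg_gan` and `star_gan` here are placeholder stubs, not the real models) and shows only the labelling logic: a fixed foreground embedding defines one identity, and camera styles are assigned evenly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the pre-trained generators (assumptions):
# dpg_gan composes an image from foreground/background/pose embeddings,
# star_gan re-renders an image in the style of a target camera.
def dpg_gan(v_fg, v_bg, v_pose):
    return rng.standard_normal((128, 64, 3))

def star_gan(img, camera_id):
    return img  # identity stub

def generate_virtual_dataset(N_p, N_e, N_c, dim=64):
    images, labels, cams = [], [], []
    for pid in range(N_p):                   # step 3: repeat per identity
        v_fg = rng.standard_normal(dim)      # fixed foreground -> one identity
        for j in range(N_e):                 # step 2: vary background and pose
            v_bg = rng.standard_normal(dim)
            v_pose = rng.standard_normal(dim)
            img = dpg_gan(v_fg, v_bg, v_pose)
            cam = j % N_c                    # step 4: spread styles evenly
            images.append(star_gan(img, cam))
            labels.append(pid)               # same foreground, same label
            cams.append(cam)
    return images, labels, cams

images, labels, cams = generate_virtual_dataset(N_p=4, N_e=6, N_c=3)
print(len(images))  # N_p * N_e = 24
```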
\subsection{Training Coarse Deep Re-ID Model}
\label{sec:fake-reid}
Given that the labeled virtual person data contain $N_p$ identities, we can train a deep re-ID model in a supervised way, as in traditional methods. In this work, we regard the training of the deep re-ID model as a classification problem and train a coarse re-ID model based on IDE \cite{zheng2016person}. We adopt ResNet-50 \cite{he2016deep} as the backbone network and add two fully connected (FC) layers after the Pooling-5 layer. The first FC layer, named ``FC-1024'', has 1024 dimensions. The second FC layer, named ``FC-$\#$ID'', has $N_p$ dimensions, where $N_p$ is the number of identities in the virtual person dataset. The cross-entropy loss function is used to train the coarse re-ID model.
\subsection{Collaborative Filtering based Positive Pair Mining}
\label{sec:final-reid}
Although the person generation algorithm can produce high quality samples, it still generates a certain portion of poor instances (\textit{e.g.} broken limbs or blurred backgrounds), as shown in Fig.~\ref{fig:genImg}-(c) and Fig.~\ref{fig:genImg}-(f). These poor instances degrade the performance of the re-ID model, and the coarse deep re-ID model trained on virtual data is insufficient to discriminate the real data in the testing set. To address this problem, we attempt to mine positive pairs from the unlabeled data for model refinement.
\textbf{Definition.} We denote the unlabeled real data as $\mathcal{U}$. Given a query image $p \in \mathcal{U}$, our goal is to find the positive sample sharing the same identity with $p$ from $\mathcal{U}$ (except $p$). Based on the pre-trained coarse re-ID model, we extract the output of pooling-5 as the feature for each real image, and compute the pair-wise similarity matrix $\textbf{S}$ between all real images as:
\begin{equation}
\textbf{S}_{p,q}=\exp (-||v_{p}-v_{q}||_2),
\label{eq:Simi}
\end{equation}
where $ v_{p}$ and $ v_{q}$ are normalized pooling-5 features of image $p$ and $q$.
\textbf{$k$-reciprocal nearest neighbors.} Given the computed pair-wise similarity matrix, we could obtain the $k$-nearest neighbors (i.e. the top-$k$ samples in the similarity ranking list) for each real image. We define the $k$-nearest neighbors of $p$ as $N(p,k)$. In this paper, we adopt $k$-reciprocal nearest neighbors ($k$-RNNs) \cite{qin2011hello} instead of $k$-nearest neighbors as candidates that may contain positive samples of $p$. The $k$-RNNs for image $p$ is defined as:
\begin{equation}
R_{k}(p) = \{q_i| (q_i \in N(p,k)) \wedge (p \in N(q_i,k)) \},
\label{eq:krNN}
\end{equation}
where $q_i$ is among the top-$k$ similar samples of $p$, and $p$ is also among the top-$k$ of $q_i$. Intuitively, images in $R_{k}(p)$ are of high similarity with $p$ and can be utilized to form positive pairs. We name this approach $k$-reciprocal nearest neighbor based positive pair mining. However, it is prone to forming false positive pairs due to illumination changes, pose variation and other uncontrollable factors. To filter false samples from the candidates in $R_{k}(p)$, we therefore propose a collaborative filtering based positive pair mining approach to find more reliable samples that share the same identity with $p$.
\textbf{Collaborative filtering based positive pair mining.} Collaborative filtering (CF) is a technique utilized by recommender systems for preferences prediction \cite{BreeseCF}.
The underlying assumption of user-based CF is that if two persons have a large overlap in their opinions on items, they are very likely to have similar tastes. Inspired by user-based CF, we argue that if an image $p$ shares the same $k$-RNNs as an image $q$, they are more likely to be a positive pair. Based on the shared neighbors between $p$ and $q$, we are able to leverage their potential relations and recalculate their similarity. As shown in Fig.~\ref{fig:CF}, our approach includes four steps:
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{figure/CF.pdf}
\caption{Collaborative filtering based positive pair mining. Given a query image $p$ (\textcolor{blue}{blue}) of real data, we first compute the $k$-reciprocal nearest neighbors $R_k(p)$ of $p$ (\textcolor{green}{green}). Then, the collaborator set (\textcolor{blue}{blue}) of $p$ and each candidate $q$ in $R_k(p)$ is mined in step (b). The collaborative filtering similarity of $p$ and each candidate $q$ in $R_k(p)$ is calculated by Eq.~\ref{eq:CF} in step (c). Finally, the image pair with the highest re-calculated similarity is selected as the positive pair (\textcolor{green}{green}) in step (d).}
\label{fig:CF}
\end{figure*}
\begin{enumerate}
\item \textit{Obtaining $k$-reciprocal nearest neighbors.} Given the computed pair-wise similarity matrix, we first calculate the $k$-RNNs for each real image according to Eq.~\ref{eq:krNN}. For a query image $p$, we represent the $k$-RNNs of $p$ as $R_{k}(p)$ and aim to find reliable positive samples from $R_{k}(p)$.
\item \textit{Collaborator mining}. We denote collaborators as the shared $k$-RNNs of two images. Thus, given a query image $p$ and a candidate image $q$ in $R_{k}(p)$, the collaborator set $C$ of $p$ and $q$ is defined as:
\begin{equation}
C(p, q) = \{c_i| (c_i \in R_k(p)) \wedge (c_i \in R_k(q)) \}.
\label{eq:CMF}
\end{equation}
\item \textit{Collaborative filtering similarity}. Based on the collaborator set of $p$ and $q$, we calculate the filtered similarity as:
\begin{equation}
\textbf{F}_{p,q} = \textbf{S}_{p,q} + \sum_{i=1}^{|C|}{\textbf{w}_{q,c_i}\textbf{S}_{p,c_i}},
\label{eq:CF}
\end{equation}
where $| \cdot |$ denotes the number of elements in a set, and $\textbf{w}_{q,c_i}$ is the normalized weight measuring the significance of collaborator $c_i$, defined as:
\begin{equation}
\textbf{w}_{q,c_i} = \frac{\textbf{S}_{q,c_i}}{\sum_{i=1}^{|C|}{\textbf{S}_{q,c_i}}}.
\label{eq:normalize}
\end{equation}
The filtered similarity not only considers the original pair-wise distance of $p$ and $q$, but also takes the similarities between $p$, $q$ and the collaborator set into consideration.
\item \textit{Positive pair mining}. With the calculated collaborative filtering similarities between query image $p$ and images in $R_k(p)$, image $q^*$ with the highest similarity $F$ is selected to construct a positive pair $(p, q^*)$ for re-ID model fine-tuning:
\begin{equation}
q^* = \mathop{\arg\max}_{q \in R_k(p)} \textbf{F}_{p,q}.
\label{eq:PPM}
\end{equation}
In practice, we find that the positive pairs obtained by our algorithm are always from the same camera. This may make the re-ID model sensitive to camera variations, whereas the main goal of re-ID is to retrieve a person across different cameras. To alleviate this problem, we remove all images that share the same camera as the given image $p$ during the calculation of $k$-RNNs.
\end{enumerate}
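The four mining steps can be condensed into a short NumPy sketch (a minimal implementation of Eqs.~\ref{eq:Simi}--\ref{eq:PPM}; the same-camera exclusion described above is omitted for brevity, and feature extraction by the re-ID model is replaced by arbitrary input vectors):

```python
import numpy as np

def mine_positive_pairs(feats, k):
    """Collaborative filtering based positive pair mining.
    feats: (n, d) array of L2-normalised features; returns, for each
    image p, the mined partner q* (or None if R_k(p) is empty)."""
    n = feats.shape[0]
    # Eq. (eq:Simi): pairwise similarity S_{p,q} = exp(-||v_p - v_q||_2).
    diff = feats[:, None, :] - feats[None, :, :]
    S = np.exp(-np.linalg.norm(diff, axis=2))
    np.fill_diagonal(S, -np.inf)               # exclude p from its own neighbours
    # k-nearest neighbours N(p, k) from the similarity ranking.
    knn = [set(np.argsort(-S[p])[:k]) for p in range(n)]
    # Eq. (eq:krNN): k-reciprocal nearest neighbours R_k(p).
    R = [{q for q in knn[p] if p in knn[q]} for p in range(n)]
    mined = {}
    for p in range(n):
        best, best_F = None, -np.inf
        for q in R[p]:
            C = R[p] & R[q]                    # Eq. (eq:CMF): collaborator set
            w_sum = sum(S[q, c] for c in C)
            F = S[p, q]                        # Eq. (eq:CF) with Eq. (eq:normalize)
            for c in C:
                F += (S[q, c] / w_sum) * S[p, c]
            if F > best_F:
                best, best_F = q, F
        mined[p] = best                        # Eq. (eq:PPM): argmax over R_k(p)
    return mined

# Two tight clusters: mined partners should stay within a cluster.
feats = np.array([[1.0, 0.0], [0.98, 0.2], [0.95, 0.3],
                  [0.0, 1.0], [0.2, 0.98], [0.3, 0.95]])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(mine_positive_pairs(feats, k=2))
```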
\begin{table*}[t]
\centering
\caption{Unsupervised person re-ID performance comparison with state-of-the-art methods on Market-1501 and DukeMTMC-reID.}
\label{tab:cmpDA}
\resizebox{\textwidth}{!}{
\begin{tabular}{c|ccccc|ccccc} \hline
\multirow{1}{*}{Methods} & \multicolumn{5}{c}{Duke $\rightarrow$ Market} \vline & \multicolumn{5}{c}{Market $\rightarrow$ Duke} \\
\cline{2-11}
(Domain Adaptation) & mAP & rank-1 & rank-5 & rank-10 & rank-20 & mAP & rank-1 & rank-5 & rank-10 & rank-20 \\
\hline
UMDL \cite{peng2016unsupervised} & 12.4 & 34.5 & 52.6 & 59.6 & - & 7.3 & 18.5 & 31.4 & 37.6 & -\\
PT-GAN \cite{wei2017person} & - & 38.6 & - & 66.1 & - & - & 27.4 & - & 50.7 & -\\
SP-GAN \cite{deng2018image} & 22.8 & 51.5 & 70.1 & 76.8 & - & 22.3 & 41.1 & 56.6 & 63.0 & -\\
\hline
{Methods} & \multicolumn{5}{c}{Market-1501} \vline & \multicolumn{5}{c}{DukeMTMC-reID} \\
\cline{2-11}
(Unsupervised) & mAP & rank-1 & rank-5 & rank-10 & rank-20 & mAP & rank-1 & rank-5 & rank-10 & rank-20 \\
\hline
LOMO \cite{liao2015person} & 8.0 & 27.2 & 41.6 & 49.1 & - & 4.8 & 12.3 & 21.3 & 26.6 & -\\
Bow \cite{zheng2015scalable} & 14.8 & 35.8 & 52.4 & 60.3 & - & 8.3 & 17.1 & 28.8 & 34.9 & -\\
DPG-GAN \cite{ma2018disentangled} & 13.8 & 33.8 & - & - & - & 9.0 & 19.5 & 33.3 & 39.9 & 47.9\\
UPL \cite{fan2017unsupervised} & 20.1 & 44.7 & 59.1 & 65.6 & 71.7 & 16.4 & 30.4 & 44.5 & 50.7 & 56.0\\
CAMEL \cite{camel} & 26.3 & 54.5 & - & - & - & - & - & - & - & -\\
\hline
Our Method & \textbf{33.9} & \textbf{63.9} & \textbf{81.1} & \textbf{86.4} & \textbf{90.8} & \textbf{17.9} & \textbf{36.3} & \textbf{54.0} & \textbf{61.6} & \textbf{67.8}\\
\hline
\end{tabular}}
\end{table*}
\subsection{Model Fine-tuning}
\label{sec:model-finetune}
After mining the positive pairs of real data, we combine them with the generated virtual data to refine the previous coarse deep re-ID model. The triplet loss projects similar pairs into a feature space with a smaller distance than dissimilar pairs, and can therefore be adopted for the selected positive training pairs. Another reason to use the triplet loss on positive pairs is that we do not have real labels for the selected real images, so the cross-entropy loss cannot be computed.
During training, we randomly sample $N_r$ anchor images from the real data and their corresponding mined positive samples to form the training batch. For each image $p_i$, we assign the same pseudo class to $p_i$ and its mined positive sample $q_i^*$. We select the hardest (closest) sample $z_i$ from the other images in the batch as the negative sample of $p_i$. The final triplet loss function is as follows:
\begin{equation}
\begin{aligned}
L_{tri}=\sum_{i=1}^{N_r}\Big[||f(p_i)-f(q_i^*)||_2- ||f(p_i)-f(z_i)||_2+m\Big]_{+},
\end{aligned}
\label{eq:triLoss}
\end{equation}
where $m$ is a margin that is enforced between positive and negative pairs, and $f(\cdot)$ is the pooling-5 feature of the deep re-ID model. $N_r$ is the number of anchors in the training batch.
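A minimal NumPy sketch of this batch construction and loss (assuming Euclidean distances on pooling-5 features; the embedding network itself is abstracted away):

```python
import numpy as np

def mined_pair_triplet_loss(anchors, positives, m=0.3):
    """Triplet loss of Eq. (triLoss): each anchor p_i is paired with its
    mined positive q_i*, and the hardest (closest) embedding carrying a
    different pseudo label in the batch serves as the negative z_i."""
    N_r = anchors.shape[0]
    embeds = np.concatenate([anchors, positives])      # 2*N_r embeddings
    labels = np.concatenate([np.arange(N_r), np.arange(N_r)])
    loss = 0.0
    for i in range(N_r):
        d_pos = np.linalg.norm(anchors[i] - positives[i])
        d_all = np.linalg.norm(embeds - anchors[i], axis=1)
        d_neg = np.min(d_all[labels != i])             # hardest negative
        loss += max(d_pos - d_neg + m, 0.0)            # hinge [.]_+
    return loss

anchors = np.array([[0.0, 0.0], [0.5, 0.0]])
positives = np.array([[0.0, 1.0], [0.5, 1.0]])
print(mined_pair_triplet_loss(anchors, positives))  # ~1.6: both triplets violate the margin
```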
As we already have the pseudo labels of the generated virtual data, we directly use the IDE cross-entropy loss function $L_{cls}$. Merging these two losses into a multi-task training framework, we have the final loss:
\begin{equation}
L_{loss}=L_{cls}+\lambda L_{tri},
\label{eq:lossStage2}
\end{equation}
where $\lambda$ is a hyper-parameter controlling the influence of $L_{cls}$ and $L_{tri}$.
After each training epoch, the parameters of the deep re-ID model are updated, and the similarity matrix $\textbf{S}$ of the real data is updated accordingly. As a result, we perform a positive pair mining step in each epoch. The final model is trained using the loss function in Eq.~\ref{eq:lossStage2}. By doing so, the real data help increase the final re-ID accuracy by eliminating the negative effects of distorted virtual images, while the virtual data stabilize the training process and maintain the basic performance of the re-ID model.
\section{Experiments}
To evaluate the performance of our proposed method, we conduct experiments on two large-scale benchmark datasets: Market-1501~\cite{zheng2015scalable} and DukeMTMC-reID~\cite{zheng2017unlabeled,ristani2016MTMC}.
The mAP and rank-1 accuracy are adopted as evaluation metrics.
\textbf{Market-1501} dataset contains 32,668 bounding boxes of 1,501 identities obtained from six cameras.
751 persons are used for training while the rest for testing (750 identities, 19,732 images). The probe set contains 3,338 images for querying true person images from gallery set.
\textbf{DukeMTMC-reID} dataset is a subset of DukeMTMC \cite{ristani2016MTMC}, consisting of 36,411 labeled bounding boxes of 1,404 identities captured by 8 different cameras. Similar to the protocol of Market-1501, this dataset uses 16,522 images of 702 identities for training, with 2,228 probe images and 17,661 gallery images from the rest for testing.
\subsection{Experiment Settings}
\label{sec:exp}
\textbf{DPG-GAN:} We train DPG-GAN for 120,000 epochs with a batch size of 16. The learning rates of all networks are set to 0.00008 and divided by 10 every 10,000 epochs. All input images are resized to 128$\times$64. We use the same network architectures as in \cite{ma2018disentangled}.
In the virtual person generation stage, we use $N_p$ to denote the number of identities included in the virtual dataset, while $N_e$ denotes the number of images generated for each person. Unless otherwise specified, we generate virtual datasets with $N_p=600$ and $N_e=36$ for Market-1501, and with $N_p=600$ and $N_e=48$ for DukeMTMC-reID.
\textbf{Star-GAN:} The Adam solver is employed to train $G$ and $D$ of Star-GAN for a total of 200 epochs with a batch size of 40. Input images are resized to 128$\times$128. The learning rates for $D$ and $G$ are initialized to 0.0001 and linearly reduced to 0 over the last 100 epochs. We employ the network structures following \cite{choi2017stargan}.
During camera style translation, the one-hot label of the target camera is tiled and concatenated with the input image to form a 128$\times$128$\times$($N_c$+3) tensor, which is then fed to the U-Net-like generator for style translation. $N_c$ is the total number of cameras in the corresponding real dataset. We convert images from the virtual data to the different camera styles evenly; in other words, each image is transferred to the style of exactly one camera.
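The label-tiling step can be sketched as follows; this is an illustrative NumPy version under the assumption of channel-last H$\times$W$\times$C images, and the actual generator input pipeline may differ:

```python
import numpy as np

def concat_camera_label(image, cam_id, n_cams):
    # Tile the one-hot target-camera label over the spatial grid and
    # concatenate it with the image along the channel axis, giving an
    # H x W x (n_cams + 3) tensor for the style-translation generator.
    h, w, _ = image.shape
    onehot = np.zeros(n_cams, dtype=image.dtype)
    onehot[cam_id] = 1.0
    tiled = np.broadcast_to(onehot, (h, w, n_cams))
    return np.concatenate([image, tiled], axis=-1)
```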
\textbf{Re-ID model training:} We resize input images to 256$\times$128, and employ random horizontal flipping and random cropping for data augmentation. The SGD solver is used for optimization with a learning rate initialized to 0.1 and divided by 10 after 100 epochs. We train the re-ID model for 150 epochs in total. For positive pair mining, we first train the re-ID model using only the virtual data for 100 epochs. After that, we add the mined positive pairs from real data for fine-tuning over another 50 epochs. Other parameters are set as follows: the triplet loss with anchor batch size $N_r=50$ and margin $m=0.3$, $k$-reciprocal nearest neighbors with $k=50$, and $\lambda=1$ in Eq.~\ref{eq:lossStage2}.
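The $k$-reciprocal nearest-neighbour constraint used for mining can be sketched as follows; this is a brute-force illustrative version, and the function name and the plain Euclidean metric are our assumptions:

```python
import numpy as np

def k_reciprocal_neighbors(features, anchor_idx, k):
    # b is a k-reciprocal neighbour of the anchor if b is among the
    # anchor's k nearest neighbours AND the anchor is among b's.
    dists = np.linalg.norm(features - features[anchor_idx], axis=1)
    knn = np.argsort(dists)[1:k + 1]  # k nearest, excluding the anchor
    reciprocal = []
    for b in knn:
        d_b = np.linalg.norm(features - features[b], axis=1)
        if anchor_idx in np.argsort(d_b)[1:k + 1]:
            reciprocal.append(int(b))
    return reciprocal
```

Mined positive pairs are then drawn from this reciprocal set rather than from the plain $k$-NN list, which filters out many one-sided (and thus unreliable) matches.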
\subsection{Comparison with State-of-The-Art}
In order to compare with other competing unsupervised re-ID methods, we train two models with the generated virtual datasets for Market-1501 and DukeMTMC-reID, respectively.
All the experimental results of our method and other methods are reported in Table~\ref{tab:cmpDA}. As can be seen, our method outperforms all previous unsupervised re-ID methods. On Market-1501, we obtain a rank-1 accuracy of 63.9\%, which is 9.4\% higher than the previous state-of-the-art method CAMEL~\cite{camel}. On DukeMTMC-reID, our method also beats UPL~\cite{fan2017unsupervised}, with a 5.9\% higher rank-1 accuracy.
Additionally, we compare our method with three domain adaptation based methods. Domain adaptation methods can be considered semi-supervised: they train a model on one labeled dataset and then transfer it to a different dataset. As can be seen from Table~\ref{tab:cmpDA}, our method outperforms all three methods on the Market-1501 dataset, with a 12.4\% higher rank-1 accuracy than the best of them, SP-GAN. On the DukeMTMC-reID dataset, the accuracy of our method is higher than that of UMDL and PT-GAN, but lower than that of SP-GAN. The main reason is that the generated virtual images still contain many low-quality samples, which directly affects the accuracy of our method.
\subsection{Ablation Study}
\begin{table}[t]
\centering
\caption{Ablation study of our approach. Starting from the re-ID model trained on DPG-GAN data, we gradually add Star-GAN and positive pair mining and evaluate the re-ID accuracy.}
\label{tab:graduallyCMP}
\footnotesize
\begin{tabular}{l|cc|cc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Market-1501} & \multicolumn{2}{c}{DukeMTMC-reID} \\
& mAP & rank-1 & mAP & rank-1 \\
\hline
DPG-GAN & 13.4 & 33.8 & 9.0 & 19.5\\
\hline
+ Star-GAN & 25.1 & 51.7 & 13.9 & 30.3\\
\hline
+ Star-GAN+Mining & \textbf{33.9} & \textbf{63.9} & \textbf{17.9} & \textbf{36.3}\\
\hline
\end{tabular}
\end{table}
\begin{figure}[!t]
\centering\includegraphics[width=\linewidth]{figure/randCF.pdf}
\caption{The effectiveness of collaborative filtering based positive pair mining. We compare with the re-ID models trained without mining and with random selection mining.}
\label{fig:randCF}
\end{figure}
The method discussed in Section~\ref{sec:model} contains three main components: DPG-GAN, Star-GAN and positive pair mining. In order to determine which component contributes most to the accuracy, we evaluate the performance while gradually adding Star-GAN and positive pair mining to the re-ID model training. As can be seen in Table~\ref{tab:graduallyCMP}, after adding Star-GAN, the rank-1 accuracy on the Market-1501 dataset is boosted from 33.8\% to 51.7\%, which means that transferring each generated identity across the different camera styles plays a significant role in re-ID model initialization. Including the positive pair mining step, we observe a further 12.2\% rank-1 improvement on Market-1501. Adding real data into training helps reduce the gap between the generated virtual data and the real data. On the DukeMTMC-reID dataset, we have similar findings.
To assess the effectiveness of the proposed collaborative filtering based mining procedure, we perform another comparison between \textit{without mining}, \textit{random selection} and \textit{collaborative filtering}. Random selection is a simplified mining procedure which randomly chooses samples from the $k$-reciprocal nearest neighbors of anchor images as positive pairs. As shown in Fig.~\ref{fig:randCF}-(a), the mining step improves the result by a large margin, and the proposed collaborative filtering based mining outperforms random selection on rank-1 by 6.6\% and 2.9\% on the two datasets.
In the triplet training phase, we randomly select $N$ anchor images and their corresponding mined positive samples to form the training batch. For each anchor, we select the hardest sample as the negative from among the other $N-2$ images and their corresponding positive samples. In this way, the probability of selecting a true positive sample as the negative is very low when sampling a few images from a dataset containing a large number of images and identities. We evaluate the overall error rate during Market-1501 and DukeMTMC-reID training: it is 2.6\% and 3.1\%, respectively. Given our competitive performance, we consider this error rate small, so it is not necessary to explicitly avoid selecting a true positive sample as the negative.
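The hardest-negative selection within a batch can be sketched as follows; this is an illustrative version that excludes only the anchor and its mined positive, as described above, and the function name is our own:

```python
import numpy as np

def hardest_negative(features, anchor_idx, positive_idx):
    # For one anchor, pick as negative the closest feature among all
    # batch samples except the anchor and its mined positive
    # ("hardest" = smallest distance, i.e. hardest to push away).
    anchor = features[anchor_idx]
    dists = np.linalg.norm(features - anchor, axis=1)
    dists[[anchor_idx, positive_idx]] = np.inf  # exclude the pair itself
    return int(np.argmin(dists))
```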
We also validate the accuracy with which the mined pairs belong to the same identity during the whole fine-tuning step, using the ground-truth information of the two datasets, in Figure~\ref{fig:randCF}-(b). As shown, the accuracy of random selection is only 29.0\% and 14.5\% for Market-1501 and DukeMTMC-reID, respectively. After employing collaborative filtering, the accuracies rise to 67.5\% and 40.5\%, which shows that the quality of the mined positive pairs directly affects the final performance.
\subsection{Sensitivity Analysis}
\label{sec:SA}
To check the sensitivity of our method to different hyper-parameters, we perform a thorough evaluation of: (1) the scale of the generated virtual dataset ($N_p$ and $N_e$), (2) the batch size of real positive pairs $N_r$ and the value of $k$, and (3) the influence of $\lambda$ on the balance between $L_{cls}$ and $L_{tri}$.
\textbf{A large-scale virtual dataset has a positive impact}. We conduct a series of experiments to evaluate the influence of the scale of the generated virtual datasets. Figure~\ref{fig:nSA}-(a) presents the relationship between $N_p$ and rank-1 accuracy, obtained by changing $N_p$ from 100 to 700 with $N_e$ fixed to 36 and 48 for Market-1501 and DukeMTMC-reID, respectively. In general, the re-ID model performs better with larger $N_p$, but the accuracy begins to saturate once $N_p$ is large enough. The same is true for $N_e$, as shown in Figure~\ref{fig:nSA}-(b): we fix $N_p$ to 600 and vary $N_e$. The rank-1 accuracy saturates at $N_e=36$ for Market-1501 and $N_e=48$ for DukeMTMC-reID, respectively. These results demonstrate that enlarging the virtual dataset improves the performance of the model to a certain degree.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figure/nSA.pdf}
\caption{Sensitivity analysis for $N_e$ and $N_p$. Increasing the size of the virtual dataset can help improve re-ID accuracy to a certain degree.}
\label{fig:nSA}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[width=\linewidth]{figure/KBSA.pdf}
\caption{Sensitivity analysis for $k$ and $N_r$. Our approach is robust to changes of $k$ and $N_r$.}
\label{fig:kSA}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[width=\linewidth]{figure/lam.pdf}
\caption{Sensitivity analysis for $\lambda$ shows that $L_{tri}$ and $L_{cls}$ contribute equally to the accuracy of PGPPM. The best result is achieved when $\lambda$ is around 1.}
\label{fig:lam}
\end{figure}
\textbf{Varying $N_r$ and $k$ has little effect.} We perform another experiment to check how many positive pairs are needed for fine-tuning the final re-ID model. As shown in Figure~\ref{fig:kSA}-(a) and (b), the rank-1 accuracy fluctuates within a very small range around 63.9\% and 36.3\% for Market-1501 and DukeMTMC-reID under various $N_r$ and $k$. In practice, we still suggest using a large $k$ to ensure that $k$-reciprocal nearest neighbors can always be found.
\textbf{Both $L_{cls}$ and $L_{tri}$ are important for model optimization.} We evaluate the PGPPM model with different $\lambda$ values to find out which part contributes most to the accuracy of the model. Figure~\ref{fig:lam} shows that the rank-1 accuracy of PGPPM improves as $\lambda$ increases over the range $[0,1]$. However, when $\lambda$ exceeds 1, the rank-1 score begins to decrease. The best result is achieved when $\lambda$ is around 1. These results support our claim that both $L_{cls}$ and $L_{tri}$ are important for our model: $L_{cls}$ helps the model learn robust features, while $L_{tri}$ eliminates the negative effects brought by virtual images.
\section{Conclusion}
In this paper, we consider a challenging problem in person re-identification (re-ID), where labels are not provided in the training data. To optimize a deep re-ID model in a supervised way, this work generates a virtual dataset with a person generation model and a camera style model. Moreover, a collaborative filtering based positive pair mining approach is proposed to find reliable positive samples for refining the re-ID model. Experiments on two benchmark datasets show that our method outperforms current unsupervised re-ID algorithms. In future work, we will focus on learning a person generation model that jointly considers pose and camera variations and produces higher-quality virtual images.
\bibliographystyle{IEEEtran}
\section{Introduction and motivation}\label{sec:intro}
In this paper we study global existence and uniqueness of solutions for a family of parabolic systems of PDEs in one space dimension.
Although the literature concerning parabolic problems is very rich, results for nonlinear systems subject to non-homogeneous boundary conditions of different type (Dirichlet, Neumann or mixed) are not always available, even if the spatial domain is just an interval of the real line.
Moreover, many of the classical results for parabolic systems are formulated on a fixed time interval $[0,T]$ (as in \cite{ladyzhenskaia1988linear}), whereas we are interested in obtaining estimates for the solutions that are valid for arbitrarily large time.
The study of global properties becomes trickier when the system includes cross-diffusion and possibly degenerate terms, as they affect the regularizing effects of diffusion terms.
We choose to restrict our attention to the case $d=1$ in order to make the exposition clearer and to give neat statements. Indeed, working in one dimension offers several advantages in terms of regularity results and Sobolev embeddings.
In some cases it is possible to extend many of the results we present to $d>1$ (see e.g. \cite{alasio2018stability, alasio2019trend}), however, in general, one can not guarantee existence of global, classical solutions of strongly coupled systems (see, for example, \cite{mooney2017, stara1995some}).
The system we consider is complemented by mixed, time-dependent boundary conditions which are imposed at the extreme points of the domain.
Hyperbolic and parabolic systems of conservation laws with mixed and time-dependent boundary conditions arise naturally when studying the thermodynamics of some microscopic systems (see for example \cite{olla2014microscopic, olla2015microscopic}).
Our choice of boundary conditions was motivated by the case of an anharmonic chain of particles connected by nonlinear springs, where one end of the chain is fixed (homogeneous Dirichlet) and a constant force is applied at the other end (non-homogeneous Dirichlet).
By means of such boundary conditions and a suitable choice of the external force, it is possible to define thermodynamic transformations and deduce the first and second law of Thermodynamics (macroscopic laws) as consequences of the microscopic dynamics.
We refer to \cite{even2010hydrodynamic, MO1, MO2, olla2014microscopic} for further details concerning these physical models.
A prototypical model for our study is given by the following viscous p-system obtained as hydrodynamic limit (under hyperbolic space-time scaling) for the isothermal dynamics of an anharmonic chain subject to an external varying tension:
\begin{ex}\label{example}
As described in \cite{SM2}, a suitable choice of the microscopic model leads to the following viscous p-system
\begin{equation} \label{eq:psystem}
\begin{cases}
\partial_tr =\partial_xp+ \delta_1 \partial_{xx}\tau(r)
\\
\partial_tp =\partial_x\tau(r) + \delta_2\partial_{xx} p
\end{cases}, \qquad (t,x) \in (0,\infty) \times (0,1)
\end{equation}
with boundary conditions
\begin{equation}
p(t,0) = 0, \quad \tau(r(t,1)) = a(t), \quad \partial_x p(t,1)=\partial_x r(t,0)=0.
\end{equation}
Here $r$ and $p$ represent the infinitesimal strain and velocity of each point of an anharmonic spring of finite length, $\tau$ is the internal tension, and $a$ represents the boundary tension.
\end{ex}
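Purely as an illustration (not part of the original derivation), system \eqref{eq:psystem} with its mixed boundary conditions can be discretized by an explicit finite-difference scheme on a cell-centred grid; the ghost-cell treatment of the boundary conditions below is our own assumption:

```python
import numpy as np

def step_p_system(r, p, tau, a_t, dx, dt, d1=0.1, d2=0.1):
    # One explicit Euler step for the viscous p-system
    #   r_t = p_x + d1 * tau(r)_xx,   p_t = tau(r)_x + d2 * p_xx
    # on a cell-centred grid. Ghost cells encode the mixed boundary
    # conditions: p = 0 and d_x r = 0 at x = 0; tau(r) = a(t) and
    # d_x p = 0 at x = 1.
    tr = tau(r)
    tr_ext = np.concatenate(([tr[0]], tr, [2 * a_t - tr[-1]]))
    p_ext = np.concatenate(([-p[0]], p, [p[-1]]))
    dxc = lambda v: (v[2:] - v[:-2]) / (2 * dx)        # centred d/dx
    dxx = lambda v: (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    r_new = r + dt * (dxc(p_ext) + d1 * dxx(tr_ext))
    p_new = p + dt * (dxc(tr_ext) + d2 * dxx(p_ext))
    return r_new, p_new
```

The sketch is first-order in time and meant only to make the role of each boundary condition concrete; it is not a scheme analyzed in this paper.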
A second example of a system we can study is the following:
\begin{ex}
Consider the $4\times 4$ system of PDEs given by
\begin{align} \label{eq:sysmax}
\begin{cases}
\partial_t E_y = -\partial_x B_z + \delta_1 \partial_{xx}E_y
\\
\partial_t E_z= \partial_x B_y+ \delta_2 \partial_{xx}E_z
\\
\partial_t B_y = \partial_x E_z + \delta_3 \partial_{xx}B_y
\\
\partial_t B_z = - \partial_x E_y+ \delta_4 \partial_{xx}B_z
\end{cases}
\; ,
\end{align}
with boundary conditions
\begin{align}
\mathbf E(t,0)=0,\quad \mathbf B(t,1)= \mathbf b(t), \quad \partial_x \mathbf E(t,1)=0, \quad \partial_x \mathbf B(t,0)=0.
\end{align}
Here $\mathbf E =(E_y, E_z)$ and $\mathbf B =(B_y, B_z)$ can be interpreted as the $y$ and $z$ components of an electric and magnetic field in the vacuum.
When $\delta_i =0$, system \eqref{eq:sysmax} reduces to the $y$ and $z$ components of the Maxwell equations
\begin{align}
\partial_t \mathbf E = \operatorname{curl} \mathbf B, \qquad \partial_t \mathbf B =- \operatorname{curl} \mathbf E,
\end{align}
in the case where $\mathbf E$ and $\mathbf B$ propagate along the $x$-axis (that is, when they do not depend on $y$ and $z$). The Dirichlet boundary conditions fix the values of the electric field at $x=0$ and of the magnetic field at $x=1$. The Neumann conditions are no-flux conditions.
\end{ex}
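For completeness, the reduction to \eqref{eq:sysmax} with $\delta_i=0$ can be checked directly: when $\mathbf E$ and $\mathbf B$ depend only on $(t,x)$, the derivatives $\partial_y$ and $\partial_z$ vanish, so for any field $\mathbf F$
\begin{align*}
\operatorname{curl} \mathbf F = (\partial_y F_z - \partial_z F_y,\; \partial_z F_x - \partial_x F_z,\; \partial_x F_y - \partial_y F_x) = (0,\, -\partial_x F_z,\, \partial_x F_y).
\end{align*}
Hence $\partial_t \mathbf E = \operatorname{curl} \mathbf B$ yields $\partial_t E_y = -\partial_x B_z$ and $\partial_t E_z = \partial_x B_y$, while $\partial_t \mathbf B = -\operatorname{curl} \mathbf E$ yields $\partial_t B_y = \partial_x E_z$ and $\partial_t B_z = -\partial_x E_y$, which are exactly the equations of \eqref{eq:sysmax} with $\delta_i = 0$.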
\section{Set up and main results}
Consider the interval $I=({\ell_{-}}, {\ell_{+}}) \subset \mathbb R$, where ${\ell_{-}} < {\ell_{+}}$, and let $\xi^k$ denote the $k$-th component of a generic vector $\xi\in\mathbb R^N$.
We denote by $Q_T$ the parabolic cylinder $[0,T)\times I$.
For $k=1\dots N$, we consider the system of PDEs
\begin{equation}\label{eq:sysgen0}
\partial_t u^k = \partial_x \sum_{l=1}^N\left(
M^{kl} F^l(u) + A^{kl}\partial_x F^l(u)
\right),
\end{equation}
where the unknown $u$ is a vector function of the independent variables time $t$ and space $x$, namely $u:[0,T]\times \bar I\to\mathbb R^N$, $F:\mathbb R^N\to\mathbb R^N$ is a vector field of class $C^1$.
The matrices $A=\{A^{kl}\}$ and $M=\{M^{kl}\}$, for $k,l=1,\dots N$, will be defined precisely later on.
It is sometimes convenient to write equation \eqref{eq:sysgen0} in matrix form, in particular
\begin{equation} \label{eq:sysmatrix}
\partial_t u = \partial_x \left(
M F(u) + A \partial_x F(u)
\right).
\end{equation}
We shall also denote by $\cdot$ the scalar product of $\mathbb R^N$.
Equation \eqref{eq:sysgen0} is complemented by the following boundary conditions:
\begin{align}\label{eq:bc}
F^1(u(t,{\ell_{+}})) = a(t),
&\qquad (\partial_xF^1)(u(t,{\ell_{-}})) = 0, &
\nonumber \\
(\partial_xF^2)(u(t,{\ell_{+}})) = 0,
&\qquad F^2(u(t,{\ell_{-}})) = 0, &
\\
F^k(u(t,{\ell_{+}})) = 0,
&\qquad F^k(u(t,{\ell_{-}})) = 0, & \text{ for } k > 2
\nonumber
\end{align}
and by the initial condition
\begin{equation}\label{eq:ic}
u^k(0,x) = u^k_0(x).
\end{equation}
\begin{rem}
It is possible to replace \eqref{eq:bc} with other boundary conditions such as homogeneous Dirichlet or assigned periodic conditions and our results still hold with different constants.
\end{rem}
\noindent We consider the following set of assumptions:
\begin{enumerate}
\item[\bf{[H1]}]
We assume that $A$ and $M$ are $N\times N$ symmetric matrices. We suppose that $A=A(x)$ is (at least) of class $C^1$ with respect to $x\in \Bar{I}$ and that there exists a constant $\mu>0$ such that, for any $\xi\in\mathbb R^N$, $x\in\bar{I}$, it holds
$$\mu |\xi|^2\leq A(x)\xi\cdot\xi \leq \frac{1}{\mu}|\xi|^2.$$
We also impose the following compatibility conditions (for $k=1,\dots, N$):
\begin{align}
A^{1k}({\ell_{-}})= 0,\quad &k \neq 1,
\\
A^{2k}({\ell_{+}})= 0,\quad & k \neq 2.
\end{align}
We further assume that $M=M(t)$ is (at least) of class $C^\alpha$, for some $\alpha\in(0,1/2)$, bounded with respect to $t\in [0,\infty)$ and we require that
$$
M^{kk}=0, \qquad \forall k=1,\cdots,N.
$$
\item[\bf{[H2]}]
The function $a:[0,\infty)\to\mathbb R$ satisfies the following conditions:
$$
\nm{a}_{L^\infty(0,\infty)}\leq a_0,
\qquad
\left(
\int_0^\infty |a'(t)|^p \mathrm{d} t
\right)^{\frac{1}{p}} \leq a_1,
$$
for any $p\in[1,2]$ and suitable constants $a_i>0$, $i=0,1$.
\item[\bf{[H3]}]
The initial condition $u_0:\bar I\to\mathbb R^N$ is (at least) of class $C^{2}$ and it is compatible with the boundary conditions \eqref{eq:bc}.
\item[\bf{[H4]}]
There exists a convex function $H:\mathbb R^N\to \mathbb R_+$ of class $C^2$ and a constant $\lambda>0$ such that
$$
DH(u) = F(u)
\quad \text{ and } \quad
\mathrm{Hess}(H)(u)\xi\cdot\xi \geq \lambda |\xi|^2,
\quad \forall u,\xi \in\mathbb R^N.
$$
Furthermore, $F:\mathbb R^N \to \mathbb R^N$ is monotone of class $C^1$ and there exists a constant $\Lambda\geq\lambda$ such that, for any $ \xi_1,\xi_2\in\mathbb R^N$,
\begin{equation}\label{eq:monotone}
\lambda|\xi_1-\xi_2|^2
\leq
(F(\xi_1) - F(\xi_2))\cdot (\xi_1 - \xi_2)
\leq
\Lambda|\xi_1-\xi_2|^2.
\end{equation}
We also assume that $F(0)=0$.
\end{enumerate}
\begin{rem}[Initial datum in $L^2$]
If the initial datum is not smooth but, for example, it only belongs to $L^2(I)$, the initial time has to be excluded but our results still hold in a subset of the form $(t_0, T)$, for $t_0$ arbitrarily close to $0$.
\end{rem}
\begin{rem}
Model \eqref{eq:psystem} is obtained as a special case of system \eqref{eq:sysgen0} setting
$$
u = \begin{pmatrix} r \\ p\end{pmatrix},
\quad
F(u) = \begin{pmatrix} \tau(r) \\ p\end{pmatrix},
\quad
M = \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix},
\quad
A = \begin{pmatrix} \delta_1 & 0 \\ 0 & \delta_2\end{pmatrix}.
$$
Similarly, model \eqref{eq:sysmax} is obtained from system \eqref{eq:sysgen0} for $F(u)=u$ setting
$$
u = \begin{pmatrix} E_y \\ E_z \\ B_y \\ B_z\end{pmatrix},
\quad
M = \begin{pmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0\end{pmatrix},
\quad
A = \begin{pmatrix} \delta_1 & 0 & 0 & 0 \\ 0 & \delta_2 & 0 & 0 \\ 0 & 0 & \delta_3 & 0\\ 0 & 0 & 0 & \delta_4\end{pmatrix}.
$$
\end{rem}
Our main result is the following:
\begin{theor}\label{thm:main}
Under the hypotheses {\bf{[H1]}-\bf{[H4]}}, problem \eqref{eq:sysgen0}-\eqref{eq:ic} admits a bounded, global, classical solution in $C^1([0,\infty);C^0(\bar{I})) \cap C^0([0,\infty);C^2(\bar{I}))$.
Furthermore, there exists a constant $C>0$ independent of time such that
\begin{align}
\int_I &
\left(
|u(T,x)|^2 +
\mu|\partial_x F(u(T,x))|^2
\right)\mathrm{d} x
\nonumber \\ &+
\int_{Q_T}
\left(
\mu |\partial_x F(u)|^2
+
|\partial_t u|^2
+
|\partial_x (A \partial_x F(u))|^2
\right)
\mathrm{d} x \mathrm{d} t
\leq C.
\end{align}
\end{theor}
In Section \ref{sec:degenerate} we will briefly discuss a way to extend global existence results to a family of possibly degenerate systems (under suitable assumptions); this technique was established in \cite{jungel2015boundedness} (see also \cite{jungel2016entropy}).
Theorem \ref{thm:main} will be proved in Section \ref{sec:estimates}, where we will use the following two fundamental \quot{building blocks}:
\begin{theor}[Existence of classical solutions for short times, \cite{acquistapace1988quasilinear}]\label{thm:AT}
Let hypotheses {\bf{[H1]}-\bf{[H4]}} hold, then there exists a time $t_1\in(0,T)$ such that, given a sufficiently smooth initial datum $u_0$ that is compatible with the boundary conditions, problem \eqref{eq:sysgen0}-\eqref{eq:ic} has a unique solution $u$ in the interval $[0,t_1]$ which satisfies
$$
u\in C^{1+\alpha}([0,t_1];L^2(I))\cap C^{\alpha}([0,t_1];H^2(I)),
$$
where $\alpha\in (0,\frac{1}{2})$ depends on the regularity of $u_0$ and $M$.
Furthermore, we have
$$
\partial_t u, \, \partial_{xx} u \in C^{\alpha_1}([0,t_1];C^0(\bar{I})),
$$
for each $\alpha_1\in(0,\alpha)$.
\end{theor}
The following theorem, combined with uniform-in-time estimates, allows us to show global existence of classical solutions for a relatively wide class of cross-diffusion systems (see, for example, \cite{alasio2018stability, alasio2019trend} and references therein).
\begin{theor}[Criterion for global existence, \cite{amann1989dynamic}]\label{thm:amann}
Let hypotheses {\bf{[H1]}-\bf{[H4]}} hold and consider a solution $u$ of problem \eqref{eq:sysgen0}-\eqref{eq:ic}. Let $J(u_0)$ denote the maximal time interval of definition of $u$.
If there exists an exponent $\varepsilon\in (0,1)$ (not depending on time) such that
$$
u\in C^\varepsilon(J(u_0)\cap[0,T];C^0(\bar{I})),
$$
then $u$ is a global solution, i.e. $J(u_0)=[0,\infty)$.
\end{theor}
\begin{rem}[Notation]
Notice that, comparing Theorem \ref{thm:amann} with the original statement in \cite{amann1989dynamic}, we have that $\mathcal{G} = \mathbb R^N$.
Additionally, in our case the function $f$ introduced in \cite{amann1989dynamic} is \quot{affine in the gradient} in the sense specified therein. Finally, we do not use the notation $BUC^\varepsilon$ to denote the space of \quot{bounded, uniformly $\varepsilon$-H\"older continuous functions}.
\end{rem}
\section{Estimates for the general system}\label{sec:estimates}
In the present section we are going to derive the crucial estimates for solutions of system \eqref{eq:sysgen0}.
\begin{prop}\label{prop:est1}
Let hypotheses {{\bf[H1]}-{\bf[H4]}} hold.
There exists a constant $C_0>0$ independent of $A$ such that for any $T >0$, any solution $u$ of \eqref{eq:sysgen0}-\eqref{eq:ic} satisfies
\begin{equation}
\int_I |u(T,x)|^2 \mathrm{d} x
+\mu \int_0^T \int_I | \partial_x F(u)|^2 \mathrm{d} x \mathrm{d} t \le C_0.
\end{equation}
\end{prop}
\begin{proof}
Thanks to Theorem \ref{thm:AT}, we know that classical solutions exist on a (possibly short) time interval $[0,t_1]$, and therefore the maximal time interval of existence of $u$, denoted by $J(u_0)$, is well defined.
Let $t\in [0,T] \subseteq J(u_0)$; we test \eqref{eq:sysmatrix} against $F(u)$ and integrate over $I$:
\begin{equation} \label{eq:u1}
\int_I F(u) \cdot \partial_t u \mathrm{d} x
= \int_I \left\{ F(u) \cdot M \partial_x F(u) + F(u) \cdot \partial_x (A \partial_x F(u)) \right\} \mathrm{d} x
\end{equation}
Recall that, by assumption {\bf[H4]}, $F$ has a primitive, namely $F(u)= DH(u)$, where $H$ is a convex and non-negative scalar function. Therefore, we can write
$$
\int_I F(u) \cdot \partial_t u \mathrm{d} x = \int_I \partial_t H(u) \mathrm{d} x.
$$
Moreover, since $M$ is symmetric and independent of $x$, we have
\begin{align*}
F(u) \cdot M \partial_x F(u) = \frac{1}{2} \partial_x (F(u) \cdot M F(u)).
\end{align*}
Thus, after an integration by parts, \eqref{eq:u1} becomes
\begin{align}\label{eq:est1}
& \int_I(\partial_t H(u) +F(u)\cdot A \partial_x F(u) )\mathrm{d} x
\nonumber \\
&= \left( \frac{1}{2}F(u) \cdot M F(u)+F(u) \cdot A \partial_x F(u) \right) \biggr |_{\partial I}.
\end{align}
We evaluate the boundary term in \eqref{eq:est1}.
Since we have homogeneous Dirichlet boundary conditions
$
F^k(u(t,{\ell_{-}})) = F^k(u(t,{\ell_{+}}))=0,
$
for $k > 2$, we obtain
\begin{align*}
& \left( \frac{1}{2}F(u) \cdot M F(u)+F(u) \cdot A \partial_x F(u) \right) \biggr |_{\partial I}
\\
&= \left(\frac{1}{2}\sum_{l,k=1}^N M^{kl} F^k(u) F^l(u) + \sum_{l,k=1}^N A^{lk} F^k(u) \partial_x F^l(u) \right)\biggr |_{\partial I}
\\
&= \left( M^{12}F^1(u)F^2(u) + \sum_{l=1}^N (A^{l1}F^1(u)\partial_xF^l(u)+A^{l2}F^2(u)\partial_xF^l(u)) \right) \biggr |_{\partial I}.
\end{align*}
Moreover, since $\partial_x F^1(u(t,{\ell_{-}}))=F^2(u(t,{\ell_{-}}))=\partial_x F^2(u(t,{\ell_{+}})) = 0$ and $F^1(u(t,{\ell_{+}})) = a(t)$, we have
\begin{align*}
& \left( M^{12}F^1(u)F^2(u) + \sum_{l=1}^N (A^{l1}F^1(u)\partial_xF^l(u)+A^{l2}F^2(u)\partial_xF^l(u)) \right) \biggr |_{\partial I}
\\
&= a(t) \left( M^{12} F^2(u(t,{\ell_{+}}))+ \sum_{l \neq 2}A^{l1}({\ell_{+}}) \partial_x F^l(u(t,{\ell_{+}})) \right)
\\
&\qquad + \sum_{l\neq 2} A^{l2}({\ell_{+}})F^2(u(t,{\ell_{+}}))\partial_x F^l(u(t,{\ell_{+}}))
\\
& \qquad-\sum_{l \neq 1} A^{l1}({\ell_{-}})F^1(u(t,{\ell_{-}}))\partial_x F^l(u(t,{\ell_{-}})).
\end{align*}
Finally, since $A^{l2}({\ell_{+}})=0$ for $l \neq 2$ and $A^{l1}({\ell_{-}})=0$ for $l \neq 1$, we obtain
\begin{align}\label{eq:survivingbc}
&\left( \frac{1}{2}F(u) \cdot M F(u)+F(u) \cdot A \partial_x F(u) \right) \biggr |_{\partial I} \nonumber
\\
& = a(t) \left( M^{12} F^2(u(t,{\ell_{+}}))+ \sum_{l \neq 2}A^{l1}({\ell_{+}}) \partial_x F^l(u(t,{\ell_{+}})) \right).
\end{align}
We deduce the value of the last bracket in \eqref{eq:survivingbc} by integrating equation \eqref{eq:sysgen0} for $k=1$ with respect to $x\in I$:
\begin{equation}
M^{12} F^2(u(t,{\ell_{+}}))+ \sum_{l \neq 2}A^{l1}({\ell_{+}}) \partial_x F^l(u(t,{\ell_{+}})) = \partial_t \int_I u^1 \mathrm{d} x.
\end{equation}
Given the boundary terms above, integrating \eqref{eq:est1} in time, we obtain
\begin{align} \label{eq:split}
\begin{split}
& \int_I H(u(T,x)) \mathrm{d} x+ \int_0^T \int_I \partial_x F(u) \cdot A \partial_x F(u) \mathrm{d} x \mathrm{d} t
\\
& =\int_I H(u_0) \mathrm{d} x+ \int_0^T a(t) \partial_t \int_I u^1 \mathrm{d} x \mathrm{d} t
\\
& =\int_I H(u_0) \mathrm{d} x+\left( a(t) \int_I u^1\mathrm{d} x \right)\biggr |_{t=0}^{t=T} - \int_0^T a'(t) \int_I u^1\mathrm{d} x \mathrm{d} t.
\end{split}
\end{align}
Let us estimate the last two terms in \eqref{eq:split}, in particular we have
\begin{align}\label{eq:estc}
a(T)\int_I u^1(T,x)\mathrm{d} x
&\le \frac{{\ell_{+}}-{\ell_{-}}}{\lambda}
a(T)^2 +\frac{\lambda}{4}\int_I (u^1(T,x))^2\mathrm{d} x,
\end{align}
and, using Young's inequality,
\begin{equation}\label{eq:est}
-\int_0^T a'(t) \int_I u^1 \mathrm{d} x \mathrm{d} t \le
\frac{{\ell_{+}}-{\ell_{-}}}{4}\int_0^T |a'(t)| \mathrm{d} t
+ \int_0^T |a'(t)| \int_I(u^1)^2 \mathrm{d} x \mathrm{d} t.
\end{equation}
We recall that, by assumption {\bf [H1]}, it holds
$
\partial_x F(u) \cdot A \partial_x F(u) \ge \mu |\partial_x F(u)|^2
$
and, additionally, by assumption {\bf [H4]}, we have
$
H(u) \ge \frac{\lambda}{2} |u|^2.
$
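Indeed, the latter bound follows from Taylor's theorem: since $H\geq 0$, $DH(0)=F(0)=0$ and $\mathrm{Hess}(H)\geq \lambda \operatorname{Id}$, there exists $\xi$ on the segment joining $0$ and $u$ such that
$$
H(u) = H(0) + DH(0)\cdot u + \frac{1}{2}\,\mathrm{Hess}(H)(\xi)\, u\cdot u \geq \frac{\lambda}{2} |u|^2.
$$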
Therefore, combining inequalities \eqref{eq:split}, \eqref{eq:estc} and \eqref{eq:est},
we obtain the following estimate:
\begin{align}\label{eq:estd}
\frac{\lambda}{4} \int_I
&|u(T,x)|^2 \mathrm{d} x +\int_0^T \int_I \mu|\partial_x F(u)|^2 \mathrm{d} x \mathrm{d} t
\nonumber \\
&\le C(T)
+ \int_0^T |a'(t)| \int_I (u^1)^2 \mathrm{d} x \mathrm{d} t,
\end{align}
where, using assumption
{\bf [H2]}, we have
\begin{align*}
C(T)
&= \int_I H(u_0) \mathrm{d} x
+
({\ell_{+}}-{\ell_{-}})\left(
\frac{1}{\lambda}
a(T)^2
+
\frac{1}{4}\int_0^T |a'(t))| \mathrm{d} t
\right)
\\
&\leq
\int_I H(u_0) \mathrm{d} x
+
({\ell_{+}}-{\ell_{-}})\left(
\frac{1}{\lambda}
a_0^2
+
\frac{1}{4}a_1
\right).
\end{align*}
Finally, thanks to \eqref{eq:estd}, we apply Gr\"onwall's inequality
and obtain
\begin{align}
\frac{\lambda}{4} \int_I
&
|u(T,x)|^2 \mathrm{d} x
+\mu \int_0^T \int_I |\partial_x F(u)|^2
\mathrm{d} x \mathrm{d} t
\nonumber \\
&\leq
C(T)
\exp\left(\int_0^T |a'(t)| \mathrm{d} t\right)
\nonumber \\
&\leq
\left(
\int_I H(u_0) \mathrm{d} x
+
({\ell_{+}}-{\ell_{-}})\left(
\frac{1}{\lambda}
a_0^2
+
\frac{1}{4}a_1
\right)
\right)
\exp(a_1).
\end{align}
Thus each term is bounded by a constant independent of $T$.
\end{proof}
In the next Proposition we will obtain stronger estimates involving first derivatives in time and second derivatives in space.
\begin{prop}\label{prop:est2}
Let hypotheses {\bf{[H1]}-\bf{[H4]}} hold and consider a solution $u$ of \eqref{eq:sysgen0}-\eqref{eq:ic}.
Assume that $\mathrm{Hess}(H) \geq \lambda \operatorname{Id}$, where
$\lambda > \frac{1}{4\sigma} > \frac{1}{2}$,
for some $\sigma\in (0,\frac{1}{2})$.
Then there exists a constant $C_1>0$ independent of $T$ (but depending on $A,M,u_0, C_0, a, \sigma, \mu, \lambda)$ such that
\begin{equation}
\int_{Q_T}
\left(
|\partial_t u|^2
+
|\partial_x (A \partial_x F(u))|^2
\right)
\mathrm{d} x\mathrm{d} t
+
\int_I
|\partial_x F(u)|^2\big|_{t=0}^{t=T}
\mathrm{d} x
\leq
C_1.
\end{equation}
Additionally, $u\in L^2(0,T;H^2(I)) \cap H^1(0,T;L^2(I))$ uniformly for all $T>0$.
\end{prop}
\begin{rem}
Notice that the condition $\lambda>\frac{1}{2}$ can be removed by re-scaling the time variable in equation \eqref{eq:sysgen0}.
\end{rem}
\begin{proof}
Given $T\in J(u_0)$, we test the general system
\begin{equation}\label{eq:sysagain}
\partial_t u = \partial_x \left(
M F(u) + A \partial_x F(u)
\right)
\end{equation}
(with $A=A(x)$ and $M=M(t)$) against $\partial_t F(u) - \Xi$, where $\Xi = \partial_x (A \partial_x F(u))$; namely we obtain
\begin{align}\label{eq:testeq}
& \int_{Q_T}
\left[\partial_t u - \Xi \right]
\cdot
\left[\partial_t F(u) - \Xi \right]
\mathrm{d} x\mathrm{d} t
\nonumber \\
&=
\int_{Q_T}
\partial_x (MF(u))
\cdot
\left[\partial_t F(u) - \Xi \right]
\mathrm{d} x\mathrm{d} t.
\end{align}
Let us denote the left-hand side and right-hand side of \eqref{eq:testeq} by $\mathcal{L}$ and $\mathcal{R}$ respectively. We are going to estimate the following term from below:
$$
\mathcal{L} =
\int_{Q_T}
\left[
\mathrm{Hess}(H) \partial_t u \cdot \partial_t u
+
|\Xi|^2
-
(\partial_t u + \partial_t F(u))\cdot \Xi \right]
\mathrm{d} x\mathrm{d} t,
$$
in particular, we estimate the two \quot{mixed terms} separately. For the first one we have
$$
-\int_{Q_T} \partial_t u \cdot \Xi
\mathrm{d} x\mathrm{d} t
\geq
- \int_{Q_T}
\left[
\sigma \lambda
|\partial_t u |^2
+
\frac{1}{4\sigma \lambda}
|\Xi|^2
\right]
\mathrm{d} x \mathrm{d} t,
$$
whereas for the second one we have
\begin{align*}
-\int_{Q_T} \partial_t F(u) \cdot \Xi
\mathrm{d} x\mathrm{d} t
&=
-\int_{Q_T} \partial_t F(u) \cdot \partial_x (A \partial_x F(u))
\mathrm{d} x\mathrm{d} t
\\
&=
\int_{Q_T} \frac{1}{2} \partial_t
\left( \partial_x F(u) \cdot A \partial_x F(u)
\right)
\mathrm{d} x\mathrm{d} t
\\ & \qquad -
\int_0^T
(
\partial_t F(u) \cdot A \partial_x F(u)
)\bigg|_{\partial I}
\mathrm{d} t.
\end{align*}
We evaluate the boundary term above, indeed, using equation \eqref{eq:survivingbc}, we have
\begin{align*}
\partial_tF(u)\cdot A \partial_x F(u) \biggr |_{\partial I}
&= \sum_{k,l= 1}^N \left( \partial_t F^k(u) A^{kl}\partial_xF^l(u) \right) \biggr |_{\partial I}
\\
& = \partial_t F^1(u(t,{\ell_{+}})) \sum_{l \neq 2} A^{1l}({\ell_{+}})\partial_xF^l(u(t,{\ell_{+}}))
\\
& = a'(t) \left[ \int_I \partial_t u^1 \mathrm{d} x - M^{12}F^2(u(t,{\ell_{+}})) \right].
\end{align*}
Thus, the left-hand side of \eqref{eq:testeq} satisfies the following inequality:
\begin{align}\label{eq:estlhs}
\mathcal{L} &\geq
\int_{Q_T}
\left[
\lambda(1-\sigma) |\partial_t u|^2
+
\left(1-\frac{1}{4\sigma \lambda}\right)
|\Xi|^2
+
\frac{1}{2} \partial_t
\left( A \partial_x F(u) \cdot \partial_x F(u)
\right)
\right]
\mathrm{d} x\mathrm{d} t
\nonumber \\
&\quad -
\int_0^T a'(t)
\left[ \int_I \partial_t u^1 \mathrm{d} x - M^{12}F^2(u(t,{\ell_{+}})) \right]
\mathrm{d} t.
\end{align}
Concerning the right-hand side of \eqref{eq:testeq}, using Young's inequality we obtain
\begin{align}\label{eq:estrhs}
\mathcal{R} &\leq
\frac{1}{2}\left( \frac{1}{\lambda(1-\sigma)} +
\left(1-\frac{1}{4\sigma \lambda}\right)^{-1} \right)
\int_{Q_T}
|\partial_x (M F(u))|^2
\mathrm{d} x\mathrm{d} t
\nonumber \\
&\quad +
\frac{1}{2}
\int_{Q_T}
\left[
\lambda(1-\sigma) |\partial_t u|^2
+
\left(1-\frac{1}{4\sigma \lambda}\right)
|\Xi|^2
\right]
\mathrm{d} x\mathrm{d} t.
\end{align}
Combining the estimates for $\mathcal{L}$ and $\mathcal{R}$ (i.e. \eqref{eq:testeq}, \eqref{eq:estlhs} and \eqref{eq:estrhs}), we have
\begin{align*}
&
\int_{Q_T}
\left[
\lambda(1-\sigma) |\partial_t u|^2
+
\left(1-\frac{1}{4\sigma \lambda}\right)
|\Xi|^2
\right]
\mathrm{d} x\mathrm{d} t
+
\int_I
A \partial_x F(u) \cdot \partial_x F(u)\big|_{t=0}^{t=T}
\mathrm{d} x
\\
&\leq
K_{\lambda \sigma}
\int_{Q_T}
|\partial_x (M F(u))|^2
\mathrm{d} x\mathrm{d} t
+
\int_0^T a'(t)
\left[ \int_I \partial_t u^1 \mathrm{d} x - M^{12}F^2(u(t,{\ell_{+}})) \right]
\mathrm{d} t,
\end{align*}
where
$K_{\lambda \sigma} = \frac{1}{\lambda(1-\sigma)} + \frac{4\sigma \lambda}{4\sigma \lambda-1} $.
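As an arithmetic sanity check (purely illustrative, not part of the proof), one can verify numerically that $\left(1-\frac{1}{4\sigma\lambda}\right)^{-1}=\frac{4\sigma\lambda}{4\sigma\lambda-1}$, so that twice the prefactor appearing in \eqref{eq:estrhs} equals $K_{\lambda\sigma}$; the sample values of $\lambda$ and $\sigma$ below are arbitrary, subject to $0<\sigma<1$ and $4\sigma\lambda>1$:

```python
# Illustrative check (not part of the proof): twice the prefactor in the
# estimate of R equals K_{lambda sigma} as defined in the text, i.e.
# (1 - 1/(4*sig*lam))**(-1) == 4*sig*lam / (4*sig*lam - 1).
def twice_prefactor(lam, sig):
    return 1 / (lam * (1 - sig)) + 1 / (1 - 1 / (4 * sig * lam))

def K(lam, sig):
    return 1 / (lam * (1 - sig)) + 4 * sig * lam / (4 * sig * lam - 1)

# arbitrary sample values with 0 < sig < 1 and 4*sig*lam > 1
for lam, sig in [(2.0, 0.3), (5.0, 0.45), (1.5, 0.25)]:
    assert abs(twice_prefactor(lam, sig) - K(lam, sig)) < 1e-12
```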
We recall that, for $k\geq 2$, we have $F^k(u(t,{\ell_{-}}))=0$, hence, by Poincar\'e's inequality (with constant $C_P$), we obtain
$$
\int_I |F^k(u)|^2 \mathrm{d} x
\leq C_P
\int_I |\partial_x F^k(u)|^2 \mathrm{d} x, \quad \forall k\geq 2.
$$
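For completeness, this one-dimensional Poincar\'e inequality follows from the fundamental theorem of calculus and the Cauchy--Schwarz inequality, using $F^k(u(t,{\ell_{-}}))=0$; in particular one may take $C_P=|I|^2$:

```latex
$$
|F^k(u(t,x))|^2
=
\left|\int_{\ell_-}^{x}\partial_y F^k(u(t,y))\,\mathrm{d} y\right|^2
\leq
|I|\int_I |\partial_x F^k(u)|^2 \,\mathrm{d} x .
$$
```

Integrating over $x\in I$ yields the stated bound.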
Using Morrey's inequality (with constant $C_S$) and Proposition \ref{prop:est1}, we get
$$
\int_0^T \nm{F^2(u(t,\cdot))}_{L^\infty(I)}^2 \mathrm{d} t
\leq
C_S \int_0^T \nm{F^2(u(t,\cdot))}_{H^1(I)}^2 \mathrm{d} t
\leq \frac{(1 + C_P)C_SC_0}{\mu}.
$$
Consequently, we deduce that
\begin{align*}
- & \int_0^T a'(t)
M^{12} F^2(u(t,{\ell_{+}})) \mathrm{d} t
\\
&\leq
\nm{M^{12}}_{L^\infty(0,\infty)}
\left(
\int_0^T a'(t)^2 \mathrm{d} t
\right)^{\frac{1}{2}}
\left(\int_0^T \nm{F^2(u(t,\cdot))}_{L^\infty(I)}^2 \mathrm{d} t
\right)^{\frac{1}{2}}
\\
&\leq
\nm{M^{12}}_{L^\infty(0,\infty)}
a_1
\sqrt{\frac{(1 + C_P)C_SC_0}{\mu}},
\end{align*}
where $a_1$ was introduced in {\bf{[H2]}}. Thanks to Proposition \ref{prop:est1}, we also have
$$
\int_{Q_T}
|\partial_x (M F(u))|^2
\mathrm{d} x\mathrm{d} t
\leq \nm{M}_{L^\infty(0,\infty)}^2 \frac{C_0}{\mu}.
$$
Similarly, we also have the bound
\begin{align*}
\int_{Q_T} a'(t)
\partial_t u^1 \mathrm{d} x \mathrm{d} t
&\leq
\frac{1}{4\lambda\sigma}a_1^2
+
\lambda\sigma \int_{Q_T}
|\partial_t u^1|^2 \mathrm{d} x \mathrm{d} t.
\end{align*}
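The estimate above is again the pointwise Young inequality, integrated over $Q_T$ (the $a'$ contribution is then controlled via {\bf{[H2]}}):

```latex
$$
a'(t)\,\partial_t u^1
\;\leq\;
\frac{1}{4\lambda\sigma}\,a'(t)^2
+
\lambda\sigma\,|\partial_t u^1|^2 .
$$
```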
Finally, we deduce that
\begin{align*}
& \lambda(1-2\sigma)
\int_{Q_T} |\partial_t u|^2 \mathrm{d} x\mathrm{d} t
\\
&\qquad+
\left(1-\frac{1}{4\sigma \lambda}\right)
\int_{Q_T} |\Xi|^2 \mathrm{d} x\mathrm{d} t
+
\mu \int_I
|\partial_x F(u(T))|^2
\mathrm{d} x
\\
&\leq
K_{\lambda \sigma}
\frac{1}{4\lambda\sigma}a_1^2
+
\nm{M^{12}}_{L^\infty(0,\infty)}
a_1
\sqrt{\frac{(1 + C_P)C_SC_0}{\mu}}
\\
& \qquad
+
\nm{M}_{L^\infty(0,\infty)}^2 \frac{C_0}{\mu}
+
\frac{1}{\mu} \int_I
|\partial_x F(u_0)|^2
\mathrm{d} x,
\end{align*}
where all constants are independent of time.
Notice that, since we have obtained a bound for $\partial_t u$, we can use equation \eqref{eq:sysagain} and Proposition \ref{prop:est1} to deduce that $F(u)\in L^2(0,T;H^2(I))$.
Furthermore, we also have a uniform estimate for $F(u)$ in $L^\infty(0,T;H^1(I))$, which implies
$F(u) \in L^\infty((0,T)\times I)$. Since $F$ is monotone and satisfies \eqref{eq:monotone}, this gives $u\in L^\infty((0,T)\times I)$.
In conclusion, knowing that $u$ is bounded, we deduce that the estimates for $F(u)$ lead to analogous bounds for $u$ in $L^2(0,T;H^2(I)) \cap H^1(0,T;L^2(I))$.
\end{proof}
The following technical result will be used in the proof of Theorem \ref{thm:main}.
\begin{lem}\label{lem:fractional}
Let $f: Q_T \to\mathbb R$ be a function in $X = L^2(0,T;H^2(I)) \cap H^1(0,T;L^2(I))$. Then
$$
f\in H^r(0,T;H^s(I)), \; \forall \; r,s\geq 0
\; \text{ such that } \; r + \frac{s}{2} \leq 1,
$$
and, in turn,
$$
f\in C^{\alpha, \beta}(\bar{Q}_T)= C^{0,\alpha}([0,T];C^{0,\beta}(\bar{I})), \; \forall \; \alpha,\beta\geq 0
\; \text{ such that } \; 2\alpha + \beta \leq \frac{1}{2}.
$$
\end{lem}
\begin{proof}
Thanks to higher-order extension operators for Sobolev functions, we can extend $f$ to a larger rectangular domain $R\subseteq \mathbb R^2$ containing $Q_T$. Introducing a cut-off function, we further extend $f$ to the whole space, ensuring sufficiently fast decay at infinity.
Let us call $g$ such an extension and
observe that the norm of $g$ in $X'=L^2(\mathbb R;H^2(\mathbb R))\cap H^1(\mathbb R;L^2(\mathbb R))$ is controlled by the corresponding norms of $f$ on $Q_T$.
In particular, for a suitable choice of $g$, we have the inequality:
$$
\nm{g}_{X'} \leq 2\nm{f}_{X}.
$$
Let $\langle \kappa \rangle = (1+|\kappa|^2)^{1/2}$. Denoting by $(\omega, \kappa)$ the conjugate variables of $(t,x)$ in Fourier space, we have that
$$
\langle \omega \rangle \hat{g} \in L^2(\mathbb R^2),
\quad
\langle \kappa \rangle^2 \hat{g} \in L^2(\mathbb R^2).
$$
This implies that
$
\left(
\langle \omega \rangle + \langle \kappa \rangle^2
\right) \hat{g} \in L^2(\mathbb R^2)
$
and we obtain
$$
\langle \omega \rangle^r \langle \kappa \rangle^s |\hat{g}|
\leq
\left(
\langle \omega \rangle + \langle \kappa \rangle^2
\right)^{r+\frac{s}{2}} |\hat{g}|.
$$
We obtain the desired fractional Sobolev regularity provided that $r+\frac{s}{2}\leq 1$.
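As a purely illustrative spot-check of the pointwise multiplier inequality above (it holds because $\langle\omega\rangle$ and $\langle\kappa\rangle^2$ are both bounded by $\langle\omega\rangle+\langle\kappa\rangle^2\geq 1$), one can sample random frequencies and admissible exponents:

```python
import random

def lhs(om, ka, r, s):
    # <omega>^r * <kappa>^s with <x> = (1 + x^2)^(1/2)
    return (1 + om * om) ** (r / 2) * (1 + ka * ka) ** (s / 2)

def rhs(om, ka, r, s):
    # (<omega> + <kappa>^2)^(r + s/2)
    return ((1 + om * om) ** 0.5 + (1 + ka * ka)) ** (r + s / 2)

random.seed(0)
for _ in range(1000):
    om, ka = random.uniform(-50, 50), random.uniform(-50, 50)
    r = random.uniform(0, 1)
    s = random.uniform(0, 2 * (1 - r))  # enforce r + s/2 <= 1, as in the lemma
    assert lhs(om, ka, r, s) <= rhs(om, ka, r, s) * (1 + 1e-12)
```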
The H\"older regularity follows from the standard embeddings for fractional Sobolev spaces (see e.g. \cite{dinezza2012hitchhiker}). In particular, for $r,s>\frac{1}{2}$, we take $\alpha = r - \frac{1}{2}$ and $\beta = s - \frac{1}{2}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
Thanks to Theorem \ref{thm:AT}, we know that classical solutions exist for short times and that, as explained in \cite{acquistapace1988quasilinear}, they can be extended by standard methods to a maximal interval of existence denoted by $J(u_0)$.
In order to show that such solutions exist for arbitrarily large time we are going to use the criterion provided by Theorem \ref{thm:amann}.
In particular, we need H\"older continuity of $u$ with respect to time, as well as a uniform $L^\infty$ bound in the space variable.
Thanks to Proposition \ref{prop:est2} we know that $u\in L^2(0,T;H^2(I)) \cap H^1(0,T;L^2(I))$ uniformly in time. This implies that we can apply Lemma \ref{lem:fractional} and obtain uniform H\"older estimates. Thus Theorem \ref{thm:amann} allows us to conclude the proof.
\end{proof}
\section{Degenerate systems}\label{sec:degenerate}
Consider a system of equations of the type:
\begin{equation}\label{sys0}
\partial_t u = \partial_x (M(t)F(u) + A(u) \partial_x F(u)).
\end{equation}
Notice that this is similar to system \eqref{eq:sysgen0}, but the matrix $A(x)$ has been replaced by $A(u)$, which now plays the role of a \quot{mobility matrix} (whereas $M$ satisfies the same assumptions introduced earlier).
Such a system is possibly degenerate in the sense that the matrices $A(u)$ and $DF(u)$ may vanish if $u=0$ (for all the details see Definition \ref{entropystructure} and condition \eqref{eq:coercivity} below).
Also in this case, we will prove global existence of solutions under suitable assumptions on $F$. We will show that entropy methods developed in recent years, and, in particular, the so-called \quot{boundedness-by-entropy principle} presented in \cite{jungel2015boundedness}, can be applied without major modifications.
\begin{defi}[Entropy structure]\label{entropystructure}
We say system \eqref{sys0} has an \emph{entropy structure} if
there exists a function $H:\mathbb R^N\to\mathbb R$ such that
\begin{itemize}
\item $H$ is a convex function of class $C^{2}$ and it defines the following
\emph{entropy functional}
$$
E[u]=\int_{I}H(u) \mathrm{d} x.
$$
\item the map $F(\cdot) = DH(\cdot)$ (i.e. the gradient of $H$) defines a change of coordinates (bi-Lipschitz diffeomorphism) from an open and connected domain $U\subset\mathbb R^{N}$ onto the whole $\mathbb{R}^{N}$.
\end{itemize}
\end{defi}
\begin{defi}[Weak formulation]
\label{def:weaksolsys}
We say that the vector function $u$ is a weak solution of \eqref{sys0} subject to the Dirichlet boundary condition $u(t,{\ell_\pm})=0$, for a.e. $t>0$, if
$$
u\in L^{2}(Q_T),
\quad
A(u)\partial_x F(u)\in L^{2}(Q_T),
\quad
\partial_{t}u\in L^{2}(0,T;(H^{1}(I))'),
$$
and, for any test function $\eta\in C_0^\infty(I)$ and a.e. $t\geq 0$, it holds that
$$
\langle \partial_t u, \eta \rangle
+
\int_{I}
\left[
A(u)\partial_x F(u) \cdot \partial_x \eta
-
\partial_x (M F(u)) \cdot \eta
\right]
\mathrm{d} x
=0,
$$
where $\langle\cdot,\cdot\rangle$ indicates the duality pairing.
Moreover, we require $u(t,\cdot)\to u_0(\cdot)$ in $(H^{1}(I))'$ as $t\to 0$.
\end{defi}
\begin{rem}[Entropy decay]
The new unknown $w\in\mathbb R^N$ obtained setting $w=DH(u)=F(u)$, for $u\in U$, is commonly referred to as the \emph{entropy variable}. The domain $U$ (from Definition \ref{entropystructure}) is typically a bounded Lipschitz subset of $\mathbb R^N$.
We consider the boundary condition $F(u)\big|_{\partial I} = 0$, which implies $w\big|_{\partial I} = 0$.
Using Definition \ref{entropystructure}, we will see that for solutions of \eqref{sys0} in the sense of Definition \ref{def:weaksolsys} we have $\frac{dE}{dt}\leq 0$.
\end{rem}
We now present the main existence result of this section.
\begin{theor}[Boundedness-by-entropy principle, \cite{jungel2016entropy}]
\label{thm:existsys-with-entropy}
Consider problem \eqref{sys0} with boundary condition $u(t,{\ell_\pm})=0$ for a.e. $t>0$, and let $U$ be an open and bounded subset of $\mathbb{R}^{N}$. Assume the following hypotheses:
\begin{enumerate}
\item
There exist $\gamma_1,\gamma_2\in\mathbb R$ such that $\gamma_1<\gamma_2$ and $U \subset(\gamma_1,\gamma_2)^{N}$.
Furthermore, there exist $\alpha_{i}^{*},m_{i}\geq0$ ($i=1,\dots,N$) such
that for any vector $\xi\in\mathbb R^N$ and any $u\in U$
\begin{equation}\label{eq:coercivity}
[ A(u)\mathrm{Hess}(H)(u) ]
\xi\cdot \xi
\geq\sum_{i=1}^{N}\alpha_{i}(u^{i})^{2}(\xi^{i})^{2},
\end{equation}
where $\alpha_{i}(u^{i})$ coincides either with $\alpha_{i}^{*}(u^{i}-\gamma_1)^{m_{i}-1}$
or with $\alpha_{i}^{*}(\gamma_2-u^{i})^{m_{i}-1}$.
\item
We have $A\in C^0(\bar{U};\mathbb R^{N\times N})$ and there exists $L>0$ such that, for all $u\in U$
and all $i,j=1,\dots,N$ for which $m_{j}>1$, it holds that
$
|[A(u)\mathrm{Hess}(H)(u)]^{ij}|\leq L |\alpha_{j}(u^{j})|.
$
\item
It holds $u_{0}(x)\in U$ for a.e. $x\in I$.
\end{enumerate}
Then there exists a bounded weak solution $u$, taking values in $\bar{U}$, of problem \eqref{sys0} in the sense of Definition \ref{def:weaksolsys} for all $t>0$.
\end{theor}
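Before the proof, we illustrate hypothesis (1) with a small numerical spot-check (purely illustrative; the choices of $H$ and $A$ below are hypothetical examples, not taken from the text). For $N=1$, $H(u)=u\log u-u$ on $U=(0,1)$ and $A(u)=u^2$, we have $A(u)\mathrm{Hess}(H)(u)=u$, and \eqref{eq:coercivity} holds with $\gamma_1=0$, $m_1=3/2$, $\alpha_1^*=1$:

```python
# Illustrative check of the coercivity condition for N = 1 with the
# hypothetical choices H(u) = u*log(u) - u (so Hess(H)(u) = 1/u) and
# A(u) = u**2 on U = (0,1); these are examples, not data from the text.
def lhs(u, xi):
    return (u * u) * (1.0 / u) * xi * xi   # [A(u) Hess(H)(u)] xi . xi = u * xi^2

def rhs(u, xi):
    alpha = (u - 0.0) ** (1.5 - 1.0)       # alpha_1(u) = alpha_1^*(u - gamma_1)^(m_1 - 1)
    return alpha ** 2 * xi * xi            # = u * xi^2 as well

for u in (0.1, 0.5, 0.9):
    for xi in (-2.0, 0.0, 1.5):
        assert lhs(u, xi) >= rhs(u, xi) - 1e-12
```

Here the condition holds with equality; any $0<\alpha_1^*\leq 1$ would also satisfy it.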
\begin{proof}
The proof is analogous to the one given in \cite{jungel2015boundedness}; the only differences are the presence of Dirichlet boundary conditions (instead of no-flux conditions) and of the first order terms (which were not present in the original proof).
Neither of these variations affects the argument in a significant way.
In particular, the first order terms do not contribute to the estimates: once we change variables to $w=F(u)$ and test the weak formulation against $w$, such terms vanish. Indeed, we have
\begin{align*}
E[u(T)]-E[u(0)]=
\int_{Q_T} \partial_t u\cdot w \,\mathrm{d} x\mathrm{d} t
&= \int_{Q_T} \left[
\partial_x (M w)\cdot w + \partial_x (A(u) \partial_x w)\cdot w \right]\mathrm{d} x\mathrm{d} t
\\
&= \int_{Q_T} \partial_x
\left[
\frac{1}{2} M w\cdot w + A(u) \partial_x w\cdot w \right] \mathrm{d} x\mathrm{d} t
-
\int_{Q_T} A(u) \partial_x w\cdot \partial_x w \,\mathrm{d} x\mathrm{d} t
\\
&= - \int_{Q_T} A(u) \partial_x w\cdot \partial_x w \,\mathrm{d} x\mathrm{d} t,
\end{align*}
where the total derivative integrates to boundary terms that vanish because $w\big|_{\partial I}=0$. This yields the key estimate of \cite{jungel2015boundedness}.
The rest of the proof follows without major modifications.
\end{proof}
\bibliographystyle{plain}
\section{Introduction and description of the results}\label{s1}
The structure and representation theory of $2$-categories is a young and intensively
studied area of modern mathematics which originated in \cite{BFK,Kh,CR,Ro,KL}.
The series of papers \cite{MM1}--\cite{MM6} initiated the study of the structure
and representation theory of so-called {\em finitary $2$-categories}, which
are natural $2$-analogues of finite dimensional associative algebras.
Classical representation theory of finite dimensional algebras is essentially based on the classification of simple algebras provided by the classical Artin-Wedderburn Theorem.
For a special class of finitary $2$-categories, called {\em strongly regular fiat $2$-categories},
an analogue of the Artin-Wedderburn Theorem was obtained in \cite{MM3}. Fiat $2$-categories
can be viewed as a vague $2$-analogue of cellular algebras: they have a weak involution
and adjunction morphisms which lead to the fact that, in each $2$-representation, the
involution gets interpreted as taking the adjoint functor. This involution plays
a crucial role in all arguments of \cite{MM3} and therefore it is unlikely that
one could generalise these arguments to any wider class of $2$-categories.
In the present paper we take a first step towards understanding the structure of
arbitrary simple finitary $2$-categories. The main idea motivating our study
comes, fairly unexpectedly, from the results of \cite{KM2,KMMZ}. Following the ideas
in the proof of \cite[Theorem~2]{KMMZ}, which are based on the main result of \cite{KM2},
we observe that every simple finitary $2$-category can be faithfully represented
using functorial actions in which all involved functors are right exact and
have the property that they send any module to a projective module.
The main technical result of this article is a characterisation and description
of such functors. In fact, in Theorem~\ref{thm1} we show that bimodules representing such
functors belong to the additive closure of bimodules of the form $P\otimes_{\Bbbk}N$,
where $P$ is a projective left module and $N$ is an arbitrary right module. Put differently,
in the tensor category of bimodules, all such bimodules factor through $\Bbbk$-vector spaces.
A precise formulation of Theorem~\ref{thm1} and its proof can be found in Section~\ref{s3}.
We also give two applications of Theorem~\ref{thm1}. The first one, which can be found in
Section~\ref{s4}, concerns the faithful representation of simple finitary $2$-categories
as explained above. The second one, presented in Section~\ref{s5}, is of different nature.
It concerns the problem of classifying simple transitive $2$-representations for the
$2$-category of projective bimodules over the finite dimensional algebra
$A:=\Bbbk[x,y]/(x^2,y^2,xy)$. This kind of problem was studied, in different contexts, for many
other $2$-categories, see \cite{MM5,MM6,Zi,MaMa,KMMZ,MT,MMMT,MZ,Zh2}.
The importance of this problem is supported by interesting applications, see e.g. \cite{KM1}.
The case of the algebra $A$ treated in this paper differs
significantly from all previously studied cases. To start with, the $2$-category of
projective functors for $A$ is not fiat, as $A$ is not self-injective, cf.
\cite[Subsection~7.3]{MM1}. However, simple transitive $2$-representations for some non-fiat $2$-categories have also been classified in \cite{Zh2,MZ}. The main point of
the algebra $A$ is that this is the smallest algebra which does not have any non-zero
projective-injective modules. Therefore the general approach outlined in \cite{MZ},
which is based on the existence of a projective-injective module, is
not applicable either. In Section~\ref{s5} we propose a new approach to this problem
which crucially uses our Theorem~\ref{thm1}. In Section~\ref{snew} we show that
the decategorification of a fiat $2$-category with strongly regular two-sided cells
is a quasi-hereditary algebra with a simple preserving duality.
As the material discussed in Sections~\ref{s3}, \ref{s4} and \ref{s5} is of rather different
nature, we do not provide a general list of notation and preliminaries but rather
postpone all this to each individual section separately.
For some further examples and structural results on finitary $2$-categories we refer
the reader to \cite{GM1,GM2,Xa,Zh1}.
\vspace{0.5cm}
\textbf{Acknowledgements:} This research was partially supported by
the Swedish Research Council, Knut and Alice Wallenberg Stiftelse and
G{\"o}ran Gustafsson Stiftelse. We thank Claus Michael Ringel for stimulating discussions.
\section{$\Bbbk$-split bimodules for finite dimensional algebras}\label{s3}
\subsection{Main Theorem}\label{s3.1}
Throughout the paper, we fix an algebraically closed field $\Bbbk$. For finite dimensional
$\Bbbk$-algebras $A$ and $B$, we denote by
\begin{itemize}
\item $A$-mod the category of finite dimensional left $A$-modules;
\item mod-$A$ the category of finite dimensional right $A$-modules;
\item $A$-mod-$B$ the category of finite dimensional $A$-$B$-bimodules.
\end{itemize}
The main result of the paper is the following statement.
\begin{theorem}\label{thm1}
Let $A$ and $B$ be two basic finite dimensional $\Bbbk$-algebras
and $Q\in A\text{-}\mathrm{mod}\text{-}B$. Then the following conditions are equivalent.
\begin{enumerate}[$($a$)$]
\item\label{thm1.1} The functor $Q\otimes_B{}_-:B\text{-}\mathrm{mod}\to
A\text{-}\mathrm{mod}$ maps any $B$-module to a projective $A$-module.
\item\label{thm1.2} The functor $\mathrm{Hom}_{A\text{-}}(Q,{}_-):A\text{-}\mathrm{mod}\to
B\text{-}\mathrm{mod}$ maps any short exact sequence in $A\text{-}\mathrm{mod}$
to a split short exact sequence in $B\text{-}\mathrm{mod}$.
\item\label{thm1.3} The $A$-$B$-bimodule $Q$ belongs to the additive closure,
in $A\text{-}\mathrm{mod}\text{-}B$, of all
$A$-$B$-bimodules of the form $A\otimes_{\Bbbk}K$, where $K\in\mathrm{mod}\text{-}B$.
\end{enumerate}
\end{theorem}
Bimodules satisfying the conditions in Theorem~\ref{thm1}\eqref{thm1.3} will be called {\em $\Bbbk$-split}.
Note that, by considering the algebra $A\times B$, we can reduce Theorem~\ref{thm1}
to the case $A=B$. So, in the proof which follows, we assume $A=B$.
\subsection{Implication \eqref{thm1.1}$\Rightarrow$\eqref{thm1.2}}\label{s3.2}
Assume that condition~\eqref{thm1.1} is satisfied. Then, in particular,
$Q\otimes_A A\cong {}_AQ$ is projective and hence the functor
$\mathrm{Hom}_{A\text{-}}(Q,{}_-)$ is exact. Furthermore, for any $M\in A\text{-}\mathrm{mod}$,
the $A$-module $Q\otimes_A M$ is projective and therefore the functor
\begin{displaymath}
\mathrm{Hom}_{A\text{-}}(Q\otimes_A M,{}_-)\cong
\mathrm{Hom}_{A\text{-}}(M,\mathrm{Hom}_{A\text{-}}(Q,{}_-)):A\text{-}\mathrm{mod}\to
\Bbbk\text{-}\mathrm{mod}
\end{displaymath}
is exact.
For a short exact sequence
\begin{displaymath}
0\to X\to Y\to Z \to 0
\end{displaymath}
in $A\text{-}\mathrm{mod}$, application of the exact functor
$\mathrm{Hom}_{A\text{-}}(Q,{}_-)$ produces the short exact sequence
\begin{equation}\label{eq1}
0\to \mathrm{Hom}_{A\text{-}}(Q,X)\to\mathrm{Hom}_{A\text{-}}(Q,Y)\overset{\alpha}{\to}\mathrm{Hom}_{A\text{-}}(Q,Z)\to 0.
\end{equation}
If \eqref{eq1} splits, then its image under
$\mathrm{Hom}_{A\text{-}}(M,{}_-)$ is, clearly, split short exact, for any $M\in A\text{-}\mathrm{mod}$.
At the same time, if \eqref{eq1} does not split, then,
for $M=\mathrm{Hom}_{A\text{-}}(Q,Z)$, the identity morphism on $M$ is not in the
image of the map
\begin{displaymath}
\mathrm{Hom}_{A\text{-}}(M,\mathrm{Hom}_{A\text{-}}(Q,Y))\overset{\alpha\circ{}_-}{\longrightarrow}
\mathrm{Hom}_{A\text{-}}(M,M).
\end{displaymath}
Therefore the latter map is not surjective and, consequently, the sequence
\begin{displaymath}
0\to \mathrm{Hom}_{A\text{-}}(M,\mathrm{Hom}_{A\text{-}}(Q,X))\to
\mathrm{Hom}_{A\text{-}}(M,\mathrm{Hom}_{A\text{-}}(Q,Y))\to
\mathrm{Hom}_{A\text{-}}(M,M)\to 0
\end{displaymath}
is not exact. Thus the functor $\mathrm{Hom}_{A\text{-}}(Q\otimes_A M,{}_-)$
is not exact either, a contradiction. Hence condition~\eqref{thm1.2} is satisfied.
\subsection{Implication \eqref{thm1.2}$\Rightarrow$\eqref{thm1.3}}\label{s3.3}
Assume that condition~\eqref{thm1.2} is satisfied. In particular, the functor
$\mathrm{Hom}_{A\text{-}}(Q,{}_-)$ is exact and thus the left $A$-module ${}_{A}Q$ is projective.
Denote by $R$ the Jacobson radical $\mathrm{Rad}(A)$ of $A$.
Applying the duality $*:=\mathrm{Hom}_{\Bbbk}({}_-,\Bbbk)$ to the short exact sequence
\begin{displaymath}
0\to R\to A\to \mathrm{top}(A)\to 0
\end{displaymath}
in $A\text{-}\mathrm{mod}\text{-}A$, gives the short exact sequence
\begin{displaymath}
0\to (\mathrm{top}(A))^* \to A^*\to R^*\to 0
\end{displaymath}
in $A\text{-}\mathrm{mod}\text{-}A$. Applying $\mathrm{Hom}_{A\text{-}}(Q,{}_-)$
to the latter short exact sequence results in the short exact sequence
\begin{equation}\label{eq2}
0\to\mathrm{Hom}_{A\text{-}}(Q,(\mathrm{top}(A))^*)\to
\mathrm{Hom}_{A\text{-}}(Q,A^*)\to\mathrm{Hom}_{A\text{-}}(Q,R^*)\to 0.
\end{equation}
By condition~\eqref{thm1.2}, this sequence is split in $A\text{-}\mathrm{mod}$.
By adjunction, we have
\begin{equation}\label{eq3}
\mathrm{Hom}_{A\text{-}}(Q,A^*)\cong\mathrm{Hom}_{\Bbbk}(A\otimes_A Q,\Bbbk)\cong Q^*
\end{equation}
and
\begin{equation}\label{eq4}
\mathrm{Hom}_{A\text{-}}(Q,R^*)\cong \mathrm{Hom}_{\Bbbk}(R\otimes_AQ,\Bbbk).
\end{equation}
Moreover, as any $A$-homomorphism from $Q$ to $(\mathrm{top}(A))^*$
vanishes on $RQ$, we have
\begin{displaymath}
\mathrm{Hom}_{A\text{-}}(Q,(\mathrm{top}(A))^*)\cong
\mathrm{Hom}_{A\text{-}}(Q/RQ,(\mathrm{top}(A))^*)
\end{displaymath}
and then, by adjunction,
\begin{displaymath}
\mathrm{Hom}_{A\text{-}}(Q/RQ,(\mathrm{top}(A))^*)\cong
\mathrm{Hom}_{\Bbbk}(\mathrm{top}(A)\otimes_A (Q/RQ),\Bbbk).
\end{displaymath}
Finally, we observe that $Q/RQ$ is semi-simple as a left $A$-module, which
yields an isomorphism $\mathrm{top}(A)\otimes_A(Q/RQ)\cong Q/RQ$.
Plugging the latter into \eqref{eq2}, using \eqref{eq3} and \eqref{eq4},
and applying $*$, gives us the short exact sequence
\begin{displaymath}
0\to R\otimes_AQ\to Q\to Q/RQ\to 0
\end{displaymath}
which is split in $\mathrm{mod}\text{-}A$. We denote by
$\beta:Q/RQ \to Q$ the splitting morphism in $\mathrm{mod}\text{-}A$.
Fix a decomposition of the $A$-$A$-bimodule $Q/RQ$ into a direct sum
$X_1\oplus X_2\oplus \dots\oplus X_k$ of indecomposable
$A$-$A$-bimodules. As $Q/RQ$ is semi-simple, as a left $A$-module,
we have that each $X_i$ is isotypic as a left $A$-module and indecomposable
as a right $A$-module. Let $L_i$ denote the (unique) simple subquotient of
the left $A$-module ${}_AX_i$ and $P_i$ denote an indecomposable
projective cover of $L_i$ in $A\text{-}\mathrm{mod}$. Let also $e_i$ denote
a primitive idempotent corresponding to $P_i$.
Consider the $A$-$A$-bimodule
\begin{displaymath}
\hat{Q}:=\bigoplus_{i=1}^k P_i\otimes_{\Bbbk} X_i
\end{displaymath}
and note that $\hat{Q}$ belongs to the additive closure of bimodules described
in condition~\eqref{thm1.3}. By adjunction, for any $A$-$A$-bimodule $V$, we have
\begin{equation}\label{eq5}
\mathrm{Hom}_{A\text{-}A}(\hat{Q},V)\cong
\bigoplus_{i=1}^k\mathrm{Hom}_{\text{-}A}(X_i,e_iV).
\end{equation}
The homomorphism $\beta$ induces, by equation~\eqref{eq5}, a homomorphism $\gamma:\hat{Q}\to Q$
of $A$-$A$-bimodules. By construction, the image of this homomorphism covers
$Q/RQ$, that is the top of $Q$, considered as a left $A$-module. Therefore
$\gamma$ is surjective by the Nakayama Lemma. At the same time, we already know that
$Q$ is projective as a left $A$-module and that all $X_i$ are isotypic as left
$A$-modules. Together with the construction of $\hat{Q}$, this
implies the equality $\dim(\hat{Q})=\dim(Q)$ and yields that $\gamma$
is, in fact, an isomorphism. Condition~\eqref{thm1.3} follows.
\subsection{Implication \eqref{thm1.3}$\Rightarrow$\eqref{thm1.1}}\label{s3.4}
Assume that condition~\eqref{thm1.3} is satisfied. Note that, for any
$M\in A\text{-}\mathrm{mod}$, the $A$-module $A\otimes_{\Bbbk} K\otimes_AM$ is just
a direct sum of $\dim(K\otimes_A M)$ copies of $A$, in particular, it
is projective. By additivity, $Q\otimes_A M$ is also projective, for any $Q$
from the additive closure of the bimodules described in condition~\eqref{thm1.3}.
Therefore condition~\eqref{thm1.1} is satisfied.
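Explicitly, the observation above rests on the standard associativity isomorphism, spelled out here for convenience:

```latex
\begin{displaymath}
(A\otimes_{\Bbbk} K)\otimes_A M
\;\cong\;
A\otimes_{\Bbbk}\left(K\otimes_A M\right)
\;\cong\;
A^{\oplus \dim_{\Bbbk}(K\otimes_A M)}.
\end{displaymath}
```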
\section{Application: simple finitary $2$-categories}\label{s4}
\subsection{Finitary $2$-categories}\label{s4.1}
For generalities on $2$-categories we refer to \cite{McL,Le}.
Following \cite[Subsection~2.2]{MM1}, a {\em finitary $2$-category over $\Bbbk$} is a $2$-category
$\sc\mbox{C}\hspace{1.0pt}$ such that
\begin{itemize}
\item $\sc\mbox{C}\hspace{1.0pt}$ has finitely many objects;
\item each $\sc\mbox{C}\hspace{1.0pt}(\mathtt{i},\mathtt{j})$ is equivalent to the category of projective modules
over some finite dimensional associative $\Bbbk$-algebra
(i.e. is a {\em finitary $\Bbbk$-linear} category);
\item all compositions are biadditive and $\Bbbk$-bilinear, when applicable;
\item identity $1$-morphisms are indecomposable.
\end{itemize}
In particular, we have a finite set $\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$ of isomorphism classes of indecomposable
$1$-morphisms. Furthermore, by \cite[Section~3]{MM2}, $\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$ has the natural structure
of a multisemigroup (cf. \cite{KuM}).
We denote by $\leq_L$ and $\sim_L$ the corresponding
{\em left order} and {\em left equivalence relation}, by $\leq_R$ and $\sim_R$ the corresponding
{\em right order} and {\em right equivalence relation} and by $\leq_J$ and $\sim_J$ the corresponding
{\em two-sided order} and {\em two-sided equivalence relation}, see \cite[Section~3]{MM2}
and \cite[Subsection~4.1]{KuM}. Equivalence classes for $\sim_L$, $\sim_R$ and $\sim_J$ are
called {\em cells} (left, right or two-sided, respectively).
A $2$-category which satisfies all the above conditions except for the last one will be called
{\em weakly finitary}. Clearly, splitting idempotents in the endomorphism algebras of the
identity $1$-morphisms, every weakly finitary $2$-category can be Morita reduced to a
finitary $2$-category (cf. \cite{MM4} for general Morita theory of finitary $2$-categories).
A finitary $2$-category $\sc\mbox{C}\hspace{1.0pt}$ will be called {\em simple} provided that
\begin{itemize}
\item any non-zero $2$-ideal of $\sc\mbox{C}\hspace{1.0pt}$ contains the identity $2$-morphism for some
non-zero $1$-morphism;
\item there is a unique two-sided cell, which we call $\mathcal{J}$, containing
a $1$-morphism that is not isomorphic to any of the identity $1$-morphisms;
\item the cell $\mathcal{J}$ is the maximal two-sided cell with respect to $\leq_J$;
\item the cell $\mathcal{J}$ is idempotent in the sense that $\mathrm{F}\circ \mathrm{G}\neq 0$
for some (possibly equal) $1$-morphisms $\mathrm{F}$ and $\mathrm{G}$ in $\mathcal{J}$.
\end{itemize}
In particular, a simple $2$-category is $\mathcal{J}$-simple in the sense of \cite[Subsection~6.2]{MM2}.
We note that the above definition excludes the very easy situation when the only indecomposable
$1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}$ are those isomorphic to the identity $1$-morphisms. Such $2$-categories are
easy to construct and study, so we will ignore them.
Similarly to \cite[Subsection~6.2]{MM2}, one shows that a finitary $2$-category which satisfies
the last three of the above conditions has a unique simple quotient.
\subsection{Simple fiat strongly regular $2$-categories}\label{s4.2}
As in \cite[Subsection~6.2]{MM1}, a finitary $2$-category $\sc\mbox{C}\hspace{1.0pt}$ is called {\em fiat} provided
that it has a weak anti-involution $\star$ (reversing both $1$- and $2$-morphisms) and
adjunction morphisms between any pair $(\mathrm{F},\mathrm{F}^{\star})$ of $1$-morphisms.
A fiat $2$-category $\sc\mbox{C}\hspace{1.0pt}$ is called {\em strongly regular} provided that no two left (right) cells inside the same two-sided cell are comparable, and that the intersection of
each left and each right cell inside the same two-sided cell consists of exactly one element,
see \cite[Subsection~4.8]{MM1} and \cite[Corollary~19]{KM2}.
$2$-categories which are, at the same time, simple, strongly regular and fiat, were classified
in \cite[Theorem~13]{MM3}. Roughly speaking (up to the structure of the endomorphism algebra
of identity $1$-morphisms), they are biequivalent to the bicategory of projective
bimodules for a finite dimensional, weakly symmetric $\Bbbk$-algebra $A$, or, equivalently, to the $2$-category $\sc\mbox{C}\hspace{1.0pt}_A$ defined as follows:
Let $A=A_1\times A_2\times\dots\times A_k$ be a decomposition of $A$ into a direct product of connected
components. Then
\begin{itemize}
\item objects of $\sc\mbox{C}\hspace{1.0pt}_A$ are $\mathtt{1},\mathtt{2},\dots,\mathtt{k}$, where
$\mathtt{i}$ should be thought of as a small category $\mathcal{A}_i$ equivalent to $A_i$-mod;
\item $1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}_A(\mathtt{i},\mathtt{j})$ are functors
from $\mathcal{A}_i$ to $\mathcal{A}_j$ corresponding to tensoring
with bimodules from the additive closure of $A_j\otimes_{\Bbbk}A_i$, with the additional
bimodule $A_i$, if $i=j$;
\item $2$-morphisms are all natural transformations of such functors.
\end{itemize}
It is natural to ask for a description of simple finitary $2$-categories in general.
Unfortunately, an easy generalisation of \cite[Theorem~13]{MM3} is too much to hope for.
\subsection{Cell $2$-representations}\label{s4.3}
For any finitary $2$-category $\sc\mbox{C}\hspace{1.0pt}$, we can consider the $2$-category $\sc\mbox{C}\hspace{1.0pt}$-afmod
of {\em finitary $2$-representations of $\sc\mbox{C}\hspace{1.0pt}$}, where
\begin{itemize}
\item objects in $\sc\mbox{C}\hspace{1.0pt}$-afmod are $2$-functors which represent every object in
$\sc\mbox{C}\hspace{1.0pt}$ by a finitary $\Bbbk$-linear category, each $1$-morphism by an additive functor
and each $2$-morphism by a natural transformation of functors;
\item $1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}$-afmod are $2$-natural transformations, see \cite[Subsection~2.3]{MM3};
\item $2$-morphisms in $\sc\mbox{C}\hspace{1.0pt}$-afmod are modifications.
\end{itemize}
An easy example of a $2$-representation is the {\em principal} (Yoneda) $2$-representation
$\mathbf{P}_{\mathtt{i}}:=\sc\mbox{C}\hspace{1.0pt}(\mathtt{i},{}_-)$, defined for each $\mathtt{i}\in\sc\mbox{C}\hspace{1.0pt}$.
If $\mathcal{L}$ is a left cell, then there is $\mathtt{i}\in \sc\mbox{C}\hspace{1.0pt}$ such that all $1$-morphisms
in $\mathcal{L}$ start at $\mathtt{i}$. In this case we can define the corresponding
{\em cell $2$-representation} $\mathbf{C}_{\mathcal{L}}$ of $\sc\mbox{C}\hspace{1.0pt}$ as a certain
subquotient of $\mathbf{P}_{\mathtt{i}}$, see \cite[Subsection~6.5]{MM2}, and also
\cite[Section~4]{MM1} for the original definition.
\subsection{Main idea}\label{s4.4}
Let now $\sc\mbox{C}\hspace{1.0pt}$ be a simple finitary $2$-category. Fix a left cell $\mathcal{L}$ inside
the distinguished two-sided cell $\mathcal{J}$ and consider the corresponding cell
$2$-representation $\mathbf{C}_{\mathcal{L}}$. By construction, the fact that
$\mathcal{J}$ is idempotent implies that $\mathbf{C}_{\mathcal{L}}$ has trivial annihilator.
Therefore $\mathbf{C}_{\mathcal{L}}$ defines a faithful $2$-functor from $\sc\mbox{C}\hspace{1.0pt}$ into some
$2$-category of right exact functors between module categories of finite dimensional
algebras. The main point of \cite[Theorem~13]{MM3} was to show that, on the level of
$1$-morphisms in $\mathcal{J}$, this embedding is, in fact, $2$-full and each
$1$-morphism in $\mathcal{J}$ is represented by a {\em projective functor}, that is
by a functor isomorphic to tensoring with a projective bimodule.
If $\sc\mbox{C}\hspace{1.0pt}$ is fiat (but not necessarily strongly regular), then \cite[Theorem~2]{KMMZ}
still asserts that, in the setup of the previous paragraph, each $1$-morphism in
$\mathcal{J}$ is represented by a {\em projective functor}. However, outside
the strongly regular situation, the $2$-fullness statement is no longer true. An easy
example is given by the small quotient of the $2$-category of Soergel bimodules in
Weyl type $B_2$, see \cite[Subsection~3.2]{KMMZ}. This is a simple, fiat, but not
strongly regular $2$-category. Under $\mathbf{C}_{\mathcal{L}}$, the indecomposable
$1$-morphism corresponding to a simple reflection belonging to $\mathcal{L}$ acts
non-trivially on two simple modules of the abelianised cell $2$-representation. Therefore
this indecomposable $1$-morphism is represented by a decomposable projective functor,
which means that the representation $2$-functor cannot be $2$-full.
If $\sc\mbox{C}\hspace{1.0pt}$ is not fiat, then $1$-morphisms in $\mathcal{J}$ no longer act as
projective functors in general. Later on we will use our Theorem~\ref{thm1}
to describe what kind of functors appear in this case.
\subsection{Simple transitive $2$-representations}\label{s4.5}
Cell $2$-representations of finitary $2$-categories have the following two properties:
\begin{itemize}
\item They are {\em transitive} in the sense that, for any pair $X,Y$ of indecomposable
objects in the underlying categories of the representation, there is a $1$-morphism
$\mathrm{F}$ such that $X$ appears (up to isomorphism) as a direct summand in
$\mathrm{F}\,Y$.
\item They are {\em simple} in the sense that the underlying categories do not have any proper
ideals invariant under the action of $\sc\mbox{C}\hspace{1.0pt}$.
\end{itemize}
In general, simple transitive $2$-representations are natural $2$-analogues of simple
modules over $\Bbbk$-algebras, see \cite{MM5,MM6}.
If $\mathbf{M}$ is a simple transitive $2$-representation of $\sc\mbox{C}\hspace{1.0pt}$, then, according to
\cite[Subsection~3.2]{CM}, there is a unique two-sided cell $\mathcal{J}_{\mathbf{M}}$
which is a maximal element (with respect to the two-sided order) in the set of all
two-sided cells whose $1$-morphisms are not annihilated by $\mathbf{M}$.
The cell $\mathcal{J}_{\mathbf{M}}$ is called the {\em apex of
$\mathbf{M}$} and is idempotent in the sense that it contains three (possibly equal)
$1$-morphisms $\mathrm{F}$, $\mathrm{G}$ and $\mathrm{H}$ such that
$\mathrm{H}$ appears, as a summand and up to isomorphism, in $\mathrm{F}\circ \mathrm{G}$.
\subsection{Abelianisation}\label{s4.6}
Instead of {\em additive} $2$-representations of $\sc\mbox{C}\hspace{1.0pt}$ one can also study {\em abelian}
$2$-representations where
\begin{itemize}
\item each object in $\sc\mbox{C}\hspace{1.0pt}$ is represented by a category equivalent to the module
category for some finite dimensional associative $\Bbbk$-algebra;
\item each $1$-morphism is represented by a right exact functor;
\item each $2$-morphism is represented by a natural transformation of functors.
\end{itemize}
The $2$-category of abelian $2$-representations is denoted $\sc\mbox{C}\hspace{1.0pt}$-mod, see e.g. \cite[Section~4]{MM2}.
There is a natural $2$-functor $\overline{\,\,\cdot\,\,}:\sc\mbox{C}\hspace{1.0pt}\text{-}\mathrm{afmod}\to
\sc\mbox{C}\hspace{1.0pt}\text{-}\mathrm{mod}, \mathbf{M}\mapsto \overline{\mathbf{M}}$, called {\em abelianisation $2$-functor}, see \cite[Subsection~4.2]{MM2}.
\subsection{Main results of this section}\label{s4.7}
\begin{proposition}\label{prop31}
Let $\sc\mbox{C}\hspace{1.0pt}$ be a finitary $2$-category and $\mathbf{M}$
a simple transitive $2$-rep\-re\-sen\-ta\-tion of $\sc\mbox{C}\hspace{1.0pt}$. Then, for any $1$-morphism $\mathrm{F}$ in
$\mathcal{J}_{\mathbf{M}}$, the functor $\overline{\mathbf{M}}(\mathrm{F})$ sends any
object to a projective object.
\end{proposition}
\proof
The claim follows directly from the proof of the first part of \cite[Theorem~2]{KMMZ}
since that proof does not use fiatness of $\sc\mbox{C}\hspace{1.0pt}$.
\endproof
Let $A$ be a finite dimensional associative $\Bbbk$-algebra with a fixed decomposition
$A=A_1\times A_2\times\dots\times A_k$ into a product of (not necessarily connected) components.
For each $i=1,2,\dots,k$ fix a small category $\mathcal{C}_i$ equivalent to
$A_i$-mod and a right $A_i$-module $N_i$. Let $\mathcal{C}=\{\mathcal{C}_i\}$ and
$N=\{N_i\}$. Define the weakly finitary $2$-category $\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}$ as follows:
\begin{itemize}
\item The objects of $\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}$ are $\mathtt{1},\mathtt{2},\dots,\mathtt{k}$,
where $\mathtt{i}$ should be thought of as $\mathcal{C}_i$;
\item $1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}(\mathtt{i},\mathtt{j})$ are all functors
from $\mathcal{C}_i$ to $\mathcal{C}_j$ which are isomorphic to tensoring with
$A_j$-$A_i$-bimodules in the additive closure of $A_j\otimes_{\Bbbk}N_i$ and, additionally,
the $A_i$-$A_i$-bimodule $A_i$, if $i=j$;
\item $2$-morphisms are natural transformations of functors.
\end{itemize}
The main result of this section is the following statement which, roughly speaking, says that
all simple finitary $2$-categories are $2$-subcategories of the categories of the
form $\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}$.
\begin{theorem}\label{thm33}
Let $\sc\mbox{C}\hspace{1.0pt}$ be a simple finitary $2$-category. Then there are $A$, $\mathcal{C}$ and $N$ as above
and a faithful $2$-functor from $\sc\mbox{C}\hspace{1.0pt}$ to $\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}$.
\end{theorem}
\proof
Let $\sc\mbox{C}\hspace{1.0pt}$ be a simple finitary $2$-category. For a left cell $\mathcal{L}$ in $\mathcal{J}$,
consider the left cell $2$-representation $\mathbf{C}_{\mathcal{L}}$ of $\sc\mbox{C}\hspace{1.0pt}$.
By Proposition~\ref{prop31}, for any $1$-morphism $\mathrm{F}$ in $\mathcal{J}$
from $\mathtt{i}$ to $\mathtt{j}$,
the functor $\overline{\mathbf{C}_{\mathcal{L}}}(\mathrm{F})$ maps any object
of $\overline{\mathbf{C}_{\mathcal{L}}}(\mathtt{i})$ to a projective object
of $\overline{\mathbf{C}_{\mathcal{L}}}(\mathtt{j})$.
For each $\mathtt{i}$, let $A_{\mathtt{i}}$ denote the underlying algebra of
$\overline{\mathbf{C}_{\mathcal{L}}}(\mathtt{i})$. We note that
$A_{\mathtt{i}}$ does not have to be connected. By Theorem~\ref{thm1},
there is a right $A_{\mathtt{i}}$-module $N_i$ such that any $1$-morphism in
$\mathcal{J}$ from $\mathtt{i}$ to any $\mathtt{j}$ is represented, via
$\overline{\mathbf{C}_{\mathcal{L}}}$, by a functor isomorphic to tensoring
with a bimodule of the form $A_j\otimes_{\Bbbk}N_i$ (and, additionally, $A_i$,
if $i=j$).
Let $\displaystyle A:=\prod_{\mathtt{i}}A_{\mathtt{i}}$.
Since $\mathcal{J}$ is idempotent, it coincides with the apex of $\mathbf{C}_{\mathcal{L}}$.
Hence, thanks to simplicity of $\sc\mbox{C}\hspace{1.0pt}$, the representation $2$-functor
$\mathbf{C}_{\mathcal{L}}$ is faithful on the level of $2$-morphisms
and thus provides an embedding of $\sc\mbox{C}\hspace{1.0pt}$ into the weakly finitary category
$\sc\mbox{C}\hspace{1.0pt}_{A,\{\overline{\mathbf{C}_{\mathcal{L}}}(\mathtt{i})\},\{N_{\mathtt{i}}\}}$.
This completes the proof.
\endproof
We note that, usually, the embedding of $\sc\mbox{C}\hspace{1.0pt}$ given by Theorem~\ref{thm33} will not be $2$-full.
Furthermore, $A$, $\mathcal{C}$ and $N$ in the formulation of Theorem~\ref{thm33}
are not uniquely defined, even up to isomorphism/equivalence.
\section{Decategorification}\label{snew}
Let $\sc\mbox{C}\hspace{1.0pt}:=\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}$ be as in Section~\ref{s4.7}. Let $P_1,\dots,P_r$ be a complete list of projective indecomposable modules for $A$, and $N_1,\dots, N_s$ a complete list of elements in $N$. Then a complete list of indecomposable $1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}_{A,\mathcal{C},N}$ is given by $\mathbbm{1}_\mathtt{i}$, $\mathtt{i}\in\{\mathtt{1}, \dots, \mathtt{k}\}$, and $\{\mathrm{F}_{i,j}\}$ where $\mathrm{F}_{i,j}$ is a functor isomorphic to tensoring with $P_i\otimes_\Bbbk N_j$.
The structure of a multisemigroup with finite multiplicities (in the sense of \cite{Fo})
on $\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]\cap \mathcal{J}$ is given by $[\mathrm{F}_{i,j}][\mathrm{F}_{l,t}] = \dim (N_je_l) [\mathrm{F}_{i,t}]$.
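The multiplication rule can be read off from the composition of the underlying bimodules; the following is a sketch, writing $P_l\cong Ae_l$ for a primitive idempotent $e_l$:
\begin{displaymath}
(P_i\otimes_\Bbbk N_j)\otimes_A(P_l\otimes_\Bbbk N_t)
\cong P_i\otimes_\Bbbk (N_j\otimes_A Ae_l)\otimes_\Bbbk N_t
\cong (P_i\otimes_\Bbbk N_t)^{\oplus \dim_\Bbbk(N_je_l)},
\end{displaymath}
since $N_j\otimes_A Ae_l\cong N_je_l$ as a $\Bbbk$-vector space.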
\begin{proposition}\label{decat}
Suppose $\mathrm{add}(N_1\oplus N_2\oplus \dots \oplus N_s)\cong \mathrm{add}(A)$.
\begin{enumerate}[$($i$)$]
\item\label{filt} The algebra $\Bbbk\otimes_{\mathbb{Z}} \mathbb{Z}\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$ has a filtration by two-sided ideals such that the lowest ideal is given by the span of $\{[\mathrm{F}_{i,j}]| i,j=1,\dots,r\}$, and the remaining subquotients are spanned by $[\mathbbm{1}_\mathtt{i}]$, taken in any order.
\item\label{swich} The ideal $J$ spanned by $\{[\mathrm{F}_{i,j}]| i,j=1,\dots,r\}$ in $\Bbbk\otimes_{\mathbb{Z}} \mathbb{Z}\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$ is, in the terminology of \cite[Definition 3.3]{KX},
isomorphic to the swich algebra of the algebra of $r\times r$-matrices with respect to the matrix $(\dim(e_jAe_l))_{j,l=1}^r$.
\item\label{cell} If $A$ is weakly symmetric, the filtration in \eqref{filt} is a cell filtration, where the involution is given by the action of ${}^*$ on $\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$, which corresponds to interchanging the two subscripts on $\mathrm{F}_{i,j}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\eqref{filt} and \eqref{swich} follow directly from the definitions.
To prove \eqref{cell}, assume $A$ is weakly symmetric, so that $\sc\mbox{C}\hspace{1.0pt}$ is fiat. The involution ${}^*$ induces an involution on $\Bbbk\otimes_{\mathbb{Z}} \mathbb{Z}\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$ which fixes the $[\mathbbm{1}_\mathtt{i}]$ and sends $[\mathrm{F}_{i,j}]$ to $[\mathrm{F}_{j,i}]$; hence, under the isomorphism to the swich algebra from \eqref{swich}, it acts as matrix transposition. Since $\dim e_iAe_j = \dim e_jAe_i$, it is easy to check that, for $V$ the $\Bbbk$-span of basis vectors $v_i$ corresponding to the $[\mathrm{F}_{i,1}]$, where $i=1,2,\dots,r$, the morphism $\alpha: \, J \to V\otimes V, \quad [\mathrm{F}_{j,i}] \mapsto v_i\otimes v_j$, defines the structure of a cell ideal on $J$ (cf. \cite[Definition 3.2]{KX1}).
Quotienting out by $J$, all remaining subquotients in the ideal filtration are one dimensional and idempotent, and hence cell ideals.
\end{proof}
\begin{corollary}
Let $\sc\mbox{C}\hspace{1.0pt}$ be a fiat $2$-category such that all two-sided cells are strongly regular. Then $\Bbbk\otimes_{\mathbb{Z}} \mathbb{Z}\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$ is a quasi-hereditary algebra with simple-preserving duality.
\end{corollary}
\begin{proof}
By induction with respect to the two-sided order, it follows immediately from
Proposition \ref{decat}\eqref{cell} that $\Bbbk\otimes_{\mathbb{Z}} \mathbb{Z}\mathcal{S}[\sc\mbox{C}\hspace{1.0pt}]$
is a cellular algebra. Since each cell contains an idempotent, it is indeed quasi-hereditary.
\end{proof}
\section{Application: simple transitive $2$-representations of projective bimodules
for $\Bbbk[x,y]/(x^2,y^2,xy)$}\label{s5}
\subsection{Classification}\label{s5.1}
In this section we consider the problem of classification of simple transitive
$2$-representations of the simple $2$-category $\sc\mbox{C}\hspace{1.0pt}_A$ of projective bimodules for the
$3$-dimensional algebra $A=\Bbbk[x,y]/(x^2,y^2,xy)$. As $A$ is connected,
the $2$-category $\sc\mbox{C}\hspace{1.0pt}_A$ has only one object. We call this object $\mathtt{i}$.
The main result of this section is the following.
\begin{theorem}\label{thm51}
Every simple transitive $2$-representation of $\sc\mbox{C}\hspace{1.0pt}_A$, for the algebra
$A=\Bbbk[x,y]/(x^2,y^2,xy)$,
is equivalent to a cell $2$-representation.
\end{theorem}
Under the additional assumption that all $1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}_A$ are represented by
exact functors, the claim of Theorem~\ref{thm51} is proved in \cite[Proposition~19]{MM5}.
Therefore, to prove Theorem~\ref{thm51}, we just have to show that, in every simple
transitive $2$-representation of $\sc\mbox{C}\hspace{1.0pt}_A$, all $1$-morphisms in $\sc\mbox{C}\hspace{1.0pt}_A$ are indeed
represented by exact functors. The rest of this section is devoted to the proof of
Theorem~\ref{thm51}.
\subsection{Combinatorics of the action}\label{s5.2}
We fix a simple transitive $2$-representation $\mathbf{M}$ of $\sc\mbox{C}\hspace{1.0pt}_A$. Let
$X_1,X_2,\dots,X_k$ be a list of representatives of isomorphism classes of indecomposable
objects in $\mathbf{M}(\mathtt{i})$. Let $\mathrm{F}$ be an indecomposable $1$-morphism
in $\sc\mbox{C}\hspace{1.0pt}_A$ which is isomorphic to tensoring with $A\otimes_{\Bbbk}A$. Note that
\begin{equation}\label{eq54}
\mathrm{F}\circ \mathrm{F}\cong \mathrm{F}^{\oplus 3}.
\end{equation}
For $i,j=1,2,\dots,k$, let
$m_{i,j}$ denote the multiplicity of $X_i$ in $\mathrm{F}\, X_j$. Then
$M=(m_{i,j})_{i,j=1}^k\in\mathrm{Mat}_{k\times k}(\mathbb{Z}_{\geq0})$, moreover,
all $m_{i,j}$ are positive due to transitivity of $\mathbf{M}$, see \cite[Subsection~7.1]{MM5}
for details.
From \eqref{eq54}, it follows that $M^2=3M$.
Therefore, up to permutation of the $X_i$'s, $M$ is one of the following matrices
(again, see \cite[Subsection~7.1]{MM5} and \cite[Proposition~10]{MZ} for details):
\begin{displaymath}
M_1=\left(3\right),\quad
M_2=\left(\begin{array}{cc}2&2\\1&1\end{array}\right),\quad
M_3=\left(\begin{array}{cc}2&1\\2&1\end{array}\right),\quad
M_4=\left(\begin{array}{ccc}1&1&1\\1&1&1\\1&1&1\end{array}\right).
\end{displaymath}
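As a quick sanity check (not part of the original computation), each of these matrices indeed satisfies $M^2=3M$; for instance,
\begin{displaymath}
M_2^2=\left(\begin{array}{cc}2&2\\1&1\end{array}\right)
\left(\begin{array}{cc}2&2\\1&1\end{array}\right)
=\left(\begin{array}{cc}6&6\\3&3\end{array}\right)=3M_2,
\end{displaymath}
and similarly for $M_1$, $M_3$ and $M_4$.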
\subsection{General part of the proof of Theorem~\ref{thm51}}\label{s5.3}
Let $B$ be an associative algebra such that $\overline{\mathbf{M}}(\mathtt{i})\cong B$-mod; then ${\mathbf{M}}(\mathtt{i})\cong B$-proj. Let $e_1,e_2,\dots,e_k$
be primitive idempotents of $B$ corresponding to the objects $X_1,X_2,\dots,X_k$ above.
Note that $k\leq 3$ by Subsection~\ref{s5.2}. For $i=1,2,\dots,k$, we denote by $L_i$
the simple top of $X_i$ in $\overline{\mathbf{M}}(\mathtt{i})$. We also denote by
$L'_i$ the corresponding {\em right} simple $B$-module, for $i=1,2,\dots,k$.
By Proposition~\ref{prop31}, the functor
$\overline{\mathbf{M}}(\mathrm{F})$ sends any object in $\overline{\mathbf{M}}(\mathtt{i})$
to a projective object in $\overline{\mathbf{M}}(\mathtt{i})$. Hence, by Theorem~\ref{thm1},
there are right $B$-modules $N_1,N_2,\dots,N_k$ such that $\overline{\mathbf{M}}(\mathrm{F})$
is isomorphic to tensoring with the $B$-$B$-bimodule
\begin{displaymath}
Be_1\otimes_{\Bbbk}N_1\oplus
Be_2\otimes_{\Bbbk}N_2\oplus\dots\oplus
Be_k\otimes_{\Bbbk}N_k.
\end{displaymath}
Note that, for $i,j=1,2,\dots,k$, we have
\begin{displaymath}
Be_i\otimes_{\Bbbk}N_i\otimes_B Be_j\cong Be_i^{\oplus \dim_{\Bbbk}(N_i\otimes_BBe_j)}
\end{displaymath}
and hence
\begin{displaymath}
m_{i,j}=\dim_{\Bbbk}(N_i\otimes_BBe_j)=\dim_{\Bbbk}(N_ie_j)=[N_i:L'_j].
\end{displaymath}
Next we observe that the right $B$-module $N=N_1\oplus N_2\oplus\dots\oplus N_k$ is faithful.
Indeed, if this were not the case, the annihilator of this module would be, due to the form of
the functor $\overline{\mathbf{M}}(\mathrm{F})$, annihilated by $\overline{\mathbf{M}}(\mathrm{F})$
and hence would generate a non-zero proper $\sc\mbox{C}\hspace{1.0pt}_A$-stable ideal of $\mathbf{M}$.
This is, however, not possible thanks to simplicity of $\mathbf{M}$. Consequently, as the sum
of entries in each row of the above matrices is at most $4$, the Loewy length of $N$ is at most $4$, and
we have that $\mathrm{Rad}(B)^4=0$.
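In more detail, for each $i$ the composition length $\ell(N_i)$ is bounded by the corresponding row sum:
\begin{displaymath}
\ell(N_i)=\sum_{j=1}^{k}[N_i:L'_j]=\sum_{j=1}^{k}m_{i,j}\leq 4,
\end{displaymath}
and the Loewy length of a module is bounded by its composition length; hence $N\,\mathrm{Rad}(B)^4=0$, and faithfulness of $N$ gives $\mathrm{Rad}(B)^4=0$.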
As mentioned above, we just need to show that $\overline{\mathbf{M}}(\mathrm{F})$ is exact or,
equivalently, that $N$ is projective. If $B$ is semi-simple, then $N$ is automatically projective,
so in what follows we may assume that $\mathrm{Rad}(B)\neq 0$.
Finally, we will need the following.
\begin{lemma}\label{lem541}
The $2$-functor $\overline{\mathbf{M}}$ induces an embedding of the algebra $A$ into the center
$Z(B)$ of the algebra $B$.
\end{lemma}
\begin{proof}
The $2$-functor $\overline{\mathbf{M}}$ gives a map from
$A\cong\mathrm{End}_{\scc\mbox{C}\hspace{1.0pt}_A}(\mathbbm{1})$ to the endomorphism algebra of
the $B$-$B$-bimodule $B$. The latter is isomorphic to $Z(B)$ and injectivity of
this map follows from simplicity of $\sc\mbox{C}\hspace{1.0pt}_A$.
\end{proof}
Now we have to go into a case-by-case analysis.
\subsection{Case~1: $M=M_1$}\label{s5.4}
In this case we have $k=1$ and $N=N_1$ has dimension $3$. If $\mathrm{Rad}(B)^2\neq 0$, then
$N$ must be uniserial and hence $B\cong\Bbbk[x]/(x^3)$. This means that $N$ is projective and we are done.
If $\mathrm{Rad}(B)^2=0$, then we have two possibilities. The first one is $B\cong\Bbbk[x]/(x^2)$
and $N=B\oplus \Bbbk$, which immediately contradicts Lemma~\ref{lem541}.
The remaining possibility is $B\cong A$. In this case, as $N$ is faithful, it must be either the indecomposable projective or the indecomposable injective $B$-module. In the projective case we are done, so let us assume that $N$ is injective. To obtain a contradiction in this case, we need a more subtle computation.
We have $A=B=\Bbbk[x,y]/(x^2,y^2,xy)$. Then the $A$-$A$-bimodule $A\otimes_{\Bbbk} A$ has the following basis:
\begin{displaymath}
1\otimes 1,\,1\otimes x,\,1\otimes y,\, x\otimes 1,\, y\otimes 1,\,x\otimes x,\,x\otimes y,\, y\otimes x,\,
y\otimes y.
\end{displaymath}
The $A$-$A$-bimodule $A\otimes_{\Bbbk} N\cong A\otimes_{\Bbbk} A^{*}$ has the following basis:
\begin{displaymath}
1\otimes x^*,\,1\otimes y^*,\,1\otimes 1^*,\, x\otimes x^*,\, x\otimes y^*,\,y\otimes x^*,\,y\otimes y^*,\,
x\otimes 1^*, y\otimes 1^*.
\end{displaymath}
Now it is easy to check that $\dim_{\Bbbk}\mathrm{Hom}_{A\text{-}A}(A,A\otimes_{\Bbbk}A)=4$, where
the generator $1$ of $A$ can be mapped to any of the elements
\begin{displaymath}
x\otimes x,\,x\otimes y,\, y\otimes x,\, y\otimes y.
\end{displaymath}
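For instance, one checks the centraliser condition for the image of $1$ directly: in $A\otimes_{\Bbbk}A$ we have
\begin{displaymath}
x\cdot(x\otimes x)=x^2\otimes x=0=x\otimes x^2=(x\otimes x)\cdot x,
\qquad
y\cdot(x\otimes x)=yx\otimes x=0=x\otimes xy=(x\otimes x)\cdot y,
\end{displaymath}
so $x\otimes x$ is an admissible image of $1$, whereas $x\cdot(1\otimes 1)=x\otimes 1\neq 1\otimes x=(1\otimes 1)\cdot x$, so $1\otimes 1$ is not.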
At the same time, $\dim_{\Bbbk}\mathrm{Hom}_{A\text{-}A}(A,A\otimes_{\Bbbk}A^*)=3$, where
the generator $1$ of $A$ can be mapped to any of the elements
\begin{displaymath}
1\otimes 1^*+x\otimes x^*+y\otimes y^*,\, x\otimes 1^*,\, y\otimes 1^*.
\end{displaymath}
As $\sc\mbox{C}\hspace{1.0pt}_A$ is simple, $\mathrm{Hom}_{A\text{-}A}(A,A\otimes_{\Bbbk}A)$ should be embeddable into
the homomorphism spaces between the functors $\overline{\mathbf{M}}(\mathbbm{1}_{\mathtt{i}})$ and
$\overline{\mathbf{M}}(\mathrm{F})$, but the above calculation shows that this is not possible.
This completes the proof in Case~1.
\subsection{Case~2: $M=M_2$}\label{s5.5}
In this case $k=2$, $[N_1:L'_i]=2$ and $[N_2:L'_i]=1$, for $i=1,2$.
The endomorphism algebra of the multiplicity-free module $N_2$ is a direct sum of copies of $\Bbbk$.
As $N$ is faithful, it follows that $A$ embeds into $\mathrm{End}_B(N_1)$. Both simples
appear in $N_1$ with multiplicity $2$. This implies that the image of $Z(B)$ in the endomorphism
algebra of each of the two indecomposable projective $B$-modules has dimension at most $2$.
Therefore both these images must have dimension $2$ and the corresponding kernels must be different.
As $A$ embeds into the endomorphism algebra of $N_1$, we have
$\dim_{\Bbbk}(\mathrm{End}_B(N_1))\geq 3$. In particular, $N_1$ can have neither simple top nor simple socle, since in either case
the dimension of its endomorphism algebra would be at most $2$, namely the multiplicity of the top
(or socle) simple module in $N_1$. As $N_1$ cannot be semi-simple (since $A$ cannot be embedded into a
direct sum of copies of $\Bbbk$), it follows that $N_1$ has Loewy length two with both top
and socle isomorphic to $L'_1\oplus L'_2$. Consequently, $\mathrm{Rad}(B)^2=0$.
It follows that the quiver of $B$ has one loop at each vertex.
Furthermore, there are at most two arrows in each direction
between the two different vertices (and, in total, at most three such arrows).
Thus, if $B$ is not connected, then
$B=Z(B)\cong \Bbbk[x]/(x^2)\oplus \Bbbk[y]/(y^2)$. If $B$ is connected, then $Z(B)\cong A$.
Consider first the case when $B$ is not connected. In this case the above discussion implies
$N_1\cong B$ while $N_2\cong \Bbbk_{x}\oplus \Bbbk_{y}$, the direct sum of two non-isomorphic
simple $B$-modules corresponding to the $x$- and $y$-components of $B$, respectively. Setting
$D_x:=\Bbbk[x]/(x^2)$ and $D_y:=\Bbbk[y]/(y^2)$, we obtain that $\overline{\mathbf{M}}(\mathrm{F})$
is represented by the bimodule
\begin{equation}\label{eq58}
(D_x\otimes_{\Bbbk}D_x)\oplus (D_x\otimes_{\Bbbk}D_y)\oplus (D_y\otimes_{\Bbbk}\Bbbk_x)\oplus
(D_y\otimes_{\Bbbk}\Bbbk_y).
\end{equation}
The dimension of the homomorphism space from the $B$-$B$-bimodule $B$ to the bimodule
in \eqref{eq58} is $3$, where two
linearly independent homomorphisms come from homomorphisms from $D_x$ to $D_x\otimes_{\Bbbk}D_x$
and one more linearly independent homomorphism comes from the map from $D_y$ to $D_y\otimes_{\Bbbk}\Bbbk_y$.
As mentioned above, $\dim_{\Bbbk}\mathrm{Hom}_{A\text{-}A}(A,A\otimes_{\Bbbk}A)=4$ and
we get a contradiction.
If $B$ is connected, the analogue of the bimodule \eqref{eq58} will look more complicated.
However, the new quiver will be obtained from the case of disconnected $B$ by adding some
new arrows between different vertices. The restriction of the center of this new $B$ to each
vertex still coincides with the case considered in the previous paragraph. Therefore any
homomorphism from $B$ to this new bimodule will restrict, by forgetting all new arrows,
to a homomorphism from the previous paragraph. As generators for our bimodules remain the
same, this restriction map is injective. Of course, new arrows might come with new conditions,
so there is no obvious guarantee that the restriction map is surjective. In any case, the
dimension of the homomorphism space from our new $B$ to this new bimodule will be at most $3$,
which again leads to a contradiction. This completes the proof in Case~2.
\subsection{Case~3: $M=M_3$}\label{s5.6}
In this case $k=2$, $[N_i:L'_1]=2$ and $[N_i:L'_2]=1$, for $i=1,2$.
As $N$ is faithful, we obviously have $\mathrm{Rad}(B)^3=0$. Moreover, as $N$ is faithful and
$L'_2$ appears with multiplicity $1$ in both $N_1$ and $N_2$, it follows that
$\mathrm{End}(P_2)\cong\Bbbk$ and hence Lemma~\ref{lem541} gives an embedding
of $A$ into $\mathrm{End}(P_1)$. As $N$ has only $2$ direct summands and
$L_1$ appears with multiplicity $2$ in both, it follows that $A\cong \mathrm{End}(P_1)$ and
that the generators $x$ and $y$ of $A$ can be chosen such that
$\mathrm{End}(N_1e_1)\cong A/(x)$ under the natural map from $A$ to $\mathrm{End}(N_1e_1)$.
The $2$-functor $\overline{\mathbf{M}}$ induces a map from the $4$-dimensional
space $\mathrm{Hom}_{A\text{-}A}(A,A\otimes_{\Bbbk}A)$ to the space of $B$-$B$-bimodule
homomorphisms from $B$ to $P_1\otimes_{\Bbbk}N_1\oplus P_2\otimes_{\Bbbk}N_2$. Due
to the previous paragraph, the image of the generator $1\in B$ belongs to
\begin{displaymath}
e_1P_1\otimes_{\Bbbk}N_1e_1\oplus e_2P_2\otimes_{\Bbbk}N_2e_2.
\end{displaymath}
The space $e_2P_2\otimes_{\Bbbk}N_2e_2$ has dimension $1$ by the above. The space
$e_1P_1\otimes_{\Bbbk}N_1e_1$ can be identified with $A\otimes_{\Bbbk}\left(A/(x)\right)$.
Even on the level of $A$-$A$-bimodules, the image of $1$ in $A\otimes_{\Bbbk}A/(x)$
has to belong to the $2$-dimensional subspace
generated by $x\otimes y$ and $y\otimes y$. Therefore we obtain an injection of
a $4$-dimensional space into a space of dimension at most $3$, a contradiction.
The proof of Case~3 is now complete.
\subsection{Case~4: $M=M_4$}\label{s5.7}
In this case $k=3$ and $[N_i:L'_j]=1$, for all $i,j=1,2,3$. In particular,
all $N_i$'s are multiplicity free and hence the endomorphism algebra of
each $N_i$ is a direct sum of copies of $\Bbbk$.
Since $N$ is faithful, $Z(B)$ embeds into $\prod_i\mathrm{End}_B(N_i)$, which is a direct sum of copies of $\Bbbk$; hence $\overline{\mathbf{M}}$ cannot embed the local algebra $A$ into $Z(B)$, contradicting
Lemma~\ref{lem541}. This completes the proof in Case~4 and thus of the whole Theorem~\ref{thm51}.
\section{Introduction} \label{intro}
Nonlocal continuum theories such as peridynamics \cite{Silling_10_AAM}, physics-based nonlocal elasticity \cite{DiPaola_09_JE}, or nonlocal descriptions resulting from homogenization of nonlinear damage models \cite{Han_12_IJNME} can incorporate strong nonlocal effects due to long-range forces at the mesoscale or microscale.
As a result, for problems where these effects cannot be neglected, such descriptions are more accurate than local models based on Partial Differential Equations (PDEs). However, their computational cost is also significantly higher than that of PDEs.
Local-to-Nonlocal (LtN) coupling methods aim to combine the computational efficiency of PDEs with the accuracy of nonlocal models.
The need for LtN couplings is especially acute when the size of the computational domain is such that the nonlocal solution becomes prohibitively expensive to compute, yet the nonlocal model is required to accurately resolve small scale features such as crack tips or dislocations that can affect the global material behavior \cite{DElia_2020review}.
LtN couplings involve two fundamentally different mathematical descriptions of the same physical phenomena. The principal challenge is the stable and accurate merging of these descriptions into a physically consistent coupled formulation.
In this paper we address this challenge by recasting the LtN coupling as an optimization problem. The objective is to minimize the mismatch of the local and nonlocal solutions on the overlap of their respective subdomains, the constraints are the associated governing equations, and the controls are the virtual nonlocal volume constraint and the local boundary condition. We formulate and analyze this optimization-based LtN approach in the context of local and nonlocal diffusion models \cite{Du_12_SIREV}.
Our coupling strategy differs fundamentally from other LtN approaches such as the extension of the Arlequin \cite{BenDhia_05_IJNME} method to LtN couplings \cite{Han_12_IJNME}, force-based couplings \cite{Seleson_13_CMS}, or the morphing approach \cite{Azdoud_13_IJSS,Lubineau_12_JMPS}. The first two schemes blend the energies or the forces of the two models over a dedicated ``gluing'' area, while the third one implements the coupling through a gradual change in the material properties characterizing the two models over a ``morphing'' region. In either case, resulting LtN methods treat the coupling condition as a constraint, similar to classical domain decomposition methods.
In contrast, we treat this condition as an optimization objective, and keep the two models separate. This strategy brings about valuable theoretical and computational advantages. For instance, the coupled problem passes a patch test by construction and its well-posedness typically follows from the well-posedness of the constraint equations.
Our approach has its roots in non-standard \emph{optimization-based domain decomposition} methods for PDEs \cite{Discacciati_13_SIOPT,Du_01_SINUM,Du_00_SINUM,Gervasio_01_NM,Gunzburger_00_AMC,Gunzburger_00_SINUM,Gunzburger_99_CMA,Kuberry_13_CMAME}. It has also been applied to the coupling of discrete atomistic and continuum models in \cite{Bochev_14_SINUM,Bochev_14_LSSC} and multiscale problems \cite{Abdulle_15}.
This paper continues the efforts in \cite{Bochev_14_INPROC}, which presented an initial optimization-based LtN formulation, in
\cite{Bochev_16b_CAMWA}, which focussed on specializing the formulation to mixed boundary conditions and mixed volume constraints and on its practical demonstration using Sandia's agile software components toolkit, and in \cite{DElia_15Handbook}, which extended the formulation to peridynamics.
The main contributions of this paper include (i) rigorous analysis of the LtN coupling error, (ii) formal proof of the well-posedness of the discretized LtN formulation, and (iii) rigorous convergence analysis.
We have organized the paper as follows. Section \ref{notation} introduces notation, basic notions of nonlocal vector calculus and the relevant mathematical models. We present the optimization-based LtN method and prove its well-posedness in Section \ref{optimization} and study its error in Section \ref{error-analysis}. Section \ref{finite-dim} focusses on the discrete LtN formulation, its well-posedness and numerical analysis. A collection of numerical examples in Section \ref{numerical-tests} illustrates the theoretical properties of the method using a simple one-dimensional setting.
\section{Preliminaries} \label{notation}
Let $\omega$ be a bounded open domain in ${\mathbb{R}^d}$, $d=2,3$, with Lipschitz-continuous boundary $\partial\omega$. We use the standard notation $H^s(\omega)$ for a Sobolev space of order $s$ with norm and inner product $\|\cdot\|_{s,\omega}$ and $(\cdot,\cdot)_{s,\omega}$, respectively. As usual, $H^0(\omega):=L^2(\omega)$ is the space of all square integrable functions on $\omega$. The subset of all functions in $H^1(\omega)$ that vanish on $\partial\omega$ is $H^1_0({\omega}) := \{u\in H^1(\omega): \; u|_{\partial\omega}=0 \}$.
The nonlocal model in this paper requires nonlocal vector calculus operators \cite[\S3.2]{Du_12_SIREV} acting on functions
$u(\bm{x})\colon {\mathbb{R}^d} \to \mathbb{R}$ and ${\boldsymbol \nu}(\bm{x},\bm{y})\colon {\mathbb{R}^d}\times{\mathbb{R}^d}\to {\mathbb{R}^d}$.
Let $\gamma (\bm{x}, \bm{y} )\colon{\mathbb{R}^d}\times{\mathbb{R}^d}\to\mathbb{R}$ and ${\boldsymbol\alpha}(\bm{x},\bm{y}) \colon {\mathbb{R}^d}\times{\mathbb{R}^d}\to {\mathbb{R}^d}$ be a non-negative symmetric kernel and an antisymmetric function, respectively, i.e., $\gamma(\bm{x}, \bm{y} )=\gamma(\bm{y}, \bm{x} )\ge0$ and ${\boldsymbol\alpha}(\bm{y},\bm{x})=-{\boldsymbol\alpha}(\bm{x},\bm{y})$.
The nonlocal \emph{diffusion}\footnote{More general representations of $\mathcal{L}$, associated with non-symmetric and not necessarily positive kernel functions exist. Such nonlocal operators may define models for non-symmetric diffusion phenomena such as non-symmetric jump processes \cite{DElia_17CMAM}.} of $u$ is an operator $\mathcal{L}(u)\colon {\mathbb{R}^d} \to \mathbb{R}$ defined by
\begin{displaymath}
\mathcal{L} u(\bm{x}) := 2\int_{\mathbb{R}^d} \big(u(\bm{y})-u(\bm{x})\big) \, \gamma (\bm{x}, \bm{y} )\,d\bm{y} \qquad \bm{x} \in {\mathbb{R}^d},
\end{displaymath}
and its nonlocal \emph{gradient} is a mapping $\mathcal{G}(u)\colon {\mathbb{R}^d}\times{\mathbb{R}^d}\to{\mathbb{R}^d}$ given by
\begin{subequations}\label{eq:ndivgrad}
\begin{equation}\label{ngra}
\mathcal{G} u(\bm{x},\bm{y}) := \big(u(\bm{y})-u(\bm{x})\big) {\boldsymbol\alpha}(\bm{x},\bm{y}) \qquad \bm{x},\bm{y}\in{\mathbb{R}^d}.
\end{equation}
Finally, the nonlocal \emph{divergence} of ${\boldsymbol \nu}(\bm{x},\bm{y})$ is a mapping $\mathcal{D}({\boldsymbol \nu})\colon {\mathbb{R}^d} \to \mathbb{R}$ defined by\footnote{The paper \cite{Du_13_MMMAS} shows that the adjoint $\mathcal{D}^*=-\mathcal{G}$.}
\begin{equation}\label{ndiv}
\mathcal{D}{\boldsymbol \nu}(\bm{x}) := \int_{{\mathbb{R}^d}} \big({\boldsymbol \nu}(\bm{x},\bm{y})+{\boldsymbol \nu}(\bm{y},\bm{x})\big)\cdot{\boldsymbol\alpha}(\bm{x},\bm{y})\,d\bm{y}\qquad \bm{x}\in{\mathbb{R}^d}.
\end{equation}
\end{subequations}
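Although not needed in what follows, it is instructive to note that, for suitably scaled localized kernels, $\mathcal{L}$ recovers classical diffusion in the local limit. For example, in one dimension with the illustrative kernel $\gamma_\varepsilon(x,y)=\tfrac{3}{2\varepsilon^{3}}$ for $|x-y|\le\varepsilon$ and zero otherwise (a scaling chosen here for illustration only), a Taylor expansion gives, for smooth $u$,
\begin{displaymath}
\mathcal{L} u(x)=\frac{3}{\varepsilon^{3}}\int_{x-\varepsilon}^{x+\varepsilon}\big(u(y)-u(x)\big)\,dy
= u''(x)+O(\varepsilon^{2}),
\end{displaymath}
so that $\mathcal{L}$ converges to the second derivative as $\varepsilon\to 0$.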
Furthermore, given a second-order symmetric tensor ${\boldsymbol\Phi}(\bm{x},\bm{y})={\boldsymbol\Phi}(\bm{y},\bm{x})$, equations \eqref{eq:ndivgrad} imply that
\begin{displaymath}
\mathcal{D}\big({\boldsymbol\Phi} \mathcal{G} u)(\bm{x}) = 2\int_{{\mathbb{R}^d}}\big(u(\bm{y})-u(\bm{x})\big) \big( {\boldsymbol\alpha}(\bm{x},\bm{y})\cdot {\boldsymbol\Phi}(\bm{x},\bm{y}){\boldsymbol\alpha}(\bm{x},\bm{y})\big) \,d\bm{y}.
\end{displaymath}
Thus, with the identification $\gamma(\bm{x},\bm{y}):={\boldsymbol\alpha}(\bm{x},\bm{y})\cdot {\boldsymbol\Phi}(\bm{x},\bm{y}) {\boldsymbol\alpha}(\bm{x},\bm{y})$
the operator $\mathcal{L}$ is a composition of the nonlocal divergence and gradient operators: $\mathcal{L} u = \mathcal{D}\big({\boldsymbol\Phi} \mathcal{G} u)$.
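In more detail, the displayed identity follows from the symmetry of ${\boldsymbol\Phi}$ and the antisymmetry of ${\boldsymbol\alpha}$: with ${\boldsymbol \nu}={\boldsymbol\Phi}\mathcal{G} u$, one checks
\begin{displaymath}
\begin{aligned}
{\boldsymbol \nu}(\bm{x},\bm{y})+{\boldsymbol \nu}(\bm{y},\bm{x})
&= \big(u(\bm{y})-u(\bm{x})\big){\boldsymbol\Phi}(\bm{x},\bm{y}){\boldsymbol\alpha}(\bm{x},\bm{y})
+\big(u(\bm{x})-u(\bm{y})\big){\boldsymbol\Phi}(\bm{y},\bm{x}){\boldsymbol\alpha}(\bm{y},\bm{x})\\
&= 2\big(u(\bm{y})-u(\bm{x})\big){\boldsymbol\Phi}(\bm{x},\bm{y}){\boldsymbol\alpha}(\bm{x},\bm{y}),
\end{aligned}
\end{displaymath}
and taking the dot product with ${\boldsymbol\alpha}(\bm{x},\bm{y})$ in \eqref{ndiv} produces exactly the kernel $\gamma(\bm{x},\bm{y})={\boldsymbol\alpha}(\bm{x},\bm{y})\cdot {\boldsymbol\Phi}(\bm{x},\bm{y}){\boldsymbol\alpha}(\bm{x},\bm{y})$.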
We define the \emph{interaction domain} of an open bounded region $\omega\subset{\mathbb{R}^d}$ as
\begin{displaymath}
{\eta} = \{\bm{y}\in{\mathbb{R}^d}\setminus\omega: \; \gamma(\bm{x},\bm{y})\neq 0 \text{ for some } \bm{x}\in\omega\},
\end{displaymath}
and set $\Omega =\omega\cup{\eta}$. In this paper we consider kernels $\gamma$ such that for $\bm{x}\in{\omega}$
\begin{equation}\label{eq:gamma-conds}
\left\{\begin{aligned}
\gamma(\bm{x},\bm{y})\geq 0 \quad &\forall\, \bm{y}\in B_\varepsilon(\bm{x}) \\
\gamma(\bm{x},\bm{y}) = 0 \quad &\forall\, \bm{y}\in {\mathbb{R}^d} \setminus B_\varepsilon(\bm{x}),
\end{aligned}\right.
\end{equation}
where $B_\varepsilon(\bm{x}) = \{\bm{y}\in {\mathbb{R}^d}: \; |\bm{x}-\bm{y}|\leq\varepsilon\}$. Kernels that satisfy \eqref{eq:gamma-conds} are referred to as localized kernels with {\it interaction radius} $\varepsilon$. It is easy to see that for such kernels the interaction domain is a layer of thickness $\varepsilon$ that surrounds $\omega$, i.e.
\begin{displaymath}
{\eta} = \{ \bm{y}\in {\mathbb{R}^d}\setminus\omega: \; \inf_{\bm{x}\in\omega}|\bm{y}-\bm{x}|\leq\varepsilon\};
\end{displaymath}
see Fig. \ref{full-domain} for a two-dimensional example.
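This characterization is straightforward to confirm computationally; the toy check below (our own illustration, with made-up parameter values) compares the interaction domain obtained directly from the definition with the $\varepsilon$-layer, for $\omega=(0,1)$ in one dimension:

```python
import numpy as np

# Toy 1D check (our own illustration): for omega = (0,1) and a kernel
# supported on |x - y| <= eps, the set of exterior points that interact
# with omega coincides with the eps-layer around omega.
eps = 0.1
xs = np.linspace(0.0, 1.0, 2001)           # samples of omega
ys = np.linspace(-0.5, 1.5, 4001)          # candidate points y
outside = (ys < 0.0) | (ys > 1.0)

# eta by definition: y outside omega with gamma(x,y) != 0 for some x in omega
interacts = np.array([np.any(np.abs(xs - yy) <= eps) for yy in ys])
eta_by_def = outside & interacts

# eta as the eps-layer: dist(y, omega) <= eps
dist = np.where(ys < 0.0, -ys, ys - 1.0)   # distance to omega for exterior y
eta_layer = outside & (dist <= eps)

print(np.array_equal(eta_by_def, eta_layer))   # True
```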
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{full-domainN.pdf}
\vspace{-1ex}
\caption{Two-dimensional domain ${\omega}$ and interaction domain ${\eta}$ with interaction radius $\varepsilon$.}
\vspace{-2ex}
\label{full-domain}
\end{figure}
For a symmetric positive definite tensor ${\boldsymbol\Phi}$ we respectively define the nonlocal energy semi-norm, nonlocal energy space, and nonlocal volume-constrained energy space by
\begin{subequations}
\begin{align}
&|||v|||_{{\Omega}}^2 := \frac12\int_{{\Omega}}\int_{{{\Omega}}} \mathcal{G}
v\cdot({\boldsymbol\Phi}\mathcal{G} v)\,d\bm{y} \, d\bm{x} \label{energynorm}\\
&V({{\Omega}}) := \left\{ v \in L^2({\Omega}) \,\,:\,\,
|||v|||_{{\Omega}} < \infty \right\} \label{vspace}\\
&V_{\tau}({{\Omega}}) := \left\{v\in V({{\Omega}}) \,\,:\,\, v=0\;{\rm on}\;\tau\right\},
\;\; \hbox{ for $\tau \subseteq {\eta}$.} \label{vcspace}
\end{align}
\end{subequations}
We also define the volume-trace space $\widetilde V({\eta}):=\{v|_{\eta}: \,v\in V({{\Omega}})\}$, and an associated norm
\begin{equation}\label{eq:trace-norm}
\|\sigma\|_{\widetilde V({\eta})}:=
\inf\limits_{v\in V({{\Omega}}), v|_{\eta}=\sigma} \, |||v |||_{{\Omega}}.
\end{equation}
We refer to \cite{Du_12_SIREV,Du_13_MMMAS} for further information about the nonlocal vector calculus.
In order to avoid technicalities not germane to the coupling scheme, in this paper we consider integrable kernels. Examples of applications modeled by such kernels can be found in \cite{Aksoylu:2010,Aksoylu:2011,Andreu:2010}. Specifically, we assume that there exist positive constants $\gamma_0$ and $\gamma_2$ such that
\begin{equation}\label{eq:integrable-kernel}
\gamma_0\leq \int_{{\Omega}\cap B_\varepsilon(\bm{x})} \gamma(\bm{x},\bm{y})\,d\bm{y}
\qquad {\rm and} \qquad
\int_{\Omega} \gamma(\bm{x},\bm{y})^2d\bm{y} \leq \gamma_2^2,
\end{equation}
for all $\bm{x}\in{\Omega}$.
Note that, by the Cauchy-Schwarz inequality, this also implies that there exists a positive constant $\gamma_1$ such that for all $\bm{x}\in{\Omega}$
\begin{equation}\label{eq:gamma1}
\int_{\Omega} \gamma(\bm{x},\bm{y})\,d\bm{y} \leq \gamma_1.
\end{equation}
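As a concrete instance, the constant kernel $\gamma=c$ on $B_\varepsilon(\bm{x})$ satisfies \eqref{eq:integrable-kernel} in one dimension with $\gamma_0=c\varepsilon$ and $\gamma_2^2=2\varepsilon c^2$; a quadrature check (our own sketch, with hypothetical values of $c$ and $\varepsilon$) is below:

```python
import numpy as np

# Toy 1D check (our own sketch) of the integrability conditions for the
# constant localized kernel gamma = c on |x - y| <= eps, with omega = (0,1)
# and Omega = (-eps, 1 + eps).
c, eps = 3.0, 0.1
a, b = -eps, 1.0 + eps                      # Omega = (a, b)
ys = np.linspace(a, b, 40001)
h = ys[1] - ys[0]

def kernel_integral(x, power):
    g = c * (np.abs(ys - x) <= eps)
    return np.sum(g ** power) * h           # Riemann sum over Omega

xs = np.linspace(a, b, 201)
I1 = np.array([kernel_integral(x, 1) for x in xs])
I2 = np.array([kernel_integral(x, 2) for x in xs])

gamma0 = c * eps                            # worst case: x at the ends of Omega
gamma2_sq = 2.0 * eps * c ** 2              # interior x: |B_eps(x)| = 2*eps
tol = 1e-2                                  # quadrature tolerance
print(np.all(I1 >= gamma0 - tol), np.all(I2 <= gamma2_sq + tol))
# Cauchy-Schwarz over Omega yields a valid gamma_1 = |Omega|^(1/2) * gamma_2:
gamma1 = np.sqrt(b - a) * np.sqrt(gamma2_sq)
print(np.all(I1 <= gamma1 + tol))
```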
In \cite[\S4.2]{Du_12_SIREV} this class of kernels (referred to as Case 2) is rigorously analyzed; we report below an important result, useful throughout the paper.
\begin{lemma}\label{lem:L2-equivalence}
Let the function $\gamma$ satisfy \eqref{eq:gamma-conds} and \eqref{eq:integrable-kernel}. Then there exist positive constants $C_{pn}$ and $C^*$ such that
\begin{equation}\label{eq:L2equivalence}
\dfrac{1}{C_{pn}} \|u\|_{0,{\Omega}} \leq |||u|||_{\Omega} \leq C^* \|u\|_{0,{\Omega}}
\quad \forall\,u\in V_\tau({\Omega}).
\end{equation}
Furthermore, the energy space $V_\tau({\Omega})$ is equivalent to $L_\tau^2({\Omega})=\{v\in L^2({\Omega}): v|_\tau=0\}$.
\end{lemma}
\smallskip The latter is a combination of results in Lemmas 4.6 and 4.7 and Corollary 4.8 in \cite[\S4.3.2]{Du_12_SIREV}. Note that the lower bound in \eqref{eq:L2equivalence} represents a nonlocal Poincar\'e inequality.
Although singular kernels are not included in our analysis, they appear in applications such as peridynamics; the numerical results reported in this paper suggest that the coupling scheme handles such kernels without difficulty. Their analysis, however, is beyond the scope of this paper.
\subsection{Local-to-Nonlocal coupling setting} \label{nvcp}
Consider a bounded open region $\omega\subset{\mathbb{R}^d}$ with interaction domain ${\eta}$. Given $f_n\in L^2({\omega})$ and $\sigma_n\in \widetilde V({\eta})$ we assume that the volume-constrained\footnote{The volume constraint in \eqref{nonlocal-problem} is the nonlocal analogue of a Dirichlet boundary condition, in which the closed region ${\eta}$ plays the role of a ``boundary''.} nonlocal diffusion equation
\begin{equation} \label{nonlocal-problem}
\left\{\begin{array}{rlll}
-\mathcal{L} u_n & = & f_n & \bm{x}\in {\omega} \\
u_n & = & \sigma_n & \bm{x}\in{\eta},
\end{array}\right.
\end{equation}
provides an accurate description of the relevant physical processes in ${\Omega}={\omega}\cup{\eta}$. Let $\Gamma=\partial{\Omega}$. We assume that the local diffusion model given by the Poisson equation
\begin{equation} \label{local-problem}
\left\{\begin{array}{rlll}
-\Delta u_l & = & f_l & \bm{x}\in {\Omega} \\
u_l & = & \sigma_l & \bm{x}\in \Gamma,
\end{array}\right.
\end{equation}
with suitable boundary data $\sigma_l\in H^\frac12(\Gamma)$ and forcing term $f_l\in L^2({\Omega})$ is a good approximation of \eqref{nonlocal-problem} whenever the latter has sufficiently ``nice'' solutions. In this work we define $f_l$ to be an extension of $f_n$ by $0$ in ${\eta}$, specifically,
\begin{equation}\label{fl-def}
f_l=\left\{
\begin{aligned}
f_n & \quad \bm{x}\in{\omega}\\
0 & \quad \bm{x}\in{\eta}.
\end{aligned}\right.
\end{equation}
For a symmetric positive definite ${\boldsymbol\Phi}$ standard arguments of variational theory show that the weak form\footnote{Multiplication of \eqref{nonlocal-problem} by a test function $z_n\in V_{\eta}({\Omega})$, integration over ${\omega}$ and application of the first nonlocal Green's identity \cite{Du_13_MMMAS} yield the weak form \eqref{nonlocal-weak} of the nonlocal problem.}
\begin{equation}\label{nonlocal-weak}
\int_{{\Omega}}\int_{{\Omega}} \mathcal{G}u_n \cdot ({\boldsymbol\Phi}\mathcal{G}z_n)\, d\bm{y} \, d\bm{x} = \int_{\omega} f_n z_n \, d\bm{x}
\qquad \forall\,z_n\in V_{\eta}({{\Omega}}) \,
\end{equation}
of \eqref{nonlocal-problem} is well-posed \cite{Du_12_SIREV}, i.e., \eqref{nonlocal-weak} has a unique solution such that
\begin{equation}\label{cont-data-nonlocal}
|||u_n|||_{{\Omega}} \leq K_n (\|f_n\|_{0,{\omega}}+\|\sigma_n\|_{\widetilde V({\eta})})
\end{equation}
for some positive constant $K_n$. In this work, for simplicity and without loss of generality, we set ${\boldsymbol\Phi}={\bf I}$.
Although \eqref{nonlocal-weak} and the nonlocal calculus \cite{Du_12_SIREV} enable the formulation and analysis of finite element methods for \eqref{nonlocal-problem} that parallel those for the Poisson equation \eqref{local-problem}, the resulting methods may be computationally intractable for large domains. The root cause is that long-range interactions increase the density of the resulting algebraic system, making it more expensive to assemble and solve.
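The effect is easy to quantify in one dimension, where a mesh of size $h$ couples each node to roughly $2\varepsilon/h$ neighbors instead of the three entries of the local stencil. The sketch below (our own back-of-the-envelope illustration; the numbers are made up) counts the nonzeros per row:

```python
import numpy as np

# Toy 1D sparsity comparison (our own illustration): a nonlocal
# discretization couples each node to every node within the horizon eps,
# versus the 3-point stencil of the local FD Laplacian.
n, eps = 200, 0.1
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                   # cell midpoints

# small slack avoids floating-point ties at |x_i - x_j| = eps exactly
coupled = np.abs(x[:, None] - x[None, :]) <= eps + 1e-12
nnz_nonlocal = int(coupled.sum(axis=1).max())  # nonzeros in the densest row
nnz_local = 3                                  # tridiagonal local stencil

print(nnz_nonlocal, nnz_local)                 # 41 3
```

Here $2\varepsilon/h = 40$ neighbors plus the diagonal, against three for the local operator; the gap widens as $h\to 0$ at fixed $\varepsilon$.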
\section{Optimization-based LtN formulation}\label{optimization}
For clarity we consider \eqref{nonlocal-problem} and \eqref{local-problem} with homogeneous Dirichlet conditions on ${\eta}$ and $\Gamma$, respectively.
To describe the approach it suffices to examine a coupling scenario where these problems operate on two overlapping subdomains of ${\Omega}$.
Thus we consider partitioning of ${\Omega}$ into a nonlocal subdomain ${\omega_n}$ with interaction volume ${\eta_n}$ and a local subdomain ${\Omega_l}$, with boundary $\Gamma_l$, such that ${\Omega_n}:={\omega_n}\cup{\eta_n}\subset{\Omega}$, $\Omega={\Omega_n}\cup{\Omega_l}$ and ${\Omega_o} = {\Omega_n}\cap{\Omega_l}\neq\emptyset$; see Fig. \ref{domains}.
\begin{figure}
\centering
{\includegraphics[width=0.55\textwidth]{2D-domain-rectangle}}
\vspace{-2ex}
\caption{An example LtN domain configuration in two dimensions.}
\vspace{-2ex}
\label{domains}
\end{figure}
Let ${\eta_D}={\eta}\cap{\eta_n}$, ${\eta_c}={\eta_n}\setminus{\eta_D}$, ${\Gamma_D}=\Gamma\cap\Gamma_l$, and ${\Gamma_c}=\Gamma_l\setminus{\Gamma_D}$; see Fig. \ref{domains} and Appendix \ref{app:notation} for a summary of notation and definitions. Restrictions of \eqref{nonlocal-problem} and \eqref{local-problem} to ${\omega_n}$ and ${\Omega_l}$ are given by
\begin{equation}\label{eq:subprob}
\left\{\begin{array}{rlll}
-\mathcal{L}u_n & = & f_n & \bm{x}\in{\omega_n}\\
u_n & = & \theta_n & \bm{x}\in{\eta_c}\\
u_n & = & 0 & \bm{x}\in{\eta_D}
\end{array}\right.
\quad\mbox{and}\quad
\left\{\begin{array}{rlll}
-\Delta u_l & = & f_l & \bm{x}\in{\Omega_l}\\
u_l & = & \theta_l & \bm{x}\in{\Gamma_c}\\
u_l & = & 0 & \bm{x}\in{\Gamma_D},
\end{array}\right.
\end{equation}
respectively, where $\theta_n\in \Theta_n=\{v_n|_{\eta_c}: v_n\in V_{\eta_D}({\Omega_n})\}$ and
$\theta_l\in\Theta_l=\{v_l|_{\Gamma_c}:v_l\in H^1_{\Gamma_D}({\Omega_l})\}$
are an undetermined Dirichlet volume constraint and an undetermined Dirichlet boundary condition, respectively.
The following constrained optimization problem
\begin{equation}\label{minimization}
\begin{array}{c}
\displaystyle\min\limits_{u_n,u_l,\theta_n,\theta_l} \displaystyle J(u_n,u_l)
\;\;
\mbox{subject to \eqref{eq:subprob},}
\quad \hbox{where} \;\;
J(u_n,u_l) =\frac12 \| u_n-u_l \|^2_{0,{\Omega_o}}
\end{array}
\end{equation}
defines the optimization-based LtN coupling. In this formulation the subdomain problems \eqref{eq:subprob} are the optimization constraints, $u_n$ and $u_l$ are the states and $\theta_n$ and $\theta_l$ are the controls. We equip the control space $\Theta_n\times\Theta_l$ with the norm
\begin{equation}\label{eq:control-norm}
\|(\sigma_n,\sigma_l)\|^2_{\Theta_n\times\Theta_l} =
\|\sigma_n\|^2_{0,\eta_c} + \|\sigma_l\|^2_{\frac12,{\Gamma_c}}.
\end{equation}
In contrast to blending, \eqref{minimization} is an example of a divide-and-conquer strategy, as the nonlocal and local problems operate independently in ${\Omega_n}$ and ${\Omega_l}$, respectively.
Given an optimal solution $(u_n^*,u_l^*, \theta_n^*,\theta_l^*)\in V_{\eta_D}({\Omega_n})\times H^1_{\Gamma_D}({\Omega_l})\times\Theta_n\times\Theta_l$ of \eqref{minimization} we define the LtN solution $u^{*} \in L^2({\Omega})$ by splicing together the optimal states:
\begin{equation}\label{unl-definition}
u^{*} = \left\{
\begin{array}{ll}
u_n^* & \bm{x}\in{\Omega_n}\\
u_l^* & \bm{x}\in{\Omega_l}\setminus{\Omega_o}.
\end{array}\right.
\end{equation}
We note that there are several ways to define the LtN solution; since our ultimate goal is to approximate a globally nonlocal solution, we set $u^*$ equal to the optimal nonlocal solution over the whole nonlocal domain.
The next section verifies that \eqref{minimization} is well-posed.
\subsection{Well-posedness}\label{well-posedness}
We show that for any pair of controls the subproblems \eqref{eq:subprob} have unique solutions $u_n(\theta_n)$ and $u_l(\theta_l)$, respectively. Elimination of the states from \eqref{minimization} yields the equivalent reduced-space form of this problem in terms of $\theta_n$ and $\theta_l$ only:
\begin{equation}\label{reduced-minimization}
\displaystyle\min\limits_{\theta_n,\theta_l} J(\theta_n,\theta_l) \quad \hbox{where} \quad
J(\theta_n,\theta_l) = \frac12 \| u_n(\theta_n)-u_l(\theta_l) \|^2_{0,{\Omega_o}}.
\end{equation}
To show that \eqref{reduced-minimization} is well-posed we start as in \cite{Gervasio_01_NM,Bochev_14_SINUM} and
split, for any given $(\theta_n,\theta_l)$, the solutions of the state equations into a harmonic and a homogeneous part.
The harmonic components $v_n(\theta_n)$ and $v_l(\theta_l)$ of the states solve the equations
\begin{equation}\label{eq:harm}
\left\{\begin{array}{rll}
-\mathcal{L}v_n & = 0 & \bm{x}\in{\omega_n}\\
v_n & = \theta_n & \bm{x}\in{\eta_c}\\
v_n & = 0 & \bm{x}\in{\eta_D},
\end{array}\right.
\quad\mbox{and}\quad
\left\{\begin{array}{rll}
-\Delta v_l & = 0 & \bm{x}\in{\Omega_l}\\
v_l & =\theta_l & \bm{x}\in{\Gamma_c}\\
v_l & =0 & \bm{x}\in{\Gamma_D},
\end{array}\right.
\end{equation}
respectively. The homogeneous components $u_n^0$ and $u_l^0$ solve a similar set of equations but with homogeneous volume constraint and boundary condition, respectively:
\begin{equation}\label{eq:homo}
\left\{\begin{array}{rll}
-\mathcal{L}u_n^0 &= f_n & \bm{x}\in{\omega_n}\\
u_n^0 &=0 & \bm{x}\in{\eta_n}
\end{array}\right.
\quad\mbox{and}\quad
\left\{\begin{array}{rll}
-\Delta u_l^0 &= f_l & \bm{x}\in{\Omega_l}\\
u_l^0 & =0 & \bm{x}\in\Gamma_l .
\end{array}\right.
\end{equation}
In terms of these components,
$u_n=v_n(\theta_n)+u_n^0$ and $u_l =v_l(\theta_l)+ u_l^0$, and the objective becomes
\begin{displaymath}
\begin{aligned}
J(\theta_n,\theta_l) &
=
\frac12 \| v_n(\theta_n)\!-\!v_l(\theta_l) \|^2_{0,{\Omega_o}} +
\big( u_n^0 \!-\! u_l^0, v_n(\theta_n) \!-\! v_l(\theta_l) \big)_{0,{\Omega_o}}\!\! +
\frac12 \| u_n^0\!-\!u_l^0 \|^2_{0,{\Omega_o}}.
\end{aligned}
\end{displaymath}
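The splitting of the states is just linear superposition, as a small finite-difference sketch confirms (our own 1D illustration with hypothetical data; the discrete Laplacian stands in for both operators):

```python
import numpy as np

# Discrete illustration (our own 1D sketch, hypothetical data) of the
# splitting u = v(theta) + u^0 for -u'' = f on (0,1) with Dirichlet data
# (theta_0, theta_1), discretized by the standard finite-difference Laplacian.
n = 50
h = 1.0 / n
m = n - 1
A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h ** 2
x = np.linspace(h, 1 - h, m)
f = np.sin(np.pi * x)
theta0, theta1 = 0.7, -0.3

def solve(f_rhs, t0, t1):
    rhs = f_rhs.copy()
    rhs[0] += t0 / h ** 2                   # fold Dirichlet data into the rhs
    rhs[-1] += t1 / h ** 2
    return np.linalg.solve(A, rhs)

u_full = solve(f, theta0, theta1)           # full state
v_harm = solve(0 * f, theta0, theta1)       # "harmonic" part: zero forcing
u_homog = solve(f, 0.0, 0.0)                # homogeneous part: zero data
print(np.allclose(u_full, v_harm + u_homog))   # True
```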
The Euler-Lagrange equation of (\ref{reduced-minimization}) is given by:
seek $(\sigma_n,\sigma_l)\in \Theta_n\times\Theta_l$ such that
\begin{equation}\label{eq:EL-reduced}
Q(\sigma_n,\sigma_l;\mu_n,\mu_l) = F(\mu_n,\mu_l)\quad\forall \, (\mu_n,\mu_l) \in \Theta_n\times\Theta_l,
\end{equation}
where
$Q(\sigma_n,\sigma_l;\mu_n,\mu_l) = \big(v_n(\sigma_n)-v_l(\sigma_l), \,v_n(\mu_n)-v_l(\mu_l)\big)_{0,{\Omega_o}}$ and
$F(\mu_n,\mu_l) = -\big( u_n^0 - u_l^0, v_n(\mu_n) - v_l(\mu_l) \big)_{0,{\Omega_o}}$.
The following lemma establishes a key property of $Q$.
\smallskip\begin{lemma}\label{lemma-ip}
The form $Q(\cdot;\cdot)$ defines an inner product on $\Theta_n\times\Theta_l$.
\end{lemma}
\smallskip{\it Proof.}
By construction $Q(\cdot;\cdot)$ is symmetric and bilinear. Thus, it suffices to show that $Q(\cdot;\cdot)$ is positive definite,
i.e., $Q(\sigma_n,\sigma_l; \sigma_n,\sigma_l)=0$ if and only if $(\sigma_n,\sigma_l)=(0,0)$. If $(\sigma_n,\sigma_l)=(0,0)$, then $v_n(\sigma_n)=0$ and $v_l(\sigma_l)=0$, implying $Q(\sigma_n,\sigma_l;\sigma_n,\sigma_l)=0$. Conversely, if $Q(\sigma_n,\sigma_l;\sigma_n,\sigma_l)=0$, then $v_n(\sigma_n)-v_l(\sigma_l)=0$ in ${\Omega_o}$. Let $v = v_n(\sigma_n)=v_l(\sigma_l)$ in ${\Omega_o}$.
Then we have that (i) $\Delta v=0$ for all $\bm{x}\in{\Omega_o}$, i.e., $v$ is harmonic in ${\Omega_o}$, and (ii) $v=0$ for all $\bm{x}\in {\Omega_o}\cap{\eta_D}$, i.e., $v$ vanishes on a non-empty interior set of ${\Omega_o}$. By the identity principle for harmonic functions, $v\equiv 0$ in $\overline{\Omega_o}$.
Because $\sigma_n=v$ in ${\eta_c}$ and $\sigma_l=v$ on ${\Gamma_c}$ it follows that $(\sigma_n,\sigma_l)=(0,0)$.
$\square$
\smallskip As a result, $Q(\cdot;\cdot)$ endows the control space $\Theta_n\times\Theta_l$ with the ``energy'' norm
\begin{equation}\label{Qnorm}
\|(\sigma_n,\sigma_l)\|^2_* := Q(\sigma_n,\sigma_l;\sigma_n,\sigma_l) =\|v_n(\sigma_n)-v_l(\sigma_l) \|^2_{0,{\Omega_o}}.
\end{equation}
Note that $Q$ and $F$ are continuous with respect to the energy norm.
However, the control space $\Theta_n\times\Theta_l$ may not be complete with respect to the energy norm. In this case, following \cite{Abdulle_15,Discacciati_13_SIOPT}, we consider the optimization problem \eqref{reduced-minimization} on the completion $\widetilde\Theta_n\times\widetilde\Theta_l$ of the control space.
\begin{theorem}\label{thm:well-posedness}
Let $\widetilde\Theta_n\times\widetilde\Theta_l$ denote a completion\footnote{If $\Theta_n\times\Theta_l$ is complete, then of course we have that $\widetilde\Theta_n\times\widetilde\Theta_l=\Theta_n\times\Theta_l$.} of the control space with respect to the energy norm \eqref{Qnorm}. Then, the minimization problem
\begin{equation}\label{eq:complete-reduced-minimization}
\min_{(\theta_n,\theta_l)\in\widetilde\Theta_n\times\widetilde\Theta_l} J(\theta_n,\theta_l)
\end{equation}
has a unique minimizer $(\widetilde\theta_n^*,\widetilde\theta_l^*)\in\widetilde\Theta_n\times\widetilde\Theta_l$ such that
\begin{equation}\label{eq:complete-Euler-Lagrange}
\widetilde Q(\widetilde\theta_n^*,\widetilde\theta_l^*;\mu_n,\mu_l) = \widetilde F(\mu_n,\mu_l)
\quad \forall \, (\mu_n,\mu_l)\in \widetilde\Theta_n\times\widetilde\Theta_l.
\end{equation}
\end{theorem}
{\it Proof.}
Equation \eqref{eq:complete-Euler-Lagrange} is a necessary condition for any minimizer of \eqref{eq:complete-reduced-minimization}. Assume first that the control space is complete, i.e., $\Theta_n\times\Theta_l = \widetilde\Theta_n\times\widetilde\Theta_l$. Then $\Theta_n\times\Theta_l$ is a Hilbert space and the projection theorem implies that \eqref{eq:complete-Euler-Lagrange} has a unique solution.
When $\Theta_n\times\Theta_l$ is not complete, the continuous bilinear form $Q$ and the continuous functional $F$ defined on $\Theta_n\times\Theta_l$ can be uniquely extended by using the Hahn-Banach theorem to a continuous bilinear form $\widetilde Q$ and a continuous functional $\widetilde F$ on $\widetilde\Theta_n\times\widetilde\Theta_l$. Then, the existence and uniqueness of the minimizer follow as before. $\square$
To avoid technical distractions, in what follows we assume that the minimizer $(\theta_n^*,\theta_l^*)$ belongs to $\Theta_n\times\Theta_l$ and hence
$u_n^*=u_n(\theta_n^*)\in V_{\eta_D}({\Omega_n})$ and
$u_l^*=u_l(\theta_l^*)\in H^1_{\Gamma_D}({\Omega_l})$.
We note that in the finite-dimensional case completeness is not an issue, as the discrete control space is a Hilbert space with respect to the discrete energy norm; see Section \ref{finite-dim}.
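To make the reduced-space machinery concrete, the following sketch (entirely our own; for simplicity the nonlocal subproblem is replaced by a second local Poisson problem, and all domain sizes are hypothetical) assembles and solves the discrete analogue of the Euler-Lagrange system \eqref{eq:EL-reduced} for a 1D problem on $(0,1)$ with overlap $(0.4,0.6)$. In this local-local limit the optimal controls reproduce the traces of the global solution:

```python
import numpy as np

# Toy reduced-space solve of Q*sigma = F (our own 1D sketch; BOTH subproblems
# are local Poisson problems, the nonlocal one replaced by its local limit).
# Global problem: -u'' = f on (0,1), u(0) = u(1) = 0;
# subdomains (0, 0.6) and (0.4, 1), overlap (0.4, 0.6).
n = 100
h = 1.0 / n
f = np.ones(n + 1)

def dirichlet_solve(i0, i1, f_vals, t0, t1):
    """FD solve of -u'' = f on (x_{i0}, x_{i1}) with u = t0, t1 at the ends."""
    m = i1 - i0 - 1
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h ** 2
    rhs = f_vals[i0 + 1:i1].copy()
    rhs[0] += t0 / h ** 2
    rhs[-1] += t1 / h ** 2
    u = np.empty(i1 - i0 + 1)
    u[0], u[-1] = t0, t1
    u[1:-1] = np.linalg.solve(A, rhs)
    return u

iA, iB = 40, 60                                # overlap = (0.4, 0.6)
# harmonic lifts of unit controls, and homogeneous parts of the states
vn = dirichlet_solve(0, iB, 0 * f, 0.0, 1.0)   # on (0, 0.6), theta_n = 1
vl = dirichlet_solve(iA, n, 0 * f, 1.0, 0.0)   # on (0.4, 1), theta_l = 1
un0 = dirichlet_solve(0, iB, f, 0.0, 0.0)
ul0 = dirichlet_solve(iA, n, f, 0.0, 0.0)

# restrictions to the overlap nodes iA..iB of the shared global grid
zn = vn[iA:]
zl = vl[:iB - iA + 1]
d0 = un0[iA:] - ul0[:iB - iA + 1]              # u_n^0 - u_l^0 on the overlap
ip = lambda a, b: h * np.dot(a, b)             # discrete L2 inner product

Q = np.array([[ip(zn, zn), -ip(zn, zl)],
              [-ip(zl, zn), ip(zl, zl)]])
F = -np.array([ip(d0, zn), ip(d0, -zl)])
theta = np.linalg.solve(Q, F)                  # optimal (theta_n, theta_l)

# the controls match the global solution's values at x = 0.6 and x = 0.4
u_hat = dirichlet_solve(0, n, f, 0.0, 0.0)
print(np.allclose(theta, [u_hat[iB], u_hat[iA]], atol=1e-8))   # True
```

The $2\times 2$ system is the discrete counterpart of \eqref{eq:EL-reduced} in the basis of unit controls; in higher dimensions the controls are functions and $Q$ becomes a Gram matrix of harmonic lifts.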
\section{Analysis of the LtN coupling error}\label{error-analysis}
We define the LtN coupling error as the $L^2$-norm of the difference between the global nonlocal solution $\widehat u_n$ of \eqref{nonlocal-problem} with homogeneous volume constraints and the LtN solution $u^{*} \in L^2({\Omega})$ given by \eqref{unl-definition}.
This section shows that the coupling error is bounded by the \emph{modeling error} on the local subdomain, i.e., the error made by replacing the ``true'' nonlocal diffusion operator on ${\Omega_l}$ by the Laplacian.
We prove this result under the following assumptions.
\smallskip
\indent H.1 The kernel $\gamma$ satisfies \eqref{eq:gamma-conds} and \eqref{eq:integrable-kernel}.
\indent H.2 The global nonlocal solution\footnote{This assumption can be relaxed to: $\widehat u_n\in L^2({\Omega})$ has a well-defined trace on ${\Gamma_c}$.} $\widehat u_n\in H^1({\Omega})$.
\smallskip
We also need the {\it trace} operator $T :H^1({{\Omega}})\to\Theta_n\!\times\!\Theta_l$ such that
\begin{equation}\label{operatorR}
T(v) := (T_n(v),\;T_l(v)) = (v|_{\eta_c}, \; v|_{\Gamma_c})
\qquad\forall \, v\in H^1({{\Omega}}),
\end{equation}
and the lifting operator $L:\Theta_n \times \Theta_l\to L^2({\Omega})$, $L(\sigma_n,\sigma_l)=H(\sigma_n,\sigma_l)+u^0$, where
\begin{equation}\label{operatorB}
H(\sigma_n,\sigma_l) := \left\{\begin{array}{ll}
v_n(\sigma_n) & {\Omega_n}\\
v_l(\sigma_l) & {\Omega_l}\setminus{\Omega_o},
\end{array}\right.
\qquad \hbox{and} \qquad
u^0 := \left\{\begin{array}{ll}
u_n^0 & {\Omega_n}\\
u_l^0 & {\Omega_l}\setminus{\Omega_o}
\end{array}\right.
\end{equation}
are a harmonic lifting operator and the homogeneous part of the states, respectively.
Our main result is the following theorem.
\begin{theorem}\label{thm-coupling-error}
Assume that H.1 and H.2 hold. Then, there exists a positive constant $C$ such that
\begin{displaymath}
\|\widehat u_n-u^{*}\|_{0,{{\Omega}}}\leq C \|\widehat u_n-\widehat u_l\|_{0,{\Omega_l}} \,,
\end{displaymath}
where $\widehat u_l = v_l(T_{l}(\widehat u_n))+u_l^0$.
\end{theorem}
\smallskip For clarity we break the proof of this theorem into several steps arranged in Sections \ref{sec:prelim}--\ref{sec:proof}. Also for clarity, below we list the ancillary results required for the proof of Theorem \ref{thm-coupling-error}, but to avoid clutter, we collect all their proofs in Appendix \ref{app:op-norm}. In what follows we let $C$ and $C_i$, $i=1,2,\ldots$, denote generic positive constants.
\smallskip
\begin{lemma}\label{lemma-nonlocal-trace}
\emph{[Nonlocal trace inequality]} Let ${\Omega}=\omega\cup\eta$ where ${\omega}$ and ${\eta}$ are an open bounded domain and its associated interaction domain and let $\Sigma\subset{{\Omega}}$, $\tau\subset{\eta}$, $\tau\subset\Sigma$ as in Fig. \ref{fig:nonlocal-trace}. Assume that $\sigma\in\widetilde V({\eta})$ is such that $\sigma|_{{\eta}\setminus\tau}=0$. Then
\begin{equation}\label{nonlocal-trace}
\|\sigma\|_{\widetilde V({\eta})} \leq C |||v |||_{\Sigma}
\quad \forall \, v\in V({{\Omega}}), \, v|_{\eta}=\sigma.
\end{equation}
\end{lemma}
\smallskip
\begin{lemma}\label{lem:closed-subsp}
Let ${\Omega}$, $\omega$ and ${\eta}$ be defined as in Lemma \ref{lemma-nonlocal-trace}. The trace space $\widetilde V_{{\eta}\setminus\tau}({\eta})$ is a closed subspace of $L^2({\eta})$.
Furthermore, for all $\mu\in \widetilde V_{{\eta}\setminus\tau}({\eta})$ we have that
\begin{equation}\label{eq:trace-space-equivalence}
\|\mu\|_{\widetilde V({\eta})}\leq C\|\mu\|_{0,{\eta}}.
\end{equation}
\end{lemma}
\smallskip
Application of Lemma \ref{lem:closed-subsp} with ${\eta}={\eta_n}$ and $\tau = {\eta_c}$ implies that the trace space $\widetilde V_{{\eta_D}}({\eta_n})$ is a closed subspace of $L^2({\eta_n})$, and thus it is a Hilbert space in the $L^2$ topology.
We use this result to prove a strong Cauchy-Schwarz inequality for the nonlocal and local harmonic components of the states, i.e., the solutions to \eqref{eq:harm}. This inequality is essential for the estimate of $\| H\|_{**}$ below.
\smallskip
\begin{lemma}\label{lemma-strongCS}
\emph{[Strong Cauchy-Schwarz inequality]} There exists $\delta<1$ such that for all $(\sigma_n,\sigma_l)\in\Theta_n\times\Theta_l$
\begin{equation}\label{c-s}
|(v_n(\sigma_n),v_l(\sigma_l))_{0,{\Omega_o}}|< \delta \|v_n(\sigma_n)\|_{0,{\Omega_o}} \, \|v_l(\sigma_l)\|_{0,{\Omega_o}}.
\end{equation}
\end{lemma}
\subsection{The harmonic lifting operator is bounded from above}\label{sec:prelim}
We prove that $H$ is bounded in the operator norm $\| \cdot \|_{**}$ induced by the energy norm \eqref{Qnorm}. We refer to Appendix \ref{app:op-norm} for additional notation and auxiliary results used in the proof. We introduce the space
$\widetilde V_\tau(\eta) = \{\mu\in\widetilde V(\eta): \mu|_\tau=0, \;{\rm for}\;\tau\subseteq\eta\}$.
\smallskip
\begin{lemma}\label{lemma-norm-bound}
Assume that H.1 holds. There exists a positive constant $C<\infty$ such that
\begin{equation}\label{norm-bound}
\|H\|_{**} = \sup\limits_{(0,0)\neq(\sigma_n,\sigma_l)\in\Theta_n\times\Theta_l}
\frac{\|H(\sigma_n,\sigma_l)\|_{0,{\Omega}}}{\|(\sigma_n,\sigma_l)\|_*}<\dfrac{C}{\kappa},
\end{equation}
where $\kappa$ is the thickness of ${\Omega_o}$.
\end{lemma}
{\it Proof.}
To prove \eqref{norm-bound} it suffices to show that
\begin{displaymath}
\|H(\sigma_n,\sigma_l)\|_{0,{{\Omega}}} \leq \widetilde C \|(\sigma_n,\sigma_l)\|_*
\qquad \forall\,(0,0)\neq(\sigma_n,\sigma_l)\in\Theta_n\times\Theta_l,
\end{displaymath}
for some positive constant $\widetilde C$, inversely proportional to $\kappa$.
According to the definitions of $H$ and $\|(\cdot,\cdot)\|_*$ in \eqref{operatorB} and \eqref{Qnorm}, this is equivalent to
\begin{displaymath}
\|\chi({\Omega_n})v_n(\sigma_n) + \chi({\Omega_l}\setminus{\Omega_o}) v_l(\sigma_l)\|_{0,{{\Omega}}}\leq
\widetilde C\|v_n(\sigma_n)-v_l(\sigma_l)\|_{0,{\Omega_o}} ,
\end{displaymath}
where $\chi(\cdot)$ is the indicator function. Since $({\Omega_l}\hspace{-.05cm}\setminus\hspace{-.05cm}{\Omega_o})\cap{\Omega_n}=\emptyset$, this inequality reduces to
\begin{displaymath}
\|v_n(\sigma_n)\|^2_{0,{\Omega_n}} +\|v_l(\sigma_l)\|^2_{0,{\Omega_l}\setminus{\Omega_o}}\leq
\widetilde C^2 \|v_n(\sigma_n)-v_l(\sigma_l)\|^2_{0,{\Omega_o}}.
\end{displaymath}
The strong Cauchy-Schwarz inequality for the harmonic component (see Lemma \ref{lemma-strongCS}) yields the following lower bound for the right hand side:
\begin{equation}\label{eq:scbs}
\begin{aligned}
\|v_n(\sigma_n)- & v_l(\sigma_l)\|^2_{0,{\Omega_o}} =
\|v_n(\sigma_n)\|^2_{0,{\Omega_o}} - 2 (v_n(\sigma_n),v_l(\sigma_l))_{0,{\Omega_o}}
+ \|v_l(\sigma_l)\|^2_{0,{\Omega_o}}\\
&\geq \|v_n(\sigma_n)\|^2_{0,{\Omega_o}} - 2 \delta \|v_n(\sigma_n)\|_{0,{\Omega_o}} \, \|v_l(\sigma_l)\|_{0,{\Omega_o}}
+ \|v_l(\sigma_l)\|^2_{0,{\Omega_o}}\\
&\geq (1-\delta) \big(\|v_n(\sigma_n)\|^2_{0,{\Omega_o}} + \|v_l(\sigma_l)\|^2_{0,{\Omega_o}} \big).
\end{aligned}
\end{equation}
We now proceed to bound $\|v_n(\sigma_n)\|_{0,{\Omega_o}}$ and $\|v_l(\sigma_l)\|_{0,{\Omega_o}}$ from below by $\|v_n(\sigma_n)\|_{0,{\Omega_n}}$ and $\|v_l(\sigma_l)\|_{0,{\Omega_l}\setminus{\Omega_o}}$, respectively. We start with the nonlocal term.
Let $\mu_n\in\widetilde V_{{\eta_D}}({\eta_n})$ denote the extension of $\sigma_n$ by zero in ${\eta_D}$, i.e., $\mu_n=\sigma_n$ in ${\eta_c}$ and $0$ in ${\eta_D}$. By using the nonlocal Poincar\'e inequality, the well-posedness of the nonlocal problem and Lemma \ref{lem:closed-subsp} we have
\begin{displaymath}
\begin{array}{rll}
\|v_n(\sigma_n)\|_{0,{\Omega_n}} \!\! \! & \leq C_{pn} |||v_n(\sigma_n)|||_{\Omega_n}
& \mbox{nonlocal Poincar\'e inequality \eqref{eq:L2equivalence}} \\ [0.5ex]
&\le C_{pn} K_n \|\mu_n\|_{\widetilde V({\eta_n})}
& \mbox{nonlocal well-posedness \eqref{cont-data-nonlocal}} \\ [0.5ex]
& \le C_{pn} K_n K_1 \|\mu_n\|_{0,\eta_n}
& \mbox{Lemma \ref{lem:closed-subsp}} \\ [0.5ex]
&\le C_{pn} K_n K_1 K_2 \|v_n(\sigma_n)\|_{0,{\Omega_o}}. \!\! &
\end{array}
\end{displaymath}
Therefore, we have that
\begin{equation}\label{eq:lbn}
\|v_n(\sigma_n)\|^2_{0,{\Omega_n}} \leq \widetilde C_n \|v_n(\sigma_n)\|^2_{0,{\Omega_o}},
\end{equation}
with $\widetilde C_n=C_{pn} K_n K_1 K_2$.
To analyze the local term we derive a Caccioppoli-type inequality for the local harmonic component. We introduce the cutoff function $g\in C^1({\Omega_l})$ such that $g=1$ in $\overline{{\Omega_l}\setminus{\Omega_o}}$, $g=0$ in $\Omega\setminus{\Omega_l}$, $\|\nabla g\|_{\infty}\leq\frac{1}{\kappa}$, and ${\rm supp}(\nabla g)\subset{\Omega_o}$, where $\kappa$ is the thickness of the overlap, see Fig. \ref{domains}. These properties imply that $gv_l$ and $g^2v_l$ belong to $H^1_0({\Omega_l})$. Next, we note that $v_l$ is the solution of the weak formulation of \eqref{local-problem} with $f_l=0$. Using $g^2v_l$ as a test function then yields the following identity
\begin{displaymath}
0 = \int_{\Omega_l} \nabla v_l \cdot \nabla(g^2 v_l)\,d\bm{x}
= 2\int_{\Omega_l} g v_l\,\nabla v_l \cdot \nabla g\,d\bm{x}
+ \int_{\Omega_l} g^2\nabla v_l \cdot \nabla v_l\,d\bm{x}.
\end{displaymath}
We use the latter to find a bound on $\|\nabla(gv_l)\|_{0,{\Omega_l}}$:
\begin{displaymath}
\begin{aligned}
\|\nabla(g v_l)\|^2_{0,{\Omega_l}}
& = \int_{\Omega_l} \nabla(g v_l)\cdot\nabla(g v_l)\,d\bm{x} - \int_{\Omega_l} \nabla v_l \cdot \nabla(g^2 v_l)\,d\bm{x} \\[2mm]
& = \int_{\Omega_l} \nabla(g v_l)\cdot\nabla(g v_l)\,d\bm{x}
- 2\int_{\Omega_l} g v_l\,\nabla v_l \cdot \nabla g\,d\bm{x}
- \int_{\Omega_l} g^2\nabla v_l \cdot \nabla v_l\,d\bm{x} \\[2mm]
& = \int_{\Omega_l} v_l^2\, \nabla g \cdot \nabla g \,d\bm{x}
= \int_{\Omega_o} v_l^2\, \nabla g \cdot \nabla g \,d\bm{x}
\leq \frac{1}{\kappa^2} \int_{\Omega_o} v_l^2\,d\bm{x} = \frac{1}{\kappa^2} \|v_l\|^2_{0,{\Omega_o}}.
\end{aligned}
\end{displaymath}
Thus, we conclude that
\begin{displaymath}
\|v_l\|_{0,{\Omega_l}\setminus{\Omega_o}} = \|gv_l\|_{0,{\Omega_l}\setminus{\Omega_o}} \leq \|gv_l\|_{0,{\Omega_l}}
\leq C_p \|\nabla(gv_l)\|_{0,{\Omega_l}} \leq \frac{C_p}{\kappa} \|v_l\|_{0,{\Omega_o}},
\end{displaymath}
where $C_p$ is the local Poincar\'e constant. Let $\widetilde K_{nl}=\max\{\widetilde C_n,C_p^2\}$.
Together with \eqref{eq:scbs} and \eqref{eq:lbn} this yields
\begin{displaymath}
\begin{aligned}
\|v_n(\sigma_n)\|^2_{0,{\Omega_n}} +\|v_l(\sigma_l)\|^2_{0,{\Omega_l}\setminus{\Omega_o}}
& \leq \dfrac{\widetilde K_{nl}^2}{\kappa^2}\left( \|v_n(\sigma_n)\|^2_{0,{\Omega_o}} +
\|v_l(\sigma_l)\|^2_{0,{\Omega_o}} \right) \\
& \leq \dfrac{\widetilde K_{nl}^2}{\kappa^2(1-\delta)}\|v_n(\sigma_n)-v_l(\sigma_l)\|^2_{0,{\Omega_o}}.
\end{aligned}
\end{displaymath}
$\square$
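The cutoff identity $\|\nabla(g v_l)\|^2_{0,\Omega_l}=\int v_l^2\,|\nabla g|^2\,d\bm{x}$ used in the proof can also be verified numerically in one dimension (our own sketch with a hypothetical smoothstep cutoff; here $\Omega_l=(0,1)$, overlap $(0,\tfrac12)$, and $v_l$ linear, hence harmonic, vanishing at $\Gamma_D=\{1\}$):

```python
import numpy as np

# Numerical check (our own 1D sketch) of the cutoff identity in the
# Caccioppoli argument: for harmonic v with v(1) = 0 and a C^1 cutoff g
# with g(0) = 0 and g = 1 on [1/2, 1]:  int (g v)'^2 dx = int v^2 g'^2 dx.
x = np.linspace(0.0, 1.0, 400001)
h = x[1] - x[0]
theta = 1.5
v, dv = theta * (1.0 - x), -theta * np.ones_like(x)   # harmonic, v(1) = 0

s = np.clip(2.0 * x, 0.0, 1.0)
g = 3.0 * s**2 - 2.0 * s**3                 # smoothstep: 0 at x=0, 1 for x >= 1/2
dg = 12.0 * s * (1.0 - s) * (x < 0.5)       # g' (vanishes for x >= 1/2)

trap = lambda fvals: (np.sum(fvals) - 0.5 * (fvals[0] + fvals[-1])) * h
lhs = trap((dg * v + g * dv) ** 2)          # || (g v)' ||^2
rhs = trap(v ** 2 * dg ** 2)                # || v g' ||^2
print(abs(lhs - rhs) < 1e-8)                # True
```

The two sides differ exactly by $\int_0^1 v'\,(g^2 v)'\,dx=[v'\,g^2 v]_0^1=0$, which the quadrature reproduces up to its discretization error.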
\subsection{The approximation error is bounded by the modeling error}\label{sec:approxer}
The optimal solution $(\theta_n^*,\theta_l^*)$ of the reduced space problem \eqref{reduced-minimization} approximates the trace of the global nonlocal solution $\widehat u_n$ on ${\eta_c}$ and ${\Gamma_c}$, respectively. The following lemma shows that the error in $(\theta_n^*,\theta_l^*)$ is bounded by the modeling error on ${\Omega_l}$.
\smallskip
\begin{lemma}
Let $\widehat u_n$ and $(\theta_n^*,\theta_l^*)$ solve \eqref{nonlocal-problem} and \eqref{reduced-minimization}, respectively.
Then,
\begin{equation}\label{approx-error}
\|T(\widehat u_n)-(\theta_n^*,\theta_l^*)\|_* \leq \|\widehat u_n-\widehat u_l\|_{0,{\Omega_o}}.
\end{equation}
\end{lemma}
{\it Proof.}
Because $(\theta_n^*,\theta_l^*)$ satisfies the Euler-Lagrange equation \eqref{eq:EL-reduced} we have
\begin{equation}\label{eq:hEL-reduced}
Q(\theta_n^*,\theta_l^*;\mu_n,\mu_l) = -\big( u_n^0 - u_l^0, v_n(\mu_n) - v_l(\mu_l) \big)_{0,{\Omega_o}}\quad\forall \, (\mu_n,\mu_l) \in \Theta_n\times\Theta_l.
\end{equation}
Using this identity together with the energy norm definition \eqref{Qnorm} yields
\begin{displaymath}
\begin{array}{l}
\displaystyle
Q(T(\widehat u_n)-(\theta_n^*,\theta_l^*); \mu_n,\mu_l)
=
Q(T(\widehat u_n); \mu_n,\mu_l)-Q(\theta_n^*,\theta_l^*; \mu_n,\mu_l) \\[1ex]
\qquad\displaystyle
=
\big(v_n(T_n(\widehat u_n))-v_l(T_l(\widehat u_n)), v_n(\mu_n) - v_l(\mu_l) \big)_{0,{\Omega_o}}+
\big( u_n^0 - u_l^0, v_n(\mu_n) - v_l(\mu_l) \big)_{0,{\Omega_o}}
\\[1ex]
\qquad\displaystyle
=\big( \widehat u_n - \widehat u_l, v_n(\mu_n) - v_l(\mu_l) \big)_{0,{\Omega_o}}
\le \| \widehat u_n - \widehat u_l \|_{0,{\Omega_o}} \, \| v_n(\mu_n) - v_l(\mu_l) \|_{0,{\Omega_o}}
\\[1ex]
\qquad\displaystyle
= \| \widehat u_n - \widehat u_l \|_{0,{\Omega_o}} \, \|(\mu_n, \mu_l)\|_*.
\end{array}
\end{displaymath}
The lemma follows by setting $(\mu_n, \mu_l) = T(\widehat u_n)-(\theta_n^*,\theta_l^*)$ above and observing that
$ Q(T(\widehat u_n)-(\theta_n^*,\theta_l^*); T(\widehat u_n)-(\theta_n^*,\theta_l^*)) = \| T(\widehat u_n)-(\theta_n^*,\theta_l^*) \|^2_* $.
$\square$
\subsection{Proof of Theorem \ref{thm-coupling-error}}\label{sec:proof}
Let $\widehat{u}:=L(T(\widehat u_n))$. Definitions \eqref{operatorR} and \eqref{operatorB} together with the identities
\begin{equation}\label{local_lifting}
\widehat u_n|_{\Omega_n} = v_n(T_{n}(\widehat u_n))+u_n^0
\quad\mbox{and}\quad
\widehat u_l = v_l(T_{l}(\widehat u_n))+u_l^0,
\end{equation}
imply that
\begin{equation}\label{PRwun}
\widehat{u}=
\left\{\begin{array}{ll}
\widehat u_n & \bm{x}\in{\Omega_n}\\
\widehat u_l & \bm{x}\in{\Omega_l}\setminus{\Omega_o}.
\end{array}\right.
\end{equation}
Likewise, the identities $u_n^*=v_n(\theta_n^*) + u_n^0$ and $u_l^*=v_l(\theta_l^*) + u_l^0$ imply that $u^{*} = L(\theta_n^*,\theta_l^*)$.
Adding and subtracting $\widehat{u}$ in the LtN error then yields
\begin{equation}\label{error-splitting}
\begin{array}{rl}
\|\widehat u_n-u^{*}\|_{0,{\Omega}} & \le \|\widehat u_n-\widehat{u}\|_{0,{\Omega}} + \|\widehat{u} - u^{*} \|_{0,{\Omega}}
\\[0.5ex]
& = \|\widehat u_n-L(T(\widehat u_n))\|_{0,{\Omega}} + \|L(T(\widehat u_n)) - L(\theta_n^*,\theta_l^*) \|_{0,{\Omega}}
\\[1ex]
&\displaystyle
= \|\widehat u_n-L(T(\widehat u_n))\|_{0,{\Omega}}
+ \|H(T(\widehat u_n)) - H(\theta_n^*,\theta_l^*)\|_{0,{\Omega}} \,.
\end{array}
\end{equation}
The first term in \eqref{error-splitting} is the {\it consistency error} of $L$; \eqref{PRwun} implies that
\begin{equation}\label{consistency-error}
\|\widehat u_n - L(T(\widehat u_n))\|_{0,{\Omega}} = \|\widehat u_n - \widehat u_l \|_{0,{\Omega_l}\setminus{\Omega_o}}.
\end{equation}
To estimate the second term we use \eqref{norm-bound} and \eqref{approx-error}:
\begin{displaymath}
\|H(T(\widehat u_n)) - H(\theta_n^*,\theta_l^*)\|_{0,{\Omega}} \le \| H \|_{**} \|T(\widehat u_n)-(\theta_n^*,\theta_l^*)\|_*
\leq \frac{C}{\kappa}\|\widehat u_n-\widehat u_l\|_{0,{\Omega_o}}.
\end{displaymath}
Combining \eqref{error-splitting} with this bound and \eqref{consistency-error} gives
\begin{displaymath}
\|\widehat u_n-u^{*}\|_{0,{\Omega}} \leq \|\widehat u_n-\widehat u_l\|_{0,{\Omega_l}\setminus{\Omega_o}}
+ \dfrac{C}{\kappa} \|\widehat u_n-\widehat u_l\|_{0,{\Omega_o}}
\leq \left(1+\dfrac{C}{\kappa}\right)\|\widehat u_n-\widehat u_l\|_{0,{\Omega_l}} ,
\end{displaymath}
which completes the proof. $\square$
\subsection{Convergence of the modeling error}\label{sec:modeling-error-convergence}
In this section we show that $\|\widehat u_n-\widehat u_l\|_{0,{\Omega_l}}$ vanishes as $\varepsilon\to 0$.
\begin{lemma}
Let $\widehat u_n$ be the solution of \eqref{nonlocal-problem} with homogeneous volume constraints and let $\widehat u_l$ be defined as in \eqref{local_lifting}. Assume that H.1 and H.2 hold; then,
\begin{displaymath}
\lim\limits_{\varepsilon\to 0}\|\widehat u_n-\widehat u_l\|_{0,{\Omega_l}} = 0.
\end{displaymath}
\end{lemma}
\smallskip
{\it Proof.}
By definition $\widehat u_l$ solves the boundary value problem
\begin{displaymath}
\left\{\begin{array}{rlll}
-\Delta \widehat u_l & = & f_l & \bm{x}\in {\Omega_l} \\
\widehat u_l & = & \widehat u_n & \bm{x}\in{\Gamma_c} \\
\widehat u_l & = & 0 & \bm{x}\in{\Gamma_D},
\end{array}\right.
\end{displaymath}
and so, it is also a solution of the weak equation
\begin{displaymath}
\int_{\Omega_l} \nabla\widehat u_l\cdot\nabla w d\bm{x} = \int_{\Omega_l} f_l\,w d\bm{x}
\quad\forall \, w\in H^1_0({\Omega_l}).
\end{displaymath}
Let $\psi\in H^1_0({\Omega_l})$ solve the dual problem
\begin{equation}\label{eq:dual}
\int_{{\Omega_l}} \nabla w\cdot\nabla\psi d\bm{x} = \int_{{\Omega_l}} (\widehat u_l - \widehat u_n) w d\bm{x}
\quad\forall\, w\in H^1_0({\Omega_l})\,.
\end{equation}
Since $\widehat u_l - \widehat u_n = 0$ on $\Gamma_l$ one can set $w= \widehat u_l - \widehat u_n$ in \eqref{eq:dual} to obtain
\begin{displaymath}
\begin{array}{rll}
\displaystyle \| \widehat u_l-\widehat u_n \|^2_{0,{\Omega_l}}
=& \displaystyle \int_{{\Omega_l}} \nabla (\widehat u_l - \widehat u_n)\cdot\nabla\psi d\bm{x} \\[2ex]
=& \displaystyle \int_{{\Omega_l}} f_l \psi d\bm{x} - \int_{{\Omega_l}} \nabla \widehat u_n\cdot\nabla\psi d\bm{x}
\\[2ex]
=& \cancel{\displaystyle \int_{{\Omega_l}\cap\eta} f_l \psi d\bm{x}}
+ \int_{{\Omega_l}\setminus{\eta}} f_n \psi d\bm{x}
- \int_{{\Omega_l}} \nabla \widehat u_n\cdot\nabla\psi d\bm{x} \\[2ex]
=& \displaystyle\int_{{\Omega_l}\setminus{\eta}} \mathcal{L}\widehat u_n \psi d\bm{x}
- \int_{{\Omega_l}\setminus{\eta}} \nabla \widehat u_n\cdot\nabla\psi d\bm{x}\to 0, \;\;\quad
\mbox{as $\varepsilon\to 0$,}
\end{array}
\end{displaymath}
where the third equality follows from the fact that $f_l$ is extended to zero in $\eta$ and the limit follows from the result in \cite[Section~5]{Du_13_MMMAS}\footnote{It can be shown that when kernels satisfying \eqref{eq:gamma-conds} and \eqref{eq:integrable-kernel} are properly scaled, so that $\mathcal{L}\to\Delta$, $\|\widehat u_n-\widehat u_l\|_{0,{\Omega_l}}\leq C\varepsilon^2+\mathcal{O}(\varepsilon^4)$. The same result holds for the peridynamics kernel in \eqref{test-kernel}.}.
$\square$
\begin{remark}
An immediate consequence of the results in this section is that the LtN solution converges to the corresponding local solution in the limit of vanishing nonlocality. This means that, if an asymptotically compatible (AC) discretization is used, the discretized LtN solution converges to the corresponding continuous local solution as $\varepsilon\to 0$ and $h\to 0$ simultaneously, where $h$ is the discretization parameter. In other words, the proposed formulation yields an AC scheme, provided AC discretizations are employed.
\end{remark}
\section{Approximation of the optimization-based LtN formulation} \label{finite-dim}
This section presents the discretization and the error analysis of the LtN formulation \eqref{minimization}. Throughout this section we assume that ${\eta_c}$ and ${\Gamma_c}$ are polygonal domains; this assumption is not restrictive as those are virtual domains that we can define at our discretion.
\subsection{Discretization}\label{sec:discrete}
We use a reduced-space approach to solve the optimization-based LtN problem \eqref{minimization} numerically, i.e., we discretize and solve the problem
\begin{equation}\label{eq:rsp}
\displaystyle\min\limits_{\theta_n,\theta_l} J(\theta_n,\theta_l)
\quad {\rm with} \quad
J(\theta_n,\theta_l) = \frac12 \| u_n(\theta_n)-u_l(\theta_l) \|^2_{0,{\Omega_o}}
\end{equation}
where $u_n(\theta_n)\in V_{\eta_D}({\Omega_n}) $ solves the weak nonlocal equation
\begin{equation}\label{var-state-nn}
{B_n} (u_n(\theta_n); z_n,\kappa_n )
:=\! \int\limits_{\Omega_n} \! \int\limits_{\Omega_n} \mathcal{G}u_n\cdot\mathcal{G}z_n\,d\bm{y} d\bm{x}
+ \! \int\limits_{\eta_c} u_n \kappa_n \,d\bm{x}
= \! \int\limits_{\omega_n} \! f_nz_n\,d\bm{x} +\!\int\limits_{\eta_c} \! \theta_n \kappa_n \,d\bm{x},
\end{equation}
for all $(z_n,\kappa_n)\in V_{\eta_n}\times\Theta_n^*$, and $u_l(\theta_l)\in H^1_{{\Gamma_D}}({\Omega_l})$ solves the weak local equation
\begin{equation}\label{var-state-ll}
{B_l}(u_l(\theta_l); z_l,\kappa_l )
:= \int_{\Omega_l} \nabla u_l\cdot\nabla z_l\,d\bm{x} + \int_{\Gamma_c} u_l \kappa_l \,d\bm{x}
= \int_{\Omega_l} f_lz_l\,d\bm{x} + \int_{\Gamma_c} \theta_l \kappa_l \,d\bm{x},
\end{equation}
for all $(z_l,\kappa_l)\in H^1_0({\Omega_l})\times \Theta_l^*$. Here, $\Theta_n^*$ and $\Theta_l^*$ are the duals of $\Theta_n$ and $\Theta_l$, respectively.
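In finite dimensions the reduced-space approach has a transparent algebraic structure: each state depends affinely on its control, so $J$ is a convex quadratic in $(\theta_n,\theta_l)$ whose minimizer solves a normal-equations system. A minimal numpy sketch of this structure, where the matrices \texttt{An}, \texttt{Al} and vectors \texttt{bn}, \texttt{bl} are random stand-ins for the discrete solution operators restricted to $\Omega_o$, not the actual FE operators:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 8, 3  # dofs on the overlap region, dofs per control

# Hypothetical affine solution maps on Omega_o:
#   u_n(theta_n) = An @ theta_n + bn,   u_l(theta_l) = Al @ theta_l + bl
An = rng.standard_normal((m, k))
Al = rng.standard_normal((m, k))
bn = rng.standard_normal(m)
bl = rng.standard_normal(m)

def J(theta):
    """Reduced objective J(theta_n, theta_l) = 1/2 ||u_n(theta_n) - u_l(theta_l)||^2."""
    r = (An @ theta[:k] + bn) - (Al @ theta[k:] + bl)
    return 0.5 * r @ r

# With G = [An, -Al] and d = bn - bl, J(theta) = 1/2 ||G theta + d||^2,
# so any minimizer satisfies the normal equations G^T G theta = -G^T d.
G = np.hstack([An, -Al])
d = bn - bl
theta_star, *_ = np.linalg.lstsq(G, -d, rcond=None)
```

Because the least-squares residual is orthogonal to the range of $G$, perturbing \texttt{theta\_star} can only increase \texttt{J}, which is the reduced-space optimality condition in miniature.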
To discretize \eqref{eq:rsp}--\eqref{var-state-ll} we consider the following conforming finite element spaces \cite{DElia_2020Acta,DElia_2020Cookbook}
\begin{equation}\label{finite-dim-spaces}
\begin{array}{lll}
V_{\eta_D}^h\subset V_{\eta_D}({\Omega_n}), &\;\;
\Theta^h_n \subset \Theta_n, &\,\,
V_{\eta_n}^h\times\Theta^h_n \,\subset \, V_{\eta_n}({\Omega_n}) \times \Theta_n^*, \\[3mm]
H^h_{\Gamma_D}\subset H^1_{\Gamma_D}({\Omega_l}), &\;\;
\Theta^h_l \subset \Theta_l, &\,\,
H^h_0\times\Theta^h_l \,\,\subset \,\, H^1_0({\Omega_l}) \times \Theta_l^*
\end{array}
\end{equation}
for the nonlocal and local states, controls, and test functions\footnote{For simplicity, we approximate $\Theta_n$, $\Theta_l$ and their duals by the same finite dimensional space.}, respectively. In general, the finite element spaces for the nonlocal and local problems can be defined on different meshes with parameters $h_n>0$ and $h_l>0$, respectively, and can have different polynomial orders given by integers $p_n\ge 1$ and $p_l\ge 1$, respectively.
Restriction of \eqref{eq:rsp}--\eqref{var-state-ll} to the finite element spaces \eqref{finite-dim-spaces} defines the discrete reduced-space LtN formulation
\begin{equation}\label{eq:rsph}
\displaystyle\min\limits_{\theta_n^h,\theta_l^h} J_h(\theta_n^h,\theta_l^h)
\quad {\rm with} \quad
J_h(\theta_n^h,\theta_l^h)=\frac12 \| {u_n^h}(\theta_n^h)-{u_l^h}(\theta_l^h) \|^2_{0,{\Omega_o}}
\end{equation}
where ${u_n^h}(\theta_n^h)\in V^h_{\eta_D}$ solves the discrete nonlocal state equation
\begin{equation}\label{var-state-nnh}
{B_n}({u_n^h}(\theta_n^h); {z_n^h},\kappa_n^h) = \int_{\omega_n} f_n{z_n^h}\,d\bm{x} + \int_{\eta_c} \theta_n^h\kappa_n^h\,d\bm{x},
\quad\mbox{$\forall \; ({z_n^h},\kappa_n^h)\in V^h_{\eta_n}\times \Theta_n^h$},
\end{equation}
and ${u_l^h}(\theta_l^h)\in H^h_{\Gamma_D}$ solves the discrete local state equation\footnote{Note that both \eqref{var-state-nnh} and \eqref{var-state-llh} are well-posed.}
\begin{equation}\label{var-state-llh}
{B_l}({u_l^h}(\theta_l^h); {z_l^h},\kappa_l^h)=\int_{\Omega_l} f_l{z_l^h}\,d\bm{x} + \int_{\Gamma_c} \theta_l^h\kappa_l^h\,d\bm{x},
\quad\mbox{$\forall\;({z_l^h},\kappa_l^h)\in H^h_0\times \Theta^h_l$}.
\end{equation}
Following Section \ref{well-posedness}, we write the solutions of \eqref{var-state-nnh} and \eqref{var-state-llh} as
\begin{equation}\label{eq:discr-state-decomp}
{u_n^h} = {v_n^h}+{u_n^{h0}} \quad {\rm and} \quad {u_l^h} = {v_l^h} + {u_l^{h0}},
\end{equation}
where ${v_n^h}$ and ${v_l^h}$ are ``harmonic'' components solving \eqref{var-state-nnh} and \eqref{var-state-llh} with $f_n=0$ and $f_l=0$ respectively, whereas ${u_n^{h0}}$ and ${u_l^{h0}}$ are ``homogeneous'' components solving \eqref{var-state-nnh} and \eqref{var-state-llh} with $\theta_n^h=0$ and $\theta_l^h=0$, respectively. In terms of these components
\begin{displaymath}
J_h(\theta_n^h,\theta_l^h) =
\frac12 \| {v_n^h}(\theta_n^h)\!-\!{v_l^h}(\theta_l^h) \|^2_{0,{\Omega_o}} +
\big( {u_n^{h0}} \!-\! {u_l^{h0}}, {v_n^h}(\theta_n^h) \!-\! {v_l^h}(\theta_l^h) \big)_{0,{\Omega_o}}\!\! +
\frac12 \| {u_n^{h0}} \!-\!{u_l^{h0}} \|^2_{0,{\Omega_o}} \,.
\end{displaymath}
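The decomposition \eqref{eq:discr-state-decomp} is nothing but superposition for a linear state equation: if the assembled discrete problem reads $Au = f + B\theta$, the ``harmonic'' part solves $Av = B\theta$ and the ``homogeneous'' part solves $Au^0 = f$. A small numerical sketch with hypothetical stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible stand-in "stiffness" matrix
B = rng.standard_normal((n, k))                  # stand-in control-to-load map
f = rng.standard_normal(n)                       # source term
theta = rng.standard_normal(k)                   # control dofs

u = np.linalg.solve(A, f + B @ theta)   # full discrete state
v = np.linalg.solve(A, B @ theta)       # "harmonic" component: no source, control only
u0 = np.linalg.solve(A, f)              # "homogeneous" component: source only, zero control
```

By linearity \texttt{u} coincides with \texttt{v + u0}, which is exactly the splitting used to expand $J_h$ above.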
The Euler-Lagrange equation of \eqref{eq:rsph} has the form:
seek $({\sigma_n^h},{\sigma_l^h})\in \Theta^h_n\times\Theta^h_l$ such that
\begin{equation}\label{eq:disEL}
Q_h({\sigma_n^h},{\sigma_l^h};\mu^h_n,\mu^h_l) = F_h(\mu^h_n,\mu^h_l)\quad\forall \,(\mu^h_n,\mu^h_l)
\in \Theta^h_n\times\Theta^h_l,
\end{equation}
\smallskip where
$Q_h({\sigma_n^h},{\sigma_l^h};\mu^h_n,\mu^h_l) = \big({v_n^h}({\sigma_n^h})-{v_l^h}({\sigma_l^h}),
\,{v_n^h}(\mu^h_n)-{v_l^h}(\mu^h_l)\big)_{0,{\Omega_o}}$
and
$
F_h(\mu^h_n,\mu^h_l) = -\big( {u_n^{h0}} - {u_l^{h0}}, {v_n^h}(\mu^h_n) - {v_l^h}(\mu^h_l) \big)_{0,{\Omega_o}}$.
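Because ${v_n^h}$ and ${v_l^h}$ are linear in the controls, $Q_h$ is the Gram matrix of the lifted responses to the control basis functions, and \eqref{eq:disEL} is a small symmetric linear system. A hedged numpy sketch, where the response matrices \texttt{Vn}, \texttt{Vl} and the homogeneous states are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
m, kn, kl = 10, 3, 2
Vn = rng.standard_normal((m, kn))   # columns: v_n^h lifted from each nonlocal control basis fn
Vl = rng.standard_normal((m, kl))   # columns: v_l^h lifted from each local control basis fn
un0 = rng.standard_normal(m)        # homogeneous nonlocal state sampled on Omega_o
ul0 = rng.standard_normal(m)        # homogeneous local state sampled on Omega_o

# Q_h(sigma; mu) = (V sigma, V mu) and F_h(mu) = -(un0 - ul0, V mu), with V = [Vn, -Vl]
V = np.hstack([Vn, -Vl])
Q = V.T @ V                     # Gram matrix; SPD when the discrete strong Cauchy-Schwarz bound holds
F = -V.T @ (un0 - ul0)
sigma = np.linalg.solve(Q, F)   # discrete Euler-Lagrange system
```

The solution zeroes the gradient of the quadratic $\tfrac12\|V\sigma + (u^{h0}_n-u^{h0}_l)\|^2$, i.e., it is the minimizer of $J_h$ written over the lifted responses.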
To prove the positivity of $Q_h$, the arguments of Lemma \ref{lemma-ip} cannot be extended, as the identity principle does not hold for ${v_l^h}$. We use instead the discrete strong Cauchy-Schwarz inequality in Lemma \ref{lemma-discrete-SCS}.
\smallskip
\begin{lemma}\label{lemma-d-ip}
The form $Q_h(\cdot,\cdot)$ defines an inner product on $\Theta_n^h\times\Theta_l^h$.
\end{lemma}
\smallskip
{\it Proof.}
We prove that $Q_h({\sigma_n^h},{\sigma_l^h}; {\sigma_n^h},{\sigma_l^h})=0$ if and only if $({\sigma_n^h},{\sigma_l^h})=(0,0)$.
If $({\sigma_n^h},{\sigma_l^h})=(0,0)$ then ${v_n^h}({\sigma_n^h})=0$ and ${v_l^h}({\sigma_l^h})=0$, implying $Q_h({\sigma_n^h},{\sigma_l^h}; {\sigma_n^h},{\sigma_l^h})=0$.
Conversely, if $Q_h({\sigma_n^h},{\sigma_l^h}; {\sigma_n^h},{\sigma_l^h})=0$, then
\begin{displaymath}
0=Q_h({\sigma_n^h},{\sigma_l^h}; {\sigma_n^h},{\sigma_l^h})= \|{v_n^h}({\sigma_n^h})\|_{0,{\Omega_o}}^2 + \|{v_l^h}({\sigma_l^h})\|^2_{0,{\Omega_o}} -2({v_n^h}({\sigma_n^h}),{v_l^h}({\sigma_l^h}))_{0,{\Omega_o}}.
\end{displaymath}
The discrete strong Cauchy-Schwarz inequality (see Lemma \ref{lemma-discrete-SCS}) then implies
\begin{equation}\label{discrete_coer}
0\geq(1-\delta) (\|{v_n^h}({\sigma_n^h})\|^2_{0,{\Omega_o}} + \|{v_l^h}({\sigma_l^h})\|^2_{0,{\Omega_o}}).
\end{equation}
Since $\delta<1$ the right hand side of the above inequality is nonnegative and therefore must vanish. Thus, we must have that ${v_n^h}({\sigma_n^h})=0$ and ${v_l^h}({\sigma_l^h})=0$, which implies $({\sigma_n^h},{\sigma_l^h})=(0,0)$.
$\square$
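The coercivity mechanism behind \eqref{discrete_coer} is elementary and easy to check numerically: whenever $(a,b)\leq\delta\|a\|\,\|b\|$, one has $\|a-b\|^2\geq(1-\delta)\big(\|a\|^2+\|b\|^2\big)$. A quick numpy verification using, for each random pair, the tightest admissible $\delta$:

```python
import numpy as np

rng = np.random.default_rng(3)

def scs_gap(a, b):
    """||a-b||^2 - (1-delta)(||a||^2 + ||b||^2), with the tightest delta for (a, b)."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    delta = abs(a @ b) / (na * nb)   # cosine of the angle between a and b
    return (a - b) @ (a - b) - (1.0 - delta) * (na**2 + nb**2)

gaps = [scs_gap(rng.standard_normal(5), rng.standard_normal(5)) for _ in range(200)]
```

Every gap is nonnegative, mirroring the inequality used in the proof; coercivity is lost only as $\delta\to 1$, i.e., when the two liftings become collinear on $\Omega_o$.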
Lemma \ref{lemma:HilbertQh} proves that $\Theta^h_n\times\Theta^h_l$ is a Hilbert space with respect to the discrete energy norm
\begin{equation}\label{eq:dis-en}
\|\mu^h_n,\mu^h_l\|^2_{h*}:=Q_h(\mu^h_n,\mu^h_l;\mu^h_n,\mu^h_l).
\end{equation}
This fact, Lemma \ref{lemma-d-ip} and the projection theorem provide the following corollary.
\begin{corollary}\label{cor:rsph}
The reduced space problem \eqref{eq:rsph} has a unique minimizer.
\end{corollary}
\subsection{Convergence analysis}\label{sec:conv-anl}
In this section we prove that the discrete solution $(\theta_n^{h*},\theta_l^{h*})$ converges to the exact solution $(\theta_n^*,\theta_l^*)$ assuming the latter belongs to the ``raw'' control space $\Theta_n\times\Theta_l$. This assumption mirrors the one made in \cite{Abdulle_15} and is necessary because the continuous problem is well-posed in the completion of the raw control space. We prove this result under the following assumptions.
\smallskip
\indent H.3 The optimal solution belongs to the raw space: $(\theta_n^*,\theta_l^*)\in \Theta_n\times\Theta_l$.
\indent H.4 The kernel $\gamma$ is translation invariant, i.e. $\gamma(\bm{x},\bm{y})=\gamma(\bm{x}-\bm{y})$\footnote{Note that this assumption is not too restrictive; in fact, it is very common in nonlocal mechanics applications.}.
\smallskip
Let $(\theta_n^*,\theta_l^*)\in \Theta_n\times\Theta_l$ denote the optimal solution of \eqref{reduced-minimization} and $(\theta_n^{h*},\theta_l^{h*})\in \Theta^h_n\times\Theta^h_l$ be the optimal solution of its discretization \eqref{eq:rsph}. We denote the associated optimal states by $(u_n^*,u_l^*)$, and $(u_n^{h*},u_l^{h*})$, respectively, that is,
\begin{displaymath}
(u_n^*, u_l^*) = (u_n(\theta_n^*), u_l(\theta_l^*))
\quad\mbox{and}\quad
(u_n^{h*}, u_l^{h*}) = (u_n^h(\theta_n^{h*}), u_l^h(\theta_l^{h*})).
\end{displaymath}
We will estimate the discrete energy norm of the error $(\theta_n^*-\theta_n^{h*};\theta_l^*-\theta_l^{h*})$ using Strang's second\footnote{The discrete problem \eqref{eq:disEL} also fits in the setting of Strang's first Lemma \cite[Lemma 2.27, p.95]{Ern_04_BOOK}. We use the second lemma because it simplifies the analysis.}
Lemma; see, e.g., \cite[Lemma 2.25, p.94]{Ern_04_BOOK}. Application of this lemma is contingent upon two conditions: (i) the discrete form $Q_h$ is continuous and coercive with respect to $\|\cdot\|_{h*}$, and (ii) there exists a positive real constant $C$ such that
\begin{equation}\label{eq:strang-ass}
\| \mu_n,\mu_l \|_{h*} \le C \| \mu_n,\mu_l \|_{*}\quad
\forall \, (\mu_n,\mu_l)\in \Theta_n\times\Theta_l \,.
\end{equation}
The first condition holds trivially. To verify \eqref{eq:strang-ass} note that
$$
\| \mu_n,\mu_l \|_{h*}^2 := Q_h(\mu_n,\mu_l;\mu_n,\mu_l) = \| {v_n^h}(\mu_n)-{v_l^h}(\mu_l)\|_{0,{\Omega_o}}^2 \,.
$$
Given $\mu_l\in \Theta_l$ the function $v_l^h(\mu_l)$ solves the weak equation
$$
{B_l}({v_l^h}(\mu_l); {z_l^h},\kappa_l^h)=\int_{\Gamma_c} \mu_l\kappa_l^h\,d\bm{x}
=
\int_{\Gamma_c} \Pi_l(\mu_l)\kappa_l^h\,d\bm{x}
\quad\mbox{$\forall\;({z_l^h},\kappa_l^h)\in H^h_0\times \Theta^h_l$},
$$
where $\Pi_l$ is the $L^2$ projection onto $\Theta^h_l$, i.e., $v_l^h(\mu_l) = v_l^h(\Pi_l\mu_l)$. Similarly, we have that $v_n^h(\mu_n) = v_n^h(\Pi_n\mu_n)$, where $\Pi_n$ is the $L^2$ projection onto $\Theta^h_n$.
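The identity $v_l^h(\mu_l)=v_l^h(\Pi_l\mu_l)$ holds because the discrete state sees the control only through its pairings with the finite-dimensional multiplier space. A sketch with piecewise-constant multipliers on a one-dimensional interval (a hypothetical stand-in for $\Theta^h_l$), verifying that $\mu$ and $\Pi\mu$ produce the same discrete right-hand side:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 401)                  # quadrature grid on a 1D "boundary"
cells = np.array_split(np.arange(x.size), 8)    # 8 multiplier cells
dx = x[1] - x[0]

def project(mu):
    """L^2 projection onto piecewise constants: replace mu by its cell averages."""
    out = np.empty_like(mu)
    for c in cells:
        out[c] = mu[c].mean()
    return out

def rhs(mu):
    """Pairings of mu with each piecewise-constant test function."""
    return np.array([mu[c].sum() * dx for c in cells])

mu = np.sin(3 * np.pi * x)
```

Since \texttt{rhs(mu)} and \texttt{rhs(project(mu))} coincide, the two controls drive the same discrete state, which is the property used above.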
Additionally, as in \cite{Abdulle_15}, we assume that there exist positive constants $\gamma^*_n$, $\gamma_l^*$, $\gamma_{n*}$, and $\gamma_{l*}$ such that for $h_n$ and $h_l$ small enough the following inequalities hold:
\begin{equation}\label{eq:discr-cont-equivalence}
\begin{aligned}
\gamma_{n*} \|v_n({\sigma_n^h})\|_{0,{\Omega_o}} & \leq \|{v_n^h}({\sigma_n^h})\|_{0,{\Omega_o}}
& \leq \gamma_n^*\|v_n({\sigma_n^h})\|_{0,{\Omega_o}}\\
\gamma_{l*} \|v_l({\sigma_l^h})\|_{0,{\Omega_o}} & \leq \|{v_l^h}({\sigma_l^h})\|_{0,{\Omega_o}}
& \leq \gamma_l^*\|v_l({\sigma_l^h})\|_{0,{\Omega_o}}.
\end{aligned}
\end{equation}
The latter, the strong Cauchy-Schwarz inequality and the boundedness of the $L^2$ projection operators yield
\begin{equation}
\begin{aligned}
\| \mu_n,\mu_l \|_{h*}^2
& = \| {v_n^h}(\mu_n)-{v_l^h}(\mu_l)\|_{0,{\Omega_o}}^2
= \| {v_n^h}(\Pi_n(\mu_n))-{v_l^h}(\Pi_l(\mu_l))\|_{0,{\Omega_o}}^2 \\[2ex]
& \leq \| {v_n^h}(\Pi_n(\mu_n))\|_{0,{\Omega_o}}^2 + \|{v_l^h}(\Pi_l(\mu_l))\|_{0,{\Omega_o}}^2 \\[2ex]
& \leq (\gamma_n^*)^2\|v_n(\Pi_n(\mu_n))\|_{0,{\Omega_o}}^2 + (\gamma_l^*)^2
\|v_l(\Pi_l(\mu_l))\|_{0,{\Omega_o}}^2 \\[2ex]
& \leq \frac{C_1}{1-\delta}\| v_n(\Pi_n(\mu_n))-v_l(\Pi_l(\mu_l))\|_{0,{\Omega_o}}^2 \\[2ex]
& = \frac{C_1}{1-\delta}\|\Pi_n\mu_n,\Pi_l\mu_l \|_{*}^2
\leq \frac{C_2}{1-\delta}\|\mu_n,\mu_l \|_{*}^2.
\end{aligned}
\end{equation}
Application of Strang's second lemma then yields the following error estimate.
\begin{lemma}\label{error}
Let $(\theta_n^*,\theta_l^*)$ and $(\theta_n^{h*},\theta_l^{h*})$ be the solutions of \eqref{eq:EL-reduced} and \eqref{eq:disEL}, respectively. Then
\begin{equation}\label{eq:strang-EL}
\begin{aligned}
& \|(\theta_n^*-\theta_n^{h*}, \theta_l^*-\theta_l^{h*})\|_{h*} \leq \\
& \inf_{(\sigma_n^h,\sigma_l^h)} \|(\theta_n^*-\sigma_n^h,\theta_l^*-\sigma_l^h)\|_{h*}
+
\sup_{\|(\mu^h_n,\mu^h_l)\|_{h*}=1} |Q_h(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l) - F_h(\mu^h_n,\mu^h_l)|
\end{aligned}
\end{equation}
where $(\sigma_n^h,\sigma_l^h),\;(\mu^h_n,\mu^h_l)\in \Theta_n^h\times\Theta_l^h$.
\end{lemma}
\smallskip
We use the result in Lemma \ref{error} to obtain asymptotic convergence rates under the assumption that
1) the homogeneous problems \eqref{eq:homo} have solutions $u_n^0\in H^{p_n+t}({\Omega_n})$, for $t\in[0,1]$, and $u_l^0\in H^{p_l+1}({\Omega_l})$;
2) the control variables $(\theta_n,\theta_l)$ are such that $u_n(\theta_n)\in H^{p_n+t}({\Omega_n})$ and $u_l(\theta_l)\in H^{p_l+1}({\Omega_l})$.
We treat the first term in \eqref{eq:strang-EL} by using the norm-equivalence \eqref{eq:energy-equiv}; we have
\begin{equation}\label{eq:strag-first}
\begin{aligned}
\inf_{(\sigma_n^h,\sigma_l^h)} \|(\theta_n^*-\sigma_n^h,&\theta_l^*-\sigma_l^h)\|^2_{h*}
= \inf_{(\sigma_n^h,\sigma_l^h)} \|(\Pi_n\theta_n^*-\sigma_n^h,\Pi_l\theta_l^*-\sigma_l^h)\|^2_{h*} \\
& \leq C \inf_{(\sigma_n^h,\sigma_l^h)}
\|(\Pi_n\theta_n^*-\sigma_n^h,\Pi_l\theta_l^*-\sigma_l^h)\|^2_{\Theta_n\times\Theta_l} = 0,
\end{aligned}
\end{equation}
where $\|\cdot\|_{\Theta_n\times\Theta_l}$ is defined as in \eqref{eq:control-norm}. We focus on the second term in \eqref{eq:strang-EL}. Adding and subtracting $Q(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l)$ and using conformity of $\Theta^h_n\times\Theta^h_l$ gives
\begin{displaymath}
\begin{array}{l}
Q_h(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l) - F_h(\mu^h_n,\mu^h_l)=
\\ [1.5ex]
\qquad
=\big[Q_h(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l) -Q(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l)\big]
+
\big[Q(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l)- F_h(\mu^h_n,\mu^h_l)\big] \\[1.5ex]
\qquad
=\big[Q_h(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l) -Q(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l)\big]
+
\big[F(\mu^h_n,\mu^h_l)- F_h(\mu^h_n,\mu^h_l)\big] .
\end{array}
\end{displaymath}
Adding and subtracting the terms
\begin{displaymath}
(v_n(\theta_n^*)-v_l(\theta_l^*),{v_n^h}(\mu^h_n)-{v_l^h}(\mu^h_l))_{0,{\Omega_o}}
\quad\mbox{and}\quad
(u_n^0-u_l^0,{v_n^h}(\mu^h_n)-{v_l^h}(\mu^h_l))_{0,{\Omega_o}}
\end{displaymath}
to the last expression and using the definitions of $Q$, $F$, $Q_h$ and $F_h$ yields the identity:
\begin{displaymath}
\begin{array}{l}
Q_h(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l) - F_h(\mu^h_n,\mu^h_l)= \\[1ex]
\qquad
= \left(({v_n^h}(\theta_n^*)-v_n(\theta_n^*))-({v_l^h}(\theta_l^*)-v_l(\theta_l^*)),{v_n^h}(\mu^h_n)-{v_l^h}(\mu^h_l)\right)_{0,{\Omega_o}} +\\[1ex]
\qquad
\left((u_n^{h0}-u_n^0)-(u_l^{h0}-u_l^0),{v_n^h}(\mu^h_n)-{v_l^h}(\mu^h_l)\right)_{0,{\Omega_o}} + \\[1ex]
\qquad
\left(u_n^*-u_l^*, ({v_n^h}(\mu^h_n)-v_n(\mu^h_n)) - ({v_l^h}(\mu^h_l) - v_l(\mu^h_l))\right)_{0,{\Omega_o}}.
\end{array}
\end{displaymath}
Application of the Cauchy-Schwarz inequality then gives the following upper bound:
\begin{displaymath}
\begin{array}{l}
\big| Q_h(\theta_n^*,\theta_l^*;\mu^h_n,\mu^h_l) - F_h(\mu^h_n,\mu^h_l) \big| \le \| {v_n^h}(\mu^h_n)\!-\!{v_l^h}(\mu^h_l)\|_{0,{\Omega_o}} \times\\ [1ex]
\qquad
\Big(
\| {v_n^h}(\theta_n^*)\!-\!v_n(\theta_n^*)\|_{0,{\Omega_o}} \!+\!
\|{v_l^h}(\theta_l^*) \!-\! v_l(\theta_l^*)\|_{0,{\Omega_o}} \!+\!
\| u_n^{h0}\!-\!u_n^0 \|_{0,{\Omega_o}} \!+\!
\| u_l^{h0}\!-\!u_l^0 \|_{0,{\Omega_o}}
\Big) +\\[1ex]
\qquad
\Big(
\|{v_n^h}(\mu^h_n)-v_n(\mu^h_n) \|_{0,{\Omega_o}} +
\|{v_l^h}(\mu^h_l) - v_l(\mu^h_l) \|_{0,{\Omega_o}}
\Big) \times
\| u_n^{*}\!-\!u_l^{*} \|_{0,{\Omega_o}}\,.
\end{array}
\end{displaymath}
Furthermore, note that
$\| {v_n^h}(\mu^h_n)\!-\!{v_l^h}(\mu^h_l)\|_{0,{\Omega_o}} = \|\mu^h_n, \mu^h_l \|_{h*} = 1$,
and that $\| u_n^{*}\!-\!u_l^{*} \|^2_{0,{\Omega_o}} = 2J(\theta_n^*,\theta_l^*)$ is twice the optimal value of the objective functional, which is bounded by the modeling error.
The regularity assumptions on the nonlocal solutions in \eqref{eq:homo} allow us to apply Theorem 6.2 in \cite[p.689]{Du_12_SIREV}:
\begin{displaymath}
\| u_n^{h0}\!-\!u_n^0 \|_{0,{\Omega_o}} \le C\,h^{p_n+t}_n \| u_n^0 \|_{p_n+t,{\Omega_n}},
\end{displaymath}
where $t\in[0,1]$.
Furthermore, the regularity assumptions on the local solutions in \eqref{eq:homo} allow us to use Corollary 1.122 in \cite[p.66]{Ern_04_BOOK} to conclude that
\begin{displaymath}
\| u_l^{h0}\!-\!u_l^0 \|_{0,{\Omega_o}} \le Ch^{p_l+1}_l \| u_l^0 \|_{p_l+1,{\Omega_l}}.
\end{displaymath}
According to Weyl's Lemma \cite{Weyl_40_DMJ} the local harmonic liftings $v_l(\theta_l^*)$ and $v_l(\mu^h_l)$ are smooth functions and so there are positive constants $C_1$ and $C_2$ such that
\begin{displaymath}
\|{v_l^h}(\theta_l^*) \!-\! v_l(\theta_l^*)\|_{0,{\Omega_o}} \leq C_1 h^{p_l+1}_l
\quad \hbox{and} \quad
\|{v_l^h}(\mu^h_l) - v_l(\mu^h_l) \|_{0,{\Omega_o}} \leq C_2 h^{p_l+1}_l.
\end{displaymath}
While a similar result holds for the nonlocal harmonic lifting $v_n(\theta_n^*)$, the treatment of $v_n(\mu_n^h)$ is more involved, due to the discrete nature of the Dirichlet data, and it requires
an auxiliary function $\widetilde \mu_n\in C^\infty({\eta_c})$ such that
$\|\widetilde\mu_n-\mu_n^h\|_{L^2({\eta_c})}\leq \epsilon$, for an arbitrarily small $\epsilon$.
Because $v_n$ and ${v_n^h}$ depend continuously on the data,
\begin{equation}\label{eq:nonlocal-harm-lift}
\begin{aligned}
\|&{v_n^h}(\mu^h_n) - v_n(\mu^h_n) \|_{0,{\Omega_o}} \\
& \leq \|{v_n^h}(\mu^h_n)-{v_n^h}(\widetilde\mu_n) \|_{0,{\Omega_o}}
+ \|{v_n^h}(\widetilde\mu_n)-v_n(\widetilde\mu_n) \|_{0,{\Omega_o}}
+ \|v_n(\widetilde\mu_n)-v_n(\mu^h_n) \|_{0,{\Omega_o}} \\
& \leq C_1 h_n^{p_n+t}\|v_n(\widetilde\mu_n)\|_{p_n+t,{\Omega_n}}
+ \|{v_n^h}(\mu^h_n-\widetilde\mu_n) \|_{0,{\Omega_o}}
+ \|v_n(\widetilde\mu_n-\mu^h_n) \|_{0,{\Omega_o}} \\
& \leq C_1 h_n^{p_n+t}\|v_n(\widetilde\mu_n)\|_{p_n+t,{\Omega_n}}
+ C_2 \|\mu^h_n-\widetilde\mu_n\|_{0,{\eta_c}}
+ C_3 \|\widetilde\mu_n-\mu^h_n\|_{0,{\eta_c}}.
\end{aligned}
\end{equation}
Since $\epsilon$ can be arbitrarily small, the last two terms in \eqref{eq:nonlocal-harm-lift} are negligible.
To complete the estimate we only need a uniform bound on $\|v_n(\widetilde\mu_n)\|_{p_n+t,{\Omega_n}}$. To this end, assume that for all $\widetilde\mu_n\in C^\infty({\eta_c})$, $v_n(\widetilde\mu_n)\in C^k({\Omega_n})$ with $k= p_n+t$. Under this assumption
${\rm D}^\beta[\mathcal{L} v_n(\widetilde\mu_n)]=\mathcal{L} {\rm D}^\beta[v_n(\widetilde\mu_n)]$
for every multi-index $\beta$ with $|\beta|\leq k$.
Taking into account that $v_n(\widetilde\mu_n)$ is nonlocal harmonic, i.e., $\mathcal{L} v_n(\widetilde\mu_n)=0$, it follows that $\mathcal{L} {\rm D}^\beta[v_n(\widetilde\mu_n)]=0$, i.e., ${\rm D}^\beta[v_n(\widetilde\mu_n)]$ is also nonlocal harmonic for all $|\beta|\leq k$. Thus, ${\rm D}^\beta[v_n(\widetilde\mu_n)]$ has a uniformly bounded $L^2$ norm, i.e., $\|{\rm D}^\beta[v_n(\widetilde\mu_n)]\|_{0,{\Omega_n}} \leq C_\beta$ for all $|\beta|\leq k$. This implies the existence of a positive constant $C$ such that $\|v_n(\widetilde\mu_n)\|_{p_n+t,{\Omega_n}}\leq C$.
It follows that there exist positive constants $C_1$ and $C_2$ such that
\begin{displaymath}
\|{v_n^h}(\theta_n^*) \!-\! v_n(\theta_n^*)\|_{0,{\Omega_o}}\leq C_1 h^{p_n+t}_n
\quad \hbox{and} \quad
\|{v_n^h}(\mu^h_n) - v_n(\mu^h_n) \|_{0,{\Omega_o}} \leq C_2 h^{p_n+t}_n.
\end{displaymath}
We have just shown the following result.
\begin{theorem}\label{thm:hstar-discr-error}
Assume that H.1--H.4 hold. Then, there exist positive constants $C_1,\,C_2$ such that
\begin{equation}\label{eq:hstar-discr-error}
\|(\theta_n^*-\theta_n^{h*}, \theta_l^*-\theta_l^{h*})\|_{h*}
\leq C_1 h_n^{p_n+t} + C_2 h_l^{p_l+1}.
\end{equation}
\end{theorem}
We use Theorem \ref{thm:hstar-discr-error} to estimate the $\Theta_n\times\Theta_l$ norm of the discretization error.
\begin{corollary}\label{cor:L2H12-discr-error}
Assume that H.1--H.4 hold. Then, there exist positive constants $C_1,\,C_2$ such that
\begin{equation}\label{eq:L2H12-discr-error}
\|(\theta_n^*-\theta_n^{h*}, \theta_l^*-\theta_l^{h*})\|^2_{\Theta_n\times\Theta_l}
\leq C_1 h_n^{2(p_n+t)} + C_2 h_l^{2p_l+1}.
\end{equation}
\end{corollary}
{\it Proof.}
Adding and subtracting $\Pi_n\theta_n^*$ and $\Pi_l\theta_l^*$, and using the triangle inequality
\begin{equation}\label{eq:error-splitting}
\begin{aligned}
\|(\theta_n^*&-\theta_n^{h*}, \theta_l^*-\theta_l^{h*})\|_{\Theta_n\times\Theta_l} \\
& \leq \|(\theta_n^*-\Pi_n\theta_n^*, \theta_l^*-\Pi_l\theta_l^*)\|_{\Theta_n\times\Theta_l}
+ \|(\Pi_n\theta_n^*-\theta_n^{h*}, \Pi_l\theta_l^*-\theta_l^{h*})\|_{\Theta_n\times\Theta_l}.
\end{aligned}
\end{equation}
Using standard finite element approximation results for the first term yields
\begin{equation}\label{eq:error-splitting-first}
\|(\theta_n^*-\Pi_n\theta_n^*, \theta_l^*-\Pi_l\theta_l^*)\|^2_{\Theta_n\times\Theta_l}
\leq C_2 h_n^{2(p_n+t)} \|\theta_n^*\|^2_{p_n+t,{\eta_c}}
+ C_3 h_l^{2p_l+1} \|\theta_l^*\|^2_{p_l+\frac12,{\Gamma_c}}.
\end{equation}
We focus on the second term in \eqref{eq:error-splitting}. By the norm-equivalence \eqref{eq:energy-equiv} in the discrete control space, we have
\begin{displaymath}
\|(\Pi_n\theta_n^*-\theta_n^{h*}, \Pi_l\theta_l^*-\theta_l^{h*})\|_{\Theta_n\!\times\Theta_l}
\! \leq \! C \|(\Pi_n\theta_n^*-\theta_n^{h*}, \Pi_l\theta_l^*-\theta_l^{h*})\|_{h*}
\!= \!C \|(\theta_n^*-\theta_n^{h*}, \theta_l^*-\theta_l^{h*})\|_{h*}.
\end{displaymath}
This result along with \eqref{eq:hstar-discr-error} and \eqref{eq:error-splitting-first} implies \eqref{eq:L2H12-discr-error}.
$\square$
Since $u_n^*$ and $u_l^*$ depend continuously on the data, Corollary \ref{cor:L2H12-discr-error} implies that
\begin{equation}\label{eq:states-discr-error}
\begin{aligned}
\|u_n^*-u_n^{h*}\|^2_{0,{\Omega_n}} & \leq K_{n1} \, h_n^{2(p_n+t)}+
K_{n2} \, h_l^{2p_l+1} \\
\|u_l^*-u_l^{h*}\|^2_{0,{\Omega_l}} & \leq K_{l1} \, h_n^{2(p_n+t)}+
K_{l2} \, h_l^{2p_l+1},
\end{aligned}
\end{equation}
that is, the $L^2$ norm error of the state variables is of the same order as the $L^2\times H^\frac12$ norm error of the controls.
\section{Numerical tests}\label{numerical-tests}
We present numerical tests with the new LtN formulation in one dimension, including a patch test, a convergence study and approximation of discontinuous solutions. Though preliminary, these results show the effectiveness of the coupling method, illustrate the theoretical results, and provide the basis for realistic simulations.
In our examples we use an integrable kernel, $\gamma_i$, satisfying assumptions \eqref{eq:gamma-conds} and \eqref{eq:integrable-kernel} to illustrate theoretical results and a singular kernel, $\gamma_s$, often used in the literature as an approximation of a peridynamic model for nonlocal mechanics. These kernels are given by
\begin{equation}\label{test-kernel}
\gamma_i(x,y) = \frac{3}{2\varepsilon^3}\chi_{(x-\varepsilon,x+\varepsilon)}(y)
\quad {\rm and} \quad
\gamma_s(x,y) = \frac{1}{\varepsilon^2|x-y|}\chi_{(x-\varepsilon,x+\varepsilon)}(y),
\end{equation}
respectively. Even though $\gamma_s$ does not satisfy our theoretical assumptions\footnote{The energy space associated with $\gamma_s$ is not equivalent to a Sobolev space, nevertheless it is a separable Hilbert space whose energy norm satisfies a nonlocal Poincar\'e inequality.}, these numerical results demonstrate the effectiveness of the LtN coupling for realistic, practically important, nonlocal models.
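Both kernels in \eqref{test-kernel} carry an $\varepsilon$-scaling that keeps the second moment $\int\gamma(x,y)(y-x)^2\,d\bm{y}$ equal to $1$ independently of $\varepsilon$: by direct integration, $\frac{3}{2\varepsilon^3}\int_{-\varepsilon}^{\varepsilon}s^2\,ds=1$ and $\frac{1}{\varepsilon^2}\int_{-\varepsilon}^{\varepsilon}|s|\,ds=1$. A midpoint-rule check of this normalization (the quadrature is purely illustrative):

```python
import numpy as np

def gamma_i(s, eps):
    """Integrable kernel from the tests, as a function of s = y - x."""
    return np.where(np.abs(s) < eps, 1.5 / eps**3, 0.0)

def gamma_s(s, eps):
    """Singular (peridynamics-type) kernel, as a function of s = y - x."""
    return np.where(np.abs(s) < eps, 1.0 / (eps**2 * np.abs(s)), 0.0)

def second_moment(gamma, eps, n=100000):
    """Midpoint rule for int_{-eps}^{eps} gamma(s) s^2 ds (midpoints avoid s = 0)."""
    s = -eps + (np.arange(n) + 0.5) * (2 * eps / n)
    return float(np.sum(gamma(s, eps) * s**2) * (2 * eps / n))
```

The shared normalization is what makes the two kernels comparable in the convergence studies below.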
In all examples we consider the LtN problem configuration shown in Fig.~\ref{1D-domain}, where ${\Omega_n}=(-\varepsilon, 1+\varepsilon)$, ${\eta_D}=(-\varepsilon,0)$, ${\eta_c}=(1,1+\varepsilon)$, ${\Omega_l}=(0.75,1.75)$, ${\Gamma_D}=1.75$, ${\Gamma_c}=0.75$, and ${\Omega_o}=(0.75,1+\varepsilon)$.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{1D-domainN.pdf}
\vspace{-2ex}
\caption{One-dimensional LtN configuration used in the numerical tests.}
\label{1D-domain}
\end{figure}
In all numerical tests $V^h_{\eta_D}$, $V^h_{\eta_n}$, and $\Theta^h_n$ are discontinuous piecewise-linear finite element spaces, while $H^h_{\Gamma_D}$ and $H^h_0$ are $C^0$ piecewise-linear finite element spaces. We use the same grid size $h$ for the local and nonlocal finite element spaces. To solve the LtN optimization problem we apply the gradient-based quasi-Newton BFGS scheme \cite{wright:99}.
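Since the reduced objective is a smooth convex quadratic in the discrete control dofs, BFGS converges rapidly and, on a quadratic, recovers the normal-equations solution. A toy driver in the spirit of the solver used here, with a random stand-in for the lifted-response matrix (not the actual FE assembly):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
m, k = 12, 4
V = rng.standard_normal((m, 2 * k))   # stand-in: lifted responses of the control dofs
d = rng.standard_normal(m)            # stand-in: mismatch of the homogeneous components

def J(theta):
    """Reduced objective 1/2 ||V theta + d||^2."""
    r = V @ theta + d
    return 0.5 * r @ r

def gradJ(theta):
    return V.T @ (V @ theta + d)

res = minimize(J, np.zeros(2 * k), jac=gradJ, method="BFGS")
theta_ref, *_ = np.linalg.lstsq(V, -d, rcond=None)   # normal-equations reference
```

In practice only \texttt{J} and \texttt{gradJ} change: each evaluation triggers the two state solves \eqref{var-state-nnh}--\eqref{var-state-llh}, while the optimizer itself is untouched.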
\smallskip\paragraph{The patch test} This test uses the linear manufactured solution $u_n=u_l=x$, $u_n|_{\eta_D}=x$, $u_l(1.75)=1.75$, $f_n=f_l=0$. We expect the LtN formulation to recover this solution exactly, i.e., $u_n^{h*}\equivu_n^*$ and $u_n^{h*}\equivu_l^*$. Figure \ref{patch} shows the optimal states $u_n^{h*}$ and $u_l^{h*}$, computed with $\varepsilon=0.065$ and $h=2^{-7}$, for $\gamma_i$ (left) and $\gamma_s$ (right).
The LtN method recovers the exact solution to machine precision.
\begin{figure}
\centering
\begin{tabular}{ll}
\includegraphics[scale=.3]{patch-integrable-065-2e5.pdf} & \hspace{-.7cm}
\includegraphics[scale=.3]{patch-singular-065-2e5.pdf}
\end{tabular}
\caption{Optimal states for the patch test with $\gamma_i$ (left) and $\gamma_s$ (right).}
\label{patch}
\end{figure}
\smallskip\paragraph{Convergence tests}
We examine the convergence of finite element approximations with respect to the grid size $h$ using the following manufactured solutions:
\begin{description}
\smallskip\item[M.1] $u_n=u_l=x^2$, $u_n|_{\eta_D}=x^2$, $u_l(1.75)=1.75^2$, $f_n=f_l=-2$.
\smallskip\item[M.2] $u_n=u_l=x^3$, $u_n|_{\eta_D}=x^3$, $u_l(1.75)=1.75^3$, $f_n=f_l=-6x$.
\end{description}
\smallskip Note that, for both kernels, the associated nonlocal operator is equivalent to the classical Laplacian for polynomials of degree up to three.
For examples \textbf{M.1} and \textbf{M.2} we compute the convergence rates and the $L^2$ norms of the errors for the nonlocal state, $e(u_n^*)$, the local state, $e(u_l^*)$, and the nonlocal control parameter, $e(\theta_n^*)$. The results are reported in Tables \ref{T:convergence2} and \ref{T:convergence3} for $\gamma_i$ and in Tables \ref{T:convergence2s} and \ref{T:convergence3s} for $\gamma_s$, for different values of the interaction radius $\varepsilon$ and grid size $h$. In Fig.~\ref{optimal-poly} we also report the optimal discrete solutions.
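The tabulated rates are the usual dyadic estimates $\mathrm{rate}=\log_2\big(e(2h)/e(h)\big)$ computed from errors on successively halved grids. For instance, applying this to the $e(u_n^*)$ column for $\gamma_i$ with $\varepsilon=0.065$ in Table \ref{T:convergence2}:

```python
import numpy as np

def observed_rates(errors):
    """log2 ratios of errors on successively halved grids: rate = log2(e(2h)/e(h))."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# e(u_n^*) for the integrable kernel, eps = 0.065, h = 2^-3, ..., 2^-7
e_un = [2.24e-03, 7.56e-04, 1.89e-04, 4.73e-05, 1.18e-05]
rates = observed_rates(e_un)   # approximately [1.57, 2.00, 2.00, 2.00]
```

The pre-asymptotic first ratio aside, the sequence settles at the second-order rate expected for piecewise-linear elements.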
Results in Tables \ref{T:convergence2} and \ref{T:convergence3} show optimal convergence for state and control variables. We note that according to \cite{Du_12_SIREV} and FE convergence theory \cite{Ern_04_BOOK} this is the same rate as for the independent discretization of the nonlocal and local equations by piecewise linear elements.
\begin{remark}
The convergence analysis in Section \ref{sec:conv-anl} establishes a suboptimal convergence rate in the $L^2$ norm of the discretization error of the state variables, as half an order of convergence is lost. We believe that the bound in \eqref{eq:states-discr-error} is not sharp; in fact, additional numerical tests (with $h=2^{-8},\ldots,2^{-12}$) show no deterioration of the convergence rate.
\end{remark}
For the singular kernel $\gamma_s$ there are no theoretical convergence results; however, there is numerical evidence that piecewise linear approximations of \eqref{nonlocal-weak} are second-order accurate; see \cite{Chen_11_CMAME}. Our numerical experiments in Tables \ref{T:convergence2s} and \ref{T:convergence3s} show that the optimization-based LtN solution converges at the same rate.
\begin{figure}[h!]
\centering
\begin{tabular}{ll}
\includegraphics[scale=.3]{quad-integrable-065-2e5.pdf} & \hspace{-.7cm}
\includegraphics[scale=.3]{cubic-integrable-065-2e5.pdf} \\\vspace{-3ex}
\includegraphics[scale=.3]{quad-singular-065-2e5.pdf} & \hspace{-.7cm}
\includegraphics[scale=.3]{cubic-singular-065-2e5.pdf}
\end{tabular}
\caption{Optimal states for \textbf{M.1} (left) and \textbf{M.2} (right) with $\gamma_i$ (top) and $\gamma_s$ (bottom).}
\label{optimal-poly}
\end{figure}
\begin{table}[h!]
\begin{center}
\footnotesize
\begin{tabular}{| c | c | l | c | l | c | l | c |}
\hline
$\varepsilon$ &$h$ & $e(u_n^*)$& rate & $e(u_l^*)$& rate & $e(\theta_n^*)$& rate \\ \hline
& $2^{-3}$ & 2.63e-03 & - & 2.76e-03 & - & 5.59e-05 & - \\ \cline{2-8}
& $2^{-4}$ & 6.16e-04 & 2.10 & 6.74e-04 & 2.04 & 2.63e-05 & 1.09 \\ \cline{2-8}
0.010 & $2^{-5}$ & 1.40e-04 & 2.13 & 1.63e-04 & 2.05 & 1.14e-05 & 1.20 \\ \cline{2-8}
& $2^{-6}$ & 3.46e-05 & 2.02 & 4.04e-05 & 2.01 & 4.18e-06 & 1.47 \\ \cline{2-8}
& $2^{-7}$ & & & & & & \\ \hline
& $2^{-3}$ & 2.24e-03 & - & 2.56e-03 & - & 6.34e-04 & - \\ \cline{2-8}
& $2^{-4}$ & 7.56e-04 & 1.56 & 7.13e-04 & 1.85 & 1.78e-04 & 1.83 \\ \cline{2-8}
0.065 & $2^{-5}$ & 1.89e-04 & 2.00 & 1.78e-04 & 2.00 & 4.46e-05 & 2.00 \\ \cline{2-8}
& $2^{-6}$ & 4.73e-05 & 2.00 & 4.46e-05 & 2.00 & 1.12e-05 & 2.00 \\ \cline{2-8}
& $2^{-7}$ & 1.18e-05 & 2.00 & 1.11e-05 & 2.00 & 2.82e-06 & 1.99 \\ \hline
\end{tabular}
\caption{Example \textbf{M.1} with $\gamma_i$: dependence on the grid size $h$ and interaction radius $\varepsilon$ of the error.}
\label{T:convergence2}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\footnotesize
\begin{tabular}{ | c | c | l | c | l | c | l | c | }
\hline
$\varepsilon$ &$h$ & $e(u_n^*)$& rate & $e(u_l^*)$& rate & $e(\theta_n^*)$& rate \\ \hline
& $2^{-3}$ & 4.89e-03 & - & 1.09e-02 & - & 2.04e-04 & - \\ \cline{2-8}
& $2^{-4}$ & 1.23e-03 & 1.99 & 2.74e-03 & 2.00 & 9.63e-05 & 1.08 \\ \cline{2-8}
0.010 & $2^{-5}$ & 3.11e-04 & 1.99 & 6.86e-04 & 2.00 & 4.16e-05 & 1.21 \\ \cline{2-8}
& $2^{-6}$ & 7.85e-05 & 1.99 & 1.72e-04 & 2.00 & 1.45e-05 & 1.51 \\ \cline{2-8}
& $2^{-7}$ & 1.95e-05 & 2.01 & 4.29e-05 & 2.00 & 3.16e-06 & 2.20 \\ \hline
& $2^{-3}$ & 5.41e-03 & - & 1.09e-02 & - & 2.29e-03 & - \\ \cline{2-8}
& $2^{-4}$ & 1.34e-03 & 2.01 & 2.74e-03 & 2.00 & 5.46e-04 & 2.07 \\ \cline{2-8}
0.065 & $2^{-5}$ & 3.38e-04 & 1.99 & 6.86e-04 & 2.00 & 1.38e-04 & 1.99 \\ \cline{2-8}
& $2^{-6}$ & 8.46e-05 & 2.00 & 1.71e-04 & 2.00 & 3.46e-05 & 1.99 \\ \cline{2-8}
& $2^{-7}$ & 2.12e-05 & 2.00 & 4.29e-05 & 2.00 & 8.73e-06 & 1.99 \\ \hline
\end{tabular}
\caption{Example \textbf{M.2} with $\gamma_i$: dependence on the grid size $h$ and interaction radius $\varepsilon$ of the error.}
\label{T:convergence3}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\footnotesize
\begin{tabular}{| c | c | l | c | l | c | l | c |}
\hline
$\varepsilon$ &$h$ & $e(u_n^*)$& rate & $e(u_l^*)$& rate & $e(\theta_n^*)$& rate \\ \hline
& $2^{-3}$ & 2.67e-03 & - & 2.78e-03 & - & 5.79e-05 & - \\ \cline{2-8}
& $2^{-4}$ & 6.33e-04 & 2.08 & 6.81e-04 & 2.03 & 2.72e-05 & 1.09 \\ \cline{2-8}
0.010 & $2^{-5}$ & 1.47e-04 & 2.11 & 1.65e-04 & 2.04 & 1.19e-05 & 1.20 \\ \cline{2-8}
& $2^{-6}$ & 3.63e-05 & 2.01 & 4.11e-05 & 2.01 & 4.29e-06 & 1.47 \\ \cline{2-8}
& $2^{-7}$ & 9.10e-06 & 2.00 & 1.03e-05 & 2.00 & 1.05e-06 & 2.03 \\ \hline
& $2^{-3}$ & 2.36e-03 & - & 2.62e-03 & - & 6.52e-04 & - \\ \cline{2-8}
& $2^{-4}$ & 7.54e-04 & 1.65 & 7.12e-04 & 1.88 & 1.78e-04 & 1.87 \\ \cline{2-8}
0.065 & $2^{-5}$ & 1.88e-04 & 2.00 & 1.78e-04 & 2.00 & 4.45e-05 & 2.00 \\ \cline{2-8}
& $2^{-6}$ & 4.67e-05 & 2.01 & 4.44e-05 & 2.00 & 1.11e-05 & 2.00 \\ \cline{2-8}
& $2^{-7}$ & 1.14e-05 & 2.04 & 1.10e-05 & 2.01 & 2.76e-06 & 2.01 \\ \hline
\end{tabular}
\caption{Example \textbf{M.1} with $\gamma_s$: dependence on the grid size $h$ and interaction radius $\varepsilon$ of the error.}
\label{T:convergence2s}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\footnotesize
\begin{tabular}{ | c | c | l | c | l | c | l | c | }
\hline
$\varepsilon$ &$h$ & $e(u_n^*)$& rate & $e(u_l^*)$& rate & $e(\theta_n^*)$& rate \\ \hline
& $2^{-3}$ & 4.90e-03 & - & 1.09e-02 & - & 2.07e-04 & - \\ \cline{2-8}
& $2^{-4}$ & 1.23e-03 & 1.99 & 2.74e-03 & 2.00 & 9.68e-05 & 1.10 \\ \cline{2-8}
0.010 & $2^{-5}$ & 3.11e-04 & 1.99 & 6.86e-04 & 2.00 & 4.17e-05 & 1.21 \\ \cline{2-8}
& $2^{-6}$ & 7.85e-05 & 1.99 & 1.72e-04 & 2.00 & 1.46e-05 & 1.52 \\ \cline{2-8}
& $2^{-7}$ & 1.96e-05 & 2.01 & 4.29e-05 & 2.00 & 3.17e-06 & 2.00 \\ \hline
& $2^{-3}$ & 5.40e-03 & - & 1.09e-02 & - & 2.31e-03 & - \\ \cline{2-8}
& $2^{-4}$ & 1.34e-03 & 2.01 & 2.74e-03 & 2.00 & 5.46e-04 & 2.08 \\ \cline{2-8}
0.065 & $2^{-5}$ & 3.37e-04 & 1.99 & 6.86e-04 & 2.00 & 1.38e-04 & 2.00 \\ \cline{2-8}
& $2^{-6}$ & 8.46e-05 & 2.00 & 1.72e-04 & 2.00 & 3.46e-05 & 1.99 \\ \cline{2-8}
& $2^{-7}$ & 2.12e-05 & 2.00 & 4.29e-05 & 2.00 & 8.73e-06 & 1.99 \\ \hline
\end{tabular}
\caption{Example \textbf{M.2} with $\gamma_s$: dependence on the grid size $h$ and interaction radius $\varepsilon$ of the error.}
\label{T:convergence3s}
\end{center}
\end{table}
\paragraph{Recovery of singular features} The tests in this section are motivated by nonlocal mechanics applications and demonstrate the effectiveness of the coupling method in the presence of point forces and discontinuities. We use the following two manufactured solution examples:
\begin{description}
\smallskip\item[A.1] $u_n|_{\eta_D}=0$, $u_l(1.75)=0$, $f_n=f_l=\delta(x-0.25)$, where $\delta$ denotes the Dirac delta function.
\smallskip\item[A.2] $u_n|_{\eta_D}=0$, $u_l(1.75)=0$,
\begin{displaymath}
f_n=f_l=\left\{\begin{array}{ll}
0 & x<\frac12-\varepsilon\\[4mm]
-\frac{2}{\varepsilon}\left(\frac12\varepsilon^2-\varepsilon+\frac38+\left(2\varepsilon-\frac32-\log\varepsilon\right)x\right. + & \\
\qquad\left.\left(\frac32+\log\varepsilon\right)x^2-\log\left(\frac12-x\right)(x^2-x)\right) & \frac12-\varepsilon\leq x<\frac12\\[4mm]
-\frac{2}{\varepsilon}\left(\frac12\varepsilon^2-\varepsilon-\frac38+\left(2\varepsilon+\frac32+\log\varepsilon\right)x\right. - & \\
\qquad \left.\left(\frac32+\log\varepsilon\right)x^2-\log\left(x-\frac12\right)(x^2-x)\right) & \frac12\leq x<\frac12+\varepsilon\\[4mm]
0 & x\geq \frac12 +\varepsilon.
\end{array}\right.
\end{displaymath}
\end{description}
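The piecewise forcing term for \textbf{A.2} can be transcribed directly from the displayed formula; a minimal sketch (ours, not the authors' code; note the expression is singular at $x=1/2$):

```python
import math

def f_A2(x, eps):
    """Manufactured forcing f_n = f_l for example A.2, transcribed from
    the piecewise formula above; undefined exactly at x = 1/2."""
    if x < 0.5 - eps or x >= 0.5 + eps:
        return 0.0
    le = math.log(eps)
    if x < 0.5:  # branch 1/2 - eps <= x < 1/2
        return -(2.0 / eps) * (0.5 * eps**2 - eps + 3.0 / 8.0
                               + (2.0 * eps - 1.5 - le) * x
                               + (1.5 + le) * x**2
                               - math.log(0.5 - x) * (x**2 - x))
    # branch 1/2 <= x < 1/2 + eps
    return -(2.0 / eps) * (0.5 * eps**2 - eps - 3.0 / 8.0
                           + (2.0 * eps + 1.5 + le) * x
                           - (1.5 + le) * x**2
                           - math.log(x - 0.5) * (x**2 - x))
```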
In Fig.~\ref{optimal-disc} we report the optimal discrete solutions for $h=2^{-7}$ and $\varepsilon=0.065$. In particular, {\bf A.2} is a significant example: it shows the usefulness of the coupling method in approximating the true solution with a local model in the region where nonlocal effects are not pronounced, i.e., where the solution is smooth.
\begin{figure}
\centering
\begin{tabular}{ll}
\includegraphics[scale=.3]{delta-singular-065-2e5.pdf} & \hspace{-.7cm}
\includegraphics[scale=.3]{discontinuous-singular-065-2e5.pdf}
\end{tabular}
\vspace{-3ex}
\caption{Optimal states for examples \textbf{A.1} and \textbf{A.2}.}
\label{optimal-disc}
\end{figure}
\newpage
\section*{Acknowledgements}
\begin{wrapfigure}{r}{0.4\linewidth}
\vspace{-5.4ex}
\centering
\includegraphics[width=0.7\linewidth]{trace-domain-rectangle.pdf}
\noindent
\vspace{-2.8ex}
\caption{Domain for Lemma \ref{lemma-nonlocal-trace}.}
\label{fig:nonlocal-trace}
\end{wrapfigure}
This work was supported by the Sandia National Laboratories (SNL) Laboratory-directed Research and Development (LDRD) program, and the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Award Number DE-SC-0000230927 and under the Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Report Number: SAND2019-12937.
\section{Introduction}
State-of-the-art systems have shown performance comparable with humans on many recent natural language understanding (NLU) datasets \cite{devlin-etal-2019-bert,sun-etal-2021-ernie}, suggesting that these benchmarks will no longer be able to measure future progress.
To move beyond this, we will need to find better ways of building difficult datasets, ideally without sacrificing diversity or coverage \cite{bowman-dahl-2021-will}.
To obtain such human-written examples at scale, there are active lines of crowdsourcing research on protocols of worker handling and feedback \cite{nangia-etal-2021-ingredients} and the design of the collection task \cite{ning-etal-2020-torque,rogers-etal-2020-getting}.
However, we do not have clear information on what aspects of \textit{text sources} affect the difficulty and diversity of examples.
\begin{figure}[t]
\centering \small
\fbox{\parbox{0.95\linewidth}{
\textbf{MCTest}: Tony walked home from school on his birthday. He was surprised to see a lot of cars in front of his house. When he opened the door and entered the house, he heard a lot of people yell, ``Surprise!'' It was a surprise party for his birthday. His parents called all his friends' parents and invited them to come to a party for Tony. [...] \\
Q: \textit{Who were invited to the party and by who?} \\
$\rlap{\hphantom{$\checkmark$}}\square$~ \textit{Tony's parents invited only his friends} \\
$\rlap{\hphantom{$\checkmark$}}\square$~ \textit{Tony invited his friends and their parents} \\
$\rlap{\hphantom{$\checkmark$}}\square$~ \textit{Tony's parents invited his friends' parents} \\
$\rlap{$\checkmark$}\square$~ \textit{Tony's parents invited his friends and their parents}
}}
\fbox{\parbox{0.95\linewidth}{
\textbf{ReClor}: Humanitarian considerations aside, sheer economics dictates that country X should institute, as country Y has done, a nationwide system of air and ground transportation for conveying seriously injured persons to specialized trauma centers. Timely access to the kind of medical care that only specialized centers can provide could save the lives of many people. [...] \\
Q: \textit{What is the economic argument supporting the idea of \hphantom{Q:}a transportation system across the nation of Country X?} \\
$\rlap{\hphantom{$\checkmark$}}\square$~ \textit{Building the transportation system creates a substantial \hphantom{$\rlap{\hphantom{$\checkmark$}}\square$~}increase of jobs for the locals} \\
$\rlap{$\checkmark$}\square$~ \textit{Increasing access to specialized medical centers can \hphantom{$\rlap{\hphantom{$\checkmark$}}\square$~}lower the chance of the workforce population dying} \\
$\rlap{\hphantom{$\checkmark$}}\square$~ \textit{Transportation ticket prices directly contribute to the \hphantom{$\rlap{\hphantom{$\checkmark$}}\square$~}government's revenue} \\
$\rlap{\hphantom{$\checkmark$}}\square$~ \textit{Country Y was successful with their attempts to poten-\hphantom{$\rlap{\hphantom{$\checkmark$}}\square$~}tially save lives so Country X should try it as well}
}}
\caption{
Example questions for passages from simple narratives (MCTest) and technical arguments (ReClor).
}
\label{fig:intro-example}
\end{figure}
Crowdsourced datasets in reading comprehension use passages taken from a variety of sources, such as news articles, exams, and blogs, about which questions are written \cite{lai-etal-2017-race,trischler-etal-2017-newsqa,rogers-etal-2020-getting}.
The first example in Figure~\ref{fig:intro-example} is from MCTest~\cite{richardson-etal-2013-mctest}, the passages of which are written in grade-school-level English.
The second example is from ReClor~\cite{Yu2020ReClor}, which consists of passages and questions written for graduate and law school admission examinations.
We hypothesize that difficult passages, such as those in the second example, are more suitable for crowdsourcing challenging questions.
Passages that are linguistically complex and have dense information could help facilitate the writing of questions that require understanding a wide range of linguistic and world knowledge, following intricate events, and comprehending logical arguments.
In contrast, easy passages, as in children's stories, likely talk about common situations and simple facts, which might prevent workers from writing difficult questions.
In this work, we crowdsource multiple-choice reading comprehension questions to analyze how question difficulty and type are affected by the choice of source passage.
Using passages extracted from seven different sources, we ask crowdworkers to write questions about the given passages.
We compute the difference between human and machine accuracy, using it as a measure of the question difficulty, to investigate whether there is a correlation between the question difficulty and linguistic aspects of the passage, such as their source, length, and readability.
In addition to a standard setting where we directly accept crowdworkers' submissions, we use an adversarial setting in which they have to write questions that fool a strong reading comprehension model \cite{bartolo-etal-2020-beat,kiela-etal-2021-dynabench}.
Previous work finds that questions requiring numerical reasoning frequently appear in adversarial data collection for extractive QA over Wikipedia articles \cite{kaushik-etal-2021-efficacy}. Our aim is to see whether a similar trend holds for multiple-choice questions written for different passage sources, and whether the adversarial setting is useful for collecting especially diverse questions.
To our surprise, we find that the difficulty of collected questions does not depend on the differences of passages in linguistic aspects such as passage source, passage length, Flesch--Kincaid grade level \cite{kincaid1975derivation}, syntactic and lexical surprisal, elapsed time for answering, and the average word frequency in a passage.
Our main positive finding comes through our manual annotation of the types of reasoning that each question targets, where we observe that questions that require numerical reasoning and logical reasoning are relatively difficult.
In addition, we find several trends between the passage sources and reasoning types.
For example, logical reasoning is more often required in questions written for technical passages, whereas understanding of a given passage's gestalt and the author's attitude toward it are more frequently required for argumentative and subjective passages than expository passages.
These results suggest that when creating a new benchmark dataset or choosing one for evaluating NLU systems, selecting a diverse set of passages can help ensure a diverse range of question types, but that passage \textit{difficulty} need not be a priority.
Our collected datasets could be useful for training reading comprehension models and for further analysis of requisite knowledge and comprehension types in answering challenging multiple-choice questions.\footnote{Our datasets, annotation instructions and results, and crowdsourcing scripts are available at \url{https://github.com/nii-cl/qa-text-source-comparison}.}
\section{Related Work}
\paragraph{Crowdsourcing NLU Datasets}
Crowdsourcing has been widely used to collect human-written examples at scale \cite{rajpurkar-etal-2016-squad,trischler-etal-2017-newsqa}.
Crowdworkers are usually asked to write questions about a given text, sometimes with constraints imposed to obtain questions that require specific reasoning skills such as multi-hop reasoning \cite{yang-etal-2018-hotpotqa} or understanding of temporal order, coreference, or causality \cite{rogers-etal-2020-getting}.
In this study, to analyze naturally written examples, we do not consider specific constraints on questions or answer options.
Current benchmark datasets constructed by crowdsourcing may not be of sufficient quality to precisely evaluate human-level NLU.
For example, \citet{ribeiro-etal-2020-beyond} reveal that state-of-the-art models in traditional NLP benchmarks fail simple behavioral tests of linguistic capabilities (\textit{checklists}).
\citet{chen-durrett-2019-understanding} and \citet{min-etal-2019-compositional} show that questions in multi-hop reasoning datasets such as HotpotQA by \citet{yang-etal-2018-hotpotqa} do not necessarily require multi-hop reasoning across multiple paragraphs.
To investigate how to collect high-quality, challenging questions through crowdsourcing, \citet{nangia-etal-2021-ingredients} compare different sourcing protocols and find that training workers and providing feedback about their submissions improve the difficulty and quality of their reading comprehension questions.
To encourage workers to write difficult examples, \citet{bartolo-etal-2020-beat} propose to collect questions using a model-in-the-loop setting.
Although this adversarial approach enables us to collect challenging questions efficiently, \citet{gardner-etal-2020-evaluating} point out that the collected examples might be biased towards the quirks of the adversary models.
\citet{bowman-dahl-2021-will} extend this argument, and point out that adversarial methods can systematically eliminate coverage of some phenomena.
This is also supported by \citet{kaushik-etal-2021-efficacy}, but their findings are limited to extractive QA for Wikipedia articles.
Our motivation is to see if this argument is applicable to the multiple-choice format with a wide range of passage sources for which we expect crowdworkers to write linguistically diverse questions and answer options.
\paragraph{Sources of NLU Datasets}
Reading comprehension datasets are often constructed with a limited number of passage sources.
\citet{rajpurkar-etal-2016-squad} sample about five hundred articles from the top 10,000 articles in PageRank of Wikipedia.
Similarly, \citet{dua-etal-2019-drop} curate passages from Wikipedia articles containing numeric values to collect questions for mathematical and symbolic reasoning.
\citet{khashabi-etal-2018-looking} construct a dataset in which questions are written for various passage sources such as news articles, science textbooks, and narratives.
However, we cannot use their questions for our analysis of the variation of naturally written questions because they are designed to require local multi-sentence reasoning (such as coreference resolution and paraphrasing) by filtering out questions answerable only with a single sentence.
Similarly to our work, \citet{sugawara-etal-2017-evaluation} find that readability metrics and question difficulty do not correlate in reading comprehension datasets.
Our study differs in the following two points, which could cause different findings:
First, their observational study of existing datasets has fundamental confounding factors because the questions they examine are constructed using different sourcing methods (e.g., automatic generation, expert writing, and crowdsourcing), which could have an impact on the question difficulty.
We aim to investigate uniformly crowdsourced examples across seven different sources to obtain insights for future data construction research using crowdsourcing.
Second, they define question difficulty using human annotations alone, but this does not necessarily reflect the difficulty for current state-of-the-art models.
In this study, we define the question difficulty as the human--machine performance gap using eight recent strong models, which enables a more fine-grained analysis of the collected questions for a better benchmark of current models.
\citet{fisch-etal-2019-mrqa} propose a shared task consisting of different in-domain and out-domain datasets.
However, they combine datasets in different task formats and sourcing methods, which prevents us from comparing questions across passage sources alone.
In contrast, our focus is to compare questions collected by crowdsourcing for the same task format to analyze the question difficulty for current state-of-the-art models.
We adopt the multiple-choice format because, as discussed by \citet{huang-etal-2019-cosmos}, it allows us to evaluate both human and machine performance easily.
\section{Crowdsourcing Tasks}
This study aims to analyze what kinds of passages make crowdsourced reading comprehension questions difficult.
We use Amazon Mechanical Turk.
To collect difficult and high-quality examples, we require crowdworkers to take a qualification test \mbox{before} accepting our question writing and validation tasks.
\subsection{Worker Qualification}
\label{sec:qualification}
The qualification test has two parts, which we run in separate tasks: question answering and writing.
To take the qualification test, workers have to meet the following minimum qualifications: based in the United States, Canada, or the United Kingdom; an approval rate of at least 98\%; and at least 1,000 approved tasks.
The question answering task is used to identify workers who answer reading comprehension questions carefully.
A single question answering task has five questions that are randomly sampled from the validation set of ReClor in which most questions are taken from actual exams.
Those who correctly answer at least four out of the five questions proceed to the next qualification phase.
The question writing task is used to familiarize workers with the writing of multiple-choice reading comprehension questions and select those who can carefully write examples.
We ask workers to write two questions given two different passages randomly sampled from the validation set of RACE \cite{lai-etal-2017-race}.
This dataset consists of self-contained passages written for middle- and high-school exams in various subjects, which we expect the workers to be able to write questions for easily.
Following \citet{nangia-etal-2021-ingredients}, we then review the workers' submissions and grade them using a rubric with four criteria: the question (1) is answerable without ambiguity (\textit{yes} or \textit{no}); (2) requires reading the whole passage (five-point scale); (3) is creative and non-obvious (five-point scale); and (4) has distractor answers that could look correct to someone who has not read the passage carefully (\textit{more than one}, \textit{one}, or \textit{no}).
We rank workers using this rubric and allow approximately the top 50\% of workers to proceed to the main writing task.
We make sure that these workers write two unambiguous and answerable questions.
\subsection{Writing Task}
\label{sec:writing}
In the main writing task, a worker is shown a single passage and asked to write a question about it along with four answer options.
We provide instructions where we describe that questions have to be challenging but still answerable and unambiguous for humans, and we include good and bad examples to illustrate what kinds of questions we aim to collect.
For example, good examples require reading the whole passage and ask about characters' motivations or consequences of described events, while bad examples only ask about a simple fact or are answerable without reading the passage (Appendix~\ref{app:instructions}).
Each worker who passes the qualification round is randomly assigned to either standard or adversarial data collection.
In the standard collection, we accept workers' submissions without any filtering.
In the adversarial collection, a written question is sent to a reading comprehension model immediately.
If the model cannot answer that question correctly, we accept it.
We allow workers to submit questions (i.e., get paid) after three attempts even if they keep failing to fool the model.
We use UnifiedQA 3B v2 \cite{khashabi-etal-2020-unifiedqa} for the adversary model, which is trained on a wide variety of question answering datasets such as MCTest, RACE, NarrativeQA \cite{kocisky-etal-2018-narrativeqa}, and SQuAD.
While the source of training data that we use in our models will inevitably influence our findings, focusing on a model with very diverse pretraining and fine-tuning will minimize this effect.
\paragraph{Passage Sources}
We use passages from the following seven sources: (1) MCTest children's narratives, (2) Project Gutenberg narratives, (3) Slate online magazine articles from the 1990s sourced from the Open American National Corpus \cite{ide-suderman-2006-integrating}, (4) middle- and high-school exams from RACE, (5) graduate-level exams from ReClor, and (6) science and (7) arts articles from Wikipedia.
We use the passages from the training sets of MCTest, RACE, and ReClor.
For Gutenberg, Slate, and Wikipedia, we split available books and articles into passages.
Details are in Appendix~\ref{app:passage-source}.
In the writing task, a passage is randomly taken from a passage pool in which there are the same number of passages extracted from each source.
\subsection{Validation Task}
\label{sec:validation}
We collect the votes of five workers for each of the collected questions.
Those workers who passed the question answering task of the qualification round can accept the validation tasks.
To incentivize workers, we use preexisting gold-labeled examples \citep[from][]{nangia-etal-2021-ingredients} as catch trials, representing about 10\% of the tasks, and pay a bonus of \$0.50 USD if a worker can answer those questions correctly at least 80\% of the time.
If a worker fails to answer them at least 60\% of the time, we disqualify the worker from future rounds of data collection.
\paragraph{Worker Pay and Logistics}
For the writing tasks, the base pay is \$2.00 per question, which we estimate to be approximately \$15.00 per hour based on measurements from our pilot runs.
If a worker succeeds in fooling the model in adversarial data collection, they receive an additional bonus of \$1.00.
For validation, a single task consisting of five questions pays \$2.00, which we estimate to be approximately \$15.00 per hour as well.
\section{Crowdsourcing Results}
\subsection{Dataset Construction}
We collect a total of 4,340 questions, with 620 in each of the seven sources, further divided into 310 each for the standard and adversarial methods. Each passage is paired with only one question.
We randomly sample two out of five validation votes to validate the collected examples and use the remaining three votes for measuring human performance.
In the validation, we regard a question as valid if at least one of the two votes is the same as the writer's gold answer.
If both votes are the same as the gold answer, the question is regarded as a high-agreement example.
We find that 90.3\% of the collected questions are valid (92.0\% for standard collection and 88.7\% for adversarial collection).
In addition, 65.7\% of the collected questions are classified as high-agreement (68.7\% and 62.7\% for standard and adversarial collection, respectively).
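The validation rule described above reduces to a simple check on the two sampled votes; a minimal sketch, assuming gold labels and votes are encoded as option letters (the encoding is ours):

```python
def validate(gold, votes):
    """Classify a question from the writer's gold answer and two sampled
    validation votes.

    Returns (is_valid, is_high_agreement): valid if at least one vote
    matches the gold answer, high-agreement if both votes do.
    """
    matches = sum(v == gold for v in votes)
    return matches >= 1, matches == 2

print(validate("B", ["B", "C"]))  # valid, but not high-agreement
print(validate("B", ["B", "B"]))  # valid and high-agreement
```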
We present the dataset and worker statistics in Appendices~\ref{app:dataset} and \ref{app:worker}.
\begin{table*}[t]
\centering
\small
\def\arraystretch{1.1}
\begin{tabular}{l@{\hspace{0.8\tabcolsep}}l@{\hspace{0.7\tabcolsep}}rrrrr@{\hspace{1.0\tabcolsep}}rrrrr}
\toprule
\input{main_result}
\bottomrule
\end{tabular}
\caption{
Accuracy of humans and models and the difference ($\Delta$) between human accuracy and the average zero-shot performance of eight different models (\textit{M-avg}.) for all valid questions and the high-agreement portion of them.
The highest and lowest gaps are highlighted in bold and underlined.
The questions are crowdsourced with (\textit{Adv.}) and without (\textit{Dir.}) adversarial feedback.
\textit{UniQA} is the zero-shot performance by the UnifiedQA 3B model used in the adversarial data collection.
\textit{DeBERTa} is the performance by the xlarge model fine-tuned on RACE.
}
\label{tbl:main}
\end{table*}
\subsection{Human Performance}
Table~\ref{tbl:main} displays human and model performance.
We use the questions that are validated using two out of five human votes in the validation step above and take the majority vote of the remaining three votes to measure human performance on them.
We observe 3.3\% and 2.0\% gaps between the standard and adversarial collection in the valid and high-agreement questions, respectively.
\subsection{Machine Performance}
To establish the model performance that is not biased towards a single model, we compute the average accuracy (\textit{M-avg.}) of eight different models from the following two classes:
RoBERTa large \citep[four models with different random seeds;][]{liu2019roberta} and DeBERTa large and xlarge \citep[v2;][]{he2020deberta} either fine-tuned on MNLI \cite{williams-etal-2018-broad} first or not.
The RoBERTa and DeBERTa models are all fine-tuned on RACE.
Among these models, DeBERTa xlarge (MNLI-fine-tuned) performs best on RACE, achieving 86.8\% accuracy.
Because UnifiedQA 3B (72.3\% on RACE) is used in the adversarial data collection, it shows lower accuracy on the adversarial questions (not included in the average).
The performance of these two models is shown for comparison in Table~\ref{tbl:main}.
Except where noted, we do not train the models on any collected questions.
\paragraph{Supervised Performance}
For each dataset, we evaluate the performance of DeBERTa large trained on the datasets other than the target dataset in a leave-one-out manner.
Our motivation is to see whether the accuracy values significantly improve by training (i.e., the human--model gaps decrease).
If there is a large gain, it would imply that the datasets have simple patterns among examples that the models can exploit.
The results show no significant gains in the adversarial datasets, but the standard datasets show some small gains (Appendix~\ref{app:supervised}).
\begin{figure*}[t!]
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/passage-length-plot.pdf}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/readability-plot.pdf}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/surprisal-both-plot.pdf}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/elapsed-time-plot-answering.pdf}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/elapsed-time-plot-writing.pdf}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/average-word-frequency-plot.pdf}
\end{minipage}
\caption{Passage length, Flesch--Kincaid grade level, syntactic and lexical surprisal, elapsed time for question answering and writing, and average word frequency of passages in the easy and hard examples.}
\label{fig:plots}
\end{figure*}
\paragraph{Partial-Input Performance}
As \citet{kaushik-lipton-2018-much} point out, reading comprehension datasets might have annotation artifacts that enable models to answer questions without passages or question sentences.
To investigate such artifacts in our collected examples, we evaluate the performance of two DeBERTa models (xlarge and large fine-tuned on MNLI), which are stronger than the others, with the ablation of questions (\textit{P+A}), passages (\textit{Q+A}), and both questions and passages (\textit{A only}).
We see large drops in the zero-shot performance of DeBERTa xlarge.
In addition, we do not observe a significant performance improvement in the supervised performance by DeBERTa large (MNLI-fine-tuned).
These results demonstrate that the collected questions and answer options do not have severe annotation artifacts for any passage source (Appendix~\ref{app:partial}).
\subsection{Human--Model Performance Gap}
Following \citet{nangia-etal-2021-ingredients}, we compute the human--model performance gap ($\Delta$) between the human and the average model accuracies to estimate the difficulty of questions for models.
We observe a small variation in the gap for different passage sources in the high-agreement questions $(\Delta=14.9\pm3.6)$.
We find the highest human performance for MCTest questions in the high-agreement portion and the lowest for Gutenberg, whereas the model's highest performance is for Slate and the lowest for MCTest.
Surprisingly, the questions sourced from MCTest, which consists of simple narrative passages, show the largest gap out of all sources for the high-agreement questions.
Although ReClor consists of passages for graduate-level exams, it produces smaller gaps than RACE, which consists of passages for middle- and high-school English exams.
Gutenberg passages are written for adults, but the examples written for those passages do not show larger gaps than those for MCTest passages.
We find a trend in the human performance: questions from easy-to-read sources (e.g., MCTest and RACE) show higher accuracy and those from difficult-to-read sources (e.g., Gutenberg and Slate) show lower accuracy, but this trend appears in neither the machine performance nor the human--machine performance gap.
These observations are inconsistent with our initial expectations in the introduction.
\section{Linguistic Analysis}
\label{sec:analysis}
We analyze how the linguistic aspects of the collected examples correlate with the human--model performance gap computed in the experiments.
To get a better estimate of human performance, we use the high-agreement examples \cite{nie-etal-2020-learn}.
For ease of comparison, we split these examples into two subsets: easy ($\Delta \leq 20\%$) and hard ($\Delta \geq 40\%$). These subsets have 1,970 and 547 examples, respectively.
Appendix~\ref{app:easy-hard-subsets} provides the frequency of easy and hard examples across the passage sources and collection methods.
\subsection{Readability Measures}
We compute the correlation between the human--model performance gap and each readability measure across all valid examples (Pearson's $r$ and its $p$-value), and test whether the distributions of each measure differ between the easy and hard subsets ($p$-value in Welch's t-test).
Figure~\ref{fig:plots} shows the density distributions of the easy and hard subsets,
while Appendices~\ref{app:passage-length} to \ref{app:average-word-frequency} provide the plots of all valid examples.
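As an illustrative sketch of this procedure (using synthetic stand-in data rather than the collected examples; all variable names and thresholds here are ours), Pearson's $r$ and Welch's t-test can be computed with SciPy:

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the per-example measurements (illustrative only):
# `gap` is the human--model performance gap (%), `measure` a readability score.
rng = np.random.default_rng(0)
gap = rng.normal(30.0, 15.0, size=500)
measure = rng.normal(10.0, 3.0, size=500)

r, p_r = stats.pearsonr(measure, gap)           # correlation over all examples
easy = measure[gap <= 20]                       # easy subset: small gap
hard = measure[gap >= 40]                       # hard subset: large gap
t, p_t = stats.ttest_ind(easy, hard, equal_var=False)   # Welch's t-test
print(f"r={r:.3f} (p={p_r:.2f}), Welch p={p_t:.2f}")
```

The same pair of statistics is reported for each measure in the paragraphs that follow.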
\paragraph{Passage Length}
We use the number of words (except for punctuation) as the passage length (top left in Figure~\ref{fig:plots}).
Across all examples, we observe $r=0.01$ ($p=0.47$) (the full plot is in Appendix~\ref{app:passage-length}).
The t-test shows $p=0.51$.
We observe no relationship between the passage length and question difficulty.
We also analyze question and option length in Appendix~\ref{app:question-length}.
\paragraph{Flesch--Kincaid Grade Level}
We use the Flesch--Kincaid grade level \cite{kincaid1975derivation} as a basic metric of text readability (top center in Figure~\ref{fig:plots}).
This metric defines readability based on an approximate US grade level with no upper bound (higher is more difficult to read). It is computed for a passage using the average number of words that appear in a sentence and the average number of syllables in a word (Appendix~\ref{app:readability}).
The correlation between the grade and human--model performance gap is $r = -0.08$ ($p<0.001$) and the t-test shows $p<0.001$.
This result demonstrates that passage readability has a small negative effect on the question difficulty, perhaps pointing to an interfering effect whereby our pre-qualified \textit{human} annotators are more likely to make mistakes on more complex passages.
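For reference, the grade-level formula itself is straightforward to compute; the sketch below uses a crude vowel-group heuristic as a stand-in for a proper syllable counter:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word: str) -> int:
        # Crude stand-in: count vowel groups, at least one syllable per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It purred."), 2))
```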
\paragraph{Syntactic and Lexical Surprisal}
The Flesch--Kincaid grade level only considers sentence length and the number of syllables.
To better estimate the passage difficulty in terms of the psycholinguistic modeling of human text processing, we use syntactic and lexical surprisal measures \cite{roark-etal-2009-deriving}.
These measures are computed using incremental parsing and proved to be useful for predicting human reading time.
We observe $r=0.000$ ($p=0.99$) for syntactic surprisal and $r=-0.007$ ($p=0.66$) for lexical surprisal across all examples.
We do not observe any statistically significant difference between the easy and hard subsets (syntactic $p=0.52$ and lexical $p=0.57$ in the t-test; see top right in Figure~\ref{fig:plots}).
Appendix~\ref{app:surprisal} describes details of the calculation.
\paragraph{Annotation Speed}
Inspired by the psycholinguistic study of text complexity \cite{gibson1998linguistic,lapata-2006-automatic}, we measure the average time crowdworkers spent answering questions in the validation tasks (see bottom left in Figure \ref{fig:plots}).
This measures the elapsed time of both reading a given passage and thinking about its question, which is used as an approximation of reading time (as a proxy of text readability).
The correlation coefficient ($r=-0.06$ with $p<0.001$) and t-test ($p=0.88$) show that there is only a small negative correlation with question difficulty.
We also measure the elapsed time for writing questions as a reference (bottom center in Figure~\ref{fig:plots} and Appendix~\ref{app:elapsed-time-writing}), observing that there is no strong correlation ($r=0.02$ with $p=0.27$).
\paragraph{Word Frequencies}
Following \citet{chen-meurers-2016-characterizing}, we analyze the effect of word frequencies on text readability.
Using word frequencies per one million words in SUBTLEXus \cite{brysbaert2009moving}, we calculate the average frequency of the words appearing in a passage as a measure of passage difficulty in terms of vocabulary (a lower average frequency implies greater difficulty).
We do not observe any statistically significant difference by the t-test $p=0.14$ (bottom right in Figure~\ref{fig:plots}) or Pearson's $r=0.02$ with $p=0.27$ (Appendix~\ref{app:average-word-frequency}).
We observe similar trends even when using the human performance as the difficulty measure (Appendix~\ref{app:human_difficulty}).
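A minimal sketch of this measure follows; the table mimics the SUBTLEXus format (frequency per million words), but the values and the fallback for out-of-vocabulary words are invented here:

```python
import re
import statistics

# Hypothetical SUBTLEXus-style table: word -> frequency per million words.
freq_per_million = {"the": 29000.0, "cat": 60.0, "sat": 14.0, "on": 6800.0,
                    "mat": 5.0, "quantum": 2.0, "entanglement": 0.1}

def avg_word_frequency(passage: str, fallback: float = 0.1) -> float:
    """Mean per-million frequency over a passage's words (lower = harder vocabulary)."""
    words = re.findall(r"[a-z']+", passage.lower())
    return statistics.mean(freq_per_million.get(w, fallback) for w in words)

print(avg_word_frequency("The cat sat on the mat") >
      avg_word_frequency("Quantum entanglement"))
```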
\subsection{Question Types}
We analyze how passage sources and collection methods affect question types in this section.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/qtype_pies_qdifficulty_easy_hard_only.pdf}
\caption{
Question words and their two subsequent words in the (a) easy and (b) hard examples.
}
\label{fig:qtype-pie-difficulty}
\end{figure}
\paragraph{Question Words}
We automatically extract the first \textit{wh}-words that appear in each valid question;
if no \textit{wh}-word is extracted, we count the question as polar.
Figure~\ref{fig:qtype-pie-difficulty} plots the question words and their two subsequent words (except articles) in the easy and hard questions. From this we observe that, more often than the easy questions, the hard questions are generic rather than specific to the given passage (e.g., \textit{which of the following is correct?}).
This probably results from the difference between the standard and adversarial data collection.
The workers in the adversarial collection tend to write generic questions, while those in the standard collection write questions that are more balanced (e.g., there are more easy \textit{why} and \textit{how} questions).
We also notice that the hard subset has more \textit{how many} questions.
This is likely due to the fact that it is easy for annotators to learn that numeric questions often fool the adversary model.
These observations imply that adversarial data collection tends to concentrate the distribution of questions towards a few specific question types (e.g., generic and numeric).
This is consistent with the observations in \citet{kaushik-etal-2021-efficacy}.
See Appendix~\ref{app:question-type} for details.
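The extraction rule described above (first \textit{wh}-word, else polar) can be sketched as follows; the word list and tokenization are our assumptions:

```python
import re

WH_WORDS = ("who", "what", "when", "where", "why", "which", "whose", "whom", "how")

def question_type(question: str) -> str:
    """Return the first wh-word appearing in the question, or 'polar' if none does."""
    for token in re.findall(r"[a-z']+", question.lower()):
        if token in WH_WORDS:
            return token
    return "polar"

print(question_type("According to the passage, why did she leave?"))
print(question_type("Did the author agree with the critics?"))
```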
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/reasoning_to_freq_with_method.pdf}
\caption{
Frequency of comprehension types in the easy and hard examples for each collection method.
}
\label{fig:difficulty-to-reasoning}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/reasoning_stacked.pdf}
\caption{
Frequency of comprehension types across passage sources and collection methods.
Because a question can have multiple labels, the sum of the frequencies may exceed 100\%.
}
\label{fig:reasoning-type-detail}
\end{figure}
\paragraph{Comprehension Types}
Following \citet{bartolo-etal-2020-beat} and \citet{williams2020anlizing}, we analyze what kind of comprehension is required to answer the collected questions.
We sample a total of 980 high-agreement questions, 70 from each passage source and collection method, and then manually annotate them with one or more labels of seven comprehension types.
The definitions of these types, examples, and detailed results are presented in Appendix~\ref{app:question-type}.
Figure~\ref{fig:difficulty-to-reasoning} shows the frequency of comprehension types for different question difficulties (676 easy, 172 hard) and the collection methods.
We find that 868 questions have one label, 110 have two labels, and two have three labels.
We can see that \textit{numeric}, \textit{spatial/temporal}, and \textit{logical} questions appear more often in the hard subset in both collection methods.\footnote{In contrast, when we use the average human performance as the question difficulty measure, no comprehension type is significantly harder than the others (Appendix~\ref{app:human_difficulty}).}
Looking at the frequency across the passage sources in Figure~\ref{fig:reasoning-type-detail}, we find that there are some trends between the sources and comprehension types as follows:
\begin{itemize}
\item Technical documents, such as those used in graduate-school-level reading comprehension exams, tend to yield logical reasoning questions (e.g., ReClor and Slate).
\item Child-level texts tend to yield numerical reasoning questions in the standard setting (e.g., MCTest and RACE).
In the adversarial setting, passages containing many numerical values tend to yield such questions (e.g., MCTest and Wikipedia arts).
\item To collect gestalt questions or those considering the author's attitude in a given passage, passages covering subjective or argumentative topics (e.g., Gutenberg, Slate, and ReClor) are suitable.
In contrast, expository passages such as Wikipedia articles are not.
\item Narratives and related texts (e.g., MCTest, Gutenberg, and part of RACE) involve events with characters, which tend to yield spatial/temporal reasoning questions.
\end{itemize}
Although the definitions of our comprehension types are coarse and these trends do not ensure that specific kinds of passages always yield the target comprehension type, considering passage sources might be an effective strategy for collecting questions of an intended comprehension type.
Adversarial data collection for this purpose might not be useful because it may encourage workers to focus on writing only a few specific types of questions (e.g., numeric).
\section{Conclusion}
To make an NLU benchmark useful, it has to consist of examples that are linguistically diverse and difficult enough to discriminate among state-of-the-art models.
We crowdsource multiple-choice reading comprehension questions for passages extracted from seven different sources and analyze the effects of passage source on question difficulty and diversity.
Although we expect that the difficulty of a passage affects the difficulty of questions about that passage, the collected questions do not show any strong correlation between the human--machine performance gap and passage source, length, or readability measures.
Our manual annotation of comprehension types reveals that questions requiring numerical or logical reasoning are relatively difficult.
We also find several trends between passage sources and comprehension types.
These results suggest that when creating a new benchmark dataset, we need to select passage sources carefully, so that the resulting dataset contains questions that require an understanding of the linguistic phenomena that we are interested in.
This is especially important in the adversarial setting because it could concentrate the distribution of questions towards a few specific question types.
\section*{Ethics Statement}
We aim to accelerate scientific progress on robust general question answering, which could translate downstream to useful tools.
We are not looking at possible sources of social bias, although this issue should be highly relevant to those considering sources to use as training data for applied systems \cite{li-etal-2020-unqovering,parrish2021bbq}.
We are using Amazon Mechanical Turk despite its history of sometimes treating workers unfairly \cite{kummerfeld-2021-quantifying}, especially in recourse for unfair rejections.
We make sure that our own pay and rejection policies are comparable to in-person employment, but acknowledge that our study could encourage others to use Mechanical Turk, and that they might not be so careful.
This work passed review or is exempt from the oversight of the internal review boards of the authors' institutes.
\section*{Acknowledgments}
We thank Saranya Venkatraman and Ethan Perez for their feedback on early drafts of this paper.
For his early contributions to this project, we thank Harsh Trivedi.
SS was supported by JST PRESTO Grant No. JPMJPR20C4.
This project has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Samsung Research (under the project \textit{Improving Deep Learning using Latent Structure}), and Apple.
This material is based upon work supported by the National Science Foundation under Grant Nos. 1922658 and 2046556. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\bibliographystyle{acl_natbib}
\section{Introduction}
Receding horizon control (RHC), also known as model predictive control (MPC), has been popular in the process control industry for several years \cite{MPC_Overview,MPCSurvey}, and is recently gaining popularity in aerospace applications; see \cite{raktimf16}. It is based on the idea of repetitively solving an optimal control problem and updating the state with the first input of the optimal command sequence. The repetitive nature of the algorithm results in a state dependent feedback control law. The attractive aspect of this method is the ability to incorporate state and control limits as constraints in the optimization formulation. When the model is linear, the optimization problem is quadratic if the performance index is expressed via an $\mathcal{L}_2 $-norm, or linear if expressed via an $\mathcal{L}_1/\mathcal{L}_\infty$-norm. Issues regarding feasibility of online computation, stability and performance are largely understood for linear systems and can be found in refs. \cite{kwon94,rhcbook1}. For nonlinear systems, stability of RHC methods is guaranteed in \cite{Primbs-Thesis,Ali_ACC99} by using an appropriate control Lyapunov function. For a survey of the state-of-the-art in nonlinear receding horizon control problems the reader is directed to \cite{NLRHC_survey}.
Traditional RHC laws perform best when the modeling error is small. \cite{myrhc} has shown that system uncertainty can lead to significant oscillatory behavior and possibly instability. Furthermore, \cite{rhcnonrobust} showed that in the presence of modeling uncertainty the RHC strategy may not be robust. Many approaches have been taken to improve the robustness of RHC in the presence of unknown disturbances and bounded uncertainty; see the work of \cite{rakovic, rhcworstcase, rhcefficient, mayne2000cmp}. These approaches involve the computation of a feedback gain to ensure robustness. The difficulty with this approach is that, even for linear systems, the problem becomes difficult to solve, as the unknown feedback gain transforms the quadratic programming problem into a nonlinear programming problem.
In this paper we address the problem of RHC design for linear systems with probabilistic uncertainty in the system parameters. Parametric uncertainty arises in systems when the physics governing the system is known but the system parameters are either not known precisely or are expected to vary over the operational lifetime. Such uncertainty also occurs when system models are built from experimental data using system identification techniques. As a result of experimental measurements, the values of the parameters in the system model have a range of variation with quantifiable likelihood of occurrence. In either case, the range of variation of these parameters and the likelihood of their occurrence are assumed to be known, and it is desired to design controllers that achieve specified performance for these variations.
While the area of robust RHC is not new, approaching the problem from a stochastic standpoint has only recently received attention, for example \cite{hessem1,Batina}. These approaches, however, suffered from computational complexity or a high degree of conservativeness, or did not address closed-loop stability. The key difficulty in stochastic RHC is the propagation of uncertainty over the prediction horizon. More recently, \cite{Cannon2009167} avoided this difficulty by using an autonomous augmented formulation of the prediction dynamics. Constraint satisfaction and stability are achieved in \cite{Cannon2009167} by extending ellipsoid invariance theory to invariance with a given probability. The cost function minimized was the expected value of a quadratic function of the random state and control trajectories. Additionally, the uncertainty in the system parameters was assumed to have a normal distribution.
This paper presents the formulation of robust RHC design problems in the polynomial chaos framework, where parametric uncertainty can be governed by any probability density function. In this approach the solution, not the dynamics, of the random process is approximated using a series expansion. It is assumed that the random process to be controlled has a finite second moment, which is the underlying assumption of the polynomial chaos framework. The polynomial chaos based approach predicts the propagation of uncertainty more accurately, is computationally cheaper than methods based on Monte-Carlo sampling or series approximation of the dynamics, and is less conservative than the invariance based methods.
The paper is organized as follows. We first present a brief introduction to polynomial chaos and its application in transforming linear stochastic dynamics into linear deterministic dynamics in a higher dimensional state-space. Next, stability of stochastic linear dynamics in the polynomial chaos framework is presented. This is followed by the formulation of RHC design for discrete-time stochastic linear systems. Stability of the proposed RHC algorithm is then analyzed. The paper concludes with numerical examples that assess the performance of the proposed method.
\section{Background on Polynomial Chaos}
Recently, the use of polynomial chaos to study stochastic differential equations has been gaining popularity. It is a non-sampling based method to determine the evolution of uncertainty in a dynamical system when there is probabilistic uncertainty in the system parameters. Polynomial chaos was first introduced by \cite{wienerPC}
where Hermite polynomials were used to model stochastic processes
with Gaussian random variables. It can be thought of as an extension of Volterra's theory of nonlinear functionals \cite{volterra} for stochastic systems \cite{pcFEM}. According to \cite{CameronMartin} such an expansion converges in the
$\mathcal{L}_2$ sense for any arbitrary stochastic process with
finite second moment. This applies to most physical systems. \cite{Xiu} generalized the result of Cameron-Martin to various
continuous and discrete distributions using orthogonal polynomials
from the so called Askey-scheme \cite{Askey-Polynomials} and
demonstrated $\mathcal{L}_2$ convergence in the corresponding Hilbert
functional space. This is popularly known as the generalized
polynomial chaos (gPC) framework. The gPC framework has been applied to applications including stochastic fluid dynamics \cite{pcFluids2}, stochastic finite elements \cite{pcFEM}, and solid mechanics
\cite{pcSolids1}. It has been shown in \cite{Xiu} that gPC based methods are computationally far superior to Monte-Carlo based methods. However, application of gPC to control related problems has been surprisingly limited and is only recently gaining popularity. See \cite{vinh-JGCD, fisher2008sld, pctrajgen} for control related applications of gPC theory.
\subsection{Wiener-Askey Polynomial Chaos}
Let $(\Omega,\mathcal{F},P)$ be a probability space, where $\Omega$
is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of the
subsets of $\Omega$, and $P$ is the probability measure. Let
$\Delta(\omega) =
(\Delta_1(\omega),\cdots,\Delta_d(\omega)):(\Omega,\mathcal{F})\rightarrow(\mathbb R^d,\mathcal{B}^d)$
be an $\mathbb R^d$-valued continuous random variable, where
$d\in\mathbb N$, and $\mathcal{B}^d$ is the $\sigma$-algebra of Borel
subsets of $\mathbb R^d$. A general second order process $X(\omega)\in
\mathcal{L}_2(\Omega,\mathcal{F},P)$ can be expressed by polynomial
chaos as
\begin{equation}
\label{eqn.gPC}
X(\omega) = \sum_{i=0}^{\infty} x_i\phi_i({\Delta}(\omega)),
\end{equation}
where $\omega$ is the random event and $\phi_i({\Delta}(\omega))$
denotes the gPC basis of degree $p$ in terms of the random variables
$\Delta(\omega)$. The functions $\{\phi_i\}$ are a family of
orthogonal basis in $\mathcal{L}_2(\Omega,\mathcal{F},P)$ satisfying
the relation
\begin{equation}
\langle \phi_i \phi_j \rangle := \int_{\mathcal{D}_{\Delta(\omega)}}{\phi_i\phi_jw(\Delta(\omega))
\,d\Delta(\omega)}=h_i^2\delta_{ij}
\end{equation}
where $\delta_{ij}$ is the Kronecker delta, $h_i$ is a constant
term corresponding to $\int_{\mathcal{D}_{\Delta}}{\phi_i^2w(\Delta)\,d\Delta}$,
$\mathcal{D}_{\Delta}$ is the domain of the random variable $\Delta(\omega)$, and
$w(\Delta)$ is a weighting function. Henceforth, we will use $\Delta$ to
represent $\Delta(\omega)$.
For random variables $\Delta$ with certain distributions, the family
of orthogonal basis functions $\{\phi_i\}$ can be chosen in such a
way that its weight function has the same form as the probability
density function $f(\Delta)$. When these types of polynomials are chosen,
we have $f(\Delta)=w(\Delta)$ and
\begin{equation}
\int_{\mathcal{D}_{\Delta}}{\phi_i\phi_jf(\Delta)\,d\Delta}=
\Expected{\phi_i\phi_j} = \Expected{\phi_i^2}\delta_{ij},
\end{equation}
where $\Expected{\cdot}$
denotes the expectation with respect to the probability measure
$dP(\Delta(\omega))=f(\Delta(\omega))d\Delta(\omega)$ and probability density
function $f(\Delta(\omega))$.
The orthogonal polynomials that are chosen are the
members of the Askey-scheme of polynomials (\cite{Askey-Polynomials}),
which forms a complete basis in the Hilbert space determined by
their corresponding support. Table \ref{table.pc} summarizes the
correspondence between the choice of polynomials for a given distribution
of $\Delta$. See \cite{Xiu} for more details.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|}
\hline
Random Variable $\Delta$ & $\phi_i(\Delta)$ of the Wiener-Askey Scheme\\ \hline
Gaussian & Hermite \\
Uniform & Legendre \\
Gamma & Laguerre \\
Beta & Jacobi\\\hline
\end{tabular}
\caption{Correspondence between choice of polynomials and given
distribution of $\Delta(\omega)$ \cite{Xiu}.} \label{table.pc}
\end{table}
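The orthogonality relation $\langle \phi_i \phi_j \rangle = h_i^2\delta_{ij}$ for one of these pairings can be checked numerically. The sketch below uses Legendre polynomials with the uniform density $f(\Delta)=1/2$ on $[-1,1]$ and Gauss--Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

xs, ws = leggauss(20)                 # Gauss--Legendre nodes and weights

def phi(i, x):
    return Legendre.basis(i)(x)       # Legendre polynomial P_i

# Gram matrix G[i, j] = <phi_i phi_j> under the uniform density f = 1/2
G = np.array([[float(np.sum(0.5 * ws * phi(i, xs) * phi(j, xs)))
               for j in range(4)] for i in range(4)])
print(np.round(G, 6))                 # diagonal h_i^2 = 1/(2i+1), zeros elsewhere
```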
\subsection{Approximation of Stochastic Linear Dynamics Using Polynomial Chaos Expansions}
Here we derive a generalized representation of the deterministic dynamics obtained from the stochastic system by approximating the solution with polynomial chaos expansions.
Define a linear discrete-time stochastic system in the following manner
\begin{equation}
x(k+1,\Delta)=A(\Delta)x(k,\Delta) + B(\Delta)u(k,\Delta), \label{eqn:lti}
\end{equation}
where $x\in\mathbb R^n, u\in\mathbb R^m$. The system has probabilistic
uncertainty in the system parameters, characterized by $A(\Delta),
B(\Delta)$, which are matrix functions of random variable
$\Delta\equiv\Delta(\omega)\in\mathbb R^d$ with certain
\textit{stationary} distributions. Due to the stochastic nature of
$(A,B)$, the system trajectory $x(k,\Delta)$ will also be stochastic.
By applying the Wiener-Askey gPC expansion of finite order to $x(k,\Delta),
A(\Delta)$ and $B(\Delta)$, we get the following approximations,
\begin{eqnarray}
\hat{x}(k,\Delta) &=& \sum_{i=0}^{p}x_i(k)\phi_i(\Delta),\, x_i(k)\in\mathbb R^n \label{eqn:gPC.x}\\
\hat{u}(k,\Delta) &=& \sum_{i=0}^{p} u_{i}(k)\phi_i(\Delta),\, u_i(k)\in\mathbb R^m\label{eqn:gPC.u}\\
\hat{A}(\Delta) &=& \sum_{i=0}^{p}A_{i}\phi_i(\Delta),\, A_i = \frac{\langle A(\Delta),\phi_i(\Delta) \rangle}{\langle \phi_i(\Delta)^2 \rangle}\in\mathbb R^{n\times n}\\
\hat{B}(\Delta) &=& \sum_{i=0}^{p}B_{i}\phi_i(\Delta), \, B_i = \frac{\langle B(\Delta),\phi_i(\Delta) \rangle}{\langle \phi_i(\Delta)^2 \rangle}\in\mathbb R^{n\times m}.
\end{eqnarray}
The inner product or ensemble average $\langle\cdot,\cdot \rangle$, used in the above equations and in the rest of the paper, utilizes the weighting function associated with the assumed probability distribution, as listed in table~\ref{table.pc}.
The number of terms $p+1$ is determined by the dimension $d$ of $\Delta$ and the order $r$ of the orthogonal polynomials $\{\phi_k\}$, satisfying $p+1 = \frac{(d+r)!}{d!r!}$. The $n(p+1)$ time-varying coefficients $\{x_{i}(k)\},\ i=0,\cdots,p$, are obtained by substituting the
approximated solution into the governing equation (eqn.(\ref{eqn:lti})) and conducting a Galerkin projection onto the basis functions $\{\phi_k\}_{k=0}^p$, which yields the $n(p+1)$-dimensional \textit{deterministic} linear system
\begin{equation}
\label{eq:pcrhcdyn}
\v{X}(k+1) = \v{A}\v{X}(k) + \v{B}\v{U}(k),
\end{equation}
where
\begin{eqnarray}
\v{X}(k) &=& [x_0(k)^T\; x_1(k)^T \;\cdots x_p(k)^T]^T, \label{eqn:Xdef}\\
\v{U}(k) &=& [u_0(k)^T\; u_1(k)^T \;\cdots u_p(k)^T]^T. \label{eqn:Udef}
\end{eqnarray}
Matrices $\mathbf{A}\in\mathbb R^{n(p+1)\times n(p+1)}$ and $\mathbf{B}\in\mathbb R^{n(p+1)\times m(p+1)}$ are defined as
\begin{eqnarray}
\v{A} &=& (W\otimes I_n)^{-1}\left[
\begin{array}{c}
H_A(E_0\otimes I_n)\\ \vdots \\ H_A(E_p\otimes I_n)
\end{array}\right],\\
\v{B} &=& (W\otimes I_n)^{-1}\left[
\begin{array}{c}
H_B(E_0\otimes I_m)\\ \vdots \\ H_B(E_p\otimes I_m)
\end{array}\right],
\end{eqnarray}
where $H_A = \left[ A_0 \, \cdots \, A_p \right]$,
$H_B = \left[ B_0 \, \cdots \, B_p \right]$, $W = diag(\langle \phi_0^2 \rangle, \cdots, \langle \phi_p^2 \rangle )$, and
\[E_i = \left[\begin{array}{ccc} \langle \phi_i,\phi_0,\phi_0 \rangle & \cdots & \langle \phi_i,\phi_0,\phi_p \rangle \\ \vdots & & \vdots \\ \langle \phi_i,\phi_p,\phi_0 \rangle & \cdots & \langle \phi_i,\phi_p,\phi_p \rangle
\end{array}\right],
\]
with $I_n$ and $I_m$ as the identity matrix of dimension $n\times n$ and $m\times m$ respectively. It can be easily shown that $\Expected{x(k)} = x_0(k)$, or $\Expected{x(k)} = \left[I_n \; 0_{n\times np}\right]\v{X}(k).$
Therefore, transformation of a stochastic
linear system with $x\in\mathbb R^n,u\in\mathbb R^m$, with $p^{th}$ order
gPC expansion, results in a \textit{deterministic} linear system
with increased dimensionality equal to $n(p+1)$.
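As a minimal numerical sketch of this transformation, consider a scalar system ($n=1$) with a single uniform uncertainty and the illustrative model $x(k+1)=a(\Delta)x(k)$ with $a(\Delta)=1+0.1\Delta$ (our example, not from the text). The projected matrix $\v{A}$ can then be assembled from the triple products $\langle\phi_i\phi_j\phi_k\rangle$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

p = 2                                  # gPC order for one uniform uncertainty
xs, qw = leggauss(12)                  # quadrature on [-1, 1]
phi = [Legendre.basis(i) for i in range(p + 1)]

def inner(*fs):
    """<f1 f2 ...> under the uniform density f(D) = 1/2 on [-1, 1]."""
    v = 0.5 * qw
    for f in fs:
        v = v * f(xs)
    return float(v.sum())

W = np.diag([inner(phi[i], phi[i]) for i in range(p + 1)])
A_coef = [1.0, 0.1, 0.0]               # gPC coefficients A_i of a(D) = 1 + 0.1 D
E = [np.array([[inner(phi[i], phi[j], phi[k]) for k in range(p + 1)]
               for j in range(p + 1)]) for i in range(p + 1)]
# Galerkin projection: x_j(k+1) = (1/<phi_j^2>) sum_{i,l} A_i <phi_i phi_j phi_l> x_l(k)
Abold = np.linalg.inv(W) @ sum(a * e for a, e in zip(A_coef, E))
print(np.round(Abold, 4))              # deterministic dynamics of the gPC states
```

The first row of the printed matrix propagates $x_0(k)=\Expected{x(k)}$, consistent with the remark above.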
\section{Stochastic Receding Horizon Control}
Here we develop a RHC methodology for stochastic linear systems similar to that developed for deterministic systems, presented by \cite{goodwin2005cca}. Let $x(k,\Delta)$ be the solution of the system in \eqnref{eqn:lti} with control $u(k,\Delta)$. Consider the following optimal control problem defined by,
\begin{align}
\label{eq:rhc1}
&V_N^*= \min\,V_N(\{x(k+1,\Delta)\},\{u(k,\Delta)\}) \\ \nonumber
&{\rm subject\,to:}\\
&x(k+1,\Delta)=A(\Delta)x(k,\Delta)+B(\Delta)u(k,\Delta),\\
&\textrm{Initial Condition: }x(0,\Delta);\\
&\mu(u(k,\Delta))\in\mathbb{U}\subset \mathbb R^{m},\\
&\mu(x(k,\Delta))\in\mathbb{X}\subset \mathbb R^{n},\\
&\mu(x(N,\Delta))\in\mathbb{X}_f\subset \mathbb{X},
\end{align}
for $k=0,\cdots,N-1$; where $N$ is the horizon length, $\mathbb{U}$ and $\mathbb{X}$ are feasible sets for
$u(k,\Delta)$ and $x(k,\Delta)$ with respect to control and state constraints. $\mu(\cdot)$ represents moment-based constraints on the state and control. The set
$\mathbb{X}_f$ is a terminal constraint set. The cost function $V_N$ is given by
\begin{equation}
\begin{split}
& V_N = \sum_{k=1}^{N} \mathbf{E}\left[ x^T(k,\Delta)Qx(k,\Delta) + \right. \\
& \left. u^T(k-1,\Delta)Ru(k-1,\Delta) \right] + C_f(x(N),\Delta),
\end{split}
\label{eqn:V_N}
\end{equation}
where $C_f(x(N),\Delta)$ is a terminal cost function, and $Q=Q^T>0$, $R=R^T>0$ are matrices with appropriate dimensions.
\subsection{Control Structure}
Here we consider the control structure,
\begin{equation}
u(k,\Delta) = \bar{u}(k)+K(k)\left(x(k,\Delta)-\Expected{x(k,\Delta)}\right), \label{eqn:rhclaw2}
\end{equation}
where $\bar{u}(k)$, and $K(k)$ are unknown \textit{deterministic} quantities. This is similar to that proposed by Primbs \textit{et al.} \cite{primbs2009stochastic} and enables us to regulate the mean trajectory using open loop control and deviations about the mean using a state-feedback control.
In terms of gPC coefficients, the system dynamics in \eqnref{eqn:lti}, with the control expanded directly in the gPC basis, is given by \eqnref{eq:pcrhcdyn}. With the control structure in \eqnref{eqn:rhclaw2}, the system dynamics in terms of the gPC expansions is given by
\begin{equation}
\v{X}(k+1)=(\v{A}+\v{B} (\v{M} \otimes K(k)))\v{X}(k)+\v{B}\bar{U}(k), \label{eqn:gPCDyn1}
\end{equation}
where $\bar{U}(k)= [1\,\,0_{1\times p}]^T \otimes \bar{u}(k)$ and $\v{M} = \left[\begin{array}{cc} 0 & 0_{1\times p} \\ 0_{p\times 1} & I_{p\times p}
\end{array}\right]$.
\subsection{Cost Functions}
Here the cost function in \eqnref{eqn:V_N} is derived in terms of the gPC coefficients $\v{X}$ and $\v{U}$. For scalar $x$, the quantity $\Expected{x^2}$ in terms of its gPC expansion is given by
\begin{equation}
\Expected{x^2} = \sum_{i=0}^{p}\sum_{j=0}^{p} x_ix_j\int_{\mathcal{D}_\Delta}\phi_i \phi_j f d\Delta
= \mathbf{x}^TW\mathbf{x},
\end{equation}
where $\mathcal{D}_\Delta$ is the domain of $\Delta$, $x_i$ are the
gPC expansion coefficients of $x$, and $f\equiv f(\Delta)$ is the probability
density function of $\Delta$. Here we use the notation $\v{x}$ to represent the gPC state vector for scalar $x$. The expression
$\Expected{x^2}$ can be generalized for $x\in\mathbb R^n$ where
$\Expected{x^T Q x}$ is given by
\begin{equation}
\Expected{x^T Q x} = \mathbf{X}^T(W \otimes Q)\mathbf{X}.
\end{equation}
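The identity $\Expected{x^2}=\v{x}^TW\v{x}$ can be verified numerically in the scalar Legendre case; the gPC coefficients below are arbitrary choices made only for the check:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

x_gpc = np.array([1.0, 0.5, -0.2])              # illustrative gPC coefficients
W = np.diag([1.0, 1.0 / 3.0, 1.0 / 5.0])        # <phi_i^2> for Legendre, f = 1/2
analytic = float(x_gpc @ W @ x_gpc)             # E[x^2] = x^T W x

rng = np.random.default_rng(1)                  # Monte Carlo cross-check
D = rng.uniform(-1.0, 1.0, 200_000)             # D ~ U(-1, 1)
samples = sum(c * Legendre.basis(i)(D) for i, c in enumerate(x_gpc))
mc = float(np.mean(samples ** 2))
print(round(analytic, 4), round(mc, 3))
```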
The expression for the cost function in \eqnref{eqn:V_N}, in terms of gPC states and control is
\begin{equation}
\begin{split}
&V_N = \sum_{k=0}^{N-1} [\v{X}^T(k)\bar{Q}\v{X}(k) + \\
& (\bar{U}(k)+(\v{M} \otimes K(k))\v{X}(k))^T\bar{R} (\bar{U}(k) + \\
& (\v{M} \otimes K(k))\v{X}(k))] + C_f(x(N),\Delta),
\end{split}
\end{equation}
where $\bar{Q}=W\otimes Q$ and $\bar{R}=W\otimes R$.
In deterministic RHC, the terminal cost is the cost-to-go from the terminal state to the origin by the local controller \cite{goodwin2005cca}. In the stochastic setting, a local controller can be synthesized using methods presented in our previous work \cite{fisher2008sld}. The cost-to-go from a given stochastic state variable $x(N,\Delta)$ can then be written as
\begin{equation}
C_f(x(N),\Delta) = \v{X}^T(N)P\v{X}(N), \label{eqn:term-cost}
\end{equation}
where $\v{X}(N)$ are gPC states corresponding to $x(N,\Delta)$ and $P=P^T>0$ is a $n(p+1)\times n(p+1)$-dimensional matrix, obtained from the synthesis of the terminal control law \cite{fisher2008sld}. In the current stochastic RHC literature, the terminal cost function has been defined on the expected value of the final state \cite{lee1998ofc,delapenad2005spa,primbs2009stochastic,bertsekas2005dpa} or using a combination of mean and variance \cite{darlington2000decreasing,nagy2003robust}. The terminal cost function in \eqnref{eqn:term-cost} is more general than the terminal cost functions used in the literature because it penalizes all the moments of the random variable $x(N,\Delta)$, as they are functions of $\v{X}(N)$. This can be shown as follows.
To avoid tensor notation and without loss of generality, we consider $x(k,\Delta)\in \mathbb R$ and let $\v{X}(k) = [x_0(k), x_1(k), \cdots, x_{p}(k)]^T$ be the gPC expansion of $x(k,\Delta)$. The $q^{th}$ moment in terms of the coefficients $x_i(k)$ is then given by
\begin{equation}
\begin{split}
& m_q(k) = \sum_{i_1=0}^p \cdots \sum_{i_q=0}^p x_{i_1}(k) \cdots x_{i_q}(k) \int_{\mathcal{D}_\Delta} \phi_{i_1}(\Delta) \cdots \\
& \phi_{i_q}(\Delta)f(\Delta)\,d\Delta.
\end{split}
\label{eqn:mp}
\end{equation}
Thus, minimizing $C_f(x(N),\Delta)$ in \eqnref{eqn:term-cost} minimizes all moments of $x(N,\Delta)$ and consequently constrains the probability density function of $x(N,\Delta)$.
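The moment formula above can be checked numerically. The sketch below (our illustration; a Legendre basis with $\Delta\sim\mathcal U[-1,1]$ is assumed, and the third moment is taken for concreteness) evaluates the nested sum over basis products and compares it with direct quadrature of the moment.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import Legendre, leggauss

p, q = 3, 3                                # expansion order, moment order
coeffs = np.array([0.7, 0.4, -0.2, 0.1])   # assumed gPC coefficients
nodes, w = leggauss(4 * p)                 # exact for the degrees involved
phi = np.stack([Legendre.basis(i)(nodes) for i in range(p + 1)])

# Nested sum: m_q = sum over multi-indices of
#   x_{i1}...x_{iq} * Int phi_{i1}...phi_{iq} f dDelta,  with f = 1/2.
m_q = 0.0
for idx in product(range(p + 1), repeat=q):
    inner = 0.5 * np.sum(w * np.prod(phi[list(idx)], axis=0))
    m_q += np.prod(coeffs[list(idx)]) * inner

# Reference: direct quadrature of E[x^q].
x_vals = coeffs @ phi
m_q_quad = 0.5 * np.sum(w * x_vals**q)
```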
\subsection{State and Control Constraints}
In this section we present the state and control constraints for the receding
horizon policy.
\subsubsection{Expectation Based Constraints}
Here we first consider constraints of the following form,
\begin{eqnarray}
\Expected{x(k,\Delta)^T H_{x} x(k,\Delta) + G_x x(k,\Delta)} & \le & \alpha_{i,x},\label{eqn:xMeanConstr}\\
\Expected{u(k,\Delta)^T H_{u} u(k,\Delta) + G_u u(k,\Delta)} & \le & \alpha_{i,u}, \label{eqn:uMeanConstr}
\end{eqnarray}
for $k=0,\ldots,N$.
These constraints are imposed on the {\it expected value} of the quadratic functions. Thus, instead of requiring
that the constraints be met for all trajectories, they only require that the constraints be satisfied on average.
These constraints can be expressed in terms of the gPC states as
\begin{eqnarray}
\v{X}(k)^T \bar{H}_{x} \v{X}(k) + \bar{G}_x \v{X}(k) & \le & \alpha_{i,x},\label{eqn:xpcMeanConstr}\\
\v{U}(k)^T \bar{H}_{u} \v{U}(k) + \bar{G}_u \v{U}(k) & \le & \alpha_{i,u},\label{eqn:upcMeanConstr}
\end{eqnarray}
where $\bar{H}_{x}=W \otimes H_{x}$, $\bar{H}_{u}=W \otimes H_{u}$, $\bar{G}_{x} = G_{x}\left[I_n \; 0_{n\times np}\right]$, and $\bar{G}_{u} = G_{u}\left[I_n \; 0_{n\times np}\right]$.
\subsubsection{Variance Based Constraints}
In many practical applications, it may be desirable to constrain the
second moment of the state trajectories, either at each time step or at final time. One means of achieving this
is to use a constraint of the form
\begin{equation}
\Trace{\Expected{(x(k)-\Expected{x(k)})(x(k)-\Expected{x(k)})^T}}\le \alpha_{\sigma^2}.
\end{equation}
For scalar $x$, the variance $\sigma^2(x)$ in terms of the gPC
expansions can be shown to be
\[
\sigma^2 = \Expected{(x-\Expected{x})^2} = \Expected{x^2} - \Expected{x}^2 = \mathbf{x}^T W \mathbf{x}- \Expected{x}^2,
\]
where
\[
\begin{split}
& \Expected{x} = \Expected{\sum_{i=0}^p x_i\phi_i}= \sum_{i=0}^p x_i \Expected{\phi_i} = \sum_{i=0}^p x_i \int_{\mathcal{D}_\Delta}\phi_i f d\Delta \\
& = \mathbf{x}^T F,
\end{split}
\]
and $F = \left[\begin{array}{cccc}1 & 0 & \cdots & 0 \end{array}\right]^{T}$. Therefore, $\sigma^2$ for scalar $x$ can be written in compact form as
\begin{equation}
\sigma^2 = \mathbf{x}^T(W-FF^T)\mathbf{x}.
\end{equation}
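A minimal numerical check of the scalar identity $\sigma^2=\mathbf{x}^T(W-FF^T)\mathbf{x}$ (our illustration; a Legendre basis with $\Delta\sim\mathcal U[-1,1]$ and arbitrary coefficient values are assumed):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

p = 3
coeffs = np.array([0.8, 0.3, -0.1, 0.05])      # assumed gPC coefficients
W = np.diag(1.0 / (2 * np.arange(p + 1) + 1))  # E[P_i P_j]
F = np.zeros(p + 1); F[0] = 1.0                # E[P_i]: only P_0 has mean 1

var_gpc = coeffs @ (W - np.outer(F, F)) @ coeffs

# Reference value by quadrature of E[(x - E[x])^2], density f = 1/2.
nodes, w = leggauss(30)
phi = np.stack([Legendre.basis(i)(nodes) for i in range(p + 1)])
x_vals = coeffs @ phi
mean = 0.5 * np.sum(w * x_vals)
var_quad = 0.5 * np.sum(w * (x_vals - mean)**2)
```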
In order to represent the covariance for $x\in\mathbb R^{n}$ in terms of the gPC states, let us define
$\Phi = [\phi_{0} \; \cdots \; \phi_{p}]^{T}$ and write $G = \int_{\mathcal{D}_{\Delta}}\Phi\Phi^{T}f d\Delta$. Let us represent the sub-vector of $\v{X}$ defined by elements $n_{1}$ to $n_{2}$ as $\v{X}_{n_{1}\cdots n_{2}}$, where $n_{1}$ and $n_{2}$ are positive integers. Let us next define the matrix $M_{\v{X}}$, built from sub-vectors of $\v{X}$, as $M_{\v{X}} = [\v{X}_{1\cdots n}\; \v{X}_{n+1 \cdots 2n} \; \cdots \; \v{X}_{np+1 \cdots n(p+1)}]$. For $x\in\mathbb R^{n}$, it can be shown that
\begin{equation}
\Expected{x} = (F\otimes I_{n})\v{X}, \label{eqn:mean}
\end{equation}
and the covariance can then be shown to be
\begin{equation}
\textbf{Cov}(x) = M_{\v{X}}GM_{\v{X}}^{T} - (F \otimes I_{n})\v{X}\v{X}^{T}(F^{T}\otimes I_{n}).
\label{eqn:cov}
\end{equation}
The trace of the covariance matrix $\textbf{Cov}(x)$ can then be written as
\[
\Trace{\textbf{Cov}(x)} = \v{X}^{T}((W-FF^{T})\otimes I_{n}) \v{X}.
\]
Therefore, a constraint of the type
\[
\Trace{\textbf{Cov}(x(k))} \le \alpha_{\sigma^2}
\]
can be written in terms of the gPC states as
\begin{equation}
\v{X}^{T}Q_{\sigma^2} \v{X} \le \alpha_{\sigma^2},
\label{eqn:cov-constraint}
\end{equation}
where $Q_{\sigma^2}=(W-FF^T)\otimes I_{n}$.
\section{Stability of the RHC Policy}
Here we show the stability properties of the receding horizon policy
when it is applied to the system in \eqnref{eq:pcrhcdyn}. Using gPC theory we can convert the underlying stochastic RHC formulation in $x(t,\Delta)$ and $u(t,\Delta)$ into a deterministic RHC formulation in $\v{X}(k)$ and $\v{U}(k)$. The stability of $\v{X}(k)$ in an RHC setting, with a suitable terminal controller, can be proved using the results of \cite{goodwin2005cca}, which show that $\lim_{k\rightarrow \infty} \v{X}(k) = 0$ when a receding horizon policy is employed. To relate this result to the stability of $x(k,\Delta)$, we first present the following known result in stochastic stability in terms of the moments of $x(k,\Delta)$. For stochastic dynamical systems in general, stability of moments is a weaker notion of stability than \textit{almost sure stability}. However, the two notions are equivalent for linear autonomous systems (p.~296 of \cite{Khasminski}; see also p.~349 of \cite{Chen}). Here we present the definition of asymptotic stability in the $p^{th}$ moment for discrete-time systems.
\begin{definition}
The zero equilibrium state is said to be stable in the $p^{th}$ moment if $\forall \epsilon>0, \, \exists \delta > 0$ such that
\begin{equation}
\sup_{k\geq0} \Expected{x(k,\Delta)^p} \leq \epsilon, \; \forall x(0,\Delta):||x(0,\Delta)||\leq\delta, \forall \Delta \in \mathcal{D}_{\Delta}.
\label{eqn:stab1}
\end{equation}
\end{definition}
\begin{definition}
The zero equilibrium state is said to be asymptotically stable in the $p^{th}$ moment if it is stable in $p^{th}$ moment and
\begin{equation}
\lim_{k\rightarrow \infty} \Expected{x(k,\Delta)^p} = 0, \label{eqn:stab2}\end{equation}
for all $x(0,\Delta)$ in the neighbourhood of the zero equilibrium.
\end{definition}
\begin{proposition}
For the system in \eqnref{eqn:lti}, $\lim_{k\rightarrow \infty} \v{X}(k) = 0$ is a sufficient condition for the asymptotic stability of the zero equilibrium state, in all moments.
\end{proposition}
\begin{proof}
To avoid tensor notation and without loss of generality, we consider $x(k,\Delta)\in \mathbb R$ and let $\v{X}(k) = [x_0(k), x_1(k), \cdots, x_{p}(k)]^T$ be the gPC expansion of $x(k,\Delta)$. The moments in terms of $x_i(k)$ are given by eqn.~(\ref{eqn:mp}).
Therefore, $\lim_{k\rightarrow \infty} \v{X}(k) = 0$ implies $\lim_{k\rightarrow \infty} x_i(k) = 0$ for $i=0,1,\cdots,p$. Consequently, $\lim_{k\rightarrow \infty} m_i(k) = 0$ for $i=1,2,\cdots$, and \eqnref{eqn:stab2} is satisfied.
This completes the proof. $\square$
\end{proof}
\section{Numerical Example}
Here we consider the following linear system,
similar to that considered in \cite{primbs2009stochastic},
\begin{equation}
x(k+1) = (A+G(\Delta))x(k)+Bu(k)
\end{equation}
where
\[
A=\left[\begin{array}{cc}1.02&-0.1\\0.1&0.98\end{array}\right],\, B=\left[
\begin{array}{c}0.1\\ 0.05\end{array}\right],\,
G(\Delta) = \left[\begin{array}{cc}0.04&0\\ 0&0.04\end{array}\right]\Delta.
\]
The system in consideration is open-loop unstable and the uncertainty appears
linearly in the $G$ matrix. Here, $\Delta \in [-1,1]$ is governed
by a uniform distribution that does not change with time. Consequently, Legendre polynomials are used for the gPC approximation, and polynomials up to $4^{th}$ order are used to formulate the control. Additionally, we assume that there is no uncertainty in the initial condition. The expectation based constraint is imposed on $x(k,\Delta)$ as
\[
\Expected{\;[1 \;\; 0 ] x(k,\Delta)\;}\ge -1,
\]
which, in terms of the gPC states, corresponds to
\[
\left[\begin{array}{cc}1&\v{0}_{1\times 2p+1}\end{array}\right]\v{X}(k)\ge -1.
\]
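For completeness, the deterministic gPC dynamics underlying this example can be assembled as sketched below. This is our own construction via intrusive Galerkin projection, with a deterministic input for simplicity (the expansion of the control would enter analogously); the matrix names are ours.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Galerkin projection of x(k+1) = (A + Delta*G0) x(k) + B u(k),
# Delta ~ Uniform[-1, 1], Legendre basis of order p.
p = 4
A = np.array([[1.02, -0.1], [0.1, 0.98]])
B = np.array([[0.1], [0.05]])
G0 = 0.04 * np.eye(2)

nodes, w = leggauss(2 * p + 2)
phi = np.stack([Legendre.basis(i)(nodes) for i in range(p + 1)])
norm = 1.0 / (2 * np.arange(p + 1) + 1)              # E[P_i^2]
# T[i, j] = E[Delta P_i P_j] / E[P_i^2]  (Galerkin triple products)
T = 0.5 * (phi * nodes) @ (w[:, None] * phi.T) / norm[:, None]

A_bar = np.kron(np.eye(p + 1), A) + np.kron(T, G0)   # gPC state matrix
B_bar = np.kron(np.eye(p + 1)[:, [0]], B)            # deterministic input hits the mean block
```

$T$ is tridiagonal here because $\Delta P_i$ couples only neighbouring Legendre polynomials; the gPC state then propagates as $\v{X}(k+1)=\bar{A}\v{X}(k)+\bar{B}u(k)$.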
The terminal controller is designed using probabilistic LQR design techniques described by \cite{fisher2008sld}. The cost matrices used to determine the
terminal controller are
\[
Q = \left[\begin{array}{cc}2&0\\0&5\end{array}\right],\, R = 1.
\]
\Fig{fig:rhc} illustrates the performance of the proposed RHC policy. The resulting optimization problem is a nonlinear programming problem, which has been solved using MATLAB's \texttt{fmincon(...)} function. From the figure, we see that the constraint on the expected value of $x_1$ has been satisfied and that the RHC algorithm was able to stabilize the system. These plots have been obtained using a $4^{th}$ order gPC approximation of the stochastic dynamics.
\section{Summary}
In this paper we present an RHC strategy for linear discrete-time systems with probabilistic system parameters. We have used the polynomial chaos framework to design stochastic RHC algorithms in an equivalent deterministic setting. The controller structure has an open-loop component that controls the mean behavior of the system, and a state-feedback component that controls deviations about the mean trajectory. This controller structure results in a polynomial optimization problem with polynomial constraints, which is solved in the general nonlinear programming framework. Theoretical guarantees for the stability of the proposed algorithm have also been presented. The performance of the RHC algorithm has been assessed using a two-dimensional dynamical system.
\begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{expX_snopt.pdf}
\caption{State trajectories with expectation constraints.}\label{fig:rhc}
\end{figure}
\bibliographystyle{alpha}
\section{Introduction}
The time integration of the equations of motion of charged particles is a crucial step in particle methods of plasma physics~\cite{birdsall05ppv}. In the strong magnetic field regime, the charged particles exhibit very fast rotations of small radius around a guiding centre. This often brings stringent restrictions on the time stepsize for numerical integrators. There are many works aiming at designing large-stepsize integrators with good accuracy for charged-particle dynamics, such as~\cite{vu95ans,filbet2017asymptotically,chartier2020uniformly,ricketson20aec,xiao21smc,hairer2022large}. Among them, a Boris-type integrator with appropriate modifications shows striking numerical results~\cite{xiao21smc}, and a rigorous analysis is provided in~\cite{lubich2022large}. It is proved that the position and the parallel velocity are approximated with $O(h^2)$ accuracy by the modified Boris algorithm with large step sizes $h^2\sim \varepsilon$ for fixed $T=O(1)$, where $\varepsilon\ll 1$ is a small parameter whose inverse corresponds to the strength of the magnetic field.
In this paper, we are interested in analyzing the long-time behavior (over $O(\varepsilon^{-1})$ time intervals) of the modified Boris algorithm in a toroidal axi-symmetric geometry, with a magnetic field everywhere toroidal and an electric field everywhere orthogonal to the magnetic field. This geometry has already been proposed in~\cite{filbet2020asymptotics}, where a first-order description of the slow dynamics for the continuous case is derived. Here we will use the different technique of modulated Fourier expansions~\cite{hairer02gni}, which has recently been used for charged-particle dynamics in a strong magnetic field~\cite{hairer20lta,hairer20afb,hairer2022large,lubich2022large,wang21eeo}, to derive the guiding centre drifts of the exact solution in such a toroidal geometry. Since this technique extends equally well to numerical discretizations, the analysis of the modified Boris algorithm is also performed.
In Section~\ref{sec:setting} we formulate the equations of motion in a strongly non-uniform strong magnetic field, describe the concerned toroidal axi-symmetric geometry and the electromagnetic field, and introduce the modified Boris scheme.
In Section~\ref{sec:main} we state the main results of this paper: Theorem~\ref{thm:exact} states the slow drift motion over $O(\varepsilon^{-1})$ in toroidal geometry for the continuous system and Theorem~\ref{thm:num} states the long-time accuracy of the modified Boris algorithm. Section~\ref{sec:num} presents experiments that illustrate the theoretical results. In Section~\ref{sec:proof} we give the proofs for our main results.
\section{Setting}\label{sec:setting}
\subsection{Charged-particle dynamics in toroidal geometry}
\label{subsec:setting}
We consider the differential equation that describes the motion of a charged particle (of unit mass and charge) in a magnetic and electric field,
\begin{equation}\label{ode}
\ddot x = \dot x \times B(x) + E(x),
\end{equation}
where $x(t)\in\mathbb R^3$ is the position at time $t$, $v(t)=\dot x(t)$ is the velocity, $B(x)$ is the magnetic field and $E(x)$ is the electric field. $B$ and $E$ can be expressed via vector potential $A(x)\in{\mathbb R}^3$ and scalar potential $\phi(x)\in{\mathbb R}$ as $B(x) = \nabla \times A(x)$ and $E(x) = - \nabla \phi(x)$. Here we are interested in the situation of a strong magnetic field
\begin{equation}\label{B-eps}
B(x) = B_\varepsilon(x)= \frac 1\varepsilon \, B_1(x), \quad\ 0<\varepsilon\ll 1,
\end{equation}
where $B_1$ is smooth and independent of the small parameter $\varepsilon$, with $|B_1(x)|\ge 1$ for all $x$. The initial values $(x(0),\dot x(0))$ are bounded independently of $\varepsilon$: for some constants $M_0,M_1$,
\begin{equation} \label{xv-bounds}
|x(0)| \le M_0, \quad\ |\dot x(0)| \le M_1.
\end{equation}
In this paper, we consider the so called toroidal axi-symmetric geometry which is introduced in~\cite{filbet2020asymptotics}.
To be specific, fix a unitary vector ${\mathrm e}_z$, and for any vector $x\in\mathbb R^3$, it can be expressed as
\[
x=r(x)\,{\mathrm e}_r(x)+z(x)\,{\mathrm e}_z
\]
with $z(x)= {\mathrm e}_z^\top x$, $r(x)=|{\mathrm e}_z\times x|$, and ${\mathrm e}_r(x)=(x-z(x)\,{\mathrm e}_z)/r(x)$. It is assumed that far from the axis ${\mathrm e}_z$ the magnetic field is stationary, toroidal, axi-symmetric and non vanishing, that is, for some $r_0>0$, when $r(x)\geq r_0$
\begin{equation}\label{eq:B}
B_1(x)=b(r(x),z(x))\,{\mathrm e}_\parallel(x) \quad \text{with} \quad {\mathrm e}_\parallel(x)=\frac{{\mathrm e}_z\times x}{r(x)}
\end{equation}
for some function $b$. The electric field satisfies $E_\parallel(x)=0$ and $E$ is axi-symmetric when $r(x)\geq r_0$, that is,
\begin{equation}\label{eq:E}
E(x)=E_\perp(x)=E_r(r(x),z(x))\,{\mathrm e}_r(x)+E_z(r(x),z(x))\,{\mathrm e}_z.
\end{equation}
In our proofs, we assume that the functions $b$, $E_r$, $E_z$ and all their derivatives are bounded independently of $\varepsilon$.
\begin{figure}[h]
\centerline{
\includegraphics[scale=0.5]{torus.png}}
\caption{Toroidal geometry with ${\mathrm e}_r(x),{\mathrm e}_\parallel(x),{\mathrm e}_z$ the local frame and the magnetic field along ${\mathrm e}_\parallel(x)$.}
\end{figure}
It is noted that $({\mathrm e}_r(x), {\mathrm e}_\parallel(x), {\mathrm e}_z)$ forms an orthonormal basis and
\[
\begin{aligned}
{\mathrm e}_r(x) = \left(\frac{x_1}{r}, \frac{x_2}{r}, 0 \right)^\top, \quad {\mathrm e}_\parallel(x)=\left(-\frac{x_2}{r}, \frac{x_1}{r}, 0\right)^\top, \quad
{\mathrm e}_z=(0,0,1)^\top
\end{aligned}
\]
with $r=\sqrt{x_1^2+x_2^2}$. The following relations are useful in our proofs:
\begin{equation}\label{eq:relation}
\begin{aligned}
&{\mathrm e}'_r(x)=\frac{1}{r(x)}{\mathrm e}_\parallel(x) {\mathrm e}_\parallel(x)^\top, \quad {\mathrm e}'_\parallel(x)=-\frac{1}{r(x)} {\mathrm e}_r(x) {\mathrm e}_\parallel(x)^\top\\
&\nabla_x r(x)= {\mathrm e}_r(x), \quad B_1'(x)={\mathrm e}_\parallel (\nabla_x b)^\top-\frac{b(r(x),z(x))}{r(x)}{\mathrm e}_r(x) {\mathrm e}_\parallel(x)^\top,
\end{aligned}
\end{equation}
where $'$ denotes the Jacobian of the functions considered and $\nabla_x$ is the gradient.
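The frame relations in \eqref{eq:relation} can be verified directly. The sketch below (our illustration) checks $\nabla_x r={\mathrm e}_r$ and ${\mathrm e}'_r(x)=\frac{1}{r}{\mathrm e}_\parallel {\mathrm e}_\parallel^\top$ by central finite differences at a generic point.

```python
import numpy as np

def r(x): return np.hypot(x[0], x[1])
def e_r(x): return np.array([x[0], x[1], 0.0]) / r(x)
def e_par(x): return np.array([-x[1], x[0], 0.0]) / r(x)

x = np.array([0.6, -0.3, 0.4])
h = 1e-6
# gradient of r by central differences
grad_r = np.array([(r(x + h * e) - r(x - h * e)) / (2 * h) for e in np.eye(3)])
# Jacobian of e_r: column j is the derivative with respect to x_j
J = np.column_stack([(e_r(x + h * e) - e_r(x - h * e)) / (2 * h) for e in np.eye(3)])

ok_grad = np.allclose(grad_r, e_r(x), atol=1e-8)
ok_jac = np.allclose(J, np.outer(e_par(x), e_par(x)) / r(x), atol=1e-6)
```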
\subsection{Modified Boris method }
The modified Boris method proposed in \cite{xiao21smc} is used to solve the charged-particle dynamics under a strong magnetic field with large stepsizes.
The analysis of the accuracy order of this method for a general non-uniform strong magnetic field with stepsize $h^2\sim \varepsilon$ up to $T=O(1)$ was recently provided in~\cite{lubich2022large}.
This algorithm has the following two-step formulation
\begin{equation}\label{mboris}
\frac{x^{n+1}-2x^n+x^{n-1}}{h^2}=v^n \times B(x^n) + E(x^n)- \mu^0\, \nabla |B|(x^n)
\end{equation}
with the initial magnetic moment
\[\mu^0=\mu(x(0),\dot x(0))=\frac{1}{2}\frac{|\dot{x}(0)\times B(x(0))|^2}{|B(x(0))|^3}.
\] The velocity is computed as
\begin{equation} \label{vn}
v^n = \frac{x^{n+1}-x^{n-1}}{2h}.
\end{equation}
The modified Boris method starts from modified initial values
\begin{equation}\label{mod-init}
x^0 = x(0), \quad v^0 = P_\parallel(x^0) \,\dot x(0),
\end{equation}
where $P_\parallel(x^0)$ is the orthogonal projection onto the span of $B(x^0)$. This means the component of the initial velocity orthogonal to the magnetic field is filtered out, i.e., $v^0_\perp=P_\perp(x^0)v^0=0$ with $P_\perp(x^0)=I-P_\parallel(x^0)$.
We note that the modified Boris method is identical to the standard Boris integrator for the modified electric field $E_\mathrm{mod}(x) = E(x)- \mu^0 \,\nabla |B|(x) =-\nabla (\phi + \mu^0 |B|)(x)$ and can be implemented as the common one-step formulation of the Boris algorithm~\cite{boris70rps}.
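A compact sketch of the two-step formulation \eqref{mboris}--\eqref{mod-init} follows (our implementation for illustration, not the authors' code). Eliminating $x^{n+1}$ via \eqref{vn} makes the velocity update a small linear solve, which is the same rotation that the one-step Boris formulation resolves in closed form.

```python
import numpy as np

def skew(b):
    """Matrix of the cross product: skew(b) @ v == np.cross(b, v)."""
    return np.array([[0.0, -b[2], b[1]],
                     [b[2], 0.0, -b[0]],
                     [-b[1], b[0], 0.0]])

def modified_boris(x0, xdot0, B, E, gradB, h, nsteps):
    """Two-step modified Boris scheme with filtered initial velocity."""
    B0 = B(x0)
    mu0 = 0.5 * np.linalg.norm(np.cross(xdot0, B0))**2 / np.linalg.norm(B0)**3
    F = lambda x: E(x) - mu0 * gradB(x)      # modified electric field
    b0 = B0 / np.linalg.norm(B0)
    v0 = b0 * (b0 @ xdot0)                   # parallel projection of xdot(0)
    x_prev, x = x0, x0 + h * v0 + 0.5 * h**2 * (np.cross(v0, B0) + F(x0))
    traj = [x_prev, x]
    for _ in range(nsteps - 1):
        # v^n solves  v - (h/2) v x B = (x^n - x^{n-1})/h + (h/2) F(x^n)
        rhs = (x - x_prev) / h + 0.5 * h * F(x)
        v = np.linalg.solve(np.eye(3) + 0.5 * h * skew(B(x)), rhs)
        x_prev, x = x, x_prev + 2 * h * v    # x^{n+1} = x^{n-1} + 2h v^n
        traj.append(x)
    return np.array(traj)

# Smoke test: constant strong field along e_z, no electric field.
Bc = lambda x: np.array([0.0, 0.0, 1.0e3])
Ec = lambda x: np.zeros(3)
Gc = lambda x: np.zeros(3)
traj = modified_boris(np.array([1.0, 0.0, 0.0]),
                      np.array([0.3, 0.2, 0.5]), Bc, Ec, Gc, h=0.01, nsteps=50)
```

With the perpendicular initial velocity filtered out and a constant field, the scheme reproduces the uniform parallel drift exactly.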
\section{Main results}\label{sec:main}
Introducing $\tilde{r}(t), \tilde{z}(t), \tilde{v}(t)$ such that they are the solutions of the following initial-value problem for the slow differential equations
\begin{equation}\label{eq:limit}
\begin{aligned}
\frac{{\mathrm d} \tilde r}{{\mathrm d} t}&=-\varepsilon\frac{E_z(\tilde{r},\tilde{z})}{b(\tilde{r},\tilde{z})}+\varepsilon\frac{\mu^0}{b(\tilde{r},\tilde{z})}\partial_z b(\tilde{r},\tilde{z}), \quad \tilde{r}(0)=r(x(0))\\
\frac{{\mathrm d} \tilde z}{{\mathrm d} t}&=\varepsilon\frac{\tilde{v}^2}{\tilde{r} b(\tilde{r},\tilde{z})}+\varepsilon\frac{E_r(\tilde{r},\tilde{z})}{b(\tilde{r},\tilde{z})}-\varepsilon\frac{\mu^0}{b(\tilde{r},\tilde{z})}\partial_r b(\tilde{r}, \tilde{z}), \quad \tilde{z}(0)=z(x(0))\\
\frac{{\mathrm d} \tilde v}{{\mathrm d} t}&=\varepsilon\frac{\tilde{v}}{\tilde{r}}\left(\frac{E_z(\tilde{r}, \tilde{z})}{b(\tilde{r}, \tilde{z})}-\frac{\mu^0}{b(\tilde{r}, \tilde{z})}\partial_z b(\tilde{r}, \tilde{z})\right), \quad \tilde{v}(0)={\mathrm e}_\parallel(x(0))^\top \dot{x}(0),
\end{aligned}
\end{equation}
we then have the following results.
\begin{theorem}[Drift motion of the exact solution]\label{thm:exact}
Let $x(t)=r(x(t))\,{\mathrm e}_r(x(t))+z(x(t))\,{\mathrm e}_z$ be a solution of \eqref{ode}--\eqref{xv-bounds} with \eqref{eq:B} and \eqref{eq:E}, which stays in a compact set $K$ for $0\leq t\leq c\varepsilon^{-1}$ (with $K$ and $c$ independent of $\varepsilon$) and $v_\parallel(t)= {\mathrm e}_\parallel(x(t))^\top \dot{x}(t)$ be the parallel velocity. Denote $r(t)=r(x(t))$ and $z(t)=z(x(t))$, then we have
\[
|r(t)-\tilde{r}(t)|\leq C\varepsilon, \ |z(t)-\tilde{z}(t)|\leq C\varepsilon, \ |v_\parallel(t)-\tilde{v}(t)|\leq C\varepsilon, \quad 0\leq t \leq c/\varepsilon.
\]
The constant $C$ is independent of $\varepsilon$ and $t$ with $0\leq t\leq c/\varepsilon$, but depends on $c$ and on bounds of derivatives of $B_1$ and $E$ on the compact set $K$.
\end{theorem}
\begin{remark} A similar result is given by Proposition 5.2 in~\cite{filbet2020asymptotics}. Here we provide a different proof of the modulated Fourier expansions, which can be extended to the analysis of numerical methods and enables us to obtain the following result.
\end{remark}
For the numerical approximation, the nondegeneracy condition is needed as in~\cite{lubich2022large}:
\begin{align}\label{ass-nondeg}
&\text{For $(x,v)$ along the numerical trajectory, the linear maps} \nonumber
\\[1mm]
& \text{$L_{x,v}:P_\perp(x)\mathbb{R}^3 \to P_\perp(x)\mathbb{R}^3, \quad z \mapsto z + \tfrac14 h^2\, P_\perp(x)\bigl(v \times B'(x)z\bigr)$}
\\[1mm]
&\text{have an inverse that is bounded independently of $(x,v)$ and of
} \nonumber
\\[-1mm]
&\text{$h$ and $\varepsilon$ with $h^2/\varepsilon\le C_*$.} \nonumber
\end{align}
This determines an upper bound $C_*$ on the ratio $h^2/\varepsilon$.
\begin{theorem}[Drift approximation by the numerical solution]\label{thm:num}
Consider applying the modified Boris method to \eqref{ode}--\eqref{xv-bounds} with \eqref{eq:B} and \eqref{eq:E} and with modified initial values \eqref{mod-init} using a step size $h$ with $h^2\sim \varepsilon$, i.e.,
\[
c_* \varepsilon \le h^2 \le C_* \varepsilon
\]
for some positive constants $c_*$ and $C_*$. Assume the nondegeneracy condition \eqref{ass-nondeg} and that the numerical solution $x^n=r(x^n)\,{\mathrm e}_r(x^n)+z(x^n)\,{\mathrm e}_z$ stays in a compact set $K$ for $0\leq nh\leq c\varepsilon^{-1}$ (with $K$ and $c$ independent of $\varepsilon$ and $h$). Let $v_\parallel^n={\mathrm e}_\parallel(x^n)^\top v^n$ denote the parallel component of the numerical velocity $v^n$. Then
\[
|r(x^n)-\tilde{r}(t_n)|\leq Ch^2, \ |z(x^n)-\tilde{z}(t_n)|\leq Ch^2, \ |v_\parallel^n-\tilde{v}(t_n)|\leq Ch^2, \quad 0\leq t_n=nh\leq c/\varepsilon.
\]
The constant $C$ is independent of $\varepsilon$, $h$ and $n$ with $0\leq nh\leq c/\varepsilon$, but depends on $c$, on bounds of derivatives of $B_1$ and $E$ on the compact set $K$, and on $c_*$ and $C_*$.
\end{theorem}
\section{Numerical experiments}\label{sec:num}
\begin{figure}[htbp]
\centerline{
\includegraphics[scale=0.5]{orbit1.eps}}
\caption{Particle trajectories for $t\leq 1/\varepsilon$ with $\varepsilon=10^{-3}$ as computed by the modified Boris method with $h=0.04$ (left) and by the standard Boris method with $h=0.01$ (right). }\label{fig:orbit1}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[scale=0.5]{orbit2.eps}}
\caption{Particle trajectories for $t\leq 1/\varepsilon$ with $\varepsilon=10^{-3}$ projected onto the $r$-$z$ plane as computed by the modified Boris method with $h=0.04$ (left) and by the standard Boris method with $h=0.01$ (right). }\label{fig:orbit2}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[scale=0.5]{r1.png}}
\centerline{
\includegraphics[scale=0.5]{z1.png}}
\centerline{
\includegraphics[scale=0.5]{vv1.png}}
\caption{Absolute errors $|r(x^n)-r^{\text{ref}}|$, $|z(x^n)-z^{\text{ref}}|$ and $|v_\parallel^n-v_\parallel^{\text{ref}}|$ as functions of time, along the numerical solution of the modified Boris algorithm with $\varepsilon=10^{-3}$ and three different $h$.}\label{fig:order1}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[scale=0.5]{r2.png}}
\centerline{
\includegraphics[scale=0.5]{z2.png}}
\centerline{
\includegraphics[scale=0.5]{vv2.png}}
\caption{Absolute errors $|r(x^n)-r^{\text{ref}}|$, $|z(x^n)-z^{\text{ref}}|$ and $|v_\parallel^n-v_\parallel^{\text{ref}}|$ as functions of time, along the numerical solution of the modified Boris algorithm with $\varepsilon=10^{-4}$ and three different $h$.}\label{fig:order2}
\end{figure}
To illustrate the statement of the preceding section we consider the following electromagnetic fields
\[
\begin{aligned}
E(x)&=0.1 z(x) \, {\mathrm e}_r(x)+0.1 r(x)\, {\mathrm e}_z=0.1\left (\frac{x_1 x_3}{r}, \, \frac{x_2 x_3}{r},\, r\right)^\top, \\
B(x)&=\frac{r(x)+z^2(x)}{\varepsilon}{\mathrm e}_\parallel(x)=\frac{r+x^2_3}{\varepsilon}\left(-\frac{x_2}{r},\, \frac{x_1}{r}, \, 0\right)^\top
\end{aligned}
\]
with $r=\sqrt{x_1^2+x_2^2}$. The initial values are chosen as
\[
x(0)=(1/3,1/4,1/2)^\top, \quad \dot{x}(0)=(2/5, 2/3,1)^\top.
\]
Figure~\ref{fig:orbit1} shows the trajectories computed by the standard Boris method and the modified Boris method up to the final time $T=1/\varepsilon$. It is observed that the modified Boris method gives the correct trajectories even with the very large time step size $h=40\varepsilon$. For the standard Boris method, the drift motions are completely wrong with a large time step size. The projection of the computed particle trajectory onto the $(r,z)$ plane is given in Figure \ref{fig:orbit2}. Figure \ref{fig:order1} shows the absolute errors of $r$, $z$, and $v_\parallel$ along the numerical solution of the modified Boris algorithm with $\varepsilon=10^{-3}$ and $T=0.5/\varepsilon$, which are observed to be of size $O(h^2)$, in agreement with our theoretical results. Figure \ref{fig:order2} shows similar results for $\varepsilon=10^{-4}$. All reference solutions are obtained using the standard Boris method with the small time step size $h=0.05\varepsilon$.
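The reference drift motion predicted by Theorem~\ref{thm:exact} can also be obtained by integrating the slow system \eqref{eq:limit} directly. The sketch below (our illustration, using SciPy with the fields of this section) does so; note that along \eqref{eq:limit} the product $\tilde r\tilde v$ is conserved, which serves as a consistency check.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3
x0 = np.array([1/3, 1/4, 1/2])
xdot0 = np.array([2/5, 2/3, 1.0])

r0 = np.hypot(x0[0], x0[1]); z0 = x0[2]
e_par = np.array([-x0[1], x0[0], 0.0]) / r0
b = lambda r, z: r + z**2                   # |B_1| for this example
B0 = b(r0, z0) / eps * e_par
mu0 = 0.5 * np.linalg.norm(np.cross(xdot0, B0))**2 / np.linalg.norm(B0)**3
v0 = e_par @ xdot0

def drift(t, y):
    # slow system (eq:limit): E_r = 0.1 z, E_z = 0.1 r, d_r b = 1, d_z b = 2z
    r, z, v = y
    br, bz = 1.0, 2 * z
    Er, Ez = 0.1 * z, 0.1 * r
    dr = eps * (-Ez + mu0 * bz) / b(r, z)
    dz = eps * (v**2 / (r * b(r, z)) + Er / b(r, z) - mu0 * br / b(r, z))
    dv = eps * (v / r) * (Ez - mu0 * bz) / b(r, z)
    return [dr, dz, dv]

sol = solve_ivp(drift, [0.0, 1.0 / eps], [r0, z0, v0], rtol=1e-10, atol=1e-12)
```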
\section{Proof of main results}\label{sec:proof}
The theorems will be proved mainly based on the modulated Fourier expansions for the exact and numerical solutions given in~\cite{lubich2022large}. In this section, we will write the guiding centre equations in the toroidal geometry and express all the $O(\varepsilon)$ terms explicitly.
Following~\cite{hairer20lta}, we diagonalize the linear map $v\mapsto v\times B(x)$ and denote the eigenvalues as $\lambda_1=\mathrm{i}|B(x)|$, $\lambda_0=0$, and $\lambda_{-1}=-\mathrm{i}|B(x)|$. The corresponding normalised eigenvectors are denoted by $\nu_1(x), \, \nu_0(x), \, \nu_{-1}(x)$ and the orthogonal projections onto the eigenspaces are denoted by $P_j(x)=\nu_j(x)\nu_j(x)^*$. It is noted that $P_\parallel(x)=P_0(x)$ and $P_\perp(x)=I-P_\parallel(x)=P_1(x)+P_{-1}(x)$.
\subsection{Proof of Theorem~\ref{thm:exact}}
The proof is structured into three parts (a)-(c).
(a) The equation of guiding centre motion in Cartesian coordinates.
According to Theorem 4.1 of \cite{hairer20lta}, it is known that the solution of \eqref{ode}--\eqref{B-eps} can be written as
\[
x(t)=\sum_{|k|\leq N-1}y^k(t)\,{\mathrm e}^{{\mathrm{i}}k\varphi(t)/\varepsilon} + R_N(t), \qquad 0\le t \le c\varepsilon,
\]
where the phase function satisfies $\dot{\varphi}(t)=|B_1(y^0(t))|$. The coefficient functions $y^k(t)$ together with their derivatives (up to order $N$) are bounded as
\[
y^k=O(\varepsilon^{|k|}) \quad \qquad\hbox{ for all }\ |k|\leq N-1
\]
and further satisfy
\[
\dot y^0 \times B_1(y^0)=O(\varepsilon), \quad y^k_j =O(\varepsilon^2) \quad \text{for} \quad |k|=1, j\neq k,
\]
where $y^k(t)=y^k_1(t)+y^k_0(t)+y^k_{-1}(t)$ with $y^k_j=P_j(y^0)y^k$.
The remainder term and its derivative are bounded by
\[
R_N(t)=O(\varepsilon^N),\quad \dot{R}_N(t)=O(\varepsilon^{N-1}).
\]
Similar to Theorem 4.1 of \cite{lubich2022large}, we can divide the interval $[0, c/\varepsilon]$ into small intervals of length $O(\varepsilon)$ and on each subinterval we consider the above modulated Fourier expansion, which means $x(t)$ can be written as the modulated Fourier expansion for longer time intervals
\begin{equation}\label{eq:mfe}
x(t)=\sum_{|k|\leq N-1}y^k(t)\,{\mathrm e}^{{\mathrm{i}}k\varphi(t)/\varepsilon} + R_N(t), \qquad 0\le t \le \frac{c}{\varepsilon},
\end{equation}
where $y^k(t)$ are piecewise continuous with jumps of size $O(\varepsilon^{N})$ at integral multiples of $\varepsilon$ and are smooth elsewhere. The sizes of the coefficients and remainder term are the same as above.
Inserting \eqref{eq:mfe} into the continuous system and comparing the coefficients of ${\mathrm e}^{\mathrm{i} k\varphi(t)/\varepsilon}$ yield the differential equations for $y^k(t)$. For $k=0$ and $k=\pm 1$, we have
\begin{equation}\label{eq:eqs}
\begin{aligned}
\ddot{y}^0=\dot{y}^0\times B(y^0)+E(y^0)+\underbrace{2\mathrm{Re}\left(\mathrm{i}|B|y^1\times B'(y^0)y^{-1}\right)}_{=:I}+\underbrace{2\mathrm{Re}\left(\dot{y}_1^1\times B'(y^0)y^{-1}_{-1}\right)}_{=:II}+O(\varepsilon^2),\\
\pm 2\mathrm{i}\frac{\dot{\varphi}}{\varepsilon}\dot{y}^{\pm1}+\left(\pm\mathrm{i}\frac{\ddot{\varphi}}{\varepsilon}-\frac{\dot{\varphi}^2}{\varepsilon^2}\right)y^{\pm1}=\left(\dot{y}^{\pm1}\pm \mathrm{i}\frac{\dot{\varphi}}{\varepsilon}y^{\pm1}\right)\times B(y^0)+\dot{y}^0\times B'(y^0)y^{\pm1}_{\pm1}+O(\varepsilon).
\end{aligned}
\end{equation}
From the first equation of \eqref{eq:eqs}, it is straightforward to get several slow drifts for $P_\perp\dot{y}^0$ (see Remark 4.3 of \cite{lubich2022large}) and the guiding centre motion of $y^0(t)$ satisfies
\begin{equation}\label{eq:y}
\dot{y}^0=P_{\parallel}\dot{y}^0+\frac{1}{|B|}P_{\parallel}\dot{y}^0\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}+\frac{1}{|B|^2}\left(E-\mu^0\,\nabla |B|\right)\times B+O(\varepsilon^2),
\end{equation}
with $B, \nabla |B|, P_\parallel={\mathrm e}_\parallel {\mathrm e}_\parallel^\top$ and $E$ evaluated at the guiding centre $y^0$. The initial value of $y^0$ is
\[
y^0(0)= x(0)+ \frac{\dot{x}(0)\times B(x(0))}{|B(x(0))|^2}+O(\varepsilon^2).
\]
(b) The equations in toroidal geometry.
In the toroidal geometry, $y^0(t)$ can be written as
\[
y^0=r(y^0){\mathrm e}_r(y^0)+z(y^0){\mathrm e}_z=:r^0{\mathrm e}_r(y^0)+z^0 {\mathrm e}_z.
\]
\noindent
--- Multiplying \eqref{eq:y} with ${\mathrm e}_r^\top={\mathrm e}_r(y^0)^\top$ gives
\begin{equation}\label{eq:r}
{\mathrm e}_r^\top\dot{y}^0 =\frac{\varepsilon v^0_\parallel}{b} {\mathrm e}_r^\top\left({\mathrm e}_\parallel\times\frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}\right)+\frac{\varepsilon}{b} {\mathrm e}_r^\top \left((E-\mu^0\,\nabla b)\times {\mathrm e}_\parallel\right) +O(\varepsilon^2),
\end{equation}
where ${\mathrm e}_r, {\mathrm e}_\parallel$ are evaluated at $y^0$ and
\begin{equation}\label{eq:vv}
v^0_\parallel:= {\mathrm e}_\parallel^\top\dot{y}^0.
\end{equation}
From \eqref{eq:relation}, it is known that
\begin{equation}\label{eq:deriv}
\begin{aligned}
\dot{{\mathrm e}}_r(y^0)&=\frac{v^0_\parallel}{r^0}{\mathrm e}_\parallel(y^0), \quad \dot{{\mathrm e}}_\parallel(y^0)=-\frac{v^0_\parallel}{r(y^0)} {\mathrm e}_r(y^0),\\ \ddot{{\mathrm e}}_\parallel(y^0)&=-\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right){\mathrm e}_r(y^0)-\left(\frac{v^0_\parallel}{r^0}\right)^2{\mathrm e}_\parallel(y^0).
\end{aligned}
\end{equation}
Then the left hand side of \eqref{eq:r} can be expressed as
\[
{\mathrm e}_r^\top \dot{y}^0 = \frac{{\mathrm d}}{{\mathrm d} t}( {\mathrm e}_r^\top y^0 ) - \dot{{\mathrm e}}_r^\top y^0=\frac{{\mathrm d} r^0}{{\mathrm d} t},
\]
and the first term on the right hand side of \eqref{eq:r} vanishes since
\[
{\mathrm e}_r^\top \left( {\mathrm e}_\parallel\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t} \right)= -\frac{v^0_\parallel}{r^0} {\mathrm e}_r^\top( {\mathrm e}_\parallel\times {\mathrm e}_r)=0.
\]
Using the fact that ${\mathrm e}_r\times{\mathrm e}_\parallel={\mathrm e}_z, \, {\mathrm e}_z\times {\mathrm e}_\parallel=-{\mathrm e}_r$, $E=E_r{\mathrm e}_r+E_z{\mathrm e}_z$ and $\nabla b=\partial_r b \, {\mathrm e}_r+\partial_z b \, {\mathrm e}_z$, we obtain
\[
{\mathrm e}_r^\top \left((E-\mu^0\,\nabla b(r^0, z^0))\times {\mathrm e}_\parallel\right) = -E_z+\mu^0\partial_z b.
\]
Thus \eqref{eq:r} is equivalent to
\[
\frac{{\mathrm d} r^0}{{\mathrm d} t}=\frac{\varepsilon}{b}( -E_z+\mu^0\partial_z b)+O(\varepsilon^2),
\]
where the functions $E_z, b, \partial_z b$ are evaluated at $(r^0,z^0)$.
The initial value of $r^0$ can be expressed as
\[
r^0(0)={\mathrm e}_r(y(0))^\top y^0(0)={\mathrm e}_r(x(0))^\top x(0) + O(\varepsilon)=r(x(0))+O(\varepsilon).
\]
\noindent
--- Multiplying \eqref{eq:y} with ${\mathrm e}_z^\top$ gives
\begin{equation}\label{eq:z}
{\mathrm e}_z^\top \dot{y}^0 =\frac{\varepsilon v^0_\parallel}{b(r^0, z^0)} {\mathrm e}_z^\top \left({\mathrm e}_\parallel\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}\right)+\frac{\varepsilon}{b(r^0, z^0)} {\mathrm e}_z^\top \left((E-\mu^0\,\nabla b(r^0, z^0))\times {\mathrm e}_\parallel\right) +O(\varepsilon^2).
\end{equation}
Similarly, we have
\[
{\mathrm e}_z^\top \dot{y}^0 = \frac{{\mathrm d}}{{\mathrm d} t}({\mathrm e}_z^\top y^0)=\frac{{\mathrm d} z^0}{{\mathrm d} t},
\]
\[
{\mathrm e}_z^\top\left( {\mathrm e}_\parallel\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}\right) = -\frac{v^0_\parallel}{r^0}{\mathrm e}_z^\top( {\mathrm e}_\parallel\times {\mathrm e}_r)=\frac{v^0_\parallel}{r^0},
\]
and
\[
{\mathrm e}_z^\top \left((E-\mu^0\,\nabla b)\times {\mathrm e}_\parallel\right) = E_r-\mu^0\partial_r b,
\]
then \eqref{eq:z} can be expressed as
\[
\frac{{\mathrm d} z^0}{{\mathrm d} t}=\varepsilon\frac{(v^0_\parallel)^2}{r^0 b}+\varepsilon\frac{E_r}{b}-\varepsilon\frac{\mu^0}{b}\partial_r b+O(\varepsilon^2),
\]
where the functions $E_r, b, \partial_r b$ are evaluated at $(r^0,z^0)$.
The initial value of $z^0$ is
\[
z^0(0)={\mathrm e}_z^\top y^0(0)={\mathrm e}_z^\top x(0) + O(\varepsilon)=z(x(0))+O(\varepsilon).
\]
\noindent
--- By the definition of \eqref{eq:vv} we can derive the equation for $v^0_\parallel$
\begin{equation}\label{eq:dvv}
\frac{{\mathrm d}}{{\mathrm d} t}v^0_\parallel=\frac{{\mathrm d}}{{\mathrm d} t} ({\mathrm e}_\parallel^\top \dot{y}^0)= \dot{{\mathrm e}}_\parallel^\top \dot{y}^0+ {\mathrm e}_\parallel^\top \ddot{y}^0.
\end{equation}
The first term on the right hand side is
\begin{equation}\label{eq:form}
\dot{{\mathrm e}}_\parallel^\top \dot{y}^0=-\frac{v^0_\parallel}{r^0}\frac{{\mathrm d} r^0}{{\mathrm d} t}
\end{equation}
using \eqref{eq:deriv}.
In the following we will show that the second term ${\mathrm e}_\parallel^\top \ddot{y}^0$ is of size $O(\varepsilon^2)$.
From the expression of $B'$ given in \eqref{eq:relation}, it is known that $ B'(y^0)y^{k}_{\pm1}$ is parallel to ${\mathrm e}_\parallel$, thus $ {\mathrm e}_\parallel^\top II=0$ and
\[
{\mathrm e}_\parallel^\top I= {\mathrm e}_\parallel^\top \left(2 \, \mathrm{Re}\left(\mathrm{i}|B|y_1^1\times B'(y^0)y_0^{-1}\right)\right)+O(\varepsilon^2).
\]
The algebraic equation of $y_0^{\pm1}$ can be derived by applying $P_\parallel(y^0)$ to the second equation of \eqref{eq:eqs}
\[
\pm 2\mathrm{i}\frac{\dot{\varphi}}{\varepsilon}P_\parallel\dot{y}^{\pm1}+\left(\pm\mathrm{i}\frac{\ddot{\varphi}}{\varepsilon}-\frac{\dot{\varphi}^2}{\varepsilon^2}\right)y_0^{\pm1}=P_\parallel\left(\dot{y}^0\times B'(y^0)y^{\pm1}_{\pm1}\right)+O(\varepsilon).
\]
The dominant term is $-\dot{\varphi}^2/\varepsilon^2 \, y_0^{\pm1}$, and the right-hand side vanishes, $P_\parallel\left(\dot{y}^0\times B'(y^0)y^{\pm1}_{\pm1}\right)=0$, since $B'(y^0)y^{k}_{\pm1}$ is parallel to ${\mathrm e}_\parallel$.
Hence we obtain the following relation for $y_0^{\pm1}$
\[
\begin{aligned}
y_0^{\pm1}&=\pm 2\mathrm{i} \frac{\varepsilon}{\dot{\varphi}}P_\parallel\dot{y}^{\pm1}+O(\varepsilon^3)\\
&=\pm 2\mathrm{i} \frac{\varepsilon}{\dot{\varphi}}\dot{y}^{\pm1}_0 \mp 2\mathrm{i} \frac{\varepsilon}{\dot{\varphi}}\dot{P}_\parallel y^{\pm1}_{\pm1}+O(\varepsilon^3).
\end{aligned}
\]
By differentiation and substitution, the first term on the right-hand side of the above equation can be removed. Using \eqref{eq:deriv}, we have
\[
\begin{aligned}
y_0^{\pm1}&=\mp 2\mathrm{i} \frac{\varepsilon}{\dot{\varphi}}(\dot{{\mathrm e}}_\parallel {\mathrm e}^\top_\parallel+{\mathrm e}_\parallel \dot{{\mathrm e}}^\top_\parallel) y^{\pm1}_{\pm1}+O(\varepsilon^3)\\
&=\pm 2\mathrm{i} \frac{\varepsilon}{\dot{\varphi}}\frac{v^0_\parallel}{r^0}({{\mathrm e}}^\top_r y^{\pm1}_{\pm1}){\mathrm e}_\parallel +O(\varepsilon^3).\\
\end{aligned}
\]
Denoting $y^{1}_{1}=\zeta\,\nu_1$, $y^{-1}_{-1}=\bar{\zeta}\,\nu_{-1}$ with $\nu_{\pm1}=({\mathrm e}_z \pm\mathrm{i} \,{\mathrm e}_r(y^0))/{\sqrt 2}$ and substituting into the above equation gives $y^{1}_0=\eta \, {\mathrm e}_\parallel+O(\varepsilon^3)$ and $y^{-1}_0=\bar{\eta}\,{\mathrm e}_\parallel+O(\varepsilon^3)$ with $\eta=-\varepsilon(\sqrt{2}{ v^0_\parallel}/{\dot{\varphi}r^0})\, \zeta$. Then we have
\[
\begin{aligned}
2\,\mathrm{Re}\left(\mathrm{i}|B|y_1^1\times B'(y^0)y_0^{-1}\right)&=\sqrt{2}\,\mathrm{Re}\left(\mathrm{i}|B| \zeta\bar{\eta}({\mathrm e}_z\times B'(y^0){\mathrm e}_\parallel+\mathrm{i} \,{\mathrm e}_r\times B'(y^0){\mathrm e}_\parallel)\right)+O(\varepsilon^2)\\
&=-\sqrt{2}\,|B| \zeta\bar{\eta} \ {\mathrm e}_r\times B'(y^0){\mathrm e}_\parallel +O(\varepsilon^2).
\end{aligned}
\]
From the expression of $B'$ in \eqref{eq:relation}, we know that $B'(y^0){\mathrm e}_\parallel$ is a linear combination of ${\mathrm e}_\parallel$ and ${\mathrm e}_r$ and thus ${\mathrm e}_\parallel^\top( {\mathrm e}_r\times B'(y^0){\mathrm e}_\parallel)=0$. This means
\[
{\mathrm e}_\parallel^\top \ddot{y}^0= {\mathrm e}_\parallel^\top I +O(\varepsilon^2)=O(\varepsilon^2),
\]
and \eqref{eq:dvv} is equivalent to
\[
\frac{{\mathrm d}}{{\mathrm d} t}v^0_\parallel=-\frac{v^0_\parallel}{r^0}\frac{{\mathrm d} r^0}{{\mathrm d} t}+ O(\varepsilon^2).
\]
The initial value of $v^0_\parallel$ is
\[
v^0_\parallel(0)={\mathrm e}_\parallel(y^0(0))^\top \dot{y}^0(0)= {\mathrm e}_\parallel(x(0))^\top \dot{x}(0) + O(\varepsilon).
\]
(c) From short to long time intervals
Denoting by $y^{0,[n]}, r^{0,[n]}, z^{0,[n]}, v^{0,[n]}_\parallel$ the functions $y^{0}, r^0, z^0, v^0_\parallel$ on the time interval $n\varepsilon\leq t\leq (n+1)\varepsilon$, it is known from (b) that these coefficients satisfy the following equations
\begin{equation}\label{eq:gc}
\begin{aligned}
\frac{{\mathrm d} r^{0,[n]}}{{\mathrm d} t}&=-\varepsilon\frac{E_z}{b}+\varepsilon\frac{\mu^0}{b}\partial_z b+O(\varepsilon^2), \quad r^{0,[n]}(n\varepsilon)=r(x(n\varepsilon))+O(\varepsilon)\\
\frac{{\mathrm d} z^{0,[n]}}{{\mathrm d} t}&=\varepsilon\frac{(v^{0,[n]}_\parallel)^2}{b r^{0,[n]}}+\varepsilon\frac{E_r}{b}-\varepsilon\frac{\mu^0}{b}\partial_r b+O(\varepsilon^2), \quad
z^{0,[n]}(n\varepsilon)=z(x(n\varepsilon))+O(\varepsilon)\\
\frac{{\mathrm d} v^{0,[n]}_\parallel}{{\mathrm d} t}&=\varepsilon\frac{v^{0,[n]}_\parallel}{r^{0,[n]}}\left(\frac{E_z}{b}-\frac{\mu^0}{b}\partial_z b\right)+O(\varepsilon^2), \quad v^{0,[n]}_\parallel(n\varepsilon)=\langle \dot{x}(n\varepsilon), {\mathrm e}_\parallel(x(n\varepsilon)) \rangle + O(\varepsilon),
\end{aligned}
\end{equation}
with $E_r, E_z, b, \partial_r b, \partial_z b$ evaluated at $(r^{0,[n]},z^{0,[n]})$.
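As a sanity check on the system \eqref{eq:gc}, note that the relation ${\mathrm d} v^0_\parallel/{\mathrm d} t=-(v^0_\parallel/r^0)\,{\mathrm d} r^0/{\mathrm d} t$ makes the product $r^0 v^0_\parallel$ an invariant of the truncated equations. The following sketch (not part of the proof; the field functions $b$, $E_r$, $E_z$ below are hypothetical placeholders) integrates the truncated system with the $O(\varepsilon^2)$ remainder dropped and verifies this invariant numerically.

```python
# Illustrative sketch only: integrate the truncated guiding-centre system
# (eq:gc) with the O(eps^2) remainder dropped.  The field functions below
# are hypothetical placeholders, not taken from the paper.
def b(r, z):        # assumed field strength, positive near the orbit
    return 1.0 + 0.1 * z * z

def db_dr(r, z):
    return 0.0

def db_dz(r, z):
    return 0.2 * z

def E_r(r, z):
    return 0.05

def E_z(r, z):
    return -0.05

def step(r, z, v, eps, mu, dt):
    """One explicit Euler step of the truncated system."""
    B = b(r, z)
    dr = eps * (-E_z(r, z) + mu * db_dz(r, z)) / B
    dz = eps * (v * v / (r * B) + (E_r(r, z) - mu * db_dr(r, z)) / B)
    dv = eps * (v / r) * (E_z(r, z) - mu * db_dz(r, z)) / B
    return r + dt * dr, z + dt * dz, v + dt * dv

r, z, v = 1.0, 0.0, 1.0
eps, mu, dt = 0.01, 0.5, 0.1
for _ in range(1000):            # up to t = 100 = O(1/eps)
    r, z, v = step(r, z, v, eps, mu, dt)
print(round(r * v, 4))           # r^0 * v^0_par stays (nearly) constant
```

Since $v\,{\mathrm d} r^0/{\mathrm d} t + r\,{\mathrm d} v^0_\parallel/{\mathrm d} t=0$ holds exactly for the truncated vector field, the explicit Euler iteration preserves $r^0 v^0_\parallel$ up to an $O({\mathrm d}t^2)$ drift per step.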
From equation \eqref{eq:mfe}, on every time interval, we have
\[
x(t)=y^{0,[n]}(t)+ O(\varepsilon), \ v_\parallel(t)=v_\parallel^{0,[n]}(t)+ O(\varepsilon), \quad n\varepsilon \leq t \leq (n+1)\varepsilon,
\]
and thus
\[
r(t)=r^{0,[n]} + O(\varepsilon), \ z(t)=z^{0,[n]} + O(\varepsilon), \ v_\parallel(t)=v_\parallel^{0,[n]} + O(\varepsilon), \quad n\varepsilon \leq t \leq (n+1)\varepsilon.
\]
In view of the factor $\varepsilon$ in front of the right hand side of the differential equations \eqref{eq:limit} and \eqref{eq:gc}, we have
\[
r^{0,[0]}(t)-\tilde{r}(t)=O(\varepsilon), \quad z^{0,[0]}(t)-\tilde{z}(t)=O(\varepsilon), \quad v^{0,[0]}_\parallel(t)-\tilde{v}(t)=O(\varepsilon), \quad 0\leq t \leq c/\varepsilon.
\]
Since $y^{0,[n-1]}(n\varepsilon)=y^{0,[n]}(n\varepsilon)+O(\varepsilon^N), \dot{y}^{0,[n-1]}(n\varepsilon)=\dot{y}^{0,[n]}(n\varepsilon)+O(\varepsilon^{N-1})$, we have
\[
\begin{aligned}
r^{0,[n-1]}(n\varepsilon)&=r^{0,[n]}(n\varepsilon)+O(\varepsilon^N)\\
z^{0,[n-1]}(n\varepsilon)&=z^{0,[n]}(n\varepsilon)+O(\varepsilon^N)\\
v_\parallel^{0,[n-1]}(n\varepsilon)&=v_\parallel^{0,[n]}(n\varepsilon)+O(\varepsilon^{N-1}).
\end{aligned}
\]
In view of the factor $\varepsilon$ in front of the right hand side of \eqref{eq:gc}, we have
\[
\begin{aligned}
r^{0,[n]}(t)-r^{0,[n-1]}(t)&=O(\varepsilon^N), \\
z^{0,[n]}(t)-z^{0,[n-1]}(t)&=O(\varepsilon^N), \quad n\varepsilon\leq t\leq c/\varepsilon,\\
v_\parallel^{0,[n]}(t)-v_\parallel^{0,[n-1]}(t)&=O(\varepsilon^{N-1}).
\end{aligned}
\]
With the above estimates, we obtain, for $n\varepsilon \leq t\leq (n+1)\varepsilon\leq c/\varepsilon$
\[
\begin{aligned}
r(t)-\tilde{r}(t)&=r(t)-r^{0,[n]}+\sum_{j=1}^{n} \left(r^{0,[j]}(t)-r^{0,[j-1]}(t)\right)+ r^{0,[0]}(t)-\tilde{r}(t)\\
&=O(\varepsilon)+O(n\varepsilon^N)+O(\varepsilon)=O(\varepsilon), \\
z(t)-\tilde{z}(t)&=z(t)-z^{0,[n]}+\sum_{j=1}^{n} \left(z^{0,[j]}(t)-z^{0,[j-1]}(t)\right)+ z^{0,[0]}(t)-\tilde{z}(t)\\
&=O(\varepsilon)+O(n\varepsilon^N)+O(\varepsilon)=O(\varepsilon), \\
v_\parallel(t)-\tilde{v}(t)&=v_\parallel(t)-v_\parallel^{0,[n]}+\sum_{j=1}^{n} \left(v_\parallel^{0,[j]}(t)-v_\parallel^{0,[j-1]}(t)\right)+ v_\parallel^{0,[0]}(t)-\tilde{v}(t)\\
&=O(\varepsilon)+O(n\varepsilon^{N-1})+O(\varepsilon)=O(\varepsilon), \\
\end{aligned}
\]
which is the stated result of Theorem~\ref{thm:exact}.
\subsection{Proof of Theorem~\ref{thm:num}}
Similar to the proof of Theorem~\ref{thm:exact}, we structure the proof into three parts.
(a)
For a general strong magnetic field, the modulated Fourier expansion of the numerical solution is valid on time intervals of length $O(h)$. Using the uniqueness of the modulated Fourier expansion, we can patch together many short-time expansions in the same way as it was done for the exact solution and obtain the expansion over the longer time $O(1/\varepsilon)$.
From Theorem 4.2 of \cite{lubich2022large}, it is known that the numerical solution $x^n$ given by the modified Boris algorithm \eqref{mboris}-\eqref{mod-init} with a step size $h$ satisfying
\[
c_*\varepsilon \le h^2 \le C_*\varepsilon
\]
can be written as
\begin{equation}\label{eq:mfe_num}
x^n=y^0(t_n) + (-1)^n y^1(t_n) +R_N(t_n), \qquad t_n=nh \le c/\varepsilon,
\end{equation}
where $y^0=O(1)$, $y^1=O(h^2)$ are piecewise continuous with jumps of size $O(h^N)$ at integral multiples of $h$ and smooth elsewhere. They are unique up to $O(h^N)$, and $P_\perp(y^0)\dot{y}^0=O(h^2)$, $P_0(y^0)y^1=O(h^4)$.
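The alternating structure of \eqref{eq:mfe_num} can be illustrated numerically: for a toy sequence $x^n = y^0(t_n) + (-1)^n y^1(t_n)$ built from hypothetical smooth functions, averaging two consecutive values cancels the $(-1)^n$ component up to higher-order terms, which is the mechanism behind separating the terms without $(-1)^n$.

```python
import math

# Toy illustration of the splitting x^n = y0(t_n) + (-1)^n y1(t_n) as in
# (eq:mfe_num); y0 and y1 below are hypothetical smooth functions, with
# y1 small, playing the role of the O(h^2) coefficient.
h = 0.01
def y0(t): return math.cos(t)
def y1(t): return 1e-4 * math.sin(t)

x = [y0(n * h) + (-1) ** n * y1(n * h) for n in range(1000)]

# Averaging consecutive values cancels the alternating part, leaving the
# smooth part up to midpoint-rule and Taylor remainders:
smooth = [(x[n] + x[n + 1]) / 2 for n in range(999)]
err = max(abs(smooth[n] - y0((n + 0.5) * h)) for n in range(999))
print(err < 1e-3)  # prints True
```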
Inserting \eqref{eq:mfe_num} into the numerical scheme \eqref{mboris} and separating the terms without $(-1)^n$ yields the equation for the guiding centre $y^0(t)$
\begin{equation}\label{gc_Boris}
\begin{aligned}
\ddot{y}^0+h^2\ddddot{y}^0+O(h^4)=&\bigl(\dot{y}^0+ h^2\dddot{y}^0+ O(h^4)\bigr)\times B(y^0)+E(y^0)-\mu^0\,\nabla|B|(y^0)\\
&-\underbrace{\dot{y}^1_\perp\times B'(y^0)y^1_\perp}_{=:III}+O(h^4),
\end{aligned}
\end{equation}
where $III=O(h^2)$ in our stepsize regime $h^2\sim\varepsilon$.
Taking the projection $P_{\pm 1}=P_{\pm 1}(y^0)$ on both sides gives
\[
\begin{aligned}
P_{\pm 1}\ddot{y}^0+O(h^2)=&\pm\mathrm{i}|B(y^0)|P_{\pm 1}(\dot{y}^0+ h^2\dddot{y}^0+ O(h^4))\\
&+P_{\pm 1}\left(E(y^0)-\mu^0\,\nabla|B|(y^0)\right)+O(h^2),
\end{aligned}
\]
which means (recall that $h^2\sim\varepsilon$)
\[
P_{\pm 1}\dot{y}^0=-h^2P_{\pm 1}\dddot{y}^0\mp\frac{\mathrm{i}}{|B|}P_{\pm1}\ddot{y}^0\pm\frac{\mathrm{i}}{|B|}P_{\pm1}\left(E(y^0)-\mu^0\,\nabla|B|(y^0)\right)+O(\varepsilon h^2).
\]
Denoting $g_{\pm1}=P_{\pm1}\dot{y}^0$, we have $P_{\pm1}\ddot{y}^0=\dot{g}_{\pm1}-\dot{P}_{\pm1}\dot{y}^0$ and $P_{\pm1}\dddot{y}^0=\ddot{g}_{\pm1}-2\dot{P}_{\pm1}\ddot{y}^0-\ddot{P}_{\pm1}\dot{y}^0$. By differentiation and substitution the derivatives of $g_{\pm1}$ can be removed and we have
\[
P_{\pm1}\dot{y}^0=h^2(2\dot{P}_{\pm1}\ddot{y}^0+\ddot{P}_{\pm1}\dot{y}^0)\pm\frac{\mathrm{i}}{|B|}\dot{P}_{\pm1}\dot{y}^0\pm\frac{\mathrm{i}}{|B|}P_{\pm1}\left(E(y^0)-\mu^0\,\nabla|B|(y^0)\right)+O(\varepsilon h^2).
\]
Using that $\dot{P}_{1}+\dot{P}_{-1}+\dot{P}_{0}=0$, we obtain
\begin{equation}\label{eq:y_Boris}
\begin{aligned}
\dot{y}^0
&=P_{\parallel}\dot{y}^0+P_{1}\dot{y}^0+P_{-1}\dot{y}^0\\
&=P_{\parallel}\dot{y}^0-h^2(2\dot{P}_\parallel\ddot{y}^0+\ddot{P}_\parallel \dot{y}^0)+\frac{1}{|B|}P_{\parallel}\dot{y}^0\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}+\frac{1}{|B|^2}\left(E-\mu^0\,\nabla |B|\right)\times B+O(\varepsilon h^2),
\end{aligned}
\end{equation}
with $B, \nabla |B|, P_\parallel={\mathrm e}_\parallel {\mathrm e}_\parallel^\top$ and $E$ evaluated at the guiding centre $y^0$. Compared to the guiding centre equation \eqref{eq:y} of the exact solution, it is noticed that there are additional $O(h^2)$ terms in \eqref{eq:y_Boris}.
(b) Next we derive the guiding centre equation in toroidal geometry where $y^0(t)$ can be written as
\[
y^0=r(y^0){\mathrm e}_r(y^0)+z(y^0){\mathrm e}_z=:r^0{\mathrm e}_r(y^0)+z^0{\mathrm e}_z.
\]
\noindent
--- Multiplying \eqref{eq:y_Boris} with ${\mathrm e}_r^\top={\mathrm e}_r(y^0)^\top$ gives
\begin{equation}\label{eq:r_Boris}
\begin{aligned}
{\mathrm e}_r^\top \dot{y}^0 =&-2h^2 {\mathrm e}_r^\top \dot{P}_\parallel\ddot{y}^0-h^2{\mathrm e}_r^\top \ddot{P}_\parallel \dot{y}^0\\
&+\frac{\varepsilon v^0_\parallel}{b(r,z)} {\mathrm e}_r^\top \left( {\mathrm e}_\parallel\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}\right)+\frac{\varepsilon}{b(r,z)} {\mathrm e}_r^\top \left( (E-\mu^0\,\nabla b(r,z))\times {\mathrm e}_\parallel \right)+O(\varepsilon h^2)
\end{aligned}
\end{equation}
with $v^0_\parallel:={\mathrm e}_\parallel^\top \dot{y}^0$. Compared to \eqref{eq:r}, the only difference comes from the first two terms on the right hand side which we calculate in the following.
Multiplying \eqref{gc_Boris} with ${\mathrm e}_\parallel^\top={\mathrm e}_\parallel(y^0)^\top$ gives
\[
{\mathrm e}_\parallel^\top \ddot{y}^0=O(h^2),
\]
then the first term on the right hand side of \eqref{eq:r_Boris} is
\[
-2h^2{\mathrm e}_r^\top \dot{P}_\parallel\ddot{y}^0=-2h^2{\mathrm e}_r^\top (\dot{{\mathrm e}}_\parallel {\mathrm e}_\parallel^\top \ddot{y}^0)=O(h^4).
\]
Using \eqref{eq:deriv}, the second term on the right hand side of \eqref{eq:r_Boris} can be expressed as
\begin{equation}\label{eq:formula}
\begin{aligned}
{\mathrm e}_r^\top\ddot{P}_\parallel \dot{y}^0&={\mathrm e}_r^\top( \ddot{{\mathrm e}}_\parallel {\mathrm e}_\parallel^\top \dot{y}^0)+2{\mathrm e}_r^\top( \dot{{\mathrm e}}_\parallel\dot{{\mathrm e}}_\parallel^\top \dot{y}^0)\\
&=-v^0_\parallel\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right)+2\left(\frac{v^0_\parallel}{r^0}\right)^2\frac{{\mathrm d} r^0}{{\mathrm d} t}\\
&=-\frac{v^0_\parallel}{r^0}\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t} + 3
\left(\frac{v^0_\parallel}{r^0}\right)^2\frac{{\mathrm d} r^0}{{\mathrm d} t}.
\end{aligned}
\end{equation}
Inserting
\[
\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t}=\frac{{\mathrm d}}{{\mathrm d} t} ({\mathrm e}_\parallel^\top\dot{y}^0)= \dot{{\mathrm e}}_\parallel^\top \dot{y}^0+ {\mathrm e}_\parallel^\top\ddot{y}^0=-\frac{v^0_\parallel}{r^0}\frac{{\mathrm d} r^0}{{\mathrm d} t}+ O(h^2)
\]
into \eqref{eq:formula} gives
\[
{\mathrm e}_r^\top\ddot{P}_\parallel \dot{y}^0=4\left(\frac{v^0_\parallel}{r^0}\right)^2\frac{{\mathrm d} r^0}{{\mathrm d} t}+O(h^2).
\]
Then \eqref{eq:r_Boris} can be expressed as
\[
\left(1+4h^2\left(\frac{v^0_\parallel}{r^0}\right)^2\right)\frac{{\mathrm d} r^0}{{\mathrm d} t}=\frac{\varepsilon}{b} {\mathrm e}_r^\top \left((E-\mu^0\,\nabla b)\times {\mathrm e}_\parallel \right) +O(\varepsilon h^2),
\]
which yields
\[
\frac{{\mathrm d} r^0}{{\mathrm d} t}=-\varepsilon\frac{E_z}{b}+\varepsilon\frac{\mu^0}{b}\partial_z b+O(\varepsilon h^2),
\]
where the functions $E_z, b, \partial_z b$ are evaluated at $(r^0,z^0)$.
The initial value of $r^0$ can be expressed as
\[
r^0(0)=\langle y^0(0),{\mathrm e}_r(y(0))\rangle=\langle x^0,{\mathrm e}_r(x^0)\rangle + O(h^2)=r(x^0)+O(h^2).
\]
\noindent
--- Multiplying \eqref{eq:y_Boris} with ${\mathrm e}_z^\top$ gives
\begin{equation}\label{eq:z_Boris}
\begin{aligned}
{\mathrm e}_z^\top \dot{y}^0 =&-2h^2 {\mathrm e}_z^\top \dot{P}_\parallel\ddot{y}^0 -h^2 {\mathrm e}_z^\top \ddot{P}_\parallel \dot{y}^0\\
&+\frac{\varepsilon v_\parallel}{b(r,z)} {\mathrm e}_z^\top\left( {\mathrm e}_\parallel\times \frac{{\mathrm d} {\mathrm e}_\parallel}{{\mathrm d} t}\right)+\frac{\varepsilon}{b(r,z)} {\mathrm e}_z^\top \left((E-\mu^0\,\nabla b(r,z))\times {\mathrm e}_\parallel\right) +O(\varepsilon h^2),
\end{aligned}
\end{equation}
where the first two terms on the right hand side vanish using \eqref{eq:deriv} and the orthogonality of ${\mathrm e}_z,{\mathrm e}_r,{\mathrm e}_\parallel$. Similar to the continuous case, we obtain
\[
\frac{{\mathrm d} z^0}{{\mathrm d} t}=\varepsilon\frac{\left(v^0_\parallel\right)^2}{r^0b}+\varepsilon\frac{E_r}{b}-\varepsilon\frac{\mu^0}{b}\partial_r b+O(\varepsilon h^2),
\]
where the functions $E_r, b, \partial_r b$ are evaluated at $(r^0,z^0)$.
The initial value of $z^0$ is
\[
z^0(0)=\langle y^0(0),{\mathrm e}_z\rangle=\langle x^0,{\mathrm e}_z\rangle + O(h^2)=z(x^0)+O(h^2).
\]
\noindent
--- Finally, we need to derive the differential equation for $v^0_\parallel$, which can be computed directly as in the continuous case
\begin{equation}\label{eq:parallel_Boris}
\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t}=\frac{{\mathrm d}}{{\mathrm d} t}\left( {\mathrm e}_\parallel^\top \dot{y}^0\right)= \dot{{\mathrm e}}_\parallel^\top \dot{y}^0+ {\mathrm e}_\parallel^\top\ddot{y}^0.
\end{equation}
The first term on the right hand side is the same as \eqref{eq:form}.
Multiplying \eqref{gc_Boris} with ${\mathrm e}_\parallel^\top={\mathrm e}_\parallel(y^0)^\top$ yields
\begin{equation}\label{eq:ddoty_Boris}
\begin{aligned}
{\mathrm e}_\parallel^\top \ddot{y}^0&=-h^2 {\mathrm e}_\parallel^\top \ddddot{y}^0+ O(h^4)\\
&=-h^2 \frac{{\mathrm d}^2}{{\mathrm d} t^2}({\mathrm e}_\parallel^\top \ddot{y}^0)+2h^2 (\dot{{\mathrm e}}_\parallel^\top \dddot{y}^0)+h^2 \ddot{{\mathrm e}}_\parallel^\top\ddot{y}^0+ O(h^4),
\end{aligned}
\end{equation}
where we use $ {\mathrm e}_\parallel^\top(E-\mu^0\,\nabla|B|) =0$ and ${\mathrm e}_\parallel^\top III =0$.
Since the derivatives of $r^0$ are $O(\varepsilon)$ and using \eqref{eq:deriv}, we have
\[
\begin{aligned}
\dot{{\mathrm e}}_\parallel^\top \dddot{y}^0&=-\frac{v^0_\parallel}{r^0} {\mathrm e}_r^\top \dddot{y}^0\\
&=-\frac{v^0_\parallel}{r^0}\left( \dddot{r^0} - 3 \dot{{\mathrm e}}_r^\top \ddot{y}^0 - 3 \ddot{{\mathrm e}}_r^\top\dot{y}^0- \dddot{{\mathrm e}}_r^\top y^0 \right)\\
&=-\frac{v^0_\parallel}{r^0}\left(- 3\frac{v^0_\parallel}{r^0} {\mathrm e}_\parallel^\top \ddot{y}^0 -3v^0_\parallel\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right) + v^0_\parallel\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right) + r^0\frac{{\mathrm d}}{{\mathrm d} t}\left(\left(\frac{v^0_\parallel}{r^0}\right)^2\right) \right)+O(\varepsilon)\\
&=3\left(\frac{v^0_\parallel}{r^0}\right)^2 {\mathrm e}_\parallel^\top \ddot{y}^0+O(\varepsilon)
\end{aligned}
\]
and
\[
\begin{aligned}
\ddot{{\mathrm e}}_\parallel^\top \ddot{y}^0&=-{\mathrm e}_r^\top \ddot{y}^0\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right)-\left(\frac{v^0_\parallel}{r^0}\right)^2 {\mathrm e}_\parallel^\top \ddot{y}^0\\
&=-\left(\frac{{\mathrm d}}{{\mathrm d} t}({\mathrm e}_r^\top \dot{y}^0)-\dot{{\mathrm e}}_r^\top\dot{y}^0\right)\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right)-\left(\frac{v^0_\parallel}{r^0}\right)^2 {\mathrm e}_\parallel^\top\ddot{y}^0\\
&=\frac{\left(v^0_\parallel\right)^2}{r^0}\frac{{\mathrm d}}{{\mathrm d} t}\left(\frac{v^0_\parallel}{r^0}\right)-\left(\frac{v^0_\parallel}{r^0}\right)^2 {\mathrm e}_\parallel^\top\ddot{y}^0+O(\varepsilon),
\end{aligned}
\]
then \eqref{eq:ddoty_Boris} can be written as
\[
(1-5h^2(v^0_\parallel/r^0)^2) {\mathrm e}_\parallel^\top \ddot{y}^0=h^2\left(\frac{v^0_\parallel}{r^0}\right)^2\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t}+ O(h^4).
\]
This gives
\[
{\mathrm e}_\parallel^\top \ddot{y}^0=h^2\left(\frac{v^0_\parallel}{r^0}\right)^2\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t}+ O(h^4).
\]
Equation \eqref{eq:parallel_Boris} can now be written as
\[
(1-h^2(v^0_\parallel/r^0)^2)\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t}=-\frac{v^0_\parallel}{r^0}\frac{{\mathrm d} r^0}{{\mathrm d} t}+ O(h^4),
\]
which gives
\[
\frac{{\mathrm d} v^0_\parallel}{{\mathrm d} t}=-\frac{v^0_\parallel}{r^0}\frac{{\mathrm d} r^0}{{\mathrm d} t}+ O(h^4).
\]
The initial value of $v^0_\parallel$ is
\[
v^0_\parallel(0)=\langle \dot{y}^0(0), {\mathrm e}_\parallel(y^0(0))\rangle=\langle v^0,{\mathrm e}_\parallel(x^0) \rangle + O(h^2).
\]
(c)
Denoting by $y^{0,[n]}, r^{0,[n]}, z^{0,[n]}, v^{0,[n]}_\parallel$ the functions $y^{0}, r^0, z^0, v^0_\parallel$ on the time interval $nh\leq t\leq (n+1)h$, it is known from (b) that these coefficients satisfy the following equations
\[
\begin{aligned}
\frac{{\mathrm d} r^{0,[n]}}{{\mathrm d} t}&=-\varepsilon\frac{E_z}{b}+\varepsilon\frac{\mu^0}{b}\partial_z b+O(\varepsilon h^2), \quad r^{0,[n]}(nh)=r(x(nh))+O(h^2)\\
\frac{{\mathrm d} z^{0,[n]}}{{\mathrm d} t}&=\varepsilon\frac{(v^{0,[n]}_\parallel)^2}{b r^{0,[n]}}+\varepsilon\frac{E_r}{b}-\varepsilon\frac{\mu^0}{b}\partial_r b+O(\varepsilon h^2), \quad
z^{0,[n]}(nh)=z(x(nh))+O(h^2)\\
\frac{{\mathrm d} v^{0,[n]}_\parallel}{{\mathrm d} t}&=\varepsilon\frac{v^{0,[n]}_\parallel}{r^{0,[n]}}\left(\frac{E_z}{b}-\frac{\mu^0}{b}\partial_z b\right)+O(\varepsilon h^2), \quad v^{0,[n]}_\parallel(nh)=\langle \dot{x}(nh), {\mathrm e}_\parallel(x(nh)) \rangle + O(h^2),
\end{aligned}
\]
with $E_r, E_z, b, \partial_r b, \partial_z b$ evaluated at $(r^{0,[n]},z^{0,[n]})$.
By patching the errors together as was done for the continuous case, we prove that
\[
r(x^n)-\tilde{r}(t_n)=O(h^2), \quad z(x^n)-\tilde{z}(t_n)=O(h^2), \quad v^n_\parallel-\tilde{v}(t_n)=O(h^2), \quad 0\leq t_n \leq c/\varepsilon.
\]
\section*{Acknowledgement}
The author thanks Professor Christian Lubich for many useful discussions and comments.
This work was supported by the Sino-German (CSC-DAAD) Postdoc Scholarship, Program No. 57575640.
\bibliographystyle{abbrv}
\section*{Methods}
Device fabrication. Photodetectors were fabricated on a standard silicon-on-insulator (SOI)
wafer. Buried silicon waveguides with an effective width $w$ = 400\,nm and a
height $h$ = 220\,nm were first built using the LOCal Oxidation of Silicon (LOCOS)
technique (see Supplementary Information, S2). Grating couplers (GCs) were produced by a
shallow etching of silicon. A top 5\,nm thick SiN dielectric layer was then deposited by
atomic layer deposition for electrical isolation from the silicon layer underneath. Next to
the waveguide, bottom metallic pads used to contact the graphene electrode were
subsequently defined by electron-beam lithography, evaporation of 5\,nm Ti and 50\,nm Au,
and a lift-off process. Mechanical exfoliation was employed to obtain crystalline flakes of
MoTe$_2$, graphene and hBN, which were identified with an optical microscope and whose
thicknesses were characterized with an atomic force microscope (AFM). The graphene--MoTe$_2$ heterostructure was stacked using a polymer-based pick-up technique with a
polydimethylsiloxane (PDMS)/polypropylene carbonate (PPC) stamp, transferred to the
device chips, and aligned to the silicon waveguides with the help of the micromechanical
stage of a SUSS MJB4 mask aligner. 200\,nm wide and 20\,nm thick top Au contact pads were formed again by
electron-beam lithography, metal evaporation, and a lift-off process. The whole devices were
finally encapsulated with hBN flakes. The measurements were performed under ambient
conditions at room temperature.
\begin{acknowledgements}
This research was supported by the Swiss National Science Foundation (grant no. 200021\_165841). K.W. and T.T. acknowledge support from the Elemental Strategy Initiative
conducted by the MEXT, Japan, A3 Foresight by JSPS and the CREST
(JPMJCR15F3), JST.
This work was carried out partially at the Binnig and Rohrer Nanotechnology Center and the FIRST Center for Micro- and Nanotechnology at ETH Zurich.
\end{acknowledgements}
\section*{Author contributions}
N.F. and P.M. conceived the concept, designed and fabricated the devices, designed and performed the experiments, and analyzed the data. Y.S. contributed to the experiments. A.E. contributed to the device fabrication. T.T. and K.W. synthesized the hBN crystals. N.F., P.M., J.L., and L.N. co-wrote the manuscript, with support from all authors.\\
N.F. and P.M. contributed equally.
\bibliographystyle{naturemag}
\section{Introduction}
\label{intro}
A random number generator (RNG) falls into one of two basic classes \cite{kocc2009cryptographic}: first, a deterministic random number generator (DRNG) or pseudorandom number generator (PRNG), which takes a seed value as input and produces random-looking bitstreams using a deterministic algorithm; second, a true random number generator (TRNG), which uses physical and non-physical sources to generate true randomness. It is imperative to note that, unlike a PRNG or DRNG, a TRNG does not need a seed but relies on non-deterministic effects or physical experiments to generate truly random bits \cite{kocc2009cryptographic}. The significant difference is that a TRNG generates non-reproducible, arbitrary-length random bitstreams without any seed or initializer, whereas a PRNG generates arbitrary-length pseudorandom bitstreams from a seed value or initializer. A TRNG is slow, has an infinite period, is costly to deploy and is open to manipulation. Unlike a TRNG, a PRNG has low development and deployment cost (no dedicated hardware is needed) but can still produce reasonably good random-looking bitstreams.
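The reproducibility of a seeded PRNG, in contrast to a TRNG, can be seen in a few lines of code (a generic illustration using Python's built-in Mersenne Twister, not a cryptographic generator):

```python
import random

# A PRNG is deterministic: the same seed reproduces the same bitstream.
# A TRNG run twice cannot be made to reproduce its output this way.
a = random.Random(42).getrandbits(64)
b = random.Random(42).getrandbits(64)
print(a == b)  # prints True: identical seeds give identical streams
```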
The design goals of a RNG heavily depend on its target applications. A simple application like stochastic simulation or Monte Carlo integration may require the RNG to generate nothing more than a random-looking bitstream \cite{kocc2009cryptographic}. However, a sensitive application of a RNG, like an operating system kernel on top of which entire critical systems and applications run, certainly requires the RNG to generate high-quality pseudorandom bitstreams that are provably secure, unpredictable and non-reproducible.
A kernel uses a RNG to create ASLR offsets \cite{marco2019address}, generate salts to securely store user passwords \cite{Tanenbaum2006} and generate random keys to implement various cryptographic primitives such as authentication. The ASLR is one of the most important techniques used by the kernel (in special cases termed Kernel-ASLR or KASLR); it randomizes the process layout to protect the locations of targeted regions such as the stack, heap, executable and dynamic linker/loader \cite{marco2019address}. The ASLR demands not only a statistically qualified, high-quality pseudorandom number generator but also an output bitstream that is provably secure and unpredictable. Hence, a CSPRNG (or simply a PRNG with regular entropy inputs for unpredictability) is the preferred type of RNG for kernel applications. There are many good CSPRNGs which are implemented in various operating systems and used by their kernels. Fortuna, Yarrow and /dev/(u)random are the popular CSPRNGs currently implemented by the Windows, MacOS/iOS/FreeBSD and Linux/Android operating systems respectively \cite{dodis2017eat,dorre2015pseudo}. In this paper, a new CSPRNG which exhibits the `non-reproducibility' property of a TRNG is proposed, taking the security of the above kernel applications into consideration.
\medskip
In particular, the key contributions of this paper are as follows:
\begin{itemize}
\itemsep=0.8pt
\item{A novel CSPRNG design comprising two non-standard and verified secure elliptic curves and nine LFSRs, uniquely configured in a clock-controlled fashion to attain exponential linear complexity, is used to construct the proposed KCS-PRNG.}
\item{A novel architecture of the KCS-PRNG is proposed to mitigate the gap of `non-reproducibility' property.}
\item{Two new non-standard and verified elliptic curves are introduced in this paper which are used by the proposed KCS-PRNG to mitigate the gap of `non-reproducibility' property. Both elliptic curves are generated randomly over 256-bit prime fields to ensure cryptographic and implementation security.}
\item{Extensive security analysis of the proposed KCS-PRNG is carried out to ensure theoretical security.}
\item{Experimental validation and demonstration of statistical qualities of randomness using National Institute of Standards and Technology (NIST), Diehard, TestU01 test suites.}
\item{Experimental validation and demonstration of `non-reproducibility' property of the proposed KCS-PRNG.}
\item{The proposed KCS-PRNG is compared with present kernel CSPRNGs such as Fortuna, Yarrow and dev/random and an existing PRNG \cite{alhadawi2019designing}. The KCS-PRNG is also compared with an existing TRNG \cite{anandakumar2019fpga} in context of non-reproducibility of the generated random bitstreams.}
\end{itemize}
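The paper's exact nine-LFSR configuration is specified later; as a generic illustration of the clock-controlled principle behind it, the classical alternating-step construction below lets one small LFSR decide which of two others is stepped (all register sizes and tap positions here are arbitrary toy choices, not the KCS-PRNG parameters):

```python
# Toy clock-controlled LFSR combination (alternating-step style).  The
# register sizes and taps are arbitrary illustrations, NOT the KCS-PRNG
# configuration proposed in this paper.
class LFSR:
    def __init__(self, taps, state):
        self.taps = taps          # tapped positions feeding back
        self.state = state        # list of bits; state[0] is shifted out

    def step(self):
        out = self.state[0]
        fb = 0
        for t in self.taps:
            fb ^= self.state[t]
        self.state = self.state[1:] + [fb]
        return out

def keystream(ctrl, a, b, nbits):
    """The control LFSR decides which of a, b is clocked each step;
    the output bit is the XOR of their most recent output bits."""
    bits, out_a, out_b = [], a.state[0], b.state[0]
    for _ in range(nbits):
        if ctrl.step():
            out_a = a.step()
        else:
            out_b = b.step()
        bits.append(out_a ^ out_b)
    return bits

ctrl = LFSR([0, 2], [1, 0, 1, 1, 0])
a = LFSR([0, 1], [1, 1, 0, 1])
b = LFSR([0, 3], [0, 1, 1, 0, 1, 1])
stream = keystream(ctrl, a, b, 32)
print("".join(map(str, stream)))
```

Irregular clocking of this kind is what raises the linear complexity of the combined stream far above that of the individual registers.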
Rest of the paper is organized as follows: Section $\ref{sec2}$ briefly discusses the randomness requirements of the kernel applications and standard RNG requirements. Section $\ref{sec3}$ reviews current CSPRNGs implemented by popular operating system kernels. Section $\ref{sec4}$ presents the proposed design of the KCS-PRNG. Subsequently, Section $\ref{sec5}$ presents the security analysis and Section $\ref{sec6}$ elaborates experimental validation and demonstration of the proposed KCS-PRNG. Section $\ref{sec7}$ presents the details of the two new elliptic curves selected over large prime fields for use in the proposed KCS-PRNG.~Section $\ref{sec8}$ shows the important obtained results of the proposed KCS-PRNG.~Section $\ref{sec9}$ briefly analyses the performance of KCS-PRNG. Section $\ref{sec10}$ compares KCS-PRNG with existing kernel CSPRNGs as well as recent PRNG, CSPRNG and TRNG used by various cryptographic applications. Finally, Section $\ref{sec11}$ concludes the findings of this paper.
\section{Preliminaries}\label{sec2}
\subsection{Randomness for Kernel Applications} \label{subsec1}
One of the most important kernel applications that requires high-quality randomness is ASLR \cite{marco2019address}, an efficient mitigation technique against remote code execution attacks that randomizes the memory addresses of processes to prevent memory exploitation. The ASLR currently uses a CSPRNG to randomize the logical elements contained in the memory objects at the time of pre-linking (at installation of the application), per-boot (every time the system boots), per-exec (when a new executable image is loaded in memory, called per-process randomization), per-fork (every time a new process is created) and per-object (every time a new object is created). Figure $\ref{fig_aslr}$ shows per-boot versus per-exec randomization to point out when randomization takes place in each process. Similarly, Figure $\ref{fig_PerObject}$ shows that the $mmap()$ system call allocates all the objects side by side in the $mmap\_area$ area during per-object randomization. The $rand()$ function provides random bits of the desired length to the objects, as shown in Figure $\ref{fig_PerObject}$.
\begin{figure}
\centering
\includegraphics[width=2.8in]{ASLR}
\caption{Per-boot versus Per-exec randomization \cite{marco2019address}}\label{fig_aslr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2.8in]{PerObject}
\caption{Per-object randomization \cite{marco2019address}}\label{fig_PerObject}
\end{figure}
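A toy model of the per-object randomization of Figure $\ref{fig_PerObject}$ is sketched below; the constants and object sizes are purely illustrative, not those of any real kernel. Each object placed in the $mmap\_area$ receives a page-aligned random gap drawn from the generator.

```python
import secrets

# Toy per-object randomization sketch; all constants are illustrative.
PAGE = 0x1000                              # 4 KiB pages

def rand_offset(entropy_bits=16):
    """Plays the role of rand(): a page-aligned random gap."""
    return secrets.randbits(entropy_bits) * PAGE

base = 0x7F0000000000                      # start of the mmap area
objects, addr = {}, base
for name, size in [("stack", 8 * PAGE), ("heap", 32 * PAGE), ("lib", 16 * PAGE)]:
    addr += rand_offset()                  # random gap before each object
    objects[name] = addr
    addr += size
print({k: hex(v) for k, v in objects.items()})
```

The security of the scheme rests entirely on the unpredictability of `rand_offset`, which is why a CSPRNG, rather than a plain PRNG, must back it.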
Moreover, the degree of security provided by the ASLR technique depends on the predictability of the random memory layout of a program; therefore, `non-reproducibility' of the random sequences used in ASLR must additionally be ensured. This particular requirement is also addressed in the present work.
Another important kernel application is the Morris--Thompson scheme \cite{Tanenbaum2006,silberschatz2015instructor}, which associates an $n$-bit random number with each password, concatenates the two and encrypts them together before storing the result in the password file. A CSPRNG is used whenever a password is changed and a random number is required.
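A minimal modern sketch of this scheme follows, with hashing in place of the original encryption step; the salt plays the role of the $n$-bit random number, and `os.urandom` stands in for the kernel CSPRNG.

```python
import hashlib
import os

# Minimal salted-password sketch in the spirit of the Morris-Thompson
# scheme; SHA-256 hashing is used here in place of the original
# encryption step, purely for illustration.
def store(password: str):
    salt = os.urandom(16)                  # the n-bit random number
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                    # what the password file keeps

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.sha256(salt + password.encode()).digest() == digest

salt, digest = store("hunter2")
print(verify("hunter2", salt, digest), verify("wrong", salt, digest))  # True False
```

Because each stored entry carries a fresh random salt, identical passwords produce different digests, defeating precomputed dictionary attacks.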
\subsection{RNG requirements} \label{subsec4}
Koc \cite{kocc2009cryptographic} and Schneier \cite{schneier2007applied} collated the properties that various classes of RNG exhibit and formulated the following requirements:
\begin{enumerate}
\itemsep=0.9pt
\item $R1$ \label{R1}: A random sequence generated by a RNG should have good statistical properties.
This requirement enables a RNG with a large period.
\item $R2$ \label{R2}: A random sequence generated by a RNG should be unpredictable.
This requirement makes the prediction of the next bit infeasible in the stream, given the complete knowledge of the algorithm or hardware which generates the sequence and all of the previous bits in the stream. This gives the notion of Backward Secrecy.
\item $R3$ \label{R3}: A random sequence generated by a RNG should not allow to compute previous internal state or values of the generator even if the internal state is known.
This gives the notion of Forward Secrecy.
\item $R4$ \label{R4}: A random sequence generated by a RNG should not be reliably reproduced.
If the RNG is run twice with exactly the same input, it should produce two completely unrelated random sequences.
\end{enumerate}
By definition, a PRNG meets only the $R\ref{R1}$ requirement, whereas a CSPRNG meets the $R\ref{R1}$, $R\ref{R2}$ and $R\ref{R3}$ requirements \cite{schneier2007applied}. A TRNG, in turn, meets the $R\ref{R2}$, $R\ref{R3}$ and $R\ref{R4}$ requirements \cite{schneier2007applied}. In this paper, the proposed KCS-PRNG is designed to meet the $R\ref{R1}$, $R\ref{R2}$ and $R\ref{R3}$ requirements, along with the $R\ref{R4}$ requirement to a practical extent.
\section{Cryptographically secure random number generators for kernels} \label{sec3}
Linux and Android kernels use /dev/random and /dev/urandom, which are considered CSPRNGs, i.e., PRNGs with input (meeting requirement $R\ref{R2}$), for randomness generation. The limitations of these CSPRNGs are that they do not have enough entropy in the pool and that they cannot generate keys larger than the hash function they use internally \cite{viega2003practical}. /dev/random blocks while waiting for the entropy pool to fill sufficiently, which diminishes the performance of the generator. /dev/random meets the RNG requirements $R\ref{R1}$, $R\ref{R2}$ and $R\ref{R3}$ but does not meet the $R\ref{R4}$ requirement.~Though /dev/urandom provides a fast, non-blocking supply of random sequences through its non-blocking entropy pool, it faces predictability issues \cite{dodis2013security}. /dev/urandom meets the requirements $R\ref{R1}$ and $R\ref{R3}$ but does not meet the requirements $R\ref{R2}$ and $R\ref{R4}$.
Yarrow \cite{kelsey1999yarrow} is a PRNG with true random inputs used by the MacOS/iOS/FreeBSD kernels. This CSPRNG is complex, under-specified in its entropy handling, and slow to provide an initial seed \cite{viega2003practical}. It uses the Triple Data Encryption Standard (DES) block cipher for pseudorandom bitstream generation. Like /dev/random, Yarrow meets the requirements $R\ref{R1}$, $R\ref{R2}$ and $R\ref{R3}$ but does not meet the requirement $R\ref{R4}$.
Fortuna \cite{ferguson2011cryptography} is a popular CSPRNG and a refinement of Yarrow, used by the Windows kernel, and it uses its entropy effectively. It uses an Advanced Encryption Standard (AES)-like cipher for the generator, with a 256-bit block cipher key and a 128-bit counter. Fortuna produces a very good throughput of 20 clock cycles per byte on a PC-class CPU \cite{ferguson2011cryptography} and 7.2 Mbps throughput in software \cite{mcevoy2006fortuna}. Fortuna implicitly accumulates entropy through hashing, partitions the incoming entropy into multiple entropy pools and draws on its pools at different rates for output generation, in order to guarantee that at least one pool always remains available for use \cite{dodis2017eat}. However, Viega \cite{viega2003practical} observed that Fortuna completely forgoes entropy estimation, and neither Fortuna nor Yarrow exhibits information-theoretic security. Like Yarrow, Fortuna meets the requirements $R\ref{R1}$, $R\ref{R2}$ and $R\ref{R3}$ but does not meet `non-reproducibility', i.e., the requirement $R\ref{R4}$.
It is imperative to note that the present kernel CSPRNGs do not meet the requirement of `non-reproducibility', i.e., requirement $R\ref{R4}$, a crucial feature that helps better protect the kernel from exploitation, as discussed in Section $\ref{subsec1}$. In this work, the proposed KCS-PRNG is designed so that all four requirements ($R\ref{R1}$ to $R\ref{R4}$) of an ideal RNG are met, to ensure better protection of the kernel from exploitation.
\section{The proposed design of KCS-PRNG} \label{sec4}
Generation of high-quality cryptographically secure pseudorandom bitstreams is an intricate task that needs an efficient generator design, taking into consideration the statistical properties of randomness ($R\ref{R1}$), unpredictability ($R\ref{R2}$, $R\ref{R3}$) and non-reproducibility ($R\ref{R4}$) of the output bitstreams.~For this reason, the proposed KCS-PRNG combines two modules in its design: a pair of cryptographically safe elliptic curves, and a nonlinear Sequence Generator consisting of nine clock-controlled LFSRs in alternating step configuration.~The design decisions and assumptions of the proposed KCS-PRNG are as follows:
\subsection{Selection of elliptic curves} \label{SelECs}
The main motivation for using elliptic curves in the proposed KCS-PRNG, instead of the stream or block ciphers used by /dev/(u)random \cite{viega2003practical} (ChaCha20), Yarrow \cite{kelsey1999yarrow} (Triple DES) and Fortuna \cite{ferguson2011cryptography} (AES), is that one can choose different points on the selected elliptic curve to generate completely unrelated bitstreams under identical start conditions.~Hence, the combination of elliptic curves and clock-controlled LFSRs in the proposed KCS-PRNG generates non-reproducible cryptographically secure pseudorandom bitstreams.~Moreover, the combination of elliptic curve and LFSR has been shown to exhibit enhanced randomness properties \cite{gong2002linear}. Two elliptic curves are used in KCS-PRNG for added complexity, where each elliptic curve provides nearly $2^{128}$ key space.~The advantages of coupling elliptic curves with the clock-controlled LFSRs are twofold: first, the elliptic curves close the gap on the `non-reproducibility' property ($R\ref{R4}$) through the proposed method of replacing them periodically from a look-up table.~Second, the elliptic curves generate bitstreams that are non-invertible due to the underlying hard Elliptic Curve Discrete Logarithm Problem (ECDLP); hence, they make the proposed KCS-PRNG provably secure as well as forward secure, resisting backtracking attacks.~The elliptic curves are chosen as randomly generated curves rather than the standard curves with fixed coefficients recommended by agencies such as NIST \cite{kerry2013digital} and Brainpool \cite{brainpoolbrainpool}, so that a look-up table can be created consisting of a reasonably large number of cryptographically secure elliptic curves of one's choice.~The random derivation of elliptic curve parameters ensures trust and transparency in the implementation of elliptic curves \cite{bernstein2015manipulate}.~The details of the two elliptic curves selected for use in the KCS-PRNG are presented in Section $\ref{sec7}$.~One can create a look-up table consisting of elliptic curves of 256-bit field order of one's choice for use in the KCS-PRNG.~The generation mechanism of elliptic curves is outside the scope of this paper due to space limitations.
\subsection{Selection of clock-controlled LFSRs} \label{SelLFSR}
The proposed KCS-PRNG is targeted for integration in the operating system kernel and is therefore implemented in software. However, implementation of an LFSR in software is slower than its hardware implementation \cite{schneier2007applied,mukhopadhyay2006application}. To address this performance issue, the Galois scheme is selected for optimal performance of the LFSRs in software, without compromising the LFSR period or its cryptographic properties \cite{schneier2007applied}. The chosen Galois configuration also saves excess operations, as all the XOR operations are performed as a single operation \cite{schneier2007applied}. A nonlinear Sequence Generator consisting of nine LFSRs $L_1, L_2, L_3, L_4, L_5, L_6, L_7, L_8$ and $L_9$ with corresponding primitive polynomial degrees 29, 31, 37, 41, 43, 47, 53, 59 and 61 respectively is designed. The primitive polynomials for the feedback functions of these LFSRs are
\begin{center}
$L_1 = x^{29}+x^{25}+x^{21}+x^{17}+x^{14}+x^{10}+x^{6}+x^3+1$, \\
$L_2 = x^{31}+x^{27}+x^{23}+x^{19}+x^{15}+x^{11}+x^{7}+x^3+1$, \\
$L_3 = x^{37}+x^{32}+x^{27}+x^{23}+x^{18}+x^{13}+x^{9}+x^5+1$, \\
$L_4 = x^{41}+x^{36}+x^{31}+x^{26}+x^{20}+x^{15}+x^{10}+x^5+1$, \\
$L_5 = x^{43}+x^{37}+x^{31}+x^{25}+x^{20}+x^{15}+x^{10}+x^5+1$, \\
$L_6 = x^{47}+x^{41}+x^{35}+x^{29}+x^{23}+x^{17}+x^{11}+x^5+1$, \\
$L_7 = x^{53}+x^{46}+x^{40}+x^{33}+x^{26}+x^{19}+x^{13}+x^7+1$, \\
$L_8 = x^{59}+x^{52}+x^{44}+x^{36}+x^{29}+x^{22}+x^{14}+x^7+1$, \\
$L_9 = x^{61}+x^{53}+x^{45}+x^{38}+x^{30}+x^{23}+x^{15}+x^7+1$.
\end{center}
These primitive polynomials used by the nine LFSRs have uniformly distributed feedback coefficients selected from \cite{rajski2003primitive}. The nine LFSRs $L_1, L_2, \cdots, L_9$ are divided into three groups, called Sequence Generator 1 ($SG_1$), Sequence Generator 2 ($SG_2$) and Sequence Generator 3 ($SG_3$). $SG_1$ has three LFSRs $L_1, L_2$ and $L_3$, whose output streams $x_1, x_2$ and $x_3$ are combined nonlinearly using the nonlinear function
\begin{equation} \label{nonLinearFn}
y_1: f(x_1, x_2, x_3) = x_1x_2 \oplus x_2x_3\oplus x_3x_1 \vspace{1mm}
\end{equation}
The resulting sequence $y_1$ has period $(2^{L_1}-1)(2^{L_2}-1)(2^{L_3}-1)$ and linear complexity $(L_1L_2+L_2L_3+L_1L_3)$.~Similarly, the period and linear complexity of the sequence $y_2$ generated from $SG_2$ using $L_4$, $L_5$, $L_6$ are $(2^{L_4}-1)(2^{L_5}-1)(2^{L_6}-1)$ and $(L_4L_5+L_5L_6+L_6L_4)$ respectively, whereas the period and linear complexity of the sequence $y_3$ generated from $SG_3$ using $L_7$, $L_8$, $L_9$ are $(2^{L_7}-1)(2^{L_8}-1)(2^{L_9}-1)$ and $(L_7L_8+L_8L_9+L_9L_7)$ respectively.~It may be noted that the initial states of all nine LFSRs together comprise $\sum_{i=1}^{9} L_i = 401$ bits.
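The Galois-mode stepping and the combining function of equation ($\ref{nonLinearFn}$) can be illustrated as follows. This is a toy sketch, not the kernel implementation: the 3-bit register and tap mask below are illustrative stand-ins, not one of the nine degree-29 to degree-61 polynomials above.

```python
# Toy sketch (not the kernel implementation): a right-shift Galois LFSR
# step and the nonlinear combiner used inside each Sequence Generator.

def galois_step(state, taps):
    """One Galois-mode clock: output the LSB, shift right, and, when the
    output bit is 1, apply all feedback taps in a single XOR."""
    out = state & 1
    state >>= 1
    if out:
        state ^= taps            # all XORs performed as a single operation
    return out, state

def combine(x1, x2, x3):
    """Nonlinear combining function y = x1x2 XOR x2x3 XOR x3x1."""
    return (x1 & x2) ^ (x2 & x3) ^ (x3 & x1)

# A 3-bit maximal-length example (illustrative taps): starting from any
# nonzero state, the register revisits it after 2^3 - 1 = 7 clocks.
state, period = 1, 0
while True:
    _, state = galois_step(state, 0b110)
    period += 1
    if state == 1:
        break
```

The single-XOR feedback is precisely what makes the Galois form faster in software than the straightforward Fibonacci form.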
\medskip
$SG_1$, $SG_2$ and $SG_3$ are configured in alternating step scheme to provide high linear complexity and large period of the Sequence Generator \cite{menezes2018handbook}. $SG_1$ is considered as the Controller of the Sequence Generator in the alternating step mode. It is known that the linear complexity $LC(x)$ of the overall alternating step generator is bounded as follows \cite{menezes2018handbook}:
\begin{equation} \label{linComplexity}
(LC_2 + LC_3)^{2LC_1-1} < LC(x) \leq (LC_2 + LC_3)^{2LC_1}
\end{equation}
where $LC_1, LC_2$ and $LC_3$ are the linear complexities of $SG_1, SG_2$ and $SG_3$ respectively. The Alternating Step Sequence Generator used in the proposed KCS-PRNG is depicted in Figure $\ref{fig_AlterStepSeqGen}$ and described in Algorithm $\ref{seqGen}$ \cite{menezes2018handbook}.
\begin{figure*}[!h]
\centering
\includegraphics[width=5.3in]{AlterStepSeqGen}\vspace*{-2mm}
\caption{Sequence Generator in Alternating Step Configuration}
\label{fig_AlterStepSeqGen} \vspace*{5mm}
\end{figure*}
\begin{algorithm}[h!]
\caption{Alternating Step Sequence Generator using Clock-controlled LFSRs}
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\KwIn{Sequence Generators $SG_1, SG_2$ and $SG_3$; desired bit length $n$}
\KwOut{$n$-bit output sequence}
\For{$i \gets 1$ \textbf{to} $n$ } {
$SG_1$ is clocked \\
\eIf{ $SG_1 == 1$} {
$SG_2$ is clocked. \tcc{$SG_3$ is not clocked but its previous output bit is repeated.~In case of the first clock cycle, previous output bit of $SG_3$ is taken as 0.}}
{ $SG_3$ is clocked \tcc{$SG_2$ is not clocked but its previous output bit is repeated.~In~case of the first clock cycle, previous output bit of $SG_2$ is taken as 0.}
}
\KwRet{$y_2$ $\oplus$ $y_3$} \tcp{Output of Sequence Generator in alternating step}
}\label{seqGen}
\end{algorithm}
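The clocking discipline of Algorithm $\ref{seqGen}$ can be sketched as follows. This is a minimal sketch in which the three generators are abstracted as zero-argument callables returning one bit per clock; the names are illustrative.

```python
# Minimal sketch of the alternating-step scheme: the controller decides
# which of the other two generators is clocked; the generator that is not
# clocked repeats its previous output bit (0 on the first clock cycle).

def alternating_step(sg1, sg2, sg3, n):
    """sg1, sg2, sg3: zero-argument callables returning one bit per clock."""
    y2 = y3 = 0                      # previous output bits start at 0
    out = []
    for _ in range(n):
        if sg1() == 1:
            y2 = sg2()               # SG2 clocked; SG3 repeats y3
        else:
            y3 = sg3()               # SG3 clocked; SG2 repeats y2
        out.append(y2 ^ y3)          # output bit: y2 XOR y3
    return out
```

For instance, with the controller stuck at 1 only the second generator is ever clocked, and the output equals its stream XORed with a constant 0.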
\begin{figure*}[!h]
\vspace*{5mm}
\centering
\includegraphics[width=5.2in]{KCSPRNG}
\caption{The proposed KCS-PRNG Architecture}
\label{fig_KCSPRNG}
\end{figure*}
\subsection{The proposed novel KCS-PRNG architecture} \label{architecture}
The proposed KCS-PRNG architecture is shown in Figure $\ref{fig_KCSPRNG}$. The KCS-PRNG uses a Field converter, Elliptic curve Point Multiplication and a Selector in addition to the Sequence Generator and two elliptic curves in its design.
\begin{algorithm}[h!] \caption{Selection of 2 Elliptic curves}
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\KwIn{Look-up Table $\mathcal{T}$($EC_n$, $EC_n\_ID\_Status$) where $n$ is number of elliptic curves}
\KwOut{Elliptic curves $EC_r$, $EC_s$ from $\mathcal{T}$ where $r, s \in [1,n]$ and $r \neq s$}
Count $m$ \tcp{Number of elliptic curves with $EC_n\_ID\_Status=0$ in $\mathcal{T}$}
\eIf{$m \geq 2$} {
Fetch $EC_r$, $EC_s$ from $\mathcal{T}$ where $EC_r\_ID\_Status=0$ and $EC_s\_ID\_Status=0$ \\
Set $EC_r\_ID\_Status \longleftarrow 1, EC_s\_ID\_Status \longleftarrow 1$ \\
Update $\mathcal{T}$ \\
\KwRet{$EC_r, EC_s$}
}
{
Set $EC_n\_ID\_Status=0$ $\forall$ $n$ in $\mathcal{T}$ \\
Go to previous step
}
\label{ecsSel}
\end{algorithm}
The two elliptic curves are selected using the procedure shown in Algorithm $\ref{ecsSel}$. A look-up table $\mathcal{T}$ with tuples ($EC$, $EC\_ID\_Status$) is created, where $EC$ is the elliptic curve and $EC\_ID\_Status$ is a flag marked 0 for an `un-used curve' and 1 for a `used curve'. $\mathcal{T}$ initially consists of 256 elliptic curves, which are randomly generated, cryptographically secure, non-standard curves. All elliptic curves in $\mathcal{T}$ are initially marked with $EC\_ID\_Status=0$. On each reboot, the proposed KCS-PRNG picks two elliptic curves from $\mathcal{T}$ using Algorithm $\ref{ecsSel}$ and sets $EC\_ID\_Status=1$ for both used curves in $\mathcal{T}$. The advantage of $\mathcal{T}$ is that even if the same seed (entropy) is supplied to the proposed KCS-PRNG on reboot of the generator, two new elliptic curves with $EC\_ID\_Status=0$ will be selected from $\mathcal{T}$. The change of elliptic curves on each reboot of the KCS-PRNG changes the final output by altering the masking value between the output bits of the elliptic curves and the Sequence Generator. Hence, entirely unrelated bitstreams are obtained as the output of the proposed generator even with exactly the same seed as input. When all elliptic curves in $\mathcal{T}$ have been used, the $EC\_ID\_Status$ flags of all curves in $\mathcal{T}$ are reset to $0$ in order to maintain an unblocked supply of elliptic curves to the KCS-PRNG. More elliptic curves can be inserted into $\mathcal{T}$ to further support the `non-reproducibility' property $R\ref{R4}$ of the KCS-PRNG: the degree to which requirement $R\ref{R4}$ is met is directly proportional to the number of un-used elliptic curves available in $\mathcal{T}$. This idea enables the proposed KCS-PRNG to meet the RNG requirement $R\ref{R4}$ to a practical extent.
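The bookkeeping of Algorithm $\ref{ecsSel}$ can be sketched as follows; the table structure (a mapping from curve identifier to status flag) is a hypothetical stand-in for $\mathcal{T}$.

```python
# Sketch of the curve-selection bookkeeping: fetch two distinct elliptic
# curves whose status flag is 0 and mark them used; when fewer than two
# unused curves remain, reset every flag to 0 first.

def select_curves(table):
    """table: dict mapping curve identifier -> EC_ID_Status (0 or 1)."""
    unused = [c for c, status in table.items() if status == 0]
    if len(unused) < 2:
        for c in table:              # all curves consumed: reset the flags
            table[c] = 0
        unused = list(table)
    r, s = unused[0], unused[1]      # two distinct unused curves
    table[r] = table[s] = 1          # mark both as used
    return r, s
```

Because the reset happens before the fetch, the generator is never starved of curves even when $\mathcal{T}$ has been exhausted.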
\subsection{Initialization of KCS-PRNG} \label{initialization}
The proposed KCS-PRNG uses two phases of pseudorandom bitstream generation. In the first phase, the Sequence Generator is initialized, whereas in the second phase, the desired number of bits of the pseudorandom sequence is generated using the Sequence Generator and the elliptic curves. The initialization phase involves two stages: first, loading the key and initialization vector (IV) into the generator and, second, diffusing the key-IV pair across the entire state of the Sequence Generator \cite{teo2013analysis}, as shown in Figure $\ref{init1}$ and Figure $\ref{init2}$ and described in Algorithm $\ref{init}$.
\begin{figure}
\centering
\includegraphics[width=2.5in]{init1}
\caption{Initialization Stage 1: Loading and diffusion of the key}
\label{init1}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=6in]{init2}
\caption{Initialization Stage 2: Loading of IV}
\label{init2}
\end{figure*}
\begin{algorithm}[h!] \small
\caption{Initialization of Sequence Generator}
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\KwIn{401-bit entropy for Key and 173-bit entropy for Initialization Vector (IV)}
\KwOut{Initialized Sequence Generator}
\tcp{Stage 1: Loading LFSRs from the input Key}
Initialize $SG_1, SG_2$ and $SG_3$ with 401-bit Key \\
\If{ MSB of any LFSR == 0} {
Ensure MSB of LFSR as 1
}
\tcp{Stage 2: Diffusion of key into all LFSRs states}
Clock Sequence Generator 128 times
\tcp{Stage 1: Loading 173-bit IV to $SG_1$}
\For{$i \gets 1$ \textbf{to} $173$} {
Clock $SG_1$ with feedback = Feedback bit $\oplus$ IV bit $\oplus$ output bit of Sequence Generator
}
\tcp{Stage 2: Diffusion of IV into all LFSRs states in $SG_1$}
Clock Sequence Generator 128 times\\
\If{ MSB of the Sequence Generator == 0} {
Ensure MSB of the Sequence Generator as 1 }
\KwRet{Initialized Sequence Generator}
\label{init}
\end{algorithm}
\medskip
Algorithm $\ref{init}$ takes 574 entropy bits, harvested from various physical non-deterministic noise sources, and generates a 401-bit key and a 173-bit Initialization Vector (IV). The key is first loaded in parallel into $SG_1, SG_2$ and $SG_3$ of the Sequence Generator.~It is ensured that the Most Significant Bits (MSBs) of $L_1, L_2$ and $L_3$ are set to 1. The Sequence Generator is then clocked 128 times so that the key is diffused across the entire states of all nine LFSRs $L_1, L_2, \cdots, L_9$, and a new state of the Sequence Generator is obtained, as shown in Figure $\ref{init1}$. Further, a 173-bit IV is loaded into $L_1, L_2$ and $L_3$ of $SG_1$ bitwise, by XORing with the feedback bit of the LFSR and the output bit of the Sequence Generator, as shown in Figure $\ref{init2}$. The Sequence Generator is once again clocked 128 times to diffuse the IV completely among the LFSRs in $SG_1$, yielding entirely new states for all nine LFSRs. It is ensured that the MSBs of all nine LFSRs $L_1, L_2, \cdots, L_9$ are set to 1. Finally, the initialized Sequence Generator is returned.
\subsection{KCS-PRNG bitstream generation}
\begin{algorithm}[!b]\small
\caption{Elliptic curve point multiplication}
\KwIn{Secrets $P_{r_1}$ and $P_{r_2}$ for 2 elliptic curves}
\KwOut{Points $P_{b_1}$ and $P_{b_2}$ of 2 elliptic curves in integer form}
$P_{b_1} \longleftarrow G_1 \times P_{r_1}$ \tcc{$G_1$ is the base point selected on first elliptic curve and $P_{b_1}$ is the $x$-coordinate of the resultant point}
$P_{b_1} \longleftarrow Integer(P_{b_1})$ \tcp{Function to transform field to integer}
$P_{b_2}\longleftarrow G_2 \times P_{r_2}$ \tcc{$G_2$ is the base point selected on second elliptic curve and $P_{b_2}$ is the $x$-coordinate of the resultant point}
$P_{b_2} \longleftarrow Integer(P_{b_2})$\\
\KwRet{$P_{b_1}$, $P_{b_2}$}
\label{pubKeyGen}
\end{algorithm}
\begin{algorithm}[!ht]
\caption{The proposed KCS-PRNG bitstream generation}
\footnotesize{
\KwIn{Desired length of bitstream $n$, ($574 \times r$)-bit entropy for key and IV, where $r = \left\lceil \frac{n}{100000} \right\rceil$ is the number of (re)seedings required for KCS-PRNG}
\KwOut{$n$-bit cryptographically secure pseudorandom bitstream}
Run Algorithm $\ref{ecsSel}$ to select two elliptic curves from $\mathcal{T}$ \\
\tcp{Perform (Re)seeding to initialize the Sequence Generator}
Run Algorithm $\ref{init}$ with input of 401-bit key and 173-bit IV to initialize the Sequence Generator\\
Run Algorithm $\ref{seqGen}$ to generate 256-bit sequence $z_1$\\
Transform $z_1$ into field element $P_{r_1}$ of first elliptic curve using field converter\\
Run Algorithm $\ref{pubKeyGen}$ with $P_{r_1}$ as input to generate the integer $P_{b_1}$\\
Run Algorithm $\ref{seqGen}$ to generate 256-bit sequence $z_2$\\
Transform $z_2$ into field element $P_{r_2}$ of second elliptic curve using field converter\\
Run Algorithm $\ref{pubKeyGen}$ with $P_{r_2}$ as input to generate the integer $P_{b_2}$\\
Set $countSel = 1$\\
Set $bitCount = 1$\\
Set $t = 1$ where $t = 1$ to $\left\lceil \frac{n}{256} \right\rceil$\\
\For{$i \gets 1$ \textbf{to} $\left\lceil \frac{n}{256} \right\rceil$} {
\If{$countSel == t \times 256$ } {
\tcp{Use Selector to select between the two elliptic curves}
\eIf{$t$ is even }{
Set $el = P_{b_2}$
}
{Set $el = P_{b_1}$ }
$countSel = 0$ \\
$t^{++}$
}
Clock Sequence Generator 256 times to generate 256-bit sequence $s$ \\
\eIf{$n < 256$}{
\KwRet{$X \oplus i^{th}$ position of $el$ from LSB ($i=0$) to MSB ($i=255$) where $X$ is 1-bit output from Sequence Generator and $i$ = 0 to 255} \tcp{Output of KCS-PRNG}}
{\KwRet{$el \oplus s$} \tcp{Output of KCS-PRNG}
}
$i^{++}$ \\
\If{$i == 255$ }{
$i = 0$
}
$countSel^{++}$\\
$bitCount^{++}$ \\
\If{$bitCount == j \times 100000$ where $j = 1$ to $r$}
{
$n = n - (j \times 100000)$ \\
$j^{++}$\\
\tcp{Reseed the KCS-PRNG on every 100000 bits of output}
Go to reseeding step in the beginning }
}
}
\label{bitGen}
\end{algorithm}
The Sequence Generator generates two sequences $z_1$ and $z_2$ of 256 bits each, which are used as inputs by the field converter. The field converter transforms $z_1$ and $z_2$ into integers and then into the field elements $P_{r_1}$ and $P_{r_2}$ of the two elliptic curves. These field elements, the secrets $P_{r_1}$ and $P_{r_2}$, are given as inputs to the two elliptic curve point multiplication functions described in Algorithm $\ref{pubKeyGen}$. The secrets $P_{r_1}$ and $P_{r_2}$ are multiplied by their corresponding base points $G_1$ and $G_2$, which yields a new point on each of the elliptic curves. The $x$-coordinates of the two points obtained, after transformation from field elements, are the two integers $P_{b_1}$ and $P_{b_2}$. A selector is used to switch between the outputs of the two elliptic curve point multiplication functions to double the key space offered by the proposed KCS-PRNG.
\medskip
Algorithm $\ref{bitGen}$ describes the cryptographically secure pseudorandom bitstream generation scheme of the proposed KCS-PRNG. Initially, two elliptic curves with hard ECDLP are selected from $\mathcal{T}$. The Sequence Generator is then initialized with the 401-bit key and 173-bit IV as discussed in Algorithm $\ref{init}$. The Sequence Generator is clocked 256 times to generate the 256-bit sequence $z_1$. Further, $z_1$ is converted into a field element of the first elliptic curve and taken as the secret $P_{r_1}$.~The integer $P_{b_1}$ is generated by the elliptic curve point multiplication function taking the secret $P_{r_1}$ as input. Similarly, the integer $P_{b_2}$ is generated from the second elliptic curve point multiplication function. The Sequence Generator continuously generates sequences until $n$ bits are produced, as bounded by the $\left\lceil \frac{n}{256} \right\rceil$-iteration loop. The proposed KCS-PRNG uses a selector to iteratively select between $P_{b_1}$ and $P_{b_2}$. The Sequence Generator is then clocked 256 times to generate a 256-bit sequence $s$. The integer $P_{b_1}$ or $P_{b_2}$ is masked with $s$ to produce 256 bits of KCS-PRNG output. If $n < 256$, then each output bit of the Sequence Generator is masked with one bit of $P_{b_1}$ or $P_{b_2}$ (as decided by the selector), traversing from its Least Significant Bit (LSB) to its MSB, and the result is returned. Once the MSB of $P_{b_1}$ or $P_{b_2}$ has been used, the masking of the Sequence Generator output resumes from the LSB in rotating fashion. The KCS-PRNG is reseeded on every 100000 bits of output to maintain backward secrecy, as shown in Algorithm $\ref{bitGen}$.
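The masking step at the heart of the generation loop can be sketched as follows. This is a simplified sketch: `el` stands for the selected integer $P_{b_1}$ or $P_{b_2}$, and the LSB-first bit-extraction convention is illustrative.

```python
# Simplified sketch of the output masking: each Sequence Generator bit is
# XORed with one bit of the selected elliptic-curve integer el, traversing
# el from LSB to MSB and wrapping around in rotating fashion (the n < 256
# case described above).

def mask_bits(el, sg_bits):
    """XOR Sequence Generator bits against el's 256 bits, LSB first."""
    return [b ^ ((el >> (i % 256)) & 1) for i, b in enumerate(sg_bits)]
```

Replacing the curves on reboot changes `el`, so the same Sequence Generator stream yields an unrelated masked output.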
\subsection{Assumptions}
The following assumptions are made in the proposed design of KCS-PRNG:
\begin{itemize}
\itemsep=0.85pt
\item{KCS-PRNG always maintains 574-bit initial entropy.}
\item{KCS-PRNG expects high per-bit entropy ($\approxeq 1$) for initialization. The generation details of the entropy used in KCS-PRNG are outside the scope of this work.}
\item{The Key and IV are parts of the seed; hence, they are immediately shredded after use and are non-recoverable.}
\item{(Re)keying and (Re)IVing are done using different TRNGs or entropy harvesters based on various physical noise sources.}
\item{Elliptic curves used in KCS-PRNG are randomly generated, cryptographically safe and trustworthy.}
\item{Look-up Table $\mathcal{T}$ has authorized access only.}
\end{itemize}
\section{Security analysis of the proposed KCS-PRNG} \label{sec5}
\subsection{Linear complexity analysis}
Let the linear complexities of the Sequence Generators $SG_1, SG_2$ and $SG_3$ be $LC_1, LC_2$ and $LC_3$ respectively; following equation ($\ref{nonLinearFn}$), they are given by
\begin{equation} \label{linSGs}
\begin{array}{lll}
LC_1 = L_1L_2 + L_2L_3 + L_1L_3 = 3119 \\
LC_2 = L_4L_5 + L_5L_6 + L_4L_6 = 5711 \\
LC_3 = L_7L_8 + L_8L_9 + L_7L_9 = 9959
\end{array}
\end{equation}
where $L_1, L_2, \cdots, L_9 $ are the lengths of the LFSRs.
\eject
Moreover, while $SG_1$ is clocked regularly, $SG_2$ and $SG_3$ are connected in alternating step configuration. Thus, following equation ($\ref{linComplexity}$), the overall linear complexity ($LC$) of the scheme is given by
\begin{equation} \label{linSG}
\begin{array}{l}
(5711 + 9959)^{2 \times 3119-1} < LC(x) \leq (5711 + 9959)^{2 \times 3119} \\
\implies 15670^{6237} < LC(x) \leq 15670^{6238}
\end{array}
\end{equation}
It is imperative to note that the Sequence Generator of the proposed KCS-PRNG exhibits an exponentially large linear complexity, as demonstrated in equation ($\ref{linSG}$), and therefore the proposed generator is resistant to the Berlekamp-Massey attack \cite{menezes2018handbook}.
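The arithmetic in equations ($\ref{linSGs}$) and ($\ref{linSG}$) can be checked directly (a verification sketch):

```python
# Verification of the linear-complexity arithmetic: LC_1, LC_2, LC_3 from
# the nine LFSR degrees, and the alternating-step bounds on LC(x).
L = [29, 31, 37, 41, 43, 47, 53, 59, 61]

LC1 = L[0]*L[1] + L[1]*L[2] + L[0]*L[2]   # SG1 uses L1, L2, L3
LC2 = L[3]*L[4] + L[4]*L[5] + L[3]*L[5]   # SG2 uses L4, L5, L6
LC3 = L[6]*L[7] + L[7]*L[8] + L[6]*L[8]   # SG3 uses L7, L8, L9

lower = (LC2 + LC3) ** (2*LC1 - 1)        # 15670^6237
upper = (LC2 + LC3) ** (2*LC1)            # 15670^6238
```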
\begin{table}[!b]
\vspace*{-2mm}
\centering
\scriptsize{
\caption{Correlation test of the proposed KCS-PRNG.} \label{corrTest}\vspace*{-3mm}
\begin{tabular}{|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}} sstring-AutoCor test \end{tabular} & \begin{tabular}[c]{@{}l@{}}$N=1, n=1048513, r=0, s=32, d=1$ \end{tabular} \\ \hline \hline
Normal statistic & 0.41 \\ \hline
p-value of test & 0.34 \\ \hline
Number of bits used & 1048544 \\ \hline
Result & Passed the test \\ \hline
sstring-AutoCor test & $N=1, n=1048514, r=0, s=32, d=2$ \\ \hline
Normal statistic & 0.80 \\ \hline
p-value of test & 0.21 \\ \hline
Number of bits used & 1048544 \\ \hline
Result & Passed the test \\ \hline \hline
sstring-HammingCorr test & $N=1, n=32768, r=0, s=32, L=32$ \\ \hline \hline
Normal statistic & -0.56 \\ \hline
p-value of test & 0.71 \\ \hline
Number of bits used & 1048576 \\ \hline
Result & Passed the test \\ \hline \hline
sstring-HammingCorr test & $N=1, n=16384, r=0, s=32, L=64$ \\ \hline \hline
Normal statistic & 0.45 \\ \hline
p-value of test & 0.33 \\ \hline
Number of bits used & 1048576 \\ \hline
Result & Passed the test \\ \hline \hline
sstring-HammingCorr test & $N=1, n=8192, r=0, s=32, L=128$ \\ \hline \hline
Normal statistic & 1.57 \\ \hline
p-value of test & 0.06 \\ \hline
Number of bits used & 1048576 \\ \hline
Result & Passed the test \\ \hline
\end{tabular}
}
\end{table}
\subsection{Correlations test}
We conducted two correlation tests on random bitstreams generated by the proposed KCS-PRNG to verify the absence of correlation in the bitstream. The first test was the serial or autocorrelation test ($sstring$-$AutoCor$), which measures the correlation between bits at lag $d$ \cite{l2007testu01}. In this test, an $n$-bit string is generated by the KCS-PRNG at the first level, and the test statistic, which has a binomial distribution, is approximately standard normal for large $n-d$. The restrictions imposed were $r+s \leq 32$ and $1 \leq d \leq \lfloor \frac{n}{2} \rfloor$, where $r$ is the number of MSBs eliminated from the output before applying the test, $s$ is the number of MSBs chosen from each generated random number, and $N$ is the second-level number of replications \cite{l2007testu01,l2013testu01}. The second test, the Hamming correlation test ($sstring$-$HammingCorr$) \cite{walker2008ent}, was used to measure bitwise correlation in a 1 GB random bitstream file generated by the proposed KCS-PRNG; the correlation was estimated to be 0.000034, very close to the ideal value of 0.0. We therefore conclude that the proposed KCS-PRNG design has no correlation issues; the results are shown in Table $\ref{corrTest}$.
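For intuition, the quantity behind the autocorrelation test can be sketched as follows. This is a simplified analogue of the TestU01 $sstring$-$AutoCor$ statistic, not its exact implementation.

```python
# Simplified analogue of the lag-d autocorrelation statistic: the XOR sum
# of bits d positions apart. For an ideal random n-bit string this count
# is close to (n - d) / 2; large deviations indicate correlation.

def autocor_count(bits, d):
    """Number of positions i with bits[i] != bits[i + d] (XOR sum)."""
    return sum(bits[i] ^ bits[i + d] for i in range(len(bits) - d))
```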
\subsection{Period analysis (validation of requirement R$\ref{R1}$)}
The Sequence Generator used in the KCS-PRNG comprises nine LFSRs whose lengths $L_1, L_2, \cdots, L_9$ are pairwise coprime. Hence, the period ($P$) of the Sequence Generator is given by
\begin{equation} \label{period1}
P = \prod\limits_{i=1}^9 (2^{L_i} - 1)
\end{equation}
As the output of the Sequence Generator is masked with the integer obtained from the $x$-coordinate of the public key of one of the two elliptic curves, therefore, the period $\mathcal{P}$ of the proposed KCS-PRNG is given by
\begin{equation} \label{periodKCSPRNG}
\mathcal{P} = \left\{
\begin{array}{ll}
N_1 \times (\prod\limits_{i=1}^9 (2^{L_i} - 1)) & \quad if (n \leq 256) \\
(N_1 + N_2) \times (\prod\limits_{i=1}^9 (2^{L_i} - 1)) & \quad if (n > 256)
\end{array}
\right.
\end{equation}
where $n$ is the number of output bits, $N_1$ and $N_2$ are the orders of the two elliptic curves, and $N_1 < N_2$.
\medskip
It is evident from equation ($\ref{periodKCSPRNG}$) that the period $\mathcal{P}$ of the proposed KCS-PRNG lies approximately in the range $[N_1 \times 2^{401}, (N_1 + N_2) \times 2^{401}]$ per boot, which enables the proposed KCS-PRNG to generate very large bitstreams without compromising the statistical properties of randomness.
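The coprimality of the LFSR lengths and the bound $P \approx 2^{401}$ from equation ($\ref{period1}$) can be verified numerically (a verification sketch):

```python
# Check of the period expression P = prod(2^Li - 1): the nine lengths are
# pairwise coprime (they are distinct primes), and P is a 401-bit number,
# i.e. slightly below 2^401 = 2^(29+31+...+61).
from math import gcd, prod

lengths = [29, 31, 37, 41, 43, 47, 53, 59, 61]
assert all(gcd(a, b) == 1 for a in lengths for b in lengths if a != b)

P = prod(2**Li - 1 for Li in lengths)
```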
\subsection{Key space analysis}
It is evident from equation ($\ref{period1}$) that the Sequence Generator in KCS-PRNG has a period of approximately $2^{401}$ and thus provides a $2^{401}$ key space if the generator is seeded once and never reseeded. Moreover, the KCS-PRNG also uses two elliptic curves, which provide $2^{128}$ and $2^{256}$ key space for $n \leq 256$ and $n > 256$ bits of output respectively against a Pollard's rho attack on the ECDLP. Hence the key space offered by the proposed KCS-PRNG is given by
\begin{equation} \label{keySpace}
\mathcal{K} = \left\{
\begin{array}{ll}
(2^{401} \times 2^{128})^r = 2^{529r} & \quad if (n \leq 256) \\
(2^{401} \times 2^{256})^r = 2^{657r} & \quad if (n > 256)
\end{array}
\right.
\end{equation}
where $r$ is the number of (re)seedings of the KCS-PRNG and $n$ is the number of output bits of the proposed KCS-PRNG.
\medskip
It is imperative to note that the key space offered by the proposed KCS-PRNG depends on the number of times the KCS-PRNG (re)seeds itself in a single boot, and it therefore exhibits a virtually unbounded key space in the range $\mathcal{K} \in [2^{529}, \infty)$, far above the safe key space threshold of $2^{128}$ recommended by \cite{alhadawi2019designing,ii2010ecrypt}. Therefore, the proposed KCS-PRNG comfortably resists brute force attacks.
\subsection{Cross layer attack on kernel PRNG}
A practical attack \cite{klein2021cross} exploiting a weakness in the Linux kernel PRNG was discovered that allowed attackers to mount cross-layer attacks against the Linux kernel and retrieve the internal states of the PRNG.~The internal states of the kernel PRNG were compromised due to its linearity, the same set of instances being shared by consumers of the kernel PRNG, and partial re-seeding issues.~The attackers were able to extract data from one PRNG consumer (network protocols such as IPv4/IPv6, UDP, etc.) in one Open Systems Interconnection (OSI) layer and use it to exploit another PRNG consumer in a different OSI layer.~This weakness in the PRNG also allowed attackers to identify and track both Linux and Android devices.~The compromised kernel was then used to downgrade E-mail security, hijack E-mails, hijack Hyper Text Transfer Protocol (HTTP) traffic, circumvent E-mail anti-spam and blacklisting mechanisms, mount a local Denial of Service (DoS) attack (blackholing hosts), poison reverse Domain Name Server (DNS) resolutions and attack the machine's Network Time Protocol (NTP) client, which is responsible for the machine's clock.
\medskip
It is imperative to note that the compromised internal states of the kernel PRNG enabled the attackers to predict entire random sequences generated by it.~The proposed KCS-PRNG, however, does not allow such leakage of its internal states, owing to its unique design, which leverages a very high degree of non-linearity of the generator (as given in equation ($\ref{linSG}$)) and generates non-reproducible random bit sequences, providing entirely unrelated pseudorandom sequences for each user application.
\section{Experimental validation of the proposed KCS-PRNG} \label{sec6}
\subsection{Experimental validation of requirement $R\ref{R1}$}
\begin{itemize}
\item[i.] NIST statistical test results \newline
The NIST test suite consists of 15 statistical tests that certify the statistical strength of randomness of an RNG. An output bitstream of 1GB file size was generated by the proposed KCS-PRNG and subjected to the NIST statistical test suite SP 800-22 version 2.1.2 \cite{bassham2010sp}. The input block size was set to 1000000 bits with 1000 bitstreams, and the significance level was $\alpha = 0.01$ (i.e., a 99\% confidence level). The proposed KCS-PRNG passed all the NIST statistical tests; the detailed results obtained are given in Table \ref{nist}.
\medskip
The p-value measures randomness and is expected to be greater than 0.01 (the significance level) to conclude that the sequence is uniformly distributed, whereas the proportion, i.e., the minimum pass rate for a test, should fall in the range [0.98056, 0.99943] for a confidence interval with $\alpha$=0.01 and 1000 bitstreams \cite{anandakumar2019fpga}. As indicated in Table \ref{nist}, the proposed KCS-PRNG not only exceeds the pass-rate threshold of 0.98056 but also reports a better pass rate of 0.9896 compared to the pass rates of 0.987 and 0.9887 reported for the TRNG \cite{anandakumar2019fpga} and the PRNG \cite{alhadawi2019designing} respectively.
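The acceptance range quoted above can be recomputed from the standard NIST proportion formula $(1-\alpha)\pm 3\sqrt{\alpha(1-\alpha)/m}$ with $\alpha=0.01$ and $m=1000$ bitstreams; a minimal sketch (our illustration, not part of the original experiments):

```python
import math

def nist_proportion_interval(alpha, m):
    """Acceptance interval for the pass proportion of m bitstreams at
    significance level alpha: (1-a) +/- 3*sqrt(a*(1-a)/m), NIST SP 800-22 style."""
    p = 1 - alpha
    half_width = 3 * math.sqrt(alpha * (1 - alpha) / m)
    return p - half_width, p + half_width

lo, hi = nist_proportion_interval(0.01, 1000)
print(round(lo, 5), round(hi, 5))  # 0.98056 0.99944
```

The lower bound matches the 0.98056 quoted in the text; the upper bound 0.99943 in the text appears to truncate rather than round the last digit.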
\begin{table}[!ht]
\centering
\small{
\caption{NIST test results of the proposed KCS-PRNG output bitstreams of 1GB file size
with the input of 1000000-bit block size and 1000 bitstreams.}
\begin{tabular}{|l|l|l|l|}
\hline
Statistical Test & $P-value$ & Proportion & Result \\ \hline \hline
Frequency & 0.737915 & 0.991 & Pass \\ \hline
Block Frequency & 0.591409 & 0.988 & Pass \\ \hline
CumulativeSums* & 0.680755 & 0.993 & Pass \\ \hline
Runs & 0.281232 & 0.992 & Pass \\ \hline
Longest Run & 0.526105 & 0.996 & Pass \\ \hline
Rank & 0.036113 & 0.996 & Pass \\ \hline
FFT & 0.103138 & 0.990 & Pass \\ \hline
NonOverlappingTemplate* & 0.794391 & 0.990 & Pass \\ \hline
Overlapping & 0.779188 & 0.987 & Pass \\ \hline
Universal & 0.773405 & 0.991 & Pass \\ \hline
Approx Entropy & 0.653773 & 0.989 & Pass \\ \hline
RandomExcursions* & 0.489508 & 0.983 & Pass \\ \hline
RandomExcursionsVariant* & 0.163362 & 0.985 & Pass \\ \hline
Serial* & 0.680755 & 0.988 & Pass \\ \hline
Linear Complexity & 0.682823 & 0.985 & Pass \\ \hline
\end{tabular}
\label{nist} \newline\newline
*Only the result of the first test instance from the original results is shown here, due to space limitations
}
\end{table}
\item[ii.] Diehard test results \cite{marsaglia1998diehard} \newline
Diehard version 3.31.1 conducts a series of statistical tests and determines the p-values of the output bitstreams. The p-values indicate the deviation of bit prediction from the ideally expected probability of one half. The p-value of a test is expected to lie in the range [0.025, 0.975] \cite{bhattacharjee2018search}. The proposed KCS-PRNG passed all the Diehard tests, as shown in Table \ref{diehard}.
\begin{table}[!ht]
\centering
\small{
\caption{Diehard test results of the proposed KCS-PRNG output bitstreams of 1GB file size.}\vspace*{-2mm}
\scalebox{0.9}{
\begin{tabular}{|l |c |c |c |c |c|}
\hline
test-name & ntup & tsamples & psamples & p-value & Assessment\\ \hline \hline
diehard-birthdays & 0 & 100 & 100 & 0.27561288 & Passed \\ \hline
diehard-operm5 & 0 & 1000000 & 100 & 0.13184067 & Passed \\ \hline
diehard-rank-32x32 & 0 & 40000 & 100 & 0.44295780 & Passed \\ \hline
diehard-rank-6x8 & 0 & 100000 & 100 & 0.88076181 & Passed \\ \hline
diehard-bitstream & 0 & 2097152 & 100 & 0.42947798 & Passed \\ \hline
diehard-opso & 0 & 2097152 & 100 & 0.12604767 & Passed \\ \hline
diehard-oqso & 0 & 2097152 & 100 & 0.94641900 & Passed \\ \hline
diehard-dna & 0 & 2097152 & 100 & 0.24390543 & Passed \\ \hline
diehard-count-1s-str & 0 & 256000 & 100 & 0.62287409 & Passed \\ \hline
diehard-count-1s-byt & 0 & 256000 & 100 & 0.91047395 & Passed \\ \hline
diehard-parking-lot & 0 & 12000 & 100 & 0.79390338 & Passed \\ \hline
diehard-2dsphere & 2 & 8000 & 100 & 0.17731451 & Passed \\ \hline
diehard-3dsphere & 3 & 4000 & 100 & 0.45129204 & Passed \\ \hline
diehard-squeeze & 0 & 100000 & 100 & 0.53561994 & Passed \\ \hline
diehard-sums & 0 & 100 & 100 & 0.94209561 & Passed \\ \hline
diehard-runs* & 0 & 100000 & 100 & 0.14811353 & Passed \\ \hline
diehard-craps* & 0 & 200000 & 100 & 0.92115680 & Passed \\ \hline
marsaglia-tsang-gcd* & 0 & 10000000 & 100 & 0.53120802 & Passed \\ \hline
sts-monobit & 1 & 100000 & 100 & 0.64501072 & Passed \\ \hline
sts-runs & 2 & 100000 & 100 & 0.94961272 & Passed \\ \hline
sts-serial* & 1 & 100000 & 100 & 0.62077367 & Passed \\ \hline
rgb-bitdist* & 1 & 100000 & 100 & 0.95378266 & Passed \\ \hline
rgb-minimum-distance* & 2 & 10000 & 1000 & 0.87517368 & Passed \\ \hline
rgb-permutations* & 2 & 100000 & 100 & 0.75286377 & Passed \\ \hline
rgb-lagged-sum* & 0 & 1000000 & 100 & 0.00308570 & Passed \\ \hline
rgb-kstest-test & 0 & 10000 & 1000 & 0.03414230 & Passed \\ \hline
dab-bytedistrib & 0 & 51200000 & 1 & 0.17158919 & Passed \\ \hline
dab-dct & 256 & 50000 & 1 & 0.07312246 & Passed \\ \hline
dab-filltree* & 32 & 15000000 & 1 & 0.61801753 & Passed \\ \hline
dab-filltree2* & 0 & 5000000 & 1 & 0.69361846 & Passed \\ \hline
dab-monobit2 & 12 & 65000000 & 1 & 0.42742922 & Passed \\ \hline
\end{tabular} }
\label{diehard} \newline\newline
*Only the result of the first test instance from the original results is shown here, due to space limitations
}
\end{table}
\item[iii.] TestU01 test results \cite{l2007testu01} \newline
TestU01 is regarded as imposing the toughest tests for evaluating the statistical quality of random bitstreams \cite{alhadawi2019designing}. The binary bitstream of 1GB file size generated by the proposed KCS-PRNG was subjected to the Rabbit and Alphabit test batteries of TestU01. The Rabbit and Alphabit batteries, by default, selected 1048576 bits ($2^{20}$ bits) for SmallCrush (a fast statistical test battery) evaluation and applied 38 and 17 statistical tests respectively to the KCS-PRNG output bitstream. The output bitstreams of the KCS-PRNG were found to have p-values within the acceptable range of [0.001, 0.999] \cite{bhattacharjee2018search}, which indicates that the proposed KCS-PRNG exhibits a long period, good structure and non-linearity.
\end{itemize}
\subsection{Validation of requirements R$\ref{R2}$ and R$\ref{R3}$}
\begin{itemize}
\item[i.] Next bit test \newline
This test states that if a sequence of $m$ bits is generated by a generator, there should be no feasible method that can predict the $(m+1)$-th bit with probability significantly higher than one half \cite{lavasani2009practical,Rose2011crypto}. The test concerns the predictability of the successive bits generated by the KCS-PRNG.
Since the KCS-PRNG is reseeded with fresh additional entropy of 574 bits (a 401-bit key and a 173-bit IV), it maintains backward security \cite{ferguson2011cryptography}.
\item[ii.] Test for state compromise extension attacks \newline
This test states that if some state of a generator is leaked to an attacker at a given time, it should not be possible to recover unknown PRNG outputs from that known state \cite{kelsey1998cryptanalytic}. Fundamentally, state compromise extension comprises two kinds of attack: first, a backtracking attack, in which previous outputs of the generator are learned from knowledge of some internal state at a particular time; and second, a permanent compromise attack, which renders all future and past states of the generator vulnerable given knowledge of some state at a given time \cite{kelsey1998cryptanalytic}.
Since the proposed KCS-PRNG is forward secure and provably secure due to the underlying ECDLP intractability, it is resistant to the backtracking attack. Furthermore, as discussed in the next bit test, the proposed KCS-PRNG is (re)seeded after every 100000 bits of output generation; it therefore exhibits backward secrecy and thus resists the permanent compromise attack as well.
\item[iii.] Entropy estimation (experimental validation of requirements R$\ref{R2}$ and R$\ref{R3}$) \newline
Entropy is a measure of unpredictability or uncertainty. For an ideal TRNG, the expected entropy is 1 per bit, which means that each bit value, i.e., `0' or `1', has an equal proportion of 0.5 in the file containing the random bitstream \cite{anandakumar2019fpga}. The 1GB file of random bitstream generated by the proposed KCS-PRNG was subjected to the ENT tool \cite{walker2008ent} for entropy estimation. The observed entropy of the output bitstream was found to be 0.99999975 per bit, which asserts that the design of the KCS-PRNG maintains nearly ideal unpredictability.
\end{itemize}
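The per-bit entropy figure reported by the ENT tool in the entropy estimation test above corresponds to the Shannon entropy of the empirical symbol distribution; a minimal sketch of that estimate (our illustration, not the ENT implementation):

```python
import math
from collections import Counter

def entropy_per_bit(bits):
    """Shannon entropy in bits per symbol of a 0/1 sequence:
    H = -sum p_i * log2(p_i) over the empirical frequencies."""
    n = len(bits)
    return -sum((c / n) * math.log2(c / n) for c in Counter(bits).values())

print(entropy_per_bit([0, 1] * 500))     # 1.0 (perfectly balanced sequence)
print(entropy_per_bit([0] * 999 + [1]))  # ~0.0114 (heavily biased, low entropy)
```

A value of 0.99999975 per bit, as observed for the KCS-PRNG, thus indicates an almost perfectly balanced output distribution.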
\subsection{Experimental validation of requirement R$\ref{R4}$}
\subsubsection{Non-reproducibility test}
The non-reproducibility test is conducted to validate that the RNG requirement $R\ref{R4}$ is met by the proposed KCS-PRNG. The test is conducted by running the generator twice with exactly the same input and verifying that the output sequences are completely unrelated. The authors of \cite{anandakumar2019fpga} refer to the non-reproducibility test as the restart test; they validated the first 20 output bits of their generator six times under identical start conditions. Table \ref{nonReprTest} shows that the proposed KCS-PRNG passed the non-reproducibility test six times by producing six completely unrelated 32-bit outputs from the same inputs to the generator.
Moreover, the KCS-PRNG uses two different elliptic curves on each boot, so the output bitstream is entirely unrelated even when generated under identical start conditions. Hence, it is inferred that the proposed KCS-PRNG generates non-reproducible pseudorandom bitstreams, provided it maintains the minimum number of unused elliptic curves in its look-up table of elliptic curves (i.e., $t+1$, where $t \ge 1$ is the number of (re)boots made by the KCS-PRNG, such that the generator gets at least two unused elliptic curves on each (re)boot).
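Unrelatedness of two equal-length runs can be quantified by their normalized Hamming distance, which should be close to 0.5 for independent random bitstreams. A sketch (our illustrative check, not the authors' test harness), applied to the first two runs of Table \ref{nonReprTest}:

```python
def normalized_hamming(a, b):
    """Fraction of bit positions in which two equal-length bit strings differ;
    values near 0.5 indicate unrelated random bitstreams."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

run1 = "01010100111011111110001110100100"  # first run of KCS-PRNG
run2 = "00010010000100001111001111111110"  # second run of KCS-PRNG
print(normalized_hamming(run1, run2))  # 0.5
```

The two runs differ in exactly half of their 32 positions, consistent with unrelated outputs.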
\clearpage
\begin{table}[!ht]
\vspace*{-2mm}
\centering
\footnotesize{
\caption{Non-reproducibility test of the proposed KCS-PRNG under identical start conditions.}\vspace*{-2mm}
\scalebox{0.88}{
\begin{tabular}{|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}}Key Input (401-bit entropy)\end{tabular} & \begin{tabular}[c]{@{}l@{}}1905119BCDC809077DB45D \\ 1B3921DB5C06D11 C56C7FE \\ B4F8EE935A2FB16B055281816\\ DFC551AC73C3BBF76EE26B13 \\ 0B8F5E68 \end{tabular}\\ \hline
\begin{tabular}[c]{@{}l@{}} IV Input (173-bit entropy)\end{tabular} & \begin{tabular}[c]{@{}l@{}}190B6B491CDD9E97E6AB \\ 26552990F5481183DEF9AE55\end{tabular}\\ \hline \hline
Check & First run of KCS-PRNG \\ \hline
32-bit Output & 01010100111011111110001110100100 \\ \hline
Check & Second run of KCS-PRNG \\ \hline
32-bit Output & 00010010000100001111001111111110 \\ \hline
Check & Third run of KCS-PRNG \\ \hline
32-bit Output & 11000101110001101011100101111101 \\ \hline
Check & Fourth run of KCS-PRNG \\ \hline
32-bit Output & 01101010010110101011000010110101 \\ \hline
Check & Fifth run of KCS-PRNG \\ \hline
32-bit Output & 10110001000111011001101100011011 \\ \hline
Check & Sixth run of KCS-PRNG \\ \hline
32-bit Output & 01001100110010111100010011100110 \\ \hline
\end{tabular} }
\label{nonReprTest}\vspace*{6mm}
}
\centering
\scriptsize{
\caption{Details of first elliptic curve with verification details \cite{dj2015safe} used in the proposed KCS-PRNG}\vspace*{-2mm}
\scalebox{0.98}{ \begin{tabular}{|l |l|}
\hline
Elliptic curve parameter/Validation & Value \\ \hline \hline
Equation Model & Short Weierstrass \\ \hline
Prime field $p$ & 0xEEAA0DB0A46CE48AFCD288C714939E4063E1D801C55D1118202C76798B62B483 \\ \hline
Coefficient $a$ & 0x33866AAA5914BC27D9ED986D7AF431BD8FC217D8E07D5BA5E44C1A4A355C7DD4\!\! \\ \hline
Coefficient $b$ & 0xCAA0537DF123F85EC185A991B7200396B996C7921E6A7E07F08ED2A4801B0CA2 \\ \hline
Co-factor $h$ & 0x1 \\ \hline
Base Point $G_x$ & 0x3FBE1FF3CC8A893B2B018CC7D3D61961233F87F66FCB257D21805D1327426DE9 \\ \hline
Base Point $G_y$ & 0xC5B219E84B008A4CB36CDF05B44E95354913756FCD92251F90BFB0A4F4D84AD8 \\ \hline
$Rho$ & \begin{tabular}[c]{@{}l@{}}127.8 \tcp{Key space of $2^{127.8}$ for Pollard's Rho attack on ECDLP}\end{tabular} \\ \hline
$Twist-rho$ & \begin{tabular}[c]{@{}l@{}} 127.8 \end{tabular} \\ \hline
$Joint-rho$ & \begin{tabular}[c]{@{}l@{}} 127.8 \end{tabular} \\ \hline
$verify-isElliptic$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring elliptic curve} \end{tabular} \\ \hline
$verify-P_{r_1}isOnCurve$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring private key as an elliptic curve group element} \end{tabular} \\ \hline
$verify-P_{b_1}isOnCurve$ & \begin{tabular}[c]{@{}l@{}}True \tcp{Ensuring public key as an elliptic curve group element}\end{tabular} \\ \hline
$verify-safeField$ & \begin{tabular}[c]{@{}l@{}}True \tcp{Elliptic curve defined over a suitable prime field} \end{tabular} \\ \hline
$verify-safeEquation$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring short Weierstrass equation} \end{tabular} \\ \hline
$verify-safeBase$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring base point with prime order}\end{tabular} \\ \hline
$verify-safeRho$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring ECDLP security} \end{tabular} \\
$verify-safeTransfer$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring ECDLP security from MOV attacks} \end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}$verify-safeDiscriminant$ \end{tabular}& \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring cubic curve} \end{tabular} \\ \hline
$verify-safeRigid$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring elliptic curve is generated using explained procedure}\end{tabular} \\ \hline
$verify-safeTwist$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring twist of the elliptic curve is safe} \end{tabular} \\ \hline
$verify-safeCurve$ & \begin{tabular}[c]{@{}l@{}} True \tcp{if and only if all the other validations return `True'} \end{tabular} \\ \hline
\end{tabular} }
\label{EC1Details}
}
\end{table}
\begin{table}[!ht]
\centering
\scriptsize{
\caption{Details of second elliptic curve with verification details \cite{dj2015safe} used in the proposed KCS-PRNG}\vspace*{-2mm}
\scalebox{0.98}{ \begin{tabular}{|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}} Elliptic curve parameter/\\Validation \end{tabular} & \begin{tabular}[c]{@{}l@{}}Value \end{tabular}\\ \hline \hline
\begin{tabular}[c]{@{}l@{}} Equation Model \end{tabular} & \begin{tabular}[c]{@{}l@{}}Short Weierstrass \end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}} Prime field $p$ \end{tabular} & 0xF2A284E729748EA8BE82173F13412FC257C42095408D706528F5D8964BF2E237 \\ \hline
\begin{tabular}[c]{@{}l@{}} Coefficient $a$ \end{tabular} & 0xB29C202E105FE4C7EE5DECAF48258BFAB2E890AF5D96DE4553D82C3EC5D03C06 \\ \hline
\begin{tabular}[c]{@{}l@{}} Coefficient $b$ \end{tabular} & 0xC36BBDD9EE50EF046EA1D4DA85300673531B323B013043F9DC97B2FDD6A807B4 \\ \hline
\begin{tabular}[c]{@{}l@{}} Co-factor $h$ \end{tabular} & 0x1 \\ \hline
\begin{tabular}[c]{@{}l@{}} Base Point $G_x$ \end{tabular} & 0x1216C78C1FB8707C6B7B2496226B6F13CE25347DD9283A36FA354D09E2CDF4C3 \\ \hline
\begin{tabular}[c]{@{}l@{}} Base Point $G_y$ \end{tabular} & 0xA0AC0431A50C5DA5D25DCA1026946A2AADA19756ED326DA85A203B4A0B2BE342 \\ \hline
$Rho$ & \begin{tabular}[c]{@{}l@{}}127.8 \tcp{Key space of $2^{127.8}$ for Pollard's Rho attack on ECDLP} \end{tabular}\\ \hline
$Twist-rho$ & \begin{tabular}[c]{@{}l@{}} 127.8 \end{tabular}\\ \hline
$Joint-rho$ & \begin{tabular}[c]{@{}l@{}} 127.8 \end{tabular} \\ \hline
$verify-isElliptic$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring elliptic curve}\end{tabular} \\ \hline
$verify-P_{r_1}isOnCurve$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring private key as an elliptic curve group element}\end{tabular} \\ \hline
$verify-P_{b_1}isOnCurve$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring public key as an elliptic curve group element} \end{tabular}\\ \hline
$verify-safeField$ & \begin{tabular}[c]{@{}l@{}}True \tcp{Elliptic curve defined over a suitable prime field} \end{tabular}\\ \hline
$verify-safeEquation$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring short Weierstrass equation} \end{tabular}\\ \hline
$verify-safeBase$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring base point with prime order} \end{tabular}\\ \hline
$verify-safeRho$ & \begin{tabular}[c]{@{}l@{}}True \tcp{Ensuring ECDLP security} \end{tabular}\\ \hline
$verify-safeTransfer$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring ECDLP security from MOV attacks}\end{tabular} \\ \hline
$verify-safeDiscriminant$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring cubic curve} \end{tabular}\\ \hline
$verify-safeRigid$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring elliptic curve is generated using explained procedure} \end{tabular}\\ \hline
$verify-safeTwist$ & \begin{tabular}[c]{@{}l@{}} True \tcp{Ensuring twist of the elliptic curve is safe} \end{tabular}\\ \hline
\begin{tabular}[c]{@{}l@{}}$verify-safeCurve$ \end{tabular}& \begin{tabular}[c]{@{}l@{}} True \tcp{if and only if all the other validations return `True'} \end{tabular}\\ \hline
\end{tabular} }
\label{EC2Details}
}\vspace*{2mm}
\end{table}
\section{Details of two elliptic curves used in the proposed KCS-PRNG} \label{sec7}
Elliptic curves over 256-bit prime fields whose ECDLPs are found to be hard and whose method of computation is transparent and trustworthy are selected for use in the proposed KCS-PRNG. The elliptic curves are generated randomly over a 256-bit prime field in order to build trust, as indicated in \cite{bernstein2015manipulate,shumow2007possibility,hales2013nsa,bernstein2013security}. The generation mechanism for cryptographically safe elliptic curves follows \cite{dj2015safe,konstantinou2010efficient,menezes1993reducing,cheng2008hard,bos2016selecting,smart1999discrete,koblitz2004guide}, combined with the procedure suggested in \cite{abhishek2021computation} to achieve trusted security. The proposed KCS-PRNG uses two elliptic curves which are generated randomly and verified for their cryptographic security as per the recommendations in \cite{dj2015safe}. The verification details, against the criteria suggested in \cite{dj2015safe}, of the two elliptic curves selected for the experiments in this work are summarized in Table \ref{EC1Details} and Table \ref{EC2Details} respectively. The look-up table $\mathcal{T}$ used in the proposed KCS-PRNG is initially created with 256 such elliptic curves, as discussed in Section \ref{architecture}.
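Each on-curve and non-singularity check in Tables \ref{EC1Details} and \ref{EC2Details} reduces to evaluating the short Weierstrass relations modulo $p$; a minimal sketch with a small toy curve (illustrative parameters only, not the 256-bit production curves):

```python
def is_elliptic(a, b, p):
    """Non-singularity check: discriminant factor 4a^3 + 27b^2 != 0 (mod p)."""
    return (4 * a ** 3 + 27 * b ** 2) % p != 0

def is_on_curve(x, y, a, b, p):
    """Short Weierstrass membership: y^2 == x^3 + a*x + b (mod p)."""
    return (y * y - (x * x * x + a * x + b)) % p == 0

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); (5, 1) is a well-known point on it.
print(is_elliptic(2, 2, 17), is_on_curve(5, 1, 2, 2, 17))  # True True
```

The production checks in \cite{dj2015safe} additionally cover base-point order, twist security and MOV-transfer resistance, which are not sketched here.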
\section{Results}\label{sec8}
The security analysis carried out in Section $\ref{sec5}$ proved that the proposed KCS-PRNG is provably secure, meets the RNG requirements $R\ref{R1}$ to $R\ref{R4}$, and exhibits a very high per-bit entropy rate, minimal bitwise correlation, high non-linearity with linear complexity $LC(x)$ bounded as $15670^{6237} < LC(x) \leq 15670^{6238}$, a very large period in the range $[N_1 \times 2^{401}, (N_1 + N_2) \times 2^{401}]$ per boot, where $N_1 < N_2$ are the orders of the two elliptic curves used, a huge key space in the range $[2^{529}, \infty)$, and an impressive throughput of 2.5 Megabits per second, as discussed in Section $\ref{sec9}$, for generating uninterrupted cryptographically secure bitstreams.
\medskip
The proposed KCS-PRNG passed all the tests of the NIST, Diehard and TestU01 test suites, along with the other tests validating statistical quality of randomness, cryptographic security and non-reproducibility, as discussed in Section $\ref{sec6}$. The NIST tests also showed that the proposed KCS-PRNG exhibits the highest proportion, i.e., a pass rate of 0.9896, compared to the existing PRNG \cite{alhadawi2019designing} with 0.9887 and the TRNG \cite{anandakumar2019fpga} with 0.987. The KCS-PRNG was demonstrated to exhibit a nearly ideal entropy of 0.99999975 per bit and a minimal serial correlation of 0.000034 in its generated bitstream.
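The serial-correlation figure quoted above is the ENT-style lag-one correlation coefficient of the bitstream; a minimal sketch of that statistic (our illustration, not the ENT source):

```python
import random

def serial_correlation(bits):
    """ENT-style circular lag-1 serial correlation coefficient of a bit sequence;
    near 0 for uncorrelated bits, defined as 1.0 in the degenerate case."""
    n = len(bits)
    sx = sum(bits)
    sxx = sum(b * b for b in bits)
    sxy = sum(bits[i] * bits[(i + 1) % n] for i in range(n))
    num = n * sxy - sx * sx
    den = n * sxx - sx * sx
    return num / den if den else 1.0

print(serial_correlation([0, 1] * 1000))  # -1.0 (perfectly alternating)
random.seed(0)
print(serial_correlation([random.getrandbits(1) for _ in range(100000)]))  # near 0
```

A magnitude as small as 0.000034 is on the order of $1/\sqrt{n}$ for the tested sample, i.e., statistically indistinguishable from zero correlation.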
\section{Performance analysis of the proposed KCS-PRNG} \label{sec9}\vspace{0.5mm}
The proposed KCS-PRNG was run on an Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i7-7700 CPU @ 3.60GHz processor. The source code of the KCS-PRNG was developed in C++ and makes extensive use of the CryptoPP library, version 8.2.1. The KCS-PRNG program was run on Ubuntu 16.04.1 with kernel version 4.15.0-96-generic and was (re)seeded after every 100000 bits of output while generating a 1GB file of pseudorandom bitstream. It gave an impressive throughput of 2.5 Mbps in software, which asserts its high-throughput-oriented design. For kernel applications, the proposed KCS-PRNG offers better security, meeting all the RNG requirements from $R\ref{R1}$ to $R\ref{R4}$, compared to the existing PRNG \cite{alhadawi2019designing} and kernel CSPRNGs like /dev/random \cite{viega2003practical,dodis2013security}, Yarrow \cite{kelsey1999yarrow}, and Fortuna \cite{ferguson2011cryptography}.
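Throughput figures of this kind can be reproduced for any byte generator with a simple wall-clock harness; a sketch (generic measurement code, not the authors' C++ implementation, using Python's `os.urandom` as a stand-in generator):

```python
import os
import time

def throughput_mbps(generate, total_bytes=10_000_000, chunk=65536):
    """Wall-clock throughput of a byte generator in megabits per second."""
    start = time.perf_counter()
    produced = 0
    while produced < total_bytes:
        produced += len(generate(chunk))
    elapsed = time.perf_counter() - start
    return produced * 8 / (elapsed * 1_000_000)

print(f"{throughput_mbps(os.urandom):.1f} Mbps")  # machine-dependent
```

The reported 2.5 Mbps for the KCS-PRNG includes the cost of (re)seeding every 100000 output bits.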
\section{Comparison of proposed KCS-PRNG with recent CSPRNGs for kernel applications} \label{sec10}\vspace{0.5mm}
The proposed KCS-PRNG is designed to meet all the requirements of an RNG as discussed in Section $\ref{subsec4}$. The features of the proposed KCS-PRNG are compared with the popular CSPRNGs used by current operating system kernels and with a recent, well-acknowledged TRNG \cite{anandakumar2019fpga} in Table $\ref{compCSPRNGs1}$. The reason for comparing the KCS-PRNG with a TRNG is that the KCS-PRNG addresses the RNG requirement $R\ref{R4}$, which otherwise only a TRNG meets. Table $\ref{compCSPRNGs1}$ also consolidates the comparison of the KCS-PRNG with an existing oscillator-ring-based TRNG \cite{anandakumar2019fpga}.
\medskip
The KCS-PRNG is compared with the popular kernel CSPRNGs, namely /dev/(u)random used by the Linux and Android kernels, Yarrow used by the MacOS/iOS/FreeBSD kernels and Fortuna used by the Windows kernel, on the basis of various criteria related to cryptographic security, randomness tests and throughput, to assess their suitability for strategic applications such as kernel applications.
\begingroup
\setlength{\tabcolsep}{7.5pt}
\renewcommand{\arraystretch}{1.0}
\begin{landscape} \hbox{}\hspace*{80mm}
\begin{center}
\begin{table}[h!]
\centering
\footnotesize{
\caption{Comparison of the proposed KCS-PRNG with recent Kernel CSPRNGs and TRNG}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{Criterion} & \textbf{/dev/(u)random} & \textbf{Yarrow} & \textbf{Fortuna} & \textbf{KCS-PRNG} & \textbf{TRNG} \\ \hline \hline
\begin{tabular}[c]{@{}l@{}}Hard problem used \end{tabular} & \begin{tabular}[c]{@{}l@{}}ChaCha20 \\Stream cipher \end{tabular} & \begin{tabular}[c]{@{}l@{}}3DES \end{tabular} & \begin{tabular}[c]{@{}l@{}}AES128 in\\ counter mode \end{tabular} & ECDLP & \begin{tabular}[c]{@{}l@{}}Physical property of \\Oscillator-Rings \end{tabular}\\ \hline
Hash function & \begin{tabular}[c]{@{}l@{}}SHA160, MD5 \\ \cite{rock2005pseudorandom} \end{tabular} & SHA160 & SHA256 & SHA256 & \begin{tabular}[c]{@{}l@{}}Not applicable \end{tabular}\\ \hline
\begin{tabular}[c]{@{}l@{}}RNG requirements met\end{tabular} & $R\ref{R1},R\ref{R2}, R\ref{R3}$ & $R\ref{R1},R\ref{R2},R\ref{R3}$ & $R\ref{R1},R\ref{R2},R\ref{R3}$ & \begin{tabular}[c]{@{}l@{}}$R\ref{R1},R\ref{R2},R\ref{R3}$, \\ $R\ref{R4}$ (Mitigated) \end{tabular} & \begin{tabular}[c]{@{}l@{}}$R\ref{R1},R\ref{R2},R\ref{R3}$, $R\ref{R4}$ \end{tabular}\\ \hline
\begin{tabular}[c]{@{}l@{}}Unblocked supply of random\\ bits \end{tabular}& No & No & Yes & Yes & Yes\\ \hline
Correlation Test & *& *& * & \begin{tabular}[c]{@{}l@{}}Passed (serial correlation of \\0.000034) \end{tabular} & Passed\\ \hline
\begin{tabular}[c]{@{}l@{}}Per bit entropy rate \end{tabular}& *& *& * & 0.99999975 & \begin{tabular}[c]{@{}l@{}}0.9993 \end{tabular}\\ \hline
\begin{tabular}[c]{@{}l@{}}Linear complexity $LC(x)$ \end{tabular} & *& *& * & \begin{tabular}[c]{@{}l@{}}$15670^{6237} < LC(x) \leq 15670^{6238}$ \end{tabular} & \begin{tabular}[c]{@{}l@{}}Not applicable \end{tabular}\\ \hline
Period & * & * & \begin{tabular}[c]{@{}l@{}}$2^{128}$ in \\ single call \\ \cite{mcevoy2006fortuna} \end{tabular} & \begin{tabular}[c]{@{}l@{}}$[N_1 \times 2^{401}, (N_1 + N_2) \times 2^{401}]$ \end{tabular} & \begin{tabular}[c]{@{}l@{}}Infinite \end{tabular}\\ \hline
Key space & * & * & * & $[2^{529}, \infty)$ & \begin{tabular}[c]{@{}l@{}}Infinite \end{tabular}\\ \hline
Throughput & 8-12Kbps \cite{rock2005pseudorandom} & \begin{tabular}[c]{@{}l@{}} No results \\\cite{rock2005pseudorandom} \end{tabular} & 7.2 Mbps & 2.5 Mbps & \begin{tabular}[c]{@{}l@{}}6 Mbps on Xilinx \\Spartan-3A FPGA \end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Statistical tests passed \end{tabular} & \begin{tabular}[c]{@{}l@{}}Diehard \cite{rock2005pseudorandom} \end{tabular} & \begin{tabular}[c]{@{}l@{}}Not available \\ \cite{rock2005pseudorandom} \end{tabular} & \begin{tabular}[c]{@{}l@{}}Diehard \cite{mcevoy2006fortuna} \end{tabular} & \begin{tabular}[c]{@{}l@{}}NIST, Diehard, TestU01 \end{tabular} & \begin{tabular}[c]{@{}l@{}}NIST \end{tabular} \\ \hline
NIST proportion obtained & * & * & * & 0.9896 & \begin{tabular}[c]{@{}l@{}}0.987 \end{tabular}\\ \hline
Restart/Non-reproducibility Test & * & * & * & Passed & Passed\\ \hline
\end{tabular}
\label{compCSPRNGs1}
} \newline \newline
\scriptsize{*No reference available}
\end{table}
\end{center}
\end{landscape}
\endgroup
\section{Conclusion} \label{sec11}
The operating system kernel demands a high-quality CSPRNG for its randomness requirements. A novel CSPRNG called KCS-PRNG is presented in this paper, which exhibits the qualities of a CSPRNG as well as those of a TRNG (i.e., it additionally provides non-reproducibility of the generated random bitstreams) for use in the kernel and in various other cryptographic applications. The combination of clock-controlled LFSRs as a nonlinear sequence generator with two non-standard, trusted elliptic curves proves to be an excellent basis for designing a CSPRNG.~An extensive security analysis of the proposed KCS-PRNG showed that the generator resists important attacks, including Berlekamp-Massey attacks, brute-force attacks, next-bit prediction, state compromise extension attacks and correlation attacks.~The design of the KCS-PRNG allows periodic change of the elliptic curves in the look-up table maintained by the generator, mitigating the gap in the security property $R\ref{R4}$, i.e., the `non-reproducibility' requirement, to a practical extent for the first time in the literature. The use of elliptic curves from its look-up table also makes the proposed KCS-PRNG more customizable than the current popular kernel CSPRNGs such as /dev/random, Yarrow and Fortuna, the latter two of which are based on the block ciphers Triple DES and AES respectively.~Hence, it is inferred that the proposed KCS-PRNG qualifies as a competent CSPRNG for adoption in kernel applications.
\begin{acknowledgements}
The authors thank the Society for Electronic Transactions and Security (SETS), Chennai for providing the research opportunity to carry out this work. The authors express their deepest gratitude to Dr. P. V. Ananda Mohan and Dr. Reshmi T. R. for their inputs, the anonymous reviewers for their reviews, and Mr. T. Santhosh Kumar for his help with the experimentation. The authors also thank Mr. Ritesh Dhote, Mr. Aditya Saha, Ms. Sonal Priya Kamal and Ms. Diya V. A. for their help with the final formatting.
\end{acknowledgements}
\section{Introduction}
The Gr\" uss inequality \cite{GRU}, as a complement of Chebyshev's
inequality, states that if $f$ and $g$ are integrable real functions
on $[a,b]$ such that $C\leq f(x)\le D$ and $E\leq g(x)\le F$ hold
for some real constants $C,D,E,F$ and for all $x\in[a,b]$, then
\begin{equation}
\left|\frac1{b-a}\int_a^bf(x)g(x)dx-\frac1{(b-a)^2}\int_a^b f(x)dx\int_a^b g(x)dx\right|
\leq\frac14(D-C)(F-E)\,;\label{grisovaca}
\end{equation}
see \cite{M-M} for several proofs of this inequality in the discrete
form. It has been the subject of intensive investigation, in which
conditions on functions are varied to obtain different estimates;
see \cite{DRA2,M-P-F} and references therein. This inequality has
been investigated, applied and generalized by many authors in
different areas of mathematics, among others in inner product
spaces \cite{DRA1}, quadrature formulae \cite{CHE,UJE}, finite
Fourier transforms \cite{B-C-D-R}, linear functionals
\cite{A-B,I-P-T}, matrix traces \cite{REN}, inner product modules
over $H^*$-algebras and $C^*$-algebras \cite{B-I-V, I-V}, positive
maps \cite{M-R} and completely bounded maps \cite{P-R}.
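As a quick numerical sanity check of (\ref{grisovaca}), both sides can be approximated with a midpoint Riemann sum; a minimal sketch (our illustration, taking $f=\sin$, $g=\cos$ on $[0,\pi]$, so that $C=0$, $D=1$, $E=-1$, $F=1$):

```python
import math

def gruss_gap(f, g, a, b, n=100000):
    """Midpoint-rule approximation of the left-hand side of the Gruss
    inequality: |mean(f*g) - mean(f)*mean(g)| over [a, b]."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    mean_fg = sum(f(x) * g(x) for x in xs) / n
    mean_f = sum(f(x) for x in xs) / n
    mean_g = sum(g(x) for x in xs) / n
    return abs(mean_fg - mean_f * mean_g)

lhs = gruss_gap(math.sin, math.cos, 0.0, math.pi)
rhs = 0.25 * (1 - 0) * (1 - (-1))  # (D - C)(F - E)/4
print(lhs <= rhs)  # True
```

For $f(x)=g(x)=x$ on $[0,1]$ the gap is $1/3-1/4=1/12$, well inside the bound $1/4$.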
\subsection{Symmetric gauge functions, unitarily invariant norms
and their norm ideals}
Let $\mathcalb{B}(\mathcal{H})$ and
$\mathcalb{C}_\infty(\mathcal{H})$ denote, respectively, the spaces of all
bounded and of all compact linear operators acting on a separable,
complex Hilbert space $\mathcal{H}$. Each ``symmetric gauge'' (s.g.)
function $\Phi$ on sequences gives rise to a unitarily invariant
(u.i.) norm on operators defined by
$\left\|X\right\|_\Phi=\Phi(\{s_n(X)\}_{n=1}^\infty)$, with
$s_1(X)\ge s_2(X)\ge\hdots$ being the singular values of $X$, i.e.,
the eigenvalues of $|X|=(X^*X)^\frac12.$ We will denote by the
symbol $\left|\left|\left|\cdot\right|\right|\right|$ any such norm,
which is therefore defined on a naturally associated norm ideal
$\ccu$ of $\mathcalb{C}_\infty(\mathcal{H})$ and satisfies the
invariance property $\lluo UXV\rruo=\lluo X\rruo$ for all $X\in\ccu$
and for all unitary operators $U,V\in\BH$.
Specially well known among u.i. norms are the Schatten $p$-norms
defined for $1\le p<\infty$ as $\|X\|_p=\sqrt[p]{\,\sum_{n=1}^\infty
s_n^p(X)}$, while $\|X\|_\infty =\|X\|=s_1(X)$ coincides with the
operator norm $\|X\|$. The minimal and the maximal u.i. norms are among the
Schatten norms, i.e., $\|X\|_\infty\le\llu X\rru\le\|X\|_1$ for all
$X\in\mathcalb{C}_1(\mathcal{H})$ (see inequality (IV.38) in
\cite{bhatijinaknjiga}). For $f,g\in\mathcal{H}$, we will denote by
$g^*\otimes f$ the one dimensional operator defined by $(g^*\otimes f)h=\langle
h,g\rangle f$ for all $h\in\mathcal{H}$; it is well known that the linear span
of $\{g^*\otimes f\,|\, f,g\in\HH\}$ is dense in each of
$\mathcalb{C}_p(\mathcal{H})$ for $1\le p\le\infty$. Schatten
$p$-norms are also classical examples of $p$-reconvexized norms.
Namely, any u.i. norm $\lln\cdot\rrn_\Phi$ could be
$p$-reconvexized for any $p\ge1$ by setting $\lln A\rrn_{\Phi^{(p)}}
= \lln |A|^p\rrn_{\Phi}^{\frac1p}$ for all $A\in\BH$ such that
$|A|^p\in \ccc_{\Phi}(\HH)$. For the proof of the triangle
inequality and other properties of these norms see preliminary
section in \cite{joc09ii}; for the characterization of the dual norm
for $p$-reconvexized one see Th. 2.1 in \cite{joc09ii}.
\subsection{Gel'fand integral of operator valued functions}
Here we recall the basic properties and terminology related to the
notion of the Gel'fand integral as it applies to operator valued
(o.v.) functions. Since this theory is well known, we state these
properties without proof.
Following \cite{du}, p.~53, if $(\WWW,\WWWalg,\mu)$ is a measure
space, the mapping $\AAA:\WWW\ttt \BH$ will be called \wmeasur if the scalar
valued function $t\ttt \tr (\AAt Y)$ is measurable for every
$Y\in\ccj$. In addition, if all these functions are in $\LJ$, then,
since $\BH$ is the dual space of $\ccj$, for
any $E\in\WWWalg$ there is a unique operator $\InE\in\BH$,
called the Gel'fand ({\cyr Gelp1fand}) or weak $^*$-integral of
$\AAA$ over $E$, such that
\begin{equation}
\tr(\InE Y)=\int_E \tr(\AAt Y)\dt \textrm{\qquad for all
$Y\in\ccj$.} \label{geljfandovintegral}
\end{equation}
We will denote it by $\int_E\AAt\dt$, $\int_E\AAA\dm$ or,
exceptionally, by
$\gggint_{\,E}\AAA\dm,$
if the context requires us to distinguish it from other types of
integration.
A practical tool for dealing with this type of integrability is the
following
\begin{lemma}\label{lema1}
$\AAA:\WWW\ttt \BH$ is \wmeasur
(resp. $[\mu]$ weakly $^*$-integrable) iff
scalar valued functions $t\ttt \left<\AAt f,f\right>$ are $[\mu]$ measurable (resp. integrable) for every
$f\in\HH$.
\end{lemma}
In view of Lemma \ref{lema1}
the basic definition (\ref{geljfandovintegral}) of Gel'fand integral
for o.v. functions can be reformulated as follows:
\begin{lemma}\label{novadefinicionalema}
If $\left<\AAA f,f\right>\in L^1(E,\mu)$ for all $f\in\HH$, for
some $E\in\WWWalg$ and a $\BH$-valued function $\AAA$ on $E$, then
the mapping $f\ttt\int_E \left<\AAt f,f\right>\dt$ represents a
quadratic form of (the unique) bounded operator (denoted by)
$\int_E\AAA\dm$ or $\int_E\AAA_t\dt$,
(we refer to it as the ``intuitive'' integral
of $\AAA$ over $E$),
satisfying
\begin{equation}
\left<\left(\int_E\AAA_t\dt\right) f,g\right>=
\innE \left<\AAt f,g\right>\,d\mu (t) \textrm{\qquad for all $f,g\in\HH$,}
\nonumber
\end{equation}
as well as
\begin{equation}
\textrm{\rm tr}\!\left(\int_E\AAA_t\dt Y\right)=
\innE \tr(\AAt Y) \dt \textrm{\qquad for all $Y\in\ccj$.}
\nonumber
\end{equation}
\end{lemma}
In other words, integrability of the quadratic forms of an o.v. function
ensures its Gel'fand integrability, and so the notions of the
``intuitive'' and the Gel'fand integral for o.v. functions coincide.
Following Ex.~2 in \cite{JOC},
for a \wmeasur
function
$\AAA:\WWW\ttt \BH$ we have that $\AAA^*\AAA$ is Gel'fand
integrable iff $ \int_\Omega \lln\AAA_t f\rrn^2\dt<\iii$ for all
$f\in\HH$. Moreover,
for a \wmeasur
function
$\AAA:\WWW\ttt \BH$ let us consider the ``vector valued functionalization'' of vectors,
i.e., the linear transformation $\vec{\AAA}:D_{\vec{\AAA}}\ttt\LDH$,
with the domain $D_{\vec{\AAA}}=\llv f\in\HH \,| \,
\int_\Omega \lln\AAA_t f\rrn^2\dt<\iii\rrv$, defined by
\begin{equation}
({\vec{\AAA}}f)(t)=\AAA_t f \qquad
\textrm{\rm for $[\mu]$ \,a.e.\, $t\in\WWW$ and all $f\in
D_{\vec{\AAA}}$.}
\nonumber
\end{equation}
Now, another way of understanding the Gel'fand integrability of
$\AAA^*\AAA$ is provided by the following. \bbl
\label{zatvorenjeogranichen}
$\vec{\AAA}$ is a closed operator; it is
bounded if and only if $\AAA^*\AAA$ is Gel'fand integrable,
and whenever this is the case, then $ \llaj\vec{\AAA}\rraj=\sqrt{
\int_\WWW \AAA_t^*\AAA_t \dt}$ and
\begin{equation}
\llnj\vec{\AAA}\rrnj_{\BBBog(\HH,\LDH)}=
\lln \int_\WWW \AAA_t^*\AAA_t \dt\rrn^\frac12.
\nonumbe
\end{equation}
If additionally $ \sqrt{ \int_\WWW \AAA_t^*\AAA_t \dt}\in
\ccc_{\Phi}(\HH)$, then
\begin{equation}
\llnj\vec{\AAA}\rrnj_{\ccc_{\Phi}(\HH,\LDH)}=
\lln \sqrt{ \int_\WWW \AAA_t^*\AAA_t \dt}\rrn_{\ccc_{\Phi}(\HH)}.
\label{zatvogr2}
\end{equation}
\eel
Denoting $\lln\cdot\rrn_{\ccc_{\Phi}(\HH)}$ by $\llu\cdot\rru$, in
view of both Definition 1 and equality (3) in \cite{JOC} we now get
\begin{eqnarray}
\lluj{\AAA}\rruj_2&:=&\lln \int_\WWW \AAA_t^*\AAA_t
\dt\rrn_{\ccc_{\Phi}(\HH)}^\frac12=
\lln \sqrt{ \int_\WWW \AAA_t^*\AAA_t
\dt}\rrn_{\ccc_{\Phi^{(2)}}(\HH)}\label{zatvogr3}\\ \nonumber
&=&\llnj \vec{\AAA}\rrnj_{\ccc_{\Phi^{(2)}}(\HH,\LDH)}.
\end{eqnarray}
Thus we have recognized the space $L_G^2(\WWW,\dm,\ccc_\Phi(\HH))$
of square integrable o.v. functions $\AAA$ such that
$\int_\WWW\AAA_t^*\AAA_t\dt\in \ccc_\Phi(\HH)$ as isometrically
isomorphic to the norm ideal of operators
$\ccc_{\Phi^{(2)}}(\HH,\LDH)\subseteq
{\BBBog_{\Phi^{(2)}}(\HH,\LDH)}$, associated to the (2-reconvexized) s.g.\ function
$\Phi^{(2)}$. Therefore the normability and the completeness of the
space $L_G^2(\WWW,\dm,\ccc_\Phi(\HH))$, as stated in Theorem 2.1 in
\cite{JOC}, follow immediately. This effectively gives us a
representation of $L_G^2(\WWW,\dm,\ccc_\Phi(\HH))$ as a Hilbert
module over the Banach $^*$-algebra ${\ccc_{\Phi}}(\HH)$ with its
${\ccc_{\Phi}}(\HH)$-valued inner product
$$\left<\!\left<\AAA,\BBB\right>\!\right>=
\int_\WWW \AAt^*\BBt\dt \qquad\textrm{\rm for all $\AAA,\BBB\in
L_G^2(\WWW,\dm,\ccc_\Phi(\HH))$,}$$ as a norm ideal
$\ccc_{\Phi^{(2)}}(\HH,\LDH)$ of operators from $\HH$ into $\LDH$
equipped with its $\ccc_{\Phi}(\HH)$-valued inner product
$$\left<\!\!\left<\vec{\AAA},\vec{\BBB}\right>\!\!\right>=
\vec{\AAA^*}\vec{\BBB} =\int_\WWW \AAt^*\BBt\dt \qquad\textrm{\rm
for all $\vec{\AAA},\vec{\BBB} \in
\ccc_{\Phi^{(2)}}(\HH,\LDH).$}$$
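In the counting-measure case this isometry reduces to an elementary fact about singular values, which is easy to check numerically. The following Python/NumPy sketch (our illustration only; all identifiers are ours) verifies that the stacked operator $\vec{\AAA}$ satisfies $\vec{\AAA}^*\vec{\AAA}=\sum_i\AAA_i^*\AAA_i$, so that $\vec{\AAA}$ and $\sqrt{\sum_i\AAA_i^*\AAA_i}$ share singular values and hence agree in every unitarily invariant norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 4  # a field of n operators on a d-dimensional space
A = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
     for _ in range(n)]

# vec(A): f -> (A_1 f, ..., A_n f), i.e. the (n*d) x d stacked block matrix
A_vec = np.vstack(A)

# Gram identity: vec(A)^* vec(A) = sum_i A_i^* A_i
gram = sum(Ai.conj().T @ Ai for Ai in A)
assert np.allclose(A_vec.conj().T @ A_vec, gram)

# Hence vec(A) and sqrt(gram) share singular values, so every
# unitarily invariant norm of the two operators agrees.
s_vec = np.linalg.svd(A_vec, compute_uv=False)           # descending
s_sqrt = np.sqrt(np.clip(np.linalg.eigvalsh(gram), 0, None))[::-1]
assert np.allclose(s_vec, s_sqrt)
```

The same computation explains formula (\ref{zatvogr3}): the $\Phi^{(2)}$-norm of $\vec{\AAA}$ is computed from exactly these singular values.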
\subsection{Elementary operators and inner product type integral transformers in norm ideals}
For weakly$^*$-measurable o.v. functions $\AAA,\BBB:\WWW\to \BH$ and
for all $X\in\BH$ the function $t\to\AAA_t X\BBB_t$ is also
weakly$^*$-measurable. If these functions are Gel'fand integrable
for all $X\in\BH$, then the inner product type linear
transformation $X\to\int_\WWW \AAA_t X\BBB_t\dt$ will be called an
inner product type (i.p.t.) transformer on $\BH$ and denoted by
$\int_\WWW \AAA_t \otimes\BBB_t\dt$ or ${\mathcal I}_{\AAA,\BBB}$. The
special case when $\mu$ is the counting measure on $\mathbb N$ is
the best known and most widely investigated one; such transformers are
known as elementary mappings or elementary operators.
As shown in Lemma 3.1 (a) in \cite{JOC}, a sufficient condition for
this integrability is that $\AAA^*$ and $\BBB$ are both in
$L^2_G(\WWW,\dm,\BH).$ If each of the families $(\AAA_t)_{t\in\WWW}$
and $(\BBB_t)_{t\in\WWW}$ consists of commuting normal operators,
then by Theorem 3.2 in \cite{JOC} the \ipt transformer $\int_\WWW
\AAA_t \otimes\BBB_t\dt$ leaves every u.i. norm ideal
$\ccc_{\llu\cdot\rru}(\HH)$ invariant and the following
Cauchy-Schwarz inequality holds:
\begin{equation}
\llu \int_\WWW \AAA_t X\BBB_t\dt \rru\le \llu \sqrt{\int_\WWW
\AAA_t^*\AAA_t \dt} X
\sqrt{\int_\WWW \BBB_t^*\BBB_t \dt}\rru
\end{equation}
for all $X\in \ccc_{\llu\cdot\rru}(\HH)$.
The normality and commutativity conditions can be dropped for Schatten
$p$-norms, as shown in Theorem 3.3 in \cite{JOC}. In Theorem 3.1 in
\cite{joc09i} a formula for the exact norm of the \ipt transformer
$\int_\WWW \AAA_t \otimes\BBB_t\dt$ acting on $\ccc_2(\HH)$ is
found. In Theorem 2.1 in \cite{joc09i} the exact norm of the \ipt
transformer $\int_\WWW \AAA_t^* \otimes\AAA_t\dt$ is given for two
specific cases:
\begin{equation}
\lln \int_\WWW \AAA_t^*\otimes\AAA_t\dt
\rrn_{\BH\to\ccc_{\Phi}(\HH)}= \lln \int_\WWW \AAA_t^*\AAA_t\dt
\rrn_{\ccc_\Phi(\HH)}, \label{bhubiloshta}
\end{equation}
\begin{equation}
\lln \int_\WWW \AAA_t^*\otimes\AAA_t\dt
\rrn_{\ccc_{\Phi}(\HH)\to\ccc_1(\HH)}= \lln \int_\WWW
\AAA_t\AAA_t^*\dt \rrn_{\ccc_{\Phi_*}(\HH)},
\nonumber
\end{equation}
where $\Phi_*$ stands for a s.g. function related to the dual space
$(\ccc_{\Phi}(\HH))^*$.
Also, as already noted in \cite{joc09i} at the end of page 2964, the
norm appearing in (\ref{zatvogr3}) equals the square root of the
norm of the \ipt transformer $X\to\int_\WWW\AAA_t^* X\AAA_t\dt$ when
acting from $\BH$ to $\ccc_\Phi(\HH)$. As this quantity actually
defines a norm on the Banach space
$L_G^2(\WWW,\dm,\BH,\ccc_\Phi(\HH))$, as elaborated in Theorem 2.2 in
\cite{joc09i}, we conclude that the spaces
$L_G^2(\WWW,\dm,\BH,\ccc_\Phi(\HH))$ are
also isometrically isomorphic to the norm ideal
$\ccc_{\Phi^{(2)}}(\HH,\LDH).$ As the objects of consideration in
all those spaces are families of operators, from now on we will
refer to such objects as {\bf fields of operators} (for example
$(\AAA_t)_{t\in\WWW}$) in $L^2(\WWW,\mu,\ccc_\Phi(\HH))$. When we
additionally require that the adjoint field of operators
$(\AAA_t^*)_{t\in\WWW}$ also belongs to
$L^2(\WWW,\mu,\ccc_\Phi(\HH))$, then we will say that
$(\AAA_t)_{t\in\WWW}$ is doubly $\mu$ square integrable in
$\ccc_\Phi(\HH)$ on $\WWW.$
The norm appearing in (\ref{bhubiloshta})
and its associated space $L_G^2(\WWW,\dm,\BH,\ccc_\Phi(\HH))$
represent only a special case of norming a field
$\AAA=(\AAA_t)_{t\in\WWW}$. A much wider class of norms $ \lln
\cdot\rrn_{\Phi,\Psi}$ and their associated spaces
$L_G^2(\WWW,\dm,\ccc_\Phi(\HH),\ccc_\Psi(\HH))$ are
given in
\cite{joc09i} by
\begin{equation}
\lln \AAA\rrn_{\Phi,\Psi}=
\lln \int_\WWW \AAA_t^*\otimes \AAA_t\dt
\rrn_{\BBBog(\ccc_\Phi(\HH),\ccc_\Psi(\HH))}^\frac12
\end{equation}
for an arbitrary pair of s.g. functions $\Phi$ and $\Psi$. For the
proof of completeness of the space
$L_G^2(\WWW,\dm,\ccc_\Phi(\HH),\ccc_\Psi(\HH))$ see Theorem 2.2 in
\cite{joc09i}.
The potential for finding Gr\"uss type inequalities for \ipt
transformers relies on the fact that $\int_\Omega
\AAA_t\otimes\BBB_td\mu(t)-\int_\Omega\AAA_t\dt \otimes
\int_\Omega\BBB_t \dt$ is also an \ipt transformer. As the
representation of an
\ipt transformer is (as a rule) not unique, the success of
applying some known inequalities to $\int_\Omega
\AAA_t\otimes\BBB_td\mu(t)-\int_\Omega\AAA_t\dt \otimes
\int_\Omega\BBB_t\dt$ mainly depends on the right choice of its
representation.
Before presenting the main results, we turn our attention to the
following lemma, which will be used in the sequel.
\begin{lemma}
\label{optlema} If $\mu$ is a probability measure on $\Omega$, then
for every field $(\mathscr{A}_t)_{t\in\Omega}$ in
$L^2(\Omega,\mu,\mathcalb{B}(\mathcal{H}))$, for all
$B\in\mathcalb{B}(\mathcal{H})$, for all unitarily invariant norms
$\lluo\cdot\rruo$ and for all $\theta>0$,
\begin{eqnarray}
\lefteqn{ \int_\Omega\left|\mathscr{A}_t-B\right|^2 \dt =
\int_\Omega\left|\mathscr{A}_t-\int_\Omega\AAA_t \dt\right|^2 \dt
+\lla \int_\Omega\AAA_t \dt-B\rra^2}\label{nulto}\\
&\ge& \int_\Omega\left|\mathscr{A}_t-\int_\Omega\AAA_t
\dt\right|^2 \dt =\int_\Omega|\mathscr{A}_t|^2
\dt-\left|\int_\Omega\AAA_t \dt\right|^2; \label{prvo}
\end{eqnarray}
\begin{multline}
\min_{B\in\mathcalb{B}(\mathcal{H})}\llu \,\left|\int_\Omega\left|\mathscr{A}_t-B\right|^2\dt\rrr|^\theta\rru\\
= \llu\, \left|\int_\Omega\left|\mathscr{A}_t-\int_\Omega\AAA_t
\dt\right|^2 \dt\right|^\theta \rru=\llu \,
\left|\int_\Omega|\mathscr{A}_t|^2 \dt- \left|\int_\Omega\AAA_t
\dt\right|^2\rrr|^\theta\rru. \label{drugo}
\end{multline}
\end{lemma}
Thus, the considered minimum is always obtained for
$B=\int_\Omega\AAA_t \dt$.
\begin{proof}
The expression in (\ref{nulto}) equals
\begin{eqnarray}
\lefteqn{ \int_\Omega\left|\mathscr{A}_t-B\right|^2 \dt =
\int_\Omega\left|\mathscr{A}_t-\int_\Omega\AAA_t \dt+\int_\Omega\AAA_t \dt-B\right|^2\dt=}\nonumber\\
&=& \int_\Omega\left|\mathscr{A}_t-\int_\Omega\AAA_t\dt\right|^2 \dt
+ \int_\Omega\lla \int_\Omega\mathscr{A}_t\dt-B\rra^2 \dt\nonumber\\
&+&2\Re \int_\Omega\llm \mathscr{A}_t-\int_\Omega\AAA_t\dt\rrm^*
\llm\int_\Omega\AAA_t \dt-B\rrm\dt\nonumber\\
&=& \int_\Omega\left|\mathscr{A}_t-\int_\Omega\AAA_t\dt\right|^2 \dt
+\lla\int_\Omega\mathscr{A}_t\dt-B\rra^2,
\nonumber
\end{eqnarray}
as $\displaystyle\int_\Omega\llm
\mathscr{A}_t-\int_\Omega\AAA_t\dt\rrm^{*}\llm\int_\Omega\AAA_t
\dt-B\rrm\dt =$ $$=\llm\int_\Omega
\mathscr{A}_t^*\dt-\int_\Omega\AAA_t^*\dt\rrm\llm\int_\Omega\AAA_t
\dt-B\rrm=0.$$
Inequality in (\ref{prvo}) follows from (\ref{nulto}), while the
identity in \eqref{prvo} is just a
special case of Lemma 2.1 in \cite{JOC} applied for $k=1$ and $\delta_1=\Omega$.
As $0\le A\le B$ for $A,B\in\cci$ implies $ s_n^\theta(A)\le
s_n^\theta(B)$ for all $n\inN$, as well as $\llu A^\theta\rru\le
\llu B^\theta\rru,$ (\ref{drugo}) follows.
\end{proof}
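In the discrete case (the uniform probability measure on $n$ points) the decomposition (\ref{nulto}) is the matrix analogue of the classical bias-variance identity and can be checked directly. A minimal numerical sketch with NumPy (our illustration; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 3
A = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
     for _ in range(n)]
B = rng.standard_normal((d, d))          # an arbitrary competitor
mean = sum(A) / n                        # plays the role of the integral of A_t

def abs2(M):
    """|M|^2 = M^* M."""
    return M.conj().T @ M

# identity (nulto): mean |A_t - B|^2 = mean |A_t - mean|^2 + |mean - B|^2
lhs = sum(abs2(Ai - B) for Ai in A) / n
rhs = sum(abs2(Ai - mean) for Ai in A) / n + abs2(mean - B)
assert np.allclose(lhs, rhs)

# identity (prvo): mean |A_t - mean|^2 = mean |A_t|^2 - |mean|^2
var = sum(abs2(Ai) for Ai in A) / n - abs2(mean)
assert np.allclose(sum(abs2(Ai - mean) for Ai in A) / n, var)
```

The cross terms vanish exactly as in the proof above, since the family $\AAA_t-\int_\Omega\AAA_t\dt$ averages to zero.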
\section{Main results}
Let us recall that for a pair of real random variables $(Y,Z)$ the
correlation coefficient
$$\rho_{Y,Z}=\frac{\lla E(YZ)-E(Y)E(Z)\rra}{\sigma(Y)\sigma(Z)}=
\frac{\lla E(YZ)-E(Y)E(Z)\rra}{
\sqrt{E(Y^2)-E^2(Y)} \sqrt{E(Z^2)-E^2(Z)}}$$ always satisfies
$|\rho_{Y,Z}|\le 1.$
For square integrable functions $f$ and $g$ on $[0,1]$ and
$D(f,g)=\int_0^1f(t)g(t)\,d t-
\int_0^1f(t)\,d t\int_0^1g(t)\,d t,$
Landau proved (see \cite{lan1,lan2}) that
$$ \lla D(f,g)\rra\le \sqrt{D(f,f)D(g,g)}.$$
The following theorem generalizes these facts to
\ipt{} transformers.
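Landau's scalar inequality is easy to verify numerically on a discretization of $[0,1]$. A short sanity-check sketch in Python (our illustration; the uniform average over sample points approximates the integrals):

```python
import numpy as np

# uniform sampling of [0, 1]; the integrals are approximated by averages
t = np.linspace(0.0, 1.0, 10_001)
f = np.sin(3 * t) + t ** 2
g = np.exp(-t) * np.cos(5 * t)

def D(u, v):
    """D(u, v) = mean(uv) - mean(u) mean(v), the discretized functional."""
    return (u * v).mean() - u.mean() * v.mean()

assert D(f, f) >= 0 and D(g, g) >= 0     # D(u, u) is a variance
assert abs(D(f, g)) <= np.sqrt(D(f, f) * D(g, g)) + 1e-12
```

The check is nothing but Cauchy-Schwarz for the centered samples, which is precisely the mechanism the theorem below lifts to operator families.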
\begin{theorem}[Landau type inequality for \ipt transformers in u.i. norm ideals]
\label{normalnisluchaj} Let $\mu$ be a probability measure on
$\Omega$, let both fields $(\AAA_t)_{t\in\Omega}$ and
$(\BBB_t)_{t\in\Omega}$ in $L^2(\Omega,\mu,\BH)$
consist of commuting normal operators
and let
$$\sqrt{\,\int_\Omega|\AAt|^2 \dt-\left|\int_\Omega\AAt \dt\right|^2}X
\sqrt{\,\int_\Omega|\BBt|^2 \dt-\left|\int_\Omega\BBt
\dt\right|^2}\in \ccu$$ for some $X\in\BH$. Then $$\int_\Omega
\AAA_tX\BBB_td\mu(t)-\int_\Omega\AAt \dt X\!\!\int_\Omega\BBt \dt
\in\ccu$$ and
\begin{multline}
\llu\int_\Omega \AAA_t X\BBB_td\mu(t)- \int_\Omega\AAt \dt X\!
\!\int_\Omega\BBt \dt
\rru\\
\leq\llu
\sqrt{\,\int_\Omega|\AAt|^2 \dt-\left|\int_\Omega\AAt \dt\right|^2}X
\sqrt{\,\int_\Omega|\BBt|^2 \dt-\left|\int_\Omega\BBt \dt\right|^2}
\rru.\label{21}
\end{multline}
\end{theorem}
\begin{proof}
First we note that we have the following Korkine type identity for
\ipt transformers:
\begin{eqnarray}
\nonumber && \int_\Omega \AAA_tX\BBB_td\mu(t)\!-\!\int_\Omega\AAt
\dt X\! \! \int_\Omega\BBt \dt
\!\!\\
\nonumber&&\lefteqn{=\!\!\int_\Omega d\mu(s)\int_\Omega \AAA_tX\BBB_t \dt\!-\!\int_\Omega\!\int_\Omega \AAA_tX\BBB_s\,d\mu(s)d\mu(t)}\\
\!\!&&=\!\!\dfrac12\int_{\Omega^2}(\AAA_s-\AAA_t)X(\BBB_s-\BBB_t)d(\mu\times\mu)(s,t).\label{22}
\end{eqnarray}
In this representation the families $(\AAA_s-\AAA_t)_{(s,t)\in\Omega^2}$
and $(\BBB_s-\BBB_t)_{(s,t)\in\Omega^2}$ are in
$L^2(\Omega^2,\mu\times\mu,\BH)$, because by an application of the
identity (\ref{22}),
\begin{eqnarray}
\nonumber
\dfrac12\int_{\Omega^2}\left|\AAA_s-\AAA_t\right|^2d(\mu\times\mu)(s,t)&=&\int_\Omega|\AAt|^2
\dt-
\left|\int_\Omega\AAt \dt\right|^2\\
&=&\int_\Omega\left|\AAA_t-\int_\Omega\AAt \dt\right|^2 \dt
\in\BH.\label{23}
\end{eqnarray}
Both families $(\AAA_s-\AAA_t)_{(s,t)\in\Omega^2}$ and
$(\BBB_s-\BBB_t)_{(s,t)\in\Omega^2}$ consist of commuting normal
operators and by Theorem 3.2 in \cite{JOC}
$$\dfrac12\int_{\Omega^2}(\AAA_s-\AAA_t)X(\BBB_s-\BBB_t)d(\mu\times\mu)(s,t)\in \ccc_{\lluo\cdot
\rruo}(\HH)\qquad\mbox{and}$$
\begin{eqnarray*}
\lefteqn{ \llu \int_\Omega \AAA_tX\BBB_t \dt-
\int_\Omega\AAt \dt X\!\!\int_\Omega\BBt \dt
\rru}\\
&=&\llu\dfrac12\int_{\Omega^2}(\AAA_s-\AAA_t)X(\BBB_s-\BBB_t)d(\mu\times\mu)(s,t)\rru\\
&\le&\llu\sqrt{\,\dfrac12\int_{\Omega^2}|\AAA_s-\AAA_t|^2d(\mu\times\mu)(s,t)}
X\sqrt{\,\dfrac12\int_{\Omega^2}|\BBB_s-\BBB_t|^2d(\mu\times\mu)(s,t)}\rru\\
&=&\llu\sqrt{\,\int_\Omega|\AAt|^2 \dt-\left|\int_\Omega\AAt
\dt\right|^2} X\sqrt{\,\int_\Omega|\BBt|^2 \dt-\left|\int_\Omega\BBt
\dt\right|^2}\rru,
\end{eqnarray*}
due to identities (\ref{22}) and (\ref{23}), and so conclusion
(\ref{21}) follows.
\end{proof}
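For a finite index set with the uniform probability measure, inequality (\ref{21}) can be tested numerically: diagonal matrices provide a family of commuting normal operators. The following NumPy sketch (our illustration, not part of the argument) checks (\ref{21}) in the operator norm:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 5
# commuting normal operators: diagonal matrices, stored as their diagonals
a = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
b = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

op = lambda M: np.linalg.norm(M, 2)      # operator norm (a u.i. norm)

# left-hand side of (21) for the uniform probability measure on n points
lhs = sum(np.diag(a[t]) @ X @ np.diag(b[t]) for t in range(n)) / n \
    - np.diag(a.mean(0)) @ X @ np.diag(b.mean(0))

# the "variance" of a commuting normal family is diagonal and PSD
var = lambda c: (np.abs(c) ** 2).mean(0) - np.abs(c.mean(0)) ** 2
rhs = np.diag(np.sqrt(var(a))) @ X @ np.diag(np.sqrt(var(b)))

assert op(lhs) <= op(rhs) + 1e-10
```

Entrywise, the check reduces to the scalar Cauchy-Schwarz bound $|\mathrm{Cov}(a_i,b_j)|\le\sigma(a_i)\sigma(b_j)$, matching the Korkine representation used in the proof.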
\begin{lemma}
Let $\mu$ (resp. $\nu$) be a probability measure on $\Omega$ (resp.
$\mho$), let both families
$\{\AAA_s,\CCC_t\}_{(s,t)\in\Omega\times\mho}$ and
$\{\BBB_s,\DDD_t\}_{(s,t)\in\Omega\times\mho}$
consist of commuting normal operators
and let
\begin{eqnarray*}&&\sqrt{\,\int_\Omega|\AAA_s|^2d\mu(s)
\int_\mho|\CCC_t|^2d\nu(t)-\left|\int_\Omega\AAA_sd\mu(s)\int_\mho\CCC_td\nu(t)\right|^2}\cdot X\cdot\\
&&\sqrt{\,\int_\Omega|\BBB_s|^2d\mu(s)
\int_\mho|\DDD_t|^2d\nu(t)-\left|\int_\Omega\BBB_sd\mu(s)\int_\mho\DDD_td\nu(t)\right|^2}
\end{eqnarray*}
be in $\ccu$ for some $X\in\BH$.
Then \begin{eqnarray*}&&\int_\Omega \int_\mho\AAA_s
\CCC_tX\BBB_s\DDD_t\,d\mu(s)\,d\nu(t) -\\&-&\int_\Omega\AAA_s
\,d\mu(s)\int_\mho\CCC_t\,d\nu(t) X\!\!\int_\Omega\BBB_s \,d\mu(s)
\int_\mho\DDD_t\,d\nu(t) \in\ccu\end{eqnarray*}
and
$$
\llu \!\int_\Omega \!\int_\mho\AAA_s
\CCC_tX\BBB_s\DDD_t\,d\mu(s)\,d\nu(t) -\!\int_\Omega\AAA_s
\,d\mu(s)\!\int_\mho\CCC_t\,d\nu(t) X\!\!\int_\Omega\BBB_s
\,d\mu(s)\kern-2pt \int_\mho\DDD_t\,d\nu(t) \rru $$
\begin{eqnarray}
& \leq&\llu \sqrt{\!\int_\Omega|\AAA_s|^2d\mu(s)
\!\int_\mho|\CCC_t|^2d\nu(t)\!-\!\left|\!\int_\Omega \AAA_s d\mu(s)\!\int_\mho\CCC_t d\nu(t)\right|^2}\right.\right.\right.X\nonumber\\
&\cdot& \left.\left.\left.\sqrt{\!\int_\Omega|\BBB_s|^2d\mu(s)
\!\int_\mho|\DDD_t|^2d\nu(t)\!-\!\left|\!\int_\Omega \BBB_s d\mu(s)\!\int_\mho\DDD_t d\nu(t)\right|^2}\rru.
\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
Apply Theorem \ref{normalnisluchaj} to the probability measure
$\mu\times\nu$ on $\WWW\times\mho$ and the families
$(\AAA_s\CCC_t)_{(s,t)\in\Omega\times\mho}$ and
$(\BBB_s\DDD_t)_{(s,t)\in\Omega\times\mho}$ of normal commuting
operators in $L_G^2(\Omega\times\mho,\mu\times\nu,\BH),$ taking into
account that
\begin{eqnarray*} \int_{\WWW\times\mho}\AAA_s \CCC_t\,d(\mu\times\nu)(s,t)
&=&\int_\Omega\AAA_s \,d\mu(s)\!\int_\mho\CCC_t\,d\nu(t),\\
\int_{\WWW\times\mho}\BBB_s \DDD_t\,d(\mu\times\nu)(s,t)
&=&\int_\Omega\BBB_s
\,d\mu(s)\!\int_\mho\DDD_t\,d\nu(t),\end{eqnarray*} and similarly
\begin{eqnarray*} \int_{\WWW\times\mho}\lla\AAA_s\CCC_t\rra^2d(\mu\times\nu)(s,t)
&=&\int_\Omega\lla\AAA_s\rra^2d\mu(s)\int_\mho\lla\CCC_t\rra^2d\nu(t),\\
\int_{\WWW\times\mho}\lla\BBB_s \DDD_t\rra^2d(\mu\times\nu)(s,t)
&=&\int_\Omega\lla\BBB_s
\rra^2d\mu(s)\int_\mho\lla\DDD_t\rra^2d\nu(t).\end{eqnarray*}
\end{proof}
By mathematical induction, the previous lemma straightforwardly
yields the following
\begin{corollary}
If $\mu$, $(\AAA_t)_{t\in\Omega}$ and
$(\BBB_t)_{t\in\Omega}$ are as in Theorem \ref{normalnisluchaj},
then for all $n\in\mathbb N$ and for any
$(*_1,\cdots,*_{2n})\in\{*,1\}^{2n}$,
\begin{eqnarray}
\lefteqn{
\llu\int_{\Omega^n}\prod_{k=1}^n \AAA_{t_k}^{*_k}X
\prod_{k=1}^n \BBB_{t_{n+k}}^{*_{n+k}} \prod_{k=1}^{n} d\mu(t_k)\rrr.\rrr.\rrr.}\nonumber\\
& - &\lll.\lll.\lll. \llm\int_{\Omega}\AAA_{t}^*\dt\rrm^i
\llm\int_{\Omega}\AAA_{t}\dt\rrm^{n-i}X
\llm\int_{\Omega}\BBB_{t}^*\dt\rrm^j
\llm\int_{\Omega}\BBB_{t}\dt\rrm^{n-j}\rru\nonumber
\end{eqnarray}
$$\le
\llu \llm\int_\Omega|\AAt|^2 \dt-\left|\int_\Omega\AAt
\dt\right|^2\rrm^{\frac{n}2}X \llm\int_\Omega|\BBB_t|^2
\dt-\left|\int_\Omega\BBB_t \dt\right|^2\rrm^{\frac{n}2} \rru,$$
where $i$ (resp. $j$) stands for the cardinality of $\{k\in\mathbb N
\,\,|\, 1\le k\le n \,\,\& \,*_k=*\}$ (resp. of $\{k\in\mathbb N
\,\,|\, 1\le k\le n \,\,\& \,*_{n+k}=*\}$).
\end{corollary}
For the Schatten $p$-norms $\|\cdot\|_p$ the normality and commutativity
conditions can be dropped, at the inevitable expense of the
simplicity of the formulation. So we have the following
\begin{theorem}[Landau type inequality for \ipt transformers in Schatten ideals]
Let $\mu$ be a probability measure on $\Omega$, let
$(\AAA_t)_{t\in\Omega}$ and $(\BBB_t)_{t\in \Omega}$ be
$\mu$-weak${}^*$ measurable families of bounded Hilbert space
operators such that
$$\int_\Omega\left(\|\AAA_tf\|^2+\|\AAA_t^*f\|^2+\|\BBB_tf\|^2+\|\BBB_t^*f\|^2\right)d\mu(t)<\infty\qquad
\textrm{ \rm for all $f\in\HH$}$$ and let $p,q,r\ge1$ be such that
$\dfrac1p=\dfrac1{2q}+\dfrac1{2r}\,$. Then for all
$X\in\ccc_p(\HH)$,
\begin{eqnarray}
\label{grussp}&&\lefteqn{
\lln\int_\Omega \AAA_tX\BBB_t \dt- \int_\Omega\AAt \dt X \int_\Omega\BBt \dt\rrn_p}\\
&\leqslant&\kern-4pt\lln
\left(\int_\Omega\left|\left(\int_\Omega\left|\AAA_t^*-\int_\Omega\AAt^*
\dt \right|^2 \dt \right)^{\frac{q-1}2}
\left(\AAA_t-\int_\Omega\AAt \dt\right)\right|^2
\dt\right)^{\frac1{2q}}\rrr.\nonumber\end{eqnarray}
\begin{equation*} X\lll.\left(\int_\Omega\left|\left(\int_\Omega\left|\BBB_t-\int_\Omega\BBt\dt
\right|^2 \dt \right)^{\frac{r-1}2}
\left(\BBB_t^*-\int_\Omega\BBt^* \dt\right)\right|^2
\dt\right)^{\frac1{2r}}\rrn_p.
\end{equation*}
\end{theorem}
\begin{proof}
According to identity (\ref{23}), application of Theorem 3.3 in
\cite{JOC} to families
$(\mathscr{A}_s-\mathscr{A}_t)_{(s,t)\in\Omega^2}$ and
$(\mathscr{B}_s-\mathscr{B}_t)_{(s,t)\in\Omega^2}$ gives
\begin{eqnarray}
&&
\lln\int_\Omega \AAA_tX\BBB_td\mu(t)-\int_\Omega\mathscr{A}_t\dt X\int_\Omega\mathscr{B}_t\dt\rrn_p \nonumber\\
&=&\lln\dfrac12\int_{\Omega^2}(\AAA_s-\AAA_t)X(\BBB_s-\BBB_t)d(\mu\times\mu)(s,t)\rrn_p\le \nonumber
\end{eqnarray}
$$
\lln\left(\dfrac12\int_{\Omega^2}(\mathscr{A}_s^*-\mathscr{A}_t^*)
\left(\dfrac12\int_{\Omega^2}|\mathscr{A}_s^*-\mathscr{A}_t^*|^2d(\mu\times\mu)(s,t)\right)^{q-1}\kern-13.1pt
(\mathscr{A}_s-\mathscr{A}_t)d(\mu\times\mu)(s,t)\right)^{\frac1{2q}}\rrr.\kern-10pt
X$$ \begin{equation}\label{pnorm}\end{equation}
$$\lll.\left(\dfrac12\int_{\Omega^2}(\mathscr{B}_s-\mathscr{B}_t)
\Bigl(\dfrac12\int_{\Omega^2}|\mathscr{B}_s-\mathscr{B}_t|^2d(\mu\times\mu)(s,t)\Bigr)^{r-1}\kern-8pt
(\mathscr{B}_s^*-\mathscr{B}_t^*)d(\mu\times\mu)(s,t)\right)^{\frac1{2r}}\kern-2pt\rrn_p.$$
By application of identity (\ref{23}) once again, the last
expression in (\ref{pnorm}) becomes
$$\bigg\|\biggl(\dfrac12\int_{\Omega^2}(\mathscr{A}_s-\mathscr{A}_t)^*
\left(\int_\Omega\left|\mathscr{A}_t^*-\int_\Omega \mathscr{A}^*
_t\dt\right|^2\dt\right)^{q-1}
(\mathscr{A}_s-\mathscr{A}_t)d(\mu\times\mu)(s,t)\biggr)^{\frac1{2q}}$$
$$X\biggl(\dfrac12\int_{\Omega^2}(\mathscr{B}_s-\mathscr{B}_t)
\Bigl(\int_\Omega\left|\mathscr{B}_s-\int_\Omega \mathscr{B}
_t\dt\right|^2d\mu(s)\Bigr)^{r-1}\kern-5pt(\mathscr{B}_s-\mathscr{B}_t)^*d(\mu\times\mu)(s,t)\biggr)^{\frac1{2r}}\bigg\|_p.\label{odvizraz}
$$
Denoting
$\Bigl(\int_\Omega\left|\AAA_s^*-\int_\Omega\AAA_t^*\dt\right|^2d\mu(s)\Bigr)^{\frac{q-1}2}$
\kern-6.5pt(resp.
$\Bigl(\int_\Omega\left|\BBB_s-\int_\Omega\BBB_t\dt\right|^2d\mu(s)\Bigr)^{\frac{r-1}2}$)
by $Y$ (resp. $Z$),
then
the expression in (\ref{pnorm}) becomes
\begin{eqnarray}\label{saYZ}
\biggl\|\left(\dfrac12\int_{\Omega^2}\left|Y\AAA_s-Y\AAA_t\right|^2d(\mu\times\mu)(s,t)\right)^{\frac1{2q}}X\\
\nonumber
\left(\dfrac12\int_{\Omega^2}\left|Z\BBB_s^*-Z\BBB_t^*\right|^2d(\mu\times\mu)(s,t)\right)^{\frac1{2r}}\biggr\|_p.
\end{eqnarray}
By yet another application of identity (\ref{23}), this time to the
families $(Y\AAA_t)_{t\in\Omega}$ and $(Z\BBB_t^*)_{t\in\Omega}$, (\ref{saYZ})
becomes
$$\left\|\left(\int_\Omega\left|Y\mathscr{A}_t-\int_\Omega Y\mathscr{A}_t \dt\right|^2d\mu(t)\right)^{\frac1{2q}}\kern-4pt X
\left(\int_\Omega\left|Z\mathscr{B}_t^*-\int_\Omega Z\mathscr{B}_t^*
\dt\right|^2d\mu(t)\right)^{\frac1{2r}}\right\|_p,$$
which obviously equals the right-hand side expression in
(\ref{grussp}).
\end{proof}
A special case of an abstract H\"older inequality presented in
Theorem 3.1.(e) in \cite{JOC}
gives us
\begin{theorem}[Cauchy-Schwarz inequality for o.v. functions in u.i. norm ideals]
\label{koshishvarcovacha} Let $\mu$ be a measure on $\Omega$, let
$(\AAA_t)_{t\in\Omega}$
and $(\BBB_t)_{t\in \Omega}$ be
$\mu$-weak${}^*$ measurable families of bounded Hilbert space
operators
such that
$\lla\int_\Omega|\AAA_t|^2\dt\rra^\theta$ and
$\lla\int_\Omega|\BBB_t|^2\dt\rra^\theta$ are in $\ccu$ for some
$\theta>0$ and for some u.i. norm $\lluo\cdot\rruo.$ Then,
\begin{equation}
\llu\lla\int_\Omega \AAA_t^*\BBB_t\dt\rra^\theta \rru\le
\llu\lla\int_\Omega \AAA_t^*\AAA_t\dt\rra^\theta \rru^\frac12
\llu\lla\int_\Omega \BBB_t^*\BBB_t\dt\rra^\theta \rru^\frac12.
\nonumber
\end{equation}
\end{theorem}
\begin{proof}
Take $\Phi$ to be a
s.g. function that generates u.i. norm $\lluo\cdot\rruo$,
$\Phi_1=\Phi$,
$\Phi_2=\Phi_3=\Phi^{(2)}$ (2-reconvexization of $\Phi$),
$\alpha=2\theta$ and $X=I$, and then apply Theorem 3.1.(e) from \cite{JOC}.
\end{proof}
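In the finite, counting-measure case with $\theta=1$ and the operator norm, the inequality of Theorem \ref{koshishvarcovacha} admits a direct numerical check (since $\lla M\rra$ and $M$ share singular values, their u.i. norms agree). A NumPy sketch (our illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 4, 5
A = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
     for _ in range(n)]
B = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
     for _ in range(n)]

S_AB = sum(Ai.conj().T @ Bi for Ai, Bi in zip(A, B))
S_AA = sum(Ai.conj().T @ Ai for Ai in A)
S_BB = sum(Bi.conj().T @ Bi for Bi in B)

op = lambda M: np.linalg.norm(M, 2)
# theta = 1 with the operator norm: ||M|| equals the norm of |M|
assert op(S_AB) <= np.sqrt(op(S_AA) * op(S_BB)) + 1e-10
```

For unit vectors $f,g$ the check amounts to $|\sum_i\langle B_if,A_ig\rangle|\le(\sum_i\|B_if\|^2)^{1/2}(\sum_i\|A_ig\|^2)^{1/2}$, the elementary Cauchy-Schwarz step behind the abstract H\"older inequality.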
Now we can easily derive the following generalization of the Landau
inequality for Gel'fand integrals of o.v. functions:
\begin{theorem}[Landau type inequality for o.v. functions in u.i. norm ideals]
\label{korelacionateorema} If $\mu$ is a probability measure on
$\Omega$, $\theta>0$ and $(\AAA_t)_{t\in\Omega}$
and $(\BBB_t)_{t\in \Omega}$ are as
in Theorem \ref{koshishvarcovacha},
then,
\begin{multline}
\llu\lla\int_\Omega \AAA_t^*\BBB_t\dt
-\int_\Omega \AAA_t^*\dt\int_\WWW\BBB_t\dt\rra^\theta \rru^2\le\\
\llu\lla\int_\Omega \lla\AAA_t\rra^2\dt-\lla\int_\WWW\AAt\dt\rra^2\rra^\theta\rru
\llu\lla\int_\Omega \lla\BBB_t\rra^2\dt-\lla\int_\WWW\BBt\dt\rra^2\rra^\theta\rru.
\label{korelaciona}
\end{multline}
\end{theorem}
\begin{proof}
Apply Theorem \ref{koshishvarcovacha} to o.v. families
$(\AAA_s-\AAA_t)_{(s,t)\in\WWW^2}$ and
$(\BBB_s-\BBB_t)_{(s,t)\in\WWW^2}$ and use identity (\ref{22}) once
again.
\end{proof}
For the more general inequality in an arbitrary Hilbert
$C^*$-module see Theorem 3.4 in \cite{I-V}. The case $\theta=1$ and
$\lluo\cdot\rruo=\lln\cdot\rrn$ in Theorem \ref{korelacionateorema}
provides the proof for the Hilbert $C^*$-module $L_G^2(\WWW,\dm,\BH)$ in
the case of the lifted projection $h(t)=I$ for all $t\in\WWW$.
The case $\theta=1$ and $\lluo\cdot\rruo=\lln\cdot\rrn_1$ of Theorem
\ref{korelacionateorema}
provides the proof for the stronger version of Theorem 3.3 for the Hilbert
$H^*$-module $L_G^2(\WWW,\dm,\ccj)$ for the
same lifted projection $h(t)=I$ for all $t\in\WWW$.
\begin{corollary}
Under conditions of Theorem \ref{korelacionateorema} we have
$$ \lla\tr\lll( \int_\Omega \AAA_t^*\BBB_t\dt
-\int_\Omega \AAA_t^*\dt\int_\WWW\BBB_t\dt\rrr)\rra^2$$
\begin{eqnarray}&\le& \lln \int_\Omega \AAA_t^*\BBB_t\dt
-\int_\Omega \AAA_t^*\dt\int_\WWW\BBB_t\dt \rrn_1^2 \label{druga}
\end{eqnarray}
\begin{eqnarray}
&\le&\left({\int_\Omega \lln\AAA_t\rrn_2^2\dt}-\lln\int_\WWW\AAt\dt\rrn_2^2\right)\cdot\nonumber\\
&\cdot&
\left({\int_\Omega \lln\BBB_t\rrn_2^2\dt}-\lln\int_\WWW\BBt\dt\rrn_2^2\right)\nonumber\\
&=&\left(\lln\sqrt{\int_\Omega \lla\AAA_t\rra^2\dt}\rrn_2^2-\lln\int_\WWW\AAt\dt\rrn_2^2\right)\cdot\nonumber\\
&\cdot&\left(\lln\sqrt{\int_\Omega
\lla\BBB_t\rra^2\dt}\rrn_2^2-\lln\int_\WWW\BBt\dt\rrn_2^2\right).\nonumber
\label{korelaciona1}
\end{eqnarray}
\end{corollary}
\begin{proof}
An application of (\ref{korelaciona}) for $\theta=1$ and
$\lluo\cdot\rruo=\lln\cdot\rrn_1$ justifies (\ref{prva}), while
(\ref{druga}) and all the remaining identities in
(\ref{korelaciona2}) are obtainable by straightforward
calculations, based on elementary properties of the trace $\tr$ and
of Gel'fand integrals:
\begin{eqnarray}
&&\lln\int_\Omega \AAA_t^*\BBB_t\dt
-\int_\Omega \AAA_t^*\dt\int_\WWW\BBB_t\dt \rrn_1^2 \nonumber \\
&\le&\lln\int_\Omega
\lla\AAA_t\rra^2\dt\kern-3pt-\kern-3pt\lla\int_\WWW\AAt\dt\rra^2\rrn_1
\kern-4pt\lln\int_\Omega \lla\BBB_t\rra^2\dt\kern-3pt-\kern-3pt\lla\int_\WWW\BBt\dt\rra^2\rrn_1\label{prva}\\
&=&\left(\tr\lll(\int_\Omega \lla\AAA_t\rra^2\dt\rrr)-\tr\lll(\lla\int_\WWW\AAt\dt\rra^2\rrr)\right)\cdot
\nonumber\\
&\cdot&
\left(\tr\lll(\int_\Omega \lla\BBB_t\rra^2\dt\rrr)-\tr\lll(\lla\int_\WWW\BBt\dt\rra^2\rrr)\right)\nonumber\\
&=&\left({\int_\Omega \lln\AAA_t\rrn_2^2\dt}-\lln\int_\WWW\AAt\dt\rrn_2^2\right)\cdot\nonumber\\
&\cdot&
\left({\int_\Omega \lln\BBB_t\rrn_2^2\dt}-\lln\int_\WWW\BBt\dt\rrn_2^2\right)\nonumber\\
&=&\left(\lln\sqrt{\int_\Omega \lla\AAA_t\rra^2\dt}\rrn_2^2-\lln\int_\WWW\AAt\dt\rrn_2^2\right)\cdot\nonumber\\
&\cdot& \left(\lln\sqrt{\int_\Omega
\lla\BBB_t\rra^2\dt}\rrn_2^2-\lln\int_\WWW\BBt\dt\rrn_2^2\right).
\label{korelaciona2}
\end{eqnarray}
\end{proof}
For a bounded field of operators $\AAA=(\mathscr{A}_t)_{t\in\Omega}$
one can easily check that the radius of the smallest disk that
essentially contains its range is
$$r_\iii(\AAA)=\inf_{A\in\BH}\supess_{t\in\WWW}\lln \AAt-A\rrn=
\inf_{A\in\BH}\lln\AAA-A\rrn_\infty=\min_{A\in\BH}\lln\AAA-A\rrn_\infty$$
(from the triangle inequality we have
$\bigl|\|\mathscr{A}_t-A'\|-\|\mathscr{A}_t-A\|\bigr|\leq\|A'-A\|$,
so the mapping $A\to\supess_{t\in\WWW}\lln\AAt-A\rrn$ is nonnegative
and continuous on $\BH$; since $(\mathscr{A}_t)_{t\in\Omega}$ is a
bounded field of operators, we also have $\lln\AAt-A\rrn\to\infty$
when $\|A\|\to\infty$, so this mapping attains its minimum), and the
minimum is actually attained at some $A_0\in\BH$, which represents a
center of the disk considered. Any such field of operators has finite
diameter
$$\diam\nolimits_\iii(\AAA)=\supess_{s,t\in\WWW}\lln \AAA_s-\AAt\rrn,$$ with the simple
inequalities $r_\iii(\AAA)\le \diam_\iii(\AAA)\le 2r_\iii(\AAA)$
relating these quantities. For such
fields of operators we can now state the following stronger
version of the Gr\"uss inequality.
\begin{theorem}[Gr\"uss type inequality for \ipt{} transformers in u.i. norm ideals]
\label{th0} Let $\mu$ be a $\sigma$-finite measure on $\Omega$ and
let $\AAA=(\mathscr{A}_t)_{t\in\Omega}$ and $\BBB=(\mathscr{B}_t)_{t\in\Omega}$
be $[\mu]$ a.e. bounded fields of operators. Then for all
$X\in\ccu$,
\begin{multline}
\sup_{\mu(\delta)>0}\llu\frac1{\mu(\delta)}\int_\delta\mathscr{A}_tX\mathscr{B}_t\dt-
\frac1{\mu(\delta)}\int_\delta\mathscr{A}_t\dt \,X
\frac1{\mu(\delta)}\int_\delta\mathscr{B}_t \dt\rru\\
\le\min \llv {r_\iii(\AAA)r_\iii(\BBB)},
\frac{\diam_\iii(\AAA)\diam_\iii(\BBB)}2\rrv\cdot\lluo X\rruo
\label{oshtrina0}
\end{multline}
(here the $\sup$ is taken over all measurable sets
$\delta\subseteq\Omega$ such that $0<\mu(\delta)<\infty$).
\end{theorem}
\begin{proof}
Let
$r_\iii(\AAA)=\displaystyle\lln\AAA-A_0\rrn_\infty=\min_{A\in\BH}\lln
\AAA-A\rrn_\infty$, let
$r_\iii(\BBB)=\displaystyle\lln\BBB-B_0\rrn_\infty$
$\displaystyle=\min_{B\in\BH}\lln \BBB-B\rrn_\infty$ and let us note
that
\begin{eqnarray*}
\frac1{\mu(\delta)}\int_\delta\left|\mathscr{A}_t-A_0\right|^2\dt&\le&
\frac1{\mu(\delta)}
\int_\delta\supess_{t\in\WWW}\lln\mathscr{A}_t-A_0\rrn^2
\cdot I\dt\\
&=&\lln \mathscr{A}-A_0\rrn^2_\iii\cdot I=r_\iii^2(\AAA)\cdot
I.\end{eqnarray*} Therefore
$\lln\frac1{\mu(\delta)}\int_\delta\left|\mathscr{A}_t-A_0\right|^2\dt\rrn^\frac12
\le r_\iii(\AAA)$, and
$\lln\frac1{\mu(\delta)}\int_\delta\left|\mathscr{B}_t-B_0\right|^2\dt\rrn^\frac12
\le r_\iii(\BBB)$
is obtained similarly.
By identity (\ref{22}) applied to probability measure
$\frac{1}{\mu(\delta)}\mu$ on $\delta$ and Lemma \ref{optlema} we
then have
$$
\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{A}_s-\mathscr{A}_t\right|^2d(\mu\times\mu)(s,t)
=\frac1{\mu(\delta)}\int_{\delta}\left|\mathscr{A}_t-\frac1{\mu(\delta)}\int_\delta
\mathscr{A}_t\dt\rra^2\,d\mu(t)=$$
\begin{eqnarray*}&=&\frac1{\mu(\delta)}\int_\delta\left|\mathscr{A}_t-A_0\right|^2-\lla\frac1{\mu(\delta)} \int_\delta\AAA_t\dt-A_0\rra^2
d\mu(t)\\
&\le& \lln \mathscr{A}-A_0\rrn^2_\iii\cdot I-\lla\frac1{\mu(\delta)}
\int_\delta\AAA_t\dt-A_0\rra^2
\end{eqnarray*}
and therefore
\begin{eqnarray}
&&\lln\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{A}_s-\mathscr{A}_t\right|^2d(\mu\times\mu)(s,t)\rrn\label{poluprechnik1}\\
&\le& \lln \lln\mathscr{A}-A_0\rrn^2_\iii\cdot I-\lla
\frac1{\mu(\delta)}\int_\delta\AAA_t\dt-A_0\rra^2\rrn.\nonumber
\end{eqnarray}
Similarly,
\begin{eqnarray}
&&\lln\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{B}_s-\mathscr{B}_t\right|^2d(\mu\times\mu)(s,t)\rrn\label{poluprechnik2}\\
&\le&\lln \lln\mathscr{B}-B_0\rrn^2_\iii\cdot I-\lla
\frac1{\mu(\delta)}\int_\delta\BBB_t\dt-B_0\rra^2\rrn.\nonumber
\end{eqnarray}
These inequalities show that the
subfields
$(\mathscr{A}_t-\mathscr{A}_s)_{(s,t)\in\delta\times\delta}$ and
$(\mathscr{B}_t-\mathscr{B}_s)_{(s,t)\in\delta\times\delta}$ are in
$L^2(\delta\times\delta,\frac1{\mu(\delta)}\mu\times\frac1{\mu(\delta)}\mu,\BH)$,
and therefore, according to identity (\ref{22}) and Lemma 3.1(c)
from \cite{JOC},
\begin{eqnarray}
&&\lefteqn{\llu \frac1{\mu(\delta)}\int_\delta \AAA_tX\BBB_td\mu(t)-
\frac1{\mu(\delta)}\int_\delta\mathscr{A}_t\dt X \frac1{\mu(\delta)}\int_\delta\mathscr{B}_t \dt\rru}\nonumber\\
&=&\llu\frac1{2\mu(\delta)^2}\int_{\delta^2}(\AAA_s-\AAA_t)X(\BBB_s-\BBB_t)d(\mu\times\mu)(s,t)\rru\nonumber\\
&\le& \lln\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{A}_s-\mathscr{A}_t\right|^2d(\mu\times\mu)(s,t)\rrn^\frac12\cdot\nonumber\\
&&
\cdot\lln\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{B}_s-\mathscr{B}_t\right|^2d(\mu\times\mu)(s,t)\rrn^\frac12
\!\!\lluo X\rruo\nonumber\\
&\le&
\lln\lln\mathscr{A}-A_0\rrn^2_\iii\cdot I
-\lla \frac1{\mu(\delta)}\int_\delta\AAA_t\dt\!-\!A_0\rra^2\rrn^\frac12\cdot \label{poboljshana}\\
&\cdot&\!\lln \lln\mathscr{B}\!-\!B_0\rrn^2_\iii\cdot I
-\lla\frac1{\mu(\delta)}\int_\delta\BBB_t\dt-B_0\rra^2\rrn^\frac12\lluo
X\rruo
\nonumber\\
&\le&r_\iii(\AAA)r_\iii(\BBB)\lluo X\rruo,
\nonumber
\end{eqnarray}
and the first half of inequality \eqref{oshtrina0} is proved. The
proof for the remaining part of \eqref{oshtrina0} differs from the
previous one only by use of obvious estimates
$$\lln\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{A}_s-\mathscr{A}_t\right|^2d(\mu\times\mu)(s,t)\rrn
\le\frac{\diam_\iii^2(\AAA)}2$$ $$\mbox{(resp.
$\lln\frac1{2\mu(\delta)^2}\int_{\delta^2}\left|\mathscr{B}_s-\mathscr{B}_t\right|^2d(\mu\times\mu)(s,t)\rrn
\le\frac{\diam_\iii^2(\BBB)}2$)}$$ instead of \eqref{poluprechnik1}
(resp.
\eqref{poluprechnik2}) in \eqref{poboljshana}.
\end{proof}
Now we turn to the less general case in which
$(\mathscr{A}_t)_{t\in\Omega}$ and $(\mathscr{B}_t)_{t\in\Omega}$
are bounded fields of self-adjoint (generally non-commuting)
operators; in this case the above inequality takes the following
form.
\begin{corollary}
\label{th1} Let $\mu$ be a probability measure on $\Omega$, let
$C,D,E,F$ be bounded self-adjoint operators, and let
$(\mathscr{A}_t)_{t\in\Omega}$ and $(\mathscr{B}_t)_{t\in\Omega}$ be
bounded self-adjoint fields satisfying $C\le\mathscr{A}_t\le D$ and
$E\le\mathscr{B}_t\le F$ for all $t\in\Omega$. Then for all
$X\in\ccu$,
\begin{equation}
\llu\int_\Omega\mathscr{A}_tX\mathscr{B}_t\dt-
\int_\Omega\mathscr{A}_t\dt \,X \int_\Omega\mathscr{B}_t \dt\rru
\le\dfrac{\|D-C\|\cdot\|F-E\|}4\cdot\lluo X\rruo. \label{oshtrina}
\end{equation}
\end{corollary}
\begin{proof}
Since $\frac{C-D}2\le\mathscr{A}_t-\frac{C+D}2\le\frac{D-C}2$ for every
$t\in\Omega$, we have
\begin{eqnarray*}
\supess_{t\in\WWW}\lln \mathscr{A}_t-\frac{C+D}2\rrn&=&
\supess_{t\in\WWW}\sup_{\lln
f\rrn=1}\lla\llp\llm\mathscr{A}_t-\frac{C+D}2 \rrm f,f\rrp\rra\\
&\le& \sup_{\lln f\rrn=1}\lla\llp\frac{D-C}2 f,f\rrp\rra= \frac{\lln
D-C\rrn}2,
\end{eqnarray*}
which implies
$r_\iii(\AAA)\le
\frac{\lln D-C\rrn}2,$ and similarly
$r_\iii(\BBB)\le\frac{\lln F-E\rrn}2.$ Thus \eqref{oshtrina} follows
directly from
(\ref{oshtrina0}).
\end{proof}
\begin{remark}
Note that, similarly to the estimate (\ref{poboljshana}), the
right-hand side of \eqref{oshtrina} can be improved to
\begin{eqnarray}
\label{poboljshana1}&& \lln \lln\frac{D-C}2\rrn^2
-\lla \int_\Omega\AAA_t\dt-\frac{C+D}2\rra^2\rrn^\frac12\cdot\\
&\cdot&\lefteqn{ \lln \lln\frac{F-E}2\rrn^2
-\lla \int_\Omega\BBB_t\dt-\frac{E+F}2\rra^2\rrn^\frac12\lluo X\rruo.}\nonumber
\end{eqnarray}
An estimate similar to (\ref{poboljshana1}) was given in a Gr\"uss
type inequality for square integrable Hilbert space valued
functions in Theorem 3 of \cite{B-C-D-R}.
\end{remark}
In the case when $\HH=\mathbb{C}$ and $\mu$ is the normalized
Lebesgue measure on $[a,b]$ (i.e. $d\mu(t)=\frac{dt}{b-a}$),
inequality (\ref{grisovaca}) follows as an obvious corollary of Theorem
\ref{th1}. This special case also confirms the sharpness of the
constant $\frac14$ in the inequality (\ref{oshtrina}).
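To illustrate this sharpness claim concretely (a numeric sketch, not part of the paper; the midpoint discretization and the choice $f=g=\operatorname{sign}(t-\frac12)$ on $[0,1]$ are ours), one can check that the classical Gr\"uss bound is attained with equality for this pair of extremal step functions:

```python
import numpy as np

# Midpoint discretization of [0, 1] with an even number of nodes, so that
# f(t) = g(t) = sign(t - 1/2) has mean exactly zero.
n = 1000
t = (np.arange(n) + 0.5) / n
f = np.sign(t - 0.5)   # takes values -1 and +1, so bounds are C = -1, D = 1
g = f.copy()           # same extremal function, bounds E = -1, F = 1

lhs = abs(np.mean(f * g) - np.mean(f) * np.mean(g))  # Chebyshev (Gruss) functional
rhs = (1.0 - (-1.0)) * (1.0 - (-1.0)) / 4.0          # (D - C)(F - E) / 4 = 1
print(lhs, rhs)  # both equal 1.0: the constant 1/4 cannot be improved
```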
Taking $\Omega=\{1,\ldots,n\}$ and $\mu$ to be the normalized
counting measure, we obtain the following corollary of Theorem
\ref{th1}.
\begin{corollary}[Gr\"uss type inequality for elementary operators]
Let $A_1,
\hdots, A_n$, $B_1, \hdots, B_n$, $C$, $D$, $E$ and $F$ be bounded self-adjoint linear operators
acting on a Hilbert space $\HH$
such that
$C\le A_i\le D$ and
$E\le B_i\le F$ for all $i=1,2,\ldots,n$. Then for arbitrary
$X\in\ccu$,
\begin{eqnarray}\nonumber
\llu \frac1n\sum_{i=1}^n A_i XB_i-\frac1{n^2}\sum_{i=1}^nA_i\,
X\sum_{i=1}^nB_i\rru \leq \frac{\|D-C\|\|F-E\|}4 \lluo X\rruo\,.
\end{eqnarray}
\end{corollary}
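This discrete inequality lends itself to a numerical sanity check. The sketch below is our own illustration, not part of the paper: it fixes the unitarily invariant norms on both sides to be the operator (spectral) norm, and takes $C,D,E,F$ to be scalar multiples of the identity built from the extreme eigenvalues of the $A_i$ and $B_i$, so that $C\le A_i\le D$ and $E\le B_i\le F$ hold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(d):
    """A random d x d Hermitian (self-adjoint) matrix."""
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (M + M.conj().T) / 2

d, n = 4, 5
A = [rand_hermitian(d) for _ in range(n)]
B = [rand_hermitian(d) for _ in range(n)]
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Scalar bounds c*I <= A_i <= d*I from the extreme eigenvalues of the A_i,
# so ||D - C|| = (max eig) - (min eig); similarly for the B_i.
eigsA = np.concatenate([np.linalg.eigvalsh(Ai) for Ai in A])
eigsB = np.concatenate([np.linalg.eigvalsh(Bi) for Bi in B])
norm_DC = eigsA.max() - eigsA.min()
norm_FE = eigsB.max() - eigsB.min()

op = lambda M: np.linalg.norm(M, 2)  # operator (spectral) norm
lhs = op(sum(Ai @ X @ Bi for Ai, Bi in zip(A, B)) / n
         - (sum(A) / n) @ X @ (sum(B) / n))
rhs = norm_DC * norm_FE / 4 * op(X)
assert lhs <= rhs + 1e-10  # the Gruss-type bound holds for this sample
```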
\bibliographystyle{amsplain}